Motivation

After #66/#67 land, remo-tart has two states from the orchestrator's perspective:

- … → runs provision.sh (#3 path)
- VM running with matching mount → Action.NOTHING, skip provision
That's coarse. Real-world workflows have a third case: the user changed project.toml or a pack script, and the running VM is now stale even
though mount_matches is True. Today nothing surfaces this — packs are
idempotent so they'd no-op for already-installed tools, but adding a new pack (e.g. the uv pack a user just appended to packs.enabled)
won't run until the VM is bootout-ed or recreated. The user has to know
to do that manually.
Docker / devcontainer solve the same problem with two ideas worth
stealing:
1. Configuration hash drives rebuild detection. The devcontainer CLI hashes devcontainer.json + Dockerfile + the features lockfile and compares against the hash recorded when the container was last created. Mismatch → "Configuration changed, rebuild?" banner.
2. Lifecycle hooks fan out to different cadences. onCreateCommand (once per container), updateContentCommand (when content changes), postCreateCommand (after create), postStartCommand (every start), postAttachCommand (every attach). One blob of "provision" gets sliced so expensive things run once, cheap things run often.
Proposal
Phase 1 — config-hash rebuild detection
Extend ensure_attached to compare a hash of the live config against
what was used last time provision succeeded:
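A minimal sketch of what that could look like. Everything here is an assumption, not existing remo-tart code: the helper names (config_hash, needs_reprovision, record_provision), the .tart/ location for scripts, and the JSON state-file shape.

```python
# Sketch only: helper names, .tart/ script layout, and the JSON state file
# are assumptions about remo-tart, not its real internals.
import hashlib
import json
from pathlib import Path

def config_hash(project_root: Path) -> str:
    """Digest project.toml plus every pack script, so any edit changes the hash."""
    h = hashlib.sha256()
    inputs = [project_root / "project.toml",
              *sorted((project_root / ".tart").glob("*.sh"))]
    for path in inputs:
        h.update(path.name.encode())  # include names so renames count as drift
        h.update(path.read_bytes())
    return h.hexdigest()

def needs_reprovision(project_root: Path, state_file: Path) -> bool:
    """True when the live config no longer matches the recorded hash."""
    if not state_file.exists():
        return True  # never provisioned: treat as drift
    recorded = json.loads(state_file.read_text()).get("config_hash")
    return recorded != config_hash(project_root)

def record_provision(project_root: Path, state_file: Path) -> None:
    """Called only after provision succeeds; per-VM, survives up cycles."""
    state_file.write_text(json.dumps({"config_hash": config_hash(project_root)}))
```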
Effect: a user appending "uv" to packs.enabled and running up
again gets the new pack installed automatically — without bootout, without tart delete. The state file lives next to the manifest so it's per-VM
and survives up cycles.
Open question: should config drift trigger Action.UPDATE_AND_REPROVISION
(implicit) or print a "config changed, will reprovision" line and proceed
(explicit)? Devcontainer requires user confirmation; tart's tighter
coupling to a single user's machine probably tolerates implicit.
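For the implicit flavor, the check would sit where ensure_attached currently returns Action.NOTHING. A fragment-level sketch, reusing needs_reprovision from above (vm_running and the exact return shape are stand-ins for the real code):

```python
# Inside ensure_attached, once the VM is known to be running with a good mount.
# Action is remo-tart's existing enum; UPDATE_AND_REPROVISION is the proposed member.
if vm_running and mount_matches:
    if needs_reprovision(project_root, state_file):
        print("config changed since last provision; will reprovision")
        return Action.UPDATE_AND_REPROVISION  # implicit variant
    return Action.NOTHING
```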
Phase 2 — lifecycle hook layering
Today everything is "provision". Split the project config into:
```toml
[scripts]
post_create     = ".tart/post-create.sh"      # once per CREATE; mint bootstrap, plugin install
post_start      = ".tart/post-start.sh"       # every START; light env setup
post_attach     = ".tart/post-attach.sh"      # every ensure_attached; null currently
verify_worktree = ".tart/verify-worktree.sh"  # explicit, doctor-ish
```

Backwards compat: legacy [scripts] provision = ... maps to post_create; verify_worktree stays.

Maps onto current actions:
| Action | Hooks run |
| --- | --- |
| START | packs ensure → post_start → post_attach (no post_create) |
| UPDATE_MOUNT_AND_RESTART | same as START (VM was stopped to update) |
| ATTACH_MOUNT_AND_START | same as START |
| NOTHING (config drift detected) | packs ensure → post_attach |
| NOTHING (no drift) | nothing |
This makes the mint bootstrap / claude plugin install cost a one-time charge (post_create), while a cheap ~/.zshrc fixup or an incremental cargo build can live in post_start without doubling attach time.
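A sketch of how the fan-out in the table above could be wired. The Action members beyond UPDATE_MOUNT_AND_RESTART, ATTACH_MOUNT_AND_START, and NOTHING, plus the run_script helper and the inline packs-ensure placement, are all assumptions:

```python
# Hook fan-out sketch; not remo-tart's actual dispatcher.
import subprocess
from enum import Enum, auto

class Action(Enum):
    CREATE = auto()  # assumed name for the create path
    START = auto()
    UPDATE_MOUNT_AND_RESTART = auto()
    ATTACH_MOUNT_AND_START = auto()
    NOTHING = auto()

START_LIKE = {Action.CREATE, Action.START,
              Action.UPDATE_MOUNT_AND_RESTART, Action.ATTACH_MOUNT_AND_START}

def run_script(scripts: dict[str, str], name: str) -> None:
    """Run a configured hook if present; silently skip unset hooks."""
    if path := scripts.get(name):
        subprocess.run(["/bin/sh", path], check=True)

def run_hooks(action: Action, drift: bool, scripts: dict[str, str]) -> None:
    if action is Action.CREATE:
        run_script(scripts, "post_create")  # once per CREATE
    if action in START_LIKE:
        # packs ensure would run here, then the start-cadence hooks
        run_script(scripts, "post_start")   # every start-like transition
        run_script(scripts, "post_attach")  # every ensure_attached
    elif action is Action.NOTHING and drift:
        # VM kept running but config drifted: packs ensure + attach hook only
        run_script(scripts, "post_attach")
    # NOTHING without drift: nothing runs
```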
Phase 3 — surface drift in status and doctor
remo-tart status shows provisioned: <first 8 chars of the hash> and an "out of date" marker if the config has drifted since last provision.

remo-tart doctor adds a check for "config changed since last provision" — informational, not a hard failure.
Phase 4 — --no-provision / --reprovision escape hatches

- --no-provision for up — skip provisioning even when drift is detected (debug case).
- remo-tart reprovision [--full] — an explicit reprovision command that doesn't restart the VM.

Why this is the right scope for one issue

These four phases map to ~four PRs and can land independently:

- … users, immediate UX win for users adding packs).
- … and docs).

Phase 1 is the most valuable on its own and could be merged before the rest is designed.

Relationship to #66 / #67

#67 closes the "provision never runs" gap. This issue closes the "provision runs at the wrong granularity" gap that opens up once #67 lands. Together they make remo-tart up a real Docker-like self-healing primitive instead of a "fingers crossed, did this rebuild?" escape hatch.
I'm happy to PR Phase 1 if there's interest in the direction.