Operational Truth Yard (OTY) is a file-driven process yard for operational
truth data cargo. It treats your filesystem as the control plane. You drop
evidence warehouse databases into a cargo directory, and Truth Yard discovers,
classifies, spawns, supervises, and exposes them as local web services using
deterministic, inspectable state written entirely to disk.
There is no registry, no background daemon requirement, and no internal control plane database. Everything Truth Yard knows is encoded in files you can read, version, copy, audit, or generate tooling around.
A database file on disk is cargo. This includes SQLite, DuckDB, Excel, Markdown, and other data suppliers and artifacts, both now and in the future.
Dropping cargo into the yard (the `./cargo.d` directory) makes it eligible to be
launched. The spawn state directory is the operational ledger. JSON context
manifests and logs are written to disk so other tools, scripts, reverse proxies,
and later invocations of `yard.ts` can see what is running without needing an
API.
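For example, an external script can read a service's manifest straight off disk. A minimal sketch, assuming a Deno runtime; the path follows the session layout shown later, and the field names are assumptions, not a published schema:

```ts
// Read a spawned service's context manifest directly from the ledger;
// no OTY API call is involved. Path and field names are illustrative.
const ctx = JSON.parse(
  await Deno.readTextFile(
    "ledger.d/2026-01-07-20-15-00/controls/scf-2025.3.sqlite.db.context.json",
  ),
);
console.log(ctx.pid, ctx.upstreamUrl); // assumed fields
```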
The filesystem is the API.
- The yard is a place where cargo crates get launched as vessels.
- Databases and data files are cargo crates.
- Spawned processes are launched vessels.
- Ports are berths.
- JSON context files are manifests you can hand to other tools.
- The `ledger.d` directory is the ship's log.
This framing is intentional. Truth Yard is not an app server or an orchestrator in the Kubernetes sense. It is closer to a dockyard that launches things predictably and writes down exactly what it did, forming an inspectable, auditable record of operational truth.
Today, OTY focuses on local-first, deterministic process orchestration for:
- SQLPage applications stored inside SQLite databases
- surveilr RSSDs (SQLite databases with `uniform_resource` tables)
Other tabular formats such as DuckDB, Markdown, or Excel may be discovered as cargo but are not currently exposable services. Please create tickets to accelerate our roadmap for the data suppliers you're interested in.
All workflows are built on the same on-disk ledger and reconciliation logic.
In materialize mode, the yard scans cargo roots, spawns everything exposable, writes state to disk, and exits.
This is ideal for CI pipelines, deterministic local runs, reverse proxy generation, and one-shot demos.
Command:

```
bin/yard.ts start
```

The yard command can run continuously. The filesystem is watched and
operational truth is reconciled to intent.
- New cargo appears: spawned
- Cargo disappears: killed
- Process dies: respawned
Command:

```
bin/yard.ts start --watch
```

Watch mode reuses the same ledger format as materialize mode, but continuously reconciles it instead of producing a one-shot session.
The `--watch` option runs the yard as a long-lived service; when it is stopped
with Ctrl+C (SIGINT), all spawned processes are cleaned up.
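A minimal sketch of that cleanup contract, assuming a Deno runtime (OTY's actual implementation may differ):

```ts
// Track spawned children and terminate them when watch mode receives SIGINT.
const children: Deno.ChildProcess[] = [];

Deno.addSignalListener("SIGINT", () => {
  for (const child of children) {
    try {
      child.kill("SIGTERM"); // best-effort shutdown of each launched vessel
    } catch {
      // child already exited; nothing to clean up
    }
  }
  Deno.exit(0);
});
```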
The yard command recursively walks one or more cargo roots using deterministic
glob rules.
Typical defaults include:
```
**/*.db
**/*.sqlite
**/*.sqlite3
**/*.sqlite.db
**/*.duckdb
**/*.xlsx
```
Each candidate file is classified cheaply and deterministically:
- If SQLite-like:
  - If it has a `uniform_resource` table, it is a surveilr RSSD and spawned via `surveilr web-ui`
  - Else if it has a `sqlpage_files` table, it is a SQLPage app and spawned via `sqlpage`
  - Else it is plain SQLite and ignored by `exposable()`
- Non-SQLite tabular files may be discovered but are not exposable today
There are no heuristics beyond this and no background indexing.
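The decision tree is small enough to sketch. This is an illustrative reconstruction of the rule above; the `tableExists` helper and type names are assumptions, not OTY internals:

```ts
type CargoKind = "surveilr-rssd" | "sqlpage-app" | "plain-sqlite";

// Classify a SQLite-like file by probing for the two marker tables.
function classifySqlite(tableExists: (table: string) => boolean): CargoKind {
  if (tableExists("uniform_resource")) return "surveilr-rssd"; // spawn via `surveilr web-ui`
  if (tableExists("sqlpage_files")) return "sqlpage-app";      // spawn via `sqlpage`
  return "plain-sqlite";                                       // ignored by exposable()
}
```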
Each exposable service is assigned a proxy prefix derived from its path relative to the cargo root.
The filename is normalized by stripping compound database extensions such as
`.sqlite.db`.
Examples:
- `cargo.d/controls/scf-2025.3.sqlite.db` → `/apps/sqlpage/controls/scf-2025.3`
- `cargo.d/two/example-two.db` → `/apps/sqlpage/two/example-two`
This prefix is passed to the spawned service, written into the context JSON, used by the web UI, and consumed by reverse proxy generators.
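A sketch of the derivation, with an assumed extension list and the `/apps/sqlpage` namespace taken from the examples above (function name and regex are illustrative, not OTY's code):

```ts
// Strip compound database extensions from the cargo-relative path and
// prepend the proxy namespace.
function proxyPrefix(relPath: string): string {
  const stripped = relPath.replace(/\.(sqlite\.db|sqlite3|sqlite|db|duckdb)$/i, "");
  return `/apps/sqlpage/${stripped}`;
}

proxyPrefix("controls/scf-2025.3.sqlite.db"); // "/apps/sqlpage/controls/scf-2025.3"
proxyPrefix("two/example-two.db");            // "/apps/sqlpage/two/example-two"
```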
Ports are assigned incrementally starting at a configurable base (default 3000).
OTY automatically skips ports that are already in use and continues searching
until a free port is found. This behavior is enabled by default.
In watch and smart-spawn modes, OTY also avoids collisions by inspecting the
existing ledger and live processes.
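The probe loop can be pictured like this; a minimal sketch assuming a Deno runtime, not the exact OTY routine:

```ts
// Find the first free berth at or above the base port by attempting to bind.
function nextFreePort(base = 3000, maxTries = 100): number {
  for (let port = base; port < base + maxTries; port++) {
    try {
      const listener = Deno.listen({ port }); // throws AddrInUse if taken
      listener.close();
      return port;
    } catch {
      // berth occupied; try the next port
    }
  }
  throw new Error(`no free port in ${base}..${base + maxTries - 1}`);
}
```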
Every spawned service writes three files:
- `<name>.context.json`
- `<name>.stdout.log`
- `<name>.stderr.log`
These live under a session directory inside the spawn-state home.
Materialize mode (`start` without `--watch`) creates a timestamped session
directory:

```
ledger.d/2026-01-07-20-15-00/
```
Watch and smart-spawn modes use a stable session directory so state is continuously reconciled rather than replaced.
Session directories mirror the cargo directory structure:
```
ledger.d/2026-01-07-20-15-00/
  controls/
    scf-2025.3.sqlite.db.context.json
    scf-2025.3.sqlite.db.stdout.log
    scf-2025.3.sqlite.db.stderr.log
```
This makes tracing a running service back to its source file trivial.
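The context manifest's exact schema is OTY's to define; the shape below is a hypothetical reconstruction from the fields surfaced elsewhere in this document (PID, upstream URL, proxied prefix, service and session IDs):

```ts
// Hypothetical shape of a <name>.context.json manifest; field names are
// assumptions inferred from the web UI columns, not a published schema.
interface LedgerContext {
  serviceId: string;
  sessionId: string;
  pid: number;
  provenance: string;  // absolute path to the source cargo file
  upstreamUrl: string; // where the service actually listens
  proxyPrefix: string; // e.g. "/apps/sqlpage/controls/scf-2025.3"
}
```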
By default, materialization runs in smart-spawn mode.
Smart spawn inspects existing tagged processes and the ledger before starting anything:
- If a service for the same provenance is already running, it is not spawned again
- If cargo is removed, the corresponding process is killed
- If cargo is added, materialization is re-run and only the delta is applied
Process identity is keyed by the absolute provenance path of the database file. An optional strict mode can also require matching the session ID.
This allows repeated runs of `yard.ts start` or `yard.ts start --watch` to be
idempotent and safe.
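The reconciliation delta itself is simple set arithmetic; an illustrative sketch (names assumed), keyed by provenance path as described above:

```ts
// Compare desired cargo (on disk) with running services (from the ledger and
// tagged processes) and return only the delta to apply.
function reconcile(desired: Set<string>, running: Set<string>) {
  const toSpawn = [...desired].filter((p) => !running.has(p)); // new cargo
  const toKill = [...running].filter((p) => !desired.has(p));  // removed cargo
  return { toSpawn, toKill }; // everything else is left running untouched
}
```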
```
bin/yard.ts start \
  --cargo-home ./cargo.d \
  --ledger-home ./ledger.d \
  --verbose essential|comprehensive \
  --watch
```

```
bin/yard.ts ps
bin/yard.ts ps --extended
bin/yard.ts ps --reconcile
```

The `--reconcile` option compares the on-disk ledger with live tagged processes
and highlights drift.

```
bin/yard.ts ls
```

```
bin/yard.ts kill
bin/yard.ts kill --clean
```

```
bin/yard.ts proxy-conf --type nginx
bin/yard.ts proxy-conf --type traefik
bin/yard.ts proxy-conf --type both
```

Supported options include prefix overrides, strip-prefix, and engine-specific flags.
OTY includes a deterministic composite SQL DDL generator for "virtual
aggregation" of multiple embedded databases (SQLite, Excel, etc.) in a
single connection using DuckDB as the federated query layer.
Most composites are executed against an ephemeral database (often `:memory:` for
SQLite, or an in-memory DuckDB connection) when you only need a transient
"single-connection" view.
You can also execute the generated SQL against a persistent composite database
file (e.g. `composite.sqlite.auto.db` or `composite.duckdb.auto.db`). In that
case:
- The `ATTACH` statements are not "saved" as permanent mounts. They run per-connection, so each time you open the composite DB you must execute the `composite.sql` again (unless your runtime always bootstraps the connection with the SQL, as sketched after this list).
- Any `CREATE VIEW` / `CREATE TABLE` emitted by `composite.sql` is persisted in that composite DB file. This can be desirable for stable views or materialized rollups, but it also means schema changes require regeneration or migration.
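Because the `ATTACH` mounts must be replayed per connection, a runtime that opens the composite database typically bootstraps each connection with the generated SQL. A minimal sketch, assuming Node 22+'s built-in `node:sqlite` module and an illustrative `composite.sql` path:

```ts
import { readFileSync } from "node:fs";
import { DatabaseSync } from "node:sqlite";

// Open an ephemeral composite connection and replay the generated DDL so the
// ATTACH mounts and views exist for this connection's lifetime.
const db = new DatabaseSync(":memory:");
db.exec(readFileSync("composite.sql", "utf8"));
```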
```
# SQLite SQL DDL for composite connections in admin scope (default)
./bin/yard.ts cc --volume-root /var/truth-yard --scope admin

# Tenant SQL DDL for composite connections
./bin/yard.ts cc --volume-root /var/truth-yard --scope tenant --tenant-id tenant-123

# DuckDB SQL DDL for composite connections that can attach SQLite DBs (includes INSTALL/LOAD sqlite preamble)
./bin/yard.ts cc --dialect DuckDB --duckdb-sqlite-ext --volume-root /var/truth-yard --scope cross-tenant
```

You can start a web-based administration server using:

```
./bin/web-ui/serve.ts
```

By default it starts on:

```
http://127.0.0.1:8787/
```
The root URL redirects automatically to `/.truth-yard/ui/`.
This shows all currently tagged processes managed by OTY.
For each service you can see:
- PID of the running process
- Upstream URL (where the service actually listens)
- Proxied URL (the local path exposed by OTY)
- DB Yard Service (serviceId, with sessionId available via tooltip)
- Ledger Context (link to the exact `*.context.json` file in `ledger.d`)
- Actions to view STDOUT and STDERR logs
Everything shown here maps directly to files in `ledger.d` or to an active
process.
This compares live tagged processes against ledger context files.
It highlights:
- ledger entries without a running process (likely crashed or stopped)
- processes without a corresponding ledger entry (unexpected or orphaned)
This is useful for quickly spotting inconsistencies between “what should be running” and “what actually is”.
The “Browse ledger.d” link lets you navigate the ledger directory directly in the browser, including context files and logs.
The UI and API expose explicit debug endpoints to understand proxy behavior:
- `/.truth-yard/api/proxy-debug.json?path=/some/path`: shows which proxy rule matched, how the upstream URL is constructed, and which headers are forwarded (with secrets redacted).
- `/.truth-yard/api/proxy-roundtrip.json?path=/some/path`: performs a real upstream request and reports status, headers, latency, and a small response preview.
You can also trace any proxied request end-to-end by adding:
`?__truth_yard_trace=1`

or sending the header:

`x-truth-yard-trace: 1`

The response will include trace headers showing the matched base path and upstream, and the server logs a single structured trace line for correlation.
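For example, a quick script-level check of the trace header; this assumes the proxy listens on the web UI's default address from above, and the proxied path is illustrative. The exact trace header names are not specified here, so the sketch prints them all:

```ts
// Request a proxied path with tracing enabled and dump the response headers.
const res = await fetch("http://127.0.0.1:8787/apps/sqlpage/two/example-two", {
  headers: { "x-truth-yard-trace": "1" },
});
console.log(res.status);
for (const [name, value] of res.headers) console.log(`${name}: ${value}`);
```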
The OTY proxy is intentionally simple and transparent.
It’s ideal for:
- local development
- testing routing behavior
- validating upstream services
- lightweight, low-traffic use
- debugging headers and path rewriting
For anything beyond that (higher traffic, TLS termination, auth, rate limiting,
resilience, observability), you should point industrial-grade proxy servers such
as NGINX, Traefik, Envoy, or cloud load balancers directly at the upstream URLs.
OTY can already generate proxy configuration inputs from ledger state to
support that workflow.
The design intent is clarity and debuggability first, not to replace production proxy infrastructure.

