Merged
4 changes: 2 additions & 2 deletions doc/source/reference/tutorials/sql_39_schema_from.rst
@@ -146,10 +146,10 @@ today is what the script reads/writes. ETL between two DBs,
archival readers, admin tooling --- all good fits.

For "the DB grows over time, run versioned schema migrations at
startup", a future ``daslib/sql_migrate`` module ships
startup", a future ``daslib/sqlite_migrate`` module ships
``[sql_migration(version=N)]`` + a runtime runner. The two are
orthogonal: ``schema_from`` gives compile-time contract checks
against a known schema; ``sql_migrate`` evolves the schema over
against a known schema; ``sqlite_migrate`` evolves the schema over
time. See :ref:`tutorial_sql_schema_evolution` for the
multi-version ETL pattern.

4 changes: 2 additions & 2 deletions doc/source/reference/tutorials/sql_42_schema_evolution.rst
@@ -113,11 +113,11 @@ surfaces, you add a third struct + a third loop and the type
system tells you exactly where to wire it up.

For "my app's schema evolves at runtime; I run migrations at
startup", a future ``daslib/sql_migrate`` module ships a
startup", a future ``daslib/sqlite_migrate`` module ships a
different shape: versioned ``[sql_migration(version=N)]``
functions applied in order, tracked in ``__schema_version``,
transactional per migration. The two patterns coexist:
``schema_from`` for *code-on-current-schema*, ``sql_migrate``
``schema_from`` for *code-on-current-schema*, ``sqlite_migrate``
for the *schema-grows-over-time* runner.

.. seealso::
79 changes: 69 additions & 10 deletions doc/source/reference/tutorials/sql_43_migrations.rst
@@ -55,7 +55,7 @@ Define a schema and write migrations

.. code-block:: das

require sqlite/sql_migrate
require sqlite/sqlite_migrate

[sql_table(name = "users")]
struct User {
@@ -72,12 +72,12 @@

[sql_migration(version = 2, description = "add email column")]
def migration_002(db : SqlRunner) {
db |> exec("ALTER TABLE users ADD COLUMN Email TEXT NOT NULL DEFAULT ''")
db |> add_column(type<User>, "Email", "")
}

[sql_migration(version = 3, description = "add score; backfill", analyze = true)]
def migration_003(db : SqlRunner) {
db |> exec("ALTER TABLE users ADD COLUMN Score INTEGER NOT NULL DEFAULT 0")
db |> add_column(type<User>, "Score", 0)
db |> exec("UPDATE users SET Score = 100 WHERE Email != ''")
}

@@ -154,6 +154,68 @@ no-op for any DB that's already past it --- the audit row keeps
the old text. Fresh installs pick up the new text on first
apply.
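
For instance, later re-wording migration 2's description is exactly
that kind of no-op --- a sketch, with the body unchanged from the
definition above:

.. code-block:: das

    // Same version, same body --- only the description text changed.
    // DBs already at version >= 2 keep the old audit text; fresh
    // installs record this new one on first apply.
    [sql_migration(version = 2, description = "add email column (Email)")]
    def migration_002(db : SqlRunner) {
        db |> add_column(type<User>, "Email", "")
    }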

Typed ALTER: when daslang has enough info to validate
======================================================

Migrations 2 and 3 above use the typed surface in
``daslib/sqlite_boost``:

.. code-block:: das

db |> add_column(type<T>, "Field" [, defaultLit])
db |> create_index(type<T>, "Field" | ("A","B") [, "name"])
db |> create_unique_index(type<T>, ... same shape ...)
db |> drop_index_if_exists("name")

Field selectors are **string literals**, not ``.Field`` syntax:
gen2's parser only accepts ``.Field`` inside an ``_sql {...}``
block. Plain call sites pass explicit string field names ---
the same convention ``[sql_index(fields="A")]`` already uses.

What the typed forms buy you:

1. **Compile-time field-name checks.**
``add_column(type<T>, "Emial")`` fails at compile time, not
on the user's DB at startup.

2. **Type-derived NOT NULL.** Forgetting ``NOT NULL + DEFAULT``
used to mean existing rows panic on the next read after the
migration. The macro derives nullability from
``Option<>`` wrapping and refuses to ADD a non-nullable
column without a literal default.

3. **Annotation reuse.** ``@sql_column`` rename + ``@sql_json``
/ ``@sql_blob`` storage are honored automatically --- same
conventions as ``[sql_table]`` / ``[sql_index]``, no
double-bookkeeping.

A second migration that adds an index and a unique composite,
with an idempotent rebuild pattern via
``drop_index_if_exists``:

.. code-block:: das

[sql_migration(version = 4, description = "index Email")]
def migration_004(db : SqlRunner) {
db |> create_index(type<User>, "Email")
}

[sql_migration(version = 5, description = "unique (Email, Name)")]
def migration_005(db : SqlRunner) {
db |> drop_index_if_exists("ux_email_name")
db |> create_unique_index(type<User>, ("Email", "Name"), "ux_email_name")
}
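
Point 3 (annotation reuse) in a sketch --- the ``@sql_column``
rename here is hypothetical, not part of the tutorial's schema:

.. code-block:: das

    // Suppose the struct declares a storage rename (hypothetical):
    //     @sql_column("email_addr")
    //     Email : string
    // Then the typed ADD honors it automatically:
    [sql_migration(version = 6, description = "add renamed column")]
    def migration_006(db : SqlRunner) {
        // emits: ALTER TABLE users ADD COLUMN email_addr TEXT NOT NULL DEFAULT ''
        db |> add_column(type<User>, "Email", "")
    }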

What stays on raw ``db |> exec(...)``:

- DROP COLUMN / RENAME COLUMN --- old names aren't in the
current struct, so daslang has nothing to validate against.
- PK / UNIQUE inline / generated columns added post-hoc ---
SQLite can't ALTER these in place; needs a table rebuild
(chunk 14c, ``struct_convert``).
- ``CHECK`` constraints, FK ADD/DROP, column type changes.
- Anything ad-hoc that doesn't map to a struct field.
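
Mixing the two styles in one migration body works --- a sketch
(``OldField`` is hypothetical):

.. code-block:: das

    [sql_migration(version = 7, description = "typed add + raw drop")]
    def migration_007(db : SqlRunner) {
        // typed: field name checked against User at compile time
        db |> add_column(type<User>, "Nickname", "")
        // raw: OldField is no longer on the struct, so nothing to validate
        db |> exec("ALTER TABLE users DROP COLUMN OldField")
    }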

Adopting migrations on an existing DB
======================================

@@ -228,17 +290,14 @@ What does NOT ship
Future work; each item also dovetails with the eventual ``dasSQL``
abstraction layer.

- **Typed ``ALTER`` macros** (``add_column``, ``create_index``,
``drop_index_if_exists``). Coming in chunk 14b. For now,
every ALTER is raw SQL --- which works for every form on the
bundled SQLite 3.41.2, including ``DROP COLUMN`` and
``RENAME COLUMN``.

- **Struct-to-struct rebuild support** (``struct_convert``,
``[sql_table(legacy=true)]``, ``name=`` overrides). Coming
in chunk 14c. For now, schema rebuilds are hand-written
``CREATE TABLE T_new`` + ``INSERT ... SELECT`` --- works,
just verbose.
just verbose. The typed ALTER surface above covers the
additive cases (ADD COLUMN, CREATE INDEX); ``DROP COLUMN`` /
``RENAME COLUMN`` stay on raw ``db |> exec(...)`` until that
chunk lands.

.. seealso::

2 changes: 1 addition & 1 deletion modules/dasSQLITE/.das_module
@@ -6,7 +6,7 @@ def initialize(project_path : string) {
if (das_is_dll_build()) {
register_dynamic_module("{project_path}/dasModuleSQLITE.shared_module", "Module_dasSQLITE")
}
let sqlite_paths = ["sqlite_boost", "sqlite_linq", "sql_migrate"]
let sqlite_paths = ["sqlite_boost", "sqlite_linq", "sqlite_migrate"]
for (path in sqlite_paths) {
register_native_path("sqlite", "{path}", "{project_path}/daslib/{path}.das")
}
58 changes: 37 additions & 21 deletions modules/dasSQLITE/API_MIGRATION.md
@@ -74,7 +74,7 @@ with_sqlite("app.db") <| $(db) {
```

**Convenience wrapper (decided 2026-05-04):** ship `with_latest_sqlite(path)`
in `daslib/sql_migrate` that combines `with_sqlite` + `migrate_to_latest`
in `daslib/sqlite_migrate` that combines `with_sqlite` + `migrate_to_latest`
into one call. Less ceremony, harder to ship a binary that opens the DB
without bringing it up to date. Preferred form for app code; the explicit
two-step form remains for tools/CI that may want to inspect before
@@ -351,24 +351,30 @@ when (if) a concrete use case shows up.

`detect_duplicates` from chunk 13 is fuzzy — produces false positives.
Wrong shape for a compile-time hard error. Right shape for a PR-review
helper: a `daslib/sql_migrate_lint` (or similar) tool that scans the
helper: a `daslib/sqlite_migrate_lint` (or similar) tool that scans the
migration registry and flags suspiciously-similar bodies for human
review. Useful for catching copy-paste mistakes during merges, but
the dev keeps the final call. Out of scope for v1; revisit when the
ecosystem matures.

**Decision (locked 2026-05-04) — ship typed ALTER surface for the
additive cases.**
**Decision (locked 2026-05-04, shipped in chunk 14b) — typed ALTER
surface for the additive cases.**

Migration bodies don't have to stay raw-SQL-only. The cases where
daslang has enough information to validate against the current struct
ship as typed macros; everything else stays raw `db |> exec(…)`.

Field selectors use **string literals**, not `.Field` syntax: gen2's
parser only parses `.Field` inside `_sql {…}` blocks where `_` is a
synthetic placeholder. Plain call sites need explicit string field
names — same convention `[sql_index(fields="A")]` already uses.

| Form | Shipped surface | Emits |
|---|---|---|
| ADD COLUMN | `db \|> add_column(type<T>, .Field [, default = expr])` | `ALTER TABLE t ADD COLUMN Field <SQL_TYPE> [NOT NULL] [DEFAULT …]`. SQL type from `_::sql_bind` adapter rail; NOT NULL derived from absence of `Option<>` wrapping; default from the expression via `to_sql_literal`. |
| CREATE INDEX | `db \|> create_index(type<T>, fields = (.A, .B), unique = true [, name = "ix_…"])` | `CREATE [UNIQUE] INDEX <name> ON t(A, B)`. Name auto-derived as `idx_<table>_<col1>_<col2>` to match the `[sql_index]` annotation convention from tut 24. |
| DROP INDEX | `db \|> drop_index_if_exists("ix_users_email")` | `DROP INDEX IF EXISTS ix_users_email`. Just a name string — no struct typing. |
| ADD COLUMN | `db \|> add_column(type<T>, "Field" [, defaultLit])` | `ALTER TABLE t ADD COLUMN Field <SQL_TYPE> [NOT NULL] [DEFAULT …]`. SQL type from `_::sql_storage_type_for` (the `_::sql_bind` adapter rail); NOT NULL derived from absence of `Option<>` wrapping; DEFAULT from the literal via `literal_init_to_default_clause`. `@sql_column` rename + `@sql_json` / `@sql_blob` storage are honored. |
| CREATE INDEX | `db \|> create_index(type<T>, "Field" \| ("A","B") [, "ix_…"])` | `CREATE INDEX <name> ON t(A, B)`. Name auto-derived as `idx_<table>_<col1>_<col2>` to match the `[sql_index]` annotation convention from tut 24. |
| CREATE UNIQUE INDEX | `db \|> create_unique_index(type<T>, … same shape …)` | `CREATE UNIQUE INDEX <name> ON t(A, B)`. Separate macro (no boolean named arg, since gen2 named-args use `[name=val]` bracket form which call_macros don't intercept). |
| DROP INDEX | `db \|> drop_index_if_exists("ix_users_email")` | Plain runtime function. `DROP INDEX IF EXISTS "ix_users_email"`. |
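
In call form, the shipped surfaces side by side — a sketch, reusing the tutorial's `User` table and field names rather than any real schema:

```das
// typed: validated against the current struct at compile time
db |> add_column(type<User>, "Email", "")    // ALTER TABLE users ADD COLUMN Email TEXT NOT NULL DEFAULT ''
db |> create_index(type<User>, "Email")      // CREATE INDEX idx_users_email ON users(Email)
db |> create_unique_index(type<User>, ("Email", "Name"), "ux_email_name")

// plain runtime function: just a name string, no struct typing
db |> drop_index_if_exists("ux_email_name")
```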

**Reasoning:**

@@ -397,16 +403,26 @@ escape is `db |> exec(…)`):
The reader's mental model is consistent: *typed when daslang has enough
info to validate, raw when it doesn't.*

**Test cases (typed ALTER):**
**Test cases (typed ALTER) — shipped in chunk 14b:**

| File | What it asserts |
|---|---|
| `15_add_column_simple.das` | `add_column(type<User>, .Email, default = "")` emits the expected DDL; column appears in `pragma_table_info` with TEXT NOT NULL DEFAULT ''. |
| `16_add_column_optional.das` | `add_column(type<User>, .LastLoginAt)` for `Option<int64>` field emits INTEGER (no NOT NULL, no default). |
| `17_create_index_default_name.das` | `create_index(type<User>, fields = (.Email,))` emits `idx_users_email`; visible in `sqlite_master`. |
| `18_create_index_unique_composite.das` | `create_index(type<User>, fields = (.Email, .LastLoginAt), unique = true, name = "ix_lookup")` emits `CREATE UNIQUE INDEX ix_lookup ON users(Email, LastLoginAt)`. |
| `19_drop_index_if_exists.das` | Drop nonexistent index → no error; create + drop → second pragma shows it gone. |
| `20_typed_and_raw_mix.das` | Migration body using both `add_column` (typed) and `db \|> exec("DROP COLUMN OldField")` (raw) compiles and applies cleanly. Locks the "mix freely" claim. |
| `migrate_14_add_column_optional.das` | `Option<string>` field emits nullable TEXT (no NOT NULL, no DEFAULT); `pragma_table_info` confirms. |
| `migrate_15_add_column_default.das` | `add_column(type<T>, "Score", 0)` emits NOT NULL INTEGER DEFAULT 0; pre-existing rows backfill via SQLite's ALTER. |
| `migrate_16_add_column_string_quoting.das` | Embedded single-quote (`"O'Reilly"`) escapes correctly via `sql_quote_string_literal`; round-trips. |
| `migrate_17_add_column_renamed.das` | `@sql_column("email_addr")` propagates: emitted DDL uses the renamed identifier, not the daslang field name. |
| `migrate_18_add_column_json.das` | `@sql_json` field of struct type emits TEXT storage. |
| `migrate_19_add_column_blob.das` | `@sql_blob` field of struct type emits BLOB storage; `Option<T>` keeps it nullable. |
| `migrate_20_create_index_basic.das` | Auto-name `idx_<table>_<col>`, non-unique by default. |
| `migrate_21_create_index_unique_composite.das` | Composite UNIQUE with explicit name; `drop_index_if_exists` is idempotent across missing/existing/missing names. |
| `migrate_22_add_column_dup_runtime.das` | Re-adding the same column at runtime panics inside the migration; α-shape transaction rolls back the whole call (audit table empty, table doesn't exist). |
| `failed_add_column_unknown_field.das` | macro_error: field not on struct. |
| `failed_add_column_no_sql_table.das` | macro_error: type<T> lacks `[sql_table]`. |
| `failed_add_column_pk_rejected.das` | macro_error: cannot ADD a PK column (recreate table). |
| `failed_add_column_unique_rejected.das` | macro_error: cannot ADD UNIQUE inline (use `create_unique_index`). |
| `failed_add_column_nonliteral_default.das` | macro_error: DEFAULT must be a compile-time literal. |
| `failed_create_index_unknown_field.das` | macro_error: field not on struct. |
| `failed_create_index_empty_fields.das` | macro_error: empty fields list. |

---

@@ -510,7 +526,7 @@ body, possibly with intermediate `with_transaction { … }` savepoints

**Future polish (not blocking v1) — `chunked_update` helper.**

A `daslib/sql_migrate_chunk` follow-up could ship `db |>
A `daslib/sqlite_migrate_chunk` follow-up could ship `db |>
chunked_update(type<T>, where, set, chunk_size = 10000)` for large data
backfills with controlled lock contention. Logged; revisit if real
demand surfaces.
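
If it ships with the signature sketched above, usage might look like this (entirely hypothetical — the helper does not exist yet, and the `Order` type is illustrative):

```das
// hypothetical future helper; signature taken from the note above
db |> chunked_update(type<Order>, "processed = 0", "processed = 1", 10000)
```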
@@ -664,8 +680,8 @@ attempted-write boundary (`migrate_to_latest`).
Closes Q4 from the open-questions list.

**Setup.** User has been shipping the app for a year *without*
`daslib/sql_migrate`. Their `app.db` already has `users`, `orders`, etc.
Dev pulls in `daslib/sql_migrate`, writes `[sql_migration(version=1)]`
`daslib/sqlite_migrate`. Their `app.db` already has `users`, `orders`, etc.
Dev pulls in `daslib/sqlite_migrate`, writes `[sql_migration(version=1)]`
that does `db |> exec("CREATE TABLE users (…)")` representing the v1
shape, swaps `with_sqlite` → `with_latest_sqlite`. Boom — runtime crashes
with "table users already exists."
@@ -829,7 +845,7 @@ runs once per call.
Catching two distinct version numbers with semantically-similar bodies
(copy-paste merge mistakes) is fuzzy — `detect_duplicates` produces
false positives. Wrong shape for a build gate. Right shape for a
`daslib/sql_migrate_lint` (or similar) PR-review helper that flags
`daslib/sqlite_migrate_lint` (or similar) PR-review helper that flags
suspect bodies for human review. Out of scope for v1; revisit when the
ecosystem matures.

@@ -1066,7 +1082,7 @@ answer they get when they ask.
API_REWORK §30 deferred-list item #1 originally left the door slightly
ajar ("Future: if shipped, live as a separate function macro"). The
discussion in this scenario closes it. Down-migrations are a *design
anti-pattern*, not a missing feature. Future versions / `daslib/sql_migrate_*`
anti-pattern*, not a missing feature. Future versions / `daslib/sqlite_migrate_*`
follow-ups should not ship them either. If a real use case forces our
hand, revisit then; default posture is permanent "no."

@@ -1474,9 +1490,9 @@ union of:
- **Typed ALTER:** `add_column(type<T>, .Field, default=…)`;
`create_index(type<T>, fields=…, unique=…, name=…)`;
`drop_index_if_exists(name)`.
- **Module:** all of the above lives in `daslib/sql_migrate`
- **Module:** all of the above lives in `daslib/sqlite_migrate`
(separate from core `daslib/sql`); `[sql_migration]` annotation
is the only thing that registers without `require daslib/sql_migrate`
is the only thing that registers without `require daslib/sqlite_migrate`
if convenient (TBD during impl).
- **Audit table:** `__schema_version` 3-column shape (`version`,
`description`, `applied_at`).
4 changes: 2 additions & 2 deletions modules/dasSQLITE/API_MISSING.md
@@ -1074,7 +1074,7 @@ with_sqlite("app.db") <| $(db) {
- **Where does `__schema_version` live?** Reserved table name? User-
configurable via a setting? Single integer or migration-name-set?
- **Out of scope for the dasSQLITE rework itself.** Probably ships as
`daslib/sql_migrate` (separate module) once the core API stabilizes.
`daslib/sqlite_migrate` (separate module) once the core API stabilizes.
Design constraint: the `[sql_table]` machinery must not preclude a
future struct-diff implementation — keep field metadata
introspectable.
@@ -1697,7 +1697,7 @@ layer" or "this lives in the provider." Quick sweep:
| 27 indexes | abstract |
| 28 defaults_computed | abstract surface; raw-SQL defaults inherently provider-specific |
| 29 custom_types | abstract registration API; per-provider type table |
| 30 migrations | separate module (`daslib/sql_migrate`) |
| 30 migrations | separate module (`daslib/sqlite_migrate`) |
| 31 views | abstract |
| 32 sql_functions | abstract registration; per-provider built-in baseline |
| 33 PRAGMA | provider-only (`sqlite/sqlite_boost`) |