diff --git a/doc/source/reference/tutorials/sql_43_migrations.rst b/doc/source/reference/tutorials/sql_43_migrations.rst
index a288a2461..aa70effa8 100644
--- a/doc/source/reference/tutorials/sql_43_migrations.rst
+++ b/doc/source/reference/tutorials/sql_43_migrations.rst
@@ -211,11 +211,118 @@ What stays on raw ``db |> exec(...)``:
 - DROP COLUMN / RENAME COLUMN --- old names aren't in the current
   struct, so daslang has nothing to validate against.
 - PK / UNIQUE inline / generated columns added post-hoc ---
-  SQLite can't ALTER these in place; needs a table rebuild
-  (chunk 14c, ``struct_convert``).
+  SQLite can't ALTER these in place; the rebuild path below
+  handles them.
 - ``CHECK`` constraints, FK ADD/DROP, column type changes.
 - Anything ad-hoc that doesn't map to a struct field.
 
+Schema rebuild via ``struct_convert``
+======================================
+
+SQLite's ``ALTER`` vocabulary handles ADD COLUMN, RENAME COLUMN,
+RENAME TABLE, and DROP COLUMN cleanly. Everything else --- PK
+changes, type narrowing, FK alterations, ``CHECK`` changes ---
+needs a *rebuild*: build a new table with the desired shape, copy
+the rows through a converter, drop the old, rename. The SQLite docs
+call this the "12-step recipe"; daslang collapses it to three
+lines of user code.
+
+Three pieces work together:
+
+1. ``[sql_table(name=..., legacy=true)]`` keeps the OLD shape as
+   a *historical* struct. Read-only --- usable with
+   ``select_from`` and ``drop_table_if_exists``, but the compiler
+   refuses ``create_table`` / ``insert`` / ``update`` / ``delete``
+   on it (the write-side helpers are not emitted; the natural
+   unresolved-overload error fires at the call site).
+
+2. ``[struct_convert] def my_v1_to_v2(old : S; var dst : T) {...}``
+   is a function annotation that walks T's fields. For each one
+   it emits a single dispatch call:
+   ``_::struct_convert_field(tgt.X, src.X_or_renamed)``.
+
+   The conversion itself lives in an overloaded
+   ``struct_convert_field`` set. dasSQLITE ships:
+
+   - **Identity** (``T -> T``) --- ``dst = src``.
+   - **``S -> Option<T>`` wrap** --- recurses on the inner
+     ``S -> T``, then wraps with ``some()``. So ``int -> Option<string>``
+     works because the inner ``int -> string`` overload fires.
+   - **``Option<S> -> T`` unwrap** --- NULL collapses to
+     ``default``, then recurses on the inner ``S -> T``.
+   - **``Option<S> -> Option<T>`` cross-payload** --- unwraps,
+     dispatches the inner conversion, re-wraps (NULL stays NULL).
+   - **Primitive targets** --- ``int`` / ``int64`` / ``float`` /
+     ``double`` / ``string`` from any source via the type's
+     constructor (``int(src)``, ``int64(src)``, ...) or string
+     interp.
+
+   To extend, drop your own overload in your module --- the
+   macro emits ``_::struct_convert_field(...)`` so the call
+   resolves at the user-side overload set::
+
+       def struct_convert_field(var dst : MyEnum&; src : int) : void {
+           dst = MyEnum(src)
+       }
+
+   Now ``int -> MyEnum`` auto-derives. The ``Option<int> -> MyEnum``
+   path also picks up your overload via the recursive dispatch.
+
+   The macro itself still handles: ``@sql_renamed_from = "OldName"``
+   lookup (read from old's renamed field), default-init for fields
+   absent from S (uses ``T``'s field initializer, or ``none()`` for
+   ``Option`` fields), body validation (must be a brace block), and
+   user-override detection. An explicit ``dst.X = ...`` (or ``<-``,
+   ``:=``) on the LHS in the body suppresses the auto-fill for X.
+
+3. ``db |> convert_and_rename(type<S>, type<T>)`` runs the
+   whole rebuild: CREATE staging table, copy rows through the
+   converter, DROP original, ALTER ... RENAME staging -> target.
+   The lower-level ``db |> convert(type<S>, type<T>, name=...)`` is
+   available if you only need the staging step (and want to
+   manage the swap separately).
+
+The rebuild runs inside ``migrate_to_latest``'s big transaction
+--- a later migration failing rolls the rebuild back too.
+
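+If you take the lower-level ``convert`` route, the swap is yours to
+finish. A minimal sketch --- the struct names (``OldT`` / ``NewT``) and
+the staging identifier are hypothetical placeholders, only ``convert``,
+``exec``, and the ``[sql_migration]`` wrapper are real surface:
+
+.. code-block:: das
+
+    [sql_migration(version = 7, description = "restructure users, manual swap")]
+    def migration_007(db : SqlRunner) {
+        // staging step only: creates "users_new" with NewT's shape and
+        // copies the rows through the [struct_convert] function
+        db |> convert(type<OldT>, type<NewT>, name = "users_new")
+        // finish the swap by hand, still inside the migration transaction
+        db |> exec("DROP TABLE \"users\"")
+        db |> exec("ALTER TABLE \"users_new\" RENAME TO \"users\"")
+    }
+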
+.. code-block:: das
+
+    [sql_table(name = "users", legacy = true)]
+    struct UserV6 {
+        @sql_primary_key Id : int
+        Name : string
+        LegacyEmail : string
+    }
+
+    [sql_table(name = "users")]
+    struct User {
+        @sql_primary_key Id : int
+        Name : string
+        Email : string
+    }
+
+    [struct_convert]
+    def my_v6_to_v7(old : UserV6; var dst : User) {
+        dst.Email = (old.LegacyEmail != "")
+            ? old.LegacyEmail
+            : "unknown@example.com"
+    }
+
+    [sql_migration(version = 7, description = "restructure users")]
+    def migration_007(db : SqlRunner) {
+        db |> convert_and_rename(type<UserV6>, type<User>)
+    }
+
+The auto-rules pick up ``Id`` and ``Name`` (same-name, same-type);
+the user body's one line handles ``Email`` (the new field).
+``LegacyEmail`` is in S but not in T --- it gets dropped on
+conversion. After the rebuild, the table named ``users`` has the
+new shape, with rows preserved.
+
+Path 2 (no legacy struct): hand-written
+``db |> exec("INSERT INTO users_new (...) SELECT (...) FROM users")``
+also works. The legacy struct is a convenience, not a requirement.
+
 Adopting migrations on an existing DB
 ======================================
@@ -290,15 +397,6 @@ What does NOT ship
 
 Future work, also dovetails with the eventual ``dasSQL``
 abstraction layer.
 
-- **Struct-to-struct rebuild support** (``struct_convert``,
-  ``[sql_table(legacy=true)]``, ``name=`` overrides). Coming
-  in chunk 14c. For now, schema rebuilds are hand-written
-  ``CREATE TABLE T_new`` + ``INSERT ... SELECT`` --- works,
-  just verbose. The typed ALTER surface above covers the
-  additive cases (ADD COLUMN, CREATE INDEX); ``DROP COLUMN`` /
-  ``RENAME COLUMN`` stay on raw ``db |> exec(...)`` until that
-  chunk lands.
-
 .. seealso::
 
     Full source: :download:`tutorials/sql/43-migrations.das <../../../../tutorials/sql/43-migrations.das>`
diff --git a/modules/dasSQLITE/daslib/sqlite_boost.das b/modules/dasSQLITE/daslib/sqlite_boost.das
index 25fda37a8..defdb39c8 100644
--- a/modules/dasSQLITE/daslib/sqlite_boost.das
+++ b/modules/dasSQLITE/daslib/sqlite_boost.das
@@ -87,6 +87,7 @@ def private _qualify_schema(sql : string; schema_name : string) : string {
     out = out |> replace("DROP VIEW IF EXISTS \"", "DROP VIEW IF EXISTS {q}\"")
     out = out |> replace("DROP VIEW \"", "DROP VIEW {q}\"")
     out = out |> replace("DROP INDEX IF EXISTS \"", "DROP INDEX IF EXISTS {q}\"")
+    out = out |> replace("ALTER TABLE \"", "ALTER TABLE {q}\"")
     return out
 }
 
@@ -1081,6 +1082,31 @@ def create_table(db : SqlRunner; t : auto(TT)) : void {
     }
 }
 
+[unused_argument(t), template(t)]
+def try_create_table(db : SqlRunner; t : type; name : string) : SqlError {
+    //! Like ``try_create_table(type<T>)`` but emits the DDL with table identifier ``name`` instead of T's
+    //! own ``[sql_table(name=...)]``. Use during a struct_convert rebuild to land the new schema in
+    //! ``users_new`` before the swap. Indexes auto-derive against the runtime name.
+    let err = db |> try_exec(_::_sql_create_table_sql(default<TT>, name))
+    if (err |> is_some) {
+        return err
+    }
+    let idx = _::_sql_create_indexes_sql(default<TT>, name)
+    if (empty(idx)) {
+        return none(type)
+    }
+    return db |> try_exec(idx)
+}
+
+[unused_argument(t), template(t)]
+def create_table(db : SqlRunner; t : type; name : string) : void {
+    //! Strict variant of ``try_create_table(type<T>, name)``. Panics with the libsqlite3 errmsg on failure.
+    let err = db |> try_create_table(type<TT>, name)
+    if (err |> is_some) {
+        panic(err |> unwrap)
+    }
+}
+
 [template(t)]
 def try_drop_table_if_exists(db : SqlRunner; t : auto(TT)) : SqlError {
     //! Emits DROP TABLE IF EXISTS for the type ``T``.
@@ -1207,6 +1233,32 @@ def insert(db : SqlRunner; row : auto(TT)) : int64 {
     return r |> unwrap
 }
 
+def try_insert(db : SqlRunner; row : auto(TT); name : string) : Result {
+    //! Inserts a single row into table identifier ``name`` instead of T's own ``[sql_table(name=...)]``.
+    //! Used during a struct_convert rebuild to write into the staging table before the rename swap.
+    //! Same PK semantics as ``try_insert(row)`` — zero PK triggers the no-PK INSERT for autoincrement.
+    let pk_unset = _::_sql_pk_is_unset(row)
+    let sql = (pk_unset
+        ? _::_sql_insert_no_pk_sql(default<TT>, name)
+        : _::_sql_insert_with_pk_sql(default<TT>, name))
+    let r = db |> try_step_dml(sql, $(var stmt : sqlite3_stmt?) {
+        _::_sql_bind_row(stmt, row, !pk_unset)
+    }, "insert")
+    if (r |> is_err) {
+        return err(r |> unwrap_err, type)
+    }
+    return ok(sqlite3_last_insert_rowid(db.sqlite_handle), type)
+}
+
+def insert(db : SqlRunner; row : auto(TT); name : string) : int64 {
+    //! Strict variant of ``try_insert(row, name)``. Panics with the libsqlite3 errmsg on failure.
+    let r = db |> try_insert(row, name)
+    if (r |> is_err) {
+        panic(r |> unwrap_err)
+    }
+    return r |> unwrap
+}
+
 def try_insert(db : SqlRunner; rows : array) : Result {
     //! Batched INSERT inside a single BEGIN IMMEDIATE / COMMIT transaction.
     //! All rows must agree on PK presence — mixed-PK arrays return ``Err`` without opening a txn.
@@ -2143,7 +2195,7 @@ def private resolve_schema_from_path(p : string) : string {
     return path_join(get_das_root(), p)
 }
 
-def private is_option_field_type(td : TypeDeclPtr) : bool {
+def is_option_field_type(td : TypeDeclPtr) : bool {
     if (td == null) {
         return false
     }
@@ -2163,7 +2215,7 @@ def private is_option_field_type(td : TypeDeclPtr) : bool {
     return false
 }
 
-def private unwrap_option_payload_type(var td : TypeDeclPtr) : TypeDeclPtr {
+def unwrap_option_payload_type(var td : TypeDeclPtr) : TypeDeclPtr {
     // Unwrap Option → T (or td unchanged); typeMacro carries T in dimExpr[1], structType in `_value`.
     if (td == null) {
         return td
     }
@@ -2180,14 +2232,16 @@ def private unwrap_option_payload_type(var td : TypeDeclPtr) : TypeDeclPtr {
         }
         return clone_type((td.dimExpr[1] as ExprTypeDecl).typeexpr)
     }
-    if (td.baseType == Type.tStructure
-        && td.structType != null
-        && td.structType.name == "Option"
-        && td.structType._module != null
-        && td.structType._module.name == "option") {
-        for (f in td.structType.fields) {
-            if ("{f.name}" == "_value") {
-                return clone_type(f._type)
+    // Post-instantiation form: tStructure named `Option` or `Option<...>`. Mirrors is_option_field_type's
+    // shape detection — module identity doesn't load-bear once we're inside an Option-shaped struct;
+    // template instantiation downstream of "option" stamps the callee's _module, not "option".
+    if (td.baseType == Type.tStructure && td.structType != null) {
+        let sname = string(td.structType.name)
+        if (sname == "Option" || sname |> starts_with("Option<")) {
+            for (f in td.structType.fields) {
+                if ("{f.name}" == "_value") {
+                    return clone_type(f._type)
+                }
             }
         }
     }
@@ -2301,6 +2355,12 @@ class SqlIndexMacro : AstStructureAnnotation {
     //! Stackable struct-level annotation: ``[sql_index(fields="A"|("A","B"), unique=?, name=?)]``.
     //! Must appear after ``[sql_table]`` in the annotation list (it rewrites that helper's body).
     def override apply(var st : StructurePtr; var group : ModuleGroup; args : AnnotationArgumentList; var errors : das_string) : bool {
+        // Reject on legacy=true structs — the [sql_table] emits no CREATE TABLE, so the index would
+        // have no statement to attach to and would collide with the current struct's index on rebuild.
+        if (find_struct_helper_fn("_sql_legacy_table_marker", st) != null) {
+            errors := "[sql_index] on '{st.name}': not supported on [sql_table(legacy=true)] structs (they don't define schema). Drop the [sql_index] annotation; the current (non-legacy) struct's indexes are reapplied automatically by ``convert_and_rename``."
+            return false
+        }
         let cols = find_string_array_annotation(args, "fields")
         if (length(cols) == 0) {
             errors := "[sql_index]: missing or empty `fields=` argument. Use `fields=\"ColName\"` for a single-column index or `fields=(\"A\", \"B\")` for a composite."
@@ -2414,6 +2474,8 @@ def private find_parent_table_and_pk(var st : StructurePtr; parent_name : string
 class SqlTableMacro : AstStructureAnnotation {
     def override apply(var st : StructurePtr; var group : ModuleGroup; args : AnnotationArgumentList; var errors : das_string) : bool {
         let table_name = find_annotation(args, "name", string(st.name))
+        // legacy=true: emit read-side helpers only. Writes fail as unresolved-overload.
+        let is_legacy = find_annotation(args, "legacy", false)
         var schema_assert_exprs : array<ExpressionPtr>
         let schema_from_arg = find_arg(args, "schema_from")
@@ -2607,44 +2669,78 @@ class SqlTableMacro : AstStructureAnnotation {
             return false
         }
 
-        // [sql_index] siblings rewrite the placeholder body in finish() (st.annotations not populated yet here).
+        // Read-side helpers are always emitted; write-side helpers are skipped for legacy=true structs
+        // so the natural overload-resolution error fires when a caller tries to write to one.
         // schema_assert_exprs ride along inside _sql_table_name's body as typecheck-time concept_asserts.
+        if (is_legacy) {
+            // Marker so [sql_index] can reject legacy structs with a precise error.
+            var fn_legacy_marker = qmacro_function("_sql_legacy_table_marker") $(typ : $t(st)) : bool {
+                return true
+            }
+            fn_legacy_marker.flags |= FunctionFlags.generated
+            fn_legacy_marker.body |> force_at(st.at)
+            compiling_module() |> add_function(fn_legacy_marker)
+        }
         var fn_table_name = make_table_name_fn(st, table_name, schema_assert_exprs)
         compiling_module() |> add_function(fn_table_name)
+        // DROP is read-side enough — devs may legitimately drop a legacy table during migration cleanup.
+        // 2-arg form must be added BEFORE the 1-arg form so find_struct_helper_fn returns the 1-arg form.
+        var fn_drop_named = make_drop_sql_named_fn(st)
+        compiling_module() |> add_function(fn_drop_named)
         var fn_drop = make_drop_sql_fn(st, table_name)
         compiling_module() |> add_function(fn_drop)
-        var fn_create = make_create_table_sql_fn(st, table_name, fields)
-        compiling_module() |> add_function(fn_create)
-        // Empty `_sql_create_indexes_sql` placeholder; [sql_index] siblings accumulate into it.
-        var fn_indexes = make_create_indexes_sql_fn(st, "")
-        compiling_module() |> add_function(fn_indexes)
-        var fn_ins_pk = make_insert_sql_fn(st, table_name, fields, true)
-        compiling_module() |> add_function(fn_ins_pk)
-        var fn_ins_nopk = make_insert_sql_fn(st, table_name, fields, false)
-        compiling_module() |> add_function(fn_ins_nopk)
-        var fn_pk_unset = make_pk_is_unset_fn(st, fields)
-        compiling_module() |> add_function(fn_pk_unset)
-        var fn_bind = make_bind_row_fn(st, fields)
-        compiling_module() |> add_function(fn_bind)
+        // SELECT and read-row stay so select_from(type<T>) works on legacy structs.
+        var fn_select_all_named = make_select_all_sql_named_fn(st, fields)
+        compiling_module() |> add_function(fn_select_all_named)
         var fn_select_all = make_select_all_sql_fn(st, table_name, fields)
         compiling_module() |> add_function(fn_select_all)
         var fn_read = make_read_row_fn(st, fields)
         compiling_module() |> add_function(fn_read)
-        var fn_upd_pk = make_update_by_pk_sql_fn(st, table_name, fields)
-        compiling_module() |> add_function(fn_upd_pk)
-        var fn_del_pk = make_delete_by_pk_sql_fn(st, table_name, fields)
-        compiling_module() |> add_function(fn_del_pk)
-        var fn_bind_upd = make_bind_row_for_update_fn(st, fields)
-        compiling_module() |> add_function(fn_bind_upd)
-        var fn_bind_pk = make_bind_row_pk_only_fn(st, fields)
-        compiling_module() |> add_function(fn_bind_pk)
         var fn_col_info = make_column_info_fn(st, fields)
         compiling_module() |> add_function(fn_col_info)
+        // ---- Write-side helpers (skipped for [sql_table(legacy=true)]) ----
+        if (!is_legacy) {
+            var fn_create_named = make_create_table_sql_named_fn(st, fields)
+            compiling_module() |> add_function(fn_create_named)
+            var fn_create = make_create_table_sql_fn(st, table_name, fields)
+            compiling_module() |> add_function(fn_create)
+            // 2-arg form must be added BEFORE the 1-arg form so find_struct_helper_fn (used by [sql_index]
+            // finish-pass) returns the 1-arg form whose body owns the burned-in SQL.
+            var fn_indexes_named = make_create_indexes_sql_named_fn(st, table_name)
+            compiling_module() |> add_function(fn_indexes_named)
+            // Empty `_sql_create_indexes_sql` placeholder; [sql_index] siblings accumulate into it.
+            var fn_indexes = make_create_indexes_sql_fn(st, "")
+            compiling_module() |> add_function(fn_indexes)
+            var fn_ins_pk_named = make_insert_sql_named_fn(st, fields, true)
+            compiling_module() |> add_function(fn_ins_pk_named)
+            var fn_ins_pk = make_insert_sql_fn(st, table_name, fields, true)
+            compiling_module() |> add_function(fn_ins_pk)
+            var fn_ins_nopk_named = make_insert_sql_named_fn(st, fields, false)
+            compiling_module() |> add_function(fn_ins_nopk_named)
+            var fn_ins_nopk = make_insert_sql_fn(st, table_name, fields, false)
+            compiling_module() |> add_function(fn_ins_nopk)
+            var fn_pk_unset = make_pk_is_unset_fn(st, fields)
+            compiling_module() |> add_function(fn_pk_unset)
+            var fn_bind = make_bind_row_fn(st, fields)
+            compiling_module() |> add_function(fn_bind)
+            var fn_upd_pk_named = make_update_by_pk_sql_named_fn(st, fields)
+            compiling_module() |> add_function(fn_upd_pk_named)
+            var fn_upd_pk = make_update_by_pk_sql_fn(st, table_name, fields)
+            compiling_module() |> add_function(fn_upd_pk)
+            var fn_del_pk_named = make_delete_by_pk_sql_named_fn(st, fields)
+            compiling_module() |> add_function(fn_del_pk_named)
+            var fn_del_pk = make_delete_by_pk_sql_fn(st, table_name, fields)
+            compiling_module() |> add_function(fn_del_pk)
+            var fn_bind_upd = make_bind_row_for_update_fn(st, fields)
+            compiling_module() |> add_function(fn_bind_upd)
+            var fn_bind_pk = make_bind_row_pk_only_fn(st, fields)
+            compiling_module() |> add_function(fn_bind_pk)
+        }
+
         return true
     }
-
     def apply_schema_from(var st : StructurePtr;
                           raw_path : string;
                           tbl_name : string;
@@ -2770,9 +2866,21 @@ class SqlTableMacro : AstStructureAnnotation {
     }
 
     def make_drop_sql_fn(var st : StructurePtr; table_name : string) : FunctionPtr {
-        let sql = "DROP TABLE IF EXISTS \"{table_name}\""
+        // 1-arg wrapper forwards via `default` so `typ` stays unread (avoids "type expression result is used").
        var fn = qmacro_function("_sql_drop_table_if_exists_sql") $(typ : $t(st)) : string {
-            return $v(sql)
+            return _sql_drop_table_if_exists_sql(default<$t(st)>, $v(table_name))
+        }
+        fn.flags |= FunctionFlags.generated
+        fn.body |> force_at(st.at)
+        return <- fn
+    }
+
+    def make_drop_sql_named_fn(var st : StructurePtr) : FunctionPtr {
+        // 2-arg form: runtime ``name`` lets a caller drop a staging/aliased table identifier
+        // (used by struct_convert rebuild). [unused_argument(typ)] because daslang dispatches on T,
+        // but the body needs only ``name``.
+        var fn = qmacro_function("_sql_drop_table_if_exists_sql") $(typ : $t(st); name : string) : string {
+            return "DROP TABLE IF EXISTS " + sql_quote_id(name)
         }
         fn.flags |= FunctionFlags.generated
         fn.body |> force_at(st.at)
@@ -2790,12 +2898,44 @@ class SqlTableMacro : AstStructureAnnotation {
         return <- fn
     }
 
+    def make_create_indexes_sql_named_fn(var st : StructurePtr; table_name : string) : FunctionPtr {
+        //! 2-arg form: rewrites the 1-arg SQL onto ``name``. Auto-named ``idx_<table>_<cols>`` follows;
+        //! user-explicit ``[sql_index(name=...)]`` is preserved.
+        let orig_quoted = "\"{table_name}\""
+        let orig_idx_prefix = "\"idx_{table_name}_"
+        var fn = qmacro_function("_sql_create_indexes_sql") $(typ : $t(st); name : string) : string {
+            let escaped_name = name |> replace("\"", "\"\"")
+            return (_sql_create_indexes_sql(default<$t(st)>)
+                |> replace($v(orig_quoted), "\"" + escaped_name + "\"")
+                |> replace($v(orig_idx_prefix), "\"idx_" + escaped_name + "_"))
+        }
+        fn.flags |= FunctionFlags.generated
+        fn.body |> force_at(st.at)
+        return <- fn
+    }
+
     def make_create_table_sql_fn(var st : StructurePtr; table_name : string; fields : array) : FunctionPtr {
+        // 1-arg form: delegates to the 2-arg form with the struct's own [sql_table(name=...)] identifier.
+        // See make_drop_sql_fn for why we construct default<$t(st)> rather than forwarding typ.
+        var fn = qmacro_function("_sql_create_table_sql") $(typ : $t(st)) : string {
+            return _sql_create_table_sql(default<$t(st)>, $v(table_name))
+        }
+        fn.flags |= FunctionFlags.generated
+        fn.body |> force_at(st.at)
+        return <- fn
+    }
+
+    def make_create_table_sql_named_fn(var st : StructurePtr; fields : array) : FunctionPtr {
+        // 2-arg form: composes CREATE TABLE DDL against runtime ``name``. Per-field SQL is baked at macro time
+        // (column name, type, constraints), but the table identifier comes from the runtime parameter so a
+        // struct_convert rebuild can land the new schema in a staging table.
        var write_exprs : array<ExpressionPtr>
-        write_exprs |> reserve(3 * length(fields) + 2)
+        write_exprs |> reserve(3 * length(fields) + 4)
 
-        let header = "CREATE TABLE \"{table_name}\"("
-        write_exprs |> push(qmacro_expr(${ writer |> write($v(header)); }))
+        // CREATE TABLE "<name>"( — name routed through sql_quote_id for `"` escaping.
+        write_exprs |> push(qmacro_expr(${ writer |> write("CREATE TABLE "); }))
+        write_exprs |> push(qmacro_expr(${ writer |> write(sql_quote_id(name)); }))
+        write_exprs |> push(qmacro_expr(${ writer |> write("("); }))
 
         for (i in range(length(fields))) {
             let info = fields[i]
@@ -2830,7 +2970,7 @@ class SqlTableMacro : AstStructureAnnotation {
         }
         write_exprs |> push(qmacro_expr(${ writer |> write(")"); }))
 
-        var fn = qmacro_function("_sql_create_table_sql") $(typ : $t(st)) : string {
+        var fn = qmacro_function("_sql_create_table_sql") $(typ : $t(st); name : string) : string {
             return build_string() $(var writer) {
                 $b(write_exprs)
             }
@@ -2889,7 +3029,21 @@ class SqlTableMacro : AstStructureAnnotation {
     }
 
     def make_insert_sql_fn(var st : StructurePtr; table_name : string; fields : array; include_pk : bool) : FunctionPtr {
-        // Explicit column list skips GENERATED columns; falls back to `DEFAULT VALUES` when no bindable cols remain.
+        // 1-arg form: delegates to the same-named 2-arg form. The 1-arg call site historically takes
+        // either an instance OR `type<T>`; the wrapper must not READ typ or the type-witness path errors.
+        let helper_name = include_pk ? "_sql_insert_with_pk_sql" : "_sql_insert_no_pk_sql"
+        var fn = qmacro_function(helper_name) $(typ : $t(st)) : string {
+            return $c(helper_name)(default<$t(st)>, $v(table_name))
+        }
+        fn.flags |= FunctionFlags.generated
+        fn.body |> force_at(st.at)
+        return <- fn
+    }
+
+    def make_insert_sql_named_fn(var st : StructurePtr; fields : array; include_pk : bool) : FunctionPtr {
+        // 2-arg form: composes the INSERT with runtime ``name``. The cols-list / placeholders chunk is
+        // burned at macro time; the table identifier is the runtime parameter so a struct_convert rebuild
+        // can target a staging table.
         let helper_name = include_pk ? "_sql_insert_with_pk_sql" : "_sql_insert_no_pk_sql"
         let cols = build_string() $(var w) {
             var first = true
@@ -2919,11 +3073,13 @@ class SqlTableMacro : AstStructureAnnotation {
                 first = false
             }
         }
-        let sql = (empty(cols)
-            ? "INSERT INTO \"{table_name}\" DEFAULT VALUES"
-            : "INSERT INTO \"{table_name}\" ({cols}) VALUES ({placeholders})")
-        var fn = qmacro_function(helper_name) $(typ : $t(st)) : string {
-            return $v(sql)
+        // Two-tail SQL: with explicit column list, or DEFAULT VALUES if every column is computed/excluded.
+        // name routed through sql_quote_id for `"` escaping; suffix carries the post-identifier tail only.
+        let suffix = (empty(cols)
+            ? " DEFAULT VALUES"
+            : " ({cols}) VALUES ({placeholders})")
+        var fn = qmacro_function(helper_name) $(typ : $t(st); name : string) : string {
+            return "INSERT INTO " + sql_quote_id(name) + $v(suffix)
         }
         fn.flags |= FunctionFlags.generated
         fn.body |> force_at(st.at)
@@ -3001,6 +3157,17 @@ class SqlTableMacro : AstStructureAnnotation {
     }
 
     def make_select_all_sql_fn(var st : StructurePtr; table_name : string; fields : array) : FunctionPtr {
+        // 1-arg form: delegates to the 2-arg form.
+        // See make_drop_sql_fn for the default<$t(st)> rationale.
+        var fn = qmacro_function("_sql_select_all_sql") $(typ : $t(st)) : string {
+            return _sql_select_all_sql(default<$t(st)>, $v(table_name))
+        }
+        fn.flags |= FunctionFlags.generated
+        fn.body |> force_at(st.at)
+        return <- fn
+    }
+
+    def make_select_all_sql_named_fn(var st : StructurePtr; fields : array) : FunctionPtr {
+        // 2-arg form: column list is baked at macro time; table identifier is the runtime parameter.
         let col_list = build_string() $(var w) {
             for (i in range(length(fields))) {
                 if (i > 0) {
@@ -3011,9 +3178,9 @@ class SqlTableMacro : AstStructureAnnotation {
                 w |> write("\"")
             }
         }
-        let sql = "SELECT {col_list} FROM \"{table_name}\""
-        var fn = qmacro_function("_sql_select_all_sql") $(typ : $t(st)) : string {
-            return $v(sql)
+        let prefix = "SELECT {col_list} FROM "
+        var fn = qmacro_function("_sql_select_all_sql") $(typ : $t(st); name : string) : string {
+            return $v(prefix) + sql_quote_id(name)
         }
         fn.flags |= FunctionFlags.generated
         fn.body |> force_at(st.at)
@@ -3059,12 +3226,24 @@ class SqlTableMacro : AstStructureAnnotation {
     }
 
     def make_update_by_pk_sql_fn(var st : StructurePtr; table_name : string; fields : array) : FunctionPtr {
+        // 1-arg form: delegates to the 2-arg form. See make_drop_sql_fn for the default<$t(st)> rationale.
+        // The PK-less / no-SET-columns runtime-panic stubs ride inside the 2-arg form.
+        var fn = qmacro_function("_sql_update_by_pk_sql") $(typ : $t(st)) : string {
+            return _sql_update_by_pk_sql(default<$t(st)>, $v(table_name))
+        }
+        fn.flags |= FunctionFlags.generated
+        fn.body |> force_at(st.at)
+        return <- fn
+    }
+
+    def make_update_by_pk_sql_named_fn(var st : StructurePtr; fields : array) : FunctionPtr {
+        // 2-arg form: composes the UPDATE with runtime ``name``. SET-clause is baked at macro time.
         // PK-less structs get a runtime-panic stub (concept_assert would fire on every [sql_table] decl).
         let pk_idx = find_pk_field(fields)
         if (pk_idx < 0) {
             let st_name = string(st.name)
             let msg = "[sql_table] {st_name}: update(row) / delete_(row) / delete_by_id are unavailable — no @sql_primary_key field. Use the macro forms `_sql_update(type<{st_name}>, where, set)` / `_sql_delete(type<{st_name}>, where)` instead."
-            var fn = qmacro_function("_sql_update_by_pk_sql") $(typ : $t(st)) : string {
+            var fn = qmacro_function("_sql_update_by_pk_sql") $(typ : $t(st); name : string) : string {
                 panic($v(msg))
                 return ""
             }
@@ -3092,7 +3271,7 @@ class SqlTableMacro : AstStructureAnnotation {
         if (empty(set_clause)) {
             let st_name = string(st.name)
             let msg = "[sql_table] {st_name}: update(row) has nothing to SET — every non-PK column is missing or @sql_computed. Add at least one writable non-PK column, or use raw `exec`."
-            var fn = qmacro_function("_sql_update_by_pk_sql") $(typ : $t(st)) : string {
+            var fn = qmacro_function("_sql_update_by_pk_sql") $(typ : $t(st); name : string) : string {
                 panic($v(msg))
                 return ""
             }
             fn.flags |= FunctionFlags.generated
             fn.body |> force_at(st.at)
             return <- fn
         }
-        let sql = "UPDATE \"{table_name}\" SET {set_clause} WHERE \"{pk_col}\" = ?"
-        var fn = qmacro_function("_sql_update_by_pk_sql") $(typ : $t(st)) : string {
-            return $v(sql)
+        let prefix = "UPDATE "
+        let suffix = " SET {set_clause} WHERE \"{pk_col}\" = ?"
+        var fn = qmacro_function("_sql_update_by_pk_sql") $(typ : $t(st); name : string) : string {
+            return $v(prefix) + sql_quote_id(name) + $v(suffix)
         }
         fn.flags |= FunctionFlags.generated
         fn.body |> force_at(st.at)
@@ -3110,11 +3290,22 @@ class SqlTableMacro : AstStructureAnnotation {
     }
 
     def make_delete_by_pk_sql_fn(var st : StructurePtr; table_name : string; fields : array) : FunctionPtr {
+        // 1-arg form: delegates to the 2-arg form. See make_drop_sql_fn for the default<$t(st)> rationale.
+        var fn = qmacro_function("_sql_delete_by_pk_sql") $(typ : $t(st)) : string {
+            return _sql_delete_by_pk_sql(default<$t(st)>, $v(table_name))
+        }
+        fn.flags |= FunctionFlags.generated
+        fn.body |> force_at(st.at)
+        return <- fn
+    }
+
+    def make_delete_by_pk_sql_named_fn(var st : StructurePtr; fields : array) : FunctionPtr {
+        // 2-arg form: composes the DELETE with runtime ``name``. WHERE-clause + PK column baked at macro time.
         let pk_idx = find_pk_field(fields)
         if (pk_idx < 0) {
             let st_name = string(st.name)
             let msg = "[sql_table] {st_name}: delete_(row) / delete_by_id require a @sql_primary_key field. Use `_sql_delete(type<{st_name}>, where)` instead."
-            var fn = qmacro_function("_sql_delete_by_pk_sql") $(typ : $t(st)) : string {
+            var fn = qmacro_function("_sql_delete_by_pk_sql") $(typ : $t(st); name : string) : string {
                 panic($v(msg))
                 return ""
             }
             fn.flags |= FunctionFlags.generated
             fn.body |> force_at(st.at)
             return <- fn
         }
         let pk_col = fields[pk_idx].col_name
-        let sql = "DELETE FROM \"{table_name}\" WHERE \"{pk_col}\" = ?"
-        var fn = qmacro_function("_sql_delete_by_pk_sql") $(typ : $t(st)) : string {
-            return $v(sql)
+        let prefix = "DELETE FROM "
+        let suffix = " WHERE \"{pk_col}\" = ?"
+        var fn = qmacro_function("_sql_delete_by_pk_sql") $(typ : $t(st); name : string) : string {
+            return $v(prefix) + sql_quote_id(name) + $v(suffix)
         }
         fn.flags |= FunctionFlags.generated
         fn.body |> force_at(st.at)
@@ -3685,7 +3877,6 @@ class private SqlFts5Macro : AstStructureAnnotation {
             let fname = string(field.name)
             let is_optional = is_option_field_type(field._type)
             let is_fts_rank = find_annotation(field.annotation, "sql_fts_rank", false)
-
             // Reject every annotation that doesn't apply to a virtual table.
             for (rej in FTS5_REJECTED_ANNOTATIONS) {
                 var present = false
@@ -3956,7 +4147,6 @@ def private make_fts5_column_info_fn(var st : StructurePtr; fields : array
 (`db |> exec(…)`).
 [macro_function]
 def private resolve_sql_table(prog : ProgramPtr; at : LineInfo; var typeArg : ExpressionPtr;
                               macroName : string; var st : StructurePtr&; var table_name : string&) : bool {
-    // Decode `type` arg: T must be a struct carrying the [sql_table]-generated
-    // `_sql_table_name` helper, otherwise we have no table identifier to ALTER.
+    // T must carry [sql_table]'s _sql_table_name helper.
     if (typeArg._type == null) {
         macro_error(prog, at, "{macroName}: `type` argument has no inferred type yet")
         return false
     }
@@ -4313,11 +4498,6 @@ def private decode_string_const(e : ExpressionPtr) : tuple
-    //! ``db |> add_column(type<T>, "FieldName" [, defaultLiteral])``.
-    //! Emits ``ALTER TABLE "<table>" ADD COLUMN "<col>"[ NOT NULL][ DEFAULT …]``.
-    //! Honors @sql_column rename, @sql_json / @sql_blob storage. Rejects PK / UNIQUE /
-    //! computed / @sql_default_fn at macro time (those need a table rebuild).
     def override visit(prog : ProgramPtr; mod : Module?; var call : ExprCallMacro?) : Expression? {
         // call.arguments = [db, type, "FieldName"] (3) or [..., defaultLit] (4) after the `db |>` desugar.
         let nArgs = call.arguments |> length
@@ -4426,8 +4606,7 @@ class private AddColumnMacro : AstCallMacro {
 def private resolve_field_columns(prog : ProgramPtr; at : LineInfo; var st : StructurePtr;
                                   fieldsArg : ExpressionPtr; macroName : string;
                                   var sql_cols : array<string>&) : bool {
-    // Accept a single string literal ("Email") or a tuple literal of string literals ("A", "B").
-    // Array literals like ["A", "B"] parse as `to_array_move(fixed_array(…))` — too hairy to AST-match.
+    // Single string literal or a tuple of string literals. Array literals are AST-hairy and rejected.
     if (fieldsArg == null) {
         macro_error(prog, at, "{macroName}: missing fields argument")
         return false
@@ -4525,9 +4704,6 @@ def private emit_create_index(prog : ProgramPtr; var call : ExprCallMacro?; is_u
 [call_macro(name="create_index")]
 class private CreateIndexMacro : AstCallMacro {
-    //! Migration-context typed CREATE INDEX. Form:
-    //! ``db |> create_index(type<T>, "Field" | ["A", "B"] [, "idx_name"])``.
-    //! Auto-name follows the [sql_index] convention: ``idx_<table>_<cols>``.
     def override visit(prog : ProgramPtr; mod : Module?; var call : ExprCallMacro?) : Expression? {
         return emit_create_index(prog, call, false)
     }
@@ -4535,9 +4711,6 @@ class private CreateIndexMacro : AstCallMacro {
 [call_macro(name="create_unique_index")]
 class private CreateUniqueIndexMacro : AstCallMacro {
-    //! Migration-context typed CREATE UNIQUE INDEX. Same shape as ``create_index`` plus
-    //! the UNIQUE constraint. SQLite emits a duplicate-violation panic at INSERT time
-    //! if existing rows already conflict; wrap in a migration body for atomic rollback.
     def override visit(prog : ProgramPtr; mod : Module?; var call : ExprCallMacro?) : Expression? {
         return emit_create_index(prog, call, true)
     }
diff --git a/modules/dasSQLITE/daslib/sqlite_linq.das b/modules/dasSQLITE/daslib/sqlite_linq.das
index c8a3368c8..4648fafaf 100644
--- a/modules/dasSQLITE/daslib/sqlite_linq.das
+++ b/modules/dasSQLITE/daslib/sqlite_linq.das
@@ -42,27 +42,14 @@ def _first_opt(arr : array<TT>) : Option<TT> {
 }
 
 def _first(arr : array) : TT -const -& {
-    //! Compat-mode fallback: delegates to ``daslib/linq.das`` ``first``,
-    //! which panics with "sequence contains no elements" on empty and
-    //! handles non-copyable ``TT`` via ``clone_to_move``. Inside `_sql(...)`
-    //! the macro intercepts this call before evaluation and emits ` LIMIT 1`
-    //! with the One materializer.
+    //! Compat-mode fallback to ``linq.first`` (panics on empty). Inside ``_sql(...)``
+    //! the macro intercepts and emits ``LIMIT 1`` with the One materializer.
     return first(arr)
 }
 
-
 // ===== to_sql_literal — runtime stringifier for `_create_view` body inlining =====
-//
-// SQLite stores view bodies as text in `sqlite_schema` and rejects `?` placeholders
-// inside `CREATE VIEW`, so any value referenced by a view body must be inlined as a
-// SQL literal at view-creation time.
`_create_view` emits `_::to_sql_literal()` -// per bound expression; `_::` resolves at the user's call site, so a user can extend -// the set with their own one-liner overload (`def to_sql_literal(s : Status) : string => "{int(s)}"`). -// -// Defaults cover all numeric primitives, bool, and string. Enums are handled by -// the `to_sql_literal(auto(TT))` catch-all below (emitted as their underlying -// integer); other types hit the catch-all's `concept_assert` with a "define a -// one-line overload in YourType's module" pointer. +// `_::` resolution lets users extend with a one-line overload in their own module +// (`def to_sql_literal(s : Status) : string => "{int(s)}"`). def to_sql_literal(v : int) : string => "{v}" def to_sql_literal(v : int8) : string => "{v}" @@ -89,7 +76,6 @@ def to_sql_literal(value : auto(TT)) : string { } } - // ===== SqlQuery — accumulator for chain analysis ===== let private INT_MAX_PHASE : int = int(0x7fffffff) @@ -202,9 +188,9 @@ struct private SqlQuery { argSourceIdx : array dbExpr : ExpressionPtr upsertMode : bool - inView : bool //! `_create_view` body context — rejects `[sql_function(directonly=true)]` UDF calls + inView : bool hadError : bool - lastError : string //! First `pred_fail` message, re-emitted by call_macros if their deeper macro_error gets eaten by AST cascade + lastError : string // Multi-Q lowering: phase-order conflict (e.g. `take(n) |> _where(p)`) snapshots // the inner subtree's SQL+binds here; outer FROM renders as `() AS t0`. 
innerSql : string // null/empty = base-table FROM; else nested SELECT body diff --git a/modules/dasSQLITE/daslib/sqlite_migrate.das b/modules/dasSQLITE/daslib/sqlite_migrate.das index 09d53803b..8ee87fdb5 100644 --- a/modules/dasSQLITE/daslib/sqlite_migrate.das +++ b/modules/dasSQLITE/daslib/sqlite_migrate.das @@ -20,7 +20,6 @@ require daslib/macro_boost require daslib/templates require daslib/templates_boost - // ===== Public types ===== struct PendingMigration { @@ -40,7 +39,6 @@ struct MigrationRecord { applied_at : int64 } - // ===== Private registry types ===== struct private MigrationEntry { @@ -53,12 +51,10 @@ struct private MigrationEntry { module_name : string } - // ===== Audit table constants ===== let private AUDIT_CREATE_DDL = "CREATE TABLE IF NOT EXISTS __schema_version (version INTEGER PRIMARY KEY, description TEXT NOT NULL DEFAULT '', applied_at INTEGER NOT NULL)" - // ===== Process-global registry ===== // Populated by per-module [init] thunks emitted by the [sql_migration] macro. @@ -66,7 +62,6 @@ let private AUDIT_CREATE_DDL = "CREATE TABLE IF NOT EXISTS __schema_version (ver // by additional module loads. 
var private _registry : array - def _add_migration_entry(version : int; description : string; vacuum : bool; analyze : bool; body : function<(db : SqlRunner) : void>; func_name : string; module_name : string) : void { @@ -83,7 +78,6 @@ def _add_migration_entry(version : int; description : string; vacuum : bool; ana _registry |> emplace(entry) } - def private _registry_max_version() : int { var m = 0 for (e in _registry) { @@ -94,14 +88,12 @@ def private _registry_max_version() : int { return m } - def private _registry_sorted() : array { var out := _registry sort(out, $(a, b) => a.version < b.version) return <- out } - def private _registry_find_description(version : int) : string { for (e in _registry) { if (e.version == version) { @@ -111,7 +103,6 @@ def private _registry_find_description(version : int) : string { return "" } - def private _verify_no_duplicate_versions() : void { // Belt-and-suspenders runtime defense for dynamic plugin loading past compile-time visibility. var seen : table @@ -124,33 +115,28 @@ def private _verify_no_duplicate_versions() : void { } } - // ===== Internal helpers — audit table I/O and diagnostics ===== let private ADOPTION_HINT = "this DB has existing tables/views/indexes but __schema_version is empty (no migration history yet). If your first migration creates tables that already exist, it will fail. To adopt migrations on an existing DB, call `db |> baseline(version = N)` before `migrate_to_latest()` to mark v1..vN as already applied. See tut 43 § \"Adopting migrations on an existing DB\"." - def private _scalar_int_or_zero(db : SqlRunner; sql : string) : int { // Treats query failure (e.g. "no such table") as 0 — natural default for COUNT/MAX probes. let r = db |> try_query_scalar(sql, type) return (r |> is_ok) ? (r |> unwrap) : 0 } - def private _has_user_objects(db : SqlRunner) : bool { // sqlite_* names are reserved for internals (sqlite_sequence, sqlite_autoindex_*). 
return _scalar_int_or_zero(db, "SELECT COUNT(*) FROM sqlite_master WHERE name NOT LIKE 'sqlite_%' AND name != '__schema_version'") > 0 } - def private _audit_insert(db : SqlRunner; version : int; description : string) : SqlError { return db |> try_exec( "INSERT INTO __schema_version (version, description, applied_at) VALUES (?, ?, unixepoch())", version, description) } - def private _format_versions_csv(pending : array; applied_count : int) : string { // "v4, v5" — every applied migration (inside the txn) plus the failing one — all rolled back. var parts : array @@ -166,7 +152,6 @@ def private _format_versions_csv(pending : array; applied_count return parts |> join(", ") } - def private _build_enriched_err(failed_version : int; failed_desc : string; underlying : string; rolled_back : string; layer2_hint : bool) : string { var msg = "migration v{failed_version} '{failed_desc}' failed:\n {underlying}\n\nnote: this migrate_to_latest call's transaction was rolled back — {rolled_back} are NOT applied. Fix the migration body and re-run; the corrected migration set will replay on the next call. The DB does not need to be reset." @@ -176,14 +161,12 @@ def private _build_enriched_err(failed_version : int; failed_desc : string; unde return msg } - def private _log_warn_on_some(err : SqlError) : void { if (err |> is_some) { to_log(LOG_WARNING, "{err |> unwrap}\n") } } - // ===== Internal runner — _migrate_inner ===== def private _migrate_inner(db : SqlRunner) : Result { @@ -294,7 +277,6 @@ def private _migrate_inner(db : SqlRunner) : Result { return ok(applied_count, type) } - // ===== Inspection — pure reads, no side effects ===== // Never create the audit table or write a row; absent __schema_version → 0 / all-pending / []. 
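// Usage sketch (hypothetical call site — the names are the public inspection
// surface defined in this section; nothing here writes to the DB):
//
//   let v = db |> current_schema_version()            // 0 when nothing applied yet
//   let n_pending = db |> pending_migrations() |> length
//
// Both are safe to call before migrate_to_latest(); neither creates __schema_version.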
@@ -308,14 +290,12 @@ def private _try_audit_table_exists(db : SqlRunner) : Result { return ok((r |> unwrap) > 0, type) } - def private _audit_table_exists(db : SqlRunner) : bool { // Strict-form callers swallow transient errors (treat as "absent" → return 0/empty). let r <- _try_audit_table_exists(db) return (r |> is_ok) && (r |> unwrap) } - def current_schema_version(db : SqlRunner) : int { //! Highest applied migration version, or 0 if no migrations were applied, the audit table is absent, or a transient SQLite error (e.g. ``SQLITE_BUSY``) blocks the read. //! Use ``try_current_schema_version`` to distinguish "nothing applied" from "couldn't tell". @@ -325,7 +305,6 @@ def current_schema_version(db : SqlRunner) : int { return _scalar_int_or_zero(db, "SELECT COALESCE(MAX(version), 0) FROM __schema_version") } - def try_current_schema_version(db : SqlRunner) : Result { //! ``ok(0)`` when the audit table is absent or empty; ``err`` on transient SQLite errors //! (lock contention etc.) — including failures of the sqlite_master probe. @@ -339,7 +318,6 @@ def try_current_schema_version(db : SqlRunner) : Result { return <- db |> try_query_scalar("SELECT COALESCE(MAX(version), 0) FROM __schema_version", type) } - def try_pending_migrations(db : SqlRunner) : Result, string> { //! Registered migrations whose ``version > current_schema_version(db)``, in ascending order. //! Description sourced from the annotation (current code), not the audit table. @@ -358,7 +336,6 @@ def try_pending_migrations(db : SqlRunner) : Result, str return move_ok(out, type) } - def pending_migrations(db : SqlRunner) : array { //! Strict variant. Panics on transient errors; ``empty`` when no pending migrations. var r <- db |> try_pending_migrations() @@ -368,7 +345,6 @@ def pending_migrations(db : SqlRunner) : array { return <- r._value } - def try_migration_history(db : SqlRunner) : Result, string> { //! Every row in ``__schema_version``, in ascending version order. Description here is the //! 
frozen-at-apply text from the audit row (NOT the current annotation). ``ok([])`` when the @@ -391,7 +367,6 @@ def try_migration_history(db : SqlRunner) : Result, strin }) } - def migration_history(db : SqlRunner) : array { //! Strict variant of ``try_migration_history``. var r <- db |> try_migration_history() @@ -401,7 +376,6 @@ def migration_history(db : SqlRunner) : array { return <- r._value } - // ===== Public runner surfaces ===== def migrate_to_latest(db : SqlRunner) : int { @@ -414,7 +388,6 @@ def migrate_to_latest(db : SqlRunner) : int { return r |> unwrap } - def try_migrate_to_latest(db : SqlRunner) : Result { //! Non-panic variant of ``migrate_to_latest``. Returns ``ok(count)`` on success or //! ``err(enriched_message)`` for both soft failures and body panics. ROLLBACK is @@ -422,7 +395,6 @@ def try_migrate_to_latest(db : SqlRunner) : Result { return <- _migrate_inner(db) } - // ===== Adoption — baseline ===== def private _audit_has_version(db : SqlRunner; version : int) : bool { @@ -430,7 +402,6 @@ def private _audit_has_version(db : SqlRunner; version : int) : bool { return _scalar_int_or_zero(db, "SELECT COUNT(*) FROM __schema_version WHERE version = {version}") > 0 } - def try_baseline(db : SqlRunner; version : int) : Result { //! Stamps v1..``version`` as already-applied without running their bodies. Idempotent //! (versions already in the audit are skipped). Returns ``ok(stamped_count)`` or ``err``. @@ -485,7 +456,6 @@ def try_baseline(db : SqlRunner; version : int) : Result { return ok(stamped, type) } - def baseline(db : SqlRunner; version : int) : void { //! Strict variant of ``try_baseline``. Panics on error. 
let r <- db |> try_baseline(version) @@ -494,7 +464,6 @@ def baseline(db : SqlRunner; version : int) : void { } } - // ===== with_latest_sqlite — convenience wrapper ===== def with_latest_sqlite(path : string; blk : block<(db : SqlRunner) : void>) : void { @@ -508,7 +477,6 @@ def with_latest_sqlite(path : string; blk : block<(db : SqlRunner) : void>) : vo } } - // ===== Compile-time migration registry helpers (used by [sql_migration] dup detection) ===== struct private RegisteredMigration { @@ -517,7 +485,6 @@ struct private RegisteredMigration { mod_name : string } - def private _collect_registered_migrations(var out : array) : void { // Single pass over compiling_program() — collects (version, fn, mod) for every [sql_migration]. let prog = compiling_program() |> get_ptr() @@ -544,7 +511,6 @@ def private _collect_registered_migrations(var out : array) }) } - // ===== [sql_migration] annotation ===== [function_macro(name="sql_migration")] @@ -640,3 +606,331 @@ class SqlMigrationMacro : AstFunctionAnnotation { return true } } + +// ===== struct_convert_field — overloaded conversion dispatch ===== +// User extension: declare ``def struct_convert_field(var dst : MyEnum&; src : int) { ... }`` in your module. + +def struct_convert_field(var dst : auto(SCF_T)&; src : SCF_T) : void { + //! Identity — same source and target type. + dst = src +} + +def struct_convert_field(var dst : Option&; src : auto(SCF_S)) : void { + //! ``S → Option`` auto-wrap; inner S→T dispatches recursively so any S→T overload fires. + var inner : SCF_T + _::struct_convert_field(inner, src) + dst = some(inner) +} + +def struct_convert_field(var dst : auto(SCF_T)&; src : Option) : void { + //! ``Option → T``. NULL collapses to ``default``; inner S→T dispatches recursively. + //! Override in your module for different NULL-handling semantics. + _::struct_convert_field(dst, src ?? default) +} + +def struct_convert_field(var dst : Option&; src : Option) : void { + //! 
``Option → Option`` cross-payload — unwraps src, dispatches S→T inner, re-wraps. + if (src |> is_some) { + var inner : SCF_T + _::struct_convert_field(inner, src |> unwrap) + dst = some(inner) + } else { + dst = none(type) + } +} + +// Target-typed conversion via the type's constructor. ``int(string)``, ``int(double)``, etc. dispatch +// through daslang's existing primitive constructors. Identity above wins for same-source-target. +def struct_convert_field(var dst : int&; src : auto(SCF_S)) : void { + dst = int(src) +} +def struct_convert_field(var dst : int64&; src : auto(SCF_S)) : void { + dst = int64(src) +} +def struct_convert_field(var dst : float&; src : auto(SCF_S)) : void { + dst = float(src) +} +def struct_convert_field(var dst : double&; src : auto(SCF_S)) : void { + dst = double(src) +} +def struct_convert_field(var dst : string&; src : auto(SCF_S)) : void { + dst = "{src}" +} + +// Disambiguators — primitive target from Option source. Beats both auto(T)←Option and primitive←auto(S) +// at the typer's specificity check, so ``Option → int`` (etc.) routes here cleanly with NULL→default. +def struct_convert_field(var dst : int&; src : Option) : void { + dst = int(src ?? default) +} +def struct_convert_field(var dst : int64&; src : Option) : void { + dst = int64(src ?? default) +} +def struct_convert_field(var dst : float&; src : Option) : void { + dst = float(src ?? default) +} +def struct_convert_field(var dst : double&; src : Option) : void { + dst = double(src ?? default) +} +def struct_convert_field(var dst : string&; src : Option) : void { + dst = "{src ?? default}" +} + + +// ===== [struct_convert] annotation ===== + +class private OverrideCollector : AstVisitor { + // Records `dst.X` ONLY when it appears on the LHS of `=`/`<-`/`:=`. + // Read-only mentions don't count (would silently leave dst.X at default). 
+ target_var_name : string + overrides : table + def OverrideCollector(name : string) { + target_var_name = name + } + def record_lhs(lhs : ExpressionPtr) : void { + if (lhs == null || !(lhs is ExprField)) { + return + } + let f = lhs as ExprField + if (f.value == null || !(f.value is ExprVar)) { + return + } + let v = f.value as ExprVar + if (v.name == target_var_name) { + overrides |> insert(string(f.name), true) + } + } + def override preVisitExprCopy(expr : ExprCopy?) : void { + record_lhs(expr.left) + } + def override preVisitExprMove(expr : ExprMove?) : void { + record_lhs(expr.left) + } + def override preVisitExprClone(expr : ExprClone?) : void { + record_lhs(expr.left) + } +} + +def private collect_overrides(var body : ExpressionPtr; target_var_name : string) : table { + var v = new OverrideCollector(target_var_name) + make_visitor(*v) $(adapter) { + visit(body, adapter) + } + var result <- v.overrides + unsafe { + delete v + } + return <- result +} + +def private make_struct_convert_helper(var src_st : StructurePtr; var tgt_st : StructurePtr; + fn_name : string; var auto_exprs : array; + at : LineInfo) : FunctionPtr { + // ``src`` / ``tgt`` literals — ``new`` is a daslang keyword. + var fn = qmacro_function(fn_name) $(src : $t(src_st); var tgt : $t(tgt_st)) : void { + $b(auto_exprs) + } + fn.flags |= FunctionFlags.generated + fn.flags |= FunctionFlags.privateFunction + fn.body |> force_at(at) + return <- fn +} + +def private make_sct_dispatch(var src_st : StructurePtr; var tgt_st : StructurePtr; + user_fn_name : string; at : LineInfo) : FunctionPtr { + var fn = qmacro_function("_sct") $(src : $t(src_st); var tgt : $t(tgt_st)) : void { + $c(user_fn_name)(src, tgt) + } + fn.flags |= FunctionFlags.generated + fn.body |> force_at(at) + return <- fn +} + +[function_macro(name="struct_convert")] +class SqlStructConvertMacro : AstFunctionAnnotation { + //! Marks ``def fn(old : S; var dst : T) : void`` for auto-derived field-by-field translation. + //! 
Same-name-same-type / ``T → Option`` / ``INT↔STRING↔FLOAT`` cast / ``@sql_renamed_from`` are + //! filled in; an explicit ``dst.X = …`` (or ``<-``/``:=``) on the LHS suppresses the auto-fill for X. + def override apply(var func : FunctionPtr; var group : ModuleGroup; + args : AnnotationArgumentList; var errors : das_string) : bool { + if (func.fromGeneric != null) { + errors := "[struct_convert]: not supported on generic functions" + return false + } + // Signature: def fn(old : S; var new : T) : void + let nArgs = func.arguments |> length + if (nArgs != 2) { + errors := "[struct_convert] '{func.name}': must take exactly two arguments — `old : S` and `var new : T`. Got {nArgs}." + return false + } + let arg0_type = func.arguments[0]._type + let arg1_type = func.arguments[1]._type + if (arg0_type.structType == null) { + errors := "[struct_convert] '{func.name}': first argument '{func.arguments[0].name}' must be a struct type. Got '{describe(arg0_type)}'." + return false + } + if (arg1_type.structType == null) { + errors := "[struct_convert] '{func.name}': second argument '{func.arguments[1].name}' must be a struct type. Got '{describe(arg1_type)}'." + return false + } + if (arg1_type.flags.constant) { + errors := "[struct_convert] '{func.name}': second argument '{func.arguments[1].name}' must be a `var` parameter so the body can assign to its fields. Write `var {func.arguments[1].name} : {describe(arg1_type)}`." + return false + } + if (func.result.baseType != Type.tVoid && func.result.baseType != Type.autoinfer) { + errors := "[struct_convert] '{func.name}': must return void. Got '{describe(func.result)}'." + return false + } + if (func.body == null || !(func.body is ExprBlock)) { + errors := "[struct_convert] '{func.name}': body must be a brace-block (the macro prepends auto-fill statements). Use `def {func.name}(...) \{ ... \}`, not the `=> expr` short form." 
+ return false + } + // Strip const off the field-accessed StructurePtrs so `$t($t(...))` reification accepts them. + // The parsing pipeline gives us const Structure? from a TypeDeclPtr field walk; the gc_node + // representation is the same pointer either way — reinterpret is safe. + var src_st : StructurePtr = unsafe(reinterpret(arg0_type.structType)) + var tgt_st : StructurePtr = unsafe(reinterpret(arg1_type.structType)) + let new_param_name = string(func.arguments[1].name) + + // Collect the user-overrides set by walking the existing body. + let overrides <- collect_overrides(func.body, new_param_name) + + // Build auto-derived assignments for fields in T order, skipping user-overrides. + // Nested for-loops because handled-type FieldDeclaration doesn't support indexed-let access — + // for (f in fields) is the supported iteration form. + var auto_exprs : array + auto_exprs |> reserve(length(tgt_st.fields)) + var failed = false + for (t_field in tgt_st.fields) { + if (failed) { + break + } + let tname = string(t_field.name) + if (key_exists(overrides, tname)) { + continue + } + // Honor @sql_renamed_from on T's field — read from old's instead of . + let renamed_arg = find_arg(t_field.annotation, "sql_renamed_from") + let renamed_from = renamed_arg is tString ? string(renamed_arg as tString) : "" + let lookup_name = empty(renamed_from) ? tname : renamed_from + let t_type = t_field._type + let t_is_option = is_option_field_type(t_type) + // Find matching field in S by lookup_name; nested loop because indexed access on handled FieldDeclaration arrays needs unsafe. + var s_found = false + for (s_field in src_st.fields) { + if (s_field.name != lookup_name) { + continue + } + s_found = true + // Emit a dispatch call. ``_::`` resolves at the user's module so they can extend the + // overload set with their own (S_type, T_type) pairs (custom enums, value-types, etc.). 
+ auto_exprs |> push(qmacro_expr(${ + _::struct_convert_field(tgt.$f(tname), src.$f(lookup_name)); + })) + break + } + if (failed) { + break + } + if (!s_found) { + // Field absent from S — use T's default initializer or none() for Option. + if (t_field.init != null) { + let init_expr = t_field.init + auto_exprs |> push(qmacro_expr(${ + tgt.$f(tname) = $e(clone_expression(init_expr)); + })) + } elif (t_is_option) { + let t_payload = unwrap_option_payload_type(unsafe(reinterpret(t_type))) + auto_exprs |> push(qmacro_expr(${ + tgt.$f(tname) = none(type<$t(t_payload)>); + })) + } else { + errors := "[struct_convert] '{func.name}': new field '{tname}' has no default initializer and is not Option. Provide a default like `{tname} : {describe(t_type)} = …` on the struct, or write `{new_param_name}.{tname} = …` in the body." + failed = true + } + } + } + if (failed) { + return false + } + + // Emit the per-(S,T) helper carrying the auto-derived assignments. + let helper_name = "_struct_convert_helper`{func.name}" + var helper_fn = make_struct_convert_helper(src_st, tgt_st, helper_name, auto_exprs, func.at) + compiling_module() |> add_function(helper_fn) + + // Prepend a call to the helper at the top of the user's body. Build a fresh ExprBlock + // that contains [helper_call, ...user_body.list] and replace func.body. ExprVar lookups + // by name resolve to the user's actual params at typecheck time. + let call_args <- array( + new ExprVar(at = func.at, name := func.arguments[0].name), + new ExprVar(at = func.at, name := func.arguments[1].name) + ) + var user_body_stmts : array + let user_body_block = func.body as ExprBlock + for (s in user_body_block.list) { + user_body_stmts |> push(clone_expression(s)) + } + var combined <- qmacro_block() { + $c(helper_name)($a(call_args)); + $b(user_body_stmts); + } + func.body = combined + func.body |> force_at(func.at) + + // Emit `_sct(S, T)` dispatch overload. 
+ var sct_fn = make_sct_dispatch(src_st, tgt_st, string(func.name), func.at) + compiling_module() |> add_function(sct_fn) + + return true + } +} + +// ===================================================================== +// `convert` and `convert_and_rename` — db-driving primitives +// ===================================================================== + +[unused_argument(t1, t2), template(t1, t2)] +def convert(db : SqlRunner; t1 : type; t2 : type) : void { + //! Streams every row of S through the [struct_convert] converter into T's own table. + //! For the same-name legacy/current convention, use ``convert_and_rename`` instead. + for (old in db |> select_from(type)) { + var new_row : TT = default + _::_sct(old, new_row) + db |> insert(new_row) + } +} + +[unused_argument(t1, t2), template(t1, t2)] +def convert(db : SqlRunner; t1 : type; t2 : type; name : string) : void { + //! Like ``convert(type, type)`` but inserts rows into table identifier ``name`` instead + //! of T's own ``[sql_table(name=…)]``. Used during ``convert_and_rename`` to land rows in the + //! staging table before the swap. + for (old in db |> select_from(type)) { + var new_row : TT = default + _::_sct(old, new_row) + db |> insert(new_row, name) + } +} + +[unused_argument(t1, t2), template(t1, t2)] +def convert_and_rename(db : SqlRunner; t1 : type; t2 : type) : void { + //! Full schema rebuild: CREATE staging, copy via [struct_convert], DROP original, RENAME. + //! S and T must share the same ``[sql_table(name=…)]`` (legacy/current convention). + let source = _::_sql_table_name(default) + let target = _::_sql_table_name(default) + if (source != target) { + panic("convert_and_rename: source '{source}' and target '{target}' must share the same [sql_table(name=...)]. 
Use `db |> convert(type, type, name=...)` for cross-table conversions.") + } + let staging = "{target}_new" + // Staging table is created WITHOUT indexes — explicit `[sql_index(name=...)]` would + // collide with the original's still-existing index of the same name (SQLite index + // names are schema-global). Indexes are recreated on the renamed target below. + db |> exec(_::_sql_create_table_sql(default, staging)) + db |> convert(type, type, staging) + db |> exec("DROP TABLE {sql_quote_id(target)}") + db |> exec("ALTER TABLE {sql_quote_id(staging)} RENAME TO {sql_quote_id(target)}") + let idx_sql = _::_sql_create_indexes_sql(default) + if (!empty(idx_sql)) { + db |> exec(idx_sql) + } +} diff --git a/tests/dasSQLITE/failed_sql_index_on_legacy.das b/tests/dasSQLITE/failed_sql_index_on_legacy.das new file mode 100644 index 000000000..527fcf288 --- /dev/null +++ b/tests/dasSQLITE/failed_sql_index_on_legacy.das @@ -0,0 +1,15 @@ +options gen2 + +// `[sql_index]` on a `legacy=true` struct is rejected: legacy structs are read-only +// and don't define schema, so the index would have no CREATE TABLE to attach to and +// would collide with the current struct's index of the same name on rebuild. +expect 30111:1 + +require sqlite/sqlite_migrate + +[sql_table(name = "users", legacy = true), + sql_index(name = "ux_users_name", fields = "Name", unique)] +struct UserV1 { + @sql_primary_key Id : int + Name : string +} diff --git a/tests/dasSQLITE/failed_struct_convert_new_field_no_default.das b/tests/dasSQLITE/failed_struct_convert_new_field_no_default.das new file mode 100644 index 000000000..6d155eba7 --- /dev/null +++ b/tests/dasSQLITE/failed_struct_convert_new_field_no_default.das @@ -0,0 +1,25 @@ +options gen2 + +// `[struct_convert]` rejects new T fields that are not Option and have no default initializer +// when the user provides no override — there's no auto-derivation rule that can populate them. 
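// Any one of these would satisfy the macro (sketches — this file intentionally
// keeps none of them so the annotation fails as expected):
//
//   Email : string = ""            // give the new field a default initializer, or
//   Email : Option                 // make it Option (auto-fills to none()), or
//   dst.Email = "n/a"              // assign it explicitly in the converter body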
+expect 30111:1 + +require sqlite/sqlite_migrate + +[sql_table(name = "users", legacy = true)] +struct UserV1 { + @sql_primary_key Id : int + Name : string +} + +[sql_table(name = "users")] +struct User { + @sql_primary_key Id : int + Name : string + Email : string // new field, not Option, no default — needs an override +} + +[struct_convert] +def my_v1_to_v2(old : UserV1; var dst : User) { + pass +} diff --git a/tests/dasSQLITE/migrate_80_struct_convert_trivial.das b/tests/dasSQLITE/migrate_80_struct_convert_trivial.das new file mode 100644 index 000000000..1afbf8fc5 --- /dev/null +++ b/tests/dasSQLITE/migrate_80_struct_convert_trivial.das @@ -0,0 +1,37 @@ +options gen2 + +require dastest/testing_boost public +require sqlite/sqlite_migrate + +[sql_table(name = "users", legacy = true)] +struct UserV1 { + @sql_primary_key Id : int + Name : string + Score : int +} + +[sql_table(name = "users")] +struct User { + @sql_primary_key Id : int + Name : string + Score : int +} + +[struct_convert] +def my_v1_to_v2(old : UserV1; var dst : User) { + pass +} + +[test] +def test_trivial_round_trip(t : T?) 
{ + var inscope db = open_sqlite(":memory:") + db |> exec("CREATE TABLE users (Id INTEGER PRIMARY KEY, Name TEXT NOT NULL, Score INTEGER NOT NULL DEFAULT 0)") + db |> exec("INSERT INTO users (Id, Name, Score) VALUES (1, 'alice', 42), (2, 'bob', 7)") + db |> convert_and_rename(type, type) + let n = db |> query_scalar("SELECT COUNT(*) FROM users", type) + t |> equal(n, 2, "row count preserved") + let alice_score = db |> query_scalar("SELECT Score FROM users WHERE Name='alice'", type) + t |> equal(alice_score, 42, "alice's Score copied verbatim") + let bob_score = db |> query_scalar("SELECT Score FROM users WHERE Name='bob'", type) + t |> equal(bob_score, 7, "bob's Score copied verbatim") +} diff --git a/tests/dasSQLITE/migrate_81_struct_convert_option_flip.das b/tests/dasSQLITE/migrate_81_struct_convert_option_flip.das new file mode 100644 index 000000000..ce30000ca --- /dev/null +++ b/tests/dasSQLITE/migrate_81_struct_convert_option_flip.das @@ -0,0 +1,42 @@ +options gen2 + +require dastest/testing_boost public +require sqlite/sqlite_migrate +require daslib/option public + +// T → Option: source has bare Score, target has Option. Auto-rule wraps with some(). + +[sql_table(name = "users", legacy = true)] +struct UserV1 { + @sql_primary_key Id : int + Name : string + Score : int +} + +[sql_table(name = "users")] +struct User { + @sql_primary_key Id : int + Name : string + Score : Option +} + +[struct_convert] +def my_v1_to_v2(old : UserV1; var dst : User) { + pass +} + +[test] +def test_t_to_option_wraps_with_some(t : T?) 
{ + var inscope db = open_sqlite(":memory:") + db |> exec("CREATE TABLE users (Id INTEGER PRIMARY KEY, Name TEXT NOT NULL, Score INTEGER NOT NULL DEFAULT 0)") + db |> exec("INSERT INTO users (Id, Name, Score) VALUES (1, 'alice', 42)") + db |> convert_and_rename(type, type) + var rows : array + rows |> reserve(2) + for (row in db |> select_from(type)) { + rows |> emplace(row) + } + t |> equal(length(rows), 1, "row preserved") + t |> success(rows[0].Score |> is_some, "Option wraps to some(...)") + t |> equal(rows[0].Score |> unwrap, 42, "wrapped value preserved") +} diff --git a/tests/dasSQLITE/migrate_82_struct_convert_primitive_cast.das b/tests/dasSQLITE/migrate_82_struct_convert_primitive_cast.das new file mode 100644 index 000000000..d6fea5c52 --- /dev/null +++ b/tests/dasSQLITE/migrate_82_struct_convert_primitive_cast.das @@ -0,0 +1,52 @@ +options gen2 + +require dastest/testing_boost public +require sqlite/sqlite_migrate + +// [struct_convert] dispatches to overloaded ``struct_convert_field``. This test exercises the +// numeric/string conversion overloads — int↔int64, float↔double, string→numeric, numeric→string. + +[sql_table(name = "items", legacy = true)] +struct ItemV1 { + @sql_primary_key Id : int + Count : int // → int64 (widening) + Score : int64 // → int (narrowing — silent, user opted in) + Ratio : float // → double (widening) + Avg : double // → float (narrowing) + LabelInt : int // → string (numeric → string) + PriceStr : string // → double (string → double parse) +} + +[sql_table(name = "items")] +struct Item { + @sql_primary_key Id : int + Count : int64 + Score : int + Ratio : double + Avg : float + LabelInt : string + PriceStr : double +} + +[struct_convert] +def convert_item_v1_to_v2(old : ItemV1; var dst : Item) {} + +[test] +def test_primitive_casts_through_dispatch(t : T?) 
{ + var inscope db = open_sqlite(":memory:") + db |> exec("CREATE TABLE items (Id INTEGER PRIMARY KEY, Count INTEGER NOT NULL, Score INTEGER NOT NULL, Ratio REAL NOT NULL, Avg REAL NOT NULL, LabelInt INTEGER NOT NULL, PriceStr TEXT NOT NULL)") + db |> exec("INSERT INTO items (Id, Count, Score, Ratio, Avg, LabelInt, PriceStr) VALUES (1, 42, 9999, 1.5, 2.75, 7, '3.14')") + db |> convert_and_rename(type, type) + let count_int64 = db |> query_scalar("SELECT Count FROM items WHERE Id=1", type) + t |> equal(count_int64, 42l, "int → int64 widening preserved") + let score_int = db |> query_scalar("SELECT Score FROM items WHERE Id=1", type) + t |> equal(score_int, 9999, "int64 → int narrowing preserved (within range)") + let ratio_double = db |> query_scalar("SELECT Ratio FROM items WHERE Id=1", type) + t |> equal(ratio_double, 1.5lf, "float → double widening preserved") + let avg_float = db |> query_scalar("SELECT Avg FROM items WHERE Id=1", type) + t |> equal(avg_float, 2.75f, "double → float narrowing preserved") + let label_str = db |> query_scalar("SELECT LabelInt FROM items WHERE Id=1", type) + t |> equal(label_str, "7", "int → string interp") + let price_double = db |> query_scalar("SELECT PriceStr FROM items WHERE Id=1", type) + t |> equal(price_double, 3.14lf, "string → double parse") +} diff --git a/tests/dasSQLITE/migrate_83_struct_convert_option_unwrap.das b/tests/dasSQLITE/migrate_83_struct_convert_option_unwrap.das new file mode 100644 index 000000000..c5953071b --- /dev/null +++ b/tests/dasSQLITE/migrate_83_struct_convert_option_unwrap.das @@ -0,0 +1,42 @@ +options gen2 + +require dastest/testing_boost public +require sqlite/sqlite_migrate + +// `Option → T` dispatches through the shipped ``struct_convert_field`` overload — +// NULL collapses to ``default`` (0 / "" / 0.0). Override in your module for different +// NULL-handling semantics. 
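// A hypothetical override (NOT declared in this test) that keeps NULL
// distinguishable via a sentinel instead of ``default`` — declare it in your
// module and the macro's ``_::`` dispatch picks it up:
//
//   def struct_convert_field(var dst : int&; src : Option) : void {
//       dst = src ?? -1
//   }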
+ +[sql_table(name = "users", legacy = true)] +struct UserV1 { + @sql_primary_key Id : int + Name : Option + Score : Option +} + +[sql_table(name = "users")] +struct User { + @sql_primary_key Id : int + Name : string + Score : int +} + +[struct_convert] +def m_v1_to_v2(old : UserV1; var dst : User) {} + +[test] +def test_option_unwrap_default_on_null(t : T?) { + var inscope db = open_sqlite(":memory:") + db |> exec("CREATE TABLE users (Id INTEGER PRIMARY KEY, Name TEXT, Score INTEGER)") + db |> exec("INSERT INTO users (Id, Name, Score) VALUES (1, 'alice', 42)") + db |> exec("INSERT INTO users (Id, Name, Score) VALUES (2, NULL, NULL)") + db |> convert_and_rename(type, type) + let alice_name = db |> query_scalar("SELECT Name FROM users WHERE Id=1", type) + let alice_score = db |> query_scalar("SELECT Score FROM users WHERE Id=1", type) + t |> equal(alice_name, "alice", "Some(string) → string preserved") + t |> equal(alice_score, 42, "Some(int) → int preserved") + let bob_name = db |> query_scalar("SELECT Name FROM users WHERE Id=2", type) + let bob_score = db |> query_scalar("SELECT Score FROM users WHERE Id=2", type) + t |> equal(bob_name, "", "NULL string → default=\"\"") + t |> equal(bob_score, 0, "NULL int → default=0") +} diff --git a/tests/dasSQLITE/migrate_84_struct_convert_option_cross.das b/tests/dasSQLITE/migrate_84_struct_convert_option_cross.das new file mode 100644 index 000000000..501fd2d6e --- /dev/null +++ b/tests/dasSQLITE/migrate_84_struct_convert_option_cross.das @@ -0,0 +1,38 @@ +options gen2 + +require dastest/testing_boost public +require sqlite/sqlite_migrate + +// ``Option → Option`` cross-payload — the macro emits ``struct_convert_field(dst, src)`` +// and the cross-payload Option overload unwraps src, dispatches S→T inner, re-wraps. 
+
+[sql_table(name = "users", legacy = true)]
+struct UserV1 {
+    @sql_primary_key Id : int
+    Score : Option<int>
+}
+
+[sql_table(name = "users")]
+struct User {
+    @sql_primary_key Id : int
+    Score : Option<string>
+}
+
+[struct_convert]
+def m_v1_to_v2(old : UserV1; var dst : User) {}
+
+[test]
+def test_option_cross_payload(t : T?) {
+    var inscope db = open_sqlite(":memory:")
+    db |> exec("CREATE TABLE users (Id INTEGER PRIMARY KEY, Score INTEGER)")
+    db |> exec("INSERT INTO users (Id, Score) VALUES (1, 42), (2, NULL)")
+    db |> convert_and_rename(type<UserV1>, type<User>)
+    let alice = db |> query_scalar("SELECT Score FROM users WHERE Id=1", type<string>)
+    t |> equal(alice, "42", "Option(some) → Option(some) via int→string inner")
+    // SQLite returns "" for NULL when target is non-nullable string column; with Option<string>
+    // target the rebuild stores actual NULL, but query_scalar collapses to default.
+    let n_null = db |> query_scalar(
+        "SELECT COUNT(*) FROM users WHERE Id=2 AND Score IS NULL",
+        type<int>)
+    t |> equal(n_null, 1, "Option(none) → Option(none) preserved as NULL")
+}
diff --git a/tests/dasSQLITE/migrate_85_struct_convert_user_overload.das b/tests/dasSQLITE/migrate_85_struct_convert_user_overload.das
new file mode 100644
index 000000000..3e6845577
--- /dev/null
+++ b/tests/dasSQLITE/migrate_85_struct_convert_user_overload.das
@@ -0,0 +1,40 @@
+options gen2
+
+require dastest/testing_boost public
+require sqlite/sqlite_migrate
+
+// User-side overload extension: declare ``struct_convert_field`` for ``int → string`` in
+// your module and the [struct_convert] macro picks it up automatically — your overload
+// shadows the shipped ``string ← auto(S)`` catch-all because it's more specific.
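daslang resolves this through its own overload sets; purely as an analogy for the "more specific user handler beats the shipped catch-all" shape, Python's `functools.singledispatch` behaves the same way:

```python
from functools import singledispatch

# Generic catch-all: format anything as a string (analogue of the
# shipped string <- auto(S) interp overload).
@singledispatch
def convert_field(src):
    return f"{src}"

# More specific user-registered handler for int; it wins for int inputs.
@convert_field.register
def _(src: int):
    return f"score={src}"

print(convert_field(42))   # score=42  (specific handler)
print(convert_field(1.5))  # 1.5       (catch-all)
```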
+
+[sql_table(name = "items", legacy = true)]
+struct ItemV1 {
+    @sql_primary_key Id : int
+    Score : int
+}
+
+[sql_table(name = "items")]
+struct Item {
+    @sql_primary_key Id : int
+    Score : string
+}
+
+// User-side dispatch overload — formats the int with a domain-specific prefix instead of
+// the default ``"{src}"`` interp. The macro's ``_::`` resolves at user's module, so this
+// overload wins for the int→string field of [struct_convert].
+def struct_convert_field(var dst : string&; src : int) : void {
+    dst = "score={src}"
+}
+
+[struct_convert]
+def m_v1_to_v2(old : ItemV1; var dst : Item) {}
+
+[test]
+def test_user_overload_extends_dispatch(t : T?) {
+    var inscope db = open_sqlite(":memory:")
+    db |> exec("CREATE TABLE items (Id INTEGER PRIMARY KEY, Score INTEGER NOT NULL)")
+    db |> exec("INSERT INTO items (Id, Score) VALUES (1, 42)")
+    db |> convert_and_rename(type<ItemV1>, type<Item>)
+    let landed = db |> query_scalar("SELECT Score FROM items WHERE Id=1", type<string>)
+    t |> equal(landed, "score=42", "user-defined overload wins over shipped string<-auto(S) catch-all")
+}
diff --git a/tests/dasSQLITE/migrate_86_struct_convert_readonly_mention_preserves.das b/tests/dasSQLITE/migrate_86_struct_convert_readonly_mention_preserves.das
new file mode 100644
index 000000000..efd4acb66
--- /dev/null
+++ b/tests/dasSQLITE/migrate_86_struct_convert_readonly_mention_preserves.das
@@ -0,0 +1,45 @@
+options gen2
+
+require dastest/testing_boost public
+require sqlite/sqlite_migrate
+
+// Read-only mentions of `dst.X` (e.g. `if (dst.X == "")`) must NOT suppress the
+// auto-derived `dst.X = old.X`. A previous broader visitor catching ANY ExprField
+// under `dst` would silently leave dst.X at default — copying produced empty rows
+// instead of the source data.
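The read-vs-write distinction the visitor relies on is the same one any AST gives you. A sketch with Python's `ast` module, where assignment targets carry a `Store` context and plain reads a `Load` context (the snippet being analyzed is hypothetical):

```python
import ast

# Only an attribute of `dst` in Store (assignment-target) context counts
# as an override; read-only mentions (Load context) do not.
src = """
if dst.Name == "":
    pass
peek = dst.Email
dst.Score = old.Score or 0
"""

overridden = set()
for node in ast.walk(ast.parse(src)):
    if (isinstance(node, ast.Attribute)
            and isinstance(node.ctx, ast.Store)
            and isinstance(node.value, ast.Name)
            and node.value.id == "dst"):
        overridden.add(node.attr)

print(overridden)  # {'Score'} -- Name and Email stay auto-filled
```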
+
+[sql_table(name = "users", legacy = true)]
+struct OldUser {
+    @sql_primary_key Id : int
+    Name : string
+    Email : string
+}
+
+[sql_table(name = "users")]
+struct NewUser {
+    @sql_primary_key Id : int
+    Name : string
+    Email : string
+}
+
+[struct_convert]
+def convert_readonly_mention(old : OldUser; var dst : NewUser) {
+    // Read-only mention of dst.Name — auto-fill MUST still emit `dst.Name = old.Name`.
+    if (dst.Name == "") {
+        // Path is unreachable in practice; the conditional exists to anchor the read mention.
+    }
+    // Read-only mention of dst.Email used as RHS — same: auto-fill MUST still copy.
+    let _peek = dst.Email
+}
+
+[test]
+def test_readonly_mentions_preserve_auto_fill(t : T?) {
+    var inscope db = open_sqlite(":memory:")
+    db |> exec("CREATE TABLE users (Id INTEGER PRIMARY KEY, Name TEXT NOT NULL, Email TEXT NOT NULL)")
+    db |> exec("INSERT INTO users (Id, Name, Email) VALUES (1, 'alice', 'alice@x')")
+    db |> convert_and_rename(type<OldUser>, type<NewUser>)
+    let landed_name = db |> query_scalar("SELECT Name FROM users WHERE Id=1", type<string>)
+    let landed_email = db |> query_scalar("SELECT Email FROM users WHERE Id=1", type<string>)
+    t |> equal(landed_name, "alice", "Name preserved (read-only mention did NOT suppress auto-copy)")
+    t |> equal(landed_email, "alice@x", "Email preserved (read-only RHS mention did NOT suppress auto-copy)")
+}
diff --git a/tests/dasSQLITE/migrate_87_struct_convert_override_via_mention.das b/tests/dasSQLITE/migrate_87_struct_convert_override_via_mention.das
new file mode 100644
index 000000000..02b0261e0
--- /dev/null
+++ b/tests/dasSQLITE/migrate_87_struct_convert_override_via_mention.das
@@ -0,0 +1,42 @@
+options gen2
+
+require dastest/testing_boost public
+require sqlite/sqlite_migrate
+
+// Override-detection records `dst.X` ONLY when it appears on the LHS of an
+// assignment (`=`, `<-`, `:=`). The visitor sees `dst.Score = …` and suppresses
+// the auto-derive that would otherwise macro_error on `Option<int> → int`
+// (existing rows may be NULL).
+
+[sql_table(name = "users", legacy = true)]
+struct UserV1 {
+    @sql_primary_key Id : int
+    Name : string
+    Score : Option<int>
+}
+
+[sql_table(name = "users")]
+struct User {
+    @sql_primary_key Id : int
+    Name : string
+    Score : int
+}
+
+[struct_convert]
+def my_v1_to_v2(old : UserV1; var dst : User) {
+    // LHS write recorded; auto-derive suppressed; user expression resolves the Option.
+    dst.Score = old.Score ?? 0
+}
+
+[test]
+def test_lhs_write_suppresses_auto_derive(t : T?) {
+    // The compile passing IS the test — without LHS-detection [struct_convert]
+    // would macro_error on Option<T> → T. With it, the user's body resolves the
+    // Option and the macro stays out of the way.
+    var inscope db = open_sqlite(":memory:")
+    db |> exec("CREATE TABLE users (Id INTEGER PRIMARY KEY, Name TEXT NOT NULL, Score INTEGER)")
+    db |> exec("INSERT INTO users (Id, Name, Score) VALUES (1, 'alice', 42)")
+    db |> convert_and_rename(type<UserV1>, type<User>)
+    let alice_score = db |> query_scalar("SELECT Score FROM users WHERE Name='alice'", type<int>)
+    t |> equal(alice_score, 42, "alice's some(42) unwrapped via the body's resolution")
+}
diff --git a/tests/dasSQLITE/migrate_88_create_table_name_override.das b/tests/dasSQLITE/migrate_88_create_table_name_override.das
new file mode 100644
index 000000000..faf2b8695
--- /dev/null
+++ b/tests/dasSQLITE/migrate_88_create_table_name_override.das
@@ -0,0 +1,45 @@
+options gen2
+
+require dastest/testing_boost public
+require sqlite/sqlite_migrate
+
+[sql_table(name = "users"), sql_index(fields = "Name")]
+struct User {
+    @sql_primary_key Id : int
+    Name : string
+}
+
+[test]
+def test_create_table_emits_with_alternate_name(t : T?) {
+    var inscope db = open_sqlite(":memory:")
+    db |> create_table(type<User>, "users_new")
+    let count = db |> query_scalar(
+        "SELECT COUNT(*) FROM sqlite_master WHERE type='table' AND name='users_new'", type<int>)
+    t |> equal(count, 1, "users_new table created via name override")
+    let original_count = db |> query_scalar(
+        "SELECT COUNT(*) FROM sqlite_master WHERE type='table' AND name='users'", type<int>)
+    t |> equal(original_count, 0, "original 'users' table not auto-created")
+}
+
+[test]
+def test_create_table_emits_full_schema(t : T?) {
+    var inscope db = open_sqlite(":memory:")
+    db |> create_table(type<User>, "users_new")
+    let pk = db |> query_scalar(
+        "SELECT pk FROM pragma_table_info('users_new') WHERE name='Id'", type<int>)
+    t |> equal(pk, 1, "Id column is PRIMARY KEY in users_new")
+    let notnull = db |> query_scalar(
+        "SELECT \"notnull\" FROM pragma_table_info('users_new') WHERE name='Name'", type<int>)
+    t |> equal(notnull, 1, "Name column is NOT NULL")
+}
+
+[test]
+def test_create_table_emits_indexes_with_runtime_name(t : T?) {
+    // [sql_index] auto-name uses the runtime table name, so indexes for the staging table
+    // are named `idx_users_new_<field>` — no collision with original table's `idx_users_<field>`.
+    var inscope db = open_sqlite(":memory:")
+    db |> create_table(type<User>, "users_new")
+    let idx = db |> query_scalar(
+        "SELECT COUNT(*) FROM sqlite_master WHERE type='index' AND tbl_name='users_new' AND name='idx_users_new_Name'", type<int>)
+    t |> equal(idx, 1, "auto-named index targets the staging table")
+}
diff --git a/tests/dasSQLITE/migrate_89_insert_name_override.das b/tests/dasSQLITE/migrate_89_insert_name_override.das
new file mode 100644
index 000000000..c8350f928
--- /dev/null
+++ b/tests/dasSQLITE/migrate_89_insert_name_override.das
@@ -0,0 +1,25 @@
+options gen2
+
+require dastest/testing_boost public
+require sqlite/sqlite_migrate
+
+[sql_table(name = "users")]
+struct User {
+    @sql_primary_key Id : int
+    Name : string
+    Email : string
+}
+
+[test]
+def test_insert_lands_in_named_table(t : T?) {
+    var inscope db = open_sqlite(":memory:")
+    db |> create_table(type<User>, "users_new")
+    let u = User(Name = "alice", Email = "alice@x")
+    let id = db |> insert(u, "users_new")
+    t |> success(id > 0l, "insert returned a row id")
+    let n = db |> query_scalar("SELECT COUNT(*) FROM users_new", type<int>)
+    t |> equal(n, 1, "row landed in users_new, not users")
+    let landed_email = db |> query_scalar(
+        "SELECT Email FROM users_new WHERE Name='alice'", type<string>)
+    t |> equal(landed_email, "alice@x", "row content preserved")
+}
diff --git a/tests/dasSQLITE/migrate_90_sql_table_legacy_marker.das b/tests/dasSQLITE/migrate_90_sql_table_legacy_marker.das
new file mode 100644
index 000000000..0009723c7
--- /dev/null
+++ b/tests/dasSQLITE/migrate_90_sql_table_legacy_marker.das
@@ -0,0 +1,44 @@
+options gen2
+
+require dastest/testing_boost public
+require sqlite/sqlite_migrate
+
+[sql_table(name = "users", legacy = true)]
+struct UserV1 {
+    @sql_primary_key Id : int
+    Name : string
+    LegacyEmail : string
+}
+
+[sql_table(name = "users")]
+struct User {
+    @sql_primary_key Id : int
+    Name : string
+    Email : string
+}
+
+[test]
+def test_legacy_struct_select_from_works(t : T?) {
+    var inscope db = open_sqlite(":memory:")
+    db |> exec("CREATE TABLE users (Id INTEGER PRIMARY KEY, Name TEXT NOT NULL, LegacyEmail TEXT NOT NULL DEFAULT '')")
+    db |> exec("INSERT INTO users (Id, Name, LegacyEmail) VALUES (1, 'alice', 'alice@x')")
+    var rows : array<UserV1>
+    rows |> reserve(2)
+    for (row in db |> select_from(type<UserV1>)) {
+        rows |> emplace(row)
+    }
+    t |> equal(length(rows), 1, "select_from on legacy struct returns rows")
+    t |> equal(rows[0].LegacyEmail, "alice@x", "legacy field readable")
+}
+
+[test]
+def test_legacy_struct_drop_works(t : T?) {
+    // DROP TABLE IF EXISTS is read-side-enough — devs may legitimately drop a legacy table
+    // during migration cleanup. Tests that the helper is emitted for legacy structs.
+    var inscope db = open_sqlite(":memory:")
+    db |> exec("CREATE TABLE users (Id INTEGER PRIMARY KEY, Name TEXT NOT NULL, LegacyEmail TEXT NOT NULL DEFAULT '')")
+    db |> drop_table_if_exists(type<UserV1>)
+    let n = db |> query_scalar(
+        "SELECT COUNT(*) FROM sqlite_master WHERE type='table' AND name='users'", type<int>)
+    t |> equal(n, 0, "drop_table_if_exists works on legacy struct")
+}
diff --git a/tests/dasSQLITE/migrate_91_full_rebuild_end_to_end.das b/tests/dasSQLITE/migrate_91_full_rebuild_end_to_end.das
new file mode 100644
index 000000000..e195cbc73
--- /dev/null
+++ b/tests/dasSQLITE/migrate_91_full_rebuild_end_to_end.das
@@ -0,0 +1,68 @@
+options gen2
+
+require dastest/testing_boost public
+require sqlite/sqlite_migrate
+
+// The flagship rebuild example: legacy + current pair, [struct_convert] with one override line,
+// migration body collapses to a single `convert_and_rename` call. Verifies that the entire
+// rebuild dance (CREATE staging + copy + DROP + RENAME) preserves rows with field translation.
+
+[sql_table(name = "users", legacy = true)]
+struct UserV6 {
+    @sql_primary_key Id : int
+    Name : string
+    LegacyEmail : string
+}
+
+[sql_table(name = "users")]
+struct User {
+    @sql_primary_key Id : int
+    Name : string
+    Email : string
+}
+
+[struct_convert]
+def my_v6_to_v7(old : UserV6; var dst : User) {
+    dst.Email = (old.LegacyEmail != "") ? old.LegacyEmail : "unknown@example.com"
+}
+
+[sql_migration(version = 1, description = "create v6 schema with fixture rows")]
+def m_001(db : SqlRunner) {
+    db |> exec("CREATE TABLE users (Id INTEGER PRIMARY KEY, Name TEXT NOT NULL, LegacyEmail TEXT NOT NULL DEFAULT '')")
+    db |> exec("INSERT INTO users (Id, Name, LegacyEmail) VALUES (1, 'alice', 'alice@x'), (2, 'bob', '')")
+}
+
+[sql_migration(version = 2, description = "rebuild users with new shape")]
+def m_002(db : SqlRunner) {
+    db |> convert_and_rename(type<UserV6>, type<User>)
+}
+
+[test]
+def test_full_rebuild_preserves_rows(t : T?) {
+    var inscope db = open_sqlite(":memory:")
+    db |> migrate_to_latest()
+    let n = db |> query_scalar("SELECT COUNT(*) FROM users", type<int>)
+    t |> equal(n, 2, "row count preserved across rebuild")
+}
+
+[test]
+def test_full_rebuild_translates_fields(t : T?) {
+    var inscope db = open_sqlite(":memory:")
+    db |> migrate_to_latest()
+    let alice_email = db |> query_scalar(
+        "SELECT Email FROM users WHERE Name='alice'", type<string>)
+    t |> equal(alice_email, "alice@x", "alice's LegacyEmail mapped to Email")
+    let bob_email = db |> query_scalar(
+        "SELECT Email FROM users WHERE Name='bob'", type<string>)
+    t |> equal(bob_email, "unknown@example.com", "bob's empty LegacyEmail picked the fallback")
+}
+
+[test]
+def test_rebuild_inside_migration_txn(t : T?) {
+    // The whole migrate_to_latest call runs inside a single BEGIN IMMEDIATE / COMMIT.
+    // Test by introspecting the audit table.
+    var inscope db = open_sqlite(":memory:")
+    db |> migrate_to_latest()
+    let v = db |> current_schema_version()
+    t |> equal(v, 2, "audit recorded both migrations")
+}
diff --git a/tests/dasSQLITE/migrate_92_hostile_name_safe.das b/tests/dasSQLITE/migrate_92_hostile_name_safe.das
new file mode 100644
index 000000000..8c5e2b903
--- /dev/null
+++ b/tests/dasSQLITE/migrate_92_hostile_name_safe.das
@@ -0,0 +1,34 @@
+options gen2
+
+require dastest/testing_boost public
+require sqlite/sqlite_migrate
+
+// Identifier injection guard: every runtime ``name : string`` overload
+// (create_table / insert / select_from / drop / update / delete + the
+// migrate path) must route through sql_quote_id so embedded `"` is
+// doubled. Without escaping, a `"` in name would close the identifier
+// and execute attacker-controlled SQL through sqlite3_exec.
+
+[sql_table(name = "users")]
+struct User {
+    @sql_primary_key Id : int
+    Name : string
+}
+
+[test]
+def test_hostile_name_does_not_inject(t : T?) {
+    var inscope db = open_sqlite(":memory:")
+    let hostile = "evil\" (Injected INTEGER); CREATE TABLE hacked(z TEXT); --"
+    db |> create_table(type<User>, hostile)
+    let landed = User(Name = "alice")
+    let id = db |> insert(landed, hostile)
+    t |> success(id > 0l, "insert into hostile-named table succeeded")
+    let n_landed = db |> query_scalar(
+        "SELECT COUNT(*) FROM sqlite_master WHERE type='table' AND name={sql_quote_lit(hostile)}",
+        type<int>)
+    t |> equal(n_landed, 1, "hostile name landed verbatim as a table identifier")
+    let n_hacked = db |> query_scalar(
+        "SELECT COUNT(*) FROM sqlite_master WHERE type='table' AND name='hacked'",
+        type<int>)
+    t |> equal(n_hacked, 0, "no injected `hacked` table was created")
+}
diff --git a/tests/dasSQLITE/migrate_93_rebuild_attached_schema.das b/tests/dasSQLITE/migrate_93_rebuild_attached_schema.das
new file mode 100644
index 000000000..52e73a609
--- /dev/null
+++ b/tests/dasSQLITE/migrate_93_rebuild_attached_schema.das
@@ -0,0 +1,49 @@
+options gen2
+
+require dastest/testing_boost public
+require sqlite/sqlite_migrate
+
+// _qualify_schema must rewrite ALTER TABLE so that convert_and_rename
+// rebuilds run inside an attached schema, not silently against `main`.
+
+[sql_table(name = "users", legacy = true)]
+struct UserV1 {
+    @sql_primary_key Id : int
+    Name : string
+    LegacyEmail : string
+}
+
+[sql_table(name = "users")]
+struct User {
+    @sql_primary_key Id : int
+    Name : string
+    Email : string
+}
+
+[struct_convert]
+def convert_v1_to_v2(old : UserV1; var dst : User) {
+    dst.Email = (old.LegacyEmail != "") ? old.LegacyEmail : "unknown@example.com"
+}
+
+[test]
+def test_rebuild_runs_in_attached_schema(t : T?) {
+    var inscope main = open_sqlite(":memory:")
+    main |> with_attached("file:rebuild_attached_test?mode=memory&cache=shared", "att") $(att) {
+        att |> exec("DROP TABLE IF EXISTS \"users\"")
+        att |> exec("DROP TABLE IF EXISTS \"users_new\"")
+        att |> exec("CREATE TABLE \"users\" (Id INTEGER PRIMARY KEY, Name TEXT NOT NULL, LegacyEmail TEXT NOT NULL DEFAULT '')")
+        att |> exec("INSERT INTO \"users\" (Id, Name, LegacyEmail) VALUES (1, 'alice', 'alice@x')")
+        att |> convert_and_rename(type<UserV1>, type<User>)
+        let n_in_att = main |> query_scalar(
+            "SELECT COUNT(*) FROM \"att\".sqlite_master WHERE type='table' AND name='users'",
+            type<int>)
+        t |> equal(n_in_att, 1, "rebuilt `users` lives in attached schema")
+        let main_users = main |> query_scalar(
+            "SELECT COUNT(*) FROM main.sqlite_master WHERE type='table' AND name='users'",
+            type<int>)
+        t |> equal(main_users, 0, "no rebuild side-effect in main")
+        let landed = att |> query_scalar(
+            "SELECT Email FROM users WHERE Id=1", type<string>)
+        t |> equal(landed, "alice@x", "row content preserved through rebuild")
+    }
+}
diff --git a/tests/dasSQLITE/migrate_94_rebuild_custom_named_index.das b/tests/dasSQLITE/migrate_94_rebuild_custom_named_index.das
new file mode 100644
index 000000000..6787d2d01
--- /dev/null
+++ b/tests/dasSQLITE/migrate_94_rebuild_custom_named_index.das
@@ -0,0 +1,46 @@
+options gen2
+
+require dastest/testing_boost public
+require sqlite/sqlite_migrate
+
+// `[sql_index(name = "...")]` (explicit custom name) must survive convert_and_rename.
+// Earlier rebuild path created the staging table WITH indexes, which collided on the
+// custom name — SQLite index names are schema-global and the original still owned it.
+// Fixed by deferring index creation to after the RENAME.
+
+[sql_table(name = "users", legacy = true)]
+struct UserV1 {
+    @sql_primary_key Id : int
+    Name : string
+    Email : string
+}
+
+[sql_table(name = "users"),
+ sql_index(name = "ux_users_name", fields = "Name", unique)]
+struct User {
+    @sql_primary_key Id : int
+    Name : string
+    Email : string
+}
+
+[struct_convert]
+def convert_v1_to_v2(old : UserV1; var dst : User) {}
+
+[test]
+def test_custom_named_index_survives_rebuild(t : T?) {
+    var inscope db = open_sqlite(":memory:")
+    db |> exec("CREATE TABLE users (Id INTEGER PRIMARY KEY, Name TEXT NOT NULL, Email TEXT NOT NULL DEFAULT '')")
+    db |> exec("CREATE UNIQUE INDEX ux_users_name ON users (Name)")
+    db |> exec("INSERT INTO users (Id, Name, Email) VALUES (1, 'alice', 'alice@x')")
+    db |> convert_and_rename(type<UserV1>, type<User>)
+    let n_idx = db |> query_scalar(
+        "SELECT COUNT(*) FROM sqlite_master WHERE type='index' AND name='ux_users_name'",
+        type<int>)
+    t |> equal(n_idx, 1, "custom-named unique index lives after rebuild")
+    let n_tab = db |> query_scalar(
+        "SELECT COUNT(*) FROM sqlite_master WHERE type='table' AND name='users'",
+        type<int>)
+    t |> equal(n_tab, 1, "renamed `users` table is the only one (no `users_new` left)")
+    let landed = db |> query_scalar("SELECT Name FROM users WHERE Id=1", type<string>)
+    t |> equal(landed, "alice", "row preserved through rebuild")
+}
diff --git a/tests/dasSQLITE/migrate_95_rebuild_mismatched_table_names_panics.das b/tests/dasSQLITE/migrate_95_rebuild_mismatched_table_names_panics.das
new file mode 100644
index 000000000..42aa14648
--- /dev/null
+++ b/tests/dasSQLITE/migrate_95_rebuild_mismatched_table_names_panics.das
@@ -0,0 +1,39 @@
+options gen2
+
+require dastest/testing_boost public
+require sqlite/sqlite_migrate
+
+// `convert_and_rename` requires source and target structs to share the same
+// ``[sql_table(name=...)]`` (legacy/current convention). Mismatched names would
+// silently SELECT from the source table but DROP/RENAME the target — a footgun.
+// The runtime guard panics with a clear message pointing at the proper escape
+// hatch (``convert(type, type, name=...)``) for cross-table conversions.
+
+[sql_table(name = "items_old", legacy = true)]
+struct ItemV1 {
+    @sql_primary_key Id : int
+    Name : string
+}
+
+[sql_table(name = "items")]
+struct Item {
+    @sql_primary_key Id : int
+    Name : string
+}
+
+[struct_convert]
+def m_v1_to_v2(old : ItemV1; var dst : Item) {}
+
+[test]
+def test_convert_and_rename_panics_on_mismatched_table_names(t : T?) {
+    var inscope db = open_sqlite(":memory:")
+    db |> exec("CREATE TABLE items_old (Id INTEGER PRIMARY KEY, Name TEXT NOT NULL)")
+    var panicked = false
+    try {
+        db |> convert_and_rename(type<ItemV1>, type<Item>)
+    } recover {
+        panicked = true
+    }
+    t |> success(panicked,
+        "convert_and_rename must panic when source/target [sql_table(name=...)] disagree")
+}
diff --git a/tutorials/sql/43-migrations.das b/tutorials/sql/43-migrations.das
index 03ea9c77e..bb4d3d1c2 100644
--- a/tutorials/sql/43-migrations.das
+++ b/tutorials/sql/43-migrations.das
@@ -30,7 +30,8 @@ require sqlite/sqlite_migrate
 // PART 1 — Define a schema and write a few migrations
 // =====================================================================

-// Current shape. Migrations 1-3 below describe the path from empty to here.
+// Current shape. Migrations 1-3 below describe the path from empty to here;
+// migration 6 (PART 5) shows the rebuild path that adds LastLoginAt.
[sql_table(name = "users")] struct User { @@ -38,6 +39,27 @@ struct User { Name : string Email : string // added in migration 2 Score : int = 0 // added in migration 3 with default + LastLoginAt : Option // added in migration 6 via convert_and_rename +} + +// Pre-rebuild shape, kept around as a [sql_table(legacy=true)] historical +// reference. Read-only — usable with select_from inside the rebuild migration, +// but the compiler refuses create_table / insert / update / delete on it. +[sql_table(name = "users", legacy = true)] +struct UserV5 { + @sql_primary_key Id : int + Name : string + Email : string + Score : int = 0 +} + +// [struct_convert] walks T's fields and emits `_::struct_convert_field(tgt.X, src.X)` +// per field. Identity / Option-wrap / numeric / string conversions ship out of the box; +// drop your own overload in your module to extend (e.g. `int → MyEnum`). +// Body is empty because every field auto-derives — no per-field overrides needed. +[struct_convert] +def m_v5_to_v6(old : UserV5; var dst : User) { + pass } // Migration 1 — bootstrap. @@ -111,6 +133,28 @@ def migration_005(db : SqlRunner) { db |> create_unique_index(type, ("Email", "Name"), "ux_email_name") } +// Migration 6 — schema REBUILD via convert_and_rename. +// +// Adds LastLoginAt to User. Strictly speaking, `add_column(type, +// "LastLoginAt")` would also work for this case (SQLite handles ADD COLUMN +// on nullable types fine). Migration 6 uses convert_and_rename anyway to +// demonstrate the rebuild shape, which IS the only path for the cases ALTER +// can't reach: PK changes, type narrowing, FK alterations, CHECK changes. +// +// The dance: +// 1. CREATE TABLE users_new with the new shape. +// 2. SELECT every row of `users` (read via UserV5 — the legacy struct), +// run it through m_v5_to_v6 (auto-derived field copy + LastLoginAt +// defaults to none()), INSERT into users_new. +// 3. DROP TABLE users. +// 4. ALTER TABLE users_new RENAME TO users. 
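The four steps above are plain SQL against a stock SQLite; they can be checked outside daslang. A sketch using Python's sqlite3 purely as a driver (the tutorial's table and column names; `LastLoginAt TEXT` is an assumed affinity for the new nullable column, not taken from the source):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE users (Id INTEGER PRIMARY KEY, Name TEXT NOT NULL,
                    Email TEXT NOT NULL, Score INTEGER NOT NULL DEFAULT 0);
INSERT INTO users (Id, Name, Email) VALUES (1, 'alice', 'alice@x');

-- 1. CREATE the staging table with the new shape (adds nullable LastLoginAt)
CREATE TABLE users_new (Id INTEGER PRIMARY KEY, Name TEXT NOT NULL,
                        Email TEXT NOT NULL, Score INTEGER NOT NULL DEFAULT 0,
                        LastLoginAt TEXT);
-- 2. copy every row through the conversion (absent field defaults to NULL)
INSERT INTO users_new (Id, Name, Email, Score)
    SELECT Id, Name, Email, Score FROM users;
-- 3. DROP the old table
DROP TABLE users;
-- 4. RENAME the staging table into place
ALTER TABLE users_new RENAME TO users;
""")
cols = [row[1] for row in con.execute("PRAGMA table_info('users')")]
print(cols)  # ['Id', 'Name', 'Email', 'Score', 'LastLoginAt']
print(con.execute("SELECT Name, Email FROM users").fetchall())  # [('alice', 'alice@x')]
```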
+// Runs inside migrate_to_latest's single big transaction; failure rolls back.
+
+[sql_migration(version = 6, description = "rebuild: add LastLoginAt")]
+def migration_006(db : SqlRunner) {
+    db |> convert_and_rename(type<UserV5>, type<User>)
+}
+
 [export]
 def main() : int {
     // =================================================================
@@ -177,15 +221,40 @@ def main() : int {
     // - DROP COLUMN / RENAME COLUMN — old names aren't in the current
     //   struct, so daslang has nothing to validate against.
     // - PK / UNIQUE inline / generated columns added post-hoc — SQLite
-    //   can't ALTER these in place; needs a table rebuild
-    //   (chunk 14c, struct_convert).
+    //   can't ALTER these in place; needs a table rebuild — see
+    //   PART 5 below for the struct_convert + convert_and_rename path.
     // - CHECK constraints, FK ADD/DROP, column type changes.
     // - Anything ad-hoc that doesn't map to a struct field.
+
+    // ================================================================
+    // PART 5 — Schema rebuild: verify migration 6 landed
+    // ================================================================
+    //
+    // Migration 6 ran inside migrate_to_latest. Two checks:
+    // - The new column is on the table.
+    // - alice survived the rebuild (Email preserved).
+
+    let last_login_count = db |> query_scalar(
+        "SELECT COUNT(*) FROM pragma_table_info('users') WHERE name='LastLoginAt'", type<int>)
+    print("post-rebuild LastLoginAt column present: {last_login_count == 1}\n")
+    let alice_post_rebuild = db |> query_scalar(
+        "SELECT Email FROM users WHERE Name='alice'", type<string>)
+    print("alice survived rebuild: Email={alice_post_rebuild}\n")
+
+    // GOTCHA: indexes created at runtime (migrations 4 and 5 used
+    // `create_index` / `create_unique_index`) are NOT reflected in
+    // [sql_index] annotations on the User struct, so convert_and_rename's
+    // `create_table(type<User>, "users_new")` doesn't recreate them.
+    // After the rebuild, the indexes are gone:
     let n_idx = db |> query_scalar(
         "SELECT COUNT(*) FROM sqlite_master WHERE type='index' AND tbl_name='users' AND name NOT LIKE 'sqlite_%'", type<int>)
-    print("user indexes after migration 5: {n_idx}\n")
+    print("user indexes after rebuild: {n_idx} (gone — see note above)\n")
+    // Two ways to avoid this:
+    // - Declare indexes via [sql_index(...)] siblings on the struct so
+    //   create_table emits them during convert_and_rename.
+    // - Recreate indexes inside migration_006 after convert_and_rename
+    //   via create_index / create_unique_index calls.
 }

 // =================================================================
diff --git a/utils/hygiene/rule_collapse_blank_lines.das b/utils/hygiene/rule_collapse_blank_lines.das
new file mode 100644
index 000000000..3077172d6
--- /dev/null
+++ b/utils/hygiene/rule_collapse_blank_lines.das
@@ -0,0 +1,31 @@
+options gen2
+
+module rule_collapse_blank_lines shared private
+
+require strings
+require daslib/strings_boost
+
+
+def public collapse_blank_lines(src : string) : string {
+    var lines <- split(src, "\n")
+    var crlf = false
+    for (raw_line in lines) {
+        if (ends_with(raw_line, "\r")) {
+            crlf = true
+            break
+        }
+    }
+    let nl = crlf ? "\r\n" : "\n"
+    var out : array<string>
+    var prev_blank = false
+    for (raw_line in lines) {
+        let line = ends_with(raw_line, "\r") ? slice(raw_line, 0, length(raw_line) - 1) : raw_line
+        let is_blank = empty(strip(line))
+        if (is_blank && prev_blank) {
+            continue
+        }
+        out |> push(line)
+        prev_blank = is_blank
+    }
+    return out |> join(nl)
+}
diff --git a/utils/hygiene/rules.das b/utils/hygiene/rules.das
index 9b80148d0..4c712ef0c 100644
--- a/utils/hygiene/rules.das
+++ b/utils/hygiene/rules.das
@@ -3,8 +3,9 @@ options gen2
 module rules shared private

 require rule_private_docs public
+require rule_collapse_blank_lines public

 def public apply_all_rules(src : string) : string {
-    return trim_private_docstrings(src)
+    return collapse_blank_lines(trim_private_docstrings(src))
 }
diff --git a/utils/hygiene/test_hygiene.das b/utils/hygiene/test_hygiene.das
index fc1295492..bbed1b722 100644
--- a/utils/hygiene/test_hygiene.das
+++ b/utils/hygiene/test_hygiene.das
@@ -4,6 +4,8 @@ require dastest/testing_boost public
 require strings

 require rule_private_docs
+require rule_collapse_blank_lines
+require rules

 require process
@@ -186,3 +188,69 @@ def test_process_path_directory(t : T?) {
     let r = process_path(".", false)
     t |> equal(r.kind, ProcessResultKind.not_a_file)
 }
+
+
+[test]
+def test_collapse_double_blank(t : T?) {
+    let input = "a\n\n\nb\n"
+    let want = "a\n\nb\n"
+    t |> equal(collapse_blank_lines(input), want)
+}
+
+
+[test]
+def test_collapse_many_blank(t : T?) {
+    let input = "a\n\n\n\n\nb\n"
+    let want = "a\n\nb\n"
+    t |> equal(collapse_blank_lines(input), want)
+}
+
+
+[test]
+def test_preserve_single_blank(t : T?) {
+    let input = "a\n\nb\n"
+    t |> equal(collapse_blank_lines(input), input)
+}
+
+
+[test]
+def test_no_blanks_unchanged(t : T?) {
+    let input = "a\nb\nc\n"
+    t |> equal(collapse_blank_lines(input), input)
+}
+
+
+[test]
+def test_collapse_whitespace_only_blank(t : T?) {
+    // Lines containing only spaces/tabs count as blank for collapsing.
+ let input = "a\n \n\t\nb\n" + let want = "a\n \nb\n" + t |> equal(collapse_blank_lines(input), want) +} + + +[test] +def test_collapse_idempotent(t : T?) { + let input = "a\n\n\n\nb\n\n\nc\n" + let once = collapse_blank_lines(input) + let twice = collapse_blank_lines(once) + t |> equal(once, twice) +} + + +[test] +def test_collapse_preserves_crlf(t : T?) { + let input = "a\r\n\r\n\r\nb\r\n" + let want = "a\r\n\r\nb\r\n" + t |> equal(collapse_blank_lines(input), want) +} + + +[test] +def test_apply_all_rules_chains_collapse(t : T?) { + // Top-level double-blank between two public defs must be collapsed by + // the rule chain (collapse_blank_lines runs after trim_private_docstrings). + let input = "def a() \{\n return 1\n\}\n\n\n\ndef b() \{\n return 2\n\}\n" + let want = "def a() \{\n return 1\n\}\n\ndef b() \{\n return 2\n\}\n" + t |> equal(apply_all_rules(input), want) +}
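The rule's semantics (CRLF detection, whitespace-only lines count as blank, the first blank of a run survives) can be cross-checked with a line-for-line Python mirror of the daslang function above:

```python
def collapse_blank_lines(src: str) -> str:
    # Mirror of rule_collapse_blank_lines: detect CRLF from any line's
    # trailing "\r", treat whitespace-only lines as blank, keep the first
    # blank of each run and drop the rest, rejoin with the detected EOL.
    lines = src.split("\n")
    crlf = any(line.endswith("\r") for line in lines)
    nl = "\r\n" if crlf else "\n"
    out, prev_blank = [], False
    for raw in lines:
        line = raw[:-1] if raw.endswith("\r") else raw
        blank = line.strip() == ""
        if blank and prev_blank:
            continue
        out.append(line)
        prev_blank = blank
    return nl.join(out)

assert collapse_blank_lines("a\n\n\nb\n") == "a\n\nb\n"            # runs collapse
assert collapse_blank_lines("a\n\nb\n") == "a\n\nb\n"              # single blank kept
assert collapse_blank_lines("a\r\n\r\n\r\nb\r\n") == "a\r\n\r\nb\r\n"  # CRLF preserved
```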