The current migrator is not suitable for zero-downtime deployments (as in, deployments where you need to support an intermediate database state until the new code is running). There isn't exactly an established best practice, but a couple of solutions have emerged over the years.
Here is a nice collection of them: https://xata.io/blog/zero-downtime-schema-migrations-postgresql
One sticks out as being written in Rust: Reshape. But I also like the ideas presented by pgroll. I believe the currently most compelling approach to zero-downtime database migrations is based on the "Expand and Contract Pattern".
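For readers unfamiliar with the pattern: expand-and-contract splits a breaking change into a backwards-compatible "expand" step that runs before the new code deploys and a "contract" step that runs only after the old code has stopped. A minimal sketch for renaming a column on Postgres (table, column, and trigger names are illustrative, not from either tool):

```sql
-- Expand: runs before the new code is deployed.
-- Old code keeps writing "username"; new code writes "handle".
ALTER TABLE users ADD COLUMN handle TEXT;
UPDATE users SET handle = username;

-- Keep both columns in sync while old and new code coexist.
CREATE FUNCTION sync_handle() RETURNS trigger AS $$
BEGIN
  NEW.handle := COALESCE(NEW.handle, NEW.username);
  NEW.username := COALESCE(NEW.username, NEW.handle);
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER users_sync_handle BEFORE INSERT OR UPDATE ON users
  FOR EACH ROW EXECUTE FUNCTION sync_handle();

-- Contract: runs only after the old code has fully stopped.
DROP TRIGGER users_sync_handle ON users;
DROP FUNCTION sync_handle();
ALTER TABLE users DROP COLUMN username;
```

The key property is that the database is valid for both code versions between the two steps, which is exactly the window a rolling deployment needs.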
Different issues here talked about reworking the migration API.
E.g. #2066 for consolidation, #3178
The current migrator is not up to the task of supporting this: it runs all migrations present in the migration dir in one go. With a CD pipeline, this means any migration step that must execute only after the old code has stopped and the new code is running would have to happen in a separate rollout/deployment.
The idea behind Reshape and pgroll is to let the user define the target schema using high-level operations defined in JSON. The tools then take care of the expand step before the new code is deployed and the contract step after the deployment has finished.
I envision similar tooling that takes those high-level operations and emits a single migration split into a before, after, and abort file. `sqlx migrate` and `Migrator::run` would then be similar to Reshape and pgroll and accept a `complete` flag and a `complete` operation, respectively. The difference would be that instead of directly executing migrations written as high-level operations, SQLx's migrator would run the generated SQL files. This allows quick generation of migrations from high-level operations but also gives users the freedom to hand-write SQL migrations.
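Concretely, the generated output for one logical migration could be three plain SQL files that the migrator applies at different points in the rollout. The file naming scheme and contents below are only a sketch of the idea, not a proposed format:

```sql
-- 0042_rename_username.before.sql — applied before the new code deploys
ALTER TABLE users ADD COLUMN handle TEXT;
UPDATE users SET handle = username;

-- 0042_rename_username.after.sql — applied once the deployment has finished
ALTER TABLE users DROP COLUMN username;

-- 0042_rename_username.abort.sql — applied if the deployment is rolled back
ALTER TABLE users DROP COLUMN handle;
```

Because these are ordinary SQL files, a user who never touches the high-level operations can still write all three by hand and get the same rollout behavior.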
Consolidation could be established for high-level operations only, leading to a single set of migration files per release (where a release can also mean a deployment). The consolidation/transformation step would be similar to `cargo publish` pre-steps such as bumping the version in Cargo.toml.
The exact details for supporting everything mentioned in #2066 need to be worked out, of course. Also, Reshape runs some steps that are dynamic and can't simply be exported as SQL files (e.g. `batch_touch_rows`); this also needs consideration and a solution. I just want to get the ball rolling and see if anyone else has opinions about this.
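To illustrate why such dynamic steps resist a flat SQL export: a batched backfill re-runs the same statement until no rows remain, and the loop (plus any pauses between batches) has to live in the driver, not in a migration file. A sketch of the per-batch statement, not Reshape's actual SQL:

```sql
-- Executed repeatedly by the migration driver, with a pause between
-- batches, until it reports zero affected rows. The surrounding loop
-- cannot be expressed in a static .sql file.
UPDATE users
SET handle = username
WHERE id IN (
  SELECT id FROM users
  WHERE handle IS NULL
  ORDER BY id
  LIMIT 1000
);
```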
Also tagging @fabianlindfors to see if they are interested in opening up some of Reshape's underlying functionality, such as getting a text output of the SQL queries that are part of the run, complete, abort steps.