diff --git a/src/current/v24.1/take-and-restore-locality-aware-backups.md b/src/current/v24.1/take-and-restore-locality-aware-backups.md
index 1681edf896c..516d53f3c52 100644
--- a/src/current/v24.1/take-and-restore-locality-aware-backups.md
+++ b/src/current/v24.1/take-and-restore-locality-aware-backups.md
@@ -125,7 +125,7 @@ RESTORE FROM LATEST IN ('s3://us-east-bucket/', 's3://us-west-bucket/');
 To restore from a specific backup, use [`RESTORE FROM {subdirectory} IN ...`]({% link {{ page.version.version }}/restore.md %}#restore-a-specific-full-or-incremental-backup).
 
 {{site.data.alerts.callout_info}}
-[`RESTORE`]({% link {{ page.version.version }}/restore.md %}) is not truly locality-aware; while restoring from backups, a node may read from a store that does not match its locality. This can happen in the cases that either the [`BACKUP`]({% link {{ page.version.version }}/backup.md %}) or [`RESTORE`]({% link {{ page.version.version }}/restore.md %}) was not of a [full cluster]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %}#full-backups). Note that during a locality-aware restore, some data may be temporarily located on another node before it is eventually relocated to the appropriate node. To avoid this, you can [manually restore zone configurations from a locality-aware backup](#manually-restore-zone-configurations-from-a-locality-aware-backup).
+[`RESTORE`]({% link {{ page.version.version }}/restore.md %}) is not truly locality-aware; while restoring from backups, a node may read from a store that does not match its locality. This can happen in the cases that either the [`BACKUP`]({% link {{ page.version.version }}/backup.md %}) or [`RESTORE`]({% link {{ page.version.version }}/restore.md %}) was not of a [full cluster]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %}#full-backups). Note that during a locality-aware restore, some data may be temporarily located on another node before it is eventually relocated to the appropriate node.
 {{site.data.alerts.end}}
 
 ## Create an incremental locality-aware backup
@@ -191,47 +191,6 @@ To restore from a specific backup, use [`RESTORE FROM {subdirectory} IN ...`]({%
 When [restoring from an incremental locality-aware backup]({% link {{ page.version.version }}/take-and-restore-locality-aware-backups.md %}#restore-from-an-incremental-locality-aware-backup), you need to include **every** locality ever used, even if it was only used once.
 {{site.data.alerts.end}}
 
-## Manually restore zone configurations from a locality-aware backup
-
-During a [locality-aware restore](#restore-from-a-locality-aware-backup), some data may be temporarily located on another node before it is eventually relocated to the appropriate node. To avoid this, you need to manually restore [zone configurations]({% link {{ page.version.version }}/configure-replication-zones.md %}) first:
-
-Once the locality-aware restore has started, [pause the restore]({% link {{ page.version.version }}/pause-job.md %}):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-PAUSE JOB 27536791415282;
-~~~
-
-The `system.zones` table stores your cluster's [zone configurations]({% link {{ page.version.version }}/configure-replication-zones.md %}), which will prevent the data from rebalancing. To restore them, you must restore the `system.zones` table into a new database because you cannot drop the existing `system.zones` table:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-RESTORE TABLE system.zones FROM '2021/03/23-213101.37' IN
-    'azure-blob://acme-co-backup?AZURE_ACCOUNT_KEY=hash&AZURE_ACCOUNT_NAME=acme-co'
-    WITH into_db = 'newdb';
-~~~
-
-After it's restored into a new database, you can write the restored `zones` table data to the cluster's existing `system.zones` table:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-INSERT INTO system.zones SELECT * FROM newdb.zones;
-~~~
-
-Then drop the temporary table you created:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-DROP TABLE newdb.zones;
-~~~
-
-Then, [resume the restore]({% link {{ page.version.version }}/resume-job.md %}):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-RESUME JOB 27536791415282;
-~~~
-
 ## See also
 
 - [`BACKUP`]({% link {{ page.version.version }}/backup.md %})
diff --git a/src/current/v24.3/take-and-restore-locality-aware-backups.md b/src/current/v24.3/take-and-restore-locality-aware-backups.md
index 79b17f60876..0a75073cc1b 100644
--- a/src/current/v24.3/take-and-restore-locality-aware-backups.md
+++ b/src/current/v24.3/take-and-restore-locality-aware-backups.md
@@ -125,7 +125,7 @@ RESTORE FROM LATEST IN ('s3://us-east-bucket/', 's3://us-west-bucket/');
 To restore from a specific backup, use [`RESTORE FROM {subdirectory} IN ...`]({% link {{ page.version.version }}/restore.md %}#restore-a-specific-full-or-incremental-backup).
 
 {{site.data.alerts.callout_info}}
-[`RESTORE`]({% link {{ page.version.version }}/restore.md %}) is not truly locality-aware; while restoring from backups, a node may read from a store that does not match its locality. This can happen in the cases that either the [`BACKUP`]({% link {{ page.version.version }}/backup.md %}) or [`RESTORE`]({% link {{ page.version.version }}/restore.md %}) was not of a [full cluster]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %}#full-backups). Note that during a locality-aware restore, some data may be temporarily located on another node before it is eventually relocated to the appropriate node. To avoid this, you can [manually restore zone configurations from a locality-aware backup](#manually-restore-zone-configurations-from-a-locality-aware-backup).
+[`RESTORE`]({% link {{ page.version.version }}/restore.md %}) is not truly locality-aware; while restoring from backups, a node may read from a store that does not match its locality. This can happen in the cases that either the [`BACKUP`]({% link {{ page.version.version }}/backup.md %}) or [`RESTORE`]({% link {{ page.version.version }}/restore.md %}) was not of a [full cluster]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %}#full-backups). Note that during a locality-aware restore, some data may be temporarily located on another node before it is eventually relocated to the appropriate node.
 {{site.data.alerts.end}}
 
 ## Create an incremental locality-aware backup
@@ -191,47 +191,6 @@ To restore from a specific backup, use [`RESTORE FROM {subdirectory} IN ...`]({%
 When [restoring from an incremental locality-aware backup]({% link {{ page.version.version }}/take-and-restore-locality-aware-backups.md %}#restore-from-an-incremental-locality-aware-backup), you need to include **every** locality ever used, even if it was only used once.
 {{site.data.alerts.end}}
 
-## Manually restore zone configurations from a locality-aware backup
-
-During a [locality-aware restore](#restore-from-a-locality-aware-backup), some data may be temporarily located on another node before it is eventually relocated to the appropriate node. To avoid this, you need to manually restore [zone configurations]({% link {{ page.version.version }}/configure-replication-zones.md %}) first:
-
-Once the locality-aware restore has started, [pause the restore]({% link {{ page.version.version }}/pause-job.md %}):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-PAUSE JOB 27536791415282;
-~~~
-
-The `system.zones` table stores your cluster's [zone configurations]({% link {{ page.version.version }}/configure-replication-zones.md %}), which will prevent the data from rebalancing. To restore them, you must restore the `system.zones` table into a new database because you cannot drop the existing `system.zones` table:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-RESTORE TABLE system.zones FROM '2021/03/23-213101.37' IN
-    'azure-blob://acme-co-backup?AZURE_ACCOUNT_KEY=hash&AZURE_ACCOUNT_NAME=acme-co'
-    WITH into_db = 'newdb';
-~~~
-
-After it's restored into a new database, you can write the restored `zones` table data to the cluster's existing `system.zones` table:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-INSERT INTO system.zones SELECT * FROM newdb.zones;
-~~~
-
-Then drop the temporary table you created:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-DROP TABLE newdb.zones;
-~~~
-
-Then, [resume the restore]({% link {{ page.version.version }}/resume-job.md %}):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-RESUME JOB 27536791415282;
-~~~
-
 ## See also
 
 - [`BACKUP`]({% link {{ page.version.version }}/backup.md %})
diff --git a/src/current/v25.2/take-and-restore-locality-aware-backups.md b/src/current/v25.2/take-and-restore-locality-aware-backups.md
index 79b17f60876..0a75073cc1b 100644
--- a/src/current/v25.2/take-and-restore-locality-aware-backups.md
+++ b/src/current/v25.2/take-and-restore-locality-aware-backups.md
@@ -125,7 +125,7 @@ RESTORE FROM LATEST IN ('s3://us-east-bucket/', 's3://us-west-bucket/');
 To restore from a specific backup, use [`RESTORE FROM {subdirectory} IN ...`]({% link {{ page.version.version }}/restore.md %}#restore-a-specific-full-or-incremental-backup).
 
 {{site.data.alerts.callout_info}}
-[`RESTORE`]({% link {{ page.version.version }}/restore.md %}) is not truly locality-aware; while restoring from backups, a node may read from a store that does not match its locality. This can happen in the cases that either the [`BACKUP`]({% link {{ page.version.version }}/backup.md %}) or [`RESTORE`]({% link {{ page.version.version }}/restore.md %}) was not of a [full cluster]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %}#full-backups). Note that during a locality-aware restore, some data may be temporarily located on another node before it is eventually relocated to the appropriate node. To avoid this, you can [manually restore zone configurations from a locality-aware backup](#manually-restore-zone-configurations-from-a-locality-aware-backup).
+[`RESTORE`]({% link {{ page.version.version }}/restore.md %}) is not truly locality-aware; while restoring from backups, a node may read from a store that does not match its locality. This can happen in the cases that either the [`BACKUP`]({% link {{ page.version.version }}/backup.md %}) or [`RESTORE`]({% link {{ page.version.version }}/restore.md %}) was not of a [full cluster]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %}#full-backups). Note that during a locality-aware restore, some data may be temporarily located on another node before it is eventually relocated to the appropriate node.
 {{site.data.alerts.end}}
 
 ## Create an incremental locality-aware backup
@@ -191,47 +191,6 @@ To restore from a specific backup, use [`RESTORE FROM {subdirectory} IN ...`]({%
 When [restoring from an incremental locality-aware backup]({% link {{ page.version.version }}/take-and-restore-locality-aware-backups.md %}#restore-from-an-incremental-locality-aware-backup), you need to include **every** locality ever used, even if it was only used once.
 {{site.data.alerts.end}}
 
-## Manually restore zone configurations from a locality-aware backup
-
-During a [locality-aware restore](#restore-from-a-locality-aware-backup), some data may be temporarily located on another node before it is eventually relocated to the appropriate node. To avoid this, you need to manually restore [zone configurations]({% link {{ page.version.version }}/configure-replication-zones.md %}) first:
-
-Once the locality-aware restore has started, [pause the restore]({% link {{ page.version.version }}/pause-job.md %}):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-PAUSE JOB 27536791415282;
-~~~
-
-The `system.zones` table stores your cluster's [zone configurations]({% link {{ page.version.version }}/configure-replication-zones.md %}), which will prevent the data from rebalancing. To restore them, you must restore the `system.zones` table into a new database because you cannot drop the existing `system.zones` table:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-RESTORE TABLE system.zones FROM '2021/03/23-213101.37' IN
-    'azure-blob://acme-co-backup?AZURE_ACCOUNT_KEY=hash&AZURE_ACCOUNT_NAME=acme-co'
-    WITH into_db = 'newdb';
-~~~
-
-After it's restored into a new database, you can write the restored `zones` table data to the cluster's existing `system.zones` table:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-INSERT INTO system.zones SELECT * FROM newdb.zones;
-~~~
-
-Then drop the temporary table you created:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-DROP TABLE newdb.zones;
-~~~
-
-Then, [resume the restore]({% link {{ page.version.version }}/resume-job.md %}):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-RESUME JOB 27536791415282;
-~~~
-
 ## See also
 
 - [`BACKUP`]({% link {{ page.version.version }}/backup.md %})
diff --git a/src/current/v25.3/take-and-restore-locality-aware-backups.md b/src/current/v25.3/take-and-restore-locality-aware-backups.md
index 79b17f60876..0a75073cc1b 100644
--- a/src/current/v25.3/take-and-restore-locality-aware-backups.md
+++ b/src/current/v25.3/take-and-restore-locality-aware-backups.md
@@ -125,7 +125,7 @@ RESTORE FROM LATEST IN ('s3://us-east-bucket/', 's3://us-west-bucket/');
 To restore from a specific backup, use [`RESTORE FROM {subdirectory} IN ...`]({% link {{ page.version.version }}/restore.md %}#restore-a-specific-full-or-incremental-backup).
 
 {{site.data.alerts.callout_info}}
-[`RESTORE`]({% link {{ page.version.version }}/restore.md %}) is not truly locality-aware; while restoring from backups, a node may read from a store that does not match its locality. This can happen in the cases that either the [`BACKUP`]({% link {{ page.version.version }}/backup.md %}) or [`RESTORE`]({% link {{ page.version.version }}/restore.md %}) was not of a [full cluster]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %}#full-backups). Note that during a locality-aware restore, some data may be temporarily located on another node before it is eventually relocated to the appropriate node. To avoid this, you can [manually restore zone configurations from a locality-aware backup](#manually-restore-zone-configurations-from-a-locality-aware-backup).
+[`RESTORE`]({% link {{ page.version.version }}/restore.md %}) is not truly locality-aware; while restoring from backups, a node may read from a store that does not match its locality. This can happen in the cases that either the [`BACKUP`]({% link {{ page.version.version }}/backup.md %}) or [`RESTORE`]({% link {{ page.version.version }}/restore.md %}) was not of a [full cluster]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %}#full-backups). Note that during a locality-aware restore, some data may be temporarily located on another node before it is eventually relocated to the appropriate node.
 {{site.data.alerts.end}}
 
 ## Create an incremental locality-aware backup
@@ -191,47 +191,6 @@ To restore from a specific backup, use [`RESTORE FROM {subdirectory} IN ...`]({%
 When [restoring from an incremental locality-aware backup]({% link {{ page.version.version }}/take-and-restore-locality-aware-backups.md %}#restore-from-an-incremental-locality-aware-backup), you need to include **every** locality ever used, even if it was only used once.
 {{site.data.alerts.end}}
 
-## Manually restore zone configurations from a locality-aware backup
-
-During a [locality-aware restore](#restore-from-a-locality-aware-backup), some data may be temporarily located on another node before it is eventually relocated to the appropriate node. To avoid this, you need to manually restore [zone configurations]({% link {{ page.version.version }}/configure-replication-zones.md %}) first:
-
-Once the locality-aware restore has started, [pause the restore]({% link {{ page.version.version }}/pause-job.md %}):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-PAUSE JOB 27536791415282;
-~~~
-
-The `system.zones` table stores your cluster's [zone configurations]({% link {{ page.version.version }}/configure-replication-zones.md %}), which will prevent the data from rebalancing. To restore them, you must restore the `system.zones` table into a new database because you cannot drop the existing `system.zones` table:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-RESTORE TABLE system.zones FROM '2021/03/23-213101.37' IN
-    'azure-blob://acme-co-backup?AZURE_ACCOUNT_KEY=hash&AZURE_ACCOUNT_NAME=acme-co'
-    WITH into_db = 'newdb';
-~~~
-
-After it's restored into a new database, you can write the restored `zones` table data to the cluster's existing `system.zones` table:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-INSERT INTO system.zones SELECT * FROM newdb.zones;
-~~~
-
-Then drop the temporary table you created:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-DROP TABLE newdb.zones;
-~~~
-
-Then, [resume the restore]({% link {{ page.version.version }}/resume-job.md %}):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-RESUME JOB 27536791415282;
-~~~
-
 ## See also
 
 - [`BACKUP`]({% link {{ page.version.version }}/backup.md %})
diff --git a/src/current/v25.4/take-and-restore-locality-aware-backups.md b/src/current/v25.4/take-and-restore-locality-aware-backups.md
index 79b17f60876..0a75073cc1b 100644
--- a/src/current/v25.4/take-and-restore-locality-aware-backups.md
+++ b/src/current/v25.4/take-and-restore-locality-aware-backups.md
@@ -125,7 +125,7 @@ RESTORE FROM LATEST IN ('s3://us-east-bucket/', 's3://us-west-bucket/');
 To restore from a specific backup, use [`RESTORE FROM {subdirectory} IN ...`]({% link {{ page.version.version }}/restore.md %}#restore-a-specific-full-or-incremental-backup).
 
 {{site.data.alerts.callout_info}}
-[`RESTORE`]({% link {{ page.version.version }}/restore.md %}) is not truly locality-aware; while restoring from backups, a node may read from a store that does not match its locality. This can happen in the cases that either the [`BACKUP`]({% link {{ page.version.version }}/backup.md %}) or [`RESTORE`]({% link {{ page.version.version }}/restore.md %}) was not of a [full cluster]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %}#full-backups). Note that during a locality-aware restore, some data may be temporarily located on another node before it is eventually relocated to the appropriate node. To avoid this, you can [manually restore zone configurations from a locality-aware backup](#manually-restore-zone-configurations-from-a-locality-aware-backup).
+[`RESTORE`]({% link {{ page.version.version }}/restore.md %}) is not truly locality-aware; while restoring from backups, a node may read from a store that does not match its locality. This can happen in the cases that either the [`BACKUP`]({% link {{ page.version.version }}/backup.md %}) or [`RESTORE`]({% link {{ page.version.version }}/restore.md %}) was not of a [full cluster]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %}#full-backups). Note that during a locality-aware restore, some data may be temporarily located on another node before it is eventually relocated to the appropriate node.
 {{site.data.alerts.end}}
 
 ## Create an incremental locality-aware backup
@@ -191,47 +191,6 @@ To restore from a specific backup, use [`RESTORE FROM {subdirectory} IN ...`]({%
 When [restoring from an incremental locality-aware backup]({% link {{ page.version.version }}/take-and-restore-locality-aware-backups.md %}#restore-from-an-incremental-locality-aware-backup), you need to include **every** locality ever used, even if it was only used once.
 {{site.data.alerts.end}}
 
-## Manually restore zone configurations from a locality-aware backup
-
-During a [locality-aware restore](#restore-from-a-locality-aware-backup), some data may be temporarily located on another node before it is eventually relocated to the appropriate node. To avoid this, you need to manually restore [zone configurations]({% link {{ page.version.version }}/configure-replication-zones.md %}) first:
-
-Once the locality-aware restore has started, [pause the restore]({% link {{ page.version.version }}/pause-job.md %}):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-PAUSE JOB 27536791415282;
-~~~
-
-The `system.zones` table stores your cluster's [zone configurations]({% link {{ page.version.version }}/configure-replication-zones.md %}), which will prevent the data from rebalancing. To restore them, you must restore the `system.zones` table into a new database because you cannot drop the existing `system.zones` table:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-RESTORE TABLE system.zones FROM '2021/03/23-213101.37' IN
-    'azure-blob://acme-co-backup?AZURE_ACCOUNT_KEY=hash&AZURE_ACCOUNT_NAME=acme-co'
-    WITH into_db = 'newdb';
-~~~
-
-After it's restored into a new database, you can write the restored `zones` table data to the cluster's existing `system.zones` table:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-INSERT INTO system.zones SELECT * FROM newdb.zones;
-~~~
-
-Then drop the temporary table you created:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-DROP TABLE newdb.zones;
-~~~
-
-Then, [resume the restore]({% link {{ page.version.version }}/resume-job.md %}):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-RESUME JOB 27536791415282;
-~~~
-
 ## See also
 
 - [`BACKUP`]({% link {{ page.version.version }}/backup.md %})