From 32c7b93eaf15aecce0cbbe3fc6ccfa2db6a62c65 Mon Sep 17 00:00:00 2001
From: Peach Leach
Date: Thu, 18 Sep 2025 16:35:14 -0400
Subject: [PATCH 1/3] Deleted incorrect section
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Deleted “Manually restore zone configurations from a locality-aware backup” as it is no longer accurate
---
 ...take-and-restore-locality-aware-backups.md | 41 -------------------
 1 file changed, 41 deletions(-)

diff --git a/src/current/v25.3/take-and-restore-locality-aware-backups.md b/src/current/v25.3/take-and-restore-locality-aware-backups.md
index 79b17f60876..ab4227d1fb9 100644
--- a/src/current/v25.3/take-and-restore-locality-aware-backups.md
+++ b/src/current/v25.3/take-and-restore-locality-aware-backups.md
@@ -191,47 +191,6 @@ To restore from a specific backup, use [`RESTORE FROM {subdirectory} IN ...`]({%
 When [restoring from an incremental locality-aware backup]({% link {{ page.version.version }}/take-and-restore-locality-aware-backups.md %}#restore-from-an-incremental-locality-aware-backup), you need to include **every** locality ever used, even if it was only used once.
 {{site.data.alerts.end}}
 
-## Manually restore zone configurations from a locality-aware backup
-
-During a [locality-aware restore](#restore-from-a-locality-aware-backup), some data may be temporarily located on another node before it is eventually relocated to the appropriate node. To avoid this, you need to manually restore [zone configurations]({% link {{ page.version.version }}/configure-replication-zones.md %}) first:
-
-Once the locality-aware restore has started, [pause the restore]({% link {{ page.version.version }}/pause-job.md %}):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-PAUSE JOB 27536791415282;
-~~~
-
-The `system.zones` table stores your cluster's [zone configurations]({% link {{ page.version.version }}/configure-replication-zones.md %}), which will prevent the data from rebalancing. To restore them, you must restore the `system.zones` table into a new database because you cannot drop the existing `system.zones` table:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-RESTORE TABLE system.zones FROM '2021/03/23-213101.37' IN
-    'azure-blob://acme-co-backup?AZURE_ACCOUNT_KEY=hash&AZURE_ACCOUNT_NAME=acme-co'
-    WITH into_db = 'newdb';
-~~~
-
-After it's restored into a new database, you can write the restored `zones` table data to the cluster's existing `system.zones` table:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-INSERT INTO system.zones SELECT * FROM newdb.zones;
-~~~
-
-Then drop the temporary table you created:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-DROP TABLE newdb.zones;
-~~~
-
-Then, [resume the restore]({% link {{ page.version.version }}/resume-job.md %}):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-RESUME JOB 27536791415282;
-~~~
-
 ## See also
 
 - [`BACKUP`]({% link {{ page.version.version }}/backup.md %})

From e4f5f184bd2803fb75c25bdb55eeb8b699b2ecbf Mon Sep 17 00:00:00 2001
From: Peach Leach
Date: Wed, 1 Oct 2025 16:14:40 -0400
Subject: [PATCH 2/3] Removed broken link

Removed broken link
---
 src/current/v25.3/take-and-restore-locality-aware-backups.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/current/v25.3/take-and-restore-locality-aware-backups.md b/src/current/v25.3/take-and-restore-locality-aware-backups.md
index ab4227d1fb9..0a75073cc1b 100644
--- a/src/current/v25.3/take-and-restore-locality-aware-backups.md
+++ b/src/current/v25.3/take-and-restore-locality-aware-backups.md
@@ -125,7 +125,7 @@ RESTORE FROM LATEST IN ('s3://us-east-bucket/', 's3://us-west-bucket/');
 To restore from a specific backup, use [`RESTORE FROM {subdirectory} IN ...`]({% link {{ page.version.version }}/restore.md %}#restore-a-specific-full-or-incremental-backup).
 
 {{site.data.alerts.callout_info}}
-[`RESTORE`]({% link {{ page.version.version }}/restore.md %}) is not truly locality-aware; while restoring from backups, a node may read from a store that does not match its locality. This can happen in the cases that either the [`BACKUP`]({% link {{ page.version.version }}/backup.md %}) or [`RESTORE`]({% link {{ page.version.version }}/restore.md %}) was not of a [full cluster]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %}#full-backups). Note that during a locality-aware restore, some data may be temporarily located on another node before it is eventually relocated to the appropriate node. To avoid this, you can [manually restore zone configurations from a locality-aware backup](#manually-restore-zone-configurations-from-a-locality-aware-backup).
+[`RESTORE`]({% link {{ page.version.version }}/restore.md %}) is not truly locality-aware; while restoring from backups, a node may read from a store that does not match its locality. This can happen when either the [`BACKUP`]({% link {{ page.version.version }}/backup.md %}) or the [`RESTORE`]({% link {{ page.version.version }}/restore.md %}) was not of a [full cluster]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %}#full-backups). Note that during a locality-aware restore, some data may be temporarily located on another node before it is eventually relocated to the appropriate node.
 {{site.data.alerts.end}}
 
 ## Create an incremental locality-aware backup

From 9e17d6ff5de0ec356c6dc7b378e7d09c47f7502b Mon Sep 17 00:00:00 2001
From: Peach Leach
Date: Mon, 6 Oct 2025 15:40:26 -0400
Subject: [PATCH 3/3] Forward and backport

Forward and backport
---
 ...take-and-restore-locality-aware-backups.md | 43 +-----------------
 ...take-and-restore-locality-aware-backups.md | 43 +-----------------
 ...take-and-restore-locality-aware-backups.md | 45 +------------------
 ...take-and-restore-locality-aware-backups.md | 43 +-----------------
 4 files changed, 4 insertions(+), 170 deletions(-)

diff --git a/src/current/v24.1/take-and-restore-locality-aware-backups.md b/src/current/v24.1/take-and-restore-locality-aware-backups.md
index 1681edf896c..516d53f3c52 100644
--- a/src/current/v24.1/take-and-restore-locality-aware-backups.md
+++ b/src/current/v24.1/take-and-restore-locality-aware-backups.md
@@ -125,7 +125,7 @@ RESTORE FROM LATEST IN ('s3://us-east-bucket/', 's3://us-west-bucket/');
 To restore from a specific backup, use [`RESTORE FROM {subdirectory} IN ...`]({% link {{ page.version.version }}/restore.md %}#restore-a-specific-full-or-incremental-backup).
 
 {{site.data.alerts.callout_info}}
-[`RESTORE`]({% link {{ page.version.version }}/restore.md %}) is not truly locality-aware; while restoring from backups, a node may read from a store that does not match its locality. This can happen in the cases that either the [`BACKUP`]({% link {{ page.version.version }}/backup.md %}) or [`RESTORE`]({% link {{ page.version.version }}/restore.md %}) was not of a [full cluster]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %}#full-backups). Note that during a locality-aware restore, some data may be temporarily located on another node before it is eventually relocated to the appropriate node. To avoid this, you can [manually restore zone configurations from a locality-aware backup](#manually-restore-zone-configurations-from-a-locality-aware-backup).
+[`RESTORE`]({% link {{ page.version.version }}/restore.md %}) is not truly locality-aware; while restoring from backups, a node may read from a store that does not match its locality. This can happen when either the [`BACKUP`]({% link {{ page.version.version }}/backup.md %}) or the [`RESTORE`]({% link {{ page.version.version }}/restore.md %}) was not of a [full cluster]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %}#full-backups). Note that during a locality-aware restore, some data may be temporarily located on another node before it is eventually relocated to the appropriate node.
 {{site.data.alerts.end}}
 
 ## Create an incremental locality-aware backup
@@ -191,47 +191,6 @@ To restore from a specific backup, use [`RESTORE FROM {subdirectory} IN ...`]({%
 When [restoring from an incremental locality-aware backup]({% link {{ page.version.version }}/take-and-restore-locality-aware-backups.md %}#restore-from-an-incremental-locality-aware-backup), you need to include **every** locality ever used, even if it was only used once.
 {{site.data.alerts.end}}
 
-## Manually restore zone configurations from a locality-aware backup
-
-During a [locality-aware restore](#restore-from-a-locality-aware-backup), some data may be temporarily located on another node before it is eventually relocated to the appropriate node. To avoid this, you need to manually restore [zone configurations]({% link {{ page.version.version }}/configure-replication-zones.md %}) first:
-
-Once the locality-aware restore has started, [pause the restore]({% link {{ page.version.version }}/pause-job.md %}):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-PAUSE JOB 27536791415282;
-~~~
-
-The `system.zones` table stores your cluster's [zone configurations]({% link {{ page.version.version }}/configure-replication-zones.md %}), which will prevent the data from rebalancing. To restore them, you must restore the `system.zones` table into a new database because you cannot drop the existing `system.zones` table:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-RESTORE TABLE system.zones FROM '2021/03/23-213101.37' IN
-    'azure-blob://acme-co-backup?AZURE_ACCOUNT_KEY=hash&AZURE_ACCOUNT_NAME=acme-co'
-    WITH into_db = 'newdb';
-~~~
-
-After it's restored into a new database, you can write the restored `zones` table data to the cluster's existing `system.zones` table:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-INSERT INTO system.zones SELECT * FROM newdb.zones;
-~~~
-
-Then drop the temporary table you created:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-DROP TABLE newdb.zones;
-~~~
-
-Then, [resume the restore]({% link {{ page.version.version }}/resume-job.md %}):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-RESUME JOB 27536791415282;
-~~~
-
 ## See also
 
 - [`BACKUP`]({% link {{ page.version.version }}/backup.md %})
diff --git a/src/current/v24.3/take-and-restore-locality-aware-backups.md b/src/current/v24.3/take-and-restore-locality-aware-backups.md
index 79b17f60876..0a75073cc1b 100644
--- a/src/current/v24.3/take-and-restore-locality-aware-backups.md
+++ b/src/current/v24.3/take-and-restore-locality-aware-backups.md
@@ -125,7 +125,7 @@ RESTORE FROM LATEST IN ('s3://us-east-bucket/', 's3://us-west-bucket/');
 To restore from a specific backup, use [`RESTORE FROM {subdirectory} IN ...`]({% link {{ page.version.version }}/restore.md %}#restore-a-specific-full-or-incremental-backup).
 
 {{site.data.alerts.callout_info}}
-[`RESTORE`]({% link {{ page.version.version }}/restore.md %}) is not truly locality-aware; while restoring from backups, a node may read from a store that does not match its locality. This can happen in the cases that either the [`BACKUP`]({% link {{ page.version.version }}/backup.md %}) or [`RESTORE`]({% link {{ page.version.version }}/restore.md %}) was not of a [full cluster]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %}#full-backups). Note that during a locality-aware restore, some data may be temporarily located on another node before it is eventually relocated to the appropriate node. To avoid this, you can [manually restore zone configurations from a locality-aware backup](#manually-restore-zone-configurations-from-a-locality-aware-backup).
+[`RESTORE`]({% link {{ page.version.version }}/restore.md %}) is not truly locality-aware; while restoring from backups, a node may read from a store that does not match its locality. This can happen when either the [`BACKUP`]({% link {{ page.version.version }}/backup.md %}) or the [`RESTORE`]({% link {{ page.version.version }}/restore.md %}) was not of a [full cluster]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %}#full-backups). Note that during a locality-aware restore, some data may be temporarily located on another node before it is eventually relocated to the appropriate node.
 {{site.data.alerts.end}}
 
 ## Create an incremental locality-aware backup
@@ -191,47 +191,6 @@ To restore from a specific backup, use [`RESTORE FROM {subdirectory} IN ...`]({%
 When [restoring from an incremental locality-aware backup]({% link {{ page.version.version }}/take-and-restore-locality-aware-backups.md %}#restore-from-an-incremental-locality-aware-backup), you need to include **every** locality ever used, even if it was only used once.
 {{site.data.alerts.end}}
 
-## Manually restore zone configurations from a locality-aware backup
-
-During a [locality-aware restore](#restore-from-a-locality-aware-backup), some data may be temporarily located on another node before it is eventually relocated to the appropriate node. To avoid this, you need to manually restore [zone configurations]({% link {{ page.version.version }}/configure-replication-zones.md %}) first:
-
-Once the locality-aware restore has started, [pause the restore]({% link {{ page.version.version }}/pause-job.md %}):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-PAUSE JOB 27536791415282;
-~~~
-
-The `system.zones` table stores your cluster's [zone configurations]({% link {{ page.version.version }}/configure-replication-zones.md %}), which will prevent the data from rebalancing. To restore them, you must restore the `system.zones` table into a new database because you cannot drop the existing `system.zones` table:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-RESTORE TABLE system.zones FROM '2021/03/23-213101.37' IN
-    'azure-blob://acme-co-backup?AZURE_ACCOUNT_KEY=hash&AZURE_ACCOUNT_NAME=acme-co'
-    WITH into_db = 'newdb';
-~~~
-
-After it's restored into a new database, you can write the restored `zones` table data to the cluster's existing `system.zones` table:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-INSERT INTO system.zones SELECT * FROM newdb.zones;
-~~~
-
-Then drop the temporary table you created:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-DROP TABLE newdb.zones;
-~~~
-
-Then, [resume the restore]({% link {{ page.version.version }}/resume-job.md %}):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-RESUME JOB 27536791415282;
-~~~
-
 ## See also
 
 - [`BACKUP`]({% link {{ page.version.version }}/backup.md %})
diff --git a/src/current/v25.2/take-and-restore-locality-aware-backups.md b/src/current/v25.2/take-and-restore-locality-aware-backups.md
index 79b17f60876..fad14e6aed0 100644
--- a/src/current/v25.2/take-and-restore-locality-aware-backups.md
+++ b/src/current/v25.2/take-and-restore-locality-aware-backups.md
@@ -125,7 +125,7 @@ RESTORE FROM LATEST IN ('s3://us-east-bucket/', 's3://us-west-bucket/');
 To restore from a specific backup, use [`RESTORE FROM {subdirectory} IN ...`]({% link {{ page.version.version }}/restore.md %}#restore-a-specific-full-or-incremental-backup).
 
 {{site.data.alerts.callout_info}}
-[`RESTORE`]({% link {{ page.version.version }}/restore.md %}) is not truly locality-aware; while restoring from backups, a node may read from a store that does not match its locality. This can happen in the cases that either the [`BACKUP`]({% link {{ page.version.version }}/backup.md %}) or [`RESTORE`]({% link {{ page.version.version }}/restore.md %}) was not of a [full cluster]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %}#full-backups). Note that during a locality-aware restore, some data may be temporarily located on another node before it is eventually relocated to the appropriate node. To avoid this, you can [manually restore zone configurations from a locality-aware backup](#manually-restore-zone-configurations-from-a-locality-aware-backup).
+[`RESTORE`]({% link {{ page.version.version }}/restore.md %}) is not truly locality-aware; while restoring from backups, a node may read from a store that does not match its locality. This can happen when either the [`BACKUP`]({% link {{ page.version.version }}/backup.md %}) or the [`RESTORE`]({% link {{ page.version.version }}/restore.md %}) was not of a [full cluster]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %}#full-backups). Note that during a locality-aware restore, some data may be temporarily located on another node before it is eventually relocated to the appropriate node.
 {{site.data.alerts.end}}
 
 ## Create an incremental locality-aware backup
@@ -191,49 +191,6 @@ To restore from a specific backup, use [`RESTORE FROM {subdirectory} IN ...`]({%
 When [restoring from an incremental locality-aware backup]({% link {{ page.version.version }}/take-and-restore-locality-aware-backups.md %}#restore-from-an-incremental-locality-aware-backup), you need to include **every** locality ever used, even if it was only used once.
 {{site.data.alerts.end}}
 
-## Manually restore zone configurations from a locality-aware backup
-
-During a [locality-aware restore](#restore-from-a-locality-aware-backup), some data may be temporarily located on another node before it is eventually relocated to the appropriate node. To avoid this, you need to manually restore [zone configurations]({% link {{ page.version.version }}/configure-replication-zones.md %}) first:
-
-Once the locality-aware restore has started, [pause the restore]({% link {{ page.version.version }}/pause-job.md %}):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-PAUSE JOB 27536791415282;
-~~~
-
-The `system.zones` table stores your cluster's [zone configurations]({% link {{ page.version.version }}/configure-replication-zones.md %}), which will prevent the data from rebalancing. To restore them, you must restore the `system.zones` table into a new database because you cannot drop the existing `system.zones` table:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-RESTORE TABLE system.zones FROM '2021/03/23-213101.37' IN
-    'azure-blob://acme-co-backup?AZURE_ACCOUNT_KEY=hash&AZURE_ACCOUNT_NAME=acme-co'
-    WITH into_db = 'newdb';
-~~~
-
-After it's restored into a new database, you can write the restored `zones` table data to the cluster's existing `system.zones` table:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-INSERT INTO system.zones SELECT * FROM newdb.zones;
-~~~
-
-Then drop the temporary table you created:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-DROP TABLE newdb.zones;
-~~~
-
-Then, [resume the restore]({% link {{ page.version.version }}/resume-job.md %}):
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-RESUME JOB 27536791415282;
-~~~
-
-## See also
-
 - [`BACKUP`]({% link {{ page.version.version }}/backup.md %})
 - [`RESTORE`]({% link {{ page.version.version }}/restore.md %})
 - [Take Full and Incremental Backups]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %})
diff --git a/src/current/v25.4/take-and-restore-locality-aware-backups.md b/src/current/v25.4/take-and-restore-locality-aware-backups.md
index 79b17f60876..0a75073cc1b 100644
--- a/src/current/v25.4/take-and-restore-locality-aware-backups.md
+++ b/src/current/v25.4/take-and-restore-locality-aware-backups.md
@@ -125,7 +125,7 @@ RESTORE FROM LATEST IN ('s3://us-east-bucket/', 's3://us-west-bucket/');
 To restore from a specific backup, use [`RESTORE FROM {subdirectory} IN ...`]({% link {{ page.version.version }}/restore.md %}#restore-a-specific-full-or-incremental-backup).
 
 {{site.data.alerts.callout_info}}
-[`RESTORE`]({% link {{ page.version.version }}/restore.md %}) is not truly locality-aware; while restoring from backups, a node may read from a store that does not match its locality. This can happen in the cases that either the [`BACKUP`]({% link {{ page.version.version }}/backup.md %}) or [`RESTORE`]({% link {{ page.version.version }}/restore.md %}) was not of a [full cluster]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %}#full-backups). Note that during a locality-aware restore, some data may be temporarily located on another node before it is eventually relocated to the appropriate node. To avoid this, you can [manually restore zone configurations from a locality-aware backup](#manually-restore-zone-configurations-from-a-locality-aware-backup).
+[`RESTORE`]({% link {{ page.version.version }}/restore.md %}) is not truly locality-aware; while restoring from backups, a node may read from a store that does not match its locality. This can happen when either the [`BACKUP`]({% link {{ page.version.version }}/backup.md %}) or the [`RESTORE`]({% link {{ page.version.version }}/restore.md %}) was not of a [full cluster]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %}#full-backups). Note that during a locality-aware restore, some data may be temporarily located on another node before it is eventually relocated to the appropriate node.
{{site.data.alerts.end}} ## Create an incremental locality-aware backup @@ -191,47 +191,6 @@ To restore from a specific backup, use [`RESTORE FROM {subdirectory} IN ...`]({% When [restoring from an incremental locality-aware backup]({% link {{ page.version.version }}/take-and-restore-locality-aware-backups.md %}#restore-from-an-incremental-locality-aware-backup), you need to include **every** locality ever used, even if it was only used once. {{site.data.alerts.end}} -## Manually restore zone configurations from a locality-aware backup - -During a [locality-aware restore](#restore-from-a-locality-aware-backup), some data may be temporarily located on another node before it is eventually relocated to the appropriate node. To avoid this, you need to manually restore [zone configurations]({% link {{ page.version.version }}/configure-replication-zones.md %}) first: - -Once the locality-aware restore has started, [pause the restore]({% link {{ page.version.version }}/pause-job.md %}): - -{% include_cached copy-clipboard.html %} -~~~ sql -PAUSE JOB 27536791415282; -~~~ - -The `system.zones` table stores your cluster's [zone configurations]({% link {{ page.version.version }}/configure-replication-zones.md %}), which will prevent the data from rebalancing. To restore them, you must restore the `system.zones` table into a new database because you cannot drop the existing `system.zones` table: - -{% include_cached copy-clipboard.html %} -~~~ sql -RESTORE TABLE system.zones FROM '2021/03/23-213101.37' IN - 'azure-blob://acme-co-backup?AZURE_ACCOUNT_KEY=hash&AZURE_ACCOUNT_NAME=acme-co' - WITH into_db = 'newdb'; -~~~ - -After it's restored into a new database, you can write the restored `zones` table data to the cluster's existing `system.zones` table: - -{% include_cached copy-clipboard.html %} -~~~ sql -INSERT INTO system.zones SELECT * FROM newdb.zones; -~~~ - -Then drop the temporary table you created: - -{% include_cached copy-clipboard.html %} -~~~ sql -DROP TABLE newdb.zones; -~~~ - -Then, [resume the restore]({% link {{ page.version.version }}/resume-job.md %}): - -{% include_cached copy-clipboard.html %} -~~~ sql -RESUME JOB 27536791415282; -~~~ - ## See also - [`BACKUP`]({% link {{ page.version.version }}/backup.md %})