This repository was archived by the owner on Oct 18, 2023. It is now read-only.

Commit 6c6b1ac

bottomless: increase the max batch size to 10000
The reasoning is as follows: 10,000 uncompressed frames weigh about 40 MiB. Gzip is expected to produce a ~20 MiB file from them, while xz can compress them down to ~800 KiB. The previous limit of 500 frames would make xz create a ~50 KiB file, which is below the 128 KiB minimum that S3-like services charge for when writing to an object store.
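
A back-of-the-envelope sketch of the size arithmetic above, assuming the default 4096-byte SQLite page plus a 24-byte WAL frame header per frame; the constants and main function here are illustrative, not crate code:

// Back-of-the-envelope check of the batch-size arithmetic (illustrative, not crate code).
// Assumes the default 4096-byte SQLite page plus a 24-byte WAL frame header per frame.
fn main() {
    const PAGE_SIZE: u64 = 4096;
    const WAL_FRAME_HEADER: u64 = 24;

    let new_batch = 10_000 * (PAGE_SIZE + WAL_FRAME_HEADER);
    let old_batch = 500 * (PAGE_SIZE + WAL_FRAME_HEADER);

    // ~39.3 MiB, i.e. roughly the 40 MiB cited above for 10000 frames.
    println!("new uncompressed batch: ~{:.1} MiB", new_batch as f64 / (1024.0 * 1024.0));
    // ~2.0 MiB for the old 500-frame limit, small enough that an xz-compressed
    // object easily lands below the 128 KiB billing minimum.
    println!("old uncompressed batch: ~{:.1} MiB", old_batch as f64 / (1024.0 * 1024.0));
}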
1 parent: f232316

1 file changed (+1, -1)

bottomless/src/replicator.rs

@@ -171,7 +171,7 @@ impl Options {
         let secret_access_key = env_var("LIBSQL_BOTTOMLESS_AWS_SECRET_ACCESS_KEY").ok();
         let region = env_var("LIBSQL_BOTTOMLESS_AWS_DEFAULT_REGION").ok();
         let max_frames_per_batch =
-            env_var_or("LIBSQL_BOTTOMLESS_BATCH_MAX_FRAMES", 500).parse::<usize>()?;
+            env_var_or("LIBSQL_BOTTOMLESS_BATCH_MAX_FRAMES", 10000).parse::<usize>()?;
         let s3_upload_max_parallelism =
             env_var_or("LIBSQL_BOTTOMLESS_S3_PARALLEL_MAX", 32).parse::<usize>()?;
         let restore_transaction_page_swap_after =
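
As the diff shows, the batch size is read from LIBSQL_BOTTOMLESS_BATCH_MAX_FRAMES, falling back to the new default of 10000 when the variable is unset. A minimal standalone sketch of that env-var-with-default pattern follows; the env_var_or helper below is a hypothetical stand-in, not the crate's actual implementation or error handling:

use std::env;

// Hypothetical stand-in for the env-var-with-default pattern used in the diff;
// the real crate routes this through its own env_var_or helper and error type.
fn env_var_or(key: &str, default: impl ToString) -> String {
    env::var(key).unwrap_or_else(|_| default.to_string())
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // With the variable unset this yields the new default of 10000 frames per batch;
    // running with e.g. LIBSQL_BOTTOMLESS_BATCH_MAX_FRAMES=500 restores the old limit.
    let max_frames_per_batch =
        env_var_or("LIBSQL_BOTTOMLESS_BATCH_MAX_FRAMES", 10000).parse::<usize>()?;
    println!("max_frames_per_batch = {max_frames_per_batch}");
    Ok(())
}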
