Remove UUID generation from SQLite functions
julik committed May 20, 2024
1 parent 96d0dcc commit 36dc3cf
Showing 2 changed files with 8 additions and 11 deletions.
1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -4,6 +4,7 @@
 - Add Redis-based adapter derived from Prorate
 - Formalize and test the adapter API
 - Add a memory-based adapter for single-process applications (and as a reference)
+- For SQLite tables, do not use UUID primary keys - there is no need for that, and SQLite does not have a native UUID generation function that is enabled on all builds

 ## 0.6.0
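The rationale above leans on SQLite's rowid mechanism: a column declared INTEGER PRIMARY KEY is an alias for the built-in rowid, which the engine assigns on INSERT, so no application-side ID generation is needed. A minimal standalone sketch of that behavior, with a hypothetical schema - the gem's real migration may differ:

    require "sqlite3"

    db = SQLite3::Database.new(":memory:")
    db.execute(<<~SQL)
      CREATE TABLE pecorino_leaky_buckets (
        id INTEGER PRIMARY KEY, -- alias for SQLite's rowid; auto-assigned on INSERT
        key TEXT NOT NULL UNIQUE,
        level REAL NOT NULL DEFAULT 0.0
      )
    SQL
    db.execute("INSERT INTO pecorino_leaky_buckets (key, level) VALUES (?, ?)", ["throttle-a", 0.0])
    p db.last_insert_row_id # => 1, generated by SQLite without any SecureRandom.uuid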
18 changes: 7 additions & 11 deletions lib/pecorino/adapters/sqlite_adapter.rb
@@ -54,16 +54,14 @@ def add_tokens(key:, capacity:, leak_rate:, n_tokens:)
         delete_after_s: may_be_deleted_after_seconds,
         leak_rate: leak_rate.to_f,
         now_s: Time.now.to_f, # See above as to why we are using a time value passed in
-        fillup: n_tokens.to_f,
-        id: SecureRandom.uuid # SQLite3 does not autogenerate UUIDs
+        fillup: n_tokens.to_f
       }

       sql = @model_class.sanitize_sql_array([<<~SQL, query_params])
         INSERT INTO pecorino_leaky_buckets AS t
-          (id, key, last_touched_at, may_be_deleted_after, level)
+          (key, last_touched_at, may_be_deleted_after, level)
         VALUES
           (
-            :id,
             :key,
             :now_s, -- Precision loss must be avoided here as it is used for calculations
             DATETIME('now', '+:delete_after_s seconds'), -- Precision loss is acceptable here
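Note that the :delete_after_s bind sits inside a quoted SQL string and still gets filled in, because ActiveRecord's sanitize_sql_array replaces named binds textually throughout the statement. A small sketch of that substitution, with the hypothetical SomeModel standing in for any ActiveRecord model class (mirroring the @model_class call above):

    sql = SomeModel.sanitize_sql_array(
      ["DATETIME('now', '+:delete_after_s seconds')", {delete_after_s: 30.0}]
    )
    # sql is now: DATETIME('now', '+30.0 seconds')
    # Numeric binds are interpolated unquoted, so the modifier string stays valid.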
@@ -111,19 +109,17 @@ def add_tokens_conditionally(key:, capacity:, leak_rate:, n_tokens:)
         delete_after_s: may_be_deleted_after_seconds,
         leak_rate: leak_rate.to_f,
         now_s: Time.now.to_f, # See above as to why we are using a time value passed in
-        fillup: n_tokens.to_f,
-        id: SecureRandom.uuid # SQLite3 does not autogenerate UUIDs
+        fillup: n_tokens.to_f
       }

       # Sadly with SQLite we need to do an INSERT first, because otherwise the inserted row is visible
       # to the WITH clause, so we cannot combine the initial fillup and the update into one statement.
       # This should be fine however since we will suppress the INSERT on a key conflict
       insert_sql = @model_class.sanitize_sql_array([<<~SQL, query_params])
         INSERT INTO pecorino_leaky_buckets AS t
-          (id, key, last_touched_at, may_be_deleted_after, level)
+          (key, last_touched_at, may_be_deleted_after, level)
         VALUES
           (
-            :id,
             :key,
             :now_s, -- Precision loss must be avoided here as it is used for calculations
             DATETIME('now', '+:delete_after_s seconds'), -- Precision loss is acceptable here
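The comment in this hunk points at a SQLite quirk: a row inserted by the same statement would be visible to the WITH clause, so the initial fillup and the CTE-based update have to stay separate statements. Below is a simplified sketch of that two-step pattern, not the adapter's actual SQL; SomeModel is hypothetical, and the ON CONFLICT (key) DO NOTHING clause is an inference from "suppress the INSERT on a key conflict":

    # Step 1: create the bucket row if it is absent; on a key conflict the
    # INSERT is suppressed and the existing row (and its level) is left alone.
    SomeModel.connection.execute(<<~SQL)
      INSERT INTO pecorino_leaky_buckets (key, last_touched_at, may_be_deleted_after, level)
      VALUES ('demo-bucket', 1716200000.0, DATETIME('now', '+30 seconds'), 0.0)
      ON CONFLICT (key) DO NOTHING
    SQL

    # Step 2: with the row guaranteed to exist, run the leak/fillup arithmetic
    # as a separate UPDATE instead of folding it into one CTE statement.
    SomeModel.connection.execute(<<~SQL)
      UPDATE pecorino_leaky_buckets
      SET level = MIN(4.0, level + 1.0), last_touched_at = 1716200000.0
      WHERE key = 'demo-bucket'
    SQL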
@@ -168,12 +164,12 @@ def add_tokens_conditionally(key:, capacity:, leak_rate:, n_tokens:)

     def set_block(key:, block_for:)
       raise ArgumentError, "block_for must be positive" unless block_for > 0
-      query_params = {id: SecureRandom.uuid, key: key.to_s, block_for: block_for.to_f, now_s: Time.now.to_f}
+      query_params = {key: key.to_s, block_for: block_for.to_f, now_s: Time.now.to_f}
       block_set_query = @model_class.sanitize_sql_array([<<~SQL, query_params])
         INSERT INTO pecorino_blocks AS t
-          (id, key, blocked_until)
+          (key, blocked_until)
         VALUES
-          (:id, :key, :now_s + :block_for)
+          (:key, :now_s + :block_for)
         ON CONFLICT (key) DO UPDATE SET
           blocked_until = MAX(EXCLUDED.blocked_until, t.blocked_until)
         RETURNING blocked_until;
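The upsert above guarantees that re-blocking a key can only extend a block, never shorten it, since MAX(EXCLUDED.blocked_until, t.blocked_until) keeps the later of the two expiry times. A hedged usage sketch - the class name follows the file path, while the constructor argument is assumed from the @model_class usage and SomeModel is hypothetical:

    adapter = Pecorino::Adapters::SqliteAdapter.new(SomeModel)

    adapter.set_block(key: "ip:203.0.113.7", block_for: 60) # blocked until now + 60s
    adapter.set_block(key: "ip:203.0.113.7", block_for: 5)  # MAX() keeps the later expiry: still ~60s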
