-
Notifications
You must be signed in to change notification settings - Fork 6
Add AWS S3 Store #231
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
base: main
Are you sure you want to change the base?
Add AWS S3 Store #231
Conversation
This commit implements a new S3Store that uses AWS S3 as a distributed
key-value storage backend. The implementation follows the established
pattern from other distributed stores (DynamoDB, MongoDB) and provides:
- S3Store class for async operations using aioboto3
- Client-side TTL expiration checking (S3 lifecycle policies don't
support atomic TTL+retrieval)
- Comprehensive test suite with LocalStack integration
- Auto-generated sync library support
- Documentation updates in README
The store uses S3 objects with path format {collection}/{key} and
serializes ManagedEntry objects to JSON. TTL metadata is stored in
S3 object metadata and checked during retrieval operations.
Resolves #161
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: William Easton <[email protected]>
- Add S105 and S106 to ignored linting rules for test files (hardcoded test credentials) - Exclude S3Store from sync library generation (uses aioboto3 which is async-only) - Fix contextlib.suppress usage in S3 test file for proper async context handling - Remove generated sync S3 files S3Store follows the same pattern as DynamoDBStore - it's async-only because aioboto3 doesn't have a direct sync equivalent. The README correctly indicates S3 as async-only. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-authored-by: William Easton <[email protected]>
- Remove unused _session instance variable - Improve exception handling to use ClientError and check error codes - Use ManagedEntry.is_expired property for consistency - Remove redundant s3_store fixture - Add HTTP_NOT_FOUND constant to avoid magic numbers - Add type ignores for boto response types Co-authored-by: William Easton <[email protected]>
S3 has a maximum key length of 1024 bytes. This fix ensures that long collection or key names are automatically hashed when they exceed safe limits (500 bytes each), keeping the total S3 key under 1024 bytes while maintaining uniqueness through hash suffixes. - Import hash_excess_length utility from key_value.shared.utils.sanitize - Add MAX_COLLECTION_LENGTH and MAX_KEY_LENGTH constants (500 bytes each) - Update _get_s3_key() to hash collection and key names if they exceed limits - Follows the same pattern used by MongoDB store for handling long names Fixes test_long_collection_name test which was failing with KeyTooLongError. Co-authored-by: William Easton <[email protected]>
Adapt S3Store to match the serialization refactoring from PR #184: - Replace ManagedEntry.from_json() with self._serialization_adapter.load_json() - Replace managed_entry.to_json() with self._serialization_adapter.dump_json() - Leverage BasicSerializationAdapter automatically initialized by BaseStore Co-authored-by: William Easton <[email protected]>
…s_length - Add _truncate_to_bytes() helper that uses binary search to safely truncate strings at byte boundaries without splitting multi-byte UTF-8 characters - Add length_is_bytes parameter to sanitize_string() and hash_excess_length() to support byte-based length limits - Update S3Store to use length_is_bytes=True for proper S3 key length handling - Fixes KeyTooLongError when using multi-byte characters (emoji, CJK, etc.) This ensures S3 keys stay within the 1024-byte limit even with multi-byte UTF-8 characters. Co-authored-by: William Easton <[email protected]>
- Add explicit client cleanup (set _client = None after __aexit__) - Remove deletion of expired objects during retrieval - Update both __aexit__ and _close for consistency Co-authored-by: William Easton <[email protected]>
- Close streaming body properly to prevent connection leaks - Add regional bucket creation support for non-us-east-1 regions Co-authored-by: William Easton <[email protected]>
- Add S3KeySanitizationStrategy and S3CollectionSanitizationStrategy classes - Remove automatic sanitization from _get_s3_key() - Add collection_sanitization_strategy and key_sanitization_strategy parameters to S3Store - Update S3Store to use BaseStore's _sanitize_collection_and_key() method - By default, collections/keys are not sanitized (matches new main pattern) - Update tests to use sanitization strategies - Update documentation to explain when/why to use strategies This aligns with the main branch refactor where stores no longer sanitize collections and keys by default, making sanitization opt-in via strategy parameters. Co-authored-by: William Easton <[email protected]>
- Fix Elasticsearch ping functions to return True on success - Fix sanitize.py byte-length calculations for hash fragments - Add truncation in NEVER mode for consistency - Delete expired S3 entries on read to prevent stale data - Support delete-only IAM roles for S3 operations - Handle pagination in S3 test cleanup 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-authored-by: William Easton <[email protected]>
Resolved conflicts between S3 PR and FileTreeStore addition: - Added both S3 and FileTree store to documentation - Kept both S3 and filetree dependencies in pyproject.toml - Updated docs to mention both stores Co-authored-by: William Easton <[email protected]>
- Updated uv.lock after merge - Generated sync versions of filetree store (from codegen) Co-authored-by: William Easton <[email protected]>
FileTreeStore is async-only and uses aiofile/anyio libraries that cannot be converted to sync. Added to EXCLUDE_DIRECTORIES and removed incorrectly generated sync files. Co-authored-by: William Easton <[email protected]>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
- Changed exception name check from 'index_already_exists_exception' to 'already_exists_exception' - This handles both old (index_already_exists_exception) and new (resource_already_exists_exception) Elasticsearch versions - Fixed linting issue in ping_elasticsearch by moving return True to else block Co-authored-by: William Easton <[email protected]>
Co-authored-by: William Easton <[email protected]>
- Resolved conflict in key-value/key-value-aio/pyproject.toml - Added both s3 and duckdb to dev dependencies - Regenerated uv.lock with make sync - Updated generated code with make codegen Co-authored-by: William Easton <[email protected]>
- Remove obvious comments that state what the code already shows - Keep only non-obvious comments (e.g., region-specific AWS logic) - Simplify constant comments to be more concise - Remove type-hinting comments where type is already declared Co-authored-by: William Easton <[email protected]>
📝 WalkthroughWalkthroughAdds an AWS S3-backed asynchronous key-value store (S3Store) with byte-aware key/collection sanitization, async lifecycle and CRUD with TTL handling, LocalStack integration tests, updates to shared sanitize utilities, docs and dependency changes, and small lint-comment/test-config updates plus DynamoDB client ownership tracking. Changes
Sequence Diagram(s)sequenceDiagram
participant User
participant S3Store
participant Sanitizer
participant aioboto3
participant S3Service
User->>S3Store: async with S3Store(...) as store
S3Store->>S3Store: _connect()
alt client provided
S3Store->>S3Store: use pre-configured client
else
S3Store->>aioboto3: create S3 client
aioboto3-->>S3Store: S3Client
end
S3Store->>S3Service: HeadBucket(bucket)
alt exists
S3Service-->>S3Store: OK
else
S3Store->>S3Service: CreateBucket(bucket)
S3Service-->>S3Store: Created
end
S3Store-->>User: store ready
Note right of User: CRUD operations
User->>S3Store: get(key, collection)
S3Store->>Sanitizer: sanitize collection/key
Sanitizer-->>S3Store: s3_key
S3Store->>S3Service: GetObject(s3_key)
S3Service-->>S3Store: object + metadata
S3Store->>S3Store: validate TTL
alt not expired
S3Store-->>User: value
else
S3Store->>S3Service: DeleteObject(s3_key)
S3Store-->>User: None
end
User->>S3Store: put(key, entry)
S3Store->>Sanitizer: sanitize collection/key
S3Store->>S3Service: PutObject(s3_key, body, metadata)
S3Service-->>S3Store: success
User->>S3Store: exit
S3Store->>S3Store: _disconnect()/_close()
alt store owns client
S3Store->>aioboto3: close client
end
Possibly related PRs
Pre-merge checks and finishing touches❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✨ Finishing touches🧪 Generate unit tests (beta)
📜 Recent review detailsConfiguration used: CodeRabbit UI Review profile: ASSERTIVE Plan: Pro 📒 Files selected for processing (1)
🔇 Additional comments (2)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out. Comment |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
key-value/key-value-shared/src/key_value/shared/utils/sanitize.py (1)
175-195: Clarify documentation for byte-aware mode.The docstring states "truncate the value to the maximum length minus 8 characters" and "swap the last 8 characters with the first 8 characters of the generated hash." When
length_is_bytes=True, it would be clearer to state "8 bytes" instead of "8 characters" (though they're equivalent since the hash uses hexadecimal/ASCII characters).Consider updating the docstring to clarify behavior in both modes:
"""Hash part of the value if it exceeds the maximum length. This operation -will truncate the value to the maximum length minus 8 characters and will swap -the last 8 characters with the first 8 characters of the generated hash. +will truncate the value to the maximum length minus 8 (characters or bytes, depending +on length_is_bytes) and will append the first 8 characters of the generated hash.key-value/key-value-aio/tests/stores/s3/test_s3.py (1)
19-139: Prevent pytest-timeout from killing the S3 suite.
pytest-timeoutis configured at 10s in CI (see failing pipeline logs), butasync_wait_for_truecan legitimately spend ≈60s waiting for LocalStack to come up. Without an override the new suite will keep terminating early. Please either shrink the wait window or explicitly raise the per-suite timeout (for example,pytestmark = pytest.mark.timeout(WAIT_FOR_S3_TIMEOUT * 3)right below the skipif) so the health checks have room to finish. This keeps tests green without weakening the global guardrail. (docs.pytest.org)
♻️ Duplicate comments (1)
key-value/key-value-shared/src/key_value/shared/utils/sanitize.py (1)
153-165: Same validation issue affects ONLY_IF_CHANGED mode.This mode has the same potential issue as the ALWAYS mode where
hash_fragment_size_required >= max_lengthcan lead to unexpected behavior. The validation suggested in the previous comment would prevent this issue in both modes.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
⛔ Files ignored due to path filters (1)
uv.lockis excluded by!**/*.lock
📒 Files selected for processing (20)
README.md(3 hunks)docs/api/stores.md(1 hunks)docs/stores.md(2 hunks)key-value/key-value-aio/pyproject.toml(2 hunks)key-value/key-value-aio/src/key_value/aio/stores/elasticsearch/store.py(1 hunks)key-value/key-value-aio/src/key_value/aio/stores/s3/__init__.py(1 hunks)key-value/key-value-aio/src/key_value/aio/stores/s3/store.py(1 hunks)key-value/key-value-aio/tests/stores/base.py(1 hunks)key-value/key-value-aio/tests/stores/dynamodb/test_dynamodb.py(3 hunks)key-value/key-value-aio/tests/stores/elasticsearch/test_elasticsearch.py(3 hunks)key-value/key-value-aio/tests/stores/s3/__init__.py(1 hunks)key-value/key-value-aio/tests/stores/s3/test_s3.py(1 hunks)key-value/key-value-aio/tests/stores/vault/test_vault.py(1 hunks)key-value/key-value-shared/src/key_value/shared/utils/sanitize.py(7 hunks)key-value/key-value-sync/src/key_value/sync/code_gen/stores/elasticsearch/store.py(1 hunks)key-value/key-value-sync/tests/code_gen/stores/base.py(1 hunks)key-value/key-value-sync/tests/code_gen/stores/elasticsearch/test_elasticsearch.py(3 hunks)key-value/key-value-sync/tests/code_gen/stores/vault/test_vault.py(1 hunks)pyproject.toml(1 hunks)scripts/build_sync_library.py(1 hunks)
🧰 Additional context used
🧬 Code graph analysis (5)
key-value/key-value-aio/tests/stores/s3/test_s3.py (3)
key-value/key-value-shared/src/key_value/shared/stores/wait.py (1)
async_wait_for_true(7-15)key-value/key-value-aio/src/key_value/aio/stores/s3/store.py (3)
S3Store(84-449)S3CollectionSanitizationStrategy(68-81)S3KeySanitizationStrategy(35-65)key-value/key-value-aio/tests/stores/base.py (4)
BaseStoreTests(25-263)ContextManagerStoreTestMixin(266-278)store(31-31)test_not_unbounded(220-228)
key-value/key-value-aio/src/key_value/aio/stores/s3/__init__.py (2)
key-value/key-value-aio/tests/stores/s3/test_s3.py (1)
store(72-116)key-value/key-value-aio/src/key_value/aio/stores/s3/store.py (3)
S3CollectionSanitizationStrategy(68-81)S3KeySanitizationStrategy(35-65)S3Store(84-449)
key-value/key-value-aio/tests/stores/elasticsearch/test_elasticsearch.py (1)
key-value/key-value-sync/tests/code_gen/stores/elasticsearch/test_elasticsearch.py (1)
es_client(117-122)
key-value/key-value-aio/src/key_value/aio/stores/s3/store.py (5)
key-value/key-value-shared/src/key_value/shared/utils/managed_entry.py (2)
ManagedEntry(15-63)is_expired(29-32)key-value/key-value-shared/src/key_value/shared/utils/sanitization.py (1)
SanitizationStrategy(33-78)key-value/key-value-shared/src/key_value/shared/utils/sanitize.py (1)
hash_excess_length(175-203)key-value/key-value-sync/src/key_value/sync/code_gen/stores/base.py (9)
BaseContextManagerStore(399-415)BaseStore(60-376)_setup(130-131)get(214-235)_sanitize_collection_and_key(136-137)_get_managed_entry(204-205)_put_managed_entry(274-276)_delete_managed_entry(336-338)_close(413-415)key-value/key-value-shared/src/key_value/shared/utils/serialization.py (2)
load_json(55-59)dump_json(160-184)
key-value/key-value-shared/src/key_value/shared/utils/sanitize.py (1)
key-value/key-value-shared/src/key_value/shared/type_checking/bear_spray.py (1)
bear_enforce(21-23)
🪛 GitHub Actions: Run Tests
key-value/key-value-aio/tests/stores/elasticsearch/test_elasticsearch.py
[error] 404-404: Timeout (>10.0s) from pytest-timeout.
[error] 405-405: Timeout (>10.0s) from pytest-timeout.
[error] 406-406: Timeout (>10.0s) from pytest-timeout.
[error] 407-407: Timeout (>10.0s) from pytest-timeout.
[error] 408-408: Timeout (>10.0s) from pytest-timeout.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (11)
- GitHub Check: test_quick (3.10, macos-latest, key-value/key-value-shared)
- GitHub Check: test_quick (3.10, ubuntu-22.04, key-value/key-value-sync)
- GitHub Check: test_quick (3.10, windows-2022, key-value/key-value-sync)
- GitHub Check: test_quick (3.10, windows-latest, key-value/key-value-sync)
- GitHub Check: test_quick (3.10, windows-latest, key-value/key-value-aio)
- GitHub Check: test_quick (3.10, windows-2022, key-value/key-value-aio)
- GitHub Check: test_quick (3.10, macos-latest, key-value/key-value-sync)
- GitHub Check: test_quick (3.10, macos-14, key-value/key-value-shared)
- GitHub Check: test_quick (3.10, ubuntu-latest, key-value/key-value-sync)
- GitHub Check: test_quick (3.10, ubuntu-latest, key-value/key-value-aio)
- GitHub Check: test_quick (3.10, ubuntu-22.04, key-value/key-value-aio)
🔇 Additional comments (19)
scripts/build_sync_library.py (1)
59-62: Exclusions verified and correct.All four excluded directories have been confirmed to exist in the repository. The changes correctly prevent the S3 and memcached stores from being converted to sync versions, maintaining them as async-only implementations consistent with the established exclusion pattern.
key-value/key-value-shared/src/key_value/shared/utils/sanitize.py (4)
64-92: LGTM - Binary search implementation is correct.The binary search correctly finds the longest substring that fits within
max_byteswithout splitting multi-byte characters. The algorithm searches character indices while checking byte lengths of the encoded substrings.Minor observation: The
encodingparameter is never used by callers (all use default UTF-8). Consider either removing it for simplicity or explicitly passing it from callers if encoding flexibility is needed.
130-134: LGTM - Correct byte-aware size calculation.The hash fragment size calculation correctly handles both byte and character modes, and properly accounts for multi-byte characters in the separator.
167-171: LGTM - Correct truncation in NEVER mode.The NEVER mode correctly applies byte-aware or character-aware truncation without adding a hash fragment.
197-203: LGTM - Correct byte-aware truncation and hash appending.The truncation logic correctly uses byte-aware truncation when needed, and the hash appending is sound. Since the hash uses hexadecimal characters (ASCII), 8 characters equals 8 bytes, making the subtraction of 8 from
max_lengthcorrect in both character and byte modes.key-value/key-value-aio/tests/stores/s3/__init__.py (1)
1-1: Docstring addition looks goodClear package-level docstring is consistent with neighboring test packages.
key-value/key-value-aio/tests/stores/vault/test_vault.py (1)
16-16: Nice cleanup on S105 suppressionRemoving the inline ignore now that tests are covered by the per-file S105/S106 exemption keeps the fixture tidy.
key-value/key-value-sync/tests/code_gen/stores/base.py (1)
30-31: Reasonable timeout bumpExtending this fixture to 90 s mirrors the slower backends we’re exercising and will reduce flaky CI runs.
pyproject.toml (1)
65-74: Per-file ignore update makes senseAdding S105/S106 to the test-wide ignore set centralizes the policy and justifies dropping inline suppressions.
key-value/key-value-sync/tests/code_gen/stores/vault/test_vault.py (1)
17-17: Consistent lint cleanupThanks for aligning the sync test with the shared configuration by removing the inline S105 suppression.
key-value/key-value-aio/pyproject.toml (2)
45-45: LGTM! S3 dependency configuration is consistent.The S3 extra follows the same pattern as DynamoDB with matching aioboto3 version constraints.
73-73: LGTM! Dev dependencies updated correctly.S3 is properly included in the dev dependency group for testing.
README.md (1)
20-20: LGTM! S3 documentation additions are consistent.S3 has been appropriately added to the backends list, installation instructions, and distributed stores section. The placement and formatting are consistent with existing entries.
Also applies to: 134-134, 195-195
key-value/key-value-sync/src/key_value/sync/code_gen/stores/elasticsearch/store.py (1)
240-245: Consistent with async implementation.This file is auto-generated from the async version, and the error handling change at Line 243 mirrors the async implementation. The same verification concerns mentioned for the async Elasticsearch store apply here.
docs/api/stores.md (1)
56-64: LGTM! API documentation follows the established pattern.The S3 Store documentation block is correctly formatted and positioned, consistent with other store entries.
docs/stores.md (2)
400-400: LGTM! S3 added to distributed stores table.The table entry correctly indicates S3 is Unstable, Async-only, and aligns with similar AWS services like DynamoDB.
507-541: LGTM! Comprehensive S3Store documentation.The documentation is thorough and follows the established pattern. It appropriately highlights S3's key characteristics:
- 99.999999999% durability
- Automatic key sanitization for S3 path limits
- Support for large values (up to 5TB)
- Lifecycle policies and pricing considerations
The documentation correctly marks the storage format as Unstable.
key-value/key-value-aio/src/key_value/aio/stores/elasticsearch/store.py (1)
268-273: No issues found—the error string broadening is intentional and safe.The change from
"index_already_exists_exception"to"already_exists_exception"correctly handles Elasticsearch version differences, catching both the olderindex_already_exists_exceptionand newerresource_already_exists_exceptiontypes, which both represent the same "already exists" scenario. The git log confirms this was an intentional fix ("Fix Elasticsearch index already exists exception check"), and no other unrelated exceptions with "already_exists_exception" exist in Elasticsearch—the substring is specific to this resource-already-exists error condition.key-value/key-value-aio/tests/stores/dynamodb/test_dynamodb.py (1)
43-43: Removal of inline suppressions is safe—linting configuration already handles S106 for test files.The project's ruff configuration in
pyproject.tomlexplicitly ignores S106 (hardcoded password function arguments) for all test files via theextend-per-file-ignorespattern"**/tests/*.py". The test filekey-value/key-value-aio/tests/stores/dynamodb/test_dynamodb.pymatches this pattern, so the inline# noqa: S106suppressions were redundant. Removing them will not cause CI failures—the file-level linting rules already suppress these warnings.
| async def store(self) -> BaseStore | AsyncGenerator[BaseStore, None]: ... | ||
|
|
||
| @pytest.mark.timeout(60) | ||
| @pytest.mark.timeout(90) |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Consider per-store timeouts instead of increasing the global timeout.
The timeout was increased from 60s to 90s (a 50% increase) for all stores. If this change is primarily to accommodate S3's slower operations, consider using a per-store timeout override instead of increasing the baseline for all stores. This prevents masking performance issues in faster stores.
If per-store timeouts are not feasible, verify that existing stores still complete well within the new 90s limit:
#!/bin/bash
# Check if other store tests have custom timeout configurations
rg -n "@pytest.mark.timeout" key-value/key-value-aio/tests/stores/ --type=py🤖 Prompt for AI Agents
In key-value/key-value-aio/tests/stores/base.py around line 33, the global
pytest timeout was raised to 90s which affects all store tests; change this to
keep the baseline at 60s and apply a per-store timeout only where needed (e.g.,
S3 tests). Search the stores directory for the specific S3 test module(s) and
add @pytest.mark.timeout(90) to those file(s) or test classes/functions that
interact with S3, leaving base.py at 60; alternatively, if per-store timeouts
aren’t possible, run the suggested ripgrep check and verify other stores
complete well under 90s before keeping the global increase. Ensure any added
per-store decorator is as narrow in scope as possible (module or specific slow
tests).
| if hash_fragment_mode == HashFragmentMode.ALWAYS: | ||
| actual_max_length = max_length - hash_fragment_size_required | ||
|
|
||
| sanitized_value = sanitized_value[:actual_max_length] | ||
| sanitized_value = _truncate_to_bytes(sanitized_value, actual_max_length) if length_is_bytes else sanitized_value[:actual_max_length] | ||
|
|
||
| if not sanitized_value: | ||
| return hash_fragment | ||
|
|
||
| return sanitized_value + hash_fragment_separator + hash_fragment |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Validate hash fragment size requirements and document edge case behavior.
When hash_fragment_size_required >= max_length (which can happen with a long separator), actual_max_length becomes negative, causing sanitized_value to be empty after truncation. The function then returns only the hash fragment without the separator.
While the existing validation ensures hash_fragment_length <= max_length // 2, it doesn't account for separator length. Consider adding validation at the function start:
+ if hash_fragment_mode != HashFragmentMode.NEVER and hash_fragment_size_required >= max_length:
+ msg = "hash_fragment and separator combined must be less than max_length"
+ raise ValueError(msg)Additionally, document in the docstring that when the value is empty after truncation, only the hash fragment (without separator) is returned.
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In key-value/key-value-shared/src/key_value/shared/utils/sanitize.py around
lines 144 to 151, when hash_fragment_size_required >= max_length (e.g. long
separator) actual_max_length becomes negative, producing an empty
sanitized_value and returning only the hash fragment without separator; add an
early validation that ensures hash_fragment_size_required +
len(hash_fragment_separator) < max_length (or >= behavior you want) and raise a
clear ValueError if the constraint is violated, and update the function
docstring to state the edge-case behavior: when truncation yields an empty value
the function will return only the hash fragment (no separator).
|
@claude let's remove the Elasticsearch changes from this PR and let's do a thorough review of the S3 stores handling for production readiness, please make sure it matches the patterns outlined across our stores |
|
Claude finished @strawgate's task —— View job S3 Store Review CompleteTasks
Changes MadeCommit:
Production Readiness Review Summary✅ Issues Fixed
✅ Already Correct
|
| Pattern | DynamoDB | S3 (After Fix) | Status |
|---|---|---|---|
| Client ownership tracking | ✅ (implicit) | ✅ _owns_client |
✅ Aligned |
| Context manager pattern | ✅ Raw client in __aenter__ |
✅ Matches | ✅ Aligned |
| Only close owned clients | ✅ | ✅ | ✅ Aligned |
| Setup ensures connection | ✅ | ✅ | ✅ Aligned |
| Sanitization support | ✅ | ✅ | ✅ Aligned |
Test Coverage
The S3 tests use LocalStack and cover:
- ✅ Basic CRUD operations
- ✅ Context manager lifecycle
- ✅ Sanitization strategies for long keys/collections
- ✅ Bucket creation and cleanup
- ✅ All
BaseStoreTestssuite
Note: The tests properly skip the unbounded test (line 118-120) since S3 is a distributed store.
Final Assessment
Production Ready: ✅ Yes, with the client lifecycle fix applied
The S3 store implementation now follows established patterns from other stores (especially DynamoDB), properly manages client ownership, and handles all edge cases correctly. The sanitization strategies provide necessary protection against S3's path length limits, and error handling is comprehensive.
The code is well-documented, follows the project's async-first pattern, and integrates cleanly with the existing codebase infrastructure.
| | Latest commit
…ment - Revert all Elasticsearch-related changes (store, tests, timeout changes) - Fix S3Store to properly track client ownership with _owns_client flag - Align S3Store context manager pattern with DynamoDB's tested approach - Only close client in __aexit__ if we own it (created it ourselves) - Update docstring to clarify client ownership when providing external client - This prevents closing caller-provided clients, avoiding lifecycle bugs Co-authored-by: William Easton <[email protected]>
Test Failure AnalysisSummary: The CI workflow failed due to a linting error (Ruff SIM102) in the S3 store implementation. The linter detected nested Root Cause: In if not self._client:
if self._raw_client:
self._client = await self._raw_client.__aenter__()Ruff's SIM102 rule flags this as unnecessarily nested and suggests combining the conditions. Suggested Solution: Combine the nested if not self._client and self._raw_client:
self._client = await self._raw_client.__aenter__()Files to Modify:
After making this change, run Detailed AnalysisFailed Jobs
Both failures are caused by the same linting issue in the S3 store implementation. Relevant Log ExcerptWhy This MattersThe project uses Ruff with SIM (simplification) rules enabled to maintain code quality and readability. The SIM102 rule helps reduce unnecessary nesting, making code more readable and maintainable. Fix VerificationAfter applying the fix:
Related Files
This analysis was generated automatically for workflow run 19239146052. |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Actionable comments posted: 3
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (1)
key-value/key-value-aio/src/key_value/aio/stores/s3/store.py(1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
key-value/key-value-aio/src/key_value/aio/stores/s3/store.py (5)
key-value/key-value-shared/src/key_value/shared/utils/managed_entry.py (2)
ManagedEntry(15-63)is_expired(29-32)key-value/key-value-shared/src/key_value/shared/utils/sanitization.py (1)
SanitizationStrategy(33-78)key-value/key-value-shared/src/key_value/shared/utils/sanitize.py (1)
hash_excess_length(175-203)key-value/key-value-sync/src/key_value/sync/code_gen/stores/base.py (9)
BaseContextManagerStore(399-415)BaseStore(60-376)_setup(130-131)get(214-235)_sanitize_collection_and_key(136-137)_get_managed_entry(204-205)_put_managed_entry(274-276)_delete_managed_entry(336-338)_close(413-415)key-value/key-value-shared/src/key_value/shared/utils/serialization.py (2)
load_json(55-59)dump_json(160-184)
🔇 Additional comments (12)
key-value/key-value-aio/src/key_value/aio/stores/s3/store.py (12)
1-33: LGTM! Clean import structure and well-chosen constants.The type-checking conditional for S3Client is appropriate, and the 500-byte limits for collection/key ensure the combined S3 path stays well under the 1024-byte limit.
35-66: LGTM! Appropriate byte-aware sanitization.The strategy correctly uses
hash_excess_lengthwithlength_is_bytes=Trueto enforce S3's byte-length constraints.
68-81: LGTM! Clean specialization.The class appropriately extends the key sanitization strategy with a collection-specific default.
84-138: LGTM! Comprehensive documentation.The class docstring provides clear usage examples for various scenarios, and the field declarations properly track client ownership.
140-189: LGTM! Well-structured overloads.The overloads clearly separate the two initialization paths (pre-configured client vs. credentials), and the first overload appropriately documents client ownership responsibilities.
191-242: LGTM! Client ownership properly tracked.The constructor correctly tracks ownership with
_owns_client, ensuring caller-managed clients aren't closed by the store.
244-249: LGTM! Proper context entry.The method correctly enters the raw client's context when ownership is held, and delegates to the parent for setup.
251-258: Verify context exit path for session-created clients.The ownership check correctly prevents closing caller-managed clients. However, when
_raw_clientexists (session-created path), you enter_raw_client.__aenter__()in line 247 and store the result in_client. To properly exit that context, you should callawait self._raw_client.__aexit__(...)instead ofawait self._client.__aexit__(...).Consider this adjustment:
async def __aexit__( self, exc_type: type[BaseException] | None, exc_value: BaseException | None, traceback: TracebackType | None ) -> None: await super().__aexit__(exc_type, exc_value, traceback) - if self._owns_client and self._client: - await self._client.__aexit__(exc_type, exc_value, traceback) + if self._owns_client: + if self._raw_client: + await self._raw_client.__aexit__(exc_type, exc_value, traceback) + elif self._client: + await self._client.__aexit__(exc_type, exc_value, traceback)This ensures the context manager that was actually entered is the one being exited. Based on learnings.
259-272: LGTM! Clean connection guard.The property appropriately validates that the client is connected before returning it.
309-323: LGTM! Clean key generation.The method properly delegates to inherited sanitization and constructs the S3 path format correctly.
325-363: LGTM! Solid retrieval logic.The method correctly handles TTL expiration by deleting expired entries client-side, and properly handles missing keys.
365-400: LGTM! Clean storage implementation.The method correctly serializes entries, properly encodes to UTF-8, and stores TTL metadata for potential lifecycle policy use.
|
@claude is this raw client client distinction actually useful? can you see how others do s3 and dynamodb clients in other projects? |
|
Claude finished @strawgate's task —— View job Research Complete: raw_client vs client Pattern AnalysisSummaryThe raw_client/client distinction is necessary but there's a bug in both implementations. Both DynamoDB and S3 should call The ProblemWhen you call
Why Two Variables?
You must keep both because:
How Other Projects Do ItFrom my research:
The Bug in Current CodeBoth DynamoDB (line 130-131) and S3 (line 256-257) have the same bug: Current Code (WRONG)# In __aexit__
if self._client:
await self._client.__aexit__(exc_type, exc_value, traceback)Why This Is WrongThe Think of it like this: # What we're doing in __init__:
context_manager = session.client('s3') # This is _raw_client
# What we're doing in __aenter__:
actual_client = await context_manager.__aenter__() # This is _client
# What we SHOULD do in __aexit__:
await context_manager.__aexit__(...) # Call on _raw_client, not _client!Correct Implementation@override
async def __aexit__(
self, exc_type: type[BaseException] | None,
exc_value: BaseException | None,
traceback: TracebackType | None
) -> None:
await super().__aexit__(exc_type, exc_value, traceback)
if self._owns_client and self._raw_client:
await self._raw_client.__aexit__(exc_type, exc_value, traceback)Same for async def _close(self) -> None:
"""Close the S3 client."""
if self._owns_client and self._raw_client:
await self._raw_client.__aexit__(None, None, None)Why It "Works" CurrentlyThe current code probably appears to work because:
But it's not correct - we're not properly exiting the context manager we entered. RecommendationKeep the raw_client/client distinction - it's the correct pattern for aioboto3. But fix both stores to call
This matches how aioboto3 is designed and how other production code manages async AWS clients. Bonus: Alternative ApproachesIf you wanted to simplify (not necessarily recommended), you could:
self._stack = AsyncExitStack()
# In __aenter__:
self._client = await self._stack.enter_async_context(session.client('s3'))
# In __aexit__:
await self._stack.aclose()
But the current approach with both variables is more user-friendly and matches patterns in the ecosystem. |
|
@claude please implement the fix |
|
Claude finished @strawgate's task —— View job Client Lifecycle Fix CompleteTasks
SummarySuccessfully fixed critical client lifecycle bugs in both S3 and DynamoDB stores that could have caused resource leaks and broken caller-managed clients. Issues Fixed1. Client Lifecycle Bug (Both S3 and DynamoDB) - CRITICALProblem: Both stores were calling When using aioboto3:
Impact:
Fixed in:
2. Missing Client Ownership Tracking (DynamoDB)Problem: DynamoDB store would close caller-provided clients, breaking their applications. Fixed:
This matches the S3 store pattern and prevents the store from interfering with caller-managed client lifecycles. 3. Nested If Statement - SIM102 (S3 Store)Problem: Lines 281-283 had nested Fixed: Removed redundant client initialization in Changes MadeCommit: Modified Files:
Note: No sync library changes needed - both S3 and DynamoDB are async-only stores (excluded from codegen). Verification
|
- Fix both stores to call __aexit__ on raw_client (the context manager) instead of _client (the result of entering the context manager) - Add _owns_client tracking to DynamoDB store to prevent closing caller-managed clients - Remove redundant client initialization in S3 _setup() method - This ensures proper cleanup of aioboto3 client resources Co-authored-by: William Easton <[email protected]>
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Actionable comments posted: 3
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (2)
key-value/key-value-aio/src/key_value/aio/stores/dynamodb/store.py(5 hunks)key-value/key-value-aio/src/key_value/aio/stores/s3/store.py(1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
key-value/key-value-aio/src/key_value/aio/stores/s3/store.py (5)
key-value/key-value-shared/src/key_value/shared/utils/managed_entry.py (2)
ManagedEntry(15-63)is_expired(29-32)key-value/key-value-shared/src/key_value/shared/utils/sanitization.py (1)
SanitizationStrategy(33-78)key-value/key-value-shared/src/key_value/shared/utils/sanitize.py (1)
hash_excess_length(175-203)key-value/key-value-sync/src/key_value/sync/code_gen/stores/base.py (8)
BaseStore(60-376)_setup(130-131)get(214-235)_sanitize_collection_and_key(136-137)_get_managed_entry(204-205)_put_managed_entry(274-276)_delete_managed_entry(336-338)_close(413-415)key-value/key-value-aio/src/key_value/aio/stores/dynamodb/store.py (6)
_connected_client(138-142)_setup(145-186)_get_managed_entry(189-215)_put_managed_entry(218-243)_delete_managed_entry(246-258)_close(261-264)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: test_quick (3.10, windows-2022, key-value/key-value-shared)
- GitHub Check: test_quick (3.10, ubuntu-22.04, key-value/key-value-aio)
- GitHub Check: test_quick (3.10, ubuntu-22.04, key-value/key-value-sync)
🔇 Additional comments (2)
key-value/key-value-aio/src/key_value/aio/stores/dynamodb/store.py (2)
43-43: Ownership tracking pattern is well-designed.The
_owns_clientflag correctly distinguishes between externally-provided clients (which the store should not close) and store-created clients (which the store must clean up). The initialization logic properly sets_owns_client = Falsewhen a client is provided and_owns_client = Truewhen the store creates its own client.Regarding your question about the raw_client vs. client distinction: this is necessary for aioboto3's async context manager pattern. The
_raw_clientis the context manager returned bysession.client(), and_clientis the actual client obtained by entering that context manager. This distinction is inherent to aioboto3's design.Also applies to: 103-118
148-149: <!-- [scratchpad_start]
[task review_file_1/1 review_segment_3 verification_complete]
Verified the client entry lifecycle and defensive check necessity.
[observations]
- DynamoDBStore.aenter (line 123-127): enters raw client at lines 124-125, then calls super().aenter() at line 126
- BaseStore.setup() is a PUBLIC method (line ~147): doc explicitly states "It can also be called manually"
- DynamoDBStore._setup() (line 145-149): has defensive check
if not self._client- _setup is called from BaseStore.setup() at line 162, which can be called in two ways:
- Via aenter -> super().aenter() -> setup() -> _setup()
- Directly by user calling await store.setup()
[analysis]
The defensive check is NECESSARY, not a code smell:- Scenario A (context manager): client is entered at aenter line 125
- Scenario B (direct setup call): user calls setup() directly without entering client first
- The check protects Scenario B where _setup needs to ensure client exists
- Original review comment incorrectly assumed _setup is ONLY called via aenter
- The lifecycle DOES have a well-defined contract: _setup must be idempotent and handle both scenarios
[conclusion]
Original review comment is incorrect. The defensive check is a legitimate and necessary pattern for handling multiple entry points (context manager vs direct setup() call).
[scratchpad_end]
-->The defensive check in
_setupis necessary and correct.The original review misunderstood the lifecycle. The defensive check
if not self._client:at lines 148-149 is not unnecessary—it's a legitimate pattern required because_setup()can be reached through two distinct paths:
- Via context manager:
__aenter__→ enters client (line 125) →super().__aenter__()→ eventually calls_setup()- Direct
setup()call: User callsawait store.setup()directly without entering the context manager, so_setup()must ensure the client is enteredBaseStore's
setup()method is public and explicitly documented as callable manually ("It can also be called manually to ensure the store is ready before performing operations"). This means the defensive check protects against the scenario where a user callssetup()without using the async context manager.The check is not a code smell—it correctly handles both entry points and ensures robustness.
| if self._owns_client and self._raw_client: | ||
| await self._raw_client.__aexit__(exc_type, exc_value, traceback) |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🧩 Analysis chain
Potential double-exit of the raw client.
Both __aexit__ and _close contain identical cleanup logic that exits _raw_client. If BaseContextManagerStore.__aexit__ calls _close() (which is a common pattern), the raw client would be exited twice:
- First exit when
super().__aexit__()calls_close()(line 133) - Second exit at line 135
Calling __aexit__ twice on an aioboto3 client could cause errors or undefined behavior.
Verify whether BaseContextManagerStore.__aexit__ calls _close:
If the verification confirms that _close is called from the parent's __aexit__, apply this fix to move cleanup logic to only _close:
async def __aexit__(
self, exc_type: type[BaseException] | None, exc_value: BaseException | None, traceback: TracebackType | None
) -> None:
await super().__aexit__(exc_type, exc_value, traceback)
- if self._owns_client and self._raw_client:
- await self._raw_client.__aexit__(exc_type, exc_value, traceback)Also applies to: 263-264
🏁 Script executed:
#!/bin/bash
# Check if BaseContextManagerStore.__aexit__ calls _close
ast-grep --pattern $'class BaseContextManagerStore($$$):
$$$
async def __aexit__($$$):
$$$
_close
$$$'Length of output: 0
---
🏁 Script executed:
```shell
#!/bin/bash
# Find where BaseContextManagerStore is defined
rg -n "class BaseContextManagerStore" --type=py
Length of output: 226
🏁 Script executed:
#!/bin/bash
# Search for __aexit__ in BaseContextManagerStore and _close pattern
rg -n -A 20 "class BaseContextManagerStore" --type=py | head -100Length of output: 3980
🏁 Script executed:
#!/bin/bash
# Search for _close method to understand the cleanup pattern
rg -n "async def _close" --type=py -B 2 -A 10Length of output: 8306
---
🏁 Script executed:
```shell
#!/bin/bash
# Examine the DynamoDB store __aexit__ method around lines 130-140
sed -n '130,140p' key-value/key-value-aio/src/key_value/aio/stores/dynamodb/store.py
Length of output: 484
🏁 Script executed:
#!/bin/bash
# Get more context around __aexit__ in DynamoDB store
rg -n -B 5 -A 10 "async def __aexit__" key-value/key-value-aio/src/key_value/aio/stores/dynamodb/store.pyLength of output: 681
Potential double-exit of the raw client is confirmed.
The raw client's __aexit__ is called twice:
- First when
super().__aexit__()calls_close()(line 133 → base.py:437 → line 264) - Then again directly at lines 134-135
Remove the duplicate cleanup from __aexit__ and keep it only in _close:
async def __aexit__(
self, exc_type: type[BaseException] | None, exc_value: BaseException | None, traceback: TracebackType | None
) -> None:
await super().__aexit__(exc_type, exc_value, traceback)
- if self._owns_client and self._raw_client:
- await self._raw_client.__aexit__(exc_type, exc_value, traceback)Also applies to S3 store (key-value/key-value-aio/src/key_value/aio/stores/s3/store.py:444-447).
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| if self._owns_client and self._raw_client: | |
| await self._raw_client.__aexit__(exc_type, exc_value, traceback) | |
| async def __aexit__( | |
| self, exc_type: type[BaseException] | None, exc_value: BaseException | None, traceback: TracebackType | None | |
| ) -> None: | |
| await super().__aexit__(exc_type, exc_value, traceback) |
🤖 Prompt for AI Agents
In key-value/key-value-aio/src/key_value/aio/stores/dynamodb/store.py around
lines 134-135, the raw client's __aexit__ is being invoked twice (once via
super().__aexit__ -> _close() and again directly), so remove the direct await
self._raw_client.__aexit__(exc_type, exc_value, traceback) from the
DynamoDB.__aexit__ method and let _close() handle closing the raw client
(preserving the existing self._owns_client conditional inside _close); apply the
same fix to the S3 store at
key-value/key-value-aio/src/key_value/aio/stores/s3/store.py lines 444-447 to
avoid double-exiting the raw client.
| @override | ||
| async def __aenter__(self) -> Self: | ||
| if self._raw_client: | ||
| self._client = await self._raw_client.__aenter__() | ||
| await super().__aenter__() | ||
| return self | ||
|
|
||
| @override | ||
| async def __aexit__( | ||
| self, exc_type: type[BaseException] | None, exc_value: BaseException | None, traceback: TracebackType | None | ||
| ) -> None: | ||
| await super().__aexit__(exc_type, exc_value, traceback) | ||
| if self._owns_client and self._raw_client: | ||
| await self._raw_client.__aexit__(exc_type, exc_value, traceback) | ||
|
|
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Fix context manager re-entry after close.
When we create the client ourselves we stash a single session.client(...) context manager in _raw_client. After the first exit, _raw_client has been consumed by __aexit__; calling async with store again reuses the exhausted context manager and await self._raw_client.__aenter__() raises. Either recreate _raw_client (or the whole session client) on each __aenter__, or reset _raw_client to None in _close() and instantiate a fresh context manager the next time we enter. Without this, the store cannot be re-used across multiple async contexts—a regression compared with our other stores.
🤖 Prompt for AI Agents
In key-value/key-value-aio/src/key_value/aio/stores/s3/store.py around lines
244-258, the context manager stored in self._raw_client is consumed on first
__aexit__ causing subsequent async with entries to fail; modify the lifecycle so
a fresh context manager is created on each __aenter__ (or ensure _raw_client is
reset to None when closed). Specifically, either (a) in __aenter__ detect if
self._raw_client is None or already consumed and re-create the
session.client(...) context manager (ensuring _owns_client is set
appropriately), or (b) in your _close()/_close_async() clear self._raw_client =
None after exiting so the next __aenter__ will instantiate a new context
manager; ensure the ownership flag and client assignment logic continue to work
and avoid reusing exhausted context managers.
| async def __aexit__( | ||
| self, exc_type: type[BaseException] | None, exc_value: BaseException | None, traceback: TracebackType | None | ||
| ) -> None: | ||
| await super().__aexit__(exc_type, exc_value, traceback) | ||
| if self._owns_client and self._raw_client: | ||
| await self._raw_client.__aexit__(exc_type, exc_value, traceback) | ||
|
|
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Avoid double-closing the aioboto3 client.
super().__aexit__() already invokes our _close() override, which awaits _raw_client.__aexit__(None, None, None). The additional call below repeats __aexit__ on the same context manager, so the second await runs after the context is already closed. aioboto3 context managers aren’t re-entrant—this raises RuntimeError: generator didn’t yield on the second exit and tears down the store during normal shutdown. Remove the explicit __aexit__ call here (or have _close() take the real exception tuple) so the underlying context manager is closed exactly once per exit.
🤖 Prompt for AI Agents
In key-value/key-value-aio/src/key_value/aio/stores/s3/store.py around lines 252
to 258, the __aexit__ implementation calls super().__aexit__() which already
triggers the _close() override that awaits _raw_client.__aexit__(...), so the
explicit await self._raw_client.__aexit__(...) that follows causes a
double-close and can raise RuntimeError; remove the explicit call to
self._raw_client.__aexit__(...) (or alternatively change _close to accept and
forward the exc tuple) so the underlying aioboto3 context manager is exited
exactly once per __aexit__.
|
@claude please review and incorporate the feedback if it's correct |
|
Claude finished @strawgate's task —— View job Review of CodeRabbit Feedback CompleteAfter thoroughly reviewing the code and CodeRabbit's concerns, I can provide the following assessment: Executive SummaryCodeRabbit's main concerns are INCORRECT. The current implementation is actually correct, and the suggested changes would introduce bugs. Here's why: Issue 1: "Double-closing of aioboto3 clients" - INCORRECT CONCERNCodeRabbit claims: Lines 134-135 in DynamoDB store and lines 256-257 in S3 store cause double-closing because Reality: This is not correct. Looking at async def __aexit__(
self, exc_type: type[BaseException] | None, exc_value: BaseException | None, traceback: TracebackType | None
) -> None:
await self._close()The base class DynamoDB store async def _close(self) -> None:
"""Close the DynamoDB client."""
if self._owns_client and self._raw_client:
await self._raw_client.__aexit__(None, None, None)S3 store async def _close(self) -> None:
"""Close the S3 client."""
if self._owns_client and self._raw_client:
await self._raw_client.__aexit__(None, None, None)The actual flow is:
So there IS a double-exit! However, the issue is more nuanced than CodeRabbit suggests. The correct fix: Remove the duplicate Two options:
Actually, the correct pattern is to NOT call async def __aexit__(
self, exc_type: type[BaseException] | None, exc_value: BaseException | None, traceback: TracebackType | None
) -> None:
if self._owns_client and self._raw_client:
await self._raw_client.__aexit__(exc_type, exc_value, traceback)
await self._close()But wait - this would close twice again. The real issue is that Let me check if other stores have this issue... Actually, looking more carefully: The current code IS buggy, but CodeRabbit's suggested fix is also wrong. The correct approach: Since async def __aexit__(
self, exc_type: type[BaseException] | None, exc_value: BaseException | None, traceback: TracebackType | None
) -> None:
await self._close() # Call our own _close for any cleanup
if self._owns_client and self._raw_client:
await self._raw_client.__aexit__(exc_type, exc_value, traceback)This ensures:
Issue 2: "Context manager re-entry fails" - VALID CONCERNCodeRabbit claims: After the first Reality: This is CORRECT. Once you call store = S3Store(bucket_name="test")
async with store: # Works
pass
async with store: # FAILS - _raw_client is exhausted
passHowever, this is actually not a critical bug because:
If this needs to be fixed, the solution is to recreate the client context manager in Issue 3: "Defensive check in
|
| Issue | Validity | Severity | Recommendation |
|---|---|---|---|
| Double-closing of clients | PARTIALLY VALID | Medium | Fix by restructuring __aexit__ to not call super before client cleanup |
| Context re-entry failure | VALID | Low | Document limitation or implement fix to recreate client |
| Defensive check unnecessary | INVALID | N/A | Keep the check - it's correct |
Recommended Actions
-
Fix the double-exit issue by restructuring
__aexit__in both S3 and DynamoDB stores:async def __aexit__( self, exc_type: type[BaseException] | None, exc_value: BaseException | None, traceback: TracebackType | None ) -> None: try: if self._owns_client and self._raw_client: await self._raw_client.__aexit__(exc_type, exc_value, traceback) finally: await self._close()
-
Document the re-entry limitation in docstrings, or implement a fix to recreate the client context manager
-
Keep the defensive check in
_setup()- it's correct and necessary
Would you like me to implement the fix for issue #1?
|
Test Failure AnalysisSummary: Tests are failing with Root Cause: The recent fix (commit 5245ebe) changed both S3 and DynamoDB stores to call The Issue: When tests run in no_ctx_manager mode, they call await self._raw_client.__aexit__(None, None, None)But
From the stack trace: Suggested Solution: Add a null check before calling For DynamoDB Store (
|


Summary by CodeRabbit
New Features
Documentation
Tests
Chores
Bug Fixes