DR-6663 Added rate limiter to create import
#73
base: main
Conversation
Deploying with Cloudflare Workers

| Status | Name | Latest Commit | Preview URL | Updated (UTC) |
|---|---|---|---|---|
| ⛔ Deployment terminated (View logs) | claim-db-worker | 46b9bf2 | Commit Preview URL / Branch Preview URL | Dec 18 2025, 03:32 PM |
Walkthrough

This change introduces programmatic-specific rate limiting for database creation. A new optional …

Changes
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Possibly related PRs
Pre-merge checks
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✅ Preview CLIs & Workers are live! Test the CLIs locally under tag:

npx create-db@pr73
npx create-pg@pr73
npx create-postgres@pr73

Worker URLs
Actionable comments posted: 1
🧹 Nitpick comments (2)
create-db/src/rate-limiter.ts (2)
21-37: Consider clarifying method name to reflect side effect.

The `checkLimit()` method both checks the limit AND records the request (line 35). While this works correctly for the use case, the name suggests it's read-only. Consider renaming to make the side effect explicit, e.g., `tryAcquire()` or `checkAndRecord()`. This is a minor naming concern and doesn't affect correctness.
21-37: Consider extracting duplicate cleanup logic.

The request cleanup logic is duplicated between `checkLimit()` (lines 25-27) and `getCurrentCount()` (lines 63-65). Consider extracting this into a private method:

```diff
+  /**
+   * Remove expired requests from tracking
+   * @private
+   */
+  private cleanupExpiredRequests(): void {
+    const now = Date.now();
+    this.requests = this.requests.filter(
+      (record) => now - record.timestamp < WINDOW_MS
+    );
+  }
+
   checkLimit(): boolean {
-    const now = Date.now();
-
-    // Remove expired requests outside the time window
-    this.requests = this.requests.filter(
-      (record) => now - record.timestamp < WINDOW_MS
-    );
+    this.cleanupExpiredRequests();

     // Check if we've exceeded the limit
     if (this.requests.length >= MAX_REQUESTS) {
       return false;
     }

     // Add this request to the tracking
-    this.requests.push({ timestamp: now });
+    this.requests.push({ timestamp: Date.now() });

     return true;
   }

   getCurrentCount(): number {
-    const now = Date.now();
-    this.requests = this.requests.filter(
-      (record) => now - record.timestamp < WINDOW_MS
-    );
+    this.cleanupExpiredRequests();

     return this.requests.length;
   }
```

Also applies to: 61-67
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- `create-db/__tests__/rate-limiter.test.ts` (1 hunks)
- `create-db/src/index.ts` (2 hunks)
- `create-db/src/rate-limiter.ts` (1 hunks)
- `create-db/src/types.ts` (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
create-db/src/index.ts (1)
create-db/src/rate-limiter.ts (1)
`globalRateLimiter` (84-84)
create-db/__tests__/rate-limiter.test.ts (1)
create-db/src/rate-limiter.ts (1)
`globalRateLimiter` (84-84)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: Workers Builds: create-db-worker
- GitHub Check: Workers Builds: claim-db-worker
🔇 Additional comments (4)
create-db/src/types.ts (1)
57-61: LGTM!

The `rateLimitInfo` property addition is well-structured and appropriately optional. The fields clearly convey retry timing and limit context for rate-limited errors.

create-db/src/index.ts (2)
35-35: LGTM!

The import is clean and brings in the global rate limiter instance for use in the create function.
394-410: Verify per-process rate limiting is acceptable for your deployment model.

The rate limit check is correctly positioned and the error response is well-structured. However, note that the in-memory rate limiter is per-process; if multiple instances run concurrently (e.g., in a cluster, serverless functions, or containers), each maintains its own independent limit.

For a CLI tool, this is typically acceptable. But if `create()` is called from long-running services or serverless environments with multiple instances, users could exceed the intended global limit. Consider whether distributed rate limiting (e.g., Redis-based) is needed for your deployment scenarios, or document the per-process limitation clearly.
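If a shared limit across instances ever becomes necessary, a rough sketch of the Redis-based idea mentioned above, assuming an `ioredis` client and an illustrative key name (none of this is part of the PR):

```typescript
import Redis from "ioredis";

const redis = new Redis(process.env.REDIS_URL ?? "redis://localhost:6379");

// Hypothetical fixed-window counter shared across processes.
// Every instance increments the same key, so the limit is enforced globally.
async function tryAcquireDistributed(
  key: string,
  maxRequests: number,
  windowSeconds: number
): Promise<boolean> {
  const count = await redis.incr(key);
  if (count === 1) {
    // First request in this window: start the expiry clock.
    await redis.expire(key, windowSeconds);
  }
  return count <= maxRequests;
}

// Example: at most 10 programmatic creations per minute across all instances.
// const allowed = await tryAcquireDistributed("create-db:programmatic", 10, 60);
```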
create-db/src/rate-limiter.ts (1)
1-84: LGTM! Well-designed rate limiter with correct concurrency handling.

The implementation is solid:
- ✓ Synchronous methods are safe in Node.js's single-threaded event loop (no race conditions)
- ✓ Fixed limits as documented (security feature)
- ✓ Clean API with clear method responsibilities
- ✓ Correct window-based rate limiting logic
The per-process, in-memory design is appropriate for a CLI tool. Note that this won't enforce limits across multiple running instances, which is acceptable for the use case.
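For readers skimming the thread, a condensed sketch of the pattern described above; the constant values and record shape are assumptions, not a copy of `create-db/src/rate-limiter.ts`:

```typescript
// Assumed limits for illustration; the real values live in create-db/src/rate-limiter.ts.
const MAX_REQUESTS = 5;
const WINDOW_MS = 60_000;

interface RequestRecord {
  timestamp: number;
}

class InMemoryRateLimiter {
  private requests: RequestRecord[] = [];

  /** Returns true and records the request if the sliding window still has room. */
  checkLimit(): boolean {
    const now = Date.now();
    this.requests = this.requests.filter((r) => now - r.timestamp < WINDOW_MS);
    if (this.requests.length >= MAX_REQUESTS) {
      return false;
    }
    this.requests.push({ timestamp: now });
    return true;
  }

  /** Number of requests still counted inside the current window. */
  getCurrentCount(): number {
    const now = Date.now();
    this.requests = this.requests.filter((r) => now - r.timestamp < WINDOW_MS);
    return this.requests.length;
  }
}

// Shared per-process instance, mirroring the exported globalRateLimiter.
export const globalRateLimiter = new InMemoryRateLimiter();
```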
✅ Preview CLIs & Workers are live! Test the CLIs locally under tag:

npx create-db@pr73
npx create-pg@pr73
npx create-postgres@pr73

Worker URLs
Actionable comments posted: 1
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- `create-db-worker/src/index.ts` (3 hunks)
- `create-db-worker/wrangler.jsonc` (1 hunks)
- `create-db/src/database.ts` (3 hunks)
- `create-db/src/index.ts` (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
create-db/src/index.ts (1)
create-db/src/database.ts (1)
`createDatabaseCore` (12-154)
create-db-worker/src/index.ts (2)
claim-db-worker/worker-configuration.d.ts (1)
`env` (6913-6913)

create-db-worker/worker-configuration.d.ts (1)

`env` (6794-6794)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: Workers Builds: claim-db-worker
- GitHub Check: Workers Builds: create-db-worker
🔇 Additional comments (8)
create-db-worker/src/index.ts (3)
9-9: LGTM!

The `PROGRAMMATIC_RATE_LIMITER` binding is correctly added to the `Env` interface, matching the new binding in `wrangler.jsonc`.
134-134: LGTM!

The `source` field properly restricts values to the expected union type.
175-189: Good fail-closed behavior for programmatic requests.

Returning a 503 when the rate limiter is unavailable is the correct security posture for programmatic API access, preventing abuse during outages.
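For context, a sketch of the fail-closed shape being described; the binding name matches the PR, but the handler structure, helper name, and the minimal binding type are assumptions rather than the worker's actual code:

```typescript
interface Env {
  // Minimal shape of Cloudflare's rate-limiting binding that the handler relies on
  // (the generated worker-configuration.d.ts provides the real type).
  PROGRAMMATIC_RATE_LIMITER: {
    limit(options: { key: string }): Promise<{ success: boolean }>;
  };
}

// Returns a Response to short-circuit with, or null if the request may proceed.
async function enforceProgrammaticLimit(env: Env, clientKey: string): Promise<Response | null> {
  let res: { success: boolean };
  try {
    res = await env.PROGRAMMATIC_RATE_LIMITER.limit({ key: clientKey });
  } catch {
    // Fail closed: if the limiter is unreachable, refuse programmatic creation.
    return new Response(
      JSON.stringify({ error: "RATE_LIMITER_UNAVAILABLE", message: "Try again shortly." }),
      { status: 503, headers: { "Content-Type": "application/json" } }
    );
  }
  if (!res.success) {
    return new Response(
      JSON.stringify({ error: "RATE_LIMIT_EXCEEDED", message: "Rate limit exceeded." }),
      { status: 429, headers: { "Content-Type": "application/json", "Retry-After": "60" } }
    );
  }
  return null; // Within limits; the caller proceeds with database creation.
}
```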
create-db-worker/wrangler.jsonc (1)
43-51: LGTM!

The stricter rate limit (5 requests per 60 seconds) for programmatic access is appropriately configured with a separate namespace, allowing independent rate tracking from general API traffic.
create-db/src/database.ts (2)
17-18: LGTM!

The `source` parameter is correctly added with the appropriate type and defaulting behavior.
44-57: Good defensive parsing with structured error propagation.

The try/catch around JSON parsing ensures graceful fallback if the response format changes, while properly extracting `rateLimitInfo` when available from programmatic rate limit responses.
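The pattern being praised looks roughly like the sketch below; the helper name, variable names, and fallback message are illustrative rather than copied from `database.ts`:

```typescript
// Hypothetical sketch of a defensive error-body parse with structured
// rate-limit propagation (not the PR's exact code).
interface RateLimitInfo {
  retryAfterMs?: number;
  maxRequests?: number;
}

async function readErrorBody(
  response: Response
): Promise<{ message: string; rateLimitInfo?: RateLimitInfo }> {
  try {
    const body = (await response.json()) as {
      message?: string;
      rateLimitInfo?: RateLimitInfo;
    };
    return {
      message: body.message ?? `Request failed with status ${response.status}`,
      // Pass structured retry info through when the worker included it.
      rateLimitInfo: body.rateLimitInfo,
    };
  } catch {
    // Body was not JSON (or the format changed): fall back to a generic message.
    return { message: `Request failed with status ${response.status}` };
  }
}
```

create-db/src/index.ts (2)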
77-90: LGTM!

The `source` parameter is properly threaded through the wrapper function to the core implementation.
395-400: LGTM!

The programmatic `create()` function correctly passes `source: "programmatic"` to enable the stricter rate limiting on the worker side, while `cliRunId` is appropriately set to `undefined` since there's no CLI session.
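To illustrate what this enables downstream, a hedged sketch of a consumer calling the programmatic API and honoring the structured retry hint; the package name and the exact return type of `create()` are assumed here, not taken from `create-db/src/types.ts`:

```typescript
// Hypothetical consumer of the programmatic API; the real return type lives in create-db/src/types.ts.
import { create } from "create-db";

type CreateResult =
  | { connectionString: string }
  | { error: "RATE_LIMIT_EXCEEDED"; message: string; rateLimitInfo?: { retryAfterMs?: number } };

async function provisionDatabase(): Promise<void> {
  const result = (await create()) as CreateResult; // internally sends source: "programmatic"
  if ("error" in result) {
    // Respect the structured retry hint returned by the worker.
    const waitMs = result.rateLimitInfo?.retryAfterMs ?? 60_000;
    console.warn(`Rate limited; retrying in ${waitMs} ms`);
    await new Promise((resolve) => setTimeout(resolve, waitMs));
    await create();
    return;
  }
  console.log("Database created");
}
```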
```typescript
if (!res.success) {
  return new Response(
    JSON.stringify({
      error: 'RATE_LIMIT_EXCEEDED',
      message: 'Rate limit exceeded for programmatic database creation. You can create up to 1 database per minute. Please try again later.',
      rateLimitInfo: {
        retryAfterMs: 60000, // Approximate - Cloudflare doesn't expose exact timing
        currentCount: 1,
        maxRequests: 1,
      },
    }),
    {
      status: 429,
      headers: {
        'Content-Type': 'application/json',
        'Retry-After': '60',
      },
    },
  );
}
```
Inconsistent rate limit values in error response.

The error message says "1 database per minute" and `rateLimitInfo` reports `maxRequests: 1`, but the actual rate limiter config in `wrangler.jsonc` is `limit: 5` per 60 seconds. This mismatch will confuse API consumers and may cause incorrect retry behavior.
🔎 Apply this diff to fix the inconsistency:

```diff
 if (!res.success) {
   return new Response(
     JSON.stringify({
       error: 'RATE_LIMIT_EXCEEDED',
-      message: 'Rate limit exceeded for programmatic database creation. You can create up to 1 database per minute. Please try again later.',
+      message: 'Rate limit exceeded for programmatic database creation. You can create up to 5 databases per minute. Please try again later.',
       rateLimitInfo: {
         retryAfterMs: 60000, // Approximate - Cloudflare doesn't expose exact timing
-        currentCount: 1,
-        maxRequests: 1,
+        maxRequests: 5,
       },
     }),
```

Also consider removing `currentCount` since it's hardcoded and doesn't reflect the actual request count.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
if (!res.success) {
  return new Response(
    JSON.stringify({
      error: 'RATE_LIMIT_EXCEEDED',
      message: 'Rate limit exceeded for programmatic database creation. You can create up to 5 databases per minute. Please try again later.',
      rateLimitInfo: {
        retryAfterMs: 60000, // Approximate - Cloudflare doesn't expose exact timing
        maxRequests: 5,
      },
    }),
    {
      status: 429,
      headers: {
        'Content-Type': 'application/json',
        'Retry-After': '60',
      },
    },
  );
}
```
🤖 Prompt for AI Agents
In create-db-worker/src/index.ts around lines 155 to 174, the rate-limit error
response hardcodes "1 database per minute", maxRequests: 1 and a currentCount: 1
while the actual rate limiter in wrangler.jsonc is limit: 5 per 60 seconds;
update the response to reflect the real limiter (set message to "5 databases per
minute" or better: derive values from the rate-limit config/constants, set
rateLimitInfo.maxRequests to 5, remove the hardcoded currentCount field, and
keep retryAfterMs/Retry-After in sync with the configured window (60 seconds));
ensure these values are read from the same config/constants used by the limiter
rather than duplicated literals.
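One way to act on that prompt, sketched under the assumption that a small shared constants module (names illustrative) can feed both the message and `rateLimitInfo`:

```typescript
// Hypothetical shared constants so the limiter config and the error response
// cannot drift apart again.
export const PROGRAMMATIC_RATE_LIMIT = {
  maxRequests: 5,    // keep in sync with the wrangler.jsonc "limit"
  windowSeconds: 60, // keep in sync with the wrangler.jsonc "period"
} as const;

export function rateLimitExceededResponse(): Response {
  const { maxRequests, windowSeconds } = PROGRAMMATIC_RATE_LIMIT;
  return new Response(
    JSON.stringify({
      error: "RATE_LIMIT_EXCEEDED",
      message: `Rate limit exceeded for programmatic database creation. You can create up to ${maxRequests} databases per ${windowSeconds} seconds. Please try again later.`,
      rateLimitInfo: {
        retryAfterMs: windowSeconds * 1000,
        maxRequests,
      },
    }),
    {
      status: 429,
      headers: {
        "Content-Type": "application/json",
        "Retry-After": String(windowSeconds),
      },
    }
  );
}
```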
Summary by CodeRabbit