Conversation

@aidankmcalister (Member) commented Dec 18, 2025

Summary by CodeRabbit

  • New Features
    • Rate limiting now applied to programmatic database creation (5 requests per 60 seconds)
    • Improved rate limit error responses with detailed information including retry timing and usage metrics


@cloudflare-workers-and-pages bot commented Dec 18, 2025

Deploying with Cloudflare Workers

The latest updates on your project. Learn more about integrating Git with Workers.

  • Name: claim-db-worker
  • Status: ⛔ Deployment terminated (view logs)
  • Latest commit: 46b9bf2
  • Updated (UTC): Dec 18 2025, 03:32 PM

@coderabbitai bot commented Dec 18, 2025

Walkthrough

This change introduces programmatic-specific rate limiting for database creation. A new optional source parameter ("programmatic" | "cli") is threaded through the client library to distinguish creation origins. The worker applies rate limiting (5 requests per 60 seconds per IP) for programmatic requests, and enhanced error handling parses rate-limit responses to return structured limit information.
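
To make the threading concrete, here is a minimal sketch of how a creation origin could flow into the worker payload. The names mirror the walkthrough (`source`, `createDatabaseCore`), but the exact signatures and payload shape are assumptions, not the PR's actual code:

```typescript
// Creation origin, defaulting to "cli" when not supplied.
type Source = "programmatic" | "cli";

interface CreatePayload {
  region: string;
  source: Source;
}

// Hypothetical helper standing in for the payload-building step of
// createDatabaseCore: forwards the source so the worker can apply
// the stricter programmatic rate limit.
function buildCreatePayload(region: string, source: Source = "cli"): CreatePayload {
  return { region, source };
}
```

A programmatic caller would pass `"programmatic"` explicitly, while CLI paths rely on the default.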

Changes

Cohort / File(s) — Summary

  • Type Definitions — create-db/src/types.ts
    Added an optional rateLimitInfo property to the DatabaseError interface with fields for retryAfterMs, currentCount, and maxRequests.
  • Source Parameter Threading — create-db/src/index.ts, create-db/src/database.ts
    Added an optional source?: "programmatic" | "cli" parameter to distinguish creation origins; programmatic paths supply source = "programmatic" while CLI paths use the default "cli". Updated createDatabaseCore to forward source in the worker payload.
  • Rate Limit Error Handling — create-db/src/database.ts
    Extended 429 response handling to parse RATE_LIMIT_EXCEEDED errors and return a structured result with a rateLimitInfo object; preserves the fallback to generic 429 handling if parsing fails.
  • Worker Rate Limiting Configuration & Implementation — create-db-worker/wrangler.jsonc, create-db-worker/src/index.ts
    Added a PROGRAMMATIC_RATE_LIMITER binding (5 req/60 s); implemented programmatic-specific rate limiting in the /create flow that keys by client IP, returns 429 with a detailed error when the limit is exceeded, and 503 on limiter errors.
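
The worker-side gate described above can be sketched as follows, with the Cloudflare rate limiter binding stubbed behind an interface so the logic is testable. The binding call shape (`limit({ key })` returning `{ success }`) follows Cloudflare's rate limiting API; the response bodies and field names are assumptions based on this review:

```typescript
// Stub of Cloudflare's rate limiter binding surface.
interface RateLimiter {
  limit(opts: { key: string }): Promise<{ success: boolean }>;
}

// Returns a Response to short-circuit with, or null if the request
// may proceed to database creation.
async function gateProgrammaticCreate(
  limiter: RateLimiter,
  clientIp: string,
): Promise<Response | null> {
  let res: { success: boolean };
  try {
    // Key by client IP so each caller gets an independent window.
    res = await limiter.limit({ key: clientIp });
  } catch {
    // Fail closed: if the limiter is unavailable, reject with 503
    // rather than allowing unmetered programmatic creation.
    return new Response(JSON.stringify({ error: "RATE_LIMITER_UNAVAILABLE" }), {
      status: 503,
      headers: { "Content-Type": "application/json" },
    });
  }
  if (!res.success) {
    return new Response(
      JSON.stringify({
        error: "RATE_LIMIT_EXCEEDED",
        rateLimitInfo: { retryAfterMs: 60_000, maxRequests: 5 },
      }),
      {
        status: 429,
        headers: { "Content-Type": "application/json", "Retry-After": "60" },
      },
    );
  }
  return null; // allowed
}
```

The null-or-Response return keeps the gate composable: the /create handler can `return` early when a Response comes back and otherwise continue unchanged.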

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

  • Rate limiting logic in create-db-worker/src/index.ts — Verify correct IP-based key generation, proper error code responses (429 vs. 503), and Retry-After header compliance.
  • Error parsing in create-db/src/database.ts — Ensure RATE_LIMIT_EXCEEDED parsing logic correctly extracts rateLimitInfo and gracefully falls back to generic handling on parse failure.
  • Type safety of rateLimitInfo — Confirm optional property and nested object structure are handled correctly downstream.

Possibly related PRs

  • fix: add ratelimiting key #57 — Modifies rate-limiting logic to key by client IP address; directly overlaps with the programmatic rate limiter implementation in this PR.

Pre-merge checks

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning — Docstring coverage is 50.00%, below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve coverage.
✅ Passed checks (2 passed)
  • Description Check ✅ Passed — Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed — The title accurately reflects the main change: adding rate limiting to the create database import feature.

Comment @coderabbitai help to get the list of available commands and usage tips.

@github-actions

Preview CLIs & Workers are live!

Test the CLIs locally under tag pr73-DR-6663-rate-limiting-import-20324781629:

npx create-db@pr73
npx create-pg@pr73
npx create-postgres@pr73

Worker URLs
• Create-DB Worker: https://create-db-temp.prisma.io
• Claim-DB Worker: https://create-db.prisma.io

These will live as long as this PR exists under tag pr73-DR-6663-rate-limiting-import-20324781629.

@coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (2)
create-db/src/rate-limiter.ts (2)

21-37: Consider clarifying method name to reflect side effect.

The checkLimit() method both checks the limit AND records the request (line 35). While this works correctly for the use case, the name suggests it's read-only. Consider renaming to make the side effect explicit, e.g., tryAcquire() or checkAndRecord().

This is a minor naming concern and doesn't affect correctness.


21-37: Consider extracting duplicate cleanup logic.

The request cleanup logic is duplicated between checkLimit() (lines 25-27) and getCurrentCount() (lines 63-65). Consider extracting this into a private method:

🔎 View refactoring suggestion
+  /**
+   * Remove expired requests from tracking
+   * @private
+   */
+  private cleanupExpiredRequests(): void {
+    const now = Date.now();
+    this.requests = this.requests.filter(
+      (record) => now - record.timestamp < WINDOW_MS
+    );
+  }
+
   checkLimit(): boolean {
-    const now = Date.now();
-
-    // Remove expired requests outside the time window
-    this.requests = this.requests.filter(
-      (record) => now - record.timestamp < WINDOW_MS
-    );
+    this.cleanupExpiredRequests();

     // Check if we've exceeded the limit
     if (this.requests.length >= MAX_REQUESTS) {
       return false;
     }

     // Add this request to the tracking
-    this.requests.push({ timestamp: now });
+    this.requests.push({ timestamp: Date.now() });
     return true;
   }

   getCurrentCount(): number {
-    const now = Date.now();
-    this.requests = this.requests.filter(
-      (record) => now - record.timestamp < WINDOW_MS
-    );
+    this.cleanupExpiredRequests();
     return this.requests.length;
   }

Also applies to: 61-67

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c2dec54 and c0b1b65.

📒 Files selected for processing (4)
  • create-db/__tests__/rate-limiter.test.ts (1 hunks)
  • create-db/src/index.ts (2 hunks)
  • create-db/src/rate-limiter.ts (1 hunks)
  • create-db/src/types.ts (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
create-db/src/index.ts (1)
create-db/src/rate-limiter.ts (1)
  • globalRateLimiter (84-84)
create-db/__tests__/rate-limiter.test.ts (1)
create-db/src/rate-limiter.ts (1)
  • globalRateLimiter (84-84)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: Workers Builds: create-db-worker
  • GitHub Check: Workers Builds: claim-db-worker
🔇 Additional comments (4)
create-db/src/types.ts (1)

57-61: LGTM!

The rateLimitInfo property addition is well-structured and appropriately optional. The fields clearly convey retry timing and limit context for rate-limited errors.

create-db/src/index.ts (2)

35-35: LGTM!

The import is clean and brings in the global rate limiter instance for use in the create function.


394-410: Verify per-process rate limiting is acceptable for your deployment model.

The rate limit check is correctly positioned and the error response is well-structured. However, note that the in-memory rate limiter is per-process—if multiple instances run concurrently (e.g., in a cluster, serverless functions, or containers), each maintains its own independent limit.

For a CLI tool, this is typically acceptable. But if create() is called from long-running services or serverless environments with multiple instances, users could exceed the intended global limit.

Consider whether distributed rate limiting (e.g., Redis-based) is needed for your deployment scenarios, or document the per-process limitation clearly.

create-db/src/rate-limiter.ts (1)

1-84: LGTM! Well-designed rate limiter with correct concurrency handling.

The implementation is solid:

  • ✓ Synchronous methods are safe in Node.js's single-threaded event loop (no race conditions)
  • ✓ Fixed limits as documented (security feature)
  • ✓ Clean API with clear method responsibilities
  • ✓ Correct window-based rate limiting logic

The per-process, in-memory design is appropriate for a CLI tool. Note that this won't enforce limits across multiple running instances, which is acceptable for the use case.
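
A compact, self-contained version of the sliding-window approach the comment describes is sketched below. The constants and the injected clock are illustrative, not the PR's actual values; the method names mirror the reviewed file:

```typescript
// Illustrative per-process sliding-window limiter. maxRequests and
// windowMs are constructor parameters here purely for testability.
class SlidingWindowLimiter {
  private timestamps: number[] = [];

  constructor(
    private maxRequests: number,
    private windowMs: number,
    private now: () => number = Date.now, // injectable clock
  ) {}

  // Checks the limit AND records the request on success — the side
  // effect the reviewer flagged as worth surfacing in the name.
  checkLimit(): boolean {
    this.evictExpired();
    if (this.timestamps.length >= this.maxRequests) return false;
    this.timestamps.push(this.now());
    return true;
  }

  getCurrentCount(): number {
    this.evictExpired();
    return this.timestamps.length;
  }

  // Shared cleanup, as the refactoring suggestion above proposes.
  private evictExpired(): void {
    const cutoff = this.now() - this.windowMs;
    this.timestamps = this.timestamps.filter((t) => t > cutoff);
  }
}
```

Because state lives in `this.timestamps`, two processes each see their own empty window — the per-process limitation the review calls out.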

@github-actions

Preview CLIs & Workers are live!

Test the CLIs locally under tag pr73-DR-6663-rate-limiting-import-20341570038:

npx create-db@pr73
npx create-pg@pr73
npx create-postgres@pr73

Worker URLs
• Create-DB Worker: https://create-db-temp.prisma.io
• Claim-DB Worker: https://create-db.prisma.io

These will live as long as this PR exists under tag pr73-DR-6663-rate-limiting-import-20341570038.

@coderabbitai bot left a comment

Actionable comments posted: 1

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c0b1b65 and 46b9bf2.

📒 Files selected for processing (4)
  • create-db-worker/src/index.ts (3 hunks)
  • create-db-worker/wrangler.jsonc (1 hunks)
  • create-db/src/database.ts (3 hunks)
  • create-db/src/index.ts (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
create-db/src/index.ts (1)
create-db/src/database.ts (1)
  • createDatabaseCore (12-154)
create-db-worker/src/index.ts (2)
claim-db-worker/worker-configuration.d.ts (1)
  • env (6913-6913)
create-db-worker/worker-configuration.d.ts (1)
  • env (6794-6794)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: Workers Builds: claim-db-worker
  • GitHub Check: Workers Builds: create-db-worker
🔇 Additional comments (8)
create-db-worker/src/index.ts (3)

9-9: LGTM!

The PROGRAMMATIC_RATE_LIMITER binding is correctly added to the Env interface, matching the new binding in wrangler.jsonc.


134-134: LGTM!

The source field properly restricts values to the expected union type.


175-189: Good fail-closed behavior for programmatic requests.

Returning a 503 when the rate limiter is unavailable is the correct security posture for programmatic API access, preventing abuse during outages.

create-db-worker/wrangler.jsonc (1)

43-51: LGTM!

The stricter rate limit (5 requests per 60 seconds) for programmatic access is appropriately configured with a separate namespace, allowing independent rate tracking from general API traffic.

create-db/src/database.ts (2)

17-18: LGTM!

The source parameter is correctly added with the appropriate type and defaulting behavior.


44-57: Good defensive parsing with structured error propagation.

The try/catch around JSON parsing ensures graceful fallback if the response format changes, while properly extracting rateLimitInfo when available from programmatic rate limit responses.
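
The defensive parsing pattern described here can be sketched as below. The error code and `rateLimitInfo` shape follow this PR's description; the function name and exact return type are assumptions for illustration:

```typescript
interface RateLimitInfo {
  retryAfterMs?: number;
  currentCount?: number;
  maxRequests?: number;
}

interface RateLimitResult {
  rateLimited: true;
  rateLimitInfo?: RateLimitInfo;
}

// Extracts structured info from a RATE_LIMIT_EXCEEDED body, falling
// back to a generic rate-limited result if the body is malformed or
// carries a different error code.
function parseRateLimit429(body: string): RateLimitResult {
  try {
    const parsed = JSON.parse(body);
    if (parsed?.error === "RATE_LIMIT_EXCEEDED") {
      return { rateLimited: true, rateLimitInfo: parsed.rateLimitInfo };
    }
  } catch {
    // Malformed JSON: fall through to the generic result below.
  }
  return { rateLimited: true };
}
```

The key property is that a format change on the worker side degrades to the generic 429 path instead of throwing in the client.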

create-db/src/index.ts (2)

77-90: LGTM!

The source parameter is properly threaded through the wrapper function to the core implementation.


395-400: LGTM!

The programmatic create() function correctly passes source: "programmatic" to enable the stricter rate limiting on the worker side, while cliRunId is appropriately set to undefined since there's no CLI session.

Comment on lines +155 to +174
if (!res.success) {
	return new Response(
		JSON.stringify({
			error: 'RATE_LIMIT_EXCEEDED',
			message: 'Rate limit exceeded for programmatic database creation. You can create up to 1 database per minute. Please try again later.',
			rateLimitInfo: {
				retryAfterMs: 60000, // Approximate - Cloudflare doesn't expose exact timing
				currentCount: 1,
				maxRequests: 1,
			},
		}),
		{
			status: 429,
			headers: {
				'Content-Type': 'application/json',
				'Retry-After': '60',
			},
		},
	);
}

⚠️ Potential issue | 🟠 Major

Inconsistent rate limit values in error response.

The error message says "1 database per minute" and rateLimitInfo reports maxRequests: 1, but the actual rate limiter config in wrangler.jsonc is limit: 5 per 60 seconds. This mismatch will confuse API consumers and may cause incorrect retry behavior.

🔎 Apply this diff to fix the inconsistency:
 					if (!res.success) {
 						return new Response(
 							JSON.stringify({
 								error: 'RATE_LIMIT_EXCEEDED',
-								message: 'Rate limit exceeded for programmatic database creation. You can create up to 1 database per minute. Please try again later.',
+								message: 'Rate limit exceeded for programmatic database creation. You can create up to 5 databases per minute. Please try again later.',
 								rateLimitInfo: {
 									retryAfterMs: 60000, // Approximate - Cloudflare doesn't expose exact timing
-									currentCount: 1,
-									maxRequests: 1,
+									maxRequests: 5,
 								},
 							}),

Also consider removing currentCount since it's hardcoded and doesn't reflect the actual request count.

🤖 Prompt for AI Agents
In create-db-worker/src/index.ts around lines 155 to 174, the rate-limit error
response hardcodes "1 database per minute", maxRequests: 1 and a currentCount: 1
while the actual rate limiter in wrangler.jsonc is limit: 5 per 60 seconds;
update the response to reflect the real limiter (set message to "5 databases per
minute" or better: derive values from the rate-limit config/constants, set
rateLimitInfo.maxRequests to 5, remove the hardcoded currentCount field, and
keep retryAfterMs/Retry-After in sync with the configured window (60 seconds));
ensure these values are read from the same config/constants used by the limiter
rather than duplicated literals.
