Implement Caddy integration, service CRUD, and proxy management #1
Conversation
- Merge Caddy into root docker-compose.yml with shared network/volumes
- Add proxy table columns (name, service_type, certificate_id, listen_port, etc.)
- Rewrite caddyControl.ts with full Caddy JSON API (routes, TLS, ACME policies)
- Implement POST /api/web/create with home cert, custom cert, and Let's Encrypt support
- Implement GET /api/web/get_all with paginated queries
- Add DELETE /api/web/delete with Caddy route + file cleanup
- Rewrite web/client.tsx with service table, create dialog, and delete mutation
- Fix empty ACME route stubs and auth.ts type error to unblock builds
- Switch from bun.lock to pnpm-lock.yaml

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
📝 Walkthrough

This PR introduces Caddy reverse proxy integration with an expanded database schema supporting multiple service types, SSL certificate management, and file serving capabilities. It adds authentication to API routes, implements comprehensive Caddy HTTP API control, and refactors the client UI with React Query data fetching and enhanced form workflows.
Sequence Diagrams

```mermaid
sequenceDiagram
    actor User
    participant UI as Client UI
    participant API as Web API
    participant DB as Database
    participant Caddy
    User->>UI: Submit Create Service Form
    UI->>API: POST /api/web/create (FormData)
    API->>API: Authenticate user
    API->>API: Parse form fields & file uploads
    alt serviceType === "files"
        API->>API: Upload file to server
    end
    alt sslType === "homecert"
        API->>API: Generate CSR & certificate
        API->>DB: Insert certificate record
    else sslType === "custom"
        API->>API: Store custom cert/key files
    end
    API->>DB: Insert proxy/service record
    API->>Caddy: POST config (buildRoute + TLS)
    Caddy-->>API: Configuration applied
    API-->>UI: { ok: true, serviceId }
    UI->>UI: Invalidate services cache
    UI->>UI: Show success toast
```
```mermaid
sequenceDiagram
    actor User
    participant UI as Client UI
    participant API as Web API
    participant DB as Database
    participant Caddy
    User->>UI: Click Delete Service
    UI->>UI: Show delete confirmation
    User->>UI: Confirm deletion
    UI->>API: DELETE /api/web/delete (serviceId)
    API->>API: Authenticate user
    API->>DB: Query service by ID
    API->>Caddy: Delete service route
    API->>API: Delete file-serve directory if exists
    API->>DB: Delete certificate record if present
    API->>DB: Delete service record
    DB-->>API: Records deleted
    API-->>UI: { ok: true }
    UI->>UI: Invalidate services cache
    UI->>UI: Show deletion toast
```
Estimated Code Review Effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks | ✅ 3 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 19
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/components/auth.ts (1)
18-27: ⚠️ Potential issue | 🟠 Major

Potential crash if the "setup-completed" key doesn't exist in kvData.

Line 23 accesses checkIfSetupCompleted[0].value without checking if the query returned any rows. If no row matches, checkIfSetupCompleted[0] is undefined and .value throws a TypeError. This would cause the sign-up hook to crash instead of gracefully proceeding.

Proposed fix

```diff
 const checkIfSetupCompleted = await db
   .select()
   .from(kvData)
   .where(eq(kvData.key, "setup-completed"))
   .execute();
-if (checkIfSetupCompleted[0].value === true) {
+if (checkIfSetupCompleted.length > 0 && checkIfSetupCompleted[0].value === true) {
   throw new APIError("BAD_REQUEST", {
     message: "Setup already completed",
   });
 }
```
🤖 Fix all issues with AI agents
In `@db_migrations/0008_proxy_service_columns.sql`:
- Around line 10-14: The migration drops the default and converts
proxy.public_urls from jsonb[] to jsonb using COALESCE(public_urls[1],
'[]'::jsonb) which discards elements beyond the first; change the USING
expression in the ALTER TABLE "proxy" ALTER COLUMN "public_urls" TYPE jsonb
USING ... to aggregate all array elements into a single jsonb (e.g.,
COALESCE((SELECT jsonb_agg(elem) FROM unnest(public_urls) AS elem),
'[]'::jsonb)) so existing multiple elements are preserved, then reapply the SET
DEFAULT '[]'::jsonb and SET NOT NULL; alternatively, if you intentionally expect
at most one element, add a clear comment in the migration referencing
public_urls and the reason for keeping only the first element.
In `@db_migrations/meta/_journal.json`:
- Around line 61-67: Update the out-of-order timestamp in the migration journal:
locate the JSON object with "idx": 8 and "tag": "0008_proxy_service_columns" and
change its "when" value to a timestamp greater than the prior entry's "when"
(1769526940405) so the "when" field is monotonically increasing; ensure the
updated value is a valid epoch-millisecond integer and commit the modified
_journal.json entry.
In `@db_migrations/meta/0008_snapshot.json`:
- Line 2: The snapshot "id" field contains a placeholder UUID-like value instead
of a real generated UUID; regenerate the snapshot for 0008_snapshot.json using
the drizzle-kit generate command so the "id" key is replaced with the proper
random UUID, then commit the updated JSON (ensure the "id" value in the JSON is
an actual UUID and not the placeholder string).
In `@docker-compose.yml`:
- Around line 37-43: The docker-compose exposes Caddy's admin API to the host
(port mapping "2019:2019" and CADDY_ADMIN="0.0.0.0:2019"), which is a security
risk; remove the host port mapping for 2019 (delete the "2019:2019" entry under
ports) so the admin API stays accessible only on the Docker network, and ensure
CADDY_ADMIN is set to the internal address (e.g., keep
CADDY_ADMIN="http://caddy:2019" or a non-0.0.0.0 bind); if host debugging is
required, bind the admin port to localhost only by replacing the mapping with
"127.0.0.1:2019:2019".
In `@src/app/api/web/create/route.ts`:
- Around line 88-89: The code rewrites homecert paths by replacing
"./cert/created/" with "/cert/" (customCertPath/customKeyPath) but the custom
SSL branch writes files under "/certs/created/custom-${serviceId}", causing a
mismatched root ("/cert/" vs "/certs/"). Make the certificate root consistent:
either change the homecert replacement to use "/certs/" or change the custom
write path to "/cert/" so both branches reference the same directory; update the
code paths that build fullchainPath, privateKey, customCertPath, customKeyPath
and the custom write destination (custom-${serviceId}) accordingly, and add a
short comment documenting the chosen root if the difference is intentional.
- Around line 132-169: The DB insert of schema.proxy (the call to
db.insert(...).returning()) and the subsequent addServiceToCaddy call are not
atomic, so if addServiceToCaddy fails the DB row is orphaned; fix by performing
the insert and Caddy push inside a single transaction or, if a DB transaction
cannot encompass the external Caddy call, add a compensating delete of the
inserted service inside a nested catch around addServiceToCaddy (use the
returned service.id) to rollback the DB state and rethrow the original error;
reference the insert operation that returns service and the addServiceToCaddy
invocation to locate where to add the transaction/compensating delete.
- Around line 38-43: Validate and reject bad inputs early: ensure the form
fields serviceType, proxyHostUrl, and publicURL are validated before creating
the route. Constrain serviceType to the allowed literal set ("proxy" | "files")
and return 400 for any other value; when serviceType === "proxy" require
proxyHostUrl be non-empty and a valid host (not an empty string); validate each
publicURL entry as a hostname/domain (reject values containing protocols, paths,
or invalid characters) and return a 400 with a clear error message if any
publicURL fails validation. Add these checks in route creation flow in
create/route.ts near the existing name validation so the downstream casts and
Caddy route creation (references: serviceType, proxyHostUrl, publicURL) never
receive invalid values.
- Around line 32-34: The listenPort value parsed from formData via parseInt can
be NaN and end up in the DB; update the parsing logic in route.ts (the
listenPort assignment inside the create route) to validate the result of
parseInt: if Number.isFinite(parsed) && Number.isInteger(parsed) and within TCP
port range (1–65535) use it, otherwise fall back to the default 443 (or another
safe default) before inserting into the DB and before constructing server names
like `srv-<port>`; ensure the sanitized numeric value (not NaN) is what gets
persisted and used downstream.
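The port and hostname validation requested in the create-route prompts above can be sketched as two pure helpers. This is a minimal sketch under assumed names — `parseListenPort` and `isValidHostname` are illustrative and do not appear in the PR:

```typescript
// Hypothetical validators for the create route's form fields.
// Names and fallback behavior are assumptions, not code from the PR.

function parseListenPort(raw: string | null): number {
  const parsed = parseInt(raw ?? "443", 10);
  // Reject NaN, non-integers, and out-of-range ports; fall back to 443
  // so values like "srv-NaN" never reach the DB or Caddy server names.
  if (Number.isInteger(parsed) && parsed >= 1 && parsed <= 65535) {
    return parsed;
  }
  return 443;
}

// Rejects values containing protocols, paths, or characters invalid in hostnames.
function isValidHostname(value: string): boolean {
  if (value.length === 0 || value.length > 253) return false;
  if (value.includes("://") || value.includes("/")) return false;
  // Each dot-separated label: alphanumeric, optional inner hyphens, max 63 chars.
  const label = /^[a-zA-Z0-9]([a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?$/;
  return value.split(".").every((part) => label.test(part));
}
```

A route handler could call these before the DB insert and return a 400 on failure, as the prompts describe.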
In `@src/app/api/web/delete/route.ts`:
- Around line 43-47: service.fileServePath is used directly in fs.promises.rm
with recursive:true and force:true which allows path-traversal deletes; fix by
canonicalizing and validating the path before removal: in the delete route
handler, use path.resolve/baseDir to compute an absolute path for
service.fileServePath, ensure path.relative(baseDir, resolvedPath) does not
start with '..' (or check resolvedPath.startsWith(baseDir + path.sep)) and
reject any absolute or out-of-tree paths, and only then call fs.promises.rm;
also remove or reconsider force:true (fail loudly on errors) to avoid silent
destructive failures.
- Around line 57-65: Wrap the proxy and certificate deletions in a single DB
transaction so they are atomic: start a transaction, perform the delete on
schema.proxy using db.delete(...).where(eq(schema.proxy.id, id)), then if
service.certificateId is set, first query schema.proxy for any other rows
referencing that certificate (e.g., select/count where
eq(schema.proxy.certificateId, service.certificateId) and id != id) and only
delete from schema.certificates if the count is zero; finally commit the
transaction (or rollback on error) so both deletions succeed or both are
reverted. Use the existing db, schema.proxy, schema.certificates,
service.certificateId, and eq(...) symbols to locate and implement this change.
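The path-containment check described for the delete route above can be sketched as a small helper; `isInsideBase` and the base directory are illustrative assumptions, not identifiers from the PR:

```typescript
import * as path from "node:path";

// Illustrative containment check (not code from the PR): canonicalize the
// candidate against a base directory and verify it cannot escape it before
// any destructive fs.promises.rm call.
function isInsideBase(baseDir: string, candidate: string): boolean {
  const resolvedBase = path.resolve(baseDir);
  const resolved = path.resolve(resolvedBase, candidate);
  const rel = path.relative(resolvedBase, resolved);
  // rel === "" means the candidate is the base itself; reject that too,
  // since recursively deleting the base directory is never intended here.
  return rel !== "" && !rel.startsWith("..") && !path.isAbsolute(rel);
}
```

The delete handler would then refuse to call `fs.promises.rm` whenever this returns false.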
In `@src/app/api/web/get_all/route.ts`:
- Line 16: The current parsing of the offset ("const offset =
parseInt(searchParams.get(\"offset\") || \"0\");") can produce NaN for
non-numeric input and break the subsequent .offset() call; update the parsing to
use parseInt(..., 10) or Number(...) and then validate/fallback to 0 (e.g., if
Number.isFinite(parsed) and parsed >= 0 use parsed else 0) before passing it to
.offset(), ensuring the variable referenced as offset is always a safe
non-negative integer.
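The hardened offset parsing described above can be sketched as a one-liner helper; the name `parseOffset` is an assumption for illustration:

```typescript
// Illustrative sketch: parse a query-string offset, guaranteeing a safe
// non-negative integer (never NaN) before it reaches .offset().
function parseOffset(raw: string | null): number {
  const parsed = Number.parseInt(raw ?? "0", 10);
  return Number.isFinite(parsed) && parsed >= 0 ? parsed : 0;
}
```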
In `@src/app/web/client.tsx`:
- Around line 360-364: The date cell renderer currently forces Traditional
Chinese by calling toLocaleString("zh-tw") inside the cell callback that uses
row.getValue("createdAt"); change this to respect user locale (e.g., pass
undefined or use navigator.language or a configurable locale prop) and/or
document intent with a comment; update the cell renderer in the component that
defines the column (the function using row.getValue("createdAt") and
toLocaleString) so it either uses the browser/default locale or reads a provided
locale setting.
- Around line 260-271: The protocol tab UI is not wired to form state so the
hidden input always submits "https"; update the Tabs to lift protocol into
component state (e.g., const [listenProtocol, setListenProtocol] =
useState("https")), pass listenProtocol as the Tabs value and provide
onValueChange={setListenProtocol} on the Tabs component (or equivalent handler)
and change the hidden input's value to the listenProtocol state so the selected
tab (Tabs, TabsList, TabsTrigger) drives the submitted listenProtocol form
field.
In `@src/components/core/caddyControl.ts`:
- Around line 325-331: removeServiceFromCaddy currently only calls
removeTLSCertificate and doesn't remove any ACME automation policies created by
addLetsEncryptPolicy; to fix, change removeServiceFromCaddy to obtain the
service's configuration (either accept the full service object instead of
serviceId or query the DB by serviceId), inspect certificateOrigin and
publicUrls on that service, and if certificateOrigin is "letsencrypt_http" or
"letsencrypt_dns" (and publicUrls exists/non-empty) call removeLetsEncryptPolicy
with the appropriate identifiers (serviceId/publicUrls or the same args used in
addLetsEncryptPolicy) before or after removeTLSCertificate so the Let's Encrypt
policy is also cleaned up. Ensure you reference removeServiceFromCaddy,
removeTLSCertificate, addLetsEncryptPolicy and removeLetsEncryptPolicy when
implementing this change.
- Around line 287-303: The call in uploadWebHostFilesToServer that builds a
shell command with execAsync using tmpFolder/srvFolder is vulnerable to command
injection via the fileUUID parameter; fix it by validating fileUUID against a
strict UUID v4 regex (reject anything else) and/or stop using shell
interpolation: replace execAsync(`unzip -o ${tmpFolder}archive.zip -d
${srvFolder}`) with a spawn/execFile style call that passes arguments (e.g.,
child_process.spawn or execFile with ['-o', `${tmpFolder}archive.zip`, '-d',
srvFolder]) or use a library API for zip extraction, and ensure
tmpFolder/srvFolder are constructed only after validation of fileUUID to
guarantee safe paths.
- Around line 107-119: The ternary assigning listen in ensureServer is dead code
(both branches yield [`:${listenPort}`]); simplify by replacing the ternary with
a single assignment const listen = [`:${listenPort}`]; and keep the existing
conditional that adds tls_connection_policies only when listenProtocol ===
"https"; if the intent was to vary the listen value by protocol instead,
implement the correct protocol-specific listen value (e.g., default ports like
80 vs 443 or a different host:port string) and update the branch accordingly in
the ensureServer function (references: ensureServer, listen,
tls_connection_policies, caddyGet, caddyPut).
- Around line 270-283: The current removeLetsEncryptPolicy function removes
entire policy objects when any subject matches; instead, update it to iterate
the fetched policies and for each policy with a subjects array filter out only
the subjects that are in the domainSet, preserving all other keys on the policy
object; if the filtered subjects array is non-empty push the modified policy
(with subjects replaced) into the new policies list, and if it becomes empty
skip that policy so it gets deleted; finally call
caddyPut("/config/apps/tls/automation/policies", newPolicies). Ensure you
reference removeLetsEncryptPolicy, policies, subjects, domainSet, and caddyPut
so only matching subjects are removed and policies are only deleted when their
subjects become empty.
- Around line 150-152: The local variable upstreamUrl is computed but never
used; remove the upstreamUrl construction (the conditional that prepends
"http://" to config.proxyHostUrl) and keep using config.proxyHostUrl for the
Caddy dial field, since dial expects host:port; ensure no other code references
upstreamUrl (remove any leftover imports or references) and run a quick compile
to confirm no unused-variable warnings remain.
- Around line 333-363: syncAllServicesToCaddy currently calls addServiceToCaddy
for every DB service which appends routes and creates duplicates on re-sync;
before iterating services, fetch or reset the Caddy config (use the existing
ensureBaseConfig and Caddy API helpers like caddyPost/caddyGet) and either
remove existing service routes by their unique id or replace the routes block
entirely, then proceed to addServiceToCaddy for each ServiceConfig;
alternatively implement a check inside addServiceToCaddy to skip posting when a
route with the service.id/@id already exists to avoid duplicates.
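The subject-level pruning described for removeLetsEncryptPolicy above can be sketched as a pure function; the policy shape is assumed from the review excerpt, not taken from Caddy's documentation:

```typescript
// Sketch: remove only matching subjects from each ACME automation policy,
// dropping a policy only when its subjects list becomes empty.
// The AcmePolicy shape is an assumption based on the review's description.
type AcmePolicy = { subjects?: string[]; [key: string]: unknown };

function pruneSubjects(policies: AcmePolicy[], domains: string[]): AcmePolicy[] {
  const domainSet = new Set(domains);
  const result: AcmePolicy[] = [];
  for (const policy of policies) {
    if (!policy.subjects) {
      result.push(policy); // catch-all policies are left untouched
      continue;
    }
    const remaining = policy.subjects.filter((s) => !domainSet.has(s));
    if (remaining.length > 0) {
      result.push({ ...policy, subjects: remaining });
    }
    // If remaining is empty the policy is dropped entirely.
  }
  return result;
}
```

A caller would then PUT the pruned array back to /config/apps/tls/automation/policies, as the prompt suggests.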
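The injection-safe extraction described in the uploadWebHostFilesToServer prompt might look like the following sketch. The function name `extractArchive`, the folder paths, and the `unzip` invocation mirror the review's description but are illustrative assumptions:

```typescript
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const execFileAsync = promisify(execFile);

// Strict UUID v4 check; rejects anything that could smuggle shell
// metacharacters or path traversal into the folder paths below.
function isUuidV4(value: string): boolean {
  return /^[0-9a-f]{8}-[0-9a-f]{4}-4[0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$/i.test(
    value,
  );
}

// Sketch of the safer call: execFile passes arguments as an array, so
// nothing is interpreted by a shell. Paths here are hypothetical.
async function extractArchive(fileUUID: string): Promise<void> {
  if (!isUuidV4(fileUUID)) {
    throw new Error("invalid file id");
  }
  const tmpFolder = `/tmp/uploads/${fileUUID}/`;
  const srvFolder = `/srv/files/${fileUUID}/`;
  await execFileAsync("unzip", ["-o", `${tmpFolder}archive.zip`, "-d", srvFolder]);
}
```

Note the version-nibble check also rejects placeholder IDs such as the one flagged in the snapshot review below, since their third group does not start with 4.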
🧹 Nitpick comments (11)
src/app/api/certs/acme/renewal-info/route.ts (1)
1-3: Stub endpoints are consistent and appropriate as placeholders.
All seven ACME route stubs (acct, key-change, new-acct, new-nonce, new-order, renewal-info, revoke-cert) follow the same 501 pattern. This is fine for unblocking builds. Consider tracking the implementation of these endpoints as follow-up work items so they don't remain forgotten.
Would you like me to open an issue to track the implementation of the remaining ACME endpoint stubs?
package.json (1)
20-20: @tailwindcss/cli should be in devDependencies, not dependencies.
This is a build-time tool like tailwindcss itself. Placing it in dependencies causes it to be installed in production where it's not needed.
Also, @tailwindcss/postcss on Line 42 is still pinned to "^4" while tailwindcss and @tailwindcss/cli are at "^4.1.18". Consider aligning all Tailwind packages to the same version range to avoid potential incompatibilities.

Proposed fix

```diff
 "dependencies": {
   "@base-ui/react": "^1.0.0",
   "@better-auth/sso": "^1.4.10",
   "@tabler/icons-react": "^3.36.0",
-  "@tailwindcss/cli": "^4.1.18",
   "@tanstack/react-query": "^5.90.16",
```

```diff
 "devDependencies": {
-  "@tailwindcss/postcss": "^4",
+  "@tailwindcss/postcss": "^4.1.18",
   "@types/node": "^20",
   "@types/pg": "^8.16.0",
   "@types/react": "^19",
   "@types/react-dom": "^19",
   "babel-plugin-react-compiler": "1.0.0",
+  "@tailwindcss/cli": "^4.1.18",
   "drizzle-kit": "^0.31.8",
   "tailwindcss": "^4.1.18",
```

src/components/auth.ts (1)
32-32: The as any cast suppresses type checking — consider a narrower type.
This unblocks the build, but the cast hides the actual shape of ctx.context.returned. A narrower assertion (e.g., as { status?: number }) or a runtime type guard would preserve some safety.

db_migrations/0008_proxy_service_columns.sql (1)
1-1: name column defaults to empty string — consider if this is intentional for existing rows.
Existing proxy rows will get name = ''. Depending on the application, you may want a backfill step or a migration script that sets name from another field (e.g., proxy_host_url) so existing records have meaningful names.

src/app/api/web/delete/route.ts (1)
9-9: DELETE handler accepts a JSON body — ensure clients send the correct Content-Type.
Using req.json() in a DELETE handler is valid but uncommon. Some HTTP clients and proxies may not send a body with DELETE requests. The frontend mutation should be verified to send the body properly.

src/app/api/web/get_all/route.ts (1)
19-24: select() returns all columns — consider whether customCertPath and customKeyPath should be exposed to the client.
These fields contain server filesystem paths that could leak infrastructure details. If the client doesn't need them, use a column whitelist via .select({ id: schema.proxy.id, name: schema.proxy.name, ... }).

db_migrations/meta/0008_snapshot.json (1)
239-363: No foreign key on certificate_id referencing the certificates table.
The proxy.certificate_id column is a UUID that logically references certificates.id, but no FK constraint is defined. This means the DB won't enforce referential integrity — orphaned certificate IDs or dangling references are possible. Consider adding a FK if you want the DB to enforce this relationship.

docker-compose.yml (2)
50-52: host-files volume is declared but never used by any service.
This appears to be dead configuration. Remove it or mount it in the appropriate service.
6-8: Hardcoded database credentials in docker-compose.yml.
DATABASE_URL on Line 7 and the Postgres environment on Lines 21–23 contain plaintext credentials. This is acceptable for local development, but if this compose file is used in any non-local environment, these should be sourced from environment variables or a .env file.

src/components/drizzle/schema.ts (1)
27-27: certificateId lacks a foreign key reference to the certificates table.
This column logically references certificates.id (the create route inserts a certificate and stores its ID here). Without a FK constraint, orphaned references can accumulate when certificates are deleted independently, and there's no cascading cleanup.

Proposed fix

```diff
-  certificateId: uuid("certificate_id"),
+  certificateId: uuid("certificate_id").references(() => certificates.id, { onDelete: "set null" }),
```

src/app/web/client.tsx (1)
59-80: deleteService mutation swallows errors — useMutation state won't reflect failures.
toast.promise(...) is called inside mutationFn, but the async callback's rejection is caught internally by toast.promise for display. The mutationFn itself resolves successfully even when the delete fails, so deleteService.isError / deleteService.error will never be set. If you rely on mutation state downstream (e.g., disabling the button), it won't work.
Either return/await the inner promise directly and handle toasts outside via onSuccess/onError callbacks on the mutation, or accept that mutation state is purely cosmetic here.

Proposed refactor using mutation callbacks

```diff
 const deleteService = useMutation({
   mutationFn: async (id: string) => {
-    toast.promise(
-      async () => {
-        const req = await fetch("/api/web/delete", {
-          method: "DELETE",
-          body: JSON.stringify({ id }),
-        });
-        const res = await req.json();
-        if (!res.ok) {
-          throw new Error(res.error || "Failed to delete service");
-        }
-        invalidateQuery();
-      },
-      {
-        loading: "Deleting service...",
-        success: "Service deleted!",
-        error: (e) => `Error: ${e.message}`,
-      },
-    );
+    const req = await fetch("/api/web/delete", {
+      method: "DELETE",
+      body: JSON.stringify({ id }),
+    });
+    const res = await req.json();
+    if (!res.ok) {
+      throw new Error(res.error || "Failed to delete service");
+    }
+    return res;
+  },
+  onSuccess: () => {
+    toast.success("Service deleted!");
+    invalidateQuery();
+  },
+  onError: (e) => {
+    toast.error(`Error: ${e.message}`);
   },
 });
```
```sql
-- Fix public_urls from jsonb[] to plain jsonb (must drop default first)
ALTER TABLE "proxy" ALTER COLUMN "public_urls" DROP DEFAULT;
ALTER TABLE "proxy" ALTER COLUMN "public_urls" TYPE jsonb USING COALESCE(public_urls[1], '[]'::jsonb);
ALTER TABLE "proxy" ALTER COLUMN "public_urls" SET DEFAULT '[]'::jsonb;
ALTER TABLE "proxy" ALTER COLUMN "public_urls" SET NOT NULL;
```
Potentially lossy conversion of public_urls from jsonb[] to jsonb.
COALESCE(public_urls[1], '[]'::jsonb) only keeps the first element of the original jsonb[] array. If any existing rows had multiple elements, that data will be silently discarded.
If the intent is to merge all elements into a single JSON array, you'd need something like:
```sql
COALESCE(
  (SELECT jsonb_agg(elem) FROM unnest(public_urls) AS elem),
  '[]'::jsonb
)
```

If existing data is known to never have more than one element, this is fine — but worth a comment in the migration to make it explicit.
🤖 Prompt for AI Agents
In `@db_migrations/0008_proxy_service_columns.sql` around lines 10 - 14, The
migration drops the default and converts proxy.public_urls from jsonb[] to jsonb
using COALESCE(public_urls[1], '[]'::jsonb) which discards elements beyond the
first; change the USING expression in the ALTER TABLE "proxy" ALTER COLUMN
"public_urls" TYPE jsonb USING ... to aggregate all array elements into a single
jsonb (e.g., COALESCE((SELECT jsonb_agg(elem) FROM unnest(public_urls) AS elem),
'[]'::jsonb)) so existing multiple elements are preserved, then reapply the SET
DEFAULT '[]'::jsonb and SET NOT NULL; alternatively, if you intentionally expect
at most one element, add a clear comment in the migration referencing
public_urls and the reason for keeping only the first element.
```json
{
  "idx": 8,
  "version": "7",
  "when": 1739750400000,
  "tag": "0008_proxy_service_columns",
  "breakpoints": true
}
```
Migration timestamp is out of chronological order.
Entry 8's when value (1739750400000 ≈ Feb 2025) is earlier than entry 7's (1769526940405 ≈ Jan 2026). All prior entries have monotonically increasing timestamps. This likely indicates the migration was hand-created or the system clock was incorrect.
While Drizzle-kit primarily uses the idx for ordering, an out-of-order when field could cause confusion and may affect tooling that relies on timestamps. Consider updating it to a value after entry 7's timestamp.
🤖 Prompt for AI Agents
In `@db_migrations/meta/_journal.json` around lines 61 - 67, Update the
out-of-order timestamp in the migration journal: locate the JSON object with
"idx": 8 and "tag": "0008_proxy_service_columns" and change its "when" value to
a timestamp greater than the prior entry's "when" (1769526940405) so the "when"
field is monotonically increasing; ensure the updated value is a valid
epoch-millisecond integer and commit the modified _journal.json entry.
```diff
@@ -0,0 +1,609 @@
+{
+  "id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
```
🧩 Analysis chain
🏁 Script executed:

```shell
fd 'snapshot.json' db_migrations/meta --exec head -5 {}
```

Repository: hpware/caddy-and-cert-manager
Length of output: 1395

🏁 Script executed:

```shell
# Let's also check the full structure of the problematic file to understand context better
cat db_migrations/meta/0008_snapshot.json
```

Length of output: 15116
Snapshot id is a placeholder pattern rather than a generated UUID.
The value "a1b2c3d4-e5f6-7890-abcd-ef1234567890" follows a documented example format (with its incremental segments: a1→b2→c3→d4, e5→f6, sequential 7890-abcd-ef...). This contrasts with all other snapshots in the directory, which use genuinely random UUIDs like 9e166f56-0bb7-4285-86a1-68fe56655165. Drizzle-kit uses snapshot IDs to track migration state; using a placeholder ID could cause migration tracking issues. Regenerate this snapshot via drizzle-kit generate to get a proper ID.
🤖 Prompt for AI Agents
In `@db_migrations/meta/0008_snapshot.json` at line 2, The snapshot "id" field
contains a placeholder UUID-like value instead of a real generated UUID;
regenerate the snapshot for 0008_snapshot.json using the drizzle-kit generate
command so the "id" key is replaced with the proper random UUID, then commit the
updated JSON (ensure the "id" value in the JSON is an actual UUID and not the
placeholder string).
```yaml
ports:
  - "80:80"
  - "443:443"
  - "443:443/udp"
  - "2019:2019"
environment:
  CADDY_ADMIN: "0.0.0.0:2019"
```
Caddy admin API is exposed to the host network — this is a significant security risk.
Port 2019 is mapped to the host (Line 41) and CADDY_ADMIN is bound to 0.0.0.0 (Line 43). This means the Caddy admin API — which allows full server reconfiguration — is accessible from outside the Docker network. An attacker on the network can add routes, obtain certificates, or reconfigure TLS.
The cert-signer service accesses the admin API via the internal Docker network (http://caddy:2019), so there's no need to expose port 2019 to the host. Remove the port mapping and keep the admin API internal:
Proposed fix
```diff
 ports:
   - "80:80"
   - "443:443"
   - "443:443/udp"
-  - "2019:2019"
 environment:
-  CADDY_ADMIN: "0.0.0.0:2019"
+  CADDY_ADMIN: "0.0.0.0:2019" # only accessible within Docker network now
```

If you need host-level debugging access to the admin API, bind it to localhost only: "127.0.0.1:2019:2019".
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```yaml
ports:
  - "80:80"
  - "443:443"
  - "443:443/udp"
environment:
  CADDY_ADMIN: "0.0.0.0:2019" # only accessible within Docker network now
```
🤖 Prompt for AI Agents
In `@docker-compose.yml` around lines 37 - 43, The docker-compose exposes Caddy's
admin API to the host (port mapping "2019:2019" and CADDY_ADMIN="0.0.0.0:2019"),
which is a security risk; remove the host port mapping for 2019 (delete the
"2019:2019" entry under ports) so the admin API stays accessible only on the
Docker network, and ensure CADDY_ADMIN is set to the internal address (e.g.,
keep CADDY_ADMIN="http://caddy:2019" or a non-0.0.0.0 bind); if host debugging
is required, bind the admin port to localhost only by replacing the mapping with
"127.0.0.1:2019:2019".
```typescript
const listenPort = parseInt(
  (formData.get("listenPort") as string) || "443",
);
```
parseInt can produce NaN, which will be inserted into the database.
If listenPort form field contains a non-numeric string, parseInt returns NaN. This propagates into the DB insert and the Caddy server name (srv-NaN), causing silent misbehavior.
Proposed fix
```diff
 const listenPort = parseInt(
   (formData.get("listenPort") as string) || "443",
 );
+if (isNaN(listenPort) || listenPort < 1 || listenPort > 65535) {
+  return Response.json(
+    { ok: false, error: "Invalid listen port" },
+    { status: 400 },
+  );
+}
```

📝 Committable suggestion
+ }📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
const listenPort = parseInt(
  (formData.get("listenPort") as string) || "443",
);
if (isNaN(listenPort) || listenPort < 1 || listenPort > 65535) {
  return Response.json(
    { ok: false, error: "Invalid listen port" },
    { status: 400 },
  );
}
```
🤖 Prompt for AI Agents
In `@src/app/api/web/create/route.ts` around lines 32 - 34, The listenPort value
parsed from formData via parseInt can be NaN and end up in the DB; update the
parsing logic in route.ts (the listenPort assignment inside the create route) to
validate the result of parseInt: if Number.isFinite(parsed) &&
Number.isInteger(parsed) and within TCP port range (1–65535) use it, otherwise
fall back to the default 443 (or another safe default) before inserting into the
DB and before constructing server names like `srv-<port>`; ensure the sanitized
numeric value (not NaN) is what gets persisted and used downstream.
```typescript
const upstreamUrl = config.proxyHostUrl.startsWith("http")
  ? config.proxyHostUrl
  : `http://${config.proxyHostUrl}`;
```
upstreamUrl is computed but never used.
Line 150-152 builds upstreamUrl (prepending http:// if missing) but Line 156 uses config.proxyHostUrl directly for the dial field. Either the intent was to use upstreamUrl somewhere, or this is dead code. Note that Caddy's dial expects host:port, not a full URL, so the current dial value is likely correct — just remove the unused variable.
Proposed fix
```diff
 if (config.serviceType === "proxy" && config.proxyHostUrl) {
-  const upstreamUrl = config.proxyHostUrl.startsWith("http")
-    ? config.proxyHostUrl
-    : `http://${config.proxyHostUrl}`;
-
   const handler: Record<string, unknown> = {
     handler: "reverse_proxy",
     upstreams: [{ dial: config.proxyHostUrl }],
   };
```

📝 Committable suggestion
```typescript
if (config.serviceType === "proxy" && config.proxyHostUrl) {
  const handler: Record<string, unknown> = {
    handler: "reverse_proxy",
    upstreams: [{ dial: config.proxyHostUrl }],
  };
```
🤖 Prompt for AI Agents
In `@src/components/core/caddyControl.ts` around lines 150 - 152, The local
variable upstreamUrl is computed but never used; remove the upstreamUrl
construction (the conditional that prepends "http://" to config.proxyHostUrl)
and keep using config.proxyHostUrl for the Caddy dial field, since dial expects
host:port; ensure no other code references upstreamUrl (remove any leftover
imports or references) and run a quick compile to confirm no unused-variable
warnings remain.
async function removeLetsEncryptPolicy(domains: string[]) {
  const policies = await caddyGet("/config/apps/tls/automation/policies");
  if (!policies || !Array.isArray(policies)) return;

  const domainSet = new Set(domains);
  const filtered = policies.filter(
    (policy: { subjects?: string[] }) => {
      if (!policy.subjects) return true;
      return !policy.subjects.some((s: string) => domainSet.has(s));
    },
  );

  await caddyPut("/config/apps/tls/automation/policies", filtered);
}
removeLetsEncryptPolicy can remove policies covering unrelated domains.
The filter on Line 278 removes an entire policy if any of its subjects overlap with the provided domains. A policy with subjects: ["a.com", "b.com"] will be fully removed when deleting a service that only uses a.com, orphaning b.com's ACME automation.
Consider removing only the matching subjects from the policy and deleting the policy only when its subjects list becomes empty.
Proposed fix
async function removeLetsEncryptPolicy(domains: string[]) {
const policies = await caddyGet("/config/apps/tls/automation/policies");
if (!policies || !Array.isArray(policies)) return;
const domainSet = new Set(domains);
- const filtered = policies.filter(
- (policy: { subjects?: string[] }) => {
- if (!policy.subjects) return true;
- return !policy.subjects.some((s: string) => domainSet.has(s));
- },
- );
+ const filtered = policies
+  .map((policy: { subjects?: string[]; [key: string]: unknown }) => {
+ if (!policy.subjects) return policy;
+ const remaining = policy.subjects.filter((s: string) => !domainSet.has(s));
+ if (remaining.length === 0) return null;
+ return { ...policy, subjects: remaining };
+ })
+ .filter(Boolean);
await caddyPut("/config/apps/tls/automation/policies", filtered);
}

🤖 Prompt for AI Agents
In `@src/components/core/caddyControl.ts` around lines 270 - 283, The current
removeLetsEncryptPolicy function removes entire policy objects when any subject
matches; instead, update it to iterate the fetched policies and for each policy
with a subjects array filter out only the subjects that are in the domainSet,
preserving all other keys on the policy object; if the filtered subjects array
is non-empty push the modified policy (with subjects replaced) into the new
policies list, and if it becomes empty skip that policy so it gets deleted;
finally call caddyPut("/config/apps/tls/automation/policies", newPolicies).
Ensure you reference removeLetsEncryptPolicy, policies, subjects, domainSet, and
caddyPut so only matching subjects are removed and policies are only deleted
when their subjects become empty.
export async function uploadWebHostFilesToServer(
  file: File,
  fileUUID: string = crypto.randomUUID(),
) {
  const tmpFolder = `/tmp/${fileUUID}/`;
  const srvFolder = `/host/${fileUUID}/`;
  await fs.promises.mkdir(tmpFolder, { recursive: true });
  await fs.promises.mkdir(srvFolder, { recursive: true });
  await fs.promises.writeFile(
    `${tmpFolder}archive.zip`,
    Buffer.from(await file.arrayBuffer()),
  );
  await execAsync(`unzip ${tmpFolder}archive.zip -d ${srvFolder}`);
  await execAsync(`unzip -o ${tmpFolder}archive.zip -d ${srvFolder}`);
  // Clean up temp
  await fs.promises.rm(tmpFolder, { recursive: true, force: true });
  return srvFolder;
}
Command injection risk in unzip call via execAsync.
Line 299 interpolates tmpFolder and srvFolder into a shell command. While the primary caller passes crypto.randomUUID() (safe), the function signature accepts any string as fileUUID (Line 289). A malicious or poorly-sanitized fileUUID (e.g., containing ; rm -rf /) would execute arbitrary commands.
Consider validating the UUID format or using a spawn-based approach that avoids shell interpretation.
Proposed validation
export async function uploadWebHostFilesToServer(
file: File,
fileUUID: string = crypto.randomUUID(),
) {
+ if (!/^[a-f0-9-]+$/i.test(fileUUID)) {
+ throw new Error("Invalid file UUID format");
+ }
  const tmpFolder = `/tmp/${fileUUID}/`;

📝 Committable suggestion
export async function uploadWebHostFilesToServer(
  file: File,
  fileUUID: string = crypto.randomUUID(),
) {
  if (!/^[a-f0-9-]+$/i.test(fileUUID)) {
    throw new Error("Invalid file UUID format");
  }
  const tmpFolder = `/tmp/${fileUUID}/`;
  const srvFolder = `/host/${fileUUID}/`;
  await fs.promises.mkdir(tmpFolder, { recursive: true });
  await fs.promises.mkdir(srvFolder, { recursive: true });
  await fs.promises.writeFile(
    `${tmpFolder}archive.zip`,
    Buffer.from(await file.arrayBuffer()),
  );
  await execAsync(`unzip -o ${tmpFolder}archive.zip -d ${srvFolder}`);
  // Clean up temp
  await fs.promises.rm(tmpFolder, { recursive: true, force: true });
  return srvFolder;
}
🤖 Prompt for AI Agents
In `@src/components/core/caddyControl.ts` around lines 287 - 303, The call in
uploadWebHostFilesToServer that builds a shell command with execAsync using
tmpFolder/srvFolder is vulnerable to command injection via the fileUUID
parameter; fix it by validating fileUUID against a strict UUID v4 regex (reject
anything else) and/or stop using shell interpolation: replace execAsync(`unzip
-o ${tmpFolder}archive.zip -d ${srvFolder}`) with a spawn/execFile style call
that passes arguments (e.g., child_process.spawn or execFile with ['-o',
`${tmpFolder}archive.zip`, '-d', srvFolder]) or use a library API for zip
extraction, and ensure tmpFolder/srvFolder are constructed only after validation
of fileUUID to guarantee safe paths.
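A sketch of the spawn-based alternative, assuming execFile from node:child_process is acceptable here; the strict v4 regex and function names are illustrations, not the project's code:

```typescript
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const execFileAsync = promisify(execFile);

// Hypothetical strict UUID v4 guard; rejects shell metacharacters outright.
function isUuidV4(s: string): boolean {
  return /^[0-9a-f]{8}-[0-9a-f]{4}-4[0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$/i.test(
    s,
  );
}

// execFile passes arguments as an array, so the paths are never parsed by a
// shell even if validation were somehow bypassed.
async function extractArchive(fileUUID: string): Promise<string> {
  if (!isUuidV4(fileUUID)) throw new Error("Invalid file UUID format");
  const tmpFolder = `/tmp/${fileUUID}/`;
  const srvFolder = `/host/${fileUUID}/`;
  await execFileAsync("unzip", ["-o", `${tmpFolder}archive.zip`, "-d", srvFolder]);
  return srvFolder;
}
```

Either measure alone closes the injection vector; combining both gives defense in depth.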
export async function removeServiceFromCaddy(serviceId: string) {
  // Remove route by @id
  await caddyDelete(`/id/service-${serviceId}`);

  // Remove TLS certificate entries
  await removeTLSCertificate(serviceId);
}
removeServiceFromCaddy does not clean up Let's Encrypt policies.
removeTLSCertificate is called to remove load_files entries, but if the service used letsencrypt_http or letsencrypt_dns, the corresponding automation policy (added via addLetsEncryptPolicy) is never removed. Over time this leaks stale ACME policies in Caddy's config.
You'd need to know the service's certificateOrigin and publicUrls to call removeLetsEncryptPolicy. Consider either passing the full service config to this function or looking up the service from the DB.
🤖 Prompt for AI Agents
In `@src/components/core/caddyControl.ts` around lines 325 - 331,
removeServiceFromCaddy currently only calls removeTLSCertificate and doesn't
remove any ACME automation policies created by addLetsEncryptPolicy; to fix,
change removeServiceFromCaddy to obtain the service's configuration (either
accept the full service object instead of serviceId or query the DB by
serviceId), inspect certificateOrigin and publicUrls on that service, and if
certificateOrigin is "letsencrypt_http" or "letsencrypt_dns" (and publicUrls
exists/non-empty) call removeLetsEncryptPolicy with the appropriate identifiers
(serviceId/publicUrls or the same args used in addLetsEncryptPolicy) before or
after removeTLSCertificate so the Let's Encrypt policy is also cleaned up.
Ensure you reference removeServiceFromCaddy, removeTLSCertificate,
addLetsEncryptPolicy and removeLetsEncryptPolicy when implementing this change.
export async function syncAllServicesToCaddy() {
  const services = await db.select().from(schema.proxy);

  if (services.length === 0) return;

  await ensureBaseConfig();

  for (const service of services) {
    try {
      const config: ServiceConfig = {
        id: service.id,
        name: service.name,
        serviceType: service.serviceType as "proxy" | "files",
        publicUrls: (service.publicUrls as string[]) || [],
        proxyHostUrl: service.proxyHostUrl,
        fileServePath: service.fileServePath || undefined,
        allowWebsocket: service.allowWebsocket,
        cacheAssets: service.cacheAssets,
        certificateOrigin: service.certificateOrigin,
        certificateId: service.certificateId,
        customCertPath: service.customCertPath,
        customKeyPath: service.customKeyPath,
        listenPort: service.listenPort,
        listenProtocol: service.listenProtocol,
      };

      await addServiceToCaddy(config);
    } catch (e) {
      console.error(`Failed to sync service ${service.id} (${service.name}) to Caddy:`, e);
    }
  }
syncAllServicesToCaddy appends routes without clearing existing ones — re-running creates duplicates.
Each call to addServiceToCaddy appends a new route via caddyPost. If this function is invoked on restart or re-sync, every service gets a duplicate route. Consider clearing the server's routes (or the entire Caddy config) before re-syncing, or checking for existing route @id before appending.
Proposed approach — reset config before syncing
export async function syncAllServicesToCaddy() {
const services = await db.select().from(schema.proxy);
- if (services.length === 0) return;
-
await ensureBaseConfig();
+
+ // Reset all servers to avoid duplicate routes on re-sync
+ await caddyPut("/config/apps/http/servers", {});
+ await caddyPut("/config/apps/tls/certificates/load_files", []);
+ await caddyPut("/config/apps/tls/automation/policies", []);
+
+ if (services.length === 0) return;
  for (const service of services) {

🤖 Prompt for AI Agents
In `@src/components/core/caddyControl.ts` around lines 333 - 363,
syncAllServicesToCaddy currently calls addServiceToCaddy for every DB service
which appends routes and creates duplicates on re-sync; before iterating
services, fetch or reset the Caddy config (use the existing ensureBaseConfig and
Caddy API helpers like caddyPost/caddyGet) and either remove existing service
routes by their unique id or replace the routes block entirely, then proceed to
addServiceToCaddy for each ServiceConfig; alternatively implement a check inside
addServiceToCaddy to skip posting when a route with the service.id/@id already
exists to avoid duplicates.
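The skip-if-exists alternative reduces to a pure check against the server's current route list. A sketch, using the same `service-<id>` @id convention as the delete path (the route type is a simplified assumption about Caddy's route objects):

```typescript
// Minimal slice of a Caddy route object; real routes carry more fields.
type CaddyRoute = { "@id"?: string };

// True when the route list already contains this service's route, keyed by
// the service-<id> @id that addServiceToCaddy assigns.
function hasServiceRoute(routes: CaddyRoute[], serviceId: string): boolean {
  return routes.some((route) => route["@id"] === `service-${serviceId}`);
}
```

addServiceToCaddy could fetch the server's routes via caddyGet, call this helper, and skip the caddyPost append when it returns true, making re-sync idempotent without a full config reset.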
Summary by CodeRabbit
Release Notes