diff --git a/connectors/README.md b/connectors/README.md index bc6871b..08229bb 100644 --- a/connectors/README.md +++ b/connectors/README.md @@ -10,11 +10,12 @@ agent does the file-writing. ## Supported categories -Flue currently supports **one** connector category: +Flue supports the following connector categories: -| Category | Status | Notes | -| --------- | ---------- | ------------------------------------------------------ | -| `sandbox` | Supported | For remote sandbox providers (Daytona, E2B, Modal, etc.). | +| Category | Status | Notes | +| --------- | ---------- | --------------------------------------------------------------------------- | +| `sandbox` | Supported | For remote sandbox providers (Daytona, E2B, Modal, etc.). | +| `persist` | Supported | For session-store backends (Postgres, D1, etc.) — durable agent sessions. | > **Please don't open PRs introducing new categories.** Adding a category > requires CLI/runtime changes and a long-term maintenance commitment from @@ -81,6 +82,17 @@ For named connectors: --- ``` +Or, for a persist connector: + +```markdown +--- +{ + "category": "persist", + "website": "https://www.postgresql.org" +} +--- +``` + Fields: | Field | Type | Required when | Description | @@ -116,36 +128,41 @@ casing variants. ## Body conventions The body is the prompt an AI coding agent will read and act on. The -existing connectors (`sandbox--daytona.md` and `sandbox--vercel.md`) are -the template — match their structure as closely as possible, and only -diverge where the specifics of the provider you're connecting genuinely -require it. +existing connectors in the same category are the template — match their +structure as closely as possible, and only diverge where the specifics of +what you're connecting genuinely require it. For reference, the shape they share: 1. A single sentence framing what the connector is and that the reader is an AI agent installing it. -2. **What this connector does** — one paragraph, "wraps an +2. **What this connector does** — one paragraph. For `sandbox`, "wraps an already-initialized X into Flue's `SandboxFactory`; user owns the - provider lifecycle". + provider lifecycle". For `persist`, "wraps a configured X client into + Flue's `SessionStore`; user owns the client/binding lifecycle". 3. **Where to write the file** — be explicit about the source-layout choice - (`/.flue/connectors/` vs. `/connectors/`) and tell - the agent to ask if unsure. + (`/.flue/` vs. `/`) and tell the agent to ask if unsure. + `sandbox` connectors land in `connectors/.ts`; `persist` connectors + land in `persist/.ts`. 4. **The full TypeScript file content** in a code block, ready to write verbatim. Don't include placeholders the agent has to fill in. -5. **Required dependencies** — what the agent should `npm install`. -6. **Authentication** — how the provider authenticates (env var, OIDC, - OAuth, certs, etc.), where credentials should live, and a note never - to invent values. The shape of this section will vary the most between - providers; let the provider's actual auth model drive it. +5. **Schema or required dependencies** — `persist` connectors include the + `CREATE TABLE` (or equivalent) for the backend's session table. + `sandbox` connectors list what the agent should `npm install`. +6. **Authentication / credentials.** How the backend or provider + authenticates (env var, OIDC, OAuth, wrangler binding, certs, etc.), + where credentials should live, and a note never to invent values. 
The + shape of this section will vary the most between connectors; let the + actual auth/binding model drive it. 7. **Wiring it into an agent** — a usage snippet for one of the user's - agents. + agents. `sandbox` connectors pass the factory to `init({ sandbox })`; + `persist` connectors pass the store to `init({ persist })`. 8. **Verify** — typecheck + manual next-steps for the user, ending with `flue dev` / `flue run `. -For category-root files (e.g. `sandbox.md`), instead of a verbatim TS file, -point the agent at the spec doc on raw GitHub plus a known-good reference -connector (e.g. `daytona`). +For category-root files (e.g. `sandbox.md`, `persist.md`), instead of a +verbatim TS file, point the agent at the spec/contract plus a known-good +reference connector (e.g. `daytona` for sandbox, `postgres` for persist). ## Adding a new connector diff --git a/connectors/persist--d1.md b/connectors/persist--d1.md new file mode 100644 index 0000000..f3e2930 --- /dev/null +++ b/connectors/persist--d1.md @@ -0,0 +1,247 @@ +--- +{ + "category": "persist", + "website": "https://developers.cloudflare.com/d1" +} +--- + +# Add a Flue Connector: Cloudflare D1 Session Store + +You are an AI coding agent installing the Cloudflare D1 session-store +connector for a Flue project. Follow these instructions exactly. Confirm +with the user only when something is genuinely ambiguous (e.g. an unusual +project layout). + +## What this connector does + +Wraps a Cloudflare D1 binding into Flue's `SessionStore` interface, so +agent sessions on the Cloudflare target are persisted in D1 instead of +the per-agent Durable Object's storage. The user owns the wrangler binding; +this connector translates Flue's `save / load / delete` calls to SQL. + +This is the right choice when the agent runs on the **Cloudflare target** +and the team needs sessions queryable from a different Worker (admin +dashboards, BI exports, multi-Worker apps). If those needs don't apply, +stick with the default DO-SQLite store — it's faster, simpler, and ships +out of the box. For Node deployments, install the `postgres` connector +instead. + +## Where to write the file + +Pick the location based on the user's project layout: + +- **`.flue/` layout** (project has files at the root and uses `.flue/agents/` + etc.): write to `./.flue/persist/d1.ts`. +- **Root layout** (the project root itself contains `agents/` and friends): + write to `./persist/d1.ts`. + +If neither feels right (uncommon layout, multiple workspaces, etc.), ask the +user before writing. + +Create any missing parent directories. + +## File contents + +Write this file verbatim. Do not "improve" it — it conforms to the published +`SessionStore` contract. + +```ts +import type { SessionStore, SessionData } from '@flue/sdk/client'; + +/** Structural subset of Cloudflare's `D1Database`. */ +interface D1Like { + prepare(sql: string): { + bind(...values: unknown[]): { + first(): Promise; + run(): Promise; + }; + }; +} + +export interface D1StoreOptions { + /** Table name. Defaults to `flue_sessions`. */ + tableName?: string; +} + +/** + * Wrap a Cloudflare D1 binding into a Flue `SessionStore`. Pass `env.DB` + * (or whatever your binding name is). Typed as `unknown` to match the + * convention used by `getVirtualSandbox(bucket: unknown)` in the same + * package — users with `@cloudflare/workers-types` installed pass a + * `D1Database`, users without it work fine too. 
+ */ +export function d1Store(db: unknown, options?: D1StoreOptions): SessionStore { + const table = quoteIdent(options?.tableName ?? 'flue_sessions'); + const d1 = asD1Like(db); + + return { + async save(id: string, data: SessionData): Promise { + await d1 + .prepare( + `INSERT INTO ${table} (id, data, updated_at) + VALUES (?1, ?2, ?3) + ON CONFLICT(id) DO UPDATE SET + data = excluded.data, + updated_at = excluded.updated_at`, + ) + .bind(id, JSON.stringify(data), Date.now()) + .run(); + }, + + async load(id: string): Promise { + const row = await d1 + .prepare(`SELECT data FROM ${table} WHERE id = ?1`) + .bind(id) + .first<{ data: string }>(); + return row ? (JSON.parse(row.data) as SessionData) : null; + }, + + async delete(id: string): Promise { + await d1.prepare(`DELETE FROM ${table} WHERE id = ?1`).bind(id).run(); + }, + }; +} + +function asD1Like(db: unknown): D1Like { + if ( + db === null || + typeof db !== 'object' || + typeof (db as { prepare?: unknown }).prepare !== 'function' + ) { + throw new Error( + '[flue:d1] Expected a Cloudflare D1 binding. Pass env.DB ' + + '(or your configured binding name) to d1Store().', + ); + } + return db as D1Like; +} + +// Duplicated in postgres.ts on purpose — these recipes are copied +// independently into user projects, so they don't share a helper module. +function quoteIdent(name: string): string { + if (!/^[A-Za-z_][A-Za-z0-9_]*$/.test(name)) { + throw new Error( + `[flue:d1] Invalid table name "${name}". ` + + 'Use only letters, digits, and underscores; must not start with a digit.', + ); + } + return `"${name}"`; +} +``` + +## Required dependencies + +None. D1 ships with the Workers runtime — no SDK install needed. + +## Credentials + +This connector needs a D1 database, declared as a binding in the user's +`wrangler.jsonc`. There's no API key — Cloudflare resolves the binding at +deploy time. **Never invent a `database_name` or `database_id`** — they +come from the user running `npx wrangler d1 create `. + +If `wrangler.jsonc` doesn't already declare a D1 binding, surface it and +let the user choose the name. The conventional shape: + +```jsonc +"d1_databases": [ + { "binding": "DB", "database_name": "", "database_id": "" } +] +``` + +The binding name (`DB` here) is what shows up on `env`. If the user picks a +different name, the agent file's `env.DB` reference must change to match. +For local dev, Flue's Cloudflare dev server resolves the same binding against +a local SQLite file (`--local`, the default) through the generated +`dist/wrangler.jsonc`. + +## Schema + +The store expects this table. Run it once against the user's D1, both +remote and local: + +```bash +npx wrangler d1 execute --remote --command=" + CREATE TABLE IF NOT EXISTS flue_sessions ( + id TEXT PRIMARY KEY, + data TEXT NOT NULL, + updated_at INTEGER NOT NULL + ); +" +``` + +For local development, first run `npx flue build --target cloudflare`, then +run the local schema command against the generated config Flue will hand to +Wrangler: + +```bash +(cd dist && npx wrangler d1 execute --local --config wrangler.jsonc --command=" + CREATE TABLE IF NOT EXISTS flue_sessions ( + id TEXT PRIMARY KEY, + data TEXT NOT NULL, + updated_at INTEGER NOT NULL + ); +") +``` + +Local D1 state is keyed by Wrangler's config path. Flue's Cloudflare dev +server uses `dist/wrangler.jsonc`, not the project-root `wrangler.jsonc`, so +running the local migration against the root config creates a different local +SQLite file. 
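If the user picks a non-default table name via the store's `tableName` option, the `CREATE TABLE` must use that name too. A minimal sketch (the table name here is hypothetical):

```ts
// Hypothetical custom table name; keep it in sync with the CREATE TABLE above.
const store = d1Store(env.DB, { tableName: 'support_sessions' });
```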
+ +The schema intentionally mirrors the table the default DO-SQLite store creates +inside each agent's Durable Object — same column names, same types — so a +future migration tool could move rows between the two without translation. + +Treat `data` as opaque — Flue manages its shape and may evolve `SessionData` +between releases. The table name can be customized via `tableName` in the +store options; the schema is otherwise fixed. + +## Wiring it into an agent + +Here's what using this connector looks like inside a Flue agent. If the +user is already working on an agent that this connector is meant to plug +into, you can finish that work by wiring the connector into it. Otherwise, +share this snippet so they can wire it up themselves. + +```ts +import type { FlueContext } from '@flue/sdk'; +import { d1Store } from '../persist/d1'; // adjust path to match the user's layout + +export const triggers = { webhook: true }; + +interface Payload { + threadId: string; + message: string; +} + +export default async function ({ init, payload, env }: FlueContext) { + const agent = await init({ + model: 'anthropic/claude-sonnet-4-6', + persist: d1Store(env.DB), + }); + const session = await agent.session(payload.threadId); + return await session.prompt(payload.message); +} +``` + +The session id (`payload.threadId` here) is whatever the application uses to +identify a conversation thread. + +## Verify + +1. Run the user's typechecker (`npx tsc --noEmit` is a safe default) and + confirm the new file has no errors. +2. Confirm the import path you used for the connector matches where you + actually wrote the file, and that `env.DB` matches the `binding` value + in `wrangler.jsonc`. +3. Tell the user the next steps: run `npx wrangler d1 create ` if + they haven't yet, update the binding in `wrangler.jsonc` with the printed + `database_id`, run the `CREATE TABLE` for `--remote`, run + `npx flue build --target cloudflare`, run the local `CREATE TABLE` against + `dist/wrangler.jsonc`, then `npx flue dev --target cloudflare` to exercise + it on `http://localhost:3583`. + +For deeper reference (D1 vs DO-SQLite trade-offs, schema alternatives, +troubleshooting), point the user at the docs guide: +`https://github.com/withastro/flue/blob/main/docs/persist-d1.md`. diff --git a/connectors/persist--postgres.md b/connectors/persist--postgres.md new file mode 100644 index 0000000..5f5ec2b --- /dev/null +++ b/connectors/persist--postgres.md @@ -0,0 +1,209 @@ +--- +{ + "category": "persist", + "website": "https://www.postgresql.org" +} +--- + +# Add a Flue Connector: Postgres Session Store + +You are an AI coding agent installing the Postgres session-store connector +for a Flue project. Follow these instructions exactly. Confirm with the user +only when something is genuinely ambiguous (e.g. an unusual project layout). + +## What this connector does + +Wraps a configured `pg.Client` or `pg.Pool` into Flue's `SessionStore` +interface, so multi-turn agent sessions survive process restarts, rolling +deploys, and autoscaling events. The user owns the `pg` client lifecycle +(credentials, TLS, pool sizing); this connector translates Flue's +`save / load / delete` calls to SQL. + +This is the right choice when the agent runs on the **Node target** (Render, +Fly, Railway, EC2, Docker, etc.) and the team already operates Postgres. On +Cloudflare, prefer the default Durable Object storage; if you need +queryable-from-other-Workers persistence, install the `d1` connector instead. 
+ +## Where to write the file + +Pick the location based on the user's project layout: + +- **`.flue/` layout** (project has files at the root and uses `.flue/agents/` + etc.): write to `./.flue/persist/postgres.ts`. +- **Root layout** (the project root itself contains `agents/` and friends): + write to `./persist/postgres.ts`. + +If neither feels right (uncommon layout, multiple workspaces, etc.), ask the +user before writing. + +Create any missing parent directories. + +## File contents + +Write this file verbatim. Do not "improve" it — it conforms to the published +`SessionStore` contract. + +```ts +import type { SessionStore, SessionData } from '@flue/sdk/client'; + +/** Structural subset of `pg.Client` and `pg.Pool` — accepts either. */ +interface PgQueryable { + query(sql: string, params?: unknown[]): Promise<{ rows: R[] }>; +} + +export interface PostgresStoreOptions { + /** Table name. Defaults to `flue_sessions`. */ + tableName?: string; +} + +/** + * Wrap a configured `pg` Client or Pool into a Flue `SessionStore`. The user + * owns the client lifecycle (credentials, TLS, pool sizing); this adapter + * just translates `save / load / delete` to SQL. + */ +export function postgresStore( + client: PgQueryable, + options?: PostgresStoreOptions, +): SessionStore { + const table = quoteIdent(options?.tableName ?? 'flue_sessions'); + + return { + async save(id: string, data: SessionData): Promise { + await client.query( + `INSERT INTO ${table} (id, data, updated_at) + VALUES ($1, $2::jsonb, NOW()) + ON CONFLICT (id) DO UPDATE + SET data = EXCLUDED.data, + updated_at = NOW()`, + [id, JSON.stringify(data)], + ); + }, + + async load(id: string): Promise { + const { rows } = await client.query<{ data: SessionData }>( + `SELECT data FROM ${table} WHERE id = $1`, + [id], + ); + return rows[0]?.data ?? null; + }, + + async delete(id: string): Promise { + await client.query(`DELETE FROM ${table} WHERE id = $1`, [id]); + }, + }; +} + +// Duplicated in d1.ts on purpose — these recipes are copied independently +// into user projects, so they don't share a helper module. +function quoteIdent(name: string): string { + if (!/^[A-Za-z_][A-Za-z0-9_]*$/.test(name)) { + throw new Error( + `[flue:postgres] Invalid table name "${name}". ` + + 'Use only letters, digits, and underscores; must not start with a digit.', + ); + } + return `"${name}"`; +} +``` + +## Required dependencies + +The user's agent file imports `pg` to construct the client. If their +`package.json` does not already list it, add it: + +```bash +npm install pg +``` + +(Use the user's package manager — `pnpm add`, `yarn add`, etc. if their +lockfile indicates a different one.) + +## Credentials + +This connector needs a Postgres connection at runtime — typically supplied +as a `DATABASE_URL` env var (or split host/user/password/database vars, +whichever the user already follows). **Never invent a value for it** — it +must come from the user. + +Use your judgment for where it should live. The project's conventions, an +`AGENTS.md`, or an existing setup (`.env`, `.dev.vars`, a secret manager, +CI vars, etc.) will usually tell you the right answer. If nothing in the +project gives you a clear signal, ask the user instead of guessing. + +For reference: `flue dev --env ` and `flue run --env ` load +any `.env`-format file the user points them at. + +## Schema + +The store expects this table. 
Run it once against the user's database (or +include it in their migration tool of choice): + +```sql +CREATE TABLE IF NOT EXISTS flue_sessions ( + id TEXT PRIMARY KEY, + data JSONB NOT NULL, + updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW() +); +``` + +Treat `data` as opaque — Flue manages its shape and may evolve `SessionData` +between releases. The table name can be customized via `tableName` in the +store options; the schema is otherwise fixed. + +If the user has a long-session workload (50+ turns, embedded tool results, +hundreds-of-KB rows) and asks about alternatives, point them at the +`Schema choices` section in the docs guide +(`https://github.com/withastro/flue/blob/main/docs/persist-postgres.md`) — +it covers append-log/index and hot/cold alternatives, including the +reconciliation caveats required by Flue's current `save(id, data)` contract. +Don't switch shapes without an explicit ask; single-blob is the right default. + +## Wiring it into an agent + +Here's what using this connector looks like inside a Flue agent. If the +user is already working on an agent that this connector is meant to plug +into, you can finish that work by wiring the connector into it. Otherwise, +share this snippet so they can wire it up themselves. + +```ts +import type { FlueContext } from '@flue/sdk'; +import pg from 'pg'; +import { postgresStore } from '../persist/postgres'; // adjust path to match the user's layout + +export const triggers = { webhook: true }; + +interface Payload { + threadId: string; + message: string; +} + +const pool = new pg.Pool({ connectionString: process.env.DATABASE_URL }); + +export default async function ({ init, payload }: FlueContext) { + const agent = await init({ + model: 'anthropic/claude-sonnet-4-6', + persist: postgresStore(pool), + }); + const session = await agent.session(payload.threadId); + return await session.prompt(payload.message); +} +``` + +The session id (`payload.threadId` here) is whatever the application uses to +identify a conversation thread — typically a customer ID, a chat-room ID, or +anything else stable across requests. + +## Verify + +1. Run the user's typechecker (`npx tsc --noEmit` is a safe default) and + confirm the new file has no errors. +2. Confirm the import path you used for the connector matches where you + actually wrote the file. +3. Tell the user the next steps: run the `CREATE TABLE` against their + database, install `pg` (if you didn't), make sure `DATABASE_URL` is + available at runtime (per the Credentials section above), and run + `flue dev` — or `flue run ` to exercise it. + +For deeper reference (schema alternatives, concurrency model, troubleshooting), +point the user at the docs guide: +`https://github.com/withastro/flue/blob/main/docs/persist-postgres.md`. diff --git a/connectors/persist.md b/connectors/persist.md new file mode 100644 index 0000000..0be297d --- /dev/null +++ b/connectors/persist.md @@ -0,0 +1,107 @@ +--- +{ + "category": "persist", + "root": true +} +--- + +# Generic Persist Connector + +## Goal + +You are an AI coding agent being asked to build a Flue **persist** connector +for a database that Flue does not have a built-in recipe for. The deliverable +is one file in the user's project that exports a `SessionStore` for the +backend, satisfying Flue's published contract. + +There's no fixed procedure for getting there — your backend's shape (typed +SDK, HTTP-only client, ORM, raw driver) will dictate most of how you +implement it. 
The notes below are the things you can't reasonably infer from +the contract or the worked examples. + +## Starting point + +The user invoked `flue add --category persist` with this argument as +their starting point for the database's documentation: + +`{{URL}}` + +It's user-provided and was passed through verbatim — it might be a docs root, +a client-library reference, a GitHub repo, or a marketing page. Treat it as a +hint, not a verified docs link, and use your judgment on where to go from +there to collect the necessary information. + +## References + +Read these before writing code. + +- **Contract** — the `SessionStore` interface and the `SessionData` type are + exported from `@flue/sdk/client`. There is no separate spec document; the + TypeScript types are authoritative. The shape: + + ```ts + interface SessionStore { + save(id: string, data: SessionData): Promise; + load(id: string): Promise; + delete(id: string): Promise; + } + ``` + + `SessionData` is opaque — Flue manages its shape and may evolve it between + releases. Persist it as a single blob and don't reach into it. + +- **Worked examples** — two finished connectors. Both use the single-blob + shape; they differ in DBMS (relational vs Cloudflare's SQLite) and in how + the client is supplied (env-var connection string vs wrangler binding): + - Postgres: `https://flueframework.com/cli/connectors/postgres.md` + - D1: `https://flueframework.com/cli/connectors/d1.md` + +## Flue-specific conventions + +These are the things that aren't obvious from the contract or the examples. + +- **File location.** `./.flue/persist/.ts` if the project uses the + `.flue/` layout, or `./persist/.ts` for the root layout. Ask the + user if their layout is unusual. +- **Imports.** `SessionStore` and `SessionData` are exported from + `@flue/sdk/client`. Don't import from `@flue/sdk/internal` or any other + internal path. +- **`data` is opaque.** Flue manages the shape of `SessionData` and may + evolve it between releases. Persist it as a single JSON blob (JSONB, + TEXT, BSON, etc. depending on the backend). Don't reach into it from the + store, and tell the user not to query it from outside Flue either — + they should add their own application-owned columns alongside if they + need queryable session-level state. +- **Credentials.** If the backend needs secrets at runtime, never invent + values for them. Let the project's conventions (`AGENTS.md`, an existing + `.env` / `.dev.vars`, a secret manager, CI vars) decide where they + belong, and ask the user only if nothing in the project gives a clear + signal. +- **Client lifecycle is the user's.** Accept an already-configured client + (Pool, Connection, binding) as the first argument; don't construct one + inside the store. Same shape as the sandbox connectors. +- **Single-blob is the right default.** One row per session, the entire + `SessionData` blob in one column, rewritten on every `save()`. The + Postgres recipe walks through append-log and hot/cold alternatives, + including the reconciliation caveats. Start with single-blob unless the + user has specific needs. + +## Wrapping up + +- Typecheck the project (`npx tsc --noEmit` is safe). Fix anything you broke. +- If the user is mid-task on an agent that this store is meant to plug into, + finish that wiring (`init({ persist: (client) })`). Otherwise + share a small snippet showing how to wire it up. 
+- Tell the user what to run next: any new deps you added, the schema / + migration command for their backend, env vars they need to set, and the + command to start the agent (`flue dev` or equivalent). + +## Hard rules + +- Never invent connection strings, API keys, or secrets. +- Don't modify files outside the connector path you've chosen unless the + user agreed (e.g. `package.json` to add a driver dep). +- The published surface is `@flue/sdk/client` for `SessionStore` / + `SessionData`. Don't import from `@flue/sdk/internal` or anywhere else. +- Treat `SessionData` as opaque. Don't pretty-print, transform, or index + into it from inside the store. diff --git a/docs/deploy-cloudflare.md b/docs/deploy-cloudflare.md index f03137b..99a4d52 100644 --- a/docs/deploy-cloudflare.md +++ b/docs/deploy-cloudflare.md @@ -244,7 +244,7 @@ done - **No container** — Still running on a virtual sandbox. Fast startup, low cost. - **Persistent data** — The knowledge base lives in R2 and persists across requests. - **Agent-native search** — The agent uses grep and glob to find relevant articles, just like it would in a real filesystem. -- **Session persistence** — Because this deploys to Cloudflare Workers with Durable Objects, message history and session state are automatically persisted. A customer can revisit a support session days later and pick up where they left off. +- **Session persistence** — Because this deploys to Cloudflare Workers with Durable Objects, message history and session state are automatically persisted. A customer can revisit a support session days later and pick up where they left off. If you need sessions queryable from outside the agent process (admin dashboards, separate Workers, BI exports), see [Persist sessions in D1](./persist-d1.md) for an alternative store. ## Connecting a remote sandbox diff --git a/docs/deploy-node.md b/docs/deploy-node.md index 3172212..1c289e0 100644 --- a/docs/deploy-node.md +++ b/docs/deploy-node.md @@ -332,7 +332,7 @@ export default async function ({ init, payload }: FlueContext) { } ``` -You can back this with any database: SQLite, Postgres, Redis, etc. +You can back this with any database: SQLite, Postgres, Redis, etc. For a worked Postgres example, see [Persist sessions in Postgres](./persist-postgres.md). ## Building and deploying diff --git a/docs/persist-d1.md b/docs/persist-d1.md new file mode 100644 index 0000000..668e1de --- /dev/null +++ b/docs/persist-d1.md @@ -0,0 +1,192 @@ +# Persist Sessions in D1 + +This guide takes a working Flue agent on the Cloudflare target and gives it durable session state backed by [Cloudflare D1](https://developers.cloudflare.com/d1/) — useful when sessions need to be queryable from outside the agent process (admin dashboards, separate UI Workers, BI exports). + +## When you'd want this + +By default, Flue agents on Cloudflare persist sessions automatically via the per-agent Durable Object's storage. That's the fast path — sub-millisecond reads, strong consistency, no extra binding to configure. Switch to D1 when you need: + +- **Sessions queryable from a different Worker.** A separate UI/admin Worker can read agent sessions without the per-agent DO becoming a bottleneck. +- **Sessions queryable across all agents in one query.** "Show me every session that mentioned product X this week" is a single SQL query against D1, vs. iterating every DO. +- **A familiar SQL surface for downstream tooling.** D1 exports cleanly to BI; DO storage doesn't. 
+ +If none of those apply, stick with the default DO-SQLite store — it's faster, simpler, and ships out of the box. + +## Prerequisites + +You should already have a Flue project that builds and runs on the Cloudflare target. If you don't yet, start with [Deploy on Cloudflare](./deploy-cloudflare.md). + +You also need: + +- **A D1 database.** Create one with `npx wrangler d1 create ` and bind it in your `wrangler.jsonc`: + + ```jsonc + "d1_databases": [ + { "binding": "DB", "database_name": "", "database_id": "" } + ] + ``` + + No SDK install is needed — D1 ships with the Workers runtime. + +## Install the connector + +Install the D1 session-store connector with `flue add`. Always pass `--print` — it's the safe default whether you're a human pasting the output into your coding agent of choice, or an agent running this command yourself: + +```bash +# Print the install instructions and let your agent (or you) handle the rest +flue add d1 --print + +# Or pipe directly to a coding agent +flue add d1 --print | claude +``` + +This drops a `persist/d1.ts` file into your workspace (under `.flue/persist/` if you're using the `.flue/` layout, or `persist/` at the project root otherwise) and walks the agent through the schema, `wrangler.jsonc` binding, and agent wiring. + +The connector is a small TypeScript adapter (~70 lines) that wraps a Cloudflare D1 binding into Flue's `SessionStore` interface. + +## Create the table + +The connector instructions include the schema, but for reference: + +```bash +npx wrangler d1 execute --remote --command=" + CREATE TABLE IF NOT EXISTS flue_sessions ( + id TEXT PRIMARY KEY, + data TEXT NOT NULL, + updated_at INTEGER NOT NULL + ); +" +``` + +For local development, first run `npx flue build --target cloudflare`, then run the local schema command against the generated config that Flue passes to Wrangler: + +```bash +npx flue build --target cloudflare +(cd dist && npx wrangler d1 execute --local --config wrangler.jsonc --command=" + CREATE TABLE IF NOT EXISTS flue_sessions ( + id TEXT PRIMARY KEY, + data TEXT NOT NULL, + updated_at INTEGER NOT NULL + ); +") +``` + +Local D1 state is keyed by Wrangler's config path. Flue's Cloudflare dev server uses `dist/wrangler.jsonc`, not the project-root `wrangler.jsonc`, so running the local migration against the root config creates a different local SQLite file. + +The schema intentionally mirrors the table the default DO-SQLite store creates inside each agent's Durable Object — same column names, same types — so a future migration tool could move rows between the two without translation. + +## Use the store in your agent + +The connector instructions include the agent-wiring snippet, but for reference: + +```typescript +import type { FlueContext } from '@flue/sdk'; +import { d1Store } from '../persist/d1'; + +export const triggers = { webhook: true }; + +interface Payload { + threadId: string; + message: string; +} + +export default async function ({ init, payload, env }: FlueContext) { + const agent = await init({ + model: 'anthropic/claude-sonnet-4-6', + persist: d1Store(env.DB), + }); + const session = await agent.session(payload.threadId); + return await session.prompt(payload.message); +} +``` + +The session id (`payload.threadId` here) is whatever your application uses to identify a conversation thread. Flue keys session storage on this id. 
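Because sessions now live in a plain D1 table, any other Worker bound to the same database can read them, which is the scenario that motivates this store. Here's a minimal sketch of a separate admin Worker; the binding name (`DB`), the `@cloudflare/workers-types` dependency, and the Worker itself are assumptions, and the table name is the connector's default:

```ts
// admin-worker.ts (hypothetical): a separate Worker listing recent agent sessions.
// Assumes @cloudflare/workers-types is installed for the D1Database type.
interface Env {
  DB: D1Database; // bound to the same D1 database as the agent
}

export default {
  async fetch(_req: Request, env: Env): Promise<Response> {
    const { results } = await env.DB
      .prepare(
        'SELECT id, updated_at FROM flue_sessions ORDER BY updated_at DESC LIMIT 20',
      )
      .all();
    // Session metadata only: the `data` column is opaque to everything but Flue.
    return Response.json(results);
  },
};
```

Note the query touches only `id` and `updated_at`. If you need richer queryable metadata, add your own application-owned columns rather than parsing `data`.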
+ +## Verify locally + +Use Flue's Cloudflare dev server with the local D1 database: + +```bash +npx flue build --target cloudflare +(cd dist && npx wrangler d1 execute --local --config wrangler.jsonc --command=" + CREATE TABLE IF NOT EXISTS flue_sessions ( + id TEXT PRIMARY KEY, + data TEXT NOT NULL, + updated_at INTEGER NOT NULL + ); +") +npx flue dev --target cloudflare +``` + +Then, in another shell: + +```bash +curl -X POST http://localhost:3583/agents/with-d1-persist/thread-1 \ + -H 'Content-Type: application/json' \ + -d '{"threadId":"thread-1","message":"My name is Maya."}' + +curl -X POST http://localhost:3583/agents/with-d1-persist/thread-1 \ + -H 'Content-Type: application/json' \ + -d '{"threadId":"thread-1","message":"What did I just say my name was?"}' +``` + +The second response should reference "Maya" — that's the round-trip working. Inspect the row directly to confirm: + +```bash +(cd dist && npx wrangler d1 execute --local --config wrangler.jsonc \ + --command="SELECT id, length(data), updated_at FROM flue_sessions;") +``` + +## Schema choices (and when to revisit them) + +The schema above is the simplest shape that satisfies Flue's `SessionStore` contract: one row per session, the entire `SessionData` blob in a TEXT column, rewritten on every `save()`. Same trade-offs as the [Postgres single-blob option](./persist-postgres.md#schema-choices-and-when-to-revisit-them) — see that section for append-log and hot/cold alternatives. The D1 versions of those alternatives are possible, but they are not a mechanical copy-paste: an append-log adapter still has to treat each `save(id, data)` as the authoritative current session and reconcile entries that disappeared from the latest `SessionData`. + +For most workloads, start with the single-blob shape — that's what the connector ships. Revisit append-log only when you hit row size pressure or need to query session metadata at scale, and budget for the extra reconciliation logic. + +## D1 vs the default DO-SQLite store + +The default DO-SQLite store (used automatically when you don't pass `persist`) and `d1Store` solve overlapping problems with different trade-offs: + +| Dimension | Default DO-SQLite | D1 store | +|---|---|---| +| Setup | Zero — automatic | Create database + binding + table | +| Read latency | Sub-millisecond (in-memory tier) | Single-digit milliseconds | +| Write latency | Sub-millisecond | Single-digit milliseconds | +| Queryable from another Worker? | No — DO-private | Yes | +| Queryable across all sessions? | No — per-DO scope | Yes (one D1, all rows) | +| BI / admin tooling | Manual export | Native SQL export | +| Cost | Included in DO usage | D1 reads/writes billed separately | +| Best for | Hot-path session resume | Multi-Worker apps, admin dashboards, analytics | + +You can use both. Make the agent default to DO-SQLite (don't pass `persist`) for hot reads, and run a small periodic Worker that mirrors completed sessions into D1 for analytics. + +## What Flue manages vs what you manage + +| Flue manages | You manage | +|---|---| +| The shape of `SessionData` | The D1 database — creation, binding, billing | +| The `SessionStore` interface contract | The wrangler binding name (`DB` here) | +| When `save` / `load` / `delete` are called | Schema migrations if you customize the table | +| Compaction — the `data` column may shrink over time | Indexes if you query into `data` yourself | + +Treat `data` as opaque. 
Don't query into it from application code — internal shape is not a stable interface and may change between releases. Add your own columns if you need queryable session-level state (e.g. a `customer_id TEXT` column populated from your application). + +## Concurrency + +Same model as the [Postgres guide](./persist-postgres.md#concurrency): `INSERT ... ON CONFLICT(id) DO UPDATE` is last-writer-wins on `save(id, data)`. Within a single Worker instance Flue gates concurrent operations on the same session via `runExclusive`. Across instances — D1 is shared, so concurrent saves can interleave and the last commit wins. Cloudflare's request routing typically sends the same session id to the same Worker instance via the agent DO, so cross-instance races are rare; if your application routes outside the Flue handler, fence at the application layer. + +## Troubleshooting + +**`no such table: flue_sessions`** — run the `CREATE TABLE` for your environment. Local DB is separate from remote — you may need both. For local `flue dev`, run the local schema command against `dist/wrangler.jsonc` after `npx flue build --target cloudflare`; using the root config writes to a different local D1 database. + +**`D1_ERROR: ...`** — D1 errors carry the underlying SQLite error class. Most often it's the missing-table case above; otherwise check the SQL against [D1's SQLite dialect notes](https://developers.cloudflare.com/d1/sql-api/sql-statements/). + +**Sessions disappear between requests** — confirm the same `payload.threadId` is being passed to `agent.session(id)`. If the id changes per request, every call starts a fresh session. + +**`SyntaxError: Unexpected token in JSON`** — `JSON.parse` failed because the row's `data` column is malformed. Most often happens because someone wrote into the column from outside Flue. Treat `data` as opaque. + +## Other databases + +`d1Store` is small (~70 lines). Adapting it to a different SQLite-compatible backend (Turso, libSQL, raw `better-sqlite3`) is mostly mechanical: swap `db.prepare(...).bind(...).run()` for the equivalent on your client. The schema-choice trade-offs above carry across. + +If Flue doesn't have a built-in connector for your backend yet, `flue add --category persist` will pipe a generic recipe to your agent. If you ship one, consider opening a pull request with both the connector (`connectors/persist--.md`) and a docs guide modeled on this one or the [Postgres guide](./persist-postgres.md). diff --git a/docs/persist-postgres.md b/docs/persist-postgres.md new file mode 100644 index 0000000..84bf9c3 --- /dev/null +++ b/docs/persist-postgres.md @@ -0,0 +1,212 @@ +# Persist Sessions in Postgres + +This guide takes a working Flue agent on the Node target and gives it durable session state backed by Postgres — so multi-turn conversations survive process restarts, rolling deploys, and autoscaling events. + +## When you'd want this + +The default `InMemorySessionStore` is fine for development and stateless agents. Switch to a Postgres-backed store when: + +- You're deploying to **Render, Fly, Railway, EC2 + autoscaling**, or anywhere a redeploy or rolling restart can happen mid-conversation. +- Your agent runs **multi-turn workflows** that span minutes to hours (a customer-support thread, a long research task, an async tool-call loop). +- You already operate Postgres and would rather not stand up another datastore. + +If you're on Cloudflare, you don't need this — Durable Object storage handles persistence automatically. 
For the D1 equivalent (queryable from other Workers / admin tooling), see [Persist sessions in D1](./persist-d1.md). + +## Prerequisites + +You should already have a Flue project that builds and runs on the Node target. If you don't yet, start with [Deploy on Node.js](./deploy-node.md) and come back here. + +You also need: + +- **A reachable Postgres** (managed Postgres, a docker-compose container, anything that speaks the wire protocol). +- **The `pg` package** in your project. Install it with whatever package manager your project uses: + + ```bash + npm install pg + ``` + +## Install the connector + +Install the Postgres session-store connector with `flue add`. Always pass `--print` — it's the safe default whether you're a human pasting the output into your coding agent of choice, or an agent running this command yourself: + +```bash +# Print the install instructions and let your agent (or you) handle the rest +flue add postgres --print + +# Or pipe directly to a coding agent +flue add postgres --print | claude +``` + +This drops a `persist/postgres.ts` file into your workspace (under `.flue/persist/` if you're using the `.flue/` layout, or `persist/` at the project root otherwise) and walks the agent through the schema, `pg` install, and agent wiring. + +The connector is a small TypeScript adapter (~50 lines) that wraps a configured `pg.Client` or `pg.Pool` into Flue's `SessionStore` interface. + +## Create the table + +The connector instructions include the `CREATE TABLE` to run, but for reference: + +```sql +CREATE TABLE IF NOT EXISTS flue_sessions ( + id TEXT PRIMARY KEY, + data JSONB NOT NULL, + updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW() +); +``` + +That's the whole schema. Treat the `data` column as opaque — Flue manages its shape and may evolve `SessionData` between releases. You can pick a different table name if you prefer; the store accepts it as an option. + +## Use the store in your agent + +The connector instructions include the agent-wiring snippet, but for reference: + +```typescript +import type { FlueContext } from '@flue/sdk'; +import pg from 'pg'; +import { postgresStore } from '../persist/postgres'; + +export const triggers = { webhook: true }; + +const pool = new pg.Pool({ connectionString: process.env.DATABASE_URL }); + +export default async function ({ init, payload }: FlueContext) { + const agent = await init({ + model: 'anthropic/claude-sonnet-4-6', + persist: postgresStore(pool), + }); + const session = await agent.session(payload.threadId); + return await session.prompt(payload.message); +} +``` + +The session id (`payload.threadId` here) is whatever your application uses to identify a conversation thread — typically a customer ID, a chat-room ID, or anything else stable across requests. Flue keys session storage on this id. 
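The connector never constructs or closes clients, so pool sizing and teardown stay with your application. A minimal sketch of one way to handle a rolling-deploy shutdown (the `max` value is an assumption to tune, not a Flue recommendation):

```ts
import pg from 'pg';

const pool = new pg.Pool({
  connectionString: process.env.DATABASE_URL,
  max: 10, // pg's default; size to your agent's concurrency
  idleTimeoutMillis: 30_000,
});

// Drain connections before the process exits so in-flight saves finish cleanly.
process.on('SIGTERM', () => {
  void pool.end();
});
```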
+ +## Verify locally + +The fastest path to a working setup is `docker-compose` with a throwaway Postgres: + +```yaml +# docker-compose.yml +services: + postgres: + image: postgres:16 + environment: + POSTGRES_PASSWORD: flue + ports: ['5432:5432'] +``` + +```bash +docker compose up -d +DATABASE_URL=postgres://postgres:flue@localhost:5432/postgres \ + npx flue run with-postgres-persist --target node --id thread-1 \ + --payload '{"threadId":"thread-1","message":"My name is Maya."}' + +DATABASE_URL=postgres://postgres:flue@localhost:5432/postgres \ + npx flue run with-postgres-persist --target node --id thread-1 \ + --payload '{"threadId":"thread-1","message":"What did I just say my name was?"}' +``` + +The second invocation should reference "Maya" — that's the round-trip working. Inspect the row directly to confirm: + +```bash +docker compose exec postgres psql -U postgres \ + -c "SELECT id, pg_column_size(data) AS bytes, updated_at FROM flue_sessions;" +``` + +## Schema choices (and when to revisit them) + +The schema above is the simplest shape that satisfies Flue's `SessionStore` contract: one row per session, the entire `SessionData` blob in a JSONB column, rewritten on every `save()`. It's the right default for most workloads. Two cases where you'd want something different — both are real shapes that other coding-agent CLIs converged on as their sessions got long. + +### Option 1 (default): single-blob + +What's above. `save(id, data)` does an `INSERT ... ON CONFLICT DO UPDATE`. `load(id)` is a single SELECT. `delete(id)` is a single DELETE. + +- Pros: trivial schema, atomic save, easy to reason about, easy to back up. +- Cons: `save()` rewrites the entire blob every turn. For long sessions (50+ turns with embedded tool results) the row can grow to hundreds of KB and each save costs that much write I/O. JSONB TOAST handling absorbs a lot of this, but row churn still scales with session length. +- **Use when:** sessions are short (under ~30 turns) or low-volume, and you'd rather have a boring schema than save a few percent on writes. + +### Option 2: append-log + index (matches Claude Code, Codex, OpenCode 1.2+) + +Two tables instead of one. The session row holds queryable metadata (id, created/updated, project, last-leaf-id) and the message history lives in an entries table. A production adapter can make common saves cheap by inserting newly observed entries, but every `save(id, data)` still has to treat the incoming `SessionData` as the authoritative current state. + +```sql +CREATE TABLE IF NOT EXISTS flue_sessions ( + id TEXT PRIMARY KEY, + metadata JSONB NOT NULL DEFAULT '{}'::jsonb, + leaf_id TEXT, + created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(), + updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW() +); + +CREATE TABLE IF NOT EXISTS flue_session_entries ( + session_id TEXT NOT NULL REFERENCES flue_sessions(id) ON DELETE CASCADE, + entry_id TEXT NOT NULL, + parent_id TEXT, + ord BIGSERIAL, + type TEXT NOT NULL, + entry JSONB NOT NULL, + PRIMARY KEY (session_id, entry_id) +); + +CREATE INDEX IF NOT EXISTS flue_session_entries_by_session + ON flue_session_entries (session_id, ord); +``` + +This is what Claude Code does with its per-session JSONL files plus a sidecar SQLite index. It's what Codex CLI does. It's what OpenCode shifted to in 1.2. 
+ +- Pros: most saves write only new entries (`O(delta)` instead of `O(history)`); the metadata table is queryable for admin tooling ("which sessions touched project X this week?"); `ON DELETE CASCADE` makes `delete(id)` clean; you can reconstruct the leaf path by walking `parent_id` without parsing a megabyte of JSONB. +- Cons: the adapter is more code (~120 lines instead of 50) because `save()` has to reconcile the full `SessionData` snapshot against persisted rows. The Flue `SessionStore` interface today gives the adapter the *whole* current session on every save, not an explicit event stream. +- **Use when:** sessions are long (50+ turns), you have many concurrent sessions, or you want metadata queryable from outside the agent process. Most multi-tenant agent SaaS lands here eventually. + +The reconciliation strategy in `save()`: pull the existing entry IDs in one query, insert entries whose IDs are new, update the session metadata/leaf pointer, and remove or tombstone any persisted entries that are absent from the latest `SessionData`. Don't blindly append unseen IDs forever. Compaction entries are append-only, but overflow recovery can discard a just-saved leaf entry before retrying, so `load()` must reconstruct the latest authoritative session, not the union of everything ever seen. + +### Option 3: hot/cold split + +A third shape some teams reach for: hot session data in Redis (fast `load`/`save`), cold sessions flushed to Postgres on TTL expiry. Flue's `SessionStore` interface composes — write a wrapper store that delegates to either backend based on a TTL, and you have this without changes to Flue. + +- Pros: sub-millisecond `load()` for active sessions; bounded Postgres growth. +- Cons: two systems to operate; the failure mode is "user resumes a session right as the TTL expires" — make sure the cold-write happens before the hot eviction, not after. +- **Use when:** you've got the volume to need it, and a small Redis is already in your infra. + +### Recommendation + +Start with Option 1 — that's what the connector ships. Migrate to Option 2 if you hit any of: rows over ~500 KB, sessions where `save()` is the slow path, or operational pressure to query session metadata without going through the agent. Migrate to Option 3 only when Postgres write volume is genuinely the bottleneck — which for most agent workloads, it isn't. + +If you want a worked Option 2 schema with the diff-on-save adapter, open an issue describing your workload and we'll either point at a community recipe or write one. + +## What Flue manages vs what you manage + +| Flue manages | You manage | +|---|---| +| The shape of `SessionData` (entries, leafId, metadata, compaction state) | The Postgres database — provisioning, credentials, TLS, backups | +| The `SessionStore` interface contract | The `pg` client lifecycle — pool sizing, connection management, teardown | +| When `save` / `load` / `delete` are called | Schema migrations if you customize the table | +| Compaction — the `data` column may shrink over time | Indexes on `data` if you query into the JSON yourself | + +Flue treats `data` as opaque. Don't query into it from application code — internal shape is not a stable interface and may change between releases. Add your own columns if you need queryable session-level state (e.g. a `customer_id TEXT` column populated from your application). + +## Concurrency + +The store is **last-writer-wins** on `save(id, data)`. 
Within a single Node process Flue gates concurrent operations on the same session via `runExclusive`, so two `prompt`/`skill`/`task` calls on the same `Session` instance don't race. **Across processes** — multi-instance Render service, multiple workers behind a load balancer — two saves for the same session id can interleave, and the last commit wins. + +If your application routes multiple in-flight requests for the same session id to multiple processes, fence at your application layer: + +- **Sticky routing.** Load-balancer hashing on the session id sends the same id to the same process; combined with Flue's `runExclusive`, this is sufficient. +- **Application-level lock.** A small Redis-or-similar lock around the Flue handler. Most agent workloads don't need this. + +The store does not implement optimistic concurrency or distributed locks. If you need them, build them on top — or open an issue describing the workload. + +## Troubleshooting + +**`relation "flue_sessions" does not exist`** — run the `CREATE TABLE` from the [Create the table](#create-the-table) section. + +**`cannot find module 'pg'`** — install it: `npm install pg`. Real projects import `pg` statically at the top of the agent (so the import error fails the build rather than the request); the `postgresStore` recipe doesn't import `pg` itself. + +**Sessions disappear between requests** — confirm the same `payload.threadId` (or whatever you key on) is being passed to `agent.session(id)`. If the id changes per request, every call starts a fresh session. + +**`SyntaxError: Unexpected token in JSON`** — `JSON.parse` failed in the SDK because the row's `data` column is malformed. Most often this happens because someone wrote into the column from outside Flue. Treat `data` as opaque. + +## Other databases + +`postgresStore` is a small adapter — about 50 lines of code. Adapting it to MySQL, SQLite, MongoDB, DynamoDB, or Redis is mostly mechanical: implement `save` / `load` / `delete` against your client of choice. The schema-choice trade-offs above carry across — most relational backends fit Option 1 or Option 2, most KV backends fit Option 3 cleanly. + +If Flue doesn't have a built-in connector for your backend yet, `flue add --category persist` will pipe a generic recipe to your agent that walks through the same shape with your backend's docs as the starting point. If you ship one, consider opening a pull request with both the connector (`connectors/persist--.md`) and a docs guide modeled on this one. diff --git a/examples/cloudflare/.flue/agents/with-d1-persist.ts b/examples/cloudflare/.flue/agents/with-d1-persist.ts new file mode 100644 index 0000000..a61be21 --- /dev/null +++ b/examples/cloudflare/.flue/agents/with-d1-persist.ts @@ -0,0 +1,28 @@ +import type { FlueContext } from '@flue/sdk'; +import { d1Store } from '../persist/d1'; + +export const triggers = { webhook: true }; + +interface Payload { + threadId: string; + message: string; +} + +/** + * Demonstrates durable session state backed by Cloudflare D1. + * + * Two requests with the same `threadId` against the same D1 binding share a + * single Flue session — useful when sessions need to be queryable from + * outside the agent process (e.g. from a separate UI Worker or admin tool). + * For per-instance hot-path persistence, the default DO SQLite store is + * usually a better fit. See docs/persist-d1.md for the schema and trade-offs. 
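 * Try it locally with `npx flue dev --target cloudflare`, then POST to
 * http://localhost:3583/agents/with-d1-persist/thread-1 with a JSON body
 * like {"threadId":"thread-1","message":"Hi"} (see docs/persist-d1.md).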
+ */ +export default async function ({ init, payload, env }: FlueContext) { + const agent = await init({ + model: 'anthropic/claude-haiku-4-5', + persist: d1Store(env.DB), + }); + const session = await agent.session(payload.threadId); + const response = await session.prompt(payload.message); + return { threadId: payload.threadId, response: response.text }; +} diff --git a/examples/cloudflare/.flue/persist/d1.ts b/examples/cloudflare/.flue/persist/d1.ts new file mode 100644 index 0000000..86bb291 --- /dev/null +++ b/examples/cloudflare/.flue/persist/d1.ts @@ -0,0 +1,81 @@ +import type { SessionStore, SessionData } from '@flue/sdk/client'; + +/** Structural subset of Cloudflare's `D1Database`. */ +interface D1Like { + prepare(sql: string): { + bind(...values: unknown[]): { + first(): Promise; + run(): Promise; + }; + }; +} + +export interface D1StoreOptions { + /** Table name. Defaults to `flue_sessions`. */ + tableName?: string; +} + +/** + * Wrap a Cloudflare D1 binding into a Flue `SessionStore`. Pass `env.DB` + * (or whatever your binding name is). Typed as `unknown` to match the + * convention used by `getVirtualSandbox(bucket: unknown)` in the same + * package — users with `@cloudflare/workers-types` installed pass a + * `D1Database`, users without it work fine too. + */ +export function d1Store(db: unknown, options?: D1StoreOptions): SessionStore { + const table = quoteIdent(options?.tableName ?? 'flue_sessions'); + const d1 = asD1Like(db); + + return { + async save(id: string, data: SessionData): Promise { + await d1 + .prepare( + `INSERT INTO ${table} (id, data, updated_at) + VALUES (?1, ?2, ?3) + ON CONFLICT(id) DO UPDATE SET + data = excluded.data, + updated_at = excluded.updated_at`, + ) + .bind(id, JSON.stringify(data), Date.now()) + .run(); + }, + + async load(id: string): Promise { + const row = await d1 + .prepare(`SELECT data FROM ${table} WHERE id = ?1`) + .bind(id) + .first<{ data: string }>(); + return row ? (JSON.parse(row.data) as SessionData) : null; + }, + + async delete(id: string): Promise { + await d1.prepare(`DELETE FROM ${table} WHERE id = ?1`).bind(id).run(); + }, + }; +} + +function asD1Like(db: unknown): D1Like { + if ( + db === null || + typeof db !== 'object' || + typeof (db as { prepare?: unknown }).prepare !== 'function' + ) { + throw new Error( + '[flue:d1] Expected a Cloudflare D1 binding. Pass env.DB ' + + '(or your configured binding name) to d1Store().', + ); + } + return db as D1Like; +} + +// Duplicated in postgres.ts on purpose — these recipes are copied +// independently into user projects, so they don't share a helper module. +function quoteIdent(name: string): string { + if (!/^[A-Za-z_][A-Za-z0-9_]*$/.test(name)) { + throw new Error( + `[flue:d1] Invalid table name "${name}". 
` + + 'Use only letters, digits, and underscores; must not start with a digit.', + ); + } + return `"${name}"`; +} diff --git a/examples/cloudflare/tsconfig.json b/examples/cloudflare/tsconfig.json index b361441..3a417ed 100644 --- a/examples/cloudflare/tsconfig.json +++ b/examples/cloudflare/tsconfig.json @@ -1,3 +1,5 @@ { - "extends": "../../tsconfig.base.json" + "extends": "../../tsconfig.base.json", + "include": [".flue/**/*.ts"], + "exclude": ["dist"] } diff --git a/examples/cloudflare/wrangler.jsonc b/examples/cloudflare/wrangler.jsonc index 6effa22..2b101b4 100644 --- a/examples/cloudflare/wrangler.jsonc +++ b/examples/cloudflare/wrangler.jsonc @@ -9,4 +9,8 @@ "ai": { "binding": "AI" }, + // For .flue/agents/with-d1-persist.ts, create a D1 database and uncomment: + // "d1_databases": [ + // { "binding": "DB", "database_name": "", "database_id": "" } + // ], } diff --git a/examples/hello-world/.flue/agents/with-postgres-persist.ts b/examples/hello-world/.flue/agents/with-postgres-persist.ts new file mode 100644 index 0000000..e66af9a --- /dev/null +++ b/examples/hello-world/.flue/agents/with-postgres-persist.ts @@ -0,0 +1,30 @@ +import type { FlueContext } from '@flue/sdk'; +import pg from 'pg'; +import { postgresStore } from '../persist/postgres'; + +export const triggers = { webhook: true }; + +interface Payload { + threadId: string; + message: string; +} + +const pool = new pg.Pool({ connectionString: process.env.DATABASE_URL }); + +/** + * Demonstrates durable session state backed by Postgres. + * + * Two requests with the same `threadId` against the same `DATABASE_URL` share + * a single Flue session — the second request can recall what the first one + * said. See docs/persist-postgres.md for the schema and a docker-compose + * Postgres for local verification. + */ +export default async function ({ init, payload }: FlueContext) { + const agent = await init({ + model: 'anthropic/claude-haiku-4-5', + persist: postgresStore(pool), + }); + const session = await agent.session(payload.threadId); + const response = await session.prompt(payload.message); + return { threadId: payload.threadId, response: response.text }; +} diff --git a/examples/hello-world/.flue/persist/postgres.ts b/examples/hello-world/.flue/persist/postgres.ts new file mode 100644 index 0000000..8063b3c --- /dev/null +++ b/examples/hello-world/.flue/persist/postgres.ts @@ -0,0 +1,60 @@ +import type { SessionStore, SessionData } from '@flue/sdk/client'; + +/** Structural subset of `pg.Client` and `pg.Pool` — accepts either. */ +interface PgQueryable { + query(sql: string, params?: unknown[]): Promise<{ rows: R[] }>; +} + +export interface PostgresStoreOptions { + /** Table name. Defaults to `flue_sessions`. */ + tableName?: string; +} + +/** + * Wrap a configured `pg` Client or Pool into a Flue `SessionStore`. The user + * owns the client lifecycle (credentials, TLS, pool sizing); this adapter + * just translates `save / load / delete` to SQL. + */ +export function postgresStore( + client: PgQueryable, + options?: PostgresStoreOptions, +): SessionStore { + const table = quoteIdent(options?.tableName ?? 
+/**
+ * Wrap a configured `pg` Client or Pool into a Flue `SessionStore`. The user
+ * owns the client lifecycle (credentials, TLS, pool sizing); this adapter
+ * just translates `save / load / delete` to SQL.
+ */
+export function postgresStore(
+  client: PgQueryable,
+  options?: PostgresStoreOptions,
+): SessionStore {
+  const table = quoteIdent(options?.tableName ?? 'flue_sessions');
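+  // `table` is interpolated into the SQL below; that is safe only because
+  // quoteIdent() restricted it to a strict identifier pattern and
+  // double-quoted it. Session ids and data always travel as bound
+  // parameters ($1, $2), never via string interpolation.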
+
+  return {
+    async save(id: string, data: SessionData): Promise<void> {
+      await client.query(
+        `INSERT INTO ${table} (id, data, updated_at)
+         VALUES ($1, $2::jsonb, NOW())
+         ON CONFLICT (id) DO UPDATE
+           SET data = EXCLUDED.data,
+               updated_at = NOW()`,
+        [id, JSON.stringify(data)],
+      );
+    },
+
+    async load(id: string): Promise<SessionData | null> {
+      const { rows } = await client.query<{ data: SessionData }>(
+        `SELECT data FROM ${table} WHERE id = $1`,
+        [id],
+      );
+      return rows[0]?.data ?? null;
+    },
+
+    async delete(id: string): Promise<void> {
+      await client.query(`DELETE FROM ${table} WHERE id = $1`, [id]);
+    },
+  };
+}
+
+// Duplicated in d1.ts on purpose — these recipes are copied independently
+// into user projects, so they don't share a helper module.
+function quoteIdent(name: string): string {
+  if (!/^[A-Za-z_][A-Za-z0-9_]*$/.test(name)) {
+    throw new Error(
+      `[flue:postgres] Invalid table name "${name}". ` +
+        'Use only letters, digits, and underscores; must not start with a digit.',
+    );
+  }
+  return `"${name}"`;
+}
diff --git a/examples/hello-world/package.json b/examples/hello-world/package.json
index c0d94f6..c634305 100644
--- a/examples/hello-world/package.json
+++ b/examples/hello-world/package.json
@@ -7,6 +7,10 @@
     "@daytona/sdk": "*",
     "agents": "*",
     "just-bash": "^2.14.2",
+    "pg": "^8.13.0",
     "valibot": "^1.0.0"
+  },
+  "devDependencies": {
+    "@types/pg": "^8.11.0"
   }
 }
diff --git a/pnpm-lock.yaml b/pnpm-lock.yaml
index bf06d13..d949c17 100644
--- a/pnpm-lock.yaml
+++ b/pnpm-lock.yaml
@@ -106,9 +106,16 @@ importers:
       just-bash:
         specifier: ^2.14.2
         version: 2.14.4
+      pg:
+        specifier: ^8.13.0
+        version: 8.20.0
       valibot:
         specifier: ^1.0.0
         version: 1.3.1(typescript@5.9.3)
+    devDependencies:
+      '@types/pg':
+        specifier: ^8.11.0
+        version: 8.20.0

   packages/cli:
     dependencies:
@@ -1394,10 +1401,12 @@ packages:
   '@mariozechner/pi-agent-core@0.73.0':
     resolution: {integrity: sha512-ugcpvq0X9fr9fTSK29/3S4+KU/eeVMrBb7ZU3HqiF3xD7I1GlgumLj4FYmDrYSEA6+rzgNWlJUKwjKh9o0Z6AA==}
     engines: {node: '>=20.0.0'}
+    deprecated: please use @earendil-works/pi-agent-core instead going forward

   '@mariozechner/pi-ai@0.73.0':
     resolution: {integrity: sha512-phKOpcde/ssz6UYszkmaGJ9LF9mgt/AP8LrtSwsfap+kMSeFfSQ2/mCSBT1mLJ2BqVuff9uXs1/+op1aQeaafQ==}
     engines: {node: '>=20.0.0'}
+    deprecated: please use @earendil-works/pi-ai instead going forward
     hasBin: true

   '@mistralai/mistralai@2.2.1':
@@ -2411,6 +2420,9 @@
   '@types/node@22.19.17':
     resolution: {integrity: sha512-wGdMcf+vPYM6jikpS/qhg6WiqSV/OhG+jeeHT/KlVqxYfD40iYJf9/AE1uQxVWFvU7MipKRkRv8NSHiCGgPr8Q==}

+  '@types/pg@8.20.0':
+    resolution: {integrity: sha512-bEPFOaMAHTEP1EzpvHTbmwR8UsFyHSKsRisLIHVMXnpNefSbGA1bD6CVy+qKjGSqmZqNqBDV2azOBo8TgkcVow==}
+
   '@types/retry@0.12.0':
     resolution: {integrity: sha512-wWKOClTTiizcZhXnPY4wikVAwmdYHp8q6DmC+EJUzAMsycb7HB32Kh9RN4+0gExjmPmZSAQjgURXIGATPegAvA==}

@@ -2419,6 +2431,7 @@
   '@ungap/structured-clone@1.3.0':
     resolution: {integrity: sha512-WmoN8qaIAo7WTYWbAZuG8PYEhn5fkz7dZrqTBZ7dtt//lL2Gwms1IcnQ5yHqjDfX8Ft5j4YzDM23f87zBfDe9g==}
+    deprecated: Potential CWE-502 - Update to 1.3.1 or higher

   '@valibot/to-json-schema@1.6.0':
     resolution: {integrity: sha512-d6rYyK5KVa2XdqamWgZ4/Nr+cXhxjy7lmpe6Iajw15J/jmU+gyxl2IEd1Otg1d7Rl3gOQL5reulnSypzBtYy1A==}

@@ -4032,6 +4045,40 @@
   pathe@2.0.3:
     resolution: {integrity: sha512-WUjGcAqP1gQacoQe+OBJsFA7Ld4DyXuUIjZ5cc75cLHvJ7dtNsTugphxIADwspS+AraAUePCKrSVtPLFj/F88w==}

+  pg-cloudflare@1.3.0:
+    resolution: {integrity: sha512-6lswVVSztmHiRtD6I8hw4qP/nDm1EJbKMRhf3HCYaqud7frGysPv7FYJ5noZQdhQtN2xJnimfMtvQq21pdbzyQ==}
+
+  pg-connection-string@2.12.0:
+    resolution: {integrity: sha512-U7qg+bpswf3Cs5xLzRqbXbQl85ng0mfSV/J0nnA31MCLgvEaAo7CIhmeyrmJpOr7o+zm0rXK+hNnT5l9RHkCkQ==}
+
+  pg-int8@1.0.1:
+    resolution: {integrity: sha512-WCtabS6t3c8SkpDBUlb1kjOs7l66xsGdKpIPZsg4wR+B3+u9UAum2odSsF9tnvxg80h4ZxLWMy4pRjOsFIqQpw==}
+    engines: {node: '>=4.0.0'}
+
+  pg-pool@3.13.0:
+    resolution: {integrity: sha512-gB+R+Xud1gLFuRD/QgOIgGOBE2KCQPaPwkzBBGC9oG69pHTkhQeIuejVIk3/cnDyX39av2AxomQiyPT13WKHQA==}
+    peerDependencies:
+      pg: '>=8.0'
+
+  pg-protocol@1.13.0:
+    resolution: {integrity: sha512-zzdvXfS6v89r6v7OcFCHfHlyG/wvry1ALxZo4LqgUoy7W9xhBDMaqOuMiF3qEV45VqsN6rdlcehHrfDtlCPc8w==}
+
+  pg-types@2.2.0:
+    resolution: {integrity: sha512-qTAAlrEsl8s4OiEQY69wDvcMIdQN6wdz5ojQiOy6YRMuynxenON0O5oCpJI6lshc6scgAY8qvJ2On/p+CXY0GA==}
+    engines: {node: '>=4'}
+
+  pg@8.20.0:
+    resolution: {integrity: sha512-ldhMxz2r8fl/6QkXnBD3CR9/xg694oT6DZQ2s6c/RI28OjtSOpxnPrUCGOBJ46RCUxcWdx3p6kw/xnDHjKvaRA==}
+    engines: {node: '>= 16.0.0'}
+    peerDependencies:
+      pg-native: '>=3.0.1'
+    peerDependenciesMeta:
+      pg-native:
+        optional: true
+
+  pgpass@1.0.5:
+    resolution: {integrity: sha512-FdW9r/jQZhSeohs1Z3sI1yxFQNFvMcnmfuj4WBMUTxOrAyLMaTcE1aAMBiTlbMNaXvBCQuVi0R7hd8udDSP7ug==}
+
   piccolore@0.1.3:
     resolution: {integrity: sha512-o8bTeDWjE086iwKrROaDf31K0qC/BENdm15/uH9usSC/uZjJOKb2YGiVHfLY4GhwsERiPI1jmwI2XrA7ACOxVw==}

@@ -4062,6 +4109,22 @@
     resolution: {integrity: sha512-SoSL4+OSEtR99LHFZQiJLkT59C5B1amGO1NzTwj7TT1qCUgUO6hxOvzkOYxD+vMrXBM3XJIKzokoERdqQq/Zmg==}
     engines: {node: ^10 || ^12 || >=14}

+  postgres-array@2.0.0:
+    resolution: {integrity: sha512-VpZrUqU5A69eQyW2c5CA1jtLecCsN2U/bD6VilrFDWq5+5UIEVO7nazS3TEcHf1zuPYO/sqGvUvW62g86RXZuA==}
+    engines: {node: '>=4'}
+
+  postgres-bytea@1.0.1:
+    resolution: {integrity: sha512-5+5HqXnsZPE65IJZSMkZtURARZelel2oXUEO8rH83VS/hxH5vv1uHquPg5wZs8yMAfdv971IU+kcPUczi7NVBQ==}
+    engines: {node: '>=0.10.0'}
+
+  postgres-date@1.0.7:
+    resolution: {integrity: sha512-suDmjLVQg78nMK2UZ454hAG+OAW+HQPZ6n++TNDUX+L0+uUlLywnoxJKDou51Zm+zTCjrCl0Nq6J9C5hP9vK/Q==}
+    engines: {node: '>=0.10.0'}
+
+  postgres-interval@1.2.0:
+    resolution: {integrity: sha512-9ZhXKM/rw350N1ovuWHbGxnGh/SNJ4cnxHiM0rxE4VN41wsg8P8zWn9hv/buK00RP4WvlOyr/RBDiptyxVbkZQ==}
+    engines: {node: '>=0.10.0'}
+
   prebuild-install@7.1.3:
     resolution: {integrity: sha512-8Mf2cbV7x1cXPUILADGI3wuhfqWvtiLA1iclTDbFRZkgRQS0NqsPZphna9V+HyTEadheuPmjaJMsbzKQFOzLug==}
     engines: {node: '>=10'}

@@ -4388,6 +4451,10 @@
   space-separated-tokens@2.0.2:
     resolution: {integrity: sha512-PEGlAwrG8yXGXRjW32fGbg66JAlOAwbObuqVoJpv/mRgoWDQfgH1wDPvtzWyUSNAXBGSk8h755YDbbcEy3SH2Q==}

+  split2@4.2.0:
+    resolution: {integrity: sha512-UcjcJOWknrNkF6PLX83qcHM6KHgVKNkV62Y8a5uYDVv9ydGQVwAHMKqHdJje1VTWpljG0WYpCDhrCdAOYH4TWg==}
+    engines: {node: '>= 10.x'}
+
   sprintf-js@1.1.3:
     resolution: {integrity: sha512-Oo+0REFV59/rz3gfJNKQiBlwfHaSESl1pcGyABQsnnIfWOFt6JNj5gCog2U6MLZ//IGYD+nA8nI+mTShREReaA==}

@@ -4870,6 +4937,10 @@
       utf-8-validate:
         optional: true

+  xtend@4.0.2:
+    resolution: {integrity: sha512-LKYU1iAXJXUgAXn9URjiu+MWhyUXHsvfp7mcuYm9dSUKK0/CjtrUwFAxD82/mCWbtLsGjFIad0wIsod4zrTAEQ==}
+    engines: {node: '>=0.4'}
+
   xxhash-wasm@1.1.0:
     resolution: {integrity: sha512-147y/6YNh+tlp6nd/2pWq38i9h6mz/EuQ6njIrmW8D1BS5nCqs0P6DG+m6zTGnNz5I+uhZ0SHxBs9BsPrwcKDA==}

@@ -7421,6 +7492,12 @@ snapshots:
     dependencies:
      undici-types: 6.21.0

+  '@types/pg@8.20.0':
+    dependencies:
+      '@types/node': 22.19.17
+      pg-protocol: 1.13.0
+ pg-types: 2.2.0 + '@types/retry@0.12.0': {} '@types/unist@3.0.3': {} @@ -9346,6 +9423,41 @@ snapshots: pathe@2.0.3: {} + pg-cloudflare@1.3.0: + optional: true + + pg-connection-string@2.12.0: {} + + pg-int8@1.0.1: {} + + pg-pool@3.13.0(pg@8.20.0): + dependencies: + pg: 8.20.0 + + pg-protocol@1.13.0: {} + + pg-types@2.2.0: + dependencies: + pg-int8: 1.0.1 + postgres-array: 2.0.0 + postgres-bytea: 1.0.1 + postgres-date: 1.0.7 + postgres-interval: 1.2.0 + + pg@8.20.0: + dependencies: + pg-connection-string: 2.12.0 + pg-pool: 3.13.0(pg@8.20.0) + pg-protocol: 1.13.0 + pg-types: 2.2.0 + pgpass: 1.0.5 + optionalDependencies: + pg-cloudflare: 1.3.0 + + pgpass@1.0.5: + dependencies: + split2: 4.2.0 + piccolore@0.1.3: {} picocolors@1.1.1: {} @@ -9366,6 +9478,16 @@ snapshots: picocolors: 1.1.1 source-map-js: 1.2.1 + postgres-array@2.0.0: {} + + postgres-bytea@1.0.1: {} + + postgres-date@1.0.7: {} + + postgres-interval@1.2.0: + dependencies: + xtend: 4.0.2 + prebuild-install@7.1.3: dependencies: detect-libc: 2.1.2 @@ -9888,6 +10010,8 @@ snapshots: space-separated-tokens@2.0.2: {} + split2@4.2.0: {} + sprintf-js@1.1.3: {} sql.js@1.14.1: {} @@ -10309,6 +10433,8 @@ snapshots: ws@8.20.0: {} + xtend@4.0.2: {} + xxhash-wasm@1.1.0: {} y18n@5.0.8: {}