TRQL and the Query page #2843
Conversation
Walkthrough
Adds a large TSQL feature surface and integrations: a new internal package
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~240 minutes
🚥 Pre-merge checks: ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 19
🤖 Fix all issues with AI agents
In @apps/webapp/app/components/code/AIQueryInput.tsx:
- Around line 163-164: submitQuery currently calls processStreamEvent but
processStreamEvent is not included in submitQuery's dependency array, causing
stale closures; fix by either moving the processStreamEvent function declaration
above the submitQuery definition so it can be referenced in the dependency
array, or wrap processStreamEvent in a stable ref (useRef) and reference
ref.current inside submitQuery; then add processStreamEvent (or the ref) to the
submitQuery useCallback dependency list and ensure onQueryGenerated remains
correctly referenced to avoid circular dependencies.
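A minimal sketch of the ref-based option described above. The hook and handler names come from the prompt; the stream type and the surrounding custom hook are illustrative, not the PR's actual code:

```typescript
import { useCallback, useEffect, useRef } from "react";

// Hypothetical event shape, for illustration only.
type StreamEvent = { type: string; text?: string };

function useSubmitQuery(
  processStreamEvent: (event: StreamEvent) => void,
  runStream: (prompt: string) => AsyncIterable<StreamEvent>
) {
  // Keep the latest handler in a ref so submitQuery never sees a stale closure.
  const processStreamEventRef = useRef(processStreamEvent);
  useEffect(() => {
    processStreamEventRef.current = processStreamEvent;
  }, [processStreamEvent]);

  return useCallback(
    async (prompt: string) => {
      for await (const event of runStream(prompt)) {
        processStreamEventRef.current(event);
      }
    },
    [runStream] // the ref itself is stable, so it need not be listed
  );
}
```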
In @apps/webapp/app/components/code/QueryResultsChart.tsx:
- Around line 292-332: The sortData function currently always uses __rawDate
when present, so add an xAxisColumn parameter to sortData and only perform the
date comparison when sortByColumn === xAxisColumn and both a.__rawDate and
b.__rawDate exist; otherwise fall through to the numeric/string comparison.
Update calls to sortData (e.g., where transformDataForChart or the component
sorts the results) to pass the current xAxisColumn. Ensure the function
signature is updated (sortData(data, sortByColumn, sortDirection, xAxisColumn))
and the date branch becomes: if (sortByColumn === xAxisColumn && aDate && bDate)
{ ... } so sorting by other columns uses the numeric/string logic.
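A rough sketch of the suggested signature, assuming the __rawDate field mentioned above; the row and direction types are illustrative:

```typescript
type Row = Record<string, unknown> & { __rawDate?: Date };
type SortDirection = "asc" | "desc";

function sortData(
  data: Row[],
  sortByColumn: string,
  sortDirection: SortDirection,
  xAxisColumn: string | null
): Row[] {
  const dir = sortDirection === "asc" ? 1 : -1;
  return [...data].sort((a, b) => {
    const aDate = a.__rawDate;
    const bDate = b.__rawDate;
    // Only use the raw date when sorting by the X-axis column itself.
    if (sortByColumn === xAxisColumn && aDate && bDate) {
      return (aDate.getTime() - bDate.getTime()) * dir;
    }
    const aVal = a[sortByColumn];
    const bVal = b[sortByColumn];
    const aNum = Number(aVal);
    const bNum = Number(bVal);
    if (Number.isFinite(aNum) && Number.isFinite(bNum)) {
      return (aNum - bNum) * dir;
    }
    return String(aVal).localeCompare(String(bVal)) * dir;
  });
}
```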
In @apps/webapp/app/components/code/TSQLEditor.tsx:
- Around line 208-211: The onBlur handler in TSQLEditor reads
editor.current?.textContent (DOM) which can differ from the editor's document;
instead, call into the editor view/state to get the canonical document text
(e.g., use the editor view/state on the editor ref such as
editor.current?.view.state.doc.toString() or
editor.current?.state.doc.toString()) and pass that string to the onBlur prop;
update the onBlur arrow function to guard for onBlur and editor.current then
extract the document via the editor's state rather than textContent.
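A hedged sketch of reading the document from the editor state rather than the DOM. It assumes the ref exposes a CodeMirror EditorView under a view property, which may not match the actual ref shape in TSQLEditor.tsx:

```typescript
import type { EditorView } from "@codemirror/view";

// Read the canonical document from the editor state instead of textContent.
function handleBlur(
  editorRef: { current: { view?: EditorView } | null },
  onBlur?: (value: string) => void
) {
  if (!onBlur || !editorRef.current?.view) return;
  onBlur(editorRef.current.view.state.doc.toString());
}
```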
In @apps/webapp/app/components/primitives/Table.tsx:
- Around line 286-293: Replace the non-semantic <span> used for the copy action
with a <button> element (keep the same className including "absolute -right-2
top-1/2 z-10 hidden -translate-y-1/2 cursor-pointer
group-hover/copyable-cell:flex"), add type="button", preserve the existing
onClick handler logic (e.stopPropagation(); e.preventDefault(); copy();) and add
an accessible label via aria-label (e.g., aria-label="Copy cell"). This ensures
the interactive element is semantic and keyboard-accessible while keeping the
same styling and behavior.
In @apps/webapp/app/components/runs/v3/TaskRunStatus.tsx:
- Around line 243-249: The reverse lookup in runStatusFromFriendlyTitle fails
because runStatusTitleFromStatus maps both PENDING_VERSION and
WAITING_FOR_DEPLOY to the same friendly string ("Pending version"), so
titlesStatusesArray can’t distinguish them; fix by choosing one of: (A) give
each status a unique friendly title in runStatusTitleFromStatus so
runStatusFromFriendlyTitle works unchanged, or (B) change
runStatusFromFriendlyTitle to use an explicit reverse map (e.g., build a
Map<string, TaskRunStatus | TaskRunStatus[]> from titlesStatusesArray) and
either return an array of matching statuses, return the first by documented
priority, or throw an explicit error on ambiguous titles (mentioning
PENDING_VERSION and WAITING_FOR_DEPLOY) to make the collision handling
deterministic and obvious.
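A sketch of option (B), the explicit reverse map. The status values shown are a small subset, and the title function is a stand-in for the real runStatusTitleFromStatus:

```typescript
type TaskRunStatus =
  | "PENDING_VERSION"
  | "WAITING_FOR_DEPLOY"
  | "EXECUTING"
  | "COMPLETED_SUCCESSFULLY";

// Stand-in for the real title function; only the colliding title matters here.
const runStatusTitleFromStatus = (status: TaskRunStatus): string =>
  status === "PENDING_VERSION" || status === "WAITING_FOR_DEPLOY"
    ? "Pending version"
    : status === "EXECUTING"
      ? "Executing"
      : "Completed successfully";

const allStatuses: TaskRunStatus[] = [
  "PENDING_VERSION",
  "WAITING_FOR_DEPLOY",
  "EXECUTING",
  "COMPLETED_SUCCESSFULLY",
];

// Build an explicit reverse map so title collisions stay visible.
const titleToStatuses = new Map<string, TaskRunStatus[]>();
for (const status of allStatuses) {
  const title = runStatusTitleFromStatus(status);
  titleToStatuses.set(title, [...(titleToStatuses.get(title) ?? []), status]);
}

function runStatusesFromFriendlyTitle(title: string): TaskRunStatus[] {
  // Returning every match makes the PENDING_VERSION / WAITING_FOR_DEPLOY
  // collision explicit instead of silently picking one.
  return titleToStatuses.get(title) ?? [];
}
```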
In
@apps/webapp/app/routes/resources.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query.ai-generate.tsx:
- Around line 132-159: The switch case labeled "finish" declares const finalText
and const query which leak into other cases; wrap the entire case body in its
own block (add { ... } immediately after case "finish":) so finalText and query
are block-scoped, keep the sendEvent logic intact and ensure the break remains
inside that block; locate the "finish" case in the switch handling result events
to apply this change.
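A sketch of the block-scoped case; the event shape, field names, and sendEvent signature are assumptions for illustration only:

```typescript
type ResultEvent = { type: "finish" | "delta"; text: string };

function handleResultEvent(
  part: ResultEvent,
  sendEvent: (event: { type: string; query?: string }) => void
) {
  switch (part.type) {
    case "finish": {
      // Braces give finalText and query their own block scope,
      // so they no longer leak into other cases.
      const finalText = part.text;
      const query = finalText.trim();
      sendEvent({ type: "finish", query });
      break;
    }
    default:
      sendEvent({ type: part.type });
      break;
  }
}
```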
In @apps/webapp/app/services/queryService.server.ts:
- Around line 97-98: The code parses stats.byte_seconds into byteSeconds and
multiplies to compute costInCents but does not guard against parseFloat
returning NaN; update the logic in queryService.server.ts around the
byteSeconds/costInCents calculation to validate the parsed value (e.g., use
Number.isFinite or !isNaN on byteSeconds), and if invalid set a safe default
(such as 0) or handle the error path (log/warn and skip cost computation) before
multiplying with env.CENTS_PER_QUERY_BYTE_SECOND so costInCents never becomes
NaN.
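A sketch of the guard, with the stats field name taken from the comment above and the cents-per-byte-second rate passed in rather than read from env:

```typescript
// Validate the parsed stat before computing a cost.
function computeCostInCents(
  stats: { byte_seconds: string },
  centsPerByteSecond: number // e.g. env.CENTS_PER_QUERY_BYTE_SECOND in the real code
): number {
  const byteSeconds = parseFloat(stats.byte_seconds);
  if (!Number.isFinite(byteSeconds)) {
    // Invalid or missing stat: skip the cost computation instead of storing NaN.
    return 0;
  }
  return byteSeconds * centsPerByteSecond;
}
```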
In @apps/webapp/test/components/code/tsql/tsqlCompletion.test.ts:
- Around line 72-173: The test file is in the wrong location per repo
guidelines—move the test from
apps/webapp/test/components/code/tsql/tsqlCompletion.test.ts to sit beside the
implementation at apps/webapp/app/components/code/tsql/tsqlCompletion.test.ts;
update any relative imports in the test if paths change and ensure your test
runner/tsconfig includes the new folder (adjust include/glob patterns if
necessary) so createTSQLCompletion tests (the describe block for
createTSQLCompletion) keep running from the new location.
In @apps/webapp/test/components/code/tsql/tsqlLinter.test.ts:
- Around line 49-78: The test file for getTSQLError is placed in the wrong
directory; move the test to sit beside the implementation (same directory as the
tsql linter module), update any relative imports to reference getTSQLError from
its local implementation (ensure the test still imports getTSQLError correctly),
and run the test suite to confirm paths and test discovery are unchanged; keep
the test content (describe/getTSQLError cases) intact during the move.
In @internal-packages/clickhouse/src/client/client.ts:
- Around line 363-366: parsedSummary can yield elapsed_ns = 0 causing
elapsedSeconds to be 0 and byteSeconds to become Infinity; guard the computation
in the block that calculates readBytes/elapsedSeconds (variables parsedSummary,
readBytes, elapsedNs, elapsedSeconds, byteSeconds) by checking if elapsedSeconds
is <= 0 (or extremely small) and in that case set byteSeconds to 0 (or null)
before creating the stats object so you never produce "Infinity" strings for
downstream consumers.
- Line 362: The debug logging in the method is inconsistent: replace the call to
this.logger.log("parsedSummary", parsedSummary) with the same debug-level logger
used elsewhere (use this.logger.debug) so the parsedSummary output follows the
existing debug logging conventions in the method; locate the occurrence of
"parsedSummary" in client.ts and change its logging method to debug to match the
other debug entries (e.g., the lines using this.logger.debug earlier in the
function).
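For the byte_seconds guard in the first item above, a sketch of short-circuiting before the calculation; the actual formula in client.ts is represented by a callback, so nothing about it is assumed:

```typescript
// Never emit Infinity when elapsed_ns is zero or missing.
function safeByteSeconds(
  parsedSummary: { read_bytes: string; elapsed_ns: string },
  compute: (readBytes: number, elapsedSeconds: number) => number
): number {
  const readBytes = Number(parsedSummary.read_bytes);
  const elapsedSeconds = Number(parsedSummary.elapsed_ns) / 1e9;
  if (!Number.isFinite(elapsedSeconds) || elapsedSeconds <= 0) {
    return 0; // downstream consumers never see an "Infinity" string
  }
  return compute(readBytes, elapsedSeconds);
}
```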
In @internal-packages/clickhouse/tsconfig.build.json:
- Line 19: The tsconfig override explicitly disables noImplicitAny which weakens
type safety despite strict: true; remove the "noImplicitAny": false entry (or
set it to true) from tsconfig.build.json so the compiler inherits noImplicitAny
from strict mode, then run the build/type-check and fix any resulting
implicit-any errors in the code referenced by this package.
In @internal-packages/tsql/src/grammar/parser.test.ts:
- Around line 1-4: The test file TSQL parser uses the global test helpers but
never imports them; add an import for vitest test helpers (e.g., import {
describe, it, expect } from "vitest";) to
internal-packages/tsql/src/grammar/parser.test.ts so calls to describe, it, and
expect used in the file resolve at runtime; place the import alongside the other
top-level imports near the top of the file before tests run.
In @internal-packages/tsql/src/query/ast.ts:
- Around line 185-187: IntervalType currently sets data_type to the wrong
literal ("unknown"); update the IntervalType definition so its data_type uses a
distinct literal (e.g., "interval") instead of "unknown" to mirror how
DecimalType defines its own data_type; modify the IntervalType interface (which
extends ConstantType) to declare data_type: "interval" (or the project's
established interval literal) so type discrimination works correctly across the
AST.
- Around line 654-671: The returned object from createSelectSetQueryFromQueries
is missing the required expression_type field for a SelectSetQuery; update the
returned object literal in createSelectSetQueryFromQueries to include
expression_type: "select_set_query" (alongside initial_select_query and
subsequent_select_queries) so the object conforms to the SelectSetQuery type
(and keep subsequent_select_queries mapping to SelectSetNode as before).
- Around line 157-159: DecimalType incorrectly sets data_type to "unknown";
change DecimalType (which extends ConstantType) to use data_type: "decimal" and
add "decimal" to the ConstantDataType union in constants.ts so the type is
recognized project-wide; update any related type guards or switch handling that
pattern-matches ConstantDataType (search for usages of DecimalType and
ConstantDataType) to handle the new "decimal" value.
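A combined sketch of the data_type discriminants suggested for DecimalType and IntervalType above; the union members and shapes are illustrative, not the full definitions from ast.ts and constants.ts:

```typescript
// Illustrative subset of the constant-type union with the new discriminants.
type ConstantDataType = "unknown" | "decimal" | "interval" | "string" | "integer";

type ConstantType = { data_type: ConstantDataType; nullable?: boolean };

// Each specialised type narrows data_type instead of reusing "unknown".
type DecimalType = ConstantType & { data_type: "decimal" };
type IntervalType = ConstantType & { data_type: "interval" };

// Discrimination now works as intended:
function isDecimal(t: ConstantType): t is DecimalType {
  return t.data_type === "decimal";
}

function isInterval(t: ConstantType): t is IntervalType {
  return t.data_type === "interval";
}
```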
In @internal-packages/tsql/src/query/escape.ts:
- Around line 188-206: The TSQL branch in SQLValueEscaper.visitDateTime builds a
datetime string but ignores this.timezone; update the this.dialect === "tsql"
branch in visitDateTime to include the timezone as the second argument (e.g.
return `toDateTime(${this.visitString(datetimeString)},
${this.visitString(this.timezone)})`) so it mirrors the ClickHouse branch's use
of visitString(this.timezone); ensure visitString is used for the timezone value
to maintain proper escaping/quoting.
In @internal-packages/tsql/src/query/parser.test.ts:
- Line 7: The imported local name `SyntaxError` shadows the global Error
constructor; rename the import (e.g., `import { SyntaxError as TsqlSyntaxError }
from "./errors.js";`) and update all references in this test file that intend
the tsql module's error to use the new name (replace usages of `SyntaxError`
such as the test assertion that expects the parser to throw the module error
with `TsqlSyntaxError`).
🧹 Nitpick comments (37)
internal-packages/clickhouse/src/taskRuns.ts (1)
48-48: LGTM! New field correctly integrated into TaskRunV2 schema. The addition of max_duration_in_seconds aligns with the database schema migration and retrieved learnings about task duration management. The use of .nullish() is appropriate for this optional field.
Optional: Consider adding range validation. For extra safety, you could add .min(0) to match the ClickHouse UInt32 unsigned constraint:
- max_duration_in_seconds: z.number().int().nullish(),
+ max_duration_in_seconds: z.number().int().min(0).nullish(),
However, this is optional since the pattern is consistent with other duration fields in the schema and validation may occur elsewhere.
apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.queues/route.tsx (1)
520-520: Consider explicit prop passing for clarity. The spread operator works correctly, but explicitly passing props makes the data flow clearer and avoids passing unnecessary properties:
<QueueName name={queue.name} type={queue.type} paused={queue.paused} />
This is a minor style preference—the current implementation is functional.
♻️ Optional refactor
- <QueueName {...queue} />
+ <QueueName name={queue.name} type={queue.type} paused={queue.paused} />
internal-packages/tsql/package.json (2)
16-16: Grammar build script may not work on Windows. The grammar:build:typescript script uses Unix shell commands (cat, tail, sed) which won't work on Windows without WSL or a similar environment. Consider adding documentation about platform requirements or using a cross-platform build tool like Node.js scripts.
Alternative: Cross-platform Node.js script. Create a scripts/build-grammar.js file:
import { readFileSync, writeFileSync, unlinkSync } from 'fs';
import { execSync } from 'child_process';
const lexerTS = readFileSync('src/grammar/TSQLLexer.typescript.g4', 'utf8');
const lexerCommon = readFileSync('src/grammar/TSQLLexer.common.g4', 'utf8');
const lexerCommonLines = lexerCommon.split('\n').slice(1);
const modifiedLines = lexerCommonLines.map(line => line.replace(/isOpeningTag/g, 'self.isOpeningTag'));
const combined = lexerTS + '\n' + modifiedLines.join('\n');
writeFileSync('src/grammar/TSQLLexer.g4', combined);
execSync('antlr4ts src/grammar/TSQLLexer.g4');
unlinkSync('src/grammar/TSQLLexer.g4');
execSync('antlr4ts -visitor -no-listener -Dlanguage=TypeScript src/grammar/TSQLParser.g4');
Then update package.json:
- "grammar:build:typescript": "cat src/grammar/TSQLLexer.typescript.g4 > src/grammar/TSQLLexer.g4 && tail -n +2 src/grammar/TSQLLexer.common.g4 |sed s/isOpeningTag/self.isOpeningTag/ >> src/grammar/TSQLLexer.g4 && antlr4ts src/grammar/TSQLLexer.g4 && rm src/grammar/TSQLLexer.g4 && antlr4ts -visitor -no-listener -Dlanguage=TypeScript src/grammar/TSQLParser.g4",
+ "grammar:build:typescript": "node scripts/build-grammar.js",
10-11: Consider evaluating a zod upgrade to the 4.x series if the breaking changes are manageable. The package uses [email protected], which is the latest (and only) available version for this package. Using an alpha version is unavoidable here. However, [email protected] is significantly outdated—zod 4.3.5 is the current stable release. If the codebase can accommodate zod 4's breaking changes, upgrading would align with the latest stable version.
internal-packages/tsql/src/grammar/TSQLLexer.typescript.g4 (1)
9-16: Consider using String.fromCodePoint for full Unicode support. String.fromCharCode only handles code points in the BMP (0-0xFFFF). For characters above U+FFFF, fromCodePoint is needed. Since the check includes c > 0x10FFFF, it seems full Unicode was intended.
♻️ Suggested improvement
private _peekChar(k: number): string {
  // Return the k-th look-ahead as a *single-char string* or '\0' at EOF.
  const c = this._input.LA(k); // int code point or IntStream.EOF (-1)
  if (c < 0 || c > 0x10FFFF) { // EOF or out-of-range → sentinel
    return '\0';
  }
-  return String.fromCharCode(c);
+  return String.fromCodePoint(c);
}
internal-packages/tsql/src/query/parse_string.ts (1)
4-4: Consider renaming the import to avoid shadowing the global SyntaxError. The static analysis tool correctly flags that importing SyntaxError shadows the global. This can cause confusion when debugging. Consider using an alias like TSQLSyntaxError.
♻️ Suggested fix
-import { SyntaxError } from './errors';
+import { SyntaxError as TSQLSyntaxError } from './errors';
Then update line 43:
- throw new SyntaxError(`Invalid string literal, must start and end with the same quote type: ${text}`);
+ throw new TSQLSyntaxError(`Invalid string literal, must start and end with the same quote type: ${text}`);
apps/webapp/evals/aiQuery.eval.ts (1)
45-52: Add error handling for JSON parsing. If the AI model returns malformed JSON, JSON.parse will throw an unhandled exception. Consider wrapping in try/catch and returning a score of 0 for parse failures.
♻️ Suggested fix
// Parse the output to extract the query
- const outputParsed = JSON.parse(output) as ParsedQueryResult;
- const expectedParsed = JSON.parse(expected) as ParsedQueryResult;
+ let outputParsed: ParsedQueryResult;
+ let expectedParsed: ParsedQueryResult;
+ try {
+   outputParsed = JSON.parse(output) as ParsedQueryResult;
+   expectedParsed = JSON.parse(expected) as ParsedQueryResult;
+ } catch {
+   // Malformed JSON should score 0
+   return 0;
+ }
internal-packages/database/prisma/schema.prisma (1)
2402-2439: Well-structured audit model with appropriate indexes. The CustomerQuery model follows a sensible audit-log pattern (no updatedAt since records are immutable). The cascade behaviors are appropriate: Cascade for org/project/environment and SetNull for user.
Consider adding indexes for project/environment scoped queries. If query history will be displayed at project or environment scope (not just organization), consider adding indexes:
@@index([projectId, createdAt(sort: Desc)])
@@index([environmentId, createdAt(sort: Desc)])
This would optimize history lookups when scoped to a specific project or environment. If queries are always fetched at org level and filtered client-side, the current indexes are sufficient.
internal-packages/tsql/src/query/constants.ts (2)
36-44: Prefer string union or const object over enum. Per coding guidelines, enums should be avoided in favor of string unions or const objects.
♻️ Suggested refactor using const object
-export enum LimitContext { - QUERY = "query", - QUERY_ASYNC = "query_async", - EXPORT = "export", - COHORT_CALCULATION = "cohort_calculation", - HEATMAPS = "heatmaps", - SAVED_QUERY = "saved_query", - RETENTION = "retention", -} +export const LimitContext = { + QUERY: "query", + QUERY_ASYNC: "query_async", + EXPORT: "export", + COHORT_CALCULATION: "cohort_calculation", + HEATMAPS: "heatmaps", + SAVED_QUERY: "saved_query", + RETENTION: "retention", +} as const; + +export type LimitContext = (typeof LimitContext)[keyof typeof LimitContext];Based on coding guidelines, enums should be avoided.
47-71: Prefer types over interfaces. Per coding guidelines, types should be used instead of interfaces.
♻️ Suggested refactor using types
-// Settings applied at the SELECT level -export interface TSQLQuerySettings { - optimize_aggregation_in_order?: boolean; - date_time_output_format?: string; - date_time_input_format?: string; - join_algorithm?: string; -} - -// Settings applied on top of all TSQL queries -export interface TSQLGlobalSettings extends TSQLQuerySettings { +// Settings applied at the SELECT level +export type TSQLQuerySettings = { + optimize_aggregation_in_order?: boolean; + date_time_output_format?: string; + date_time_input_format?: string; + join_algorithm?: string; +}; + +// Settings applied on top of all TSQL queries +export type TSQLGlobalSettings = TSQLQuerySettings & { readonly?: number; max_execution_time?: number; // ... rest of fields -} +};Based on coding guidelines, types should be used over interfaces.
internal-packages/tsql/src/query/models.ts (2)
6-66: Prefer types over interfaces. Per coding guidelines, types should be used instead of interfaces. While interfaces work here, consistency with the codebase guidelines is preferred.
Example transformation for a few interfaces:
export type FieldOrTable = { hidden?: boolean; }; export type DatabaseField = FieldOrTable & { name: string; array?: boolean; nullable?: boolean; is_nullable?(): boolean; get_constant_type?(): ConstantType; default_value?(): any; };Based on coding guidelines, types should be used over interfaces.
247-253: Avoid the any type - import the actual type. The comment indicates LazyJoinType exists in ast.ts. Import and use the actual type for better type safety.
♻️ Suggested fix
-import type { Expr, ConstantType } from "./ast"; +import type { Expr, ConstantType, LazyJoinType } from "./ast"; // ... at line 251 - lazy_join_type: any; // LazyJoinType from ast.ts + lazy_join_type: LazyJoinType;apps/webapp/app/components/navigation/SideMenu.tsx (1)
8-8: Remove unused icon imports.
CircleStackIcon and MagnifyingGlassCircleIcon are imported but not used in this file. Only TableCellsIcon is used for the Query menu item.
♻️ Suggested fix
import { ArrowPathRoundedSquareIcon, ArrowRightOnRectangleIcon, BeakerIcon, BellAlertIcon, ChartBarIcon, ChevronRightIcon, - CircleStackIcon, ClockIcon, Cog8ToothIcon, CogIcon, FolderIcon, FolderOpenIcon, GlobeAmericasIcon, IdentificationIcon, KeyIcon, - MagnifyingGlassCircleIcon, PencilSquareIcon, PlusIcon, RectangleStackIcon, ServerStackIcon, Squares2X2Icon, TableCellsIcon, UsersIcon, } from "@heroicons/react/20/solid";Also applies to: 17-17
apps/webapp/app/components/code/tsql/tsqlLinter.test.ts (1)
66-71: Consider making the error format assertion more resilient. The test assumes error messages contain the word "line" for position information. If the error format from the parser changes, this test could become brittle.
♻️ Alternative approach
Consider either:
- Testing for a more specific pattern (e.g., position/column info)
- Or simply verifying that a non-null, non-empty error is returned without asserting internal format:
it("should include position information in error", () => { const error = getTSQLError("SELECT * FORM users"); expect(error).not.toBeNull(); - // Error message should contain line/column info - expect(error).toContain("line"); + // Verify error contains useful diagnostic information + expect(error!.length).toBeGreaterThan(0); });apps/webapp/app/components/AlphaBadge.tsx (1)
25-31: Consider adding flex styling for proper alignment.
AlphaTitle renders a span and AlphaBadge as siblings within a fragment. Depending on usage context, this may result in misaligned elements. Consider wrapping in a flex container for consistent alignment.
♻️ Suggested improvement
export function AlphaTitle({ children }: { children: React.ReactNode }) { return ( - <> - <span>{children}</span> + <span className="inline-flex items-center gap-1"> + <span>{children}</span> <AlphaBadge /> - </> + </span> ); }apps/webapp/app/components/code/tsql/tsqlCompletion.test.ts (1)
5-28: Consider using a more type-safe mock instead of as any. The as any cast at line 27 bypasses TypeScript's type checking. Consider defining a proper type for the mock context or using a partial type assertion.
♻️ More type-safe alternative
+import type { CompletionContext } from "@codemirror/autocomplete"; + // Helper to create a mock completion context function createMockContext(doc: string, pos: number, explicit = false) { return { state: { doc: { toString: () => doc, }, }, pos, explicit, matchBefore: (regex: RegExp) => { const beforePos = doc.slice(0, pos); const match = beforePos.match(new RegExp(regex.source + "$")); if (match) { return { from: pos - match[0].length, to: pos, text: match[0], }; } return null; }, - } as any; + } as Partial<CompletionContext> as CompletionContext; }internal-packages/clickhouse/src/client/client.ts (1)
234-304: Consider extracting common logic to reduce duplication. The queryWithStats method shares significant code with the query method (lines 85-232). Consider extracting common logic for parameter validation, span setup, and error handling into shared helper functions.
This would improve maintainability and reduce the risk of divergence between the two methods when future changes are made.
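A very rough sketch of the shared-helper idea; every name here is a placeholder, since the real client's option, span, and error types are not shown in this comment:

```typescript
// Placeholder context type: the real client has richer options and tracing spans.
type QueryRunContext = {
  name: string;
  params: Record<string, unknown>;
};

function validateParams(params: Record<string, unknown>): void {
  for (const [key, value] of Object.entries(params)) {
    if (value === undefined) {
      throw new Error(`Missing value for query param "${key}"`);
    }
  }
}

// Both query() and queryWithStats() could delegate here and only differ in how
// they read the response body.
async function withQueryRun<T>(ctx: QueryRunContext, run: () => Promise<T>): Promise<T> {
  validateParams(ctx.params);
  try {
    return await run();
  } catch (error) {
    throw new Error(`[${ctx.name}] query failed: ${String(error)}`);
  }
}
```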
apps/webapp/app/components/code/tsql/tsqlLinter.ts (1)
4-4: Consider renaming the imported SyntaxError to avoid shadowing the global. The static analysis tool flags that importing SyntaxError shadows the global SyntaxError class. While this works correctly since the local import takes precedence, it could cause confusion during debugging or if someone inadvertently expects the global behavior.
♻️ Suggested fix
-import { parseTSQLSelect, SyntaxError, QueryError, validateQuery } from "@internal/tsql"; +import { parseTSQLSelect, SyntaxError as TSQLSyntaxError, QueryError, validateQuery } from "@internal/tsql";Then update the usage on line 120:
- if (error instanceof SyntaxError) { + if (error instanceof TSQLSyntaxError) {internal-packages/tsql/src/index.ts (1)
210-233: Prefer type over interface for CompileTSQLOptions. As per coding guidelines, use types over interfaces for TypeScript definitions.
Suggested change
-export interface CompileTSQLOptions { +export type CompileTSQLOptions = { /** The organization ID for tenant isolation (required) */ organizationId: string; /** The project ID for tenant isolation (optional - omit to query across all projects) */ projectId?: string; /** The environment ID for tenant isolation (optional - omit to query across all environments) */ environmentId?: string; /** Schema definitions for allowed tables and columns */ tableSchema: TableSchema[]; /** Optional query settings */ settings?: Partial<QuerySettings>; /** * Runtime field mappings for dynamic value translation. * Maps internal ClickHouse values to external user-facing values. * * @example * ```typescript * { * project: { "cm12345": "my-project-ref" }, * } * ``` */ fieldMappings?: FieldMappings; -} +};apps/webapp/app/services/queryService.server.ts (1)
100-112: Consider handling potential errors from history recording. The prisma.customerQuery.create call is awaited but errors are not caught. If this fails (e.g., database issue), the entire executeQuery function will throw even though the actual query succeeded. Consider wrapping this in a try-catch or making it fire-and-forget.
Suggested approach
- await prisma.customerQuery.create({ - data: { - query: options.query, - scope: scopeToEnum[scope], - stats: { ...stats }, - costInCents, - source: history.source, - organizationId, - projectId: scope === "project" || scope === "environment" ? projectId : null, - environmentId: scope === "environment" ? environmentId : null, - userId: history.userId ?? null, - }, - }); + // Record history but don't fail the query if recording fails + try { + await prisma.customerQuery.create({ + data: { + query: options.query, + scope: scopeToEnum[scope], + stats: { ...stats }, + costInCents, + source: history.source, + organizationId, + projectId: scope === "project" || scope === "environment" ? projectId : null, + environmentId: scope === "environment" ? environmentId : null, + userId: history.userId ?? null, + }, + }); + } catch (historyError) { + console.error("Failed to record query history:", historyError); + }internal-packages/clickhouse/src/client/types.ts (1)
19-37: Consider using type instead of interface for consistency with coding guidelines. The coding guidelines prefer types over interfaces for TypeScript definitions.
Suggested change
-export interface QueryStats { +export type QueryStats = { read_rows: string; read_bytes: string; written_rows: string; written_bytes: string; total_rows_to_read: string; result_rows: string; result_bytes: string; elapsed_ns: string; byte_seconds: string; -} +}; -export interface QueryResultWithStats<TOutput> { +export type QueryResultWithStats<TOutput> = { rows: TOutput[]; stats: QueryStats; -} +};internal-packages/tsql/src/query/errors.ts (1)
39-41: Consider renaming SyntaxError to avoid shadowing the global. The SyntaxError class shadows the global SyntaxError. While the import is explicit where used, this can cause confusion. Consider renaming to TSQLSyntaxError for clarity.
This is a style suggestion. If the shadowing is intentional and the team prefers the cleaner name, this can be safely ignored. The current approach works correctly since imports are explicit.
internal-packages/clickhouse/src/client/tsql.ts (2)
31-69: Consider using type instead of interface per coding guidelines. The coding guidelines specify using types over interfaces for TypeScript definitions in this codebase.
♻️ Suggested refactor
-export interface ExecuteTSQLOptions<TOut extends z.ZodSchema> { +export type ExecuteTSQLOptions<TOut extends z.ZodSchema> = { /** The name of the operation (for logging/tracing) */ name: string; // ... rest of properties -} +};
74-78: Consider using type instead of interface for consistency.
♻️ Suggested refactor
-export interface TSQLQuerySuccess<T> { +export type TSQLQuerySuccess<T> = { rows: T[]; columns: OutputColumnMetadata[]; stats: QueryStats; -} +};internal-packages/tsql/src/query/functions.ts (2)
9-28: Consider using type instead of interface per coding guidelines.
♻️ Suggested refactor
-export interface TSQLFunctionMeta { +export type TSQLFunctionMeta = { /** The ClickHouse function name to use */ clickhouseName: string; // ... rest of properties -} +};
564-589: Case-insensitive lookup logic has a subtle edge case. The findFunction logic correctly handles case sensitivity, but when the exact-case lookup fails and the lowercase lookup succeeds for a case-sensitive function, returning undefined is correct. However, consider adding a comment explaining this behavior for maintainability since it's non-obvious.
📝 Add clarifying comment
function findFunction( name: string, functions: Record<string, TSQLFunctionMeta> ): TSQLFunctionMeta | undefined { + // First try exact case match const func = functions[name]; if (func !== undefined) { return func; } + // Try lowercase lookup for case-insensitive functions const lowerFunc = functions[name.toLowerCase()]; if (lowerFunc === undefined) { return undefined; } - // If we haven't found a function with the case preserved, but we have found it in lowercase, - // then the function names are different case-wise only. + // If the function requires exact case matching (caseSensitive: true), + // reject the lowercase match since the original name had different casing if (lowerFunc.caseSensitive) { return undefined; } return lowerFunc; }apps/webapp/app/v3/services/aiQueryService.server.ts (3)
29-32: Consider using type instead of interface per coding guidelines.
♻️ Suggested refactor
-export interface AIQueryOptions { +export type AIQueryOptions = { mode?: "new" | "edit"; currentQuery?: string; -} +};
37-41: Consider using type instead of interface for internal types.
71-104: Duplicate tool definitions between the streamQuery and call methods. The validateTSQLQuery and getTableSchema tools are defined identically in both methods. Extract these to a private method or property to reduce duplication and ensure consistency.
♻️ Proposed refactor to DRY up tool definitions
+ private getTools() { + return { + validateTSQLQuery: tool({ + description: + "Validate a TSQL query for syntax errors and schema compliance. Always use this tool to verify your query before returning it to the user.", + parameters: z.object({ + query: z.string().describe("The TSQL query to validate"), + }), + execute: async ({ query }) => { + return this.validateQuery(query); + }, + }), + getTableSchema: tool({ + description: + "Get detailed schema information about available tables and columns. Use this to understand what data is available and how to query it.", + parameters: z.object({ + tableName: z + .string() + .optional() + .describe("Optional: specific table name to get details for"), + }), + execute: async ({ tableName }) => { + return this.getSchemaInfo(tableName); + }, + }), + }; + } streamQuery(prompt: string, options: AIQueryOptions = {}) { // ... return streamText({ model: this.model, system: systemPrompt, prompt: userPrompt, - tools: { - validateTSQLQuery: tool({ ... }), - getTableSchema: tool({ ... }), - }, + tools: this.getTools(), maxSteps: 5, // ... }); }Also applies to: 126-159
internal-packages/tsql/src/query/ast.ts (2)
248-254: Consider using const objects instead of enums per coding guidelines.The coding guidelines suggest avoiding enums in favor of string unions or const objects.
♻️ Suggested refactor using const object
-export enum ArithmeticOperationOp { - Add = "+", - Sub = "-", - Mult = "*", - Div = "/", - Mod = "%", -} +export const ArithmeticOperationOp = { + Add: "+", + Sub: "-", + Mult: "*", + Div: "/", + Mod: "%", +} as const; + +export type ArithmeticOperationOp = (typeof ArithmeticOperationOp)[keyof typeof ArithmeticOperationOp];
256-277: Consider using const objects instead of enums per coding guidelines.
internal-packages/tsql/src/query/database.ts (2)
102-120: Consider using a const object instead of an enum per coding guidelines.
♻️ Suggested refactor
-export enum DatabaseSerializedFieldType { - STRING = "string", - INTEGER = "integer", - // ... etc -} +export const DatabaseSerializedFieldType = { + STRING: "string", + INTEGER: "integer", + FLOAT: "float", + DECIMAL: "decimal", + BOOLEAN: "boolean", + DATE: "date", + DATETIME: "datetime", + UUID: "uuid", + ARRAY: "array", + JSON: "json", + TUPLE: "tuple", + UNKNOWN: "unknown", + EXPRESSION: "expression", + VIEW: "view", + LAZY_TABLE: "lazy_table", + VIRTUAL_TABLE: "virtual_table", + FIELD_TRAVERSER: "field_traverser", +} as const; + +export type DatabaseSerializedFieldType = (typeof DatabaseSerializedFieldType)[keyof typeof DatabaseSerializedFieldType];
25-130: Consider using type instead of interface for schema definitions. Multiple interfaces in this range could be converted to types per coding guidelines.
apps/webapp/app/v3/querySchemas.ts (1)
400-409: Avoid logging inside whereTransform to reduce noise and potential PII leakage.
bulk_action_group_ids.whereTransform currently logs every value it transforms. This will fire on each query that filters on this column, can generate a lot of log noise, and leaks raw bulk IDs into logs without strong justification.
Consider removing the log or gating it behind a dedicated debug flag.
apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query/route.tsx (1)
2003-2021: Clarify units for byte_seconds in formatQueryStats.
formatQueryStats passes byte_seconds through formatBytes and then appends "s", yielding values like "12.3 KBs". Since byte_seconds is a rate-like or composite metric (bytes * seconds), this label can be misleading.
Consider either:
- Using a dedicated formatter that reflects the intended unit (e.g., "12.3 MiB·s"), or
- Renaming the label to something more generic (e.g., "12.3 MB-equivalent"), or omitting it if it's not user-facingly meaningful.
apps/webapp/app/components/code/tsql/tsqlCompletion.ts (1)
326-348: Escape enum values before wrapping them in quotes for completion labels
createEnumValueCompletions currently builds labels like:
label: `'${userFriendlyValue}'`,
or:
label: `'${value}'`,
If any user-friendly or allowed value ever contains a single quote, the inserted completion will produce invalid SQL (or at least require manual fixing). Today your enums (statuses, environment types) are probably simple strings, but this is an easy future footgun.
Consider adding a small helper to escape single quotes (e.g., replacing ' with '') before wrapping, so the completion always inserts syntactically valid string literals.
apps/webapp/app/components/code/ChartConfigPanel.tsx (1)
14-32: Align the ChartConfiguration shape with current UI semantics and TS style guidelines. Two minor points here:
Type vs interface
The repo guidelines prefer type aliases over interfaces for TS. ChartConfiguration can be trivially expressed as a type without behavior change:
export type ChartConfiguration = {
  chartType: ChartType;
  xAxisColumn: string | null;
  yAxisColumns: string[];
  groupByColumn: string | null;
  stacked: boolean;
  sortByColumn: string | null;
  sortDirection: SortDirection;
};
Single vs multiple Y-axis columns
yAxisColumns is typed as string[], but the panel only reads/writes yAxisColumns[0]. If multi-series Y support isn't planned yet, consider simplifying this to a single yAxisColumn: string | null to better reflect current behavior (or extend the UI to genuinely support multiple Y selections).
Also applies to: 276-283
<span
  onClick={(e) => {
    e.stopPropagation();
    e.preventDefault();
    copy();
  }}
  className="absolute -right-2 top-1/2 z-10 hidden -translate-y-1/2 cursor-pointer group-hover/copyable-cell:flex"
>
Use a button element instead of a span for the copy action.
The copy action uses a <span> with an onClick handler, which lacks semantic meaning and is not keyboard-accessible by default. Interactive elements should use appropriate semantic HTML elements.
Proposed fix
- <span
+ <button
+ type="button"
onClick={(e) => {
e.stopPropagation();
e.preventDefault();
copy();
}}
className="absolute -right-2 top-1/2 z-10 hidden -translate-y-1/2 cursor-pointer group-hover/copyable-cell:flex"
>
<SimpleTooltip
button={
<span
className={cn(
"flex size-6 items-center justify-center rounded border border-charcoal-650 bg-charcoal-750",
copied
? "text-green-500"
: "text-text-dimmed hover:border-charcoal-600 hover:bg-charcoal-700 hover:text-text-bright"
)}
>
{copied ? (
<ClipboardCheckIcon className="size-3.5" />
) : (
<ClipboardIcon className="size-3.5" />
)}
</span>
}
content={copied ? "Copied!" : "Copy"}
disableHoverableContent
/>
- </span>
+ </button>
🤖 Prompt for AI Agents
In @apps/webapp/app/components/primitives/Table.tsx around lines 286 - 293,
Replace the non-semantic <span> used for the copy action with a <button> element
(keep the same className including "absolute -right-2 top-1/2 z-10 hidden
-translate-y-1/2 cursor-pointer group-hover/copyable-cell:flex"), add
type="button", preserve the existing onClick handler logic (e.stopPropagation();
e.preventDefault(); copy();) and add an accessible label via aria-label (e.g.,
aria-label="Copy cell"). This ensures the interactive element is semantic and
keyboard-accessible while keeping the same styling and behavior.
PR Review: TRQL and the Query Page
This is a substantial PR that introduces TRQL (TRiggerQueryLanguage), a custom SQL-like query language that compiles to ClickHouse SQL with automatic tenant isolation. The implementation is derived from PostHog's HogQL and represents a significant feature addition.
Overall Assessment
This is a well-architected feature with strong security foundations. The code demonstrates careful thought about SQL injection prevention and multi-tenant data isolation. The test coverage is impressive, particularly for security-related functionality.
Security Analysis ✅
Strengths
Minor Security Considerations
Code Quality ✅Strengths
Suggestions
Performance Considerations
Actionable comments posted: 0
🧹 Nitpick comments (3)
apps/webapp/app/components/code/ChartConfigPanel.tsx (3)
14-22: Consider using type instead of interface for ChartConfiguration. The coding guidelines specify preferring types over interfaces for TypeScript definitions.
♻️ Refactor to use type alias
-export interface ChartConfiguration { +export type ChartConfiguration = { chartType: ChartType; xAxisColumn: string | null; yAxisColumns: string[]; groupByColumn: string | null; stacked: boolean; sortByColumn: string | null; sortDirection: SortDirection; -} +};As per coding guidelines.
283-292: Consider preserving the user's sort configuration when changing the X-axis. When manually selecting a datetime X-axis column, the sort order is always overridden to that column ascending, even if the user had previously configured a different sort. This differs from the initial auto-select behavior (lines 142-150) which only sets sort when none is configured.
This could be surprising if a user has carefully configured their sort order. Consider only auto-setting the sort when config.sortByColumn is null:
♻️ Preserve existing sort configuration
setValue={(value) => { const updates: Partial<ChartConfiguration> = { xAxisColumn: value || null }; // Auto-set sort to x-axis ASC if selecting a datetime column - if (value) { + if (value && !config.sortByColumn) { const selectedCol = columns.find((c) => c.name === value); if (selectedCol && isDateTimeType(selectedCol.type)) { updates.sortByColumn = value; updates.sortDirection = "asc"; } } updateConfig(updates); }}
469-489: TypeBadge could better handle nested type wrappers. The current sequential processing (Nullable first, then LowCardinality) doesn't optimally handle nested types like LowCardinality(Nullable(String)). The LowCardinality wrapper would be stripped, leaving Nullable(String), but the Nullable indicator isn't applied since that check already passed.
For the initial implementation this is acceptable, but you might consider recursively stripping both wrappers and collecting indicators:
♻️ Enhanced nested type handling
function TypeBadge({ type }: { type: string }) { - // Simplify type for display let displayType = type; - if (type.startsWith("Nullable(")) { - displayType = type.slice(9, -1) + "?"; - } - if (type.startsWith("LowCardinality(")) { - displayType = type.slice(15, -1); - } + let suffix = ""; + + // Strip wrappers recursively + while (displayType.startsWith("Nullable(") || displayType.startsWith("LowCardinality(")) { + if (displayType.startsWith("Nullable(")) { + displayType = displayType.slice(9, -1); + suffix = "?"; + } else if (displayType.startsWith("LowCardinality(")) { + displayType = displayType.slice(15, -1); + } + } + + displayType = displayType + suffix; // Shorten long type names if (displayType.length > 12) { displayType = displayType.slice(0, 10) + "…"; } return ( <span className="rounded bg-charcoal-750 px-1 py-0.5 font-mono text-xxs text-text-dimmed"> {displayType} </span> ); }
📜 Review details
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
apps/webapp/app/components/code/ChartConfigPanel.tsx
🧰 Additional context used
📓 Path-based instructions (6)
**/*.{ts,tsx}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
**/*.{ts,tsx}: Use types over interfaces for TypeScript
Avoid using enums; prefer string unions or const objects instead
Files:
apps/webapp/app/components/code/ChartConfigPanel.tsx
{packages/core,apps/webapp}/**/*.{ts,tsx}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Use zod for validation in packages/core and apps/webapp
Files:
apps/webapp/app/components/code/ChartConfigPanel.tsx
**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Use function declarations instead of default exports
Files:
apps/webapp/app/components/code/ChartConfigPanel.tsx
apps/webapp/app/**/*.{ts,tsx}
📄 CodeRabbit inference engine (.cursor/rules/webapp.mdc)
Access all environment variables through the env export of env.server.ts instead of directly accessing process.env in the Trigger.dev webapp
Files:
apps/webapp/app/components/code/ChartConfigPanel.tsx
apps/webapp/**/*.{ts,tsx}
📄 CodeRabbit inference engine (.cursor/rules/webapp.mdc)
apps/webapp/**/*.{ts,tsx}: When importing from @trigger.dev/core in the webapp, use subpath exports from the package.json instead of importing from the root path
Follow the Remix 2.1.0 and Express server conventions when updating the main trigger.dev webapp
Files:
apps/webapp/app/components/code/ChartConfigPanel.tsx
**/*.{js,ts,jsx,tsx,json,md,css,scss}
📄 CodeRabbit inference engine (AGENTS.md)
Format code using Prettier
Files:
apps/webapp/app/components/code/ChartConfigPanel.tsx
🧠 Learnings (1)
📓 Common learnings
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-27T16:26:37.432Z
Learning: Applies to internal-packages/database/**/*.{ts,tsx} : Use Prisma for database interactions in internal-packages/database with PostgreSQL
🧬 Code graph analysis (1)
apps/webapp/app/components/code/ChartConfigPanel.tsx (5)
internal-packages/tsql/src/query/printer.ts (1)
isDateTimeType (1178-1188)
apps/webapp/app/utils/cn.ts (1)
cn (77-79)
apps/webapp/app/components/primitives/Paragraph.tsx (1)
Paragraph (88-107)
apps/webapp/app/components/primitives/Buttons.tsx (1)
Button (296-329)
apps/webapp/app/components/primitives/Switch.tsx (1)
Switch(64-119)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (24)
- GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (7, 8)
- GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (2, 8)
- GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (8, 8)
- GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (5, 8)
- GitHub Check: units / internal / 🧪 Unit Tests: Internal (4, 8)
- GitHub Check: units / internal / 🧪 Unit Tests: Internal (2, 8)
- GitHub Check: units / internal / 🧪 Unit Tests: Internal (5, 8)
- GitHub Check: units / packages / 🧪 Unit Tests: Packages (1, 1)
- GitHub Check: units / internal / 🧪 Unit Tests: Internal (1, 8)
- GitHub Check: units / internal / 🧪 Unit Tests: Internal (3, 8)
- GitHub Check: units / internal / 🧪 Unit Tests: Internal (7, 8)
- GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (3, 8)
- GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (1, 8)
- GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (6, 8)
- GitHub Check: units / internal / 🧪 Unit Tests: Internal (8, 8)
- GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (4, 8)
- GitHub Check: units / internal / 🧪 Unit Tests: Internal (6, 8)
- GitHub Check: e2e / 🧪 CLI v3 tests (ubuntu-latest - npm)
- GitHub Check: e2e / 🧪 CLI v3 tests (ubuntu-latest - pnpm)
- GitHub Check: e2e / 🧪 CLI v3 tests (windows-latest - npm)
- GitHub Check: e2e / 🧪 CLI v3 tests (windows-latest - pnpm)
- GitHub Check: typecheck / typecheck
- GitHub Check: claude-review
- GitHub Check: Analyze (javascript-typescript)
🔇 Additional comments (4)
apps/webapp/app/components/code/ChartConfigPanel.tsx (4)
42-75: LGTM! Type detection helpers are well-implemented. The type detection functions correctly handle ClickHouse types including Nullable wrappers and parameterized types like DateTime64(3).
79-156: LGTM! Auto-selection logic is well-designed. The use of columnsKey for change detection and refs to avoid dependency issues demonstrates thoughtful handling of React's reactivity model. The auto-selection logic appropriately defaults to datetime columns for the X-axis and only auto-sets sorting when none is configured.
313-416: LGTM! Selection controls and conditional toggles are well-structured. The consistent use of the __none__ sentinel value for optional selections and conditional rendering of the Stacked and Sort Direction controls based on configuration state creates a clean, intuitive interface.
421-467: LGTM! Helper components are clean and straightforward. Both ConfigField and SortDirectionToggle serve their purposes well as simple internal components. The styling and interaction patterns are consistent with the rest of the component.
PR Review: TRQL and the Query Page
This is an impressive and substantial PR introducing a new query language (TRQL) and a full-featured query page for trigger.dev. The implementation shows excellent attention to security, particularly around tenant isolation and SQL injection prevention. Below is my detailed review.
🔒 Security Analysis
Strengths:
Potential Concerns:
🐛 Potential Bugs or Issues
⚡ Performance Considerations
📊 Test Coverage
Well Tested:
Consider Adding:
💻 Code Quality
Positive:
Suggestions:
📁 Database Migration
🎨 Minor Suggestions
✅ Summary
This is a well-architected PR with strong security foundations. The TSQL/TRQL implementation follows best practices for query languages with proper sandboxing, parameterization, and tenant isolation. The test coverage is comprehensive, especially for security-critical paths.
Recommended before merge:
Great work on this feature! 🎉
Actionable comments posted: 0
🧹 Nitpick comments (2)
apps/webapp/app/components/code/ChartConfigPanel.tsx (1)
45-78: Consider extracting type detection helpers to a shared utility. The type detection functions (isNumericType, isDateTimeType, isStringType) are well-implemented. However, a similar isDateTimeType exists in internal-packages/tsql/src/query/printer.ts with slight differences. Your version includes the check type.startsWith("DateTime64(") which handles parameterized DateTime64 types like DateTime64(3), making it more comprehensive.
♻️ Consider creating a shared utility module
These type detection helpers could be extracted to a shared utility module (e.g., apps/webapp/app/utils/clickhouse-types.ts) to avoid duplication and ensure consistency across the codebase. This would also make it easier to maintain and extend type detection logic in the future.
apps/webapp/app/components/code/QueryResultsChart.tsx (1)
149-187: Consider using the mode instead of the minimum for interval detection. The current implementation uses the minimum gap (line 166) as a heuristic for the data interval, which works for many cases but could be affected by outliers (e.g., one data point that's very close to another).
♻️ Consider calculating the mode (most common gap) for more robust detection
Using the most frequently occurring gap would be more resilient to outliers:
// After line 162, add:
// Count gap frequencies
const gapCounts = new Map<number, number>();
for (const gap of gaps) {
  // Round to nearest minute to group similar gaps
  const rounded = Math.round(gap / MINUTE) * MINUTE;
  gapCounts.set(rounded, (gapCounts.get(rounded) || 0) + 1);
}
// Find the most common gap
let mostCommonGap = minGap;
let maxCount = 0;
for (const [gap, count] of gapCounts) {
  if (count > maxCount) {
    maxCount = count;
    mostCommonGap = gap;
  }
}
// Use mostCommonGap instead of minGap for snapping logic
📜 Review details
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
apps/webapp/app/components/code/ChartConfigPanel.tsx
apps/webapp/app/components/code/QueryResultsChart.tsx
🧰 Additional context used
📓 Path-based instructions (6)
**/*.{ts,tsx}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
**/*.{ts,tsx}: Use types over interfaces for TypeScript
Avoid using enums; prefer string unions or const objects instead
Files:
apps/webapp/app/components/code/QueryResultsChart.tsx
apps/webapp/app/components/code/ChartConfigPanel.tsx
{packages/core,apps/webapp}/**/*.{ts,tsx}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Use zod for validation in packages/core and apps/webapp
Files:
apps/webapp/app/components/code/QueryResultsChart.tsx
apps/webapp/app/components/code/ChartConfigPanel.tsx
**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Use function declarations instead of default exports
Files:
apps/webapp/app/components/code/QueryResultsChart.tsx
apps/webapp/app/components/code/ChartConfigPanel.tsx
apps/webapp/app/**/*.{ts,tsx}
📄 CodeRabbit inference engine (.cursor/rules/webapp.mdc)
Access all environment variables through the env export of env.server.ts instead of directly accessing process.env in the Trigger.dev webapp
Files:
apps/webapp/app/components/code/QueryResultsChart.tsx
apps/webapp/app/components/code/ChartConfigPanel.tsx
apps/webapp/**/*.{ts,tsx}
📄 CodeRabbit inference engine (.cursor/rules/webapp.mdc)
apps/webapp/**/*.{ts,tsx}: When importing from @trigger.dev/core in the webapp, use subpath exports from the package.json instead of importing from the root path
Follow the Remix 2.1.0 and Express server conventions when updating the main trigger.dev webapp
Files:
apps/webapp/app/components/code/QueryResultsChart.tsx
apps/webapp/app/components/code/ChartConfigPanel.tsx
**/*.{js,ts,jsx,tsx,json,md,css,scss}
📄 CodeRabbit inference engine (AGENTS.md)
Format code using Prettier
Files:
apps/webapp/app/components/code/QueryResultsChart.tsx
apps/webapp/app/components/code/ChartConfigPanel.tsx
🧠 Learnings (1)
📓 Common learnings
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-27T16:26:37.432Z
Learning: Applies to internal-packages/database/**/*.{ts,tsx} : Use Prisma for database interactions in internal-packages/database with PostgreSQL
🧬 Code graph analysis (2)
apps/webapp/app/components/code/QueryResultsChart.tsx (3)
apps/webapp/app/components/code/ChartConfigPanel.tsx (2)
ChartConfiguration (15-24)
AggregationType (13-13)
apps/webapp/app/components/primitives/Chart.tsx (1)
ChartConfig (10-18)
apps/webapp/app/components/primitives/Paragraph.tsx (1)
Paragraph(88-107)
apps/webapp/app/components/code/ChartConfigPanel.tsx (5)
internal-packages/tsql/src/query/printer.ts (1)
isDateTimeType (1178-1188)
apps/webapp/app/utils/cn.ts (1)
cn (77-79)
apps/webapp/app/components/primitives/Paragraph.tsx (1)
Paragraph (88-107)
apps/webapp/app/components/primitives/Buttons.tsx (1)
Button (296-329)
apps/webapp/app/components/primitives/Switch.tsx (1)
Switch(64-119)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (24)
- GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (6, 8)
- GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (7, 8)
- GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (4, 8)
- GitHub Check: units / internal / 🧪 Unit Tests: Internal (6, 8)
- GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (1, 8)
- GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (8, 8)
- GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (2, 8)
- GitHub Check: units / internal / 🧪 Unit Tests: Internal (7, 8)
- GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (5, 8)
- GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (3, 8)
- GitHub Check: units / internal / 🧪 Unit Tests: Internal (8, 8)
- GitHub Check: units / internal / 🧪 Unit Tests: Internal (2, 8)
- GitHub Check: units / internal / 🧪 Unit Tests: Internal (3, 8)
- GitHub Check: units / internal / 🧪 Unit Tests: Internal (4, 8)
- GitHub Check: units / internal / 🧪 Unit Tests: Internal (5, 8)
- GitHub Check: units / internal / 🧪 Unit Tests: Internal (1, 8)
- GitHub Check: units / packages / 🧪 Unit Tests: Packages (1, 1)
- GitHub Check: e2e / 🧪 CLI v3 tests (windows-latest - pnpm)
- GitHub Check: e2e / 🧪 CLI v3 tests (ubuntu-latest - npm)
- GitHub Check: typecheck / typecheck
- GitHub Check: e2e / 🧪 CLI v3 tests (windows-latest - npm)
- GitHub Check: e2e / 🧪 CLI v3 tests (ubuntu-latest - pnpm)
- GitHub Check: claude-review
- GitHub Check: Analyze (javascript-typescript)
🔇 Additional comments (12)
apps/webapp/app/components/code/ChartConfigPanel.tsx (4)
82-105: Column categorization logic is sound. The categorization correctly identifies numeric, datetime, and categorical columns. Note that datetime columns are intentionally included in both dateTimeColumns and categoricalColumns (line 94), which allows them to be used for grouping purposes while maintaining their datetime nature for axis selection.
118-159: Auto-selection logic works correctly. The auto-select defaults effect properly handles initial column selection with good prioritization:
- X-axis: prefers datetime, then categorical, then any column
- Y-axis: selects first numeric column
- Sort: auto-sets to X-axis ascending when it's a datetime column
The use of columnsKey to detect actual column changes and refs to access current config without dependency issues is well-implemented.
241-451: Chart configuration UI is well-structured. The UI properly handles:
- Chart type selection with visual feedback
- Dynamic option lists based on column types
- Conditional visibility for stacked toggle (only when grouping) and sort direction (only when sorting)
- Auto-configuration when selecting datetime columns for X-axis
The immediate auto-sort feedback in the X-axis onChange handler (lines 294-305) provides better UX compared to waiting for the next useEffect run.
453-521: Helper components are clean and functional. The helper components are well-designed:
- ConfigField provides consistent field layout with proper handling of empty labels
- SortDirectionToggle offers clear ascending/descending selection
- TypeBadge intelligently simplifies ClickHouse type names for display (strips wrapper types, truncates long names)
apps/webapp/app/components/code/QueryResultsChart.tsx (8)
72-143: Time granularity detection and formatting are well-implemented. The detectTimeGranularity function uses sensible thresholds to determine the appropriate time scale, and formatDateByGranularity provides contextually appropriate formatting for each granularity level. The use of native date formatting APIs ensures locale-aware display.
193-288: Time gap filling with aggregation is correctly implemented. This complex function properly handles:
- Limiting maximum points to prevent performance issues (lines 207-224)
- Bucketing data points to align with time intervals (lines 226-252)
- Filling missing time slots with zeros (lines 254-287)
- Applying aggregation to multiple values in the same bucket (line 269)
The logic ensures charts display continuous time series data with visible gaps rendered as zeros rather than connecting distant points.
323-379: Tick generation creates well-aligned, human-friendly labels. The `generateTimeTicks` function intelligently:
- Selects appropriate intervals from predefined "nice" values
- Aligns ticks to natural boundaries (midnight, hour marks)
- Ensures a reasonable number of ticks (2-8)
- Provides padding to avoid cutting off edge data points
408-437: Date parsing handles multiple formats robustly. The `tryParseDate` function defensively handles:
- Date objects with validation
- ISO string formats (regex check for YYYY-MM-DD)
- Numbers as milliseconds (prioritized)
- Numbers as Unix timestamps in seconds (fallback)
- Invalid dates return null rather than throwing
The year range check (1970-2100) prevents accepting unrealistic timestamp values.
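A hedged sketch of that parsing order (not the component's actual implementation; the 1970-2100 plausibility check mirrors the description above):

```typescript
// Try Date objects, then ISO-like strings, then numbers as milliseconds,
// falling back to Unix seconds; return null instead of throwing on failure.
function tryParseDateSketch(value: unknown): Date | null {
  const plausible = (d: Date) =>
    !Number.isNaN(d.getTime()) && d.getUTCFullYear() >= 1970 && d.getUTCFullYear() <= 2100;

  if (value instanceof Date) {
    return Number.isNaN(value.getTime()) ? null : value;
  }
  if (typeof value === "string" && /^\d{4}-\d{2}-\d{2}/.test(value)) {
    const d = new Date(value);
    return Number.isNaN(d.getTime()) ? null : d;
  }
  if (typeof value === "number") {
    const asMillis = new Date(value);
    if (plausible(asMillis)) return asMillis; // milliseconds take priority
    const asSeconds = new Date(value * 1000);
    if (plausible(asSeconds)) return asSeconds; // then Unix seconds
  }
  return null;
}
```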
453-638: Data transformation logic is comprehensive and correct. This critical function handles multiple complex scenarios:
Date detection (lines 471-499):
- Uses an 80% threshold to determine if the X-axis is date-based
- Calculates time domain with 2% padding for visual spacing
- Generates appropriate tick positions
Non-grouped mode (lines 507-568):
- Groups rows by X-axis value to handle duplicates
- Applies configured aggregation to multiple values at the same X coordinate
- Fills time gaps for continuous time series
Grouped mode (lines 570-637):
- Pivots data so each group value becomes a separate chart series
- Applies aggregation within each group
- Fills time gaps while maintaining group structure
The defensive checks (line 520, 586) skip invalid dates for date-based axes, preventing chart rendering errors.
640-708: Utility functions are robust and well-implemented. The helper functions handle edge cases properly:
- `toNumber`: converts various types to numbers with a safe fallback
- `aggregateValues`: correctly implements all aggregation types (sum, avg, count, min, max) with empty-array handling
- `sortData`: comprehensive sorting logic that tries date comparison first (using `__rawDate`), then numeric, then string comparison, with proper null handling
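As a reference point, a minimal aggregation helper matching the behaviour described for `aggregateValues` (names and types are illustrative, not the component's exports):

```typescript
type Aggregation = "sum" | "avg" | "count" | "min" | "max";

// Apply one of the supported aggregations, with an explicit empty-array guard.
function aggregate(values: number[], kind: Aggregation): number {
  if (values.length === 0) return 0;
  switch (kind) {
    case "sum":
      return values.reduce((a, b) => a + b, 0);
    case "avg":
      return values.reduce((a, b) => a + b, 0) / values.length;
    case "count":
      return values.length;
    case "min":
      return Math.min(...values);
    case "max":
      return Math.max(...values);
  }
}
```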
710-923: Chart rendering component handles multiple visualization scenarios correctly. The `QueryResultsChart` component properly handles:
Data processing (lines 715-743):
- Transforms data with memoization for performance
- Forces chronological order for date-based axes (line 740), which is essential for time series visualization
Axis configuration (lines 752-853):
- Creates dynamic formatters based on data characteristics
- Uses different X-axis configurations for date-based (continuous numeric scale) vs categorical axes
- Provides adaptive Y-axis formatting based on value ranges
Rendering (lines 855-920):
- Conditionally renders BarChart, AreaChart (for stacked), or LineChart
- Properly configures stacking, tooltips, and legends
- Includes validation and empty states
The distinction between date-based continuous axes and categorical axes ensures proper visual representation of time series data.
925-986: Y-axis formatter and empty state are well-designed. The `createYAxisFormatter` function provides intelligent formatting:
- Uses K/M abbreviations for large numbers (≥1K, ≥1M)
- Dynamically adjusts decimal places based on data range (more precision for smaller ranges)
- Handles edge cases (infinite values, zero range)
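A rough standalone equivalent of those formatting rules (thresholds and decimal choices are assumptions for the sketch):

```typescript
// K/M abbreviations for large values; more decimal places when the visible
// range is small so adjacent ticks stay distinguishable.
function formatYAxisValue(value: number, range: number): string {
  if (!Number.isFinite(value)) return "";
  if (Math.abs(value) >= 1_000_000) return `${(value / 1_000_000).toFixed(1)}M`;
  if (Math.abs(value) >= 1_000) return `${(value / 1_000).toFixed(1)}K`;
  const decimals = range > 0 && range < 1 ? 3 : range < 10 ? 2 : 0;
  return value.toFixed(decimals);
}
```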
The `EmptyState` component provides clear user feedback for various validation failures.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In @apps/webapp/app/components/code/QueryResultsChart.tsx:
- Around line 101-143: formatDateByGranularity currently hardcodes the "en-US"
locale; change all instances of "en-US" to undefined so the runtime/browser's
default locale is used (e.g., in the Date.prototype.toLocaleTimeString and
toLocaleDateString calls inside formatDateByGranularity), and make the same
replacement in the formatDateForTooltip function to ensure dates are localized
to the user's environment.
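For reference, passing `undefined` as the locale argument to `toLocaleDateString`/`toLocaleTimeString` uses the runtime's default locale, which is the behaviour the fix above asks for:

```typescript
const d = new Date("2024-06-01T14:30:00Z");

// Hard-coded locale: always renders US-style dates.
const fixed = d.toLocaleDateString("en-US", { month: "short", day: "numeric" });

// `undefined` locale: uses the browser/runtime default, so users see dates
// formatted according to their own locale conventions.
const localized = d.toLocaleDateString(undefined, { month: "short", day: "numeric" });

console.log(fixed, localized);
```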
📜 Review details
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
apps/webapp/app/components/code/QueryResultsChart.tsx
🧰 Additional context used
📓 Path-based instructions (6)
**/*.{ts,tsx}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
**/*.{ts,tsx}: Use types over interfaces for TypeScript
Avoid using enums; prefer string unions or const objects instead
Files:
apps/webapp/app/components/code/QueryResultsChart.tsx
{packages/core,apps/webapp}/**/*.{ts,tsx}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Use zod for validation in packages/core and apps/webapp
Files:
apps/webapp/app/components/code/QueryResultsChart.tsx
**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Use function declarations instead of default exports
Files:
apps/webapp/app/components/code/QueryResultsChart.tsx
apps/webapp/app/**/*.{ts,tsx}
📄 CodeRabbit inference engine (.cursor/rules/webapp.mdc)
Access all environment variables through the `env` export of `env.server.ts` instead of directly accessing `process.env` in the Trigger.dev webapp
Files:
apps/webapp/app/components/code/QueryResultsChart.tsx
apps/webapp/**/*.{ts,tsx}
📄 CodeRabbit inference engine (.cursor/rules/webapp.mdc)
apps/webapp/**/*.{ts,tsx}: When importing from `@trigger.dev/core` in the webapp, use subpath exports from the package.json instead of importing from the root path
Follow the Remix 2.1.0 and Express server conventions when updating the main trigger.dev webapp
Files:
apps/webapp/app/components/code/QueryResultsChart.tsx
**/*.{js,ts,jsx,tsx,json,md,css,scss}
📄 CodeRabbit inference engine (AGENTS.md)
Format code using Prettier
Files:
apps/webapp/app/components/code/QueryResultsChart.tsx
🧠 Learnings (1)
📓 Common learnings
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-27T16:26:37.432Z
Learning: Applies to internal-packages/database/**/*.{ts,tsx} : Use Prisma for database interactions in internal-packages/database with PostgreSQL
🧬 Code graph analysis (1)
apps/webapp/app/components/code/QueryResultsChart.tsx (3)
apps/webapp/app/components/code/ChartConfigPanel.tsx (2)
`ChartConfiguration` (15-24), `AggregationType` (13-13)
apps/webapp/app/components/primitives/Chart.tsx (1)
`ChartConfig` (10-18)
apps/webapp/app/components/primitives/Paragraph.tsx (1)
`Paragraph` (88-107)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (24)
- GitHub Check: units / internal / 🧪 Unit Tests: Internal (8, 8)
- GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (6, 8)
- GitHub Check: units / internal / 🧪 Unit Tests: Internal (2, 8)
- GitHub Check: units / internal / 🧪 Unit Tests: Internal (3, 8)
- GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (8, 8)
- GitHub Check: units / internal / 🧪 Unit Tests: Internal (7, 8)
- GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (7, 8)
- GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (4, 8)
- GitHub Check: units / internal / 🧪 Unit Tests: Internal (1, 8)
- GitHub Check: units / internal / 🧪 Unit Tests: Internal (5, 8)
- GitHub Check: units / internal / 🧪 Unit Tests: Internal (6, 8)
- GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (3, 8)
- GitHub Check: units / internal / 🧪 Unit Tests: Internal (4, 8)
- GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (2, 8)
- GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (5, 8)
- GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (1, 8)
- GitHub Check: e2e / 🧪 CLI v3 tests (ubuntu-latest - pnpm)
- GitHub Check: units / packages / 🧪 Unit Tests: Packages (1, 1)
- GitHub Check: e2e / 🧪 CLI v3 tests (ubuntu-latest - npm)
- GitHub Check: e2e / 🧪 CLI v3 tests (windows-latest - npm)
- GitHub Check: e2e / 🧪 CLI v3 tests (windows-latest - pnpm)
- GitHub Check: typecheck / typecheck
- GitHub Check: claude-review
- GitHub Check: Analyze (javascript-typescript)
🔇 Additional comments (2)
apps/webapp/app/components/code/QueryResultsChart.tsx (2)
671-711: Well-implemented date-aware sorting logic! The special handling for date-based sorting (lines 689-696) ensures that when sorting by the X-axis column, dates are compared chronologically using the `__rawDate` metadata rather than as raw strings or timestamps. This prevents incorrect ordering in time-series charts and aligns with the PR objective of improving "sorting change when x axis is a date." The implementation correctly:
- Only applies date comparison when `sortByColumn === xAxisColumn`
- Falls back to numeric/string comparison for non-date columns
- Handles null/undefined values defensively
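A compact sketch of that comparator logic (the row shape and names are illustrative, not the component's actual types):

```typescript
type Row = { [key: string]: unknown; __rawDate?: Date };

// Chronological comparison via the raw date when sorting by the X-axis
// column; otherwise fall back to numeric, then string comparison.
function compareRows(a: Row, b: Row, sortBy: string, xAxisColumn: string): number {
  if (sortBy === xAxisColumn && a.__rawDate && b.__rawDate) {
    return a.__rawDate.getTime() - b.__rawDate.getTime();
  }
  const av = a[sortBy];
  const bv = b[sortBy];
  if (typeof av === "number" && typeof bv === "number") return av - bv;
  return String(av ?? "").localeCompare(String(bv ?? ""));
}
```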
740-746: Correct enforcement of chronological ordering for date-based axes. The sorting logic correctly ensures that date-based charts are always displayed in chronological order (line 743) by sorting on `xDataKey` ascending, overriding any user-specified sort configuration. This prevents confusion from disconnected or reversed time series. For categorical (non-date) axes, the user's sort preferences are respected (line 745). This dual behavior is appropriate and well-implemented.
PR Review: TRQL - TRigger Query Language
Overview
This is a substantial and well-architected feature addition that introduces TRQL (TRigger Query Language) - a custom query language that allows users to safely write queries against their data. The implementation is impressive in scope, covering parsing, validation, compilation to ClickHouse SQL, a full UI with editor/autocomplete/charts, AI-powered query generation, and query history tracking.
✅ Strengths
Security Architecture
Code Quality
Testing
UX
PR Review: TRQL and the Query Page
Overview
This is a substantial PR (~43k additions) introducing TRQL (Trigger Query Language), a TypeScript port of PostHog HogQL that provides a safe SQL query interface for ClickHouse with tenant isolation.
Strengths
Security Architecture
Code Quality
Feature Completeness
Suggestions for Improvement
1. AI Service Security Consideration
File: apps/webapp/app/v3/services/aiQueryService.server.ts
The AI service uses gpt-4o-mini by default. While the generated queries are validated, consider:
2. Missing Error Boundary in Query Route
File: apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query/route.tsx
The query page catches errors in the action but consider adding an ErrorBoundary component for unexpected React errors.
3. Query History Cleanup
Question: Is there a mechanism to clean up old query history entries? The CustomerQueryHistory table could grow large over time. Consider adding:
4. Consider Adding Query Size Limits
The printer has maxRows settings, but consider also limiting:
5. Minor: Console Logger in Production
File: apps/webapp/app/v3/querySchemas.ts:406
This debug log in the bulk_action_group_ids whereTransform should be removed or guarded with a debug flag for production.
Test Coverage
Strong test coverage overall:
Suggestion: Add Integration Tests
Consider adding end-to-end tests that actually execute queries against a test ClickHouse instance to verify:
Security Checklist
Summary
This is a well-architected PR that introduces a powerful and secure query system. The security model is sound, with proper tenant isolation at the SQL generation level. The code quality is high with good separation of concerns and extensive test coverage.
Recommended: Approve with minor suggestions above.
Review generated by Claude
Actionable comments posted: 0
🧹 Nitpick comments (8)
apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query/route.tsx (8)
120-123: Consider a more specific redirect or forbidden response. Currently, users without query access are redirected to the home page (`"/"`). Consider returning a 403 response with a message or redirecting to a more contextual location (e.g., the environment page).
Alternative approach
const canAccess = await hasQueryAccess(user.id, user.admin, organizationSlug); if (!canAccess) { - throw redirect("/"); + throw new Response("You don't have access to query features", { + status: 403, + statusText: "Forbidden", + }); }
346-357: Chart config resets on initial query execution. The `useEffect` resets `chartConfig` whenever `columnsKey` changes. On the initial load, `columnsKey` is an empty string (no results yet). After the first query execution, `columnsKey` becomes non-empty, triggering the reset. This means users lose their chart configuration even on the very first query, which may not be the intended behavior. Consider tracking whether this is truly a schema change vs. initial population:
Potential fix
+ const prevColumnsKeyRef = useRef<string>(""); + // Reset chart config only when column schema actually changes // This allows re-running queries with different WHERE clauses without losing config useEffect(() => { - if (columnsKey) { + if (columnsKey && prevColumnsKeyRef.current && columnsKey !== prevColumnsKeyRef.current) { setChartConfig(defaultChartConfig); } + prevColumnsKeyRef.current = columnsKey; }, [columnsKey]);
697-704: Consider using a timestamp key instead of setTimeout for re-triggering. The current pattern (setting to `undefined`, then back to the value after `setTimeout(0)`) forces a re-render to re-trigger the same prompt. While functional, this relies on timing and state updates.
Alternative approach
Use a tuple with a timestamp or counter to ensure uniqueness:
- const [autoSubmitPrompt, setAutoSubmitPrompt] = useState<string | undefined>(); + const [autoSubmitPrompt, setAutoSubmitPrompt] = useState<{ prompt: string; key: number } | undefined>(); // In AIQueryInput, destructure: autoSubmitPrompt?.prompt // In the click handler: onClick={() => { - // Use a unique key to ensure re-trigger even if same prompt clicked twice - setAutoSubmitPrompt(undefined); - setTimeout(() => setAutoSubmitPrompt(example), 0); + setAutoSubmitPrompt({ prompt: example, key: Date.now() }); }}
918-1875: Consider externalizing the function catalog. The inline function documentation spans nearly 1,000 lines (lines 918-1875), making this file very large (2,169 lines total). While comprehensive, this approach has maintainability implications:
- Duplicates documentation that might exist elsewhere (e.g., in the `@internal/tsql` package)
- Makes the route file harder to navigate
- Increases bundle size if not properly code-split
Consider moving the function catalog to a separate file or data structure that can be imported:
// Create: app/v3/tsqlFunctionCatalog.ts export const tsqlFunctionCatalog = [ { title: "Aggregate functions", functions: [ { name: "count()", desc: "Count rows", example: "count()" }, // ... ] }, // ... ]; // Then in this route: import { tsqlFunctionCatalog } from "~/v3/tsqlFunctionCatalog";This would improve maintainability and allow reuse across the application.
2073-2112: Remove unused `suffix` variable. The `suffix` variable is defined as an empty string at line 2076 and only checked at lines 2107-2109, but never assigned a non-empty value. This appears to be dead code.
Cleanup
function highlightSQL(query: string): React.ReactNode[] { // Normalize whitespace for display (let CSS line-clamp handle truncation) const normalized = query.replace(/\s+/g, " ").slice(0, 200); - const suffix = ""; // ... rest of function ... // Add remaining text if (lastIndex < normalized.length) { parts.push(normalized.slice(lastIndex)); } - if (suffix) { - parts.push(suffix); - } return parts; }
2075-2075: Hard-coded query length limit may truncate important information. The `highlightSQL` function truncates queries to 200 characters. For longer queries in the history popover, users may not see enough context to distinguish between similar queries. Consider making this configurable or increasing the limit:
- const normalized = query.replace(/\s+/g, " ").slice(0, 200); + const normalized = query.replace(/\s+/g, " ").slice(0, 300);Or extract as a constant:
const MAX_QUERY_PREVIEW_LENGTH = 200;
1-2169: Consider splitting this large route file into multiple modules. This file contains 2,169 lines mixing route handlers, UI components, documentation content, and utilities. While well-organized, the size impacts maintainability and violates the single-responsibility principle.
Consider this structure:
route.tsx (loader, action, main Page) - ~200 lines ├── components/ │ ├── QueryEditorForm.tsx │ ├── QueryHelpSidebar.tsx │ ├── ExportResultsButton.tsx │ └── QueryHistoryPopover.tsx ├── content/ │ ├── tsqlFunctionCatalog.ts (function documentation) │ ├── exampleQueries.ts │ └── TRQLGuideContent.tsx └── utils/ ├── queryFormatting.ts (highlightSQL, formatQueryStats, formatBytes) └── queryAccess.server.ts (hasQueryAccess)This would improve:
- Maintainability: Smaller, focused modules
- Testability: Easier to unit test isolated functions
- Reusability: Components/utils can be imported elsewhere
- Navigation: Easier to find and modify specific functionality
205-220: Consider using a more specific output schema instead of `z.record(z.any())`. The `z.record(z.any())` schema at line 209 accepts any record structure without validation. While the `tableSchema` parameter validates the query itself and later transforms the results, the result rows bypass strict runtime validation. This approach loses compile-time type safety and could mask unexpected data shapes. Define a more specific output schema that reflects the expected query result structure, or document why a fully permissive schema is appropriate for this use case.
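One hedged way to tighten the schema, assuming rows only ever contain JSON-style scalars (the real row shape depends on the selected columns):

```typescript
import { z } from "zod";

// Constrain row values to JSON-ish primitives instead of `any`.
const QueryRowSchema = z.record(
  z.union([z.string(), z.number(), z.boolean(), z.null()])
);
const QueryResultSchema = z.array(QueryRowSchema);

type QueryRow = z.infer<typeof QueryRowSchema>;
```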
📜 Review details
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query/route.tsx
🧰 Additional context used
📓 Path-based instructions (6)
**/*.{ts,tsx}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
**/*.{ts,tsx}: Use types over interfaces for TypeScript
Avoid using enums; prefer string unions or const objects instead
Files:
apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query/route.tsx
{packages/core,apps/webapp}/**/*.{ts,tsx}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Use zod for validation in packages/core and apps/webapp
Files:
apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query/route.tsx
**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Use function declarations instead of default exports
Files:
apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query/route.tsx
apps/webapp/app/**/*.{ts,tsx}
📄 CodeRabbit inference engine (.cursor/rules/webapp.mdc)
Access all environment variables through the `env` export of `env.server.ts` instead of directly accessing `process.env` in the Trigger.dev webapp
Files:
apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query/route.tsx
apps/webapp/**/*.{ts,tsx}
📄 CodeRabbit inference engine (.cursor/rules/webapp.mdc)
apps/webapp/**/*.{ts,tsx}: When importing from `@trigger.dev/core` in the webapp, use subpath exports from the package.json instead of importing from the root path
Follow the Remix 2.1.0 and Express server conventions when updating the main trigger.dev webapp
Files:
apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query/route.tsx
**/*.{js,ts,jsx,tsx,json,md,css,scss}
📄 CodeRabbit inference engine (AGENTS.md)
Format code using Prettier
Files:
apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query/route.tsx
🧠 Learnings (4)
📓 Common learnings
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-27T16:26:37.432Z
Learning: Applies to internal-packages/database/**/*.{ts,tsx} : Use Prisma for database interactions in internal-packages/database with PostgreSQL
📚 Learning: 2025-12-08T15:19:56.823Z
Learnt from: 0ski
Repo: triggerdotdev/trigger.dev PR: 2760
File: apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.runs.$runParam/route.tsx:278-281
Timestamp: 2025-12-08T15:19:56.823Z
Learning: In apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.runs.$runParam/route.tsx, the tableState search parameter uses intentional double-encoding: the parameter value contains a URL-encoded URLSearchParams string, so decodeURIComponent(value("tableState") ?? "") is required to fully decode it before parsing with new URLSearchParams(). This pattern allows bundling multiple filter/pagination params as a single search parameter.
Applied to files:
apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query/route.tsx
📚 Learning: 2025-11-27T16:26:58.661Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: .cursor/rules/webapp.mdc:0-0
Timestamp: 2025-11-27T16:26:58.661Z
Learning: Applies to apps/webapp/**/*.{ts,tsx} : Follow the Remix 2.1.0 and Express server conventions when updating the main trigger.dev webapp
Applied to files:
apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query/route.tsx
📚 Learning: 2025-11-27T16:26:37.432Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-27T16:26:37.432Z
Learning: The webapp at apps/webapp is a Remix 2.1 application using Node.js v20
Applied to files:
apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query/route.tsx
🧬 Code graph analysis (1)
apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query/route.tsx (10)
apps/webapp/app/utils/pathBuilder.ts (1)
`EnvironmentParamSchema` (26-28)
apps/webapp/app/presenters/v3/QueryPresenter.server.ts (2)
`QueryPresenter` (13-43), `QueryHistoryItem` (5-11)
apps/webapp/app/routes/resources.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query.ai-generate.tsx (1)
`action` (18-181)
apps/webapp/app/services/queryService.server.ts (2)
`executeQuery` (50-116), `QueryScope` (16-16)
apps/webapp/app/v3/querySchemas.ts (2)
`querySchemas` (417-417), `defaultQuery` (422-425)
apps/webapp/app/components/code/TSQLResultsTable.tsx (1)
`TSQLResultsTable` (402-459)
internal-packages/tsql/src/index.ts (2)
`OutputColumnMetadata` (73-73), `ColumnSchema` (68-68)
apps/webapp/app/components/primitives/Popover.tsx (5)
`Popover` (242-242), `PopoverArrowTrigger` (243-243), `PopoverContent` (244-244), `PopoverMenuItem` (246-246), `PopoverTrigger` (249-249)
apps/webapp/app/components/primitives/CopyableText.tsx (1)
`CopyableText` (8-98)
apps/webapp/app/components/primitives/DateTime.tsx (1)
`DateTime` (54-98)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (24)
- GitHub Check: units / internal / 🧪 Unit Tests: Internal (6, 8)
- GitHub Check: units / internal / 🧪 Unit Tests: Internal (7, 8)
- GitHub Check: units / internal / 🧪 Unit Tests: Internal (5, 8)
- GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (1, 8)
- GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (4, 8)
- GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (5, 8)
- GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (3, 8)
- GitHub Check: units / internal / 🧪 Unit Tests: Internal (4, 8)
- GitHub Check: units / internal / 🧪 Unit Tests: Internal (3, 8)
- GitHub Check: units / internal / 🧪 Unit Tests: Internal (8, 8)
- GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (6, 8)
- GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (7, 8)
- GitHub Check: units / internal / 🧪 Unit Tests: Internal (2, 8)
- GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (8, 8)
- GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (2, 8)
- GitHub Check: units / internal / 🧪 Unit Tests: Internal (1, 8)
- GitHub Check: e2e / 🧪 CLI v3 tests (windows-latest - pnpm)
- GitHub Check: e2e / 🧪 CLI v3 tests (windows-latest - npm)
- GitHub Check: units / packages / 🧪 Unit Tests: Packages (1, 1)
- GitHub Check: e2e / 🧪 CLI v3 tests (ubuntu-latest - npm)
- GitHub Check: e2e / 🧪 CLI v3 tests (ubuntu-latest - pnpm)
- GitHub Check: typecheck / typecheck
- GitHub Check: claude-review
- GitHub Check: Analyze (javascript-typescript)
PR Review: TRQL and the Query PageThis is a substantial and well-architected PR that introduces TRQL (TRigger Query Language), a secure query interface for users to explore their ClickHouse data. The implementation demonstrates strong security practices and thoughtful design. Here's my detailed review: 🟢 StrengthsSecurity Architecture (Excellent)The security model is well-designed with multiple layers of protection:
Test Coverage (Good)
Code Quality
🟡 Suggestions for Improvement1. Rate Limiting & Resource ProtectionThe query endpoint should have rate limiting to prevent abuse: // Consider adding in route.tsx action:
// - Per-user query rate limits
// - Query cost estimation before executionThe 2. Query Timeout ConfigurationI see // In queryService.server.ts, consider:
const MAX_QUERY_TIMEOUT_MS = 30000;
// Validate and cap user-provided timeouts3. AI Query Generation SecurityThe AI query service (
4. Error Message Information LeakageBe cautious about exposing detailed error messages to users: // In route.tsx:242
const errorMessage = err instanceof Error ? err.message : "Unknown error executing query";Some ClickHouse error messages might leak schema information. Consider sanitizing error messages before returning them to the frontend. 5. Session/History PrivacyThe
6. Frontend PerformanceFor
🔴 Potential Issues1. Missing Input Validation for
|
| Category | Rating |
|---|---|
| Security | ⭐⭐⭐⭐⭐ |
| Code Quality | ⭐⭐⭐⭐ |
| Test Coverage | ⭐⭐⭐⭐ |
| Documentation | ⭐⭐⭐⭐ |
| Performance | ⭐⭐⭐ |
Overall: This is a well-executed feature with strong security foundations. The concerns raised are mostly hardening suggestions rather than blockers. The security model with automatic tenant guard injection is particularly well done and should serve as a good pattern for similar features.
Great work on the ANTLR grammar port and the comprehensive security test suite! 🎉
Review generated by Claude
PR Review: TRQL and the Query PageOverviewThis is an impressive, substantial PR introducing TRQL (TRiggerQueryLanguage) - a type-safe, tenant-safe query language that transpiles to ClickHouse SQL, along with a new Query Page UI. The PR adds ~43,000 lines across 100 files, which is a significant addition warranting careful review. Code Quality and Best PracticesStrengths:
Suggestions:
Potential Bugs or Issues
Performance Considerations
Security ConcernsStrong Security Foundations:
Recommendations:
Test CoverageGood Coverage:
Suggestions:
Additional Observations
SummaryThis is a well-architected, security-conscious feature addition. The TSQL engine demonstrates solid engineering with proper tenant isolation and comprehensive testing. The main areas for improvement are:
Overall, this PR represents significant work and the foundations are solid. |
PR Review: TRQL and the Query PageThis is an impressive and comprehensive addition to Trigger.dev, introducing a full query language engine (TRQL) derived from PostHog's HogQL. I've reviewed the code across the Overall Assessment: ✅ Strong ImplementationThis PR demonstrates solid software engineering practices with clear separation of concerns, comprehensive security measures, and good test coverage. Below are my detailed findings: 🟢 StrengthsSecurity (Excellent)
Code Quality
Test CoverageFound 8 test files in the tsql package:
Plus webapp component tests for autocompletion and linting. Database MigrationThe
🟡 Suggestions for Improvement1. Query Result Size LimitsIn const byteSeconds = parseFloat(stats.byte_seconds) || 0;
const costInCents = byteSeconds * env.CENTS_PER_QUERY_BYTE_SECOND;Consider adding:
2. Error Handling in QueryService
if (result[0] === null && history) {Consider also logging failed queries for debugging/monitoring purposes, perhaps with a separate flag. 3. Editor UXIn } catch {
// If formatting fails (e.g., invalid SQL), silently ignore
}Consider showing a toast notification to inform users why formatting didn't work. 4. Input ValidationIn if (typeof tableName \!== "string") {
throw new QueryError("Table name must be a string");
}The validator.ts does this well - consider adding similar context to printer errors. 5. Escape Function Edge CaseIn if (/^[A-Za-z_$][A-Za-z0-9_$]*$/.test(identifier)) {The tests in 🔴 Potential Issues1. Memory Usage in Large ResultsThe result transformation in
2. Rate LimitingThe query endpoint doesn't appear to have rate limiting. Malicious users could potentially:
Recommend adding rate limiting per organization/user. 3. Index Usage VerificationThe
However, since this is a new table being created in the same migration, this should be fine for initial deployment. 📋 Minor Nits
SummaryThis is a well-architected feature with strong security fundamentals. The TRQL engine is comprehensive, the security model is sound with mandatory tenant isolation, and the test coverage is good. The main areas to address are:
Great work on this significant addition! 🎉 |
PR Review: TRQL and the Query PageThis is a substantial PR introducing TRQL (TRigger Query Language), a custom query language for safely querying ClickHouse with tenant isolation. Overall, this is an impressive and well-architected feature. Here's my detailed review: Security (Excellent) ✅The security implementation is thorough and well-designed: Tenant Isolation:
SQL Injection Prevention:
Table Allowlisting:
Code Quality (Good) 👍Strengths:
Areas for Improvement:
const byteSeconds = parseFloat(stats.byte_seconds);
if (Number.isNaN(byteSeconds)) {
logger.warn('Invalid byte_seconds value', { value: stats.byte_seconds });
}Performance Considerations (Good) 👍Positives:
Potential Concerns:
const projects = await prisma.project.findMany({...});
const environments = await prisma.runtimeEnvironment.findMany({...});For organizations with many projects/environments, consider caching these mappings or fetching only what's needed.
Test Coverage (Good) 👍The TSQL package has excellent test coverage:
Missing tests:
Potential Issues 🔍
// ai-generate.tsx:18-19
const canAccess = await hasQueryAccess(userId, user.admin, organizationSlug);
if (!canAccess) {
return new Response(JSON.stringify({ type: "result", success: false, error: "Unauthorized" }), { status: 403, headers: { "Content-Type": "application/json" } });
}
openai(env.AI_RUN_FILTER_MODEL ?? "gpt-4o-mini")The env var name
const costInCents = byteSeconds * (env.CENTS_PER_QUERY_BYTE_SECOND ?? 0);SummaryThis is a well-designed feature with strong security foundations. The tenant isolation and SQL injection prevention are particularly well-implemented. The main areas to address before merging:
Great work on this substantial feature! The TSQL architecture provides a solid foundation for safe user queries. |
PR Review: TRQL and Query PageThis is an impressive, well-architected feature! The TSQL implementation demonstrates a strong defense-in-depth security approach. Here's my detailed review: 🟢 StrengthsSecurity Architecture
Code Quality
Test Coverage
🟡 Areas for ImprovementHigh Priority1. Missing authorization check in AI endpoint The AI generation endpoint ( // Current - only checks project access
export async function action({ request, params }: ActionFunctionArgs) {
const userId = await requireUserId(request);
const project = await findProjectBySlug(organizationSlug, projectParam, userId);
// Missing: hasQueryAccess check
}Recommendation: Add the same 2. No rate limiting The query endpoints allow unlimited:
Recommendation: Implement rate limiting at multiple levels:
Medium Priority3. No input length validation The Zod schemas don't include maximum length constraints: // Current
prompt: z.string().min(1, "Prompt is required"), // No .max()
query: z.string().min(1, "Query is required"), // No .max()Recommendation: Add 4. Error message exposure ClickHouse error messages may expose internal details to users. Consider sanitizing error messages before returning them to the client (full details can be logged server-side). Low Priority5. Query complexity limits While there's a
This would prevent resource exhaustion from complex queries. 🔍 Security NotesThe security implementation is well-designed:
📝 Minor Suggestions
SummaryThis is a high-quality PR with strong security fundamentals. The main actionable items are:
Great work on the comprehensive security test coverage and the defense-in-depth approach! 🎉 |
PR Review: Query Page with TSQL Query LanguageThis is a comprehensive and well-architected feature that adds a SQL query interface for task run analytics. I've reviewed the implementation across the TSQL package, ClickHouse client integration, webapp components, and security considerations. Security ReviewStrengths - This PR has excellent security practices:
Potential Improvements:
Code QualityStrengths:
Suggestions:
Performance ConsiderationsStrengths:
Suggestions:
Minor Issues
SummaryThis is a solid, security-conscious implementation. The multi-layer defense (tenant isolation, parameterization, allowlisting, access control) is well thought out. The main areas for improvement are around operational concerns (rate limiting, cost limits) and code organization (splitting large files). Recommendation: Approve with minor suggestions The security model is sound, and the code quality is high. The suggestions above are improvements but not blockers. |
PR Review: TRQL and the Query PageThis is an impressive and substantial PR introducing TRQL (TRigger Query Language), a TypeScript port of PostHog's HogQL, along with a new Query page for users to execute queries on their data. Below is my review covering code quality, security, performance, and potential issues. ✅ Strengths
🐛 Potential Bugs / Issues1. Missing NaN Guard in queryService.server.ts:97-98const byteSeconds = parseFloat(stats.byte_seconds) || 0;
const costInCents = byteSeconds * env.CENTS_PER_QUERY_BYTE_SECOND;
2. Division by Zero in clickhouse/client.ts:366const elapsedSeconds = elapsedNs / 1_000_000_000;
const byteSeconds = elapsedSeconds > 0 ? readBytes / elapsedSeconds : 0;Good defensive coding here with the 3. Stale Closure in AIQueryInput.tsxIn [isLoading, resourcePath, mode, getCurrentQuery] // missing: processStreamEventThis could cause stale closure issues where 4. Incomplete Error Handling in Stream ProcessingIn 🔒 Security Considerations
⚡ Performance Considerations
const projects = await prisma.project.findMany({...});
const environments = await prisma.runtimeEnvironment.findMany({...});Consider caching these mappings or lazy-loading them only when the query actually uses those columns.
📝 Code Quality Suggestions
this.logger.debug("parsedSummary", parsedSummary);
// Later in the same file:
this.log("Querying clickhouse...") // inconsistent
"noImplicitAny": falseThis weakens type safety. Consider enabling it and fixing any type issues.
🧪 Test CoverageThe test coverage is generally excellent. A few suggestions:
📋 Minor Issues
SummaryThis is a well-architected feature with strong security foundations. The main areas to address before merge are:
The security model with tenant isolation is particularly well-done. The comprehensive test suite gives confidence in the implementation. Overall: Approve with minor changes 👍 |
Actionable comments posted: 5
🤖 Fix all issues with AI agents
In
@apps/webapp/app/routes/resources.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query.ai-generate.tsx:
- Around line 12-16: The RequestSchema currently allows mode="edit" without
enforcing currentQuery, causing AIQueryService.streamQuery to treat empty
currentQuery as a "new" edit; update RequestSchema so that when mode is "edit"
currentQuery is required and non-empty (e.g., use z.discriminatedUnion or
.refine to require currentQuery when mode === "edit"), and ensure any form
parsing that relies on RequestSchema (referenced by RequestSchema and
AIQueryService.streamQuery) will validate and return a clear error instead of
silently falling back to "new".
- Around line 18-21: In the action function, replace
EnvironmentParamSchema.parse(params) with
EnvironmentParamSchema.safeParse(params) and check the result; if safeParse
returns success: false, return a 400 response (e.g., new Response or Remix json
with status 400) describing the invalid route params instead of letting parse
throw; otherwise extract organizationSlug/projectParam/envParam from result.data
and continue the existing flow (keep requireUserId(request) as-is).
- Around line 70-90: The env.OPENAI_API_KEY check is validated but not used;
update the AIQueryService instantiation to pass the API key explicitly by
replacing the openai(...) call with createOpenAI({ apiKey: env.OPENAI_API_KEY,
model: env.AI_RUN_FILTER_MODEL ?? "gpt-4o-mini" }) (or equivalent createOpenAI
call used elsewhere), keep the existing env.OPENAI_API_KEY validation, and
import createOpenAI instead of openai so the validated key is actually supplied
to AIQueryService.
In @internal-packages/tsql/src/grammar/parser.test.ts:
- Around line 7-13: The parse helper currently constructs
CharStreams.fromString, TSQLLexer, CommonTokenStream and TSQLParser but doesn't
surface syntax errors; update the parse function to attach an ANTLR error
listener (remove default listeners and add a custom listener) to both the
TSQLLexer and TSQLParser that collects errors, run the parse production(s) you
need, and if any errors were recorded throw or fail the test (or return the
collected errors) so tests fail when parse errors occur; reference the parse
function, TSQLLexer, TSQLParser, and CharStreams.fromString when making the
change.
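A sketch of what that fix could look like, assuming an antlr4ts-style runtime; the generated-file import paths and the `select()` start rule are placeholders, not the package's actual layout:

```typescript
import { CharStreams, CommonTokenStream } from "antlr4ts";
// Hypothetical paths to the generated lexer/parser referenced in the tests.
import { TSQLLexer } from "./generated/TSQLLexer";
import { TSQLParser } from "./generated/TSQLParser";

function parseOrThrow(input: string) {
  const errors: string[] = [];
  const listener = {
    // Collect every syntax error instead of letting ANTLR print to stderr.
    syntaxError(_rec: unknown, _sym: unknown, line: number, col: number, msg: string) {
      errors.push(`line ${line}:${col} ${msg}`);
    },
  };

  const lexer = new TSQLLexer(CharStreams.fromString(input));
  lexer.removeErrorListeners();
  lexer.addErrorListener(listener);

  const parser = new TSQLParser(new CommonTokenStream(lexer));
  parser.removeErrorListeners();
  parser.addErrorListener(listener);

  const tree = parser.select(); // whichever start rule the test exercises
  if (errors.length > 0) {
    throw new Error(`Parse failed:\n${errors.join("\n")}`);
  }
  return tree;
}
```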
🧹 Nitpick comments (10)
internal-packages/clickhouse/src/client/client.ts (2)
234-410: Consider extracting shared logic to reduce duplication. The `queryWithStats` method shares ~95% of its implementation with the existing `query` method. The only meaningful differences are the stats extraction (lines 346-381) and the return type. Consider refactoring to extract the common query execution logic into a private helper method, with both `query` and `queryWithStats` calling this helper and then processing the results according to their needs. This would improve maintainability and reduce the risk of divergence when fixing bugs or adding features.
♻️ Potential refactoring approach
Extract a private method that handles the common query execution:
private async executeQuery<TIn extends z.ZodSchema<any>, TOut extends z.ZodSchema<any>>( req: { name: string; query: string; params?: TIn; schema: TOut; settings?: ClickHouseSettings }, params: z.input<TIn>, options?: { attributes?: Record<string, string | number | boolean>; params?: BaseQueryParams } ): Promise<Result<{ res: ResultSet; unparsedRows: Array<TOut> }, QueryError>>Then both
queryandqueryWithStatswould call this helper and post-process the results (extracting stats, shaping the return value, etc.).
346-381: Stats calculation logic is correct.The byte_seconds calculation properly guards against division by zero (line 366), and all stats fields correctly default to "0" when the summary header is missing or fields are undefined.
However, line 362 contains what appears to be debug logging without a descriptive message. Consider removing this or making it more informative:
♻️ Suggested improvement
- this.logger.debug("parsedSummary", parsedSummary); + this.logger.debug("Parsed query statistics from ClickHouse summary header", { parsedSummary });Or remove it entirely if not needed for production debugging.
internal-packages/tsql/src/grammar/parser.test.ts (2)
15-73: LGTM! Consider adding error case tests.The SELECT statement tests provide good structural coverage of common query patterns. The assertions correctly verify the presence of expected AST nodes.
As an optional enhancement, consider adding tests for invalid SELECT statements to verify error handling behavior.
75-128: Strengthen text-based assertions.The expression tests use weak text-based assertions that only check if substrings exist in the concatenated AST text. This doesn't validate:
- Parse tree structure
- Operator precedence
- Token types
- Expression nesting
Consider checking AST structure instead, e.g., verifying the specific operator nodes, operand types, and tree depth.
Example of stronger assertion
For the addition test, instead of:
expect(text).toContain("1"); expect(text).toContain("2"); expect(text).toContain("+");Consider:
// Verify it's a binary operation const children = columnExpr.children; expect(children?.length).toBeGreaterThan(0); // Check for operator token expect(columnExpr.PLUS()).toBeDefined(); // Verify left and right operands exist // ... more structural checksinternal-packages/tsql/src/query/ast.ts (1)
248-277: Prefer string unions over enums to align with coding guidelines.The coding guidelines specify avoiding enums in favor of string unions or const objects. Consider refactoring these enums to string unions, similar to how
SetOperatoris defined (lines 290-295).♻️ Suggested refactor to string unions
-export enum ArithmeticOperationOp { - Add = "+", - Sub = "-", - Mult = "*", - Div = "/", - Mod = "%", -} +export type ArithmeticOperationOp = "+" | "-" | "*" | "/" | "%"; -export enum CompareOperationOp { - Eq = "==", - NotEq = "!=", - Gt = ">", - GtEq = ">=", - Lt = "<", - LtEq = "<=", - Like = "like", - ILike = "ilike", - NotLike = "not like", - NotILike = "not ilike", - In = "in", - GlobalIn = "global in", - NotIn = "not in", - GlobalNotIn = "global not in", - InCohort = "in cohort", - NotInCohort = "not in cohort", - Regex = "=~", - IRegex = "=~*", - NotRegex = "!~", - NotIRegex = "!~*", -} +export type CompareOperationOp = + | "==" + | "!=" + | ">" + | ">=" + | "<" + | "<=" + | "like" + | "ilike" + | "not like" + | "not ilike" + | "in" + | "global in" + | "not in" + | "global not in" + | "in cohort" + | "not in cohort" + | "=~" + | "=~*" + | "!~" + | "!~*";You'll also need to update the
NEGATED_COMPARE_OPSarray to use string literals:-export const NEGATED_COMPARE_OPS: CompareOperationOp[] = [ - CompareOperationOp.NotEq, - CompareOperationOp.NotLike, - CompareOperationOp.NotILike, - CompareOperationOp.NotIn, - CompareOperationOp.GlobalNotIn, - CompareOperationOp.NotInCohort, - CompareOperationOp.NotRegex, - CompareOperationOp.NotIRegex, -]; +export const NEGATED_COMPARE_OPS = [ + "!=", + "not like", + "not ilike", + "not in", + "global not in", + "not in cohort", + "!~", + "!~*", +] as const;Based on coding guidelines specifying to avoid enums.
internal-packages/tsql/src/query/escape.ts (4)
20-21: Minor: Comment about\vmay be overly cautious.In modern JavaScript/TypeScript,
\vis a valid and well-supported escape sequence for vertical tab (U+000B). The comment suggests it "may not work in all contexts," but this was primarily a concern in older JavaScript engines. Using\x0Bis fine, but the comment could be clarified or simplified.
38-43: Consider simplifyingsafeIdentifier.The
includescheck beforereplaceis unnecessary—String.prototype.replaceis efficient even when the pattern doesn't exist. Simplifying to justreturn identifier.replace(/%/g, "")would be cleaner.♻️ Simplified implementation
export function safeIdentifier(identifier: string): string { - if (identifier.includes("%")) { - return identifier.replace(/%/g, ""); - } - return identifier; + return identifier.replace(/%/g, ""); }
49-55: Optional: Consider optimizing for larger strings.The
split("").map().join()pattern creates intermediate arrays. For typical SQL values this is fine, but for large strings, a single-pass approach (loop or regex with replacer function) would be more efficient.
256-262: Array type inference assumes homogeneous arrays.The function infers array type from the first element (Line 260), which assumes all elements have the same type. This is typically safe for ClickHouse arrays, but be aware that mixed-type arrays will be incorrectly typed.
apps/webapp/app/routes/resources.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query.ai-generate.tsx (1)
187-204: Consider deduping query extraction withAIQueryServiceto avoid drift.
extractQueryFromTextduplicatesAIQueryService’s extraction logic (currently private). If one changes, the other can diverge; consider exporting a shared helper or making the service expose a utility for this.
📜 Review details
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)
apps/webapp/app/routes/resources.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query.ai-generate.tsxapps/webapp/app/services/queryService.server.tsinternal-packages/clickhouse/src/client/client.tsinternal-packages/tsql/src/grammar/parser.test.tsinternal-packages/tsql/src/index.tsinternal-packages/tsql/src/query/ast.tsinternal-packages/tsql/src/query/escape.test.tsinternal-packages/tsql/src/query/escape.tsinternal-packages/tsql/src/query/printer.test.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- apps/webapp/app/services/queryService.server.ts
🧰 Additional context used
📓 Path-based instructions (9)
**/*.{ts,tsx}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
**/*.{ts,tsx}: Use types over interfaces for TypeScript
Avoid using enums; prefer string unions or const objects instead
Files:
internal-packages/tsql/src/query/escape.tsapps/webapp/app/routes/resources.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query.ai-generate.tsxinternal-packages/tsql/src/index.tsinternal-packages/clickhouse/src/client/client.tsinternal-packages/tsql/src/query/ast.tsinternal-packages/tsql/src/query/escape.test.tsinternal-packages/tsql/src/grammar/parser.test.ts
**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Use function declarations instead of default exports
Files:
internal-packages/tsql/src/query/escape.tsapps/webapp/app/routes/resources.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query.ai-generate.tsxinternal-packages/tsql/src/index.tsinternal-packages/clickhouse/src/client/client.tsinternal-packages/tsql/src/query/ast.tsinternal-packages/tsql/src/query/escape.test.tsinternal-packages/tsql/src/grammar/parser.test.ts
**/*.{js,ts,jsx,tsx,json,md,css,scss}
📄 CodeRabbit inference engine (AGENTS.md)
Format code using Prettier
Files:
internal-packages/tsql/src/query/escape.tsapps/webapp/app/routes/resources.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query.ai-generate.tsxinternal-packages/tsql/src/index.tsinternal-packages/clickhouse/src/client/client.tsinternal-packages/tsql/src/query/ast.tsinternal-packages/tsql/src/query/escape.test.tsinternal-packages/tsql/src/grammar/parser.test.ts
**/*.ts
📄 CodeRabbit inference engine (.cursor/rules/otel-metrics.mdc)
**/*.ts: When creating or editing OTEL metrics (counters, histograms, gauges), ensure metric attributes have low cardinality by using only enums, booleans, bounded error codes, or bounded shard IDs
Do not use high-cardinality attributes in OTEL metrics such as UUIDs/IDs (envId, userId, runId, projectId, organizationId), unbounded integers (itemCount, batchSize, retryCount), timestamps (createdAt, startTime), or free-form strings (errorMessage, taskName, queueName)
When exporting OTEL metrics via OTLP to Prometheus, be aware that the exporter automatically adds unit suffixes to metric names (e.g., 'my_duration_ms' becomes 'my_duration_ms_milliseconds', 'my_counter' becomes 'my_counter_total'). Account for these transformations when writing Grafana dashboards or Prometheus queries
Files:
internal-packages/tsql/src/query/escape.tsinternal-packages/tsql/src/index.tsinternal-packages/clickhouse/src/client/client.tsinternal-packages/tsql/src/query/ast.tsinternal-packages/tsql/src/query/escape.test.tsinternal-packages/tsql/src/grammar/parser.test.ts
{packages/core,apps/webapp}/**/*.{ts,tsx}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Use zod for validation in packages/core and apps/webapp
Files:
apps/webapp/app/routes/resources.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query.ai-generate.tsx
apps/webapp/app/**/*.{ts,tsx}
📄 CodeRabbit inference engine (.cursor/rules/webapp.mdc)
Access all environment variables through the `env` export of `env.server.ts` instead of directly accessing `process.env` in the Trigger.dev webapp
Files:
apps/webapp/app/routes/resources.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query.ai-generate.tsx
apps/webapp/**/*.{ts,tsx}
📄 CodeRabbit inference engine (.cursor/rules/webapp.mdc)
apps/webapp/**/*.{ts,tsx}: When importing from `@trigger.dev/core` in the webapp, use subpath exports from the package.json instead of importing from the root path
Follow the Remix 2.1.0 and Express server conventions when updating the main trigger.dev webapp
Files:
apps/webapp/app/routes/resources.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query.ai-generate.tsx
**/*.{test,spec}.{ts,tsx}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Use vitest for all tests in the Trigger.dev repository
Files:
internal-packages/tsql/src/query/escape.test.tsinternal-packages/tsql/src/grammar/parser.test.ts
**/*.test.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (AGENTS.md)
**/*.test.{ts,tsx,js,jsx}: Test files should live beside the files under test and use descriptivedescribeanditblocks
Avoid mocks or stubs in tests; use helpers from@internal/testcontainerswhen Redis or Postgres are needed
Use vitest for unit tests
Files:
internal-packages/tsql/src/query/escape.test.tsinternal-packages/tsql/src/grammar/parser.test.ts
🧠 Learnings (10)
📓 Common learnings
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-27T16:26:37.432Z
Learning: Applies to internal-packages/database/**/*.{ts,tsx} : Use Prisma for database interactions in internal-packages/database with PostgreSQL
📚 Learning: 2025-12-08T15:19:56.823Z
Learnt from: 0ski
Repo: triggerdotdev/trigger.dev PR: 2760
File: apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.runs.$runParam/route.tsx:278-281
Timestamp: 2025-12-08T15:19:56.823Z
Learning: In apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.runs.$runParam/route.tsx, the tableState search parameter uses intentional double-encoding: the parameter value contains a URL-encoded URLSearchParams string, so decodeURIComponent(value("tableState") ?? "") is required to fully decode it before parsing with new URLSearchParams(). This pattern allows bundling multiple filter/pagination params as a single search parameter.
Applied to files:
apps/webapp/app/routes/resources.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query.ai-generate.tsx
📚 Learning: 2025-11-27T16:26:58.661Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: .cursor/rules/webapp.mdc:0-0
Timestamp: 2025-11-27T16:26:58.661Z
Learning: Applies to apps/webapp/**/*.{ts,tsx} : Follow the Remix 2.1.0 and Express server conventions when updating the main trigger.dev webapp
Applied to files:
apps/webapp/app/routes/resources.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query.ai-generate.tsx
📚 Learning: 2024-10-22T10:50:41.311Z
Learnt from: nicktrn
Repo: triggerdotdev/trigger.dev PR: 1424
File: packages/core/src/v3/errors.ts:155-189
Timestamp: 2024-10-22T10:50:41.311Z
Learning: When using `assertExhaustive` in a `switch` statement in TypeScript (e.g., in the `shouldRetryError` function in `packages/core/src/v3/errors.ts`), and it throws an error, it's acceptable not to add a `return` statement afterward, as control flow will not proceed beyond the `assertExhaustive` call.
Applied to files:
apps/webapp/app/routes/resources.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query.ai-generate.tsx
📚 Learning: 2025-11-27T16:26:37.432Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-27T16:26:37.432Z
Learning: Applies to internal-packages/database/**/*.{ts,tsx} : Use Prisma for database interactions in internal-packages/database with PostgreSQL
Applied to files:
internal-packages/tsql/src/index.ts
📚 Learning: 2025-11-27T16:27:48.109Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-11-27T16:27:48.109Z
Learning: Applies to **/*.test.{ts,tsx,js,jsx} : Use vitest for unit tests
Applied to files:
internal-packages/tsql/src/grammar/parser.test.ts
📚 Learning: 2025-11-27T16:26:37.432Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-27T16:26:37.432Z
Learning: Applies to **/*.{test,spec}.{ts,tsx} : Use vitest for all tests in the Trigger.dev repository
Applied to files:
internal-packages/tsql/src/grammar/parser.test.ts
📚 Learning: 2025-11-27T16:27:48.109Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-11-27T16:27:48.109Z
Learning: Applies to **/*.test.{ts,tsx,js,jsx} : Avoid mocks or stubs in tests; use helpers from `internal/testcontainers` when Redis or Postgres are needed
Applied to files:
internal-packages/tsql/src/grammar/parser.test.ts
📚 Learning: 2025-11-27T16:27:48.109Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-11-27T16:27:48.109Z
Learning: Applies to **/*.test.{ts,tsx,js,jsx} : Test files should live beside the files under test and use descriptive `describe` and `it` blocks
Applied to files:
internal-packages/tsql/src/grammar/parser.test.ts
📚 Learning: 2025-11-27T16:26:58.661Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: .cursor/rules/webapp.mdc:0-0
Timestamp: 2025-11-27T16:26:58.661Z
Learning: Applies to apps/webapp/**/*.test.{ts,tsx} : Test files should only import classes and functions from `app/**/*.ts` files and should not import `env.server.ts` directly or indirectly; pass configuration through options instead
Applied to files:
internal-packages/tsql/src/grammar/parser.test.ts
🧬 Code graph analysis (5)
internal-packages/tsql/src/query/escape.ts (2)
internal-packages/tsql/src/index.ts (5)
`escapeTSQLIdentifier` (44-44), `escapeClickHouseIdentifier` (42-42), `escapeTSQLString` (45-45), `escapeClickHouseString` (43-43), `getClickHouseType` (46-46)
internal-packages/tsql/src/query/printer_context.ts (1)
timezone(89-91)
apps/webapp/app/routes/resources.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query.ai-generate.tsx (5)
apps/webapp/app/services/session.server.ts (1)
`requireUserId` (25-35)
apps/webapp/app/utils/pathBuilder.ts (1)
`EnvironmentParamSchema` (26-28)
apps/webapp/app/models/project.server.ts (1)
`findProjectBySlug` (136-147)
apps/webapp/app/models/runtimeEnvironment.server.ts (1)
`findEnvironmentBySlug` (116-145)
apps/webapp/app/v3/services/aiQueryService.server.ts (1)
`AIQueryService` (46-487)
internal-packages/clickhouse/src/client/client.ts (7)
internal-packages/clickhouse/src/client/types.ts (2)
ClickhouseQueryWithStatsFunction(39-45)QueryStats(19-29)packages/core/src/v3/utils/crypto.ts (1)
randomUUID(1-5)internal-packages/tracing/src/index.ts (2)
startSpan(60-90)recordSpanError(92-98)packages/core/src/v3/utils/flattenAttributes.ts (1)
flattenAttributes(6-14)internal-packages/clickhouse/src/client/errors.ts (1)
QueryError(33-42)packages/core/src/v3/tryCatch.ts (1)
tryCatch(8-15)internal-packages/clickhouse/src/client/tsql.ts (1)
QueryStats(24-24)
internal-packages/tsql/src/query/ast.ts (3)
internal-packages/tsql/src/query/context.ts (1)
TSQLContext(38-56)internal-packages/tsql/src/query/constants.ts (2)
ConstantDataType(3-13)TSQLQuerySettings(47-52)internal-packages/tsql/src/query/models.ts (7)
Table(41-49)LazyJoin(51-54)LazyTable(56-56)VirtualTable(58-58)FieldOrTable(6-8)UnknownDatabaseField(23-23)DatabaseField(10-17)
internal-packages/tsql/src/query/escape.test.ts (1)
internal-packages/tsql/src/query/escape.ts (7)
`escapeClickHouseIdentifier` (89-105), `escapeTSQLIdentifier` (61-83), `SQLValueEscaper` (124-213), `escapeClickHouseString` (225-227), `escapeTSQLString` (218-220), `getClickHouseType` (233-265), `safeIdentifier` (38-43)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (24)
- GitHub Check: units / internal / 🧪 Unit Tests: Internal (8, 8)
- GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (3, 8)
- GitHub Check: units / internal / 🧪 Unit Tests: Internal (7, 8)
- GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (5, 8)
- GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (2, 8)
- GitHub Check: units / internal / 🧪 Unit Tests: Internal (1, 8)
- GitHub Check: units / internal / 🧪 Unit Tests: Internal (2, 8)
- GitHub Check: units / internal / 🧪 Unit Tests: Internal (4, 8)
- GitHub Check: units / internal / 🧪 Unit Tests: Internal (3, 8)
- GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (6, 8)
- GitHub Check: units / internal / 🧪 Unit Tests: Internal (5, 8)
- GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (7, 8)
- GitHub Check: units / internal / 🧪 Unit Tests: Internal (6, 8)
- GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (8, 8)
- GitHub Check: e2e / 🧪 CLI v3 tests (ubuntu-latest - npm)
- GitHub Check: e2e / 🧪 CLI v3 tests (windows-latest - pnpm)
- GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (1, 8)
- GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (4, 8)
- GitHub Check: e2e / 🧪 CLI v3 tests (windows-latest - npm)
- GitHub Check: units / packages / 🧪 Unit Tests: Packages (1, 1)
- GitHub Check: e2e / 🧪 CLI v3 tests (ubuntu-latest - pnpm)
- GitHub Check: typecheck / typecheck
- GitHub Check: claude-review
- GitHub Check: Analyze (javascript-typescript)
🔇 Additional comments (12)
internal-packages/clickhouse/src/client/client.ts (1)
19-19: LGTM! The new type imports support the queryWithStats method and are properly scoped.
Also applies to: 23-23
internal-packages/tsql/src/grammar/parser.test.ts (2)
1-5: LGTM! The imports are appropriate for testing an ANTLR-based parser with Vitest.
130-165: LGTM! The program tests provide good structural validation of variable declarations and program-level constructs. The filtering approach to count varDecls is appropriate.
internal-packages/tsql/src/index.ts (3)
10-10: LGTM: Clean error handling and proper aliasing to avoid shadowing. The import alias TSQLSyntaxError prevents shadowing the global SyntaxError, and the error listener implementation correctly captures parse errors.
Also applies to: 19-32
137-167: LGTM: Robust parsing with proper validation. The function correctly handles ANTLR parsing, error capture, and AST validation before returning.
205-277: LGTM: Well-structured compilation function with clear tenant isolation. The compileTSQL function cleanly orchestrates the parsing, schema creation, and printing steps. The tenant isolation design (required organizationId, optional projectId/environmentId) provides appropriate flexibility.
internal-packages/tsql/src/query/ast.ts (1)
654-672: LGTM: Robust helper with proper edge case handling. The function correctly handles empty arrays (error), single queries (optimization), and multiple queries (construction). The error handling and optimization are appropriate.
internal-packages/tsql/src/query/escape.ts (4)
61-105: LGTM! Identifier escaping is secure and well-implemented. Both escapeTSQLIdentifier and escapeClickHouseIdentifier properly validate against the % character to prevent injection, use appropriate regex patterns, and handle edge cases (like numeric identifiers in TSQL for array access). The dialect-specific differences (allowing $ in TSQL) are correctly implemented. A usage sketch of these escape helpers follows this list of comments.
110-118: LGTM! Type definition is comprehensive. The EscapableValue type covers all the value types handled by the escaper, including nested arrays.
124-213: LGTM! SQLValueEscaper class is well-structured. The visitor pattern implementation correctly handles all supported types with appropriate dialect-specific behavior. Date formatting uses UTC methods consistently, and the timezone parameter is properly passed to ClickHouse functions. Error handling for unsupported types is appropriate.
218-227: LGTM! Wrapper functions provide clean API. These functions appropriately delegate to SQLValueEscaper with the correct dialect settings.
internal-packages/tsql/src/query/escape.test.ts (1)
1-241: LGTM! Excellent test coverage. This test suite comprehensively covers all exported functions from escape.ts with appropriate edge cases, dialect-specific behavior, error conditions, and type variations. The tests are well-structured using vitest's describe and it blocks, and assertions correctly verify the expected behavior.
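As referenced above, here is a brief usage sketch of the escape helpers. The relative import and the exact signatures are assumptions for illustration; only the function names and their line ranges appear in this review.

// Hypothetical usage of the escape helpers reviewed above; the real signatures
// live in internal-packages/tsql/src/query/escape.ts.
import {
  escapeTSQLIdentifier,
  escapeClickHouseIdentifier,
  escapeClickHouseString,
} from "./escape"; // assumed relative import

// Identifiers are validated (for example, '%' is rejected) before being quoted for each dialect.
const tsqlId = escapeTSQLIdentifier("task_identifier");
const clickhouseId = escapeClickHouseIdentifier("task_identifier");

// String values are escaped per dialect before being embedded in generated SQL.
const clickhouseValue = escapeClickHouseString("COMPLETED_SUCCESSFULLY");

console.log(tsqlId, clickhouseId, clickhouseValue);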
const RequestSchema = z.object({
  prompt: z.string().min(1, "Prompt is required"),
  mode: z.enum(["new", "edit"]).default("new"),
  currentQuery: z.string().optional(),
});
mode: "edit" should require currentQuery (otherwise silently behaves like “new”).
Right now { mode: "edit" } with no/empty currentQuery will fall back to “new” behavior in AIQueryService.streamQuery, which is surprising and makes “edit” nondeterministic.
Proposed fix
const RequestSchema = z.object({
prompt: z.string().min(1, "Prompt is required"),
mode: z.enum(["new", "edit"]).default("new"),
currentQuery: z.string().optional(),
-});
+}).superRefine((val, ctx) => {
+ if (val.mode === "edit" && !val.currentQuery?.trim()) {
+ ctx.addIssue({
+ code: z.ZodIssueCode.custom,
+ path: ["currentQuery"],
+ message: "Current query is required when mode is edit",
+ });
+ }
+});
Also applies to: 84-85
🤖 Prompt for AI Agents
In
@apps/webapp/app/routes/resources.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query.ai-generate.tsx
around lines 12 - 16, The RequestSchema currently allows mode="edit" without
enforcing currentQuery, causing AIQueryService.streamQuery to treat empty
currentQuery as a "new" edit; update RequestSchema so that when mode is "edit"
currentQuery is required and non-empty (e.g., use z.discriminatedUnion or
.refine to require currentQuery when mode === "edit"), and ensure any form
parsing that relies on RequestSchema (referenced by RequestSchema and
AIQueryService.streamQuery) will validate and return a clear error instead of
silently falling back to "new".
export async function action({ request, params }: ActionFunctionArgs) {
  const userId = await requireUserId(request);
  const { organizationSlug, projectParam, envParam } = EnvironmentParamSchema.parse(params);
Avoid throwing 500s on bad route params (EnvironmentParamSchema.parse(params)).
parse() will throw and turn a bad URL into a 500 unless Remix catches and formats it. Prefer safeParse + 400.
Proposed fix
export async function action({ request, params }: ActionFunctionArgs) {
const userId = await requireUserId(request);
- const { organizationSlug, projectParam, envParam } = EnvironmentParamSchema.parse(params);
+ const parsedParams = EnvironmentParamSchema.safeParse(params);
+ if (!parsedParams.success) {
+ return new Response(
+ JSON.stringify({
+ type: "result",
+ success: false,
+ error: "Invalid route parameters",
+ }),
+ { status: 400, headers: { "Content-Type": "application/json" } }
+ );
+ }
+ const { organizationSlug, projectParam, envParam } = parsedParams.data;
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Before:
export async function action({ request, params }: ActionFunctionArgs) {
  const userId = await requireUserId(request);
  const { organizationSlug, projectParam, envParam } = EnvironmentParamSchema.parse(params);
After:
export async function action({ request, params }: ActionFunctionArgs) {
  const userId = await requireUserId(request);
  const parsedParams = EnvironmentParamSchema.safeParse(params);
  if (!parsedParams.success) {
    return new Response(
      JSON.stringify({
        type: "result",
        success: false,
        error: "Invalid route parameters",
      }),
      { status: 400, headers: { "Content-Type": "application/json" } }
    );
  }
  const { organizationSlug, projectParam, envParam } = parsedParams.data;
🤖 Prompt for AI Agents
In
@apps/webapp/app/routes/resources.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query.ai-generate.tsx
around lines 18 - 21, In the action function, replace
EnvironmentParamSchema.parse(params) with
EnvironmentParamSchema.safeParse(params) and check the result; if safeParse
returns success: false, return a 400 response (e.g., new Response or Remix json
with status 400) describing the invalid route params instead of letting parse
throw; otherwise extract organizationSlug/projectParam/envParam from result.data
and continue the existing flow (keep requireUserId(request) as-is).
if (!env.OPENAI_API_KEY) {
  return new Response(
    JSON.stringify({
      type: "result",
      success: false,
      error: "OpenAI API key is not configured",
    }),
    {
      status: 400,
      headers: { "Content-Type": "application/json" },
    }
  );
}

const { prompt, mode, currentQuery } = submission.data;

const service = new AIQueryService(
  querySchemas,
  openai(env.AI_RUN_FILTER_MODEL ?? "gpt-4o-mini")
);
🧩 Analysis chain
🏁 Script executed:
# First, let's read the file to understand the env setup and imports
cat -n "apps/webapp/app/routes/resources.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query.ai-generate.tsx" | head -100Repository: triggerdotdev/trigger.dev
Length of output: 176
🏁 Script executed:
# Check if env.server.ts exists and how env is exported
fd -t f "env.server.ts" apps/webapp
Repository: triggerdotdev/trigger.dev
Length of output: 98
🏁 Script executed:
# Search for other usages of @ai-sdk/openai in the webapp
rg "@ai-sdk/openai" apps/webapp -A 3 -B 1Repository: triggerdotdev/trigger.dev
Length of output: 2984
🌐 Web query:
@ai-sdk/openai API key configuration - how to pass apiKey parameter
💡 Result:
Short answer — pass apiKey when creating the provider (or set OPENAI_API_KEY env var).
Examples:
- Using createOpenAI:
  import { createOpenAI } from '@ai-sdk/openai'
  const openai = createOpenAI({ apiKey: process.env.OPENAI_API_KEY || 'sk-...' })
- Using the default provider (via env var):
// set OPENAI_API_KEY=sk-... in your environment (or .env); the provider will use it by default.
Sources: ai-sdk docs and package readme. [1][2][3]
References
- AI SDK Getting Started / Configure OpenAI API key. [2]
- @ai-sdk/openai provider docs (createOpenAI apiKey option). [3]
- npm @ai-sdk/openai package readme (constructor/create options include apiKey). [5]
🏁 Script executed:
cat -n "apps/webapp/app/env.server.ts"Repository: triggerdotdev/trigger.dev
Length of output: 50381
🏁 Script executed:
# Look at other usages of openai() in the codebase to understand the pattern
rg "openai\(" apps/webapp -B 2 -A 2 --type ts --type tsxRepository: triggerdotdev/trigger.dev
Length of output: 96
🏁 Script executed:
# Search for OPENAI_API_KEY usage patterns
rg "OPENAI_API_KEY" apps/webapp -B 1 -A 1Repository: triggerdotdev/trigger.dev
Length of output: 1816
Pass the API key explicitly to the OpenAI provider for consistency with the codebase.
While @ai-sdk/openai's default openai() function does read from process.env.OPENAI_API_KEY automatically, the codebase elsewhere (e.g., humanToCron.server.ts) explicitly passes the API key to the provider. The validation of env.OPENAI_API_KEY without using it is redundant. Use createOpenAI({ apiKey: env.OPENAI_API_KEY }) instead to align with best practices and ensure the validated value is actually used.
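A minimal sketch of that change, reusing the names from the snippet above (env, querySchemas, AIQueryService); this is an illustration of the suggested direction rather than the confirmed final implementation:

import { createOpenAI } from "@ai-sdk/openai";

// Build a provider that explicitly uses the validated key instead of relying on
// the default openai() export reading process.env implicitly.
const openaiProvider = createOpenAI({ apiKey: env.OPENAI_API_KEY });

const service = new AIQueryService(
  querySchemas,
  openaiProvider(env.AI_RUN_FILTER_MODEL ?? "gpt-4o-mini")
);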
🤖 Prompt for AI Agents
In
@apps/webapp/app/routes/resources.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query.ai-generate.tsx
around lines 70 - 90, The env.OPENAI_API_KEY check is validated but not used;
update the AIQueryService instantiation to pass the API key explicitly by
replacing the openai(...) call with createOpenAI({ apiKey: env.OPENAI_API_KEY,
model: env.AI_RUN_FILTER_MODEL ?? "gpt-4o-mini" }) (or equivalent createOpenAI
call used elsewhere), keep the existing env.OPENAI_API_KEY validation, and
import createOpenAI instead of openai so the validated key is actually supplied
to AIQueryService.
// Create a streaming response
const stream = new ReadableStream({
  async start(controller) {
    const encoder = new TextEncoder();

    const sendEvent = (event: {
      type: string;
      content?: string;
      tool?: string;
      args?: unknown;
      result?: unknown;
      success?: boolean;
      query?: string;
      error?: string;
    }) => {
      controller.enqueue(encoder.encode(`data: ${JSON.stringify(event)}\n\n`));
    };

    try {
      const result = service.streamQuery(prompt, { mode, currentQuery });

      // Process the stream
      for await (const part of result.fullStream) {
        switch (part.type) {
          case "text-delta": {
            sendEvent({ type: "thinking", content: part.textDelta });
            break;
          }
          case "tool-call": {
            sendEvent({
              type: "tool_call",
              tool: part.toolName,
              args: part.args,
            });
            break;
          }
          case "error": {
            sendEvent({
              type: "result",
              success: false,
              error: part.error instanceof Error ? part.error.message : String(part.error),
            });
            break;
          }
          case "finish": {
            // Extract query from the final text
            const finalText = await result.text;
            const query = extractQueryFromText(finalText);

            if (query) {
              sendEvent({
                type: "result",
                success: true,
                query,
              });
            } else if (
              finalText.toLowerCase().includes("cannot") ||
              finalText.toLowerCase().includes("unable")
            ) {
              sendEvent({
                type: "result",
                success: false,
                error: finalText.slice(0, 300),
              });
            } else {
              sendEvent({
                type: "result",
                success: false,
                error: "Could not generate a valid query",
              });
            }
            break;
          }
        }
      }
    } catch (error) {
      sendEvent({
        type: "result",
        success: false,
        error: error instanceof Error ? error.message : "An error occurred",
      });
    } finally {
      controller.close();
    }
  },
});
SSE stream can emit multiple terminal “result” events; also handle client abort + proxy buffering.
- After emitting a terminal type: "result" event (on "error" or "finish"), the for await loop continues; you can end up sending multiple results.
- No abort handling: if the client disconnects, continuing to enqueue wastes tokens/time.
- Consider SSE headers to prevent intermediary buffering.
Proposed fix
const stream = new ReadableStream({
async start(controller) {
const encoder = new TextEncoder();
+ let done = false;
+
+ const abort = () => {
+ done = true;
+ try {
+ controller.close();
+ } catch {
+ // ignore
+ }
+ };
+ request.signal.addEventListener("abort", abort, { once: true });
const sendEvent = (event: {
type: string;
@@
}) => {
- controller.enqueue(encoder.encode(`data: ${JSON.stringify(event)}\n\n`));
+ if (done) return;
+ controller.enqueue(encoder.encode(`data: ${JSON.stringify(event)}\n\n`));
};
try {
const result = service.streamQuery(prompt, { mode, currentQuery });
// Process the stream
for await (const part of result.fullStream) {
+ if (done) break;
switch (part.type) {
case "text-delta": {
sendEvent({ type: "thinking", content: part.textDelta });
break;
}
@@
case "error": {
sendEvent({
type: "result",
success: false,
error: part.error instanceof Error ? part.error.message : String(part.error),
});
- break;
+ done = true;
+ break;
}
case "finish": {
@@
} else {
sendEvent({
type: "result",
success: false,
error: "Could not generate a valid query",
});
}
- break;
+ done = true;
+ break;
}
}
+ if (done) break;
}
} catch (error) {
sendEvent({
type: "result",
success: false,
error: error instanceof Error ? error.message : "An error occurred",
});
} finally {
+ done = true;
controller.close();
}
},
});
return new Response(stream, {
headers: {
- "Content-Type": "text/event-stream",
- "Cache-Control": "no-cache",
+ "Content-Type": "text/event-stream; charset=utf-8",
+ "Cache-Control": "no-cache, no-transform",
Connection: "keep-alive",
+ "X-Accel-Buffering": "no",
},
});Committable suggestion skipped: line range outside the PR's diff.
function parse(input: string) {
  const inputStream = CharStreams.fromString(input);
  const lexer = new TSQLLexer(inputStream);
  const tokenStream = new CommonTokenStream(lexer);
  const parser = new TSQLParser(tokenStream);
  return parser;
}
Add error checking to verify parse success.
The helper function doesn't configure error handling, so tests may pass even when the parser encounters syntax errors. Consider adding an error listener to track parsing errors and fail tests if any occur.
🔍 Suggested enhancement
+import { ANTLRErrorListener, RecognitionException, Recognizer } from "antlr4ts";
+
+class ThrowingErrorListener implements ANTLRErrorListener<any> {
+ syntaxError(
+ recognizer: Recognizer<any, any>,
+ offendingSymbol: any,
+ line: number,
+ charPositionInLine: number,
+ msg: string,
+ e: RecognitionException | undefined
+ ): void {
+ throw new Error(`Syntax error at ${line}:${charPositionInLine}: ${msg}`);
+ }
+}
+
function parse(input: string) {
const inputStream = CharStreams.fromString(input);
const lexer = new TSQLLexer(inputStream);
const tokenStream = new CommonTokenStream(lexer);
const parser = new TSQLParser(tokenStream);
+ parser.removeErrorListeners();
+ parser.addErrorListener(new ThrowingErrorListener());
return parser;
}
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Before:
function parse(input: string) {
  const inputStream = CharStreams.fromString(input);
  const lexer = new TSQLLexer(inputStream);
  const tokenStream = new CommonTokenStream(lexer);
  const parser = new TSQLParser(tokenStream);
  return parser;
}
After:
import { ANTLRErrorListener, RecognitionException, Recognizer } from "antlr4ts";

class ThrowingErrorListener implements ANTLRErrorListener<any> {
  syntaxError(
    recognizer: Recognizer<any, any>,
    offendingSymbol: any,
    line: number,
    charPositionInLine: number,
    msg: string,
    e: RecognitionException | undefined
  ): void {
    throw new Error(`Syntax error at ${line}:${charPositionInLine}: ${msg}`);
  }
}

function parse(input: string) {
  const inputStream = CharStreams.fromString(input);
  const lexer = new TSQLLexer(inputStream);
  const tokenStream = new CommonTokenStream(lexer);
  const parser = new TSQLParser(tokenStream);
  parser.removeErrorListeners();
  parser.addErrorListener(new ThrowingErrorListener());
  return parser;
}
🤖 Prompt for AI Agents
In @internal-packages/tsql/src/grammar/parser.test.ts around lines 7 - 13, The
parse helper currently constructs CharStreams.fromString, TSQLLexer,
CommonTokenStream and TSQLParser but doesn't surface syntax errors; update the
parse function to attach an ANTLR error listener (remove default listeners and
add a custom listener) to both the TSQLLexer and TSQLParser that collects
errors, run the parse production(s) you need, and if any errors were recorded
throw or fail the test (or return the collected errors) so tests fail when parse
errors occur; reference the parse function, TSQLLexer, TSQLParser, and
CharStreams.fromString when making the change.
TRQL (pronounced "treacle", like the delicious British dark sweet syrup) is the TRiggerQueryLanguage. It lets users safely write queries over their own data; those queries are compiled into ClickHouse queries that are tenant-safe and not vulnerable to SQL injection.
query-alpha-demo-100mb.mp4
This started out as a translation of HogQL by PostHog from Python to TypeScript.
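To make the compilation step concrete, here is a minimal sketch of how a TRQL string might be compiled for a single tenant. The import path, the option names, the example query text, and the return shape are assumptions for illustration; only the tenant-isolation design (required organizationId, optional projectId and environmentId) comes from the review notes above.

// A minimal sketch, not the package's actual public API.
import { compileTSQL } from "@internal/tsql"; // hypothetical import path

// Illustrative TRQL written by a user; the real syntax may differ.
const trql = "SELECT status, count() AS total FROM runs GROUP BY status";

const sql = compileTSQL(trql, {
  organizationId: "org_123", // required: every generated query is scoped to this org
  projectId: "proj_456", // optional: narrow to one project
  environmentId: "env_789", // optional: narrow to one environment
});

console.log(sql); // tenant-safe ClickHouse SQL, never raw user input

Under this design, user input would never reach ClickHouse directly: identifiers and values pass through the escape helpers, and the tenant filters are added during compilation rather than by the caller.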
Features
Query page
There’s a new Query page (currently behind a feature flag) where you can write TRQL queries and execute them against your environment, project or organization.
Features