move machine learning lambda endpoints to backend server and add cach…#276
Conversation
📝 Walkthrough

Adds server endpoints for ML correlation matrices backed by AWS Lambda with caching, health, and cache-invalidate routes; integrates secrets into ECS; updates the server to mount the new router; introduces server dependencies; updates client API handlers to use a fixed backend route; adds example env vars.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client
    participant Server as Server (Express)
    participant Cache as Cache (NodeCache)
    participant Lambda as AWS Lambda
    Client->>Server: GET /ml/correlation-matrix/packet
    Server->>Cache: get PACKET_CORRELATION_MATRIX
    alt Cache Hit
        Cache-->>Server: cached data
        Server-->>Client: 200 + data
    else Cache Miss
        Server->>Lambda: invoke configured function (via Lambda Client / Function URL)
        Lambda-->>Server: 200 + matrix data
        Server->>Cache: set PACKET_CORRELATION_MATRIX (1h TTL)
        Server-->>Client: 200 + data
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Actionable comments posted: 4
🤖 Fix all issues with AI agents
In `@packages/client/.env.example`:
- Line 1: Remove the surrounding single quotes from the `NEXT_PUBLIC_MAPSAPIKEY` value so the line reads `NEXT_PUBLIC_MAPSAPIKEY=` with an unquoted empty value; this satisfies dotenv-linter and dotenv parsers.
In `@packages/server/src/controllers/routeControllers/machineLearning.controller.ts`:
- Around lines 36-45: The custom HTTPS agent (`httpsAgent`) created with `rejectUnauthorized: false` disables SSL verification and must be removed or made conditional. Update the `lambdaAxios` creation to either use the default axios instance (remove `httpsAgent`) or construct `https.Agent` with `rejectUnauthorized: true`. If you must bypass validation for local development only, gate the insecure option behind an explicit environment flag (e.g., `process.env.DISABLE_SSL_VERIFICATION`) so `rejectUnauthorized` is never false in production. Also prefer fixing root-certificate issues (`NODE_EXTRA_CA_CERTS`) over disabling TLS checks.
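As a sketch of the conditional-agent approach described above (the `DISABLE_SSL_VERIFICATION` flag name is the prompt's suggestion, not existing code):

```typescript
import https from "https";

// Decide TLS verification from the environment. DISABLE_SSL_VERIFICATION
// only takes effect outside production, so rejectUnauthorized can never
// be false in a production deployment.
export function tlsOptions(
  env: Record<string, string | undefined> = process.env,
): { rejectUnauthorized: boolean } {
  const insecure =
    env.NODE_ENV !== "production" && env.DISABLE_SSL_VERIFICATION === "true";
  return { rejectUnauthorized: !insecure };
}

// Hypothetical agent construction for the lambdaAxios instance.
export const httpsAgent = new https.Agent(tlsOptions());
```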
- Around lines 13-27: The module currently throws at load time by reading `PACKET_CORRELATION_MATRIX_URL` and `LAP_CORRELATION_MATRIX_URL`, which can crash the server. Replace the top-level validation with lazy getters (e.g., `getPacketCorrelationMatrixUrl()` and `getLapCorrelationMatrixUrl()`) that read `process.env` when called and throw or return null. Update the ML request handlers to call these getters at runtime and handle missing values gracefully (return a 503 or suitable error) instead of crashing during import. Replace all references to `PACKET_CORRELATION_MATRIX_URL` and `LAP_CORRELATION_MATRIX_URL` in the file with the new getters.
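A minimal sketch of the lazy-getter pattern (the env var names are taken from the server's `.env.example`; the getter names are the prompt's suggestion):

```typescript
// Read the Lambda URL from process.env only when a handler needs it,
// so a missing variable surfaces as a runtime error response instead
// of a crash at module import time.
function makeUrlGetter(envVar: string): () => string | null {
  return () => {
    const value = process.env[envVar]?.trim();
    return value ? value : null;
  };
}

export const getPacketCorrelationMatrixUrl = makeUrlGetter(
  "GET_PACKET_CORRELATION_MATRIX_URL",
);
export const getLapCorrelationMatrixUrl = makeUrlGetter(
  "GET_LAP_CORRELATION_MATRIX_URL",
);
```

A handler would call the getter, return a 503 when it yields null, and otherwise proceed to invoke the Lambda.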
In `@packages/server/src/routes/machineLearning.route.ts`:
- Line 15: The POST `/ml/invalidate` route exposed via `machineLearningRouter.post("/ml/invalidate", controllers.invalidateCache)` lacks any access control. Add an authentication/authorization gate that validates callers before invoking `controllers.invalidateCache`: implement an auth middleware (or inline pre-handler) that checks a request header or token against a secure store such as `process.env.ADMIN_API_KEY` or your existing auth service, and attach it to the route: `machineLearningRouter.post("/ml/invalidate", authMiddleware, controllers.invalidateCache)`. Return 401/403 for unauthenticated/unauthorized requests and log denied attempts. Use the project's existing security pattern (token, JWT, signature verification, or service-to-service credential) and keep the cache-key validation in `controllers.invalidateCache` intact.
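A sketch of such a gate using the `ADMIN_API_KEY` approach mentioned above (the header name is illustrative, and the structural types stand in for express's):

```typescript
// Structural stand-ins for express's Request/Response so the sketch is
// self-contained; the real middleware would use express types.
type Req = { header(name: string): string | undefined };
type Res = { status(code: number): { json(body: unknown): void } };
type Next = () => void;

// Reject the request unless the caller presents the admin key.
// "x-admin-api-key" is a hypothetical header name.
export function requireAdminKey(req: Req, res: Res, next: Next): void {
  const expected = process.env.ADMIN_API_KEY;
  const provided = req.header("x-admin-api-key");
  if (!expected || !provided) {
    res.status(401).json({ error: "Authentication required" });
    return;
  }
  if (provided !== expected) {
    res.status(403).json({ error: "Forbidden" });
    return;
  }
  next();
}
```

It would then be attached as `machineLearningRouter.post("/ml/invalidate", requireAdminKey, controllers.invalidateCache)`.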
🧹 Nitpick comments (4)
packages/server/.env.example (1)
33-35: Minor env file formatting improvements. The static analysis tool flagged a few style issues:
- Extra blank line at line 33
- Values should not be quoted in `.env` files (the shell/dotenv parser handles this)
- Consider alphabetical ordering (`GET_LAP_*` before `GET_PACKET_*`)
- Add a trailing newline at end of file
Suggested fix
```diff
 LAP_POSITION_PASSWORD="changeme"
-
-GET_PACKET_CORRELATION_MATRIX_URL='https://myApiGatewayForPacketMatrix.lambda-url.region.on.aws/'
-GET_LAP_CORRELATION_MATRIX_URL='https://myApiGatewayForLapMatrix.lambda-url.region.on.aws/'
+
+GET_LAP_CORRELATION_MATRIX_URL=https://myApiGatewayForLapMatrix.lambda-url.region.on.aws/
+GET_PACKET_CORRELATION_MATRIX_URL=https://myApiGatewayForPacketMatrix.lambda-url.region.on.aws/
```

packages/client/src/pages/api/getLapCorrelationMatrix.ts (1)
7-27: Consider extracting shared fetch logic to reduce duplication. This handler is nearly identical to `getPacketCorrelationMatrix.ts` (per the relevant code snippets), differing only in the endpoint path. Consider extracting a shared utility function to improve maintainability.

Example shared utility approach
Create a shared helper in a utils file:
```typescript
// utils/backendFetch.ts
import { prodURL } from "@shared/helios-types";
import type { NextApiResponse } from "next";

export async function fetchFromBackend(
  endpoint: string,
  res: NextApiResponse,
  errorMessage = "Failed to fetch data",
) {
  const url = `${prodURL}${endpoint}`;
  const result = await fetch(url);
  if (!result.ok) {
    const errorData = await result.json().catch(() => ({}));
    res.status(result.status).json({
      error: errorData.error || errorMessage,
    });
    return null;
  }
  return result.json();
}
```

Then both handlers become:
```typescript
export default async function handler(
  _req: NextApiRequest,
  res: NextApiResponse,
) {
  try {
    const data = await fetchFromBackend(
      "/ml/correlation-matrix/lap",
      res,
      "Failed to fetch correlation matrix data",
    );
    if (data) res.status(200).json(data);
  } catch (err) {
    res.status(500).json({
      error: `Failed to load data: ${err instanceof Error ? err.message : String(err)}`,
    });
  }
}
```

packages/server/src/controllers/routeControllers/machineLearning.controller.ts (2)
76-126: Consider extracting shared caching logic to reduce duplication.
`getPacketCorrelationMatrix` and `getLapCorrelationMatrix` have nearly identical implementations differing only in cache key and Lambda URL. A helper function would improve maintainability.

Suggested refactor
```typescript
async function getCorrelationMatrix(
  request: Request,
  response: Response,
  cacheKey: string,
  lambdaUrl: string,
) {
  const logger = createApplicationLogger(
    "machineLearning.controller.ts",
    request,
    response,
  );
  logger.info(`ENTRY - ${request.method} ${request.url}`);
  try {
    const cachedData = cache.get<unknown>(cacheKey);
    if (cachedData !== undefined) {
      const stats = cache.getStats();
      logger.info(
        `Cache HIT for key: ${cacheKey} (Total hits: ${stats.hits}, Total misses: ${stats.misses})`,
      );
      logger.info(`EXIT - ${request.method} ${request.url} - ${200} (cached)`);
      return response.status(200).json(cachedData);
    }
    const stats = cache.getStats();
    logger.info(
      `Cache MISS for key: ${cacheKey} (Total hits: ${stats.hits}, Total misses: ${stats.misses})`,
    );
    const data = await fetchFromLambda(lambdaUrl, logger);
    cache.set(cacheKey, data);
    logger.info(`EXIT - ${request.method} ${request.url} - ${200} (from Lambda)`);
    return response.status(200).json(data);
  } catch (error) {
    logger.error(
      `ERROR - ${request.method} ${request.url} - ${error instanceof Error ? error.message : String(error)}`,
    );
    return response.status(503).json({
      error:
        "Service temporarily unavailable. Unable to fetch correlation matrix data.",
    });
  }
}

export const getPacketCorrelationMatrix = (req: Request, res: Response) =>
  getCorrelationMatrix(
    req,
    res,
    CACHE_KEY.PACKET_CORRELATION_MATRIX,
    PACKET_CORRELATION_MATRIX_URL,
  );

export const getLapCorrelationMatrix = (req: Request, res: Response) =>
  getCorrelationMatrix(
    req,
    res,
    CACHE_KEY.LAP_CORRELATION_MATRIX,
    LAP_CORRELATION_MATRIX_URL,
  );
```

Also applies to: 132-182
271-274: Minor: Inconsistent uptime format.

The `uptime` field concatenates a number with a string while other numeric fields like `hitRate` are pure numbers. Consider using a consistent format.

Suggested fix
```diff
   date: new Date(),
   message: "OK",
-  uptime: process.uptime() + " seconds",
+  uptimeSeconds: process.uptime(),
 };
```
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@packages/server/src/controllers/routeControllers/machineLearning.controller.ts`:
- Around lines 15-30: `PACKET_CORRELATION_MATRIX_URL` and `LAP_CORRELATION_MATRIX_URL` are defaulted to empty strings, so a cache miss leads `fetchFromLambda` to call axios with an invalid URL. Either do not default to `""` (leave the values undefined) or, preferably, add early-exit validation inside the handlers that call `fetchFromLambda` (the handler that uses `PACKET_CORRELATION_MATRIX_URL`, and `getLapCorrelationMatrix` for `LAP_CORRELATION_MATRIX_URL`): return a 500/400 response with a clear "URL not configured" message when the env var is falsy, and only call `fetchFromLambda` when the corresponding URL is present.
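The early-exit guard could look like this sketch (the helper name and error shape are illustrative, not existing code):

```typescript
// Validate a Lambda URL before calling fetchFromLambda; a failed result
// lets the handler short-circuit with a "URL not configured" response
// instead of issuing a request to an invalid URL.
export function requireConfiguredUrl(
  url: string | undefined,
  name: string,
): { ok: true; url: string } | { ok: false; error: string } {
  if (!url) {
    return { ok: false, error: `${name} is not configured` };
  }
  return { ok: true, url };
}
```

A handler would call it first and return `response.status(500).json({ error: check.error })` when `check.ok` is false, before ever touching `fetchFromLambda`.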
🧹 Nitpick comments (1)
packages/server/src/controllers/routeControllers/machineLearning.controller.ts (1)
131-185: Consider extracting shared logic to reduce duplication.
`getLapCorrelationMatrix` and `getPacketCorrelationMatrix` are nearly identical. Consider extracting the common cache-fetch-return logic into a helper.

Refactor suggestion
```typescript
async function handleCorrelationMatrixRequest(
  request: Request,
  response: Response,
  cacheKey: string,
  lambdaUrl: string,
) {
  const logger = createApplicationLogger(
    "machineLearning.controller.ts",
    request,
    response,
  );
  logger.info(`ENTRY - ${request.method} ${request.url}`);
  if (!lambdaUrl) {
    logger.error(`Lambda URL not configured for ${cacheKey}`);
    return response.status(503).json({ error: "Service not configured." });
  }
  try {
    const cachedData = cache.get<unknown>(cacheKey);
    if (cachedData !== undefined) {
      const stats = cache.getStats();
      logger.info(
        `Cache HIT for key: ${cacheKey} (Total hits: ${stats.hits}, Total misses: ${stats.misses})`,
      );
      logger.info(`EXIT - ${request.method} ${request.url} - 200 (cached)`);
      return response.status(200).json(cachedData);
    }
    const stats = cache.getStats();
    logger.info(
      `Cache MISS for key: ${cacheKey} (Total hits: ${stats.hits}, Total misses: ${stats.misses})`,
    );
    const data = await fetchFromLambda(lambdaUrl, logger);
    cache.set(cacheKey, data);
    logger.info(`EXIT - ${request.method} ${request.url} - 200 (from Lambda)`);
    return response.status(200).json(data);
  } catch (error) {
    logger.error(
      `ERROR - ${request.method} ${request.url} - ${error instanceof Error ? error.message : String(error)}`,
    );
    return response
      .status(503)
      .json({ error: "Service temporarily unavailable." });
  }
}

export const getPacketCorrelationMatrix = (req: Request, res: Response) =>
  handleCorrelationMatrixRequest(
    req,
    res,
    CACHE_KEY.PACKET_CORRELATION_MATRIX,
    PACKET_CORRELATION_MATRIX_URL,
  );

export const getLapCorrelationMatrix = (req: Request, res: Response) =>
  handleCorrelationMatrixRequest(
    req,
    res,
    CACHE_KEY.LAP_CORRELATION_MATRIX,
    LAP_CORRELATION_MATRIX_URL,
  );
```
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@packages/server/src/controllers/routeControllers/machineLearning.controller.ts`:
- Around lines 287-319: The current branch uses a falsy check (`if (key && key !== "all")`) which treats values like `0` or `""` as "invalidate all". Change the validation so only an explicit `key === "all"` triggers `cache.flushAll()`, and any other falsy/invalid key (e.g., null, undefined, `""`, `0`, or values not in `Object.values(CACHE_KEY)`) returns a 400 with the same error shape. Update the conditional around `request.body.key`, the `CACHE_KEY` check, and the `cache.flushAll()` branch to enforce this stricter validation and only call `cache.flushAll()` when `key === "all"`.
- Around lines 100-112: The code parses Lambda Function URL payloads into `lambdaResponse` and `data` but doesn't check HTTP status codes, so responses like `{statusCode: 400, body: "..."}` get treated as success and may be cached. After decoding `response.Payload` into `lambdaResponse` (and before any cache write), check whether `lambdaResponse.statusCode` is present and >= 400 (or outside the 2xx range) and throw an error that includes the statusCode and body, so execution treats it as a failure (in addition to checking `response.FunctionError`). Ensure this check runs before the code paths that write to the cache (the places that currently store the result for 1 hour) so error responses are never cached.
```typescript
const payloadString = new TextDecoder().decode(response.Payload);
const lambdaResponse = JSON.parse(payloadString);

// Lambda functions with Function URLs return a response with statusCode and body
// The body is a JSON string that needs to be parsed again
let data: unknown;
if (lambdaResponse.body) {
  // Parse the body string to get the actual data
  data = JSON.parse(lambdaResponse.body);
} else {
  // Direct invocation without Function URL wrapper
  data = lambdaResponse;
}
```
🧩 Analysis chain
🏁 Scripts executed (repository: UCSolarCarTeam/Helios-Telemetry):

```shell
cat -n packages/server/src/controllers/routeControllers/machineLearning.controller.ts | head -150 | tail -80
rg -n "invokeMLFunction" packages/server/src/controllers/routeControllers/machineLearning.controller.ts -A 5
rg -n "invokeMLFunction" packages/server/src
cat -n packages/server/src/controllers/routeControllers/machineLearning.controller.ts | head -75
rg -n "invokeLambdaFunction\|cache.set\|cache.get" packages/server/src/controllers/routeControllers/machineLearning.controller.ts -B 3 -A 3
wc -l packages/server/src/controllers/routeControllers/machineLearning.controller.ts
cat -n packages/server/src/controllers/routeControllers/machineLearning.controller.ts | sed -n '130,250p'
cat -n packages/server/src/controllers/routeControllers/machineLearning.controller.ts | sed -n '250,280p'
rg -n "StatusCode\|FunctionError" packages/server/src/controllers/routeControllers/machineLearning.controller.ts
```
Check statusCode in Lambda Function URL responses before caching.
If a Lambda Function URL returns { statusCode: 400+, body: "..." }, the current code treats it as success because response.FunctionError is only set for execution errors, not HTTP errors. This can cache error responses for 1 hour. Validate statusCode and throw before caching at lines 187 and 257.
🔧 Suggested fix
```diff
-    let data: unknown;
-    if (lambdaResponse.body) {
-      // Parse the body string to get the actual data
-      data = JSON.parse(lambdaResponse.body);
-    } else {
-      // Direct invocation without Function URL wrapper
-      data = lambdaResponse;
-    }
+    if (
+      typeof lambdaResponse?.statusCode === "number" &&
+      lambdaResponse.statusCode >= 400
+    ) {
+      throw new Error(
+        `Lambda returned error status ${lambdaResponse.statusCode}`,
+      );
+    }
+
+    let data: unknown;
+    if (lambdaResponse?.body !== undefined) {
+      // Parse the body string to get the actual data
+      data =
+        typeof lambdaResponse.body === "string"
+          ? JSON.parse(lambdaResponse.body)
+          : lambdaResponse.body;
+    } else {
+      // Direct invocation without Function URL wrapper
+      data = lambdaResponse;
+    }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
const payloadString = new TextDecoder().decode(response.Payload);
const lambdaResponse = JSON.parse(payloadString);
if (
  typeof lambdaResponse?.statusCode === "number" &&
  lambdaResponse.statusCode >= 400
) {
  throw new Error(
    `Lambda returned error status ${lambdaResponse.statusCode}`,
  );
}
// Lambda functions with Function URLs return a response with statusCode and body
// The body is a JSON string that needs to be parsed again
let data: unknown;
if (lambdaResponse?.body !== undefined) {
  // Parse the body string to get the actual data
  data =
    typeof lambdaResponse.body === "string"
      ? JSON.parse(lambdaResponse.body)
      : lambdaResponse.body;
} else {
  // Direct invocation without Function URL wrapper
  data = lambdaResponse;
}
```
```typescript
try {
  const { key } = request.body;

  if (key && key !== "all") {
    // Invalidate specific key
    if (!Object.values(CACHE_KEY).includes(key)) {
      logger.warn(`Invalid cache key provided: ${key}`);
      return response.status(400).json({
        error: `Invalid cache key. Valid keys: ${Object.values(CACHE_KEY).join(", ")}, or "all"`,
      });
    }

    const deleted = cache.del(key);
    logger.info(`Cache INVALIDATE for key: ${key}, existed: ${deleted > 0}`);
    logger.info(`EXIT - ${request.method} ${request.url} - ${200}`);
    return response.status(200).json({
      invalidated: deleted > 0,
      key,
      message:
        deleted > 0
          ? `Cache for ${key} invalidated successfully`
          : `No cache found for ${key}`,
    });
  } else {
    // Invalidate all caches
    const keys = cache.keys();
    cache.flushAll();
    logger.info(`Cache INVALIDATE ALL, cleared ${keys.length} entries`);
    logger.info(`EXIT - ${request.method} ${request.url} - ${200}`);
    return response.status(200).json({
      message: "All caches invalidated successfully",
    });
  }
```
Require an explicit "all" to flush the cache.
Currently any falsy key (e.g., 0, empty string) falls into “invalidate all”. Consider rejecting invalid/falsy keys and only flushing when key === "all" or key is omitted by design.
🔧 Suggested fix
```diff
-    if (key && key !== "all") {
+    if (key === "all" || key === undefined) {
+      // Invalidate all caches
+      const keys = cache.keys();
+      cache.flushAll();
+      logger.info(`Cache INVALIDATE ALL, cleared ${keys.length} entries`);
+      logger.info(`EXIT - ${request.method} ${request.url} - ${200}`);
+      return response.status(200).json({
+        message: "All caches invalidated successfully",
+      });
+    }
+
+    if (key && key !== "all") {
       // Invalidate specific key
       if (!Object.values(CACHE_KEY).includes(key)) {
         logger.warn(`Invalid cache key provided: ${key}`);
         return response.status(400).json({
           error: `Invalid cache key. Valid keys: ${Object.values(CACHE_KEY).join(", ")}, or "all"`,
         });
       }
@@
-    } else {
-      // Invalidate all caches
-      const keys = cache.keys();
-      cache.flushAll();
-      logger.info(`Cache INVALIDATE ALL, cleared ${keys.length} entries`);
-      logger.info(`EXIT - ${request.method} ${request.url} - ${200}`);
-      return response.status(200).json({
-        message: "All caches invalidated successfully",
-      });
-    }
+    }
```