
move machine learning lambda endpoints to backend server and add cach…#276

Merged
justin-phxm merged 5 commits into main from
TEL-231-investigate-redis-to-cache-memoize-loading-graphs-on-the-analysis-tab
Feb 7, 2026

Conversation

@justin-phxm
Collaborator

@justin-phxm justin-phxm commented Jan 31, 2026

…ing.

Summary by CodeRabbit

  • New Features

    • Machine learning endpoints providing packet and lap correlation matrices, plus health and cache-invalidation endpoints.
  • Improvements

    • Improved API error handling and clearer HTTP responses.
    • Enhanced observability and structured logging for ML endpoints and cache operations.
    • Client-side API calls updated to use consolidated backend routes.
  • Chores

    • Updated example env files (including SOCKET_URL) and added server runtime caching and AWS Lambda support; added server-side secrets configuration.

✏️ Tip: You can customize this high-level summary in your review settings.

@justin-phxm justin-phxm self-assigned this Jan 31, 2026
@justin-phxm justin-phxm requested a review from a team as a code owner January 31, 2026 20:39
@vercel

vercel bot commented Jan 31, 2026

The latest updates on your projects. Learn more about Vercel for GitHub.

Project: helios-telemetry | Deployment: Ready | Actions: Preview, Comment | Updated (UTC): Jan 31, 2026 10:53pm

@coderabbitai
Contributor

coderabbitai bot commented Jan 31, 2026

📝 Walkthrough


Adds server endpoints for ML correlation matrices backed by AWS Lambda with caching, health, and cache-invalidate routes; integrates secrets into ECS; updates server to mount the new router; introduces server dependencies; updates client API handlers to use a fixed backend route; adds example env vars.

Changes

Cohort / File(s) Summary
Env examples
packages/client/.env.example, packages/server/.env.example
Added SOCKET_URL to client example; added GET_PACKET_CORRELATION_MATRIX_URL and GET_LAP_CORRELATION_MATRIX_URL to server example.
Client API handlers
packages/client/src/pages/api/getLapCorrelationMatrix.ts, packages/client/src/pages/api/getPacketCorrelationMatrix.ts
Replaced environment-derived API_ROUTE with prodURL-based BACKEND_ROUTE, renamed req to _req, and added non-ok fetch response handling with more explicit catch error messages.
Server dependencies
packages/server/package.json
Added node-cache and @aws-sdk/client-lambda to dependencies and @types/node-cache to devDependencies.
ML controller
packages/server/src/controllers/routeControllers/machineLearning.controller.ts
New controller: NodeCache (1h TTL), LambdaClient invocation helper, endpoints for packet/lap correlation, cache invalidate, and health; structured logging and error handling.
ML routes
packages/server/src/routes/machineLearning.route.ts
New Express router registering ML endpoints (/ml/correlation-matrix/packet, /ml/correlation-matrix/lap, /ml/invalidate, /ml/health).
Server integration
packages/server/src/index.ts
Mounts the new machineLearningRouter.
Infra secrets
packages/amplify/amplify/backend.ts
Added Secrets Manager reference and mapped LAP_CORRELATION_MATRIX_FUNCTION_NAME and PACKET_CORRELATION_MATRIX_FUNCTION_NAME into ECS container secrets.

Sequence Diagram

sequenceDiagram
    participant Client
    participant Server as Server (Express)
    participant Cache as Cache (NodeCache)
    participant Lambda as AWS Lambda

    Client->>Server: GET /ml/correlation-matrix/packet
    Server->>Cache: get PACKET_CORRELATION_MATRIX
    alt Cache Hit
        Cache-->>Server: cached data
        Server-->>Client: 200 + data
    else Cache Miss
        Server->>Lambda: invoke configured function (via Lambda Client / Function URL)
        Lambda-->>Server: 200 + matrix data
        Server->>Cache: set PACKET_CORRELATION_MATRIX (1h TTL)
        Server-->>Client: 200 + data
    end
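The cache-aside flow in the diagram can be sketched with a minimal in-memory TTL cache. This is an illustrative stand-in, not the PR's actual code: `TtlCache` approximates node-cache's get/set behavior, and `fetchMatrix` stands in for the Lambda invocation.

```typescript
// Minimal cache-aside sketch for the flow above. TtlCache is a
// hypothetical stand-in for node-cache; fetchMatrix stands in for
// the Lambda invocation. All names here are illustrative.
class TtlCache {
  private store = new Map<string, { value: unknown; expiresAt: number }>();

  get<T>(key: string): T | undefined {
    const entry = this.store.get(key);
    if (!entry || Date.now() > entry.expiresAt) {
      this.store.delete(key); // expired entries are evicted lazily
      return undefined;
    }
    return entry.value as T;
  }

  set(key: string, value: unknown, ttlMs: number): void {
    this.store.set(key, { value, expiresAt: Date.now() + ttlMs });
  }
}

const cache = new TtlCache();
const ONE_HOUR_MS = 60 * 60 * 1000;

async function getCorrelationMatrix(
  key: string,
  fetchMatrix: () => Promise<unknown>,
): Promise<unknown> {
  const cached = cache.get(key);
  if (cached !== undefined) return cached; // cache hit: skip Lambda
  const data = await fetchMatrix(); // cache miss: invoke Lambda
  cache.set(key, data, ONE_HOUR_MS); // store for 1h, matching the PR's TTL
  return data;
}
```

With this shape, repeated requests within the TTL invoke the Lambda only once, which is the memoization TEL-231 set out to get.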

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Suggested reviewers

  • burtonjong
  • promatty

Poem

🐇 I hopped through logs and found a cached map,

Lambdas hummed softly while I took a nap,
Numbers tucked in burrows, safe for an hour,
I nibbled a carrot and admired their power.

🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
  • Docstring Coverage — ⚠️ Warning: docstring coverage is 33.33%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.
✅ Passed checks (2 passed)
  • Description Check — ✅ Passed (check skipped; CodeRabbit's high-level summary is enabled).
  • Title Check — ✅ Passed: the title accurately summarizes the main change, moving the machine learning Lambda endpoints to the backend server and adding caching.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🤖 Fix all issues with AI agents
In `@packages/client/.env.example`:
- Line 1: Remove the surrounding single quotes from the NEXT_PUBLIC_MAPSAPIKEY value in the .env example so the line reads NEXT_PUBLIC_MAPSAPIKEY= with an unquoted empty value, satisfying dotenv-linter and env parsers.

In
`@packages/server/src/controllers/routeControllers/machineLearning.controller.ts`:
- Around line 36-45: The current custom HTTPS agent (httpsAgent) created with
rejectUnauthorized: false disables SSL verification and must be removed or made
conditional; update the lambdaAxios creation to either use the default axios
instance (remove httpsAgent) or construct https.Agent with rejectUnauthorized:
true, and if you must bypass validation only for local development wrap the
insecure option behind an explicit environment flag (e.g.,
process.env.DISABLE_SSL_VERIFICATION) so rejectUnauthorized is never false in
production; also prefer fixing root-certificate issues (NODE_EXTRA_CA_CERTS)
instead of disabling TLS checks.
- Around line 13-27: Currently the module throws at load time by reading
PACKET_CORRELATION_MATRIX_URL and LAP_CORRELATION_MATRIX_URL which can crash the
server; replace the top-level validation with lazy getters (e.g. add
getPacketCorrelationMatrixUrl() and getLapCorrelationMatrixUrl() functions) that
read process.env and throw or return null when called, and update the ML request
handlers to call these getters at runtime and handle missing values gracefully
(return a 503 or suitable error) instead of crashing during import; ensure all
references to PACKET_CORRELATION_MATRIX_URL and LAP_CORRELATION_MATRIX_URL in
the file are replaced to use the new getters.
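A lazy-getter approach along the lines suggested might look like the sketch below. The function names and the 503 helper are illustrative assumptions, not the repository's actual code; the point is that the env var is read per request, so a missing value degrades to an error response instead of crashing at import time.

```typescript
// Sketch of a lazy env-var getter: the URL is read at request time,
// so a missing variable degrades gracefully instead of throwing
// during module load. Names are illustrative.
function getPacketCorrelationMatrixUrl(): string | null {
  return process.env.PACKET_CORRELATION_MATRIX_URL ?? null;
}

// Handler-side guard: resolve the URL per request and fail soft
// (the handler would send a 503 and return early on null).
function resolveUrlOr503(
  getUrl: () => string | null,
  send503: (msg: string) => void,
): string | null {
  const url = getUrl();
  if (url === null) {
    send503("Correlation matrix URL is not configured");
    return null;
  }
  return url;
}
```

The same getter pattern would apply to LAP_CORRELATION_MATRIX_URL.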

In `@packages/server/src/routes/machineLearning.route.ts`:
- Line 15: The POST /ml/invalidate route currently exposed via
machineLearningRouter.post("/ml/invalidate", controllers.invalidateCache) lacks
any access control; add an authentication/authorization gate that validates
callers before invoking controllers.invalidateCache (e.g., implement an auth
middleware function or an inline pre-handler that checks a request header or
token against a secure store like process.env.ADMIN_API_KEY or your existing
auth service, and attach it to the route:
machineLearningRouter.post("/ml/invalidate", authMiddleware,
controllers.invalidateCache)); return 401/403 for unauthenticated/unauthorized
requests and log denied attempts. Ensure the auth check uses the project’s
existing security pattern (token, JWT, signature verification, or
service-to-service credential) and keep validation of the cache key in
controllers.invalidateCache intact.
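An API-key gate of the kind described could be sketched as below. The request/response shapes are minimal structural types so the idea stands alone; in the real router this would be an Express middleware, and both the ADMIN_API_KEY env var and the x-admin-api-key header name are assumptions for illustration.

```typescript
// Sketch of an auth gate for POST /ml/invalidate. MinimalRequest and
// MinimalResponse are simplified stand-ins for Express's types;
// ADMIN_API_KEY and the header name are assumed, not the project's
// actual configuration.
interface MinimalRequest {
  headers: Record<string, string | undefined>;
}
interface MinimalResponse {
  status(code: number): { json(body: unknown): void };
}

function requireAdminKey(
  req: MinimalRequest,
  res: MinimalResponse,
  next: () => void,
): void {
  const provided = req.headers["x-admin-api-key"];
  const expected = process.env.ADMIN_API_KEY;
  if (!expected || !provided || provided !== expected) {
    // Reject missing/invalid credentials; denied attempts should be logged
    res.status(401).json({ error: "Unauthorized" });
    return;
  }
  next();
}
```

Attached ahead of controllers.invalidateCache, this keeps the cache-flush endpoint from being callable by any anonymous client while leaving the cache-key validation inside the controller untouched.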
🧹 Nitpick comments (4)
packages/server/.env.example (1)

33-35: Minor env file formatting improvements.

The static analysis tool flagged a few style issues:

  • Extra blank line at line 33
  • Values should not be quoted in .env files (the shell/dotenv parser handles this)
  • Consider alphabetical ordering (GET_LAP_* before GET_PACKET_*)
  • Add a trailing newline at end of file
Suggested fix
 LAP_POSITION_PASSWORD="changeme"
-
-GET_PACKET_CORRELATION_MATRIX_URL='https://myApiGatewayForPacketMatrix.lambda-url.region.on.aws/'
-GET_LAP_CORRELATION_MATRIX_URL='https://myApiGatewayForLapMatrix.lambda-url.region.on.aws/'
+
+GET_LAP_CORRELATION_MATRIX_URL=https://myApiGatewayForLapMatrix.lambda-url.region.on.aws/
+GET_PACKET_CORRELATION_MATRIX_URL=https://myApiGatewayForPacketMatrix.lambda-url.region.on.aws/
packages/client/src/pages/api/getLapCorrelationMatrix.ts (1)

7-27: Consider extracting shared fetch logic to reduce duplication.

This handler is nearly identical to getPacketCorrelationMatrix.ts (per the relevant code snippets), differing only in the endpoint path. Consider extracting a shared utility function to improve maintainability.

Example shared utility approach

Create a shared helper in a utils file:

// utils/backendFetch.ts
import { prodURL } from "@shared/helios-types";
import type { NextApiResponse } from "next";

export async function fetchFromBackend(
  endpoint: string,
  res: NextApiResponse,
  errorMessage = "Failed to fetch data"
) {
  const url = `${prodURL}${endpoint}`;
  const result = await fetch(url);

  if (!result.ok) {
    const errorData = await result.json().catch(() => ({}));
    res.status(result.status).json({
      error: errorData.error || errorMessage,
    });
    return null;
  }

  return result.json();
}

Then both handlers become:

export default async function handler(_req: NextApiRequest, res: NextApiResponse) {
  try {
    const data = await fetchFromBackend("/ml/correlation-matrix/lap", res, "Failed to fetch correlation matrix data");
    if (data) res.status(200).json(data);
  } catch (err) {
    res.status(500).json({
      error: `Failed to load data: ${err instanceof Error ? err.message : String(err)}`,
    });
  }
}
packages/server/src/controllers/routeControllers/machineLearning.controller.ts (2)

76-126: Consider extracting shared caching logic to reduce duplication.

getPacketCorrelationMatrix and getLapCorrelationMatrix have nearly identical implementations differing only in cache key and Lambda URL. A helper function would improve maintainability.

Suggested refactor
async function getCorrelationMatrix(
  request: Request,
  response: Response,
  cacheKey: string,
  lambdaUrl: string,
) {
  const logger = createApplicationLogger(
    "machineLearning.controller.ts",
    request,
    response,
  );

  logger.info(`ENTRY - ${request.method} ${request.url}`);

  try {
    const cachedData = cache.get<unknown>(cacheKey);

    if (cachedData !== undefined) {
      const stats = cache.getStats();
      logger.info(
        `Cache HIT for key: ${cacheKey} (Total hits: ${stats.hits}, Total misses: ${stats.misses})`,
      );
      logger.info(`EXIT - ${request.method} ${request.url} - ${200} (cached)`);
      return response.status(200).json(cachedData);
    }

    const stats = cache.getStats();
    logger.info(
      `Cache MISS for key: ${cacheKey} (Total hits: ${stats.hits}, Total misses: ${stats.misses})`,
    );

    const data = await fetchFromLambda(lambdaUrl, logger);
    cache.set(cacheKey, data);

    logger.info(`EXIT - ${request.method} ${request.url} - ${200} (from Lambda)`);
    return response.status(200).json(data);
  } catch (error) {
    logger.error(
      `ERROR - ${request.method} ${request.url} - ${error instanceof Error ? error.message : String(error)}`,
    );
    return response.status(503).json({
      error: "Service temporarily unavailable. Unable to fetch correlation matrix data.",
    });
  }
}

export const getPacketCorrelationMatrix = (req: Request, res: Response) =>
  getCorrelationMatrix(req, res, CACHE_KEY.PACKET_CORRELATION_MATRIX, PACKET_CORRELATION_MATRIX_URL);

export const getLapCorrelationMatrix = (req: Request, res: Response) =>
  getCorrelationMatrix(req, res, CACHE_KEY.LAP_CORRELATION_MATRIX, LAP_CORRELATION_MATRIX_URL);

Also applies to: 132-182


271-274: Minor: Inconsistent uptime format.

The uptime field concatenates a number with a string while other numeric fields like hitRate are pure numbers. Consider using a consistent format.

Suggested fix
     date: new Date(),
     message: "OK",
-    uptime: process.uptime() + " seconds",
+    uptimeSeconds: process.uptime(),
   };

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In
`@packages/server/src/controllers/routeControllers/machineLearning.controller.ts`:
- Around line 15-30: PACKET_CORRELATION_MATRIX_URL and
LAP_CORRELATION_MATRIX_URL are being defaulted to empty strings so a cache miss
leads fetchFromLambda to call axios with an invalid URL; update the code to
either not default to "" (leave undefined) or, preferably, add early-exit
validation inside the handlers that call fetchFromLambda (e.g., the handler that
uses PACKET_CORRELATION_MATRIX_URL and getLapCorrelationMatrix for
LAP_CORRELATION_MATRIX_URL) to return a 500/400 response with a clear "URL not
configured" message when the env var is falsy, and only call fetchFromLambda
when the corresponding URL is present. Ensure you reference
PACKET_CORRELATION_MATRIX_URL, LAP_CORRELATION_MATRIX_URL, fetchFromLambda and
getLapCorrelationMatrix when making the checks.
🧹 Nitpick comments (1)
packages/server/src/controllers/routeControllers/machineLearning.controller.ts (1)

131-185: Consider extracting shared logic to reduce duplication.

getLapCorrelationMatrix and getPacketCorrelationMatrix are nearly identical. Consider extracting the common cache-fetch-return logic into a helper.

Refactor suggestion
async function handleCorrelationMatrixRequest(
  request: Request,
  response: Response,
  cacheKey: string,
  lambdaUrl: string,
) {
  const logger = createApplicationLogger("machineLearning.controller.ts", request, response);
  logger.info(`ENTRY - ${request.method} ${request.url}`);

  if (!lambdaUrl) {
    logger.error(`Lambda URL not configured for ${cacheKey}`);
    return response.status(503).json({ error: "Service not configured." });
  }

  try {
    const cachedData = cache.get<unknown>(cacheKey);
    if (cachedData !== undefined) {
      const stats = cache.getStats();
      logger.info(`Cache HIT for key: ${cacheKey} (Total hits: ${stats.hits}, Total misses: ${stats.misses})`);
      logger.info(`EXIT - ${request.method} ${request.url} - 200 (cached)`);
      return response.status(200).json(cachedData);
    }

    const stats = cache.getStats();
    logger.info(`Cache MISS for key: ${cacheKey} (Total hits: ${stats.hits}, Total misses: ${stats.misses})`);

    const data = await fetchFromLambda(lambdaUrl, logger);
    cache.set(cacheKey, data);

    logger.info(`EXIT - ${request.method} ${request.url} - 200 (from Lambda)`);
    return response.status(200).json(data);
  } catch (error) {
    logger.error(`ERROR - ${request.method} ${request.url} - ${error instanceof Error ? error.message : String(error)}`);
    return response.status(503).json({ error: "Service temporarily unavailable." });
  }
}

export const getPacketCorrelationMatrix = (req: Request, res: Response) =>
  handleCorrelationMatrixRequest(req, res, CACHE_KEY.PACKET_CORRELATION_MATRIX, PACKET_CORRELATION_MATRIX_URL);

export const getLapCorrelationMatrix = (req: Request, res: Response) =>
  handleCorrelationMatrixRequest(req, res, CACHE_KEY.LAP_CORRELATION_MATRIX, LAP_CORRELATION_MATRIX_URL);

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Fix all issues with AI agents
In
`@packages/server/src/controllers/routeControllers/machineLearning.controller.ts`:
- Around line 287-319: The current branch uses a falsy check (if (key && key !==
"all")) which treats values like 0 or "" as "invalidate all"; change the
validation so only an explicit key === "all" triggers cache.flushAll(), and any
other falsy/invalid key (e.g., null, undefined, "", 0, or values not in
Object.values(CACHE_KEY)) returns a 400 with the same error shape; update the
conditional around request.body.key, the CACHE_KEY check, and the
cache.flushAll() branch (references: request.body.key, CACHE_KEY, cache.del,
cache.flushAll, response.status) to enforce this stricter validation and to only
call cache.flushAll() when key === "all".
- Around line 100-112: The code parses Lambda Function URL payloads into
lambdaResponse and data but doesn't check HTTP status codes, so responses like
{statusCode: 400, body: "..."} get treated as success and may be cached; after
decoding response.Payload into lambdaResponse (and before any cache write),
check if lambdaResponse.statusCode is present and >= 400 (or not in 2xx range)
and throw an error that includes the statusCode and body so execution treats it
as a failure (in addition to checking response.FunctionError); ensure this check
runs before the code paths that write to the cache (the places that currently
store the result for 1 hour) so error responses are never cached.

Comment on lines +100 to +112
const payloadString = new TextDecoder().decode(response.Payload);
const lambdaResponse = JSON.parse(payloadString);

// Lambda functions with Function URLs return a response with statusCode and body
// The body is a JSON string that needs to be parsed again
let data: unknown;
if (lambdaResponse.body) {
// Parse the body string to get the actual data
data = JSON.parse(lambdaResponse.body);
} else {
// Direct invocation without Function URL wrapper
data = lambdaResponse;
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

cat -n packages/server/src/controllers/routeControllers/machineLearning.controller.ts | head -150 | tail -80

Repository: UCSolarCarTeam/Helios-Telemetry

Length of output: 3027


🏁 Script executed:

rg -n "invokeMLFunction" packages/server/src/controllers/routeControllers/machineLearning.controller.ts -A 5


Length of output: 57


🏁 Script executed:

rg -n "invokeMLFunction" packages/server/src/


Length of output: 57


🏁 Script executed:

cat -n packages/server/src/controllers/routeControllers/machineLearning.controller.ts | head -75


Length of output: 3441


🏁 Script executed:

rg -n "invokeLambdaFunction\|cache.set\|cache.get" packages/server/src/controllers/routeControllers/machineLearning.controller.ts -B 3 -A 3


Length of output: 57


🏁 Script executed:

wc -l packages/server/src/controllers/routeControllers/machineLearning.controller.ts


Length of output: 157


🏁 Script executed:

cat -n packages/server/src/controllers/routeControllers/machineLearning.controller.ts | sed -n '130,250p'


Length of output: 4564


🏁 Script executed:

cat -n packages/server/src/controllers/routeControllers/machineLearning.controller.ts | sed -n '250,280p'


Length of output: 1198


🏁 Script executed:

rg -n "StatusCode\|FunctionError" packages/server/src/controllers/routeControllers/machineLearning.controller.ts


Length of output: 57


Check statusCode in Lambda Function URL responses before caching.

If a Lambda Function URL returns { statusCode: 400+, body: "..." }, the current code treats it as success because response.FunctionError is only set for execution errors, not HTTP errors. This can cache error responses for 1 hour. Validate statusCode and throw before caching at lines 187 and 257.

🔧 Suggested fix
-    let data: unknown;
-    if (lambdaResponse.body) {
-      // Parse the body string to get the actual data
-      data = JSON.parse(lambdaResponse.body);
-    } else {
-      // Direct invocation without Function URL wrapper
-      data = lambdaResponse;
-    }
+    if (
+      typeof lambdaResponse?.statusCode === "number" &&
+      lambdaResponse.statusCode >= 400
+    ) {
+      throw new Error(
+        `Lambda returned error status ${lambdaResponse.statusCode}`,
+      );
+    }
+
+    let data: unknown;
+    if (lambdaResponse?.body !== undefined) {
+      // Parse the body string to get the actual data
+      data =
+        typeof lambdaResponse.body === "string"
+          ? JSON.parse(lambdaResponse.body)
+          : lambdaResponse.body;
+    } else {
+      // Direct invocation without Function URL wrapper
+      data = lambdaResponse;
+    }

Comment on lines +287 to +319
try {
const { key } = request.body;

if (key && key !== "all") {
// Invalidate specific key
if (!Object.values(CACHE_KEY).includes(key)) {
logger.warn(`Invalid cache key provided: ${key}`);
return response.status(400).json({
error: `Invalid cache key. Valid keys: ${Object.values(CACHE_KEY).join(", ")}, or "all"`,
});
}

const deleted = cache.del(key);
logger.info(`Cache INVALIDATE for key: ${key}, existed: ${deleted > 0}`);
logger.info(`EXIT - ${request.method} ${request.url} - ${200}`);
return response.status(200).json({
invalidated: deleted > 0,
key,
message:
deleted > 0
? `Cache for ${key} invalidated successfully`
: `No cache found for ${key}`,
});
} else {
// Invalidate all caches
const keys = cache.keys();
cache.flushAll();
logger.info(`Cache INVALIDATE ALL, cleared ${keys.length} entries`);
logger.info(`EXIT - ${request.method} ${request.url} - ${200}`);
return response.status(200).json({
message: "All caches invalidated successfully",
});
}

⚠️ Potential issue | 🟡 Minor

Require an explicit "all" to flush the cache.
Currently any falsy key (e.g., 0, empty string) falls into “invalidate all”. Consider rejecting invalid/falsy keys and only flushing when key === "all" or key is omitted by design.

🔧 Suggested fix
-    if (key && key !== "all") {
+    if (key === "all" || key === undefined) {
+      // Invalidate all caches
+      const keys = cache.keys();
+      cache.flushAll();
+      logger.info(`Cache INVALIDATE ALL, cleared ${keys.length} entries`);
+      logger.info(`EXIT - ${request.method} ${request.url} - ${200}`);
+      return response.status(200).json({
+        message: "All caches invalidated successfully",
+      });
+    }
+
+    if (key && key !== "all") {
       // Invalidate specific key
       if (!Object.values(CACHE_KEY).includes(key)) {
         logger.warn(`Invalid cache key provided: ${key}`);
         return response.status(400).json({
           error: `Invalid cache key. Valid keys: ${Object.values(CACHE_KEY).join(", ")}, or "all"`,
         });
       }
@@
-    } else {
-      // Invalidate all caches
-      const keys = cache.keys();
-      cache.flushAll();
-      logger.info(`Cache INVALIDATE ALL, cleared ${keys.length} entries`);
-      logger.info(`EXIT - ${request.method} ${request.url} - ${200}`);
-      return response.status(200).json({
-        message: "All caches invalidated successfully",
-      });
-    }
+    }

Contributor

@dastine0308 dastine0308 left a comment


LGTM

@justin-phxm justin-phxm merged commit 17b2892 into main Feb 7, 2026
8 checks passed
@justin-phxm justin-phxm deleted the TEL-231-investigate-redis-to-cache-memoize-loading-graphs-on-the-analysis-tab branch February 7, 2026 17:45