
Conversation

@visz11

@visz11 visz11 commented Nov 10, 2025

CodeAnt-AI Description

Add GraphQL query to return throughput/value-stream charts

What Changed

  • Exposes a new GraphQL query that returns throughput/value-stream chart data for a selected metric (issue or bug), granularity, and filter (including sprint-based date ranges)
  • For bug throughput, the response includes previous-period comparison statistics and totals so clients can show trend comparisons
  • When the required integration is missing or the metric is not configured, the query returns a chart object with a clear error state and message instead of failing
  • Issue throughput responses include threshold-aware chart output and a computed throughput percentage

Impact

✅ View throughput/value-stream charts via GraphQL
✅ Previous-period comparison for bug throughput
✅ Clearer chart errors when integration missing or metric unconfigured

💡 Usage Guide

Checking Your Pull Request

Every time you make a pull request, our system automatically looks through it. We check for security issues, mistakes in how you're setting up your infrastructure, and common code problems. We do this to make sure your changes are solid and won't cause any trouble later.

Talking to CodeAnt AI

Got a question or need a hand with something in your pull request? You can easily get in touch with CodeAnt AI right here. Just type the following in a comment on your pull request, and replace "Your question here" with whatever you want to ask:

@codeant-ai ask: Your question here

This lets you have a chat with CodeAnt AI about your pull request, making it easier to understand and improve your code.

Example

@codeant-ai ask: Can you suggest a safer alternative to storing this secret?

Preserve Org Learnings with CodeAnt

You can record team preferences so CodeAnt AI applies them in future reviews. Reply directly to the specific CodeAnt AI suggestion (in the same thread) and replace "Your feedback here" with your input:

@codeant-ai: Your feedback here

This helps CodeAnt AI learn and adapt to your team's coding style and standards.

Example

@codeant-ai: Do not flag unused imports.

Retrigger review

Ask CodeAnt AI to review the PR again, by typing:

@codeant-ai: review

Check Your Repository Health

To analyze the health of your code repository, visit our dashboard at https://app.codeant.ai. This tool helps you identify potential issues and areas for improvement in your codebase, ensuring your repository maintains high standards of code health.

Summary by CodeRabbit

Release Notes

  • New Features
    • Enabled querying throughput metrics by type with customizable granularity and filtering options
    • Added visualization support for issue and bug throughput data analysis

@codeant-ai

codeant-ai bot commented Nov 10, 2025

CodeAnt AI is reviewing your PR.


Thanks for using CodeAnt! 🎉

We're free for open-source projects. If you're enjoying it, help us grow by sharing.

Share on X ·
Reddit ·
LinkedIn


@greptile-apps greptile-apps bot left a comment


Your free trial has ended. If you'd like to continue receiving code reviews, you can add a payment method here.

@coderabbitai

coderabbitai bot commented Nov 10, 2025

Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

Introduces FlowQuery, a new GraphQL query class that computes throughput metrics from value stream data. The getThroughputValueStreamByType method resolves granularities, retrieves data, calculates cumulative inflow/outflow and throughput, applies metric-specific transformations, handles integration exceptions, and returns a Chart via chartUtilService.

Changes

Cohort / File(s): New GraphQL Query Implementation (FlowQuery.java)
Summary: Introduces FlowQuery class extending base query and implementing GraphQLQueryResolver. Wires multiple services via @Autowired and defines ExecutorService, Gson, and Logger. Implements getThroughputValueStreamByType(metricStr, granularityStr, breakdown, filter) method that resolves granularity/date range, fetches value stream data, computes cumulative throughput metrics, applies special logic for ISSUE_THROUGHPUT and BUG_THROUGHPUT (including data conversion and previous-period stat calculation), catches integration/configuration exceptions to populate error states, and delegates to chartUtilService for final Chart generation.
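
The previous-period stat calculation mentioned above compares the current window against a prior window of the same length. A minimal, self-contained sketch of that idea follows; the PreviousPeriodSketch class and previousPeriod helper are illustrative stand-ins, not the project's actual Filter.lastPeriodFilterInstance() implementation quoted further down in this thread.

    import java.time.LocalDate;
    import java.time.temporal.ChronoUnit;

    // Illustrative sketch only: derives the previous period of the same length
    // as the current [start, end] window, which is the idea behind the
    // filter.lastPeriodFilterInstance() call used for trend comparisons.
    final class PreviousPeriodSketch {

        private PreviousPeriodSketch() {}

        static LocalDate[] previousPeriod(LocalDate start, LocalDate end) {
            long days = ChronoUnit.DAYS.between(start, end) + 1; // inclusive length of the current window
            return new LocalDate[] { start.minusDays(days), start.minusDays(1) };
        }

        public static void main(String[] args) {
            LocalDate[] prev = previousPeriod(LocalDate.of(2025, 11, 1), LocalDate.of(2025, 11, 7));
            System.out.println(prev[0] + " to " + prev[1]); // prints 2025-10-25 to 2025-10-31
        }
    }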

Sequence Diagram

sequenceDiagram
    participant Caller
    participant FlowQuery
    participant Services as Services<br/>(Autowired)
    participant ChartUtil as chartUtilService

    Caller->>FlowQuery: getThroughputValueStreamByType(metricStr, granularityStr, breakdown, filter)
    
    rect rgb(220, 240, 250)
    Note over FlowQuery,Services: Parse & Resolve
    FlowQuery->>FlowQuery: Resolve granularity<br/>from Filter
    FlowQuery->>FlowQuery: Compute date range
    end
    
    rect rgb(240, 250, 220)
    Note over FlowQuery,Services: Data Retrieval & Computation
    FlowQuery->>Services: Retrieve value stream data
    Services-->>FlowQuery: Value stream data
    FlowQuery->>FlowQuery: Compute cumulative<br/>inflow/outflow/throughput
    end
    
    rect rgb(250, 240, 220)
    Note over FlowQuery: Metric-Specific Logic
    alt ISSUE_THROUGHPUT or BUG_THROUGHPUT
        FlowQuery->>FlowQuery: Apply special handling<br/>(data conversion,<br/>previous-period stat)
    end
    end
    
    rect rgb(250, 235, 235)
    Note over FlowQuery,Services: Exception Handling
    alt NoIntegrationException or<br/>ConfigurationException
        FlowQuery->>FlowQuery: Populate Chart<br/>with error state
    end
    end
    
    rect rgb(230, 240, 250)
    Note over FlowQuery,ChartUtil: Chart Generation
    FlowQuery->>ChartUtil: Generate Chart<br/>from project data<br/>(metric-specific)
    ChartUtil-->>FlowQuery: Final Chart
    end
    
    FlowQuery-->>Caller: Return Chart

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

  • Exception handling branches: Verify NoIntegrationException and ConfigurationException are properly mapped to Chart error states
  • Metric-specific logic: Review special handling for ISSUE_THROUGHPUT and BUG_THROUGHPUT, particularly data conversion and previous-period stat calculation
  • Data transformation: Ensure cumulative inflow/outflow computation is accurate and aligns with downstream service expectations (a self-contained sketch of this computation follows this list)
  • Service integration: Confirm all autowired services are correctly utilized and that the chartUtilService invocation handles all metric types appropriately
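
A self-contained sketch of the throughput computation the data-transformation bullet refers to. The cumulativeInflow/cumulativeOutflow map keys mirror the snippets quoted later in this thread, while the ThroughputMath class and asDouble helper are illustrative only and use the Number-based extraction the reviewers recommend.

    import java.util.List;
    import java.util.Map;

    // Illustrative helper only: reads the last data point's cumulative totals and
    // derives a throughput percentage, using Number-based extraction so Integer,
    // Long, or Double map values all work.
    final class ThroughputMath {

        private ThroughputMath() {}

        static double throughputPercent(List<Map<String, Object>> dataList) {
            if (dataList == null || dataList.isEmpty()) {
                return 0.0;
            }
            Map<String, Object> last = dataList.get(dataList.size() - 1);
            double outflow = asDouble(last.get("cumulativeOutflow"));
            double inflow = asDouble(last.get("cumulativeInflow"));
            if (inflow <= 0) {
                return 0.0;
            }
            double throughput = (outflow * 100.0) / inflow;
            return Math.round(throughput * 100.0) / 100.0; // round to two decimal places
        }

        private static double asDouble(Object value) {
            return value instanceof Number ? ((Number) value).doubleValue() : 0.0;
        }
    }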

Poem

🐰 Hop, hop! Through value streams we now query,
Throughput metrics flow without worry!
Issue bugs and flows align,
One new GraphQL query so fine,
The rabbit's work is truly divine! 🌟

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 0.00%, which is insufficient; the required threshold is 80.00%. Resolution: you can run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (2 passed)
  • Title check (✅ Passed): The title clearly and accurately summarizes the main change: introducing a new FlowQuery class for GraphQL queries, which aligns with the primary addition in the changeset.
  • Description check (✅ Passed): Check skipped because CodeRabbit's high-level summary is enabled.
✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch Add-FlowQuery-class-for-GraphQL-queries

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

❤️ Share

Comment @coderabbitai help to get the list of available commands and usage tips.

@visz11
Author

visz11 commented Nov 10, 2025

/refacto-visz

@gemini-code-assist

Summary of Changes

Hello @visz11, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a foundational FlowQuery class designed to extend the GraphQL API capabilities. Its primary purpose is to enable the retrieval and processing of complex throughput metrics, such as issue and bug throughput, for value stream analysis and charting. The class integrates with various existing services to aggregate data, apply filters, and manage different levels of granularity, while also providing clear error states for integration and configuration challenges.

Highlights

  • New GraphQL Query Class: Introduced a new FlowQuery class, which serves as a GraphQL query resolver to handle data requests related to various flow metrics.
  • Throughput Metric Calculation: The FlowQuery class includes a getThroughputValueStreamByType method, responsible for calculating and preparing throughput data (e.g., issue throughput, bug throughput) for charting, supporting different granularities and filters.
  • Integration and Error Handling: The new query resolver incorporates robust error handling for scenarios like missing integrations (NoIntegrationException) and misconfigurations (ConfigurationException), ensuring appropriate chart data states are returned. A simplified mapping sketch follows this list.
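
A simplified, self-contained sketch of mapping those exceptions to an explicit chart error state. ChartStub and the locally declared exception classes are stand-ins for the project's Chart, NoIntegrationException, and ConfigurationException types; the NO_INTEGRATION and NOT_CONFIGURED states echo the testing guide later in this thread.

    // Stand-ins for the project's DTOs and exceptions, for illustration only.
    enum ChartDataState { OK, NO_INTEGRATION, NOT_CONFIGURED }

    class NoIntegrationException extends Exception {
        NoIntegrationException(String message) { super(message); }
    }

    class ConfigurationException extends Exception {
        ConfigurationException(String message) { super(message); }
    }

    class ChartStub {
        final ChartDataState state;
        final String errorMessage;

        ChartStub(ChartDataState state, String errorMessage) {
            this.state = state;
            this.errorMessage = errorMessage;
        }
    }

    final class ChartErrorMapping {

        private ChartErrorMapping() {}

        // Returns a chart carrying an explicit error state and message instead of
        // letting the GraphQL call fail, mirroring the behavior described above.
        static ChartStub toErrorChart(Exception ex) {
            if (ex instanceof NoIntegrationException) {
                return new ChartStub(ChartDataState.NO_INTEGRATION, ex.getMessage());
            }
            if (ex instanceof ConfigurationException) {
                return new ChartStub(ChartDataState.NOT_CONFIGURED, ex.getMessage());
            }
            throw new IllegalArgumentException("Unexpected exception type", ex);
        }
    }
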
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review: /gemini review performs a code review for the current pull request in its current state.
  • Pull Request Summary: /gemini summary provides a summary of the current pull request in its current state.
  • Comment: @gemini-code-assist responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help: /gemini help displays a list of available commands.

Customization

To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@refacto-visz

refacto-visz bot commented Nov 10, 2025

Add FlowQuery class for GraphQL queries

TL;DR: Adds new GraphQL query resolver for throughput and value stream analytics with chart generation capabilities.


Refacto PR Summary

Introduces FlowQuery class as a GraphQL resolver for flow-based analytics queries with comprehensive throughput calculations.
This PR implements a new GraphQL query resolver that handles throughput metrics and value stream analytics. The FlowQuery class extends BaseQuery and provides the getThroughputValueStreamByType method for calculating issue and bug throughput with configurable granularity and filtering. The implementation includes integration validation, threshold management, chart generation, and comprehensive error handling for missing integrations and configuration issues.

Change Highlights

  • FlowQuery.java: New GraphQL resolver with throughput calculation logic
  • FlowQuery.java: Integration service validation and configuration management
  • FlowQuery.java: Chart generation with metadata and threshold support
  • FlowQuery.java: Exception handling for integration and configuration errors
  • FlowQuery.java: Previous period comparison statistics calculation
  • FlowQuery.java: Executor service for concurrent processing capabilities

Sequence Diagram

sequenceDiagram
    participant Client as GraphQL Client
    participant FQ as FlowQuery
    participant IS as IntegrationService
    participant CU as ChartUtilService
    participant TS as ThresholdService
    participant DB as Database
    
    Client->>FQ: getThroughputValueStreamByType()
    FQ->>FQ: Parse metric and granularity
    FQ->>IS: isIntegrationActive()
    IS-->>FQ: Validation result
    FQ->>DB: getValueStreamData()
    DB-->>FQ: Raw throughput data
    FQ->>FQ: Calculate cumulative flow
    FQ->>TS: getThreshold()
    TS-->>FQ: Threshold config
    FQ->>CU: getChartForValueStreamFromProjectData()
    CU-->>FQ: Formatted chart
    FQ-->>Client: Chart with metadata

Testing Guide

  1. Basic throughput query: Execute getThroughputValueStreamByType with ISSUE_THROUGHPUT metric and WEEK granularity, verify chart data structure
  2. Bug throughput calculation: Test BUG_THROUGHPUT metric with valid filter, confirm cumulative inflow/outflow calculations and percentage accuracy (an arithmetic test sketch follows this list)
  3. Integration validation: Test with inactive integration, verify NO_INTEGRATION chart state and error message returned
  4. Configuration error handling: Test with missing metric configuration, confirm NOT_CONFIGURED state and appropriate error response
  5. Previous period comparison: Execute with date ranges spanning multiple periods, verify previous period statistics calculation accuracy
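
For check 2, the expected percentage and rounding behavior can be pinned down with a small self-contained JUnit 5 sketch; it exercises only the arithmetic, not FlowQuery or its services, and the throughputPercent helper exists only in this test.

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    // Exercises only the throughput arithmetic described in check 2:
    // throughput = cumulativeOutflow * 100 / cumulativeInflow, rounded to two decimals.
    class ThroughputArithmeticTest {

        private static double throughputPercent(double cumulativeOutflow, double cumulativeInflow) {
            if (cumulativeInflow <= 0) {
                return 0.0;
            }
            double raw = (cumulativeOutflow * 100.0) / cumulativeInflow;
            return Math.round(raw * 100.0) / 100.0;
        }

        @Test
        void computesRoundedPercentage() {
            // 7 items flowed out of 9 that flowed in -> 77.78% after rounding.
            assertEquals(77.78, throughputPercent(7, 9), 1e-9);
        }

        @Test
        void zeroInflowYieldsZeroInsteadOfDividingByZero() {
            assertEquals(0.0, throughputPercent(5, 0), 1e-9);
        }
    }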

@refacto-visz

refacto-visz bot commented Nov 10, 2025

Refacto is reviewing this PR. Please wait for the review comments to be posted.

@codeant-ai codeant-ai bot added the size:L label (This PR changes 100-499 lines, ignoring generated files) on Nov 10, 2025

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a new FlowQuery class to handle GraphQL queries related to flow metrics. The implementation is a good start, but there are several critical areas that need improvement. I've identified a potential resource leak with an unmanaged ExecutorService, an exception handling issue that could lead to unpredictable behavior, and multiple instances of unsafe type casting that could cause runtime exceptions. Additionally, there are some minor code cleanup opportunities, such as removing unused imports, fields, and a method parameter. Addressing these points will significantly improve the robustness and maintainability of the code.

Comment on lines +79 to +83
ExecutorService executorService = Executors.newFixedThreadPool(8);

Gson gson = new Gson();

Logger logger = LoggerFactory.getLogger(FlowQuery.class);


high

The fields executorService, gson, and logger are initialized but never used within this class.

In particular, executorService creates a thread pool that is never shut down, which is a resource leak. If an executor is needed, it should be managed by the Spring container as a bean to handle its lifecycle correctly, and its size should be configurable rather than hardcoded.

If these fields are not needed, they should be removed to clean up the code and prevent resource leaks.
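
If the executor is genuinely needed, one option in line with this comment is to let the Spring container own its lifecycle and size. A minimal sketch, assuming Spring Boot and an app.flow-query.thread-pool-size property invented here for illustration:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    import org.springframework.beans.factory.annotation.Value;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class FlowQueryExecutorConfig {

        // destroyMethod = "shutdown" lets the Spring container stop the pool when
        // the application context closes, so no @PreDestroy hook is needed in FlowQuery.
        @Bean(destroyMethod = "shutdown")
        public ExecutorService flowQueryExecutor(
                @Value("${app.flow-query.thread-pool-size:8}") int poolSize) {
            return Executors.newFixedThreadPool(poolSize);
        }
    }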

Comment on lines +101 to +104
if (dataList.size() - 1 > 0) {
cumulativeOutflow = (int) dataList.get(dataList.size() - 1).get("cumulativeOutflow");
cumulativeInflow = (int) dataList.get(dataList.size() - 1).get("cumulativeInflow");
}


high

The condition dataList.size() - 1 > 0 is unconventional and can be simplified to dataList.size() > 1 for better readability. More importantly, the direct cast from Object to int is unsafe and can cause a ClassCastException or data loss if the value is a Double. It's safer to cast to Number, use getOrDefault to handle missing keys, and then get the appropriate primitive value.

Suggested change
if (dataList.size() - 1 > 0) {
cumulativeOutflow = (int) dataList.get(dataList.size() - 1).get("cumulativeOutflow");
cumulativeInflow = (int) dataList.get(dataList.size() - 1).get("cumulativeInflow");
}
if (dataList.size() > 1) {
Map<String, Object> lastDataPoint = dataList.get(dataList.size() - 1);
cumulativeOutflow = ((Number) lastDataPoint.getOrDefault("cumulativeOutflow", 0)).doubleValue();
cumulativeInflow = ((Number) lastDataPoint.getOrDefault("cumulativeInflow", 0)).doubleValue();
}

prevPeriodDataList = convertIssueThroughputData(getValueStreamData(filter.lastPeriodFilterInstance(), metricConfig, granularity));
throughput = Math.round(throughput * 100.0) / 100.0;
if (CollectionUtils.isNotEmpty(prevPeriodDataList) && CollectionUtils.isNotEmpty(dataList)) {
prevPeriodStatAndTotalPair = Pair.of(teamQuery.getPreviousPeriodStat((int) (double) dataList.get(dataList.size() - 1).get("y"), (int) (double) prevPeriodDataList.get(prevPeriodDataList.size() - 1).get("y"), metricConfig), throughput);


high

This line is very long and complex, which harms readability and maintainability. It also performs unsafe casts (int) (double) on values retrieved from a map, which can lead to ClassCastException or NullPointerException. The logic should be broken down into smaller, more manageable steps with safe access to map values.

                            Map<String, Object> lastDataPoint = dataList.get(dataList.size() - 1);
                            Map<String, Object> lastPrevPeriodDataPoint = prevPeriodDataList.get(prevPeriodDataList.size() - 1);

                            double currentY = ((Number) lastDataPoint.getOrDefault("y", 0.0)).doubleValue();
                            double prevPeriodY = ((Number) lastPrevPeriodDataPoint.getOrDefault("y", 0.0)).doubleValue();

                            prevPeriodStatAndTotalPair = Pair.of(teamQuery.getPreviousPeriodStat((int) currentY, (int) prevPeriodY, metricConfig), throughput);

Comment on lines +134 to +136
} catch (Exception ex) {
globalExceptionHandler.handleGeneralException(ex);
}


high

The generic catch (Exception ex) block handles the exception but then allows execution to fall through to the final return statement. This can lead to unexpected behavior or further exceptions if the chart data is in an inconsistent state. To ensure predictable behavior, you should return the chart object from within this catch block, similar to how NoIntegrationException and ConfigurationException are handled.

        } catch (Exception ex) {
            globalExceptionHandler.handleGeneralException(ex);
            return chart;
        }

Comment on lines +4 to +52
import com.flecso.employer.central.entity.GoalTemplate;
import com.flecso.employer.dto.*;
import com.flecso.employer.dto.analytics.*;
import com.flecso.employer.dto.charts.Chart;
import com.flecso.employer.dto.charts.ChartDataState;
import com.flecso.employer.dto.charts.ChartError;
import com.flecso.employer.dto.charts.ChartMetadata;
import com.flecso.employer.dto.filters.IssueFilterConditions;
import com.flecso.employer.dto.rechart.ChartKey;
import com.flecso.employer.entity.IntegrationType;
import com.flecso.employer.entity.IssueUnits;
import com.flecso.employer.central.entity.MetricConfig;
import com.flecso.employer.entity.SDLCStageConfig;
import com.flecso.employer.exception.ConfigurationException;
import com.flecso.employer.exception.GlobalExceptionHandler;
import com.flecso.employer.exception.NoIntegrationException;
import com.flecso.employer.repository.IssueUnitCustomRepoImpl;
import com.flecso.employer.repository.IssueUnitsRepo;
import com.flecso.employer.repository.SDLCStageConfigRepo;
import com.flecso.employer.service.DateUtil;
import com.flecso.employer.service.TeamService;
import com.flecso.employer.service.goal.ThresholdService;
import com.flecso.employer.service.integration.IntegrationService;
import com.flecso.employer.util.ChartUtilService;
import com.flecso.employer.util.ColorUtil;
import com.flecso.employer.util.Constants;
import com.flecso.employer.util.CoreUtil;
import com.github.sisyphsu.dateparser.DateParserUtils;
import com.google.common.collect.Sets;
import com.google.gson.Gson;
import org.apache.commons.collections4.CollectionUtils;
import org.apache.commons.collections4.MapUtils;
import org.apache.commons.lang3.StringUtils;
import org.apache.commons.lang3.tuple.Pair;
import org.apache.commons.lang3.tuple.Triple;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

import java.math.BigDecimal;
import java.time.LocalDate;
import java.time.LocalDateTime;
import java.time.temporal.ChronoUnit;
import java.util.*;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.Collectors;


medium

There are several unused imports in this file. Removing them will improve code clarity and reduce clutter. For example, GoalTemplate, IssueFilterConditions, ChartKey, DateUtil, ColorUtil, Constants, CoreUtil, DateParserUtils, Sets, MapUtils, Triple, BigDecimal, LocalDate, LocalDateTime, ChronoUnit, CompletableFuture, and Collectors appear to be unused. Please run your IDE's 'organize imports' feature to clean them up.


Logger logger = LoggerFactory.getLogger(FlowQuery.class);

public Chart getThroughputValueStreamByType(String metricStr, String granularityStr, String breakdown, Filter filter) throws Exception {


medium

The method parameter breakdown is not used within the method getThroughputValueStreamByType. It should be removed if it's not needed to avoid confusion and simplify the method signature.

Suggested change
public Chart getThroughputValueStreamByType(String metricStr, String granularityStr, String breakdown, Filter filter) throws Exception {
public Chart getThroughputValueStreamByType(String metricStr, String granularityStr, Filter filter) throws Exception {

@refacto-visz

refacto-visz bot commented Nov 10, 2025

Code Review: FlowQuery GraphQL Resolver

👍 Well Done
Dependency Injection Architecture

Proper Spring autowiring promotes testability and loose coupling with clean service layer separation

Exception Handling Structure

Comprehensive exception handling with proper error state management and user feedback implementation

📁 Selected files for review (1)
  • FlowQuery.java
🎯 Custom Instructions
✅ Applied Instructions
Organization Guidelines
  • Keep pull requests small and focused (prefer < 400 lines of code).
  • All CI/CD checks, linting, and unit tests must pass before merge.
  • Use feature flags for new functionality and include a clear rollback plan.
  • Follow the company security checklist:
    • No hard-coded secrets or credentials.
    • Validate all external inputs.
    • Use parameterized queries for DB access.

Scope: All files

📝 Additional Comments
FlowQuery.java (4)
Exception Swallowing Pattern

Generic exception handling delegates to the global handler without local recovery or state cleanup. The method continues execution after the exception is handled, potentially returning an inconsistent state, and the caller receives no indication of the processing failure, which affects system reliability.

Standards:

  • ISO-IEC-25010-Reliability-Fault-Tolerance
  • SRE-Error-Handling
  • DbC-Postconditions
Collection Sizing Optimization

ArrayList initialization without capacity specification may cause multiple resize operations during data population. Memory efficiency depends on expected data volume from getValueStreamData operations. Consider initial sizing based on typical result sets.

Standards:

  • ISO-IEC-25010-Performance-Efficiency-Resource-Utilization
  • Memory-Allocation-Optimization
Missing Strategy Pattern

Switch statement on metric type suggests a missing Strategy pattern for handling different throughput calculation algorithms. The current approach violates the Open/Closed Principle, as adding new metric types requires modifying existing code. A Strategy pattern would enable extension without modification and improve testability of individual metric handlers (a hedged sketch follows this set of comments).

Standards:

  • Design-Pattern-Strategy
  • SOLID-OCP
  • Clean-Code-Functions
Incomplete Path Coverage

Switch statement lacks default case handling for unknown metric types. Missing default branch could cause silent failures when new Metric enum values are added. Business logic completeness requires explicit handling of all possible enum states to prevent unexpected behavior in metric processing workflows.

Standards:

  • Logic-Verification-Path-Completeness
  • Business-Rule-Enum-Handling
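
A compact, self-contained illustration of the Strategy shape suggested in the Missing Strategy Pattern comment above; it also gives unknown metrics an explicit failure path, addressing the Incomplete Path Coverage comment. The Metric constants mirror those referenced in this PR, but the ThroughputHandler interface and registry are hypothetical.

    import java.util.EnumMap;
    import java.util.List;
    import java.util.Map;

    // Hypothetical stand-in for the project's Metric enum, limited to the values discussed here.
    enum Metric { ISSUE_THROUGHPUT, BUG_THROUGHPUT }

    // One handler per metric keeps the resolver closed to modification when new metrics arrive.
    interface ThroughputHandler {
        List<Map<String, Object>> transform(List<Map<String, Object>> rawData);
    }

    final class ThroughputHandlerRegistry {

        private final Map<Metric, ThroughputHandler> handlers = new EnumMap<>(Metric.class);

        ThroughputHandlerRegistry() {
            handlers.put(Metric.ISSUE_THROUGHPUT, rawData -> rawData); // placeholder transformation
            handlers.put(Metric.BUG_THROUGHPUT, rawData -> rawData);   // placeholder transformation
        }

        // Explicit failure for unmapped metrics: an unknown enum value surfaces
        // immediately instead of being silently skipped by a switch without a default.
        ThroughputHandler handlerFor(Metric metric) {
            ThroughputHandler handler = handlers.get(metric);
            if (handler == null) {
                throw new IllegalArgumentException("No throughput handler registered for " + metric);
            }
            return handler;
        }
    }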

@Autowired
private SDLCStageConfigRepo sdlcStageConfigRepo;

ExecutorService executorService = Executors.newFixedThreadPool(8);

Resource Leak Risk

An ExecutorService created without a proper shutdown mechanism leaks thread resources: threads remain active after application shutdown, preventing clean resource cleanup, and accumulated threads can lead to service-restart issues and memory exhaustion.

    @PreDestroy
    public void shutdown() {
        if (executorService != null && !executorService.isShutdown()) {
            executorService.shutdown();
            try {
                if (!executorService.awaitTermination(60, TimeUnit.SECONDS)) {
                    executorService.shutdownNow();
                }
            } catch (InterruptedException e) {
                executorService.shutdownNow();
                Thread.currentThread().interrupt();
            }
        }
    }
Committable Suggestion
Suggested change
ExecutorService executorService = Executors.newFixedThreadPool(8);
@PreDestroy
public void shutdown() {
if (executorService != null && !executorService.isShutdown()) {
executorService.shutdown();
try {
if (!executorService.awaitTermination(60, TimeUnit.SECONDS)) {
executorService.shutdownNow();
}
} catch (InterruptedException e) {
executorService.shutdownNow();
Thread.currentThread().interrupt();
}
}
}
Standards
  • ISO-IEC-25010-Reliability-Recoverability
  • DbC-Resource-Mgmt
  • SRE-Graceful-Shutdown

Comment on lines +102 to +103
cumulativeOutflow = (int) dataList.get(dataList.size() - 1).get("cumulativeOutflow");
cumulativeInflow = (int) dataList.get(dataList.size() - 1).get("cumulativeInflow");

Unsafe Type Casting

Unsafe casting from Object to int without type validation creates ClassCastException risk. Map values may contain different numeric types or null values causing runtime failures. Type safety validation required before casting operations.

                Object outflowObj = dataList.get(dataList.size() - 1).get("cumulativeOutflow");
                Object inflowObj = dataList.get(dataList.size() - 1).get("cumulativeInflow");
                cumulativeOutflow = outflowObj instanceof Number ? ((Number) outflowObj).doubleValue() : 0.0;
                cumulativeInflow = inflowObj instanceof Number ? ((Number) inflowObj).doubleValue() : 0.0;
Committable Suggestion
Suggested change
cumulativeOutflow = (int) dataList.get(dataList.size() - 1).get("cumulativeOutflow");
cumulativeInflow = (int) dataList.get(dataList.size() - 1).get("cumulativeInflow");
Object outflowObj = dataList.get(dataList.size() - 1).get("cumulativeOutflow");
Object inflowObj = dataList.get(dataList.size() - 1).get("cumulativeInflow");
cumulativeOutflow = outflowObj instanceof Number ? ((Number) outflowObj).doubleValue() : 0.0;
cumulativeInflow = inflowObj instanceof Number ? ((Number) inflowObj).doubleValue() : 0.0;
Standards
  • ISO-IEC-25010-Functional-Correctness-Appropriateness
  • DbC-Preconditions
  • SRE-Error-Handling

prevPeriodDataList = convertIssueThroughputData(getValueStreamData(filter.lastPeriodFilterInstance(), metricConfig, granularity));
throughput = Math.round(throughput * 100.0) / 100.0;
if (CollectionUtils.isNotEmpty(prevPeriodDataList) && CollectionUtils.isNotEmpty(dataList)) {
prevPeriodStatAndTotalPair = Pair.of(teamQuery.getPreviousPeriodStat((int) (double) dataList.get(dataList.size() - 1).get("y"), (int) (double) prevPeriodDataList.get(prevPeriodDataList.size() - 1).get("y"), metricConfig), throughput);

Complex Unsafe Type Casting

Complex nested type casting without validation creates multiple failure points. Double-to-int casting may lose precision and Object-to-double casting may throw ClassCastException. Chained casting operations increase runtime failure probability significantly.

                            Object currentYObj = dataList.get(dataList.size() - 1).get("y");
                            Object prevYObj = prevPeriodDataList.get(prevPeriodDataList.size() - 1).get("y");
                            int currentY = currentYObj instanceof Number ? ((Number) currentYObj).intValue() : 0;
                            int prevY = prevYObj instanceof Number ? ((Number) prevYObj).intValue() : 0;
                            prevPeriodStatAndTotalPair = Pair.of(teamQuery.getPreviousPeriodStat(currentY, prevY, metricConfig), throughput);
Committable Suggestion
Suggested change
prevPeriodStatAndTotalPair = Pair.of(teamQuery.getPreviousPeriodStat((int) (double) dataList.get(dataList.size() - 1).get("y"), (int) (double) prevPeriodDataList.get(prevPeriodDataList.size() - 1).get("y"), metricConfig), throughput);
Object currentYObj = dataList.get(dataList.size() - 1).get("y");
Object prevYObj = prevPeriodDataList.get(prevPeriodDataList.size() - 1).get("y");
int currentY = currentYObj instanceof Number ? ((Number) currentYObj).intValue() : 0;
int prevY = prevYObj instanceof Number ? ((Number) prevYObj).intValue() : 0;
prevPeriodStatAndTotalPair = Pair.of(teamQuery.getPreviousPeriodStat(currentY, prevY, metricConfig), throughput);
Standards
  • ISO-IEC-25010-Functional-Correctness-Appropriateness
  • DbC-Preconditions
  • SRE-Error-Handling

@Autowired
private SDLCStageConfigRepo sdlcStageConfigRepo;

ExecutorService executorService = Executors.newFixedThreadPool(8);

Hard-coded Thread Pool Configuration

Fixed thread pool size hard-coded without configuration management creates resource exhaustion risk under varying load conditions. Thread pool cannot adapt to system capacity or workload changes causing potential service degradation. Configurable thread pool sizing required for production reliability.

    @Value("${app.executor.thread-pool-size:8}")
    private int threadPoolSize;
    
    ExecutorService executorService = Executors.newFixedThreadPool(threadPoolSize);
Committable Suggestion
Suggested change
ExecutorService executorService = Executors.newFixedThreadPool(8);
@Value("${app.executor.thread-pool-size:8}")
private int threadPoolSize;
ExecutorService executorService = Executors.newFixedThreadPool(threadPoolSize);
Standards
  • ISO-IEC-25010-Reliability-Fault-Tolerance
  • SRE-Resource-Management

@Autowired
private SDLCStageConfigRepo sdlcStageConfigRepo;

ExecutorService executorService = Executors.newFixedThreadPool(8);

Fixed Thread Pool Scalability

Fixed thread pool with hardcoded size creates resource contention under variable load. Thread pool sizing should be configurable and based on system resources. Pool exhaustion causes request queuing and degraded response times.

    @Value("${app.executor.core-pool-size:4}")
    private int corePoolSize;
    @Value("${app.executor.max-pool-size:8}")
    private int maxPoolSize;
    
    ExecutorService executorService = new ThreadPoolExecutor(
        corePoolSize, maxPoolSize, 60L, TimeUnit.SECONDS,
        new LinkedBlockingQueue<>(), new ThreadPoolExecutor.CallerRunsPolicy());
Committable Suggestion
Suggested change
ExecutorService executorService = Executors.newFixedThreadPool(8);
@Value("${app.executor.core-pool-size:4}")
private int corePoolSize;
@Value("${app.executor.max-pool-size:8}")
private int maxPoolSize;
ExecutorService executorService = new ThreadPoolExecutor(
corePoolSize, maxPoolSize, 60L, TimeUnit.SECONDS,
new LinkedBlockingQueue<>(), new ThreadPoolExecutor.CallerRunsPolicy());
Standards
  • ISO-IEC-25010-Performance-Efficiency-Resource-Utilization
  • Optimization-Pattern-Thread-Pool-Management
  • Algorithmic-Complexity-Resource-Scaling

Comment on lines +106 to +108
if (cumulativeInflow != 0) {
throughput = (cumulativeOutflow*100) / cumulativeInflow;
}

Division By Zero Validation

Throughput calculation logic has potential precision loss and incomplete validation. The integer casts on lines 102-103 may truncate decimal values from Map objects containing doubles, and the division lacks validation for negative values, which could produce misleading throughput percentages in business calculations.

            if (cumulativeInflow > 0 && cumulativeOutflow >= 0) {
                throughput = (cumulativeOutflow * 100.0) / cumulativeInflow;
            } else {
                throughput = 0.0;
            }
Committable Suggestion
Suggested change
if (cumulativeInflow != 0) {
throughput = (cumulativeOutflow*100) / cumulativeInflow;
}
if (cumulativeInflow > 0 && cumulativeOutflow >= 0) {
throughput = (cumulativeOutflow * 100.0) / cumulativeInflow;
} else {
throughput = 0.0;
}
Standards
  • Mathematical-Accuracy-Precision-Preservation
  • Business-Rule-Input-Validation

import static com.flecso.employer.dto.analytics.Metric.*;

@Component
public class FlowQuery extends BaseQuery implements GraphQLQueryResolver {

God Class Violation

FlowQuery class violates Single Responsibility Principle by handling multiple unrelated concerns including data access, service coordination, exception handling, and chart generation. The class has 10+ dependencies suggesting excessive responsibilities. This creates tight coupling, makes testing difficult, and increases change risk across multiple domains.

Standards
  • SOLID-SRP
  • Clean-Code-Class-Organization
  • Maintainability-Quality-Coupling

dataList = getValueStreamData(filter, metricConfig, granularity);
double cumulativeOutflow = 0;
double cumulativeInflow = 0;
if (dataList.size() - 1 > 0) {

Suggestion: Fix off-by-one logic: check for list non-empty with dataList.size() > 0 before accessing last element. [possible bug]

Severity Level: Critical 🚨

Suggested change
if (dataList.size() - 1 > 0) {
if (dataList.size() > 0) {
Why it matters? ⭐

The current condition (dataList.size() - 1 > 0) is equivalent to dataList.size() > 1, which skips the single-element case even though the code only needs to read the last element. Changing it to dataList.size() > 0 (or !dataList.isEmpty()) correctly allows processing when there is exactly one element and prevents unintended skipping.

Prompt for AI Agent 🤖
This is a comment left during a code review.

**Path:** FlowQuery.java
**Line:** 101:101
**Comment:**
	*Possible Bug: Fix off-by-one logic: check for list non-empty with dataList.size() > 0 before accessing last element.

Validate the correctness of the flagged issue. If correct, How can I resolve this? If you propose a fix, implement it and please make it concise.

Comment on lines +102 to +103
cumulativeOutflow = (int) dataList.get(dataList.size() - 1).get("cumulativeOutflow");
cumulativeInflow = (int) dataList.get(dataList.size() - 1).get("cumulativeInflow");

Suggestion: Avoid ClassCastException by converting numeric map values via Number.doubleValue() instead of casting Object to int. [possible bug]

Severity Level: Critical 🚨

Suggested change
cumulativeOutflow = (int) dataList.get(dataList.size() - 1).get("cumulativeOutflow");
cumulativeInflow = (int) dataList.get(dataList.size() - 1).get("cumulativeInflow");
Object outflowObj = dataList.get(dataList.size() - 1).get("cumulativeOutflow");
Object inflowObj = dataList.get(dataList.size() - 1).get("cumulativeInflow");
if (outflowObj instanceof Number) {
cumulativeOutflow = ((Number) outflowObj).doubleValue();
} else if (outflowObj != null) {
cumulativeOutflow = Double.parseDouble(outflowObj.toString());
}
if (inflowObj instanceof Number) {
cumulativeInflow = ((Number) inflowObj).doubleValue();
} else if (inflowObj != null) {
cumulativeInflow = Double.parseDouble(inflowObj.toString());
}
Why it matters? ⭐

The current code extracts Objects from a Map and casts them with (int), which is brittle and can throw ClassCastException if the stored value is a Double, Long, etc. The file contains these exact lines (see getThroughputValueStreamByType) and the variables are declared as double, so using Number.doubleValue() or parsing toString() is safer and will correctly handle Integer/Double/Long values and avoid runtime casting issues.

Prompt for AI Agent 🤖
This is a comment left during a code review.

**Path:** FlowQuery.java
**Line:** 102:103
**Comment:**
	*Possible Bug: Avoid ClassCastException by converting numeric map values via Number.doubleValue() instead of casting Object to int.

Validate the correctness of the flagged issue. If correct, How can I resolve this? If you propose a fix, implement it and please make it concise.

prevPeriodDataList = convertIssueThroughputData(getValueStreamData(filter.lastPeriodFilterInstance(), metricConfig, granularity));
throughput = Math.round(throughput * 100.0) / 100.0;
if (CollectionUtils.isNotEmpty(prevPeriodDataList) && CollectionUtils.isNotEmpty(dataList)) {
prevPeriodStatAndTotalPair = Pair.of(teamQuery.getPreviousPeriodStat((int) (double) dataList.get(dataList.size() - 1).get("y"), (int) (double) prevPeriodDataList.get(prevPeriodDataList.size() - 1).get("y"), metricConfig), throughput);

Suggestion: Avoid unsafe double/int casts on y values extracted from the maps by converting via Number.intValue()/Integer.parseInt(). [possible bug]

Severity Level: Critical 🚨

Suggested change
prevPeriodStatAndTotalPair = Pair.of(teamQuery.getPreviousPeriodStat((int) (double) dataList.get(dataList.size() - 1).get("y"), (int) (double) prevPeriodDataList.get(prevPeriodDataList.size() - 1).get("y"), metricConfig), throughput);
Object currYObj = dataList.get(dataList.size() - 1).get("y");
Object prevYObj = prevPeriodDataList.get(prevPeriodDataList.size() - 1).get("y");
int currY = currYObj instanceof Number ? ((Number) currYObj).intValue() : Integer.parseInt(currYObj.toString());
int prevY = prevYObj instanceof Number ? ((Number) prevYObj).intValue() : Integer.parseInt(prevYObj.toString());
prevPeriodStatAndTotalPair = Pair.of(teamQuery.getPreviousPeriodStat(currY, prevY, metricConfig), throughput);
Why it matters? ⭐

The code currently performs (int)(double) casts on Objects pulled from maps, which is fragile and may fail for Integer/Long/BigDecimal values. The same hunk shows these exact casts; converting via Number.intValue() or parsing ensures correct behavior across numeric types and avoids ClassCastException and unboxing issues.

Prompt for AI Agent 🤖
This is a comment left during a code review.

**Path:** FlowQuery.java
**Line:** 119:119
**Comment:**
	*Possible Bug: Avoid unsafe double/int casts on `y` values extracted from the maps by converting via Number.intValue()/Integer.parseInt().

Validate the correctness of the flagged issue. If correct, How can I resolve this? If you propose a fix, implement it and please make it concise.

globalExceptionHandler.handleGeneralException(ex);
}

return metric.equals(ISSUE_THROUGHPUT) ? chartUtilService.getChartForValueStreamFromProjectData(metricConfig, dataList, threshold, chartMetadata) : chartUtilService.getChartFromProjectData(null, dataList, prevPeriodStatAndTotalPair, chartMetadata, granularity);

Suggestion: Use enum identity comparison (==) for Metric instead of equals. [enhancement]

Severity Level: Minor ⚠️

Suggested change
return metric.equals(ISSUE_THROUGHPUT) ? chartUtilService.getChartForValueStreamFromProjectData(metricConfig, dataList, threshold, chartMetadata) : chartUtilService.getChartFromProjectData(null, dataList, prevPeriodStatAndTotalPair, chartMetadata, granularity);
return metric == ISSUE_THROUGHPUT ? chartUtilService.getChartForValueStreamFromProjectData(metricConfig, dataList, threshold, chartMetadata) : chartUtilService.getChartFromProjectData(null, dataList, prevPeriodStatAndTotalPair, chartMetadata, granularity);
Why it matters? ⭐

Metric is an enum; comparing enums with == is the recommended and idiomatic approach in Java, and unlike metric.equals(...) it cannot throw a NullPointerException if metric is null. Replacing metric.equals(...) with metric == ISSUE_THROUGHPUT improves clarity and avoids the small overhead of an equals call.

Prompt for AI Agent 🤖
This is a comment left during a code review.

**Path:** FlowQuery.java
**Line:** 138:138
**Comment:**
	*Enhancement: Use enum identity comparison (==) for `Metric` instead of `equals`.

Validate the correctness of the flagged issue. If correct, How can I resolve this? If you propose a fix, implement it and please make it concise.

@codeant-ai

codeant-ai bot commented Nov 10, 2025

CodeAnt AI finished reviewing your PR.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e184110 and fd85234.

📒 Files selected for processing (1)
  • FlowQuery.java (1 hunks)

Comment on lines +101 to +104
if (dataList.size() - 1 > 0) {
cumulativeOutflow = (int) dataList.get(dataList.size() - 1).get("cumulativeOutflow");
cumulativeInflow = (int) dataList.get(dataList.size() - 1).get("cumulativeInflow");
}

⚠️ Potential issue | 🔴 Critical

Fix boundary check so single-point data is processed.

When there is exactly one entry in dataList, the guard if (dataList.size() - 1 > 0) skips the body, leaving both cumulative values at 0 and reporting throughput as 0 despite having valid data. This breaks single-bucket charts (e.g., only the latest period), which is a common scenario right after integration or in narrow date ranges. Change the condition to if (dataList.size() > 0) (and similar defensive checks as needed) so that the most recent record is always considered.

🤖 Prompt for AI Agents
In FlowQuery.java around lines 101 to 104, the boundary check skips processing
when dataList has exactly one element; change the guard from `if
(dataList.size() - 1 > 0)` to `if (dataList.size() > 0)` so the last record is
always read, and add minimal defensive checks (verify
dataList.get(dataList.size()-1) is non-null and contains the expected keys or
types before casting) to avoid NPEs or ClassCastExceptions.
