
Conversation

@chenfeiz0326 (Collaborator) commented Oct 24, 2025

Description

This PR adds support for uploading pytest perf results (server-client benchmark only) to an OpenSearch database.

OpenSearch Database: https://gpuwa.nvidia.com/os-dashboards/app/data-explorer/discover#/view/8b942dd0-b166-11f0-990f-f7c929993cf6?_q=(filters:!(),query:(language:lucene,query:''))&_a=(discover:(columns:!(_source),isDirty:!f,savedSearch:'8b942dd0-b166-11f0-990f-f7c929993cf6',sort:!()),metadata:(indexPattern:'54473a20-b166-11f0-990f-f7c929993cf6',view:discover))&_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-6M,to:now))
OpenSearch Dashboard: https://gpuwa.nvidia.com/os-dashboards/app/dashboards#/view/a3f0e760-b166-11f0-990f-f7c929993cf6?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-15d,to:now))&_a=(description:'',filters:!(),fullScreenMode:!f,options:(hidePanelTitles:!f,useMargins:!t),panels:!((embeddableConfig:(),gridData:(h:15,i:bf15b8e3-2ad6-486e-a141-789c02dbfb78,w:24,x:0,y:0),id:'16a64a70-b167-11f0-990f-f7c929993cf6',panelIndex:bf15b8e3-2ad6-486e-a141-789c02dbfb78,type:visualization,version:'2.15.0'),(embeddableConfig:(),gridData:(h:15,i:'12d009b4-ca34-4da5-9956-e49fe811a048',w:24,x:24,y:0),id:'70ab2ea0-b167-11f0-990f-f7c929993cf6',panelIndex:'12d009b4-ca34-4da5-9956-e49fe811a048',type:visualization,version:'2.15.0')),query:(language:kuery,query:''),timeRestore:!f,title:df-sandbox-temp-1-perf-test-nvidiagb200-deepseek_r1_0528_fp4,viewMode:edit)
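
At its core, the upload path is a plain HTTP POST of a JSON document to an OpenSearch index endpoint. The following is a minimal illustrative sketch; the index name, field names, and values are placeholders, not the ones this PR registers:

    import json
    import requests

    OPENSEARCH_URL = "https://gpuwa.nvidia.com"   # host taken from the links above
    INDEX = "perf-test-results"                   # placeholder index name

    doc = {
        "s_model_name": "deepseek_r1_0528_fp4",   # example string field
        "d_throughput": 1234.5,                   # example numeric metric
        "b_is_baseline": False,
    }

    # POST /<index>/_doc is the standard OpenSearch document-indexing API.
    res = requests.post(
        f"{OPENSEARCH_URL}/{INDEX}/_doc",
        data=json.dumps(doc),
        headers={"Content-Type": "application/json"},
        timeout=10,
    )
    res.raise_for_status()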

Summary by CodeRabbit

  • New Features

    • Added performance data collection and database integration for test results.
    • Introduced baseline performance tracking and historical data comparison capabilities.
    • Enhanced per-test metrics recording with granular result tracking.
  • Tests

    • Updated test suite with refined performance monitoring configuration.

Signed-off-by: Chenfei Zhang <[email protected]>
coderabbitai bot (Contributor) commented Oct 24, 2025

📝 Walkthrough

Introduces NVDataFlow integration for performance testing. A new nvdf.py module provides configuration assembly, GPU metadata collection, API operations (POST/GET with retry logic), and baseline data preparation. The test_perf.py module is updated to convert test configurations to NVDataFlow format, track per-test results, and upload data to the database after execution. Supporting changes include method signature updates and test list modifications.

Changes

NVDataFlow Module: tests/integration/defs/perf/nvdf.py
New module providing NVDataFlow service integration: configuration assembly via get_nvdf_config() and get_job_info(); data validation via type_check_for_nvdf() and _id(); API operations via post() and query() with retry logic; data composition via post_data(), query_data(), and prepare_baseline_data(); comparison utilities via match(), get_best_perf_result(), and get_baseline(). (A sketch of the field-prefix convention these payloads appear to follow is given after this list.)

Test Performance Integration: tests/integration/defs/perf/test_perf.py
Added imports from the nvdf module. Extended ServerConfig and ClientConfig with to_nvdf_data() methods for data conversion. Introduced a _test_results attribute and a store_test_result() method on MultiMetricPerfTest for per-test result tracking. Added an upload_test_results_to_database() method that aggregates, prepares, and posts results to NVDataFlow for server-benchmark runs.

Performance Utilities: tests/integration/defs/perf/utils.py
Updated the run_ex() method signature in AbstractPerfScriptTestClass to accept a metric_type parameter. Integrated a call to store_test_result() after performance computation to capture per-metric results.

Test Configuration: tests/integration/test_lists/test-db/perf_sanity_l0_dgx_b200.yml, tests/integration/test_lists/test-db/perf_sanity_l0_dgx_b300.yml
Removed test entries: the b300 variant from the b200 list and the b200 variant from the b300 list, aligning test coverage with the target GPU hardware.
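
The payload keys seen throughout this PR (s_model_name, l_moe_max_num_tokens, d_* metrics, b_is_baseline) suggest that NVDF field names encode their value type in a prefix. Below is a minimal sketch of what a validator such as type_check_for_nvdf() might do under that assumption; the actual implementation may differ:

    # Assumed convention: s_* -> str, l_* -> int, d_* -> float, b_* -> bool.
    PREFIX_TYPES = {"s_": str, "l_": int, "d_": (int, float), "b_": bool}

    def type_check(data: dict) -> bool:
        """Return True if every prefixed key holds a value of the expected type."""
        for key, value in data.items():
            for prefix, expected in PREFIX_TYPES.items():
                if key.startswith(prefix) and not isinstance(value, expected):
                    return False
        return True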

Sequence Diagram

sequenceDiagram
    participant Test as Test Execution
    participant Utils as utils.run_ex()
    participant PerfTest as PerfTest Instance
    participant NVDF as nvdf Module
    participant DB as NVDataFlow DB

    Test->>Utils: run_ex(full_test_name, metric_type, ...)
    Utils->>Utils: Execute benchmark & compute perf_result
    Utils->>PerfTest: store_test_result(cmd_idx, metric_type, perf_result)
    PerfTest->>PerfTest: _test_results[cmd_idx][metric_type] = perf_result
    Utils->>Utils: _write_result()
    
    Test->>PerfTest: upload_test_results_to_database()
    PerfTest->>NVDF: prepare_baseline_data(new_data, model_groups, gpu_type)
    NVDF->>NVDF: match/aggregate metrics from history
    NVDF->>PerfTest: return baseline_data_dict
    
    PerfTest->>NVDF: post_data(baseline_data_dict, new_data_dict, model_groups, gpu_type)
    NVDF->>NVDF: per project: type_check_for_nvdf, _id()
    NVDF->>DB: POST validated data with retry logic
    DB->>NVDF: response
    NVDF->>PerfTest: return
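In code, the flow in the diagram reduces to roughly the skeleton below. Method names follow the diagram; the bodies and the _collect_new_data() helper are simplified assumptions, not the actual implementation:

    from tests.integration.defs.perf import nvdf  # module added by this PR

    class MultiMetricPerfTest:

        def __init__(self):
            # cmd_idx -> {metric_type: perf_result}, per the diagram
            self._test_results = {}

        def store_test_result(self, cmd_idx, metric_type, perf_result):
            # Called from utils.run_ex() after each metric is computed.
            self._test_results.setdefault(cmd_idx, {})[metric_type] = perf_result

        def upload_test_results_to_database(self):
            # Hypothetical helper standing in for however the real method
            # assembles its inputs from self._test_results.
            new_data, model_groups, gpu_type = self._collect_new_data()
            baseline = nvdf.prepare_baseline_data(new_data, model_groups, gpu_type)
            nvdf.post_data(baseline, new_data, model_groups, gpu_type)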

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 78.57%, which is below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.
  • Description Check ⚠️ Warning: The pull request description is largely incomplete and is missing critical sections required by the repository template. A Description section exists, but it only briefly states what the PR does (uploading pytest perf results to OpenSearch) without explaining the motivation, implementation details, or solution approach. Most importantly, the Test Coverage and PR Checklist sections are entirely absent; these are essential for understanding test coverage and confirming compliance with development guidelines. Resolution: expand the PR description to include all required template sections: a comprehensive Description explaining the issue and the solution, a Test Coverage section listing the tests that validate the new upload functionality, and the PR Checklist with the appropriate items checked.
✅ Passed checks (1 passed)
  • Title Check ✅ Passed: The PR title "[TRTLLM-8825][feat] Support Pytest Perf Results uploading to Database" directly aligns with the main objective of the changeset. The raw_summary and pr_objectives confirm that the primary feature is support for uploading pytest performance results to an OpenSearch database. The title follows the required format with a JIRA ticket ID, a feature type indicator, and a concise summary, and is clear enough that teammates reviewing the git history would immediately understand the change.

@coderabbitai bot left a comment

Actionable comments posted: 10

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
tests/integration/defs/perf/utils.py (1)

509-520: Make metric_type Optional and avoid mutable default for outputs

run_ex is called with metric_type=None for prepare-dataset runs; also, the mutable default outputs={} is unsafe (see the demonstration after the diff).

-    def run_ex(self,
-               full_test_name: str,
-               metric_type: PerfMetricType,
+    def run_ex(self,
+               full_test_name: str,
+               metric_type: Optional[PerfMetricType],
                venv: Optional[PythonVenvRunnerImpl],
@@
-               output_dir: str,
-               outputs: Dict[int, str] = {},
+               output_dir: str,
+               outputs: Optional[Dict[int, str]] = None,
@@
-        outputs = outputs.copy()
+        outputs = (outputs or {}).copy()
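
The mutable-default hazard is easy to demonstrate: the default dict is created once, at function definition time, and then shared by every call that omits the argument:

    def run(outputs={}):              # one dict shared across all calls
        outputs["hits"] = outputs.get("hits", 0) + 1
        return outputs

    print(run())  # {'hits': 1}
    print(run())  # {'hits': 2}  <- state leaked from the previous call

    def run_safe(outputs=None):       # the usual fix, as the diff suggests
        outputs = {} if outputs is None else outputs.copy()
        outputs["hits"] = outputs.get("hits", 0) + 1
        return outputs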
tests/integration/defs/perf/test_perf.py (1)

2293-2305: Avoid KeyError when removing failed cmd results

Use pop(..., None) to safely discard failed entries.

-                    del self._test_results[self._current_cmd_idx]
+                    self._test_results.pop(self._current_cmd_idx, None)

Also applies to: 2311-2312

🧹 Nitpick comments (7)
tests/integration/defs/perf/nvdf.py (5)

29-31: Avoid hard-coded NVDF endpoint; allow HTTPS and override via env

Make NVDF_BASE_URL configurable (env) and prefer HTTPS for transport security.

-NVDF_BASE_URL = "http://gpuwa.nvidia.com"
-PROJECT_ROOT = "sandbox-tmp-perf-test"
+NVDF_BASE_URL = os.getenv("NVDF_BASE_URL", "https://gpuwa.nvidia.com")
+PROJECT_ROOT = os.getenv("NVDF_PROJECT_ROOT", "sandbox-tmp-perf-test")

139-158: GPU name normalization comment vs behavior

Comment says “Replace spaces with hyphens” but code removes spaces. Align both; hyphens are easier to read.

-                # Replace spaces with hyphens
-                gpu_type = gpu_type.replace(" ", "")
+                # Replace spaces with hyphens for readability
+                gpu_type = gpu_type.replace(" ", "-")
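
For context, GPU names of this kind typically come from nvidia-smi. A sketch of the detection-plus-normalization step under that assumption (the actual nvdf.py code may query differently):

    import subprocess

    def detect_gpu_type() -> str:
        # Query only the GPU name; nvidia-smi prints one line per GPU.
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=name", "--format=csv,noheader"],
            text=True,
        )
        name = out.splitlines()[0].strip()   # e.g. "NVIDIA GB200"
        return name.replace(" ", "-")        # hyphenated, per the suggestion above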

315-339: GET with body is unusual; also add backoff jitter

OpenSearch supports GET-with-body but some proxies don’t. Consider POST. Add sleep between retries for stability.

-    while retry_time:
-        res = requests.get(url, data=json_data, headers=headers, timeout=10)
+    while retry_time:
+        res = requests.get(url, data=json_data, headers=headers, timeout=10)
         if res.status_code in [200, 201, 202]:
             return res
@@
-        retry_time -= 1
+        retry_time -= 1
+        time.sleep(min(60, 2 ** (5 - retry_time)))

Optionally switch to requests.post(url, ...) if infra allows.
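
For reference, the POST-based equivalent is a one-line change, since OpenSearch accepts POST on the _search endpoint (the index name and query body below are placeholders):

    import requests

    res = requests.post(
        "https://gpuwa.nvidia.com/perf-test-results/_search",  # placeholder index
        json={"query": {"term": {"s_gpu_type": "GB200"}}, "size": 100},
        timeout=10,
    )
    hits = res.json().get("hits", {}).get("hits", [])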


507-560: Simplify baseline checks and guard non-numeric values

Small cleanups for clarity; also avoid including None in min/max.

-            # Skip baseline data
-            if data.get("b_is_baseline") and data.get("b_is_baseline") == True:
+            # Skip baseline data
+            if data.get("b_is_baseline"):
                 continue
-            if metric not in data:
+            if metric not in data or data[metric] is None:
                 continue
             values.append(data.get(metric))

562-578: Simplify truthy check

Minor readability improvement.

-    for data in history_data_list:
-        if data.get("b_is_baseline") and data.get("b_is_baseline") == True:
+    for data in history_data_list:
+        if data.get("b_is_baseline"):
             return data
tests/integration/defs/perf/utils.py (1)

607-609: Guard store_test_result and provide base-class default to prevent AttributeError

When extending AbstractPerfScriptTestClass, not all subclasses may implement store_test_result. Guard the call or add a no-op default in the base class.

-                # Store the test result
-                self.store_test_result(cmd_idx, metric_type, self._perf_result)
+                # Store the test result if supported
+                if metric_type is not None and hasattr(self, "store_test_result"):
+                    self.store_test_result(cmd_idx, metric_type, self._perf_result)

Optionally add to AbstractPerfScriptTestClass:

+    def store_test_result(self, cmd_idx: int, metric_type, perf_result: float) -> None:
+        """No-op default; subclasses may override to persist per-metric results."""
+        return
tests/integration/defs/perf/test_perf.py (1)

575-600: Use actual model name and stable typing in ServerConfig NVDF payload

  • Prefer grouping by underlying model_name rather than server config label for consistent project keys.
  • l_moe_max_num_tokens should be an int; default 0 avoids type-check failures.
     def to_nvdf_data(self) -> dict:
         """Convert ServerConfig to NVDataFlow data"""
         return {
-            "s_model_name": self.name,
+            "s_model_name": self.model_name,
+            "s_server_name": self.name,
@@
-            "l_moe_max_num_tokens": self.moe_max_num_tokens,
+            "l_moe_max_num_tokens": int(self.moe_max_num_tokens or 0),

Note: Adding s_server_name is optional but helpful for tracing.

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 602b059 and b5eb07a.

📒 Files selected for processing (5)
  • tests/integration/defs/perf/nvdf.py (1 hunks)
  • tests/integration/defs/perf/test_perf.py (10 hunks)
  • tests/integration/defs/perf/utils.py (2 hunks)
  • tests/integration/test_lists/test-db/perf_sanity_l0_dgx_b200.yml (0 hunks)
  • tests/integration/test_lists/test-db/perf_sanity_l0_dgx_b300.yml (0 hunks)
💤 Files with no reviewable changes (2)
  • tests/integration/test_lists/test-db/perf_sanity_l0_dgx_b300.yml
  • tests/integration/test_lists/test-db/perf_sanity_l0_dgx_b200.yml
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{h,hpp,hh,hxx,cpp,cxx,cc,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Use only spaces, no tabs; indent with 4 spaces.

Files:

  • tests/integration/defs/perf/utils.py
  • tests/integration/defs/perf/nvdf.py
  • tests/integration/defs/perf/test_perf.py
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+.
Indent Python code with 4 spaces; do not use tabs.
Maintain module namespace when importing; prefer 'from package.subpackage import foo' then 'foo.SomeClass()' instead of importing the class directly.
Python filenames should be snake_case (e.g., some_file.py).
Python classes use PascalCase names.
Functions and methods use snake_case names.
Local variables use snake_case; prefix 'k' for variables that start with a number (e.g., k_99th_percentile).
Global variables use upper SNAKE_CASE prefixed with 'G' (e.g., G_MY_GLOBAL).
Constants use upper SNAKE_CASE (e.g., MY_CONSTANT).
Avoid shadowing variables from an outer scope.
Initialize all externally visible members of a class in the constructor.
Prefer docstrings for interfaces that may be used outside a file; comments for in-function or file-local interfaces.
Use Google-style docstrings for classes and functions (Sphinx-parsable).
Document attributes and variables inline so they render under the class/function docstring.
Avoid reflection when a simpler, explicit approach suffices (e.g., avoid dict(**locals()) patterns).
In try/except, catch the most specific exceptions possible.
For duck-typing try/except, keep the try body minimal and use else for the main logic.

Files:

  • tests/integration/defs/perf/utils.py
  • tests/integration/defs/perf/nvdf.py
  • tests/integration/defs/perf/test_perf.py
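
Two of the Python guidelines above, catching the most specific exception and keeping the try body minimal with the main logic in else, combine into one small pattern:

    def get_metric(results: dict, key: str) -> float:
        try:
            value = results[key]      # only the risky lookup goes in try
        except KeyError:              # most specific exception for a dict lookup
            return 0.0
        else:
            return float(value)       # main logic lives in else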
**/*.{cpp,cxx,cc,h,hpp,hh,hxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Prepend the NVIDIA Apache-2.0 copyright header with current year to the top of all source files (e.g., .cpp, .h, .cu, .py).

Files:

  • tests/integration/defs/perf/utils.py
  • tests/integration/defs/perf/nvdf.py
  • tests/integration/defs/perf/test_perf.py
🧬 Code graph analysis (3)
tests/integration/defs/perf/utils.py (1)
tests/integration/defs/perf/test_perf.py (1)
  • store_test_result (2213-2220)
tests/integration/defs/perf/nvdf.py (1)
tests/integration/defs/trt_test_alternative.py (2)
  • print_error (318-324)
  • print_info (300-306)
tests/integration/defs/perf/test_perf.py (2)
tests/integration/defs/perf/nvdf.py (4)
  • _id (229-233)
  • get_nvdf_config (90-122)
  • post_data (341-361)
  • prepare_baseline_data (425-473)
tests/integration/defs/perf/utils.py (1)
  • PerfMetricType (87-117)
🪛 Ruff (0.14.1)
tests/integration/defs/perf/nvdf.py

91-91: Found useless expression. Either assign it to a variable or remove it. (B018)
91-91: Undefined name gpu_type (F821)
91-91: Undefined name gpu_count (F821)
91-91: Undefined name build_id (F821)
91-91: Undefined name build_url (F821)
91-91: Undefined name job_name (F821)
91-91: Undefined name job_id (F821)
91-91: Undefined name job_url (F821)
92-92: Found useless expression. Either assign it to a variable or remove it. (B018)
92-92: Undefined name branch (F821)
92-92: Undefined name commit (F821)
92-92: Undefined name is_post_merge (F821)
92-92: Undefined name is_pr_job (F821)
92-92: Undefined name trigger_mr_user (F821)
103-103: Undefined name gpu_type (F821)
104-104: Undefined name gpu_count (F821)
106-106: Undefined name host_node_name (F821)
109-109: Undefined name build_id (F821)
110-110: Undefined name build_url (F821)
111-111: Undefined name job_name (F821)
112-112: Undefined name job_id (F821)
113-113: Undefined name job_url (F821)
114-114: Undefined name branch (F821)
115-115: Undefined name commit (F821)
116-116: Undefined name is_post_merge (F821)
117-117: Undefined name is_pr_job (F821)
118-118: Undefined name trigger_mr_user (F821)
145-145: Starting a process with a partial executable path (S607)
160-160: Do not catch blind exception: Exception (BLE001)
167-167: Do not use bare except (E722)
300-300: Probable use of requests call without timeout (S113)
359-359: Do not catch blind exception: Exception (BLE001)
405-405: Loop control variable data overrides iterable it iterates (B020)
420-420: Do not catch blind exception: Exception (BLE001)
538-538: Avoid equality comparisons to True; use data.get("b_is_baseline") for truth checks (E712)
551-551: Avoid equality comparisons to True; use data.get("b_is_baseline") for truth checks (E712)
575-575: Avoid equality comparisons to True; use data.get("b_is_baseline") for truth checks (E712)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (4)
tests/integration/defs/perf/nvdf.py (1)

287-313: HTTP POST without timeout/backoff jitter; misleading retry message

Add timeout, exponential backoff, and correct the retry count in the final log.

-    retry_time = 5
-    while retry_time:
-        res = requests.post(url, data=json_data, headers=headers)
+    retries = 5
+    attempt = 0
+    while attempt < retries:
+        res = requests.post(url, data=json_data, headers=headers, timeout=10)
         if res.status_code in [200, 201, 202]:
@@
-        retry_time -= 1
+        attempt += 1
+        time.sleep(min(60, 2 ** attempt))
-    print_error(
-        f"Fail to post to {project} after {retry_time} retry: {url}, json: {json_data}, error: {res.text}"
-    )
+    print_error(
+        f"Fail to post to {project} after {retries} retries: {url}, json: {json_data}, error: {getattr(res, 'text', '<no response>')}"
+    )
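
Assembled into one self-contained helper, the suggested pattern reads as follows; this is a sketch under the review's assumptions, not the code in nvdf.py:

    import time
    from typing import Optional

    import requests

    def post_with_retry(url: str, json_data: str, headers: dict,
                        retries: int = 5) -> Optional[requests.Response]:
        """POST with timeout and exponential backoff; return None on failure."""
        res = None
        for attempt in range(1, retries + 1):
            try:
                res = requests.post(url, data=json_data,
                                    headers=headers, timeout=10)
                if res.status_code in (200, 201, 202):
                    return res
            except requests.RequestException:
                res = None            # network error: back off and retry
            time.sleep(min(60, 2 ** attempt))
        return None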
tests/integration/defs/perf/test_perf.py (2)

680-690: ClientConfig NVDF payload OK; relies on 'd_' numeric support in type checker

No change needed after expanding type_check_for_nvdf to accept d_*.


1453-1455: Result aggregation shape looks good

Storing results by cmd_idx and PerfMetricType is consistent with later upload usage.

Also applies to: 2213-2221

tests/integration/defs/perf/utils.py (1)

2254-2264: Review comment is inconsistent with actual code state

The review comment assumes metric_type has been changed to Optional[PerfMetricType], but the function definition at utils.py:509-519 shows it remains metric_type: PerfMetricType (not Optional). The call site at test_perf.py:2255 passes None, which violates this non-Optional type contract.

Either the function parameter type annotation needs to be updated to Optional[PerfMetricType], or the call site needs to pass a valid PerfMetricType value instead of None.

Likely an incorrect or invalid review comment.

Signed-off-by: Chenfei Zhang <[email protected]>
Signed-off-by: Chenfei Zhang <[email protected]>
@chenfeiz0326 (Collaborator, Author)

/bot run --stage-list "DGX_B200-4_GPUs-PyTorch-Perf-Sanity-Post-Merge-1,DGX_B300-4_GPUs-PyTorch-Perf-Sanity-Post-Merge-1"

@tensorrt-cicd (Collaborator)

PR_Github #22491 [ run ] triggered by Bot. Commit: ed8d6c5

@tensorrt-cicd (Collaborator)

PR_Github #22491 [ run ] completed with state SUCCESS. Commit: ed8d6c5
/LLM/main/L0_MergeRequest_PR pipeline #16949 (Partly Tested) completed with status: 'FAILURE'

Signed-off-by: Chenfei Zhang <[email protected]>
@chenfeiz0326 (Collaborator, Author)

/bot run --stage-list "DGX_B200-4_GPUs-PyTorch-Perf-Sanity-Post-Merge-1,DGX_B300-4_GPUs-PyTorch-Perf-Sanity-Post-Merge-1"

@tensorrt-cicd (Collaborator)

PR_Github #22495 [ run ] triggered by Bot. Commit: da15bed

@tensorrt-cicd (Collaborator)

PR_Github #22495 [ run ] completed with state SUCCESS. Commit: da15bed
/LLM/main/L0_MergeRequest_PR pipeline #16952 (Partly Tested) completed with status: 'SUCCESS'

Signed-off-by: Chenfei Zhang <[email protected]>
Signed-off-by: Chenfei Zhang <[email protected]>
@chenfeiz0326 (Collaborator, Author)

/bot run --stage-list "DGX_B200-4_GPUs-PyTorch-Perf-Sanity-Post-Merge-1,DGX_B300-4_GPUs-PyTorch-Perf-Sanity-Post-Merge-1"

@tensorrt-cicd (Collaborator)

PR_Github #22498 [ run ] triggered by Bot. Commit: ec8b536

@tensorrt-cicd (Collaborator)

PR_Github #22498 [ run ] completed with state SUCCESS. Commit: ec8b536
/LLM/main/L0_MergeRequest_PR pipeline #16955 (Partly Tested) completed with status: 'SUCCESS'

Signed-off-by: Chenfei Zhang <[email protected]>
@chenfeiz0326 (Collaborator, Author)

/bot run --stage-list "DGX_B200-4_GPUs-PyTorch-Perf-Sanity-Post-Merge-1,DGX_B300-4_GPUs-PyTorch-Perf-Sanity-Post-Merge-1"

@tensorrt-cicd (Collaborator)

PR_Github #22502 [ run ] triggered by Bot. Commit: 9276d86

Signed-off-by: Chenfei Zhang <[email protected]>
@chenfeiz0326 (Collaborator, Author)

/bot run --stage-list "DGX_B200-4_GPUs-PyTorch-Perf-Sanity-Post-Merge-1,DGX_B300-4_GPUs-PyTorch-Perf-Sanity-Post-Merge-1"

@tensorrt-cicd (Collaborator)

PR_Github #22503 [ run ] triggered by Bot. Commit: faca55a

@tensorrt-cicd (Collaborator)

PR_Github #22502 [ run ] completed with state ABORTED. Commit: 9276d86
LLM/main/L0_MergeRequest_PR #16959 (Blue Ocean) completed with status: ABORTED

@tensorrt-cicd (Collaborator)

PR_Github #22503 [ run ] completed with state SUCCESS. Commit: faca55a
/LLM/main/L0_MergeRequest_PR pipeline #16960 (Partly Tested) completed with status: 'SUCCESS'

Signed-off-by: Chenfei Zhang <[email protected]>
@chenfeiz0326 (Collaborator, Author)

/bot run --stage-list "DGX_B200-4_GPUs-PyTorch-Perf-Sanity-Post-Merge-1,DGX_B300-4_GPUs-PyTorch-Perf-Sanity-Post-Merge-1"

@tensorrt-cicd (Collaborator)

PR_Github #22524 [ run ] triggered by Bot. Commit: 6564b82

@tensorrt-cicd (Collaborator)

PR_Github #22524 [ run ] completed with state SUCCESS. Commit: 6564b82
/LLM/main/L0_MergeRequest_PR pipeline #16979 (Partly Tested) completed with status: 'SUCCESS'
