Conversation

@jasonqinzhou jasonqinzhou commented Nov 21, 2025

Overview:

test: add AIConfigurator dense model tests for Dynamo Planner Profiler (#4052, #4526)

Details:

Add test coverage.

Where should the reviewer start?

Related Issues: (use one of the action keywords Closes / Fixes / Resolves / Relates to)

  • closes GitHub issue: #xxx

Summary by CodeRabbit

  • Bug Fixes

    • Fixed configuration processing to safely handle optional flags.
  • Chores

    • Updated aiconfigurator dependency to release version 0.4.0.
    • Updated build configuration settings.
    • Improved test framework consistency.


@jasonqinzhou jasonqinzhou requested review from a team as code owners November 21, 2025 03:30
@github-actions github-actions bot added the test label Nov 21, 2025
coderabbitai bot commented Nov 21, 2025

Walkthrough

This PR bundles maintenance updates: a MyPy cache ignore entry in .gitignore, conditional flag handling in the benchmark utilities, an aiconfigurator dependency version upgrade, and a systematic test-fixture refactor renaming trtllm_args to llm_args.

Changes

  • Config & Dependencies (`.gitignore`, `benchmarks/pyproject.toml`): Adds a MyPy cache ignore entry (`.mypy_cache/`) to `.gitignore`; updates the aiconfigurator dependency in benchmarks from a pinned git commit hash to the release tag `release/0.4.0`.
  • Benchmark Utilities (`benchmarks/profiler/utils/config_modifiers/vllm.py`): Modifies the flag-removal logic in `convert_config` for the PREFILL target to remove the `--is-prefill-worker` flag only if present, preventing errors when it is absent.
  • Test Refactoring (`tests/profiler/test_profile_sla_aiconfigurator.py`): Systematically renames the `trtllm_args` fixture and parameter to `llm_args` across all test methods; renames `test_trtllm_aiconfigurator_many` to `test_aiconfigurator_dense_models`; updates all internal references and call sites.
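The conditional flag removal described for `convert_config` can be sketched as follows (a minimal illustration with a hypothetical helper name; the actual implementation lives in `benchmarks/profiler/utils/config_modifiers/vllm.py` and differs in detail):

```python
def remove_flag_if_present(args: list[str], flag: str) -> None:
    # list.remove() raises ValueError when the value is absent,
    # so check membership before removing.
    if flag in args:
        args.remove(flag)

cmd = ["vllm", "serve", "--model", "m", "--is-prefill-worker"]
remove_flag_if_present(cmd, "--is-prefill-worker")
assert "--is-prefill-worker" not in cmd
remove_flag_if_present(cmd, "--is-prefill-worker")  # safe no-op when absent
```

Calling the guarded removal twice is harmless, which is exactly the property an unconditional `args.remove(flag)` lacks.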

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

  • Test refactoring file: Verify all fixture and parameter references are consistently renamed without logic changes
  • vllm.py: Confirm conditional logic properly checks flag presence before removal
  • Dependency update: Validate aiconfigurator version compatibility with existing benchmarks

Poem

🐰 With hopping glee, the changes flow,
MyPy caches hide below,
Tests renamed in a gentle way,
Flags check twice before they stray,
Dependencies updated fresh,
Code refined in every mesh! ✨

Pre-merge checks

❌ Failed checks (2 warnings)

  • Docstring Coverage ⚠️ Warning — Docstring coverage is 0.00%, below the required threshold of 80.00%. Run `@coderabbitai generate docstrings` to improve coverage.
  • Description check ⚠️ Warning — The pull request description is incomplete: the Overview section contains malformed markdown links, the Details section lacks substantive information, and the Related Issues section contains placeholder text. Complete the Overview with a clear description of the changes, expand Details to explain what test coverage was added and why, and replace the placeholder "closes GitHub issue: #xxx" with the actual issue number.

✅ Passed checks (1 passed)

  • Title check ✅ Passed — The title directly describes the main change: adding AIConfigurator dense model tests. It aligns with the test file refactoring and new test additions in the changeset.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
tests/profiler/test_profile_sla_aiconfigurator.py (1)

107-109: Update test name to be backend-agnostic.

The test name test_trtllm_aiconfigurator_single_model retains the trtllm prefix but now uses the generic llm_args fixture. For consistency with the renamed fixture and the other test (test_aiconfigurator_dense_models at line 131), remove the backend-specific prefix.

Apply this diff:

-    async def test_trtllm_aiconfigurator_single_model(self, llm_args):
+    async def test_aiconfigurator_single_model(self, llm_args):
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e7544f1 and 75731c1.

📒 Files selected for processing (4)
  • .gitignore (1 hunks)
  • benchmarks/profiler/utils/config_modifiers/vllm.py (1 hunks)
  • benchmarks/pyproject.toml (1 hunks)
  • tests/profiler/test_profile_sla_aiconfigurator.py (5 hunks)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-11-14T03:26:06.160Z
Learnt from: jasonqinzhou
Repo: ai-dynamo/dynamo PR: 4331
File: tests/profiler/test_profile_sla_aiconfigurator.py:118-121
Timestamp: 2025-11-14T03:26:06.160Z
Learning: The aiconfigurator (aic) only supports sglang version 0.5.1.post1 in its database, so tests should use this specific version rather than the latest available sglang release.

Applied to files:

  • tests/profiler/test_profile_sla_aiconfigurator.py
🧬 Code graph analysis (1)
tests/profiler/test_profile_sla_aiconfigurator.py (1)
benchmarks/profiler/profile_sla.py (1)
  • run_profile (129-748)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: vllm (amd64)
  • GitHub Check: operator (amd64)
  • GitHub Check: Build and Test - dynamo
🔇 Additional comments (6)
.gitignore (1)

80-81: LGTM!

Standard MyPy cache directory ignore entry, properly organized under a dedicated section.

benchmarks/profiler/utils/config_modifiers/vllm.py (1)

126-127: LGTM!

Good defensive programming—checking membership before removal prevents ValueError if the flag is absent. This aligns with the conditional removal pattern used elsewhere in the same method (lines 130-131, 161-164).
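To make the failure mode concrete: this short snippet (illustrative only, not the repository's code) shows the `ValueError` that an unconditional `list.remove()` raises on a missing flag, and the guarded form the review describes.

```python
args = ["--tensor-parallel-size", "2"]

# Unguarded removal crashes when the flag is absent.
try:
    args.remove("--is-prefill-worker")
except ValueError:
    pass  # this is the error the new membership check avoids

# Guarded form, matching the conditional pattern in convert_config:
if "--is-prefill-worker" in args:
    args.remove("--is-prefill-worker")
```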

tests/profiler/test_profile_sla_aiconfigurator.py (3)

40-76: LGTM!

Renaming the fixture from trtllm_args to llm_args appropriately reflects the expanded scope to support multiple LLM backends (TensorRT-LLM, vLLM, SGLang).
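A backend-agnostic fixture of this kind might look roughly like the following sketch (hypothetical helper and values; the real `llm_args` fixture in `test_profile_sla_aiconfigurator.py` differs in structure and parametrization):

```python
import pytest

def make_llm_args(backend: str) -> dict:
    # One argument bundle shape shared by all backends, so tests
    # need not special-case trtllm vs. vllm vs. sglang.
    return {"backend": backend, "model": "example-model"}

# A neutral name like llm_args lets the same fixture serve every backend.
@pytest.fixture(params=["trtllm", "vllm", "sglang"])
def llm_args(request):
    return make_llm_args(request.param)
```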


131-138: LGTM!

The test expansion to cover multiple backends (TensorRT-LLM, vLLM, SGLang) and dense models aligns well with the PR objectives. The parametrization provides good coverage across different backend versions and model sizes.


118-121: SGLang version confirmed; verify vLLM 0.11.0 support in aiconfigurator database.

SGLang version 0.5.1.post1 is confirmed to be supported by aiconfigurator, but vLLM 0.11.0 requires verification. While that version exists (it is referenced in the codebase, with known issues), there is no evidence confirming it is present in the aiconfigurator database. The test will validate this at runtime—if vLLM 0.11.0 is unsupported, the test will fail with a database lookup error. Confirm the version is available before merging.

benchmarks/pyproject.toml (1)

43-43: Verify profiler test compatibility with the new release/0.4.0 branch that includes sglang support changes.

The reference to release/0.4.0 is a valid branch (not a tag as the original comment implied). However, moving from the pinned commit (5554d2e, 2025-11-06) to this branch introduces 8 new commits, including significant refactoring work on sglang support. A key commit (e9a5452, 2025-11-09) unifies sglang paths and explicitly updates support for sglang version 0.5.1.post1.

Ensure the profiler benchmarks tests work correctly with these changes, particularly the sglang integration in tests/profiler/test_profile_sla_aiconfigurator.py, which relies on aiconfigurator's sglang 0.5.1.post1 support.

Signed-off-by: Jason Zhou <[email protected]>
@jasonqinzhou (Contributor, Author) commented:

test
