Conversation

@yzh119 (Collaborator) commented Oct 20, 2025

📌 Description

Some invalid configurations are generated in the JIT warmup (mixed precision) function gen_prefill_attention_modules.
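
For context, the warmup helper enumerates backend/dtype combinations when pre-building JIT modules. A minimal sketch of how a mixed-precision pair can reach the fa3 path without a guard; the dtype lists and loop structure here are illustrative assumptions, not the actual helper internals:

```python
import itertools

import torch

# Illustrative dtype lists; the real helper may cover more combinations.
q_dtypes = [torch.float16, torch.float8_e4m3fn]
kv_dtypes = [torch.float16, torch.float8_e4m3fn]
backends = ["fa2", "fa3"]

for backend, q_dtype, kv_dtype in itertools.product(backends, q_dtypes, kv_dtypes):
    # Without a guard, a pair like ("fa3", float16, float8_e4m3fn) is emitted
    # even though the fa3 template only compiles uniform-precision kernels,
    # which is what made the JIT warmup fail on Hopper.
    print(backend, q_dtype, kv_dtype)
```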

🔍 Related Issues

🚀 Pull Request Checklist

Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.

✅ Pre-commit Checks

  • I have installed pre-commit by running pip install pre-commit (or used your preferred method).
  • I have installed the hooks with pre-commit install.
  • I have run the hooks manually with pre-commit run --all-files and fixed any reported issues.

If you are unsure about how to set up pre-commit, see the pre-commit documentation.

🧪 Tests

  • Tests have been added or updated as needed.
  • All tests are passing (unittest, etc.).

Reviewer Notes

Summary by CodeRabbit

  • Tests
    • Updated the JIT warmup test helper to skip mixed-precision configurations that the FA3 backend cannot compile, improving test robustness on Hopper.

@gemini-code-assist bot commented

Summary of Changes

Hello @yzh119, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request resolves an issue where unit tests were failing on the Hopper platform. The root cause was identified as invalid configurations being generated during JIT warmup, particularly when attempting to use the fa3 attention template with mixed precision. The change introduces a safeguard to ensure that fa3 is only used with uniform precision, thereby eliminating the source of the test failures and ensuring robust operation.

Highlights

  • Unit Test Fix: Resolved failed unit tests specifically observed on the Hopper platform.
  • Mixed Precision Handling: Implemented a check within the JIT warmup function gen_prefill_attention_modules to prevent the fa3 (Flash Attention v3) template from being generated with mixed-precision configurations, as it does not support them.

Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature | Command | Description
Code Review | /gemini review | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state.
Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help | /gemini help | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


coderabbitai bot commented Oct 20, 2025

Walkthrough

A targeted control-flow guard was added to gen_prefill_attention_modules in the test helpers to skip mixed-precision configurations for the FA3 backend path when query and key-value dtypes differ.

Changes

Cohort / File(s) Summary
FA3 mixed-precision guard
tests/test_helpers/jit_utils.py
Added a conditional check within the FA3 backend branch to skip mixed-precision configurations where q_dtype != kv_dtype, preventing certain config combinations during prefill attention module generation.
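
A minimal sketch of how such a guard sits in the enumeration; the function name, signature, and loop structure below are illustrative assumptions, not the actual contents of jit_utils.py:

```python
import itertools
from typing import Iterable, List, Tuple

import torch


def enumerate_prefill_configs(
    backends: Iterable[str],
    q_dtypes: Iterable[torch.dtype],
    kv_dtypes: Iterable[torch.dtype],
) -> List[Tuple[str, torch.dtype, torch.dtype]]:
    """Enumerate (backend, q_dtype, kv_dtype) triples, skipping combinations
    that the fa3 template cannot compile."""
    configs = []
    for backend, q_dtype, kv_dtype in itertools.product(
        backends, q_dtypes, kv_dtypes
    ):
        if backend == "fa3" and q_dtype != kv_dtype:
            continue  # fa3 template does not support mixed precision
        configs.append((backend, q_dtype, kv_dtype))
    return configs
```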

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

Poem

🐰 A guard for the FA3, so precise and lean,
Mixed precisions now gracefully skip the scene,
When queries and keys take diverging roads,
The test helper lightens its computational loads,
Efficiency hops forward, clean and serene! 🌿

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
Docstring Coverage (⚠️ Warning): Docstring coverage is 0.00%, which is below the required threshold of 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (2 passed)
Title Check (✅ Passed): The title "unittest: fix failed unittest on hopper" is partially related to the changeset: it refers to a real aspect of the change (fixing a test failure) but does not explicitly convey the main technical change, which is adding a control-flow guard to skip mixed-precision configurations in the FA3 path within gen_prefill_attention_modules. While the title indicates that a unittest issue on Hopper is being fixed, it doesn't explain what the underlying fix is, which could leave teammates scanning history without a full understanding of the primary technical change. However, the title is specific and clear about the context (Hopper GPU, unittest failure) and accurately reflects that this PR addresses a failing test.
Description Check (✅ Passed): The pull request description includes the required template structure, with a clear and complete Description section explaining that invalid configurations are generated in the JIT warmup (mixed precision) function gen_prefill_attention_modules. The full Pull Request Checklist template is present with all major sections intact. However, the Related Issues section is empty (it contains only the template comment), the Reviewer Notes section is not filled in, and all checklist items remain unchecked. Despite these gaps, the description is substantially complete: the critical information (the problem being addressed) is clearly communicated, and the missing sections are non-critical optional components.
✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment

📜 Recent review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9ee58ac and a6c4b62.

📒 Files selected for processing (1)
  • tests/test_helpers/jit_utils.py (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Deploy Docs
🔇 Additional comments (1)
tests/test_helpers/jit_utils.py (1)

163-164: The original review comment contains an incorrect verification request.

The suggested check to add FA3 guards to gen_decode_attention_modules and gen_persistent_batch_attention_modules is based on a false premise: neither function generates FA3 modules. These functions generate standard decode and persistent batch attention modules respectively, with no FA3 backend support checks. Only gen_prefill_attention_modules generates FA3 modules, and it already has the correct mixed-precision guard in place.

Likely an incorrect or invalid review comment.


Comment @coderabbitai help to get the list of available commands and usage tips.

@gemini-code-assist bot left a comment


Code Review

This pull request fixes a failing unit test on Hopper by preventing the JIT warmup function gen_prefill_attention_modules from generating configurations for mixed-precision attention with the fa3 backend, which does not support it. The change is correct for the test helper function. However, the review identifies a potential issue where the root cause in the main library code is not addressed. The function determine_attention_backend can still incorrectly select the fa3 backend for mixed-precision cases, which could lead to runtime errors when using the public APIs. It is strongly recommended to fix this at the source by updating is_fa3_backend_supported.

Comment on lines +163 to +164
if q_dtype != kv_dtype:
    continue  # fa3 template does not support mixed precision

Severity: high

This correctly prevents generating an invalid configuration for the fa3 backend in this test helper. However, this only addresses the symptom in the JIT warmup. The root cause appears to be in flashinfer.utils.determine_attention_backend, which can still select the fa3 backend for mixed-precision cases because is_fa3_backend_supported doesn't perform this check. This could lead to runtime errors in user-facing APIs like single_prefill_with_kv_cache. A more robust fix would be to add the mixed-precision check to is_fa3_backend_supported to prevent incorrect backend selection throughout the library.
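
A sketch of what the suggested root-cause fix could look like; the actual signature of is_fa3_backend_supported in flashinfer.utils may differ, and the parameters shown here are assumptions:

```python
import torch


def is_fa3_backend_supported(q_dtype: torch.dtype, kv_dtype: torch.dtype) -> bool:
    # Suggested addition: reject mixed precision up front so that
    # determine_attention_backend never selects fa3 when q_dtype != kv_dtype.
    if q_dtype != kv_dtype:
        return False
    # ... the library's existing capability checks (compute capability,
    # head dimension, etc.) would follow here.
    return True
```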

