
Conversation

@harvenstar
Contributor

Summary

  • Add training script for Qwen3-14B with batch size 300 configuration

Test plan

  • Verify script runs correctly on target cluster

@gemini-code-assist
Contributor

Summary of Changes

Hello @harvenstar, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a dedicated training script for the Qwen3-14B large language model, specifically designed for execution on B300 GPUs. The script encapsulates a full training configuration, from data handling and model checkpointing to performance tuning and distributed execution via Ray, addressing specific hardware considerations to ensure efficient model training.

Highlights

  • New Qwen3-14B Training Script: Introduces scripts/run-qwen3-14B-b300.sh, a new bash script for training the Qwen3-14B model.
  • B300 GPU Configuration: The script is specifically designed for training on B300 GPUs, incorporating necessary workarounds for hardware compatibility, such as disabling SGLang CUDA graphs and the TP memory imbalance check.
  • Batch Size Configuration: While the pull request description mentions a 'batch size 300 configuration', the script explicitly sets a --global-batch-size 128 and --rollout-batch-size 16.
  • Comprehensive Training Setup: The script defines a full suite of training parameters, including checkpointing, data rollout, evaluation, performance optimizations (e.g., --tensor-model-parallel-size 4), GRPO arguments, and Adam optimizer settings.
  • Ray-based Distributed Training: Utilizes Ray for orchestrating distributed training, including cluster setup and job submission with specific environment variables (a condensed sketch of these settings follows this list).
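
For orientation, here is a hypothetical condensed sketch of what scripts/run-qwen3-14B-b300.sh does, assembled only from the settings named in the highlights above. The environment-variable names for the two B300 workarounds are assumptions for illustration, not the script's actual spellings.

```bash
#!/bin/bash
set -euo pipefail

# B300 workarounds: disable SGLang CUDA graphs and skip the TP memory
# imbalance check (variable names assumed for illustration).
export SGLANG_DISABLE_CUDA_GRAPH=1
export DISABLE_TP_MEMORY_IMBALANCE_CHECK=1

# Bring up a local Ray head node, then submit the training job to it.
ray start --head --port=6379
ray job submit --address http://127.0.0.1:8265 \
  --working-dir /root/miles \
  -- python train.py \
    --global-batch-size 128 \
    --rollout-batch-size 16 \
    --tensor-model-parallel-size 4
```
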
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature               Command               Description
Code Review           /gemini review        Performs a code review for the current pull request in its current state.
Pull Request Summary  /gemini summary       Provides a summary of the current pull request in its current state.
Comment               @gemini-code-assist   Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help                  /gemini help          Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the root of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@gemini-code-assist bot left a comment


Code Review

This pull request introduces a training script for Qwen3-14B. A security audit revealed a potential command injection vulnerability from unquoted environment variable expansion and an insecure default configuration exposing the Ray dashboard without authentication. These critical security issues must be addressed to prevent unauthorized access and code execution. Additionally, the script's process cleanup logic needs safety enhancements, and a typo in an environment variable should be corrected.
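
For concreteness, the snippet below illustrates the two security fixes the review calls for. It is illustrative only; the script's actual ray start invocation is not shown in this thread, so these lines are an assumption.

```bash
# Unsafe: an unquoted expansion can word-split or inject extra
# arguments, and a dashboard bound to 0.0.0.0 is reachable from the
# network with no authentication.
ray start --head --node-ip-address=$MASTER_ADDR --dashboard-host=0.0.0.0

# Safer: quote the expansion and bind the dashboard to loopback.
ray start --head --node-ip-address="${MASTER_ADDR}" --dashboard-host=127.0.0.1
```
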

- Start from docker pull instead of assuming running inside container
- Remove pkill cleanup and NVLink detection (hardcode NCCL_NVLS_ENABLE=1)
- Fix PYTHONBUFFERED typo → PYTHONUNBUFFERED
- Quote MASTER_ADDR properly
- Inline MODEL_ARGS (container heredoc can't source host files)
- Data paths use /data mount point (DATA_DIR configurable)
- Use /.dockerenv detection for single-file host/container dual mode (sketched after this list)
- Add process cleanup (pkill sglang/ray/python) matching other scripts
- Source model args from scripts/models/qwen3-14B.sh instead of inlining
- Detect NVLink dynamically instead of hardcoding NCCL_NVLS_ENABLE
- Fix heredoc stdin issue by adding docker run -i flag
- Add --working-dir /root/miles for correct train.py resolution
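
A minimal sketch of the /.dockerenv dual-mode pattern and the docker run -i stdin fix described in the commits above; the image name and mount paths are placeholders, not the script's actual values.

```bash
#!/bin/bash
set -euo pipefail

if [ -f /.dockerenv ]; then
  # Inside the container: run the training entry point directly.
  exec python train.py "$@"
else
  # On the host: re-run this same file inside the container.
  # docker run -i keeps stdin open so the script body piped in below
  # is readable by the inner bash (the heredoc/stdin fix).
  exec docker run -i --gpus all \
    -v "${DATA_DIR:-/data}":/data \
    training-image:latest \
    bash -s -- "$@" < "$0"
fi
```
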
@harvenstar
Contributor Author

train-b300.log
