
support 32K model len on deepseek r1 W8A8 #728


Open · wants to merge 1 commit into base: main

Conversation

flying632

What this PR does / why we need it?

Optimizes NPU memory usage (#723).

With vLLM v0.8.4.rc2, DeepSeek R1 can only run with a model length of 16K; attempting to run with a model length of 32K results in an out-of-memory (OOM) error.
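For reference, a minimal sketch of the kind of offline-inference setup that hits the reported 32K OOM. The model path, parallelism, and memory settings are assumptions for illustration and are not taken from this PR:

```python
# Hypothetical reproduction sketch (assumed checkpoint path and settings, not from this PR).
# With vLLM v0.8.4.rc2 on Ascend NPU, raising max_model_len from 16K to 32K was
# reported to exhaust device memory for DeepSeek R1 W8A8.
from vllm import LLM, SamplingParams

llm = LLM(
    model="/path/to/DeepSeek-R1-W8A8",  # assumed local W8A8 checkpoint path
    max_model_len=32 * 1024,            # 32K context; 16K was the previous working limit
    tensor_parallel_size=8,             # assumed parallelism; adjust to your deployment
    gpu_memory_utilization=0.95,        # fraction of device memory vLLM may reserve
    trust_remote_code=True,             # DeepSeek models typically require this
)

outputs = llm.generate(
    ["Summarize the attention mechanism in one paragraph."],
    SamplingParams(max_tokens=128),
)
print(outputs[0].outputs[0].text)
```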

Does this PR introduce any user-facing change?

How was this patch tested?

@flying632 flying632 marked this pull request as ready for review April 29, 2025 14:44
@flying632 flying632 changed the title support 32K model len on deepseek r1 W8A8 [Performance] support 32K model len on deepseek r1 W8A8 Apr 29, 2025
@flying632 flying632 changed the title [Performance] support 32K model len on deepseek r1 W8A8 support 32K model len on deepseek r1 W8A8 Apr 29, 2025