
@quic-vargupt quic-vargupt commented Oct 14, 2025

No description provided.

abhishek-singh591 and others added 2 commits October 17, 2025 11:10
…vision to 0.22.0+cpu, and Python Requirement to >=3.9 (quic#542)

Update Transformers to 4.55.0, PyTorch to 2.7.0+cpu, Torchvision to 0.22.0+cpu, and the Python requirement to >=3.9.
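For reference, the bumped pins would look roughly like this in a requirements file (a sketch only; the actual file name, version specifiers, extras, and any extra index URL used in the repo may differ):

```
transformers==4.55.0
torch==2.7.0+cpu
torchvision==0.22.0+cpu
```

The `+cpu` local version tags assume the torch and torchvision wheels come from PyTorch's CPU-only package index rather than PyPI.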

Updated the modeling files and cache utils for transformers 4.55.0.

Updated models:

1. codegen
2. falcon
3. gemma
4. gemma2
5. gptj
6. gpt2
7. granite
8. granite_moe
9. grok1
10. llama
11. llama_swiftkv
12. mistral
13. mixtral_moe
14. mpt
15. phi
16. phi3
17. qwen2
18. starcoder2
19. gpt_bigcode
20. internvl
21. llava
22. llava_next
23. whisper
24. gemma3
25. llama4
26. mllama

---------
Update Qeff Documentation to indicate vLLM Support in Validated Models Page

Signed-off-by: Asmita Goswami <[email protected]>
Signed-off-by: Mamta Singh <[email protected]>
Co-authored-by: Mamta Singh <[email protected]>
Co-authored-by: Asmita Goswami <[email protected]>
Signed-off-by: Varun Gupta <[email protected]>

abukhoy commented Oct 17, 2025

Please rebase and resolve the conflicts with mainline.
