Update on the development branch #1765
kaiyux announced in Announcements
Hi,
The TensorRT-LLM team is pleased to announce that we are pushing an update to the development branch (and the Triton backend) on June 11, 2024.
This update includes:
- Updates to the Phi examples, see `examples/phi/README.md`.
- `max_batch_size` in the `trtllm-build` command is 256 by default now.
- `max_num_tokens` in the `trtllm-build` command is 8192 by default now.
- `api` in the `gptManagerBenchmark` command is `executor` by default now.
- Added a `bias` argument to the `LayerNorm` module to support non-bias layer normalization.
- Updates to the `LLM.generate()` API:
  - Replaced `SamplingConfig` with `SamplingParams`, which carries some sampling parameters, see `tensorrt_llm/hlapi/utils.py`.
  - Use `SamplingParams` instead of `SamplingConfig` in the `LLM.generate()` API, see `examples/high-level-api/README.md` and the sketch below this list.
- Changes to the `GptManager` API:
  - Moved `maxBeamWidth` into `TrtGptModelOptionalParams`.
  - Moved `schedulerConfig` into `TrtGptModelOptionalParams`.
- Fixed a `convert_hf_mpt_legacy` call failure when the function is called in other than global scope, thanks to the contribution from @bloodeagle40234 in #1534.
- Fixed `use_fp8_context_fmha` broken outputs (#1539).
- Added `--ipc=host` notes to the installation guide to prevent bus errors, see `docs/source/installation/build-from-source-linux.md` and `docs/source/installation/linux.md` (#1538).
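For reference, here is a minimal sketch of the `SamplingConfig` to `SamplingParams` migration in the `LLM.generate()` API. The import path, the `model` argument, the model path, and the specific parameter names shown are assumptions based on `tensorrt_llm/hlapi/utils.py` and `examples/high-level-api/README.md`; please check those files on the development branch for the exact signatures.

```python
# A minimal sketch of migrating LLM.generate() from SamplingConfig to
# SamplingParams. Import paths, the model path, and the parameter names
# below are assumptions; see tensorrt_llm/hlapi/utils.py and
# examples/high-level-api/README.md for the exact API.
from tensorrt_llm.hlapi import LLM, SamplingParams

llm = LLM(model="/path/to/model")  # hypothetical model path

# Before: llm.generate(prompts, sampling_config=SamplingConfig(...))
# After: build a SamplingParams object and pass it to generate().
params = SamplingParams(max_new_tokens=64, temperature=0.8, top_p=0.95)

for output in llm.generate(["Hello, my name is"], params):
    print(output)
```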
Thanks,
The TensorRT-LLM Engineering Team