Configure max_tokens for LLM output #51
Labels: easy difficulty (Small and limited scope to implement feature or fix bug)
Most of the LLM service endpoints provide an option for specifying a maximum number of output tokens. This makes the output more predictable and can help manage costs when generating lots of show notes.
OpenAI used to call this `max_tokens`, but that parameter has been deprecated and replaced with `max_completion_tokens`. Will need to research how this is specified for all the other LLM services.

Example command:
```bash
npm run as -- \
  --video "https://www.youtube.com/watch?v=MORMZXEaONk" \
  --chatgpt \
  --maxTokens 1000
```
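For the OpenAI case, this is mostly a matter of forwarding the value under the new parameter name. Below is a minimal sketch using the official `openai` Node SDK; the `callChatGPT` helper, its signature, and the model name are hypothetical placeholders for illustration, not the project's actual code. Only the `max_completion_tokens` request parameter is taken from the OpenAI API itself.

```ts
// Sketch: forwarding a user-supplied token limit to the OpenAI chat completions
// endpoint. callChatGPT and the maxTokens argument are hypothetical names.
import OpenAI from 'openai'

const openai = new OpenAI() // reads OPENAI_API_KEY from the environment

export async function callChatGPT(prompt: string, maxTokens?: number): Promise<string> {
  const response = await openai.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [{ role: 'user', content: prompt }],
    // Older code set max_tokens here; the current parameter name is
    // max_completion_tokens. Omit it entirely when no limit was requested.
    ...(maxTokens !== undefined && { max_completion_tokens: maxTokens }),
  })
  return response.choices[0]?.message?.content ?? ''
}
```

Other providers will need their own mapping from the `--maxTokens` flag, which is the part that still needs research.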