Conversation

vignesh14052002
Contributor

We were about to use OpenAI's o1 model for the FactualCorrectness metric, since it is good at reasoning, but hit a BadRequestError in Ragas even though the same model worked fine in LangChain and LlamaIndex. It turned out that Ragas always sets a temperature, which o1 models do not support, so this PR adds a way to bypass temperature.

Reproducible code

from langchain_openai import ChatOpenAI
from llama_index.llms.openai import OpenAI
from ragas.llms import LlamaIndexLLMWrapper, LangchainLLMWrapper
from langchain_core.prompt_values import StringPromptValue

api_key = "<your_api_key>"

langchain_llm = ChatOpenAI(model="o1", api_key=api_key)
llama_index_llm = OpenAI(model="o1", api_key=api_key)

# Both will work fine
print(langchain_llm.invoke("hi"))
print(llama_index_llm.complete("hi"))

prompt = StringPromptValue(text="hi")

# Both will raise BadRequestError (run inside an async context)
await LlamaIndexLLMWrapper(llama_index_llm).agenerate_text(prompt)
await LangchainLLMWrapper(langchain_llm).agenerate_text(prompt)

Error

BadRequestError: Error code: 400 - {'error': {'message': "Unsupported parameter: 'temperature' is not supported with this model.", 'type': 'invalid_request_error', 'param': 'temperature', 'code': 'unsupported_parameter'}}

After Fix

# Both will work as expected
await LlamaIndexLLMWrapper(llama_index_llm, bypass_temperature=True).agenerate_text(prompt)
await LangchainLLMWrapper(langchain_llm, bypass_temperature=True).agenerate_text(prompt)

For scalability, I am adding a flag rather than hardcoding model-name checks to strip the temperature, so other models with the same constraint can opt in without further code changes.
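The flag-based approach above can be sketched roughly as follows. This is an illustrative simplification, not the actual ragas implementation; the class and method names here are hypothetical:

```python
# Hypothetical sketch of a flag-based wrapper: `temperature` is forwarded
# only when bypass_temperature is False, so models that reject the
# parameter (e.g. o1) never receive it.
class LLMWrapperSketch:
    def __init__(self, llm, bypass_temperature: bool = False):
        self.llm = llm
        self.bypass_temperature = bypass_temperature

    def build_kwargs(self, temperature: float = 1e-8, **kwargs) -> dict:
        # Skip the parameter entirely when the caller opted out.
        if not self.bypass_temperature:
            kwargs["temperature"] = temperature
        return kwargs
```

Compared to a model-name blocklist, callers decide per instance whether temperature applies, which also covers non-OpenAI providers with the same restriction.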

@dosubot dosubot bot added the size:S This PR changes 10-29 lines, ignoring generated files. label Jul 31, 2025
@greptile-apps greptile-apps bot left a comment

Greptile Summary

This PR adds support for OpenAI's O1 series models by introducing a bypass_temperature parameter to both LangchainLLMWrapper and LlamaIndexLLMWrapper classes in ragas/src/ragas/llms/base.py. The O1 models (o1, o1-preview, o1-mini) are reasoning-focused models that don't support the temperature parameter in their API, which was causing BadRequestError when used with Ragas.

The implementation adds a boolean bypass_temperature flag to both wrapper classes' constructors. When set to True, the flag prevents temperature from being applied to the underlying LLM. For LangchainLLMWrapper, it conditionally skips setting the temperature on the LLM object. For LlamaIndexLLMWrapper, it removes the temperature key from the kwargs dictionary before passing them to the completion method.

This change integrates well with Ragas' existing LLM wrapper architecture, maintaining backward compatibility while enabling support for newer OpenAI models that have fixed temperature behavior. The developer chose a flag-based approach over hardcoded model name detection for better scalability and flexibility with other LLM providers that might have similar constraints.

Confidence score: 4/5

  • This is a well-targeted fix that addresses a legitimate compatibility issue with minimal risk
  • The implementation is clean and follows the existing codebase patterns
  • The LlamaIndexLLMWrapper implementation needs attention as it modifies kwargs in-place which could affect other parts of the code
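The in-place mutation concern in the last bullet can be illustrated with a small, hypothetical sketch (the function names are mine, not from the PR): popping `temperature` from the kwargs dict the caller passed in changes a dict the caller may still be using, while copying first avoids the side effect.

```python
# Mutating variant: removes the key from the caller's own dict.
def strip_temperature_inplace(kwargs: dict) -> dict:
    kwargs.pop("temperature", None)
    return kwargs

# Non-mutating variant: shallow-copies first, so the caller's dict
# is left untouched.
def strip_temperature_copy(kwargs: dict) -> dict:
    cleaned = dict(kwargs)
    cleaned.pop("temperature", None)
    return cleaned
```

If the same kwargs dict is reused across calls (or across wrappers), the copying variant is the safer default.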

1 file reviewed, no comments

