
Conversation

@qiuosier (Member) commented Sep 19, 2025

OCI model deployments now support the OpenAI API spec.

This PR removes the legacy implementation and implements the new clients by inheriting from ChatOpenAI and OpenAI, so users can use the parameters and methods supported by OpenAI.

Example:

from langchain_oci import ChatOCIModelDeployment
from langchain_core.messages import HumanMessage


llm = ChatOCIModelDeployment(
    model="odsc-llm",
    endpoint="https://modeldeployment.<region>.oci.customer-oci.com/<ocid>/predictWithResponseStream",
    max_tokens=256,
)

messages = [HumanMessage(content="Who was the first president of the United States?")]
response = llm.stream(messages)

# Print the text of the response
for chunk in response:
    print(chunk.content, end="", flush=True)
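
Since the client inherits from ChatOpenAI, the non-streaming invoke path and other OpenAI-style parameters should work the same way. A minimal sketch (the /predict endpoint path and the parameter values here are assumptions for illustration, not part of this PR's example):

from langchain_oci import ChatOCIModelDeployment
from langchain_core.messages import HumanMessage

llm = ChatOCIModelDeployment(
    model="odsc-llm",
    # Assumed non-streaming endpoint path, by analogy with predictWithResponseStream.
    endpoint="https://modeldeployment.<region>.oci.customer-oci.com/<ocid>/predict",
    temperature=0.2,  # OpenAI-style sampling parameter inherited from ChatOpenAI
    max_tokens=256,
)

# Synchronous, non-streaming call; returns a single message.
response = llm.invoke([HumanMessage(content="Who was the first president of the United States?")])
print(response.content)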

The implementation requires the langchain-openai library; however, this library is not required for users of the OCI Generative AI clients.

oracle-contributor-agreement bot added the OCA Verified label (All contributors have signed the Oracle Contributor Agreement.) Sep 19, 2025
@qiuosier marked this pull request as draft September 19, 2025 17:57
# For langchain_openai, show the message with pip install command.
if ex.name == "langchain_openai":
    message = (
        "No module named langchain_openai. "
A Member commented:
Maybe we can give a more user-friendly message, explaining that we rely on langchain_openai and delegate inferencing to that library?
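
A minimal sketch of what such a guard could look like, assuming it wraps the import as in the snippet above (the message wording is illustrative, not the merged text):

try:
    from langchain_openai import ChatOpenAI
except ImportError as ex:
    if ex.name == "langchain_openai":
        # Hypothetical wording: explain the delegation and how to fix it.
        raise ImportError(
            "The OCI model deployment clients rely on the langchain_openai "
            "package and delegate inferencing to it. "
            "Install it with `pip install langchain-openai`."
        ) from ex
    raise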
