
feat(gemini): add specific multi-model selection and fallback support (issue #586) #591

Open
bedead wants to merge 10 commits into emcie-co:develop from bedead:feat-specific-gemini-model-selection

Conversation


@bedead bedead commented Oct 10, 2025

Description

This PR (refs #586, #584) extends the GeminiService provider to support multiple model names with automatic fallback handling.

Key Updates

Added support for passing model_name as:

  • a single string (e.g., "gemini-2.0-flash-lite")
  • a list of model names (e.g., ["gemini-2.0-flash-lite", "gemini-2.5-flash"])
  • or None, for the default model fallback behavior.

Integrated FallbackSchematicGenerator to automatically switch to the next model in the list if one fails or is overloaded.

Kept backward compatibility: existing usages like p.NLPServices.gemini() with no model_name (or None) continue to work.

Example Usage

```python
import asyncio
import parlant.sdk as p

async def main():
    async with p.Server(
        nlp_service=p.NLPServices.gemini(
            # Models are tried in order; later entries act as fallbacks.
            model_name=["gemini-2.0-flash-lite", "gemini-2.5-flash"]
        )
    ) as server:
        agent = await server.create_agent(
            name="Otto Carmen",
            description="You work at a car dealership",
        )

asyncio.run(main())
```
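The fallback behavior described above can be sketched roughly as follows. Note that the class and method names below are hypothetical simplifications for illustration only, not Parlant's actual FallbackSchematicGenerator API: the real generator wraps schematic generation, while this sketch just shows the "try each model in order, move on when one fails" pattern.

```python
import asyncio

class ModelGenerator:
    """Stand-in for a single-model generator (hypothetical, for illustration)."""

    def __init__(self, model_name: str) -> None:
        self.model_name = model_name

    async def generate(self, prompt: str) -> str:
        # A real implementation would call the Gemini API here;
        # this stub just echoes the prompt for demonstration.
        return f"[{self.model_name}] {prompt}"

class FallbackGenerator:
    """Tries each generator in order, falling back when one raises."""

    def __init__(self, generators: list[ModelGenerator]) -> None:
        self._generators = generators

    async def generate(self, prompt: str) -> str:
        last_error: Exception | None = None
        for gen in self._generators:
            try:
                return await gen.generate(prompt)
            except Exception as exc:  # e.g. model overloaded or rate-limited
                last_error = exc
        raise RuntimeError("All models failed") from last_error

async def demo() -> str:
    fallback = FallbackGenerator(
        [ModelGenerator("gemini-2.0-flash-lite"), ModelGenerator("gemini-2.5-flash")]
    )
    return await fallback.generate("hello")

print(asyncio.run(demo()))  # → [gemini-2.0-flash-lite] hello
```

Since the first generator never raises in this stub, the second model is never consulted; with a real API client, an overloaded first model would transparently hand off to the next one in the list.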

Files Modified

parlant/adapters/nlp/gemini_service.py
parlant/sdk.py

Files Added

examples/main.py

Satyam Mishra added 2 commits October 10, 2025 13:57
@bedead bedead marked this pull request as ready for review October 10, 2025 08:33
@bedead bedead changed the title feat(gemini): add specific multi-model selection and fallback support feat(gemini): add specific multi-model selection and fallback support #586 Oct 10, 2025
@bedead bedead changed the title feat(gemini): add specific multi-model selection and fallback support #586 feat(gemini): add specific multi-model selection and fallback support issue #586 Oct 10, 2025
Comment threads (outdated): examples/main.py; src/parlant/adapters/nlp/gemini_service.py (4 threads); src/parlant/sdk.py (4 threads)
@bedead bedead requested a review from mc-dorzo October 10, 2025 16:23
@bedead (Author) commented Oct 10, 2025

Should I update the docs too?

@bedead (Author) commented Oct 10, 2025

Here are the screenshots, as asked. Thanks for the comments.

Test code: [screenshot]

ChatUI: [screenshot]

Terminal: [screenshot]

@mc-dorzo (Contributor) commented:

Great job @bedead !
Can you please add a doc describing usage at https://github.com/emcie-co/parlant/tree/develop/docs/adapters/nlp/gemini.md

@bedead bedead closed this Oct 15, 2025
@bedead bedead deleted the feat-specific-gemini-model-selection branch October 15, 2025 12:55
@bedead bedead restored the feat-specific-gemini-model-selection branch October 15, 2025 12:55
@bedead bedead reopened this Oct 15, 2025
Satyam Mishra added 2 commits October 15, 2025 18:26
@bedead (Author) commented Oct 15, 2025

@mc-dorzo I was also thinking about adding this feature to the other NLP adapters. Should I add them as well, with a separate PR for each?

@kichanyurd (Contributor) commented:

@bedead please also see my comment in the issue you opened.

Regarding other providers... that would be truly lovely :) And yes, separate PRs please!

Great work mate - we appreciate your initiative a lot!

@bedead (Author) commented Oct 15, 2025

@kichanyurd Great that you are well now (I know, sickness drains a lot). I also just viewed your comment in the previous issue; I will make the change.

@kichanyurd (Contributor) commented:

@bedead LGTM! Can you confirm this is well-tested and ready to go?

Comment thread src/parlant/sdk.py Outdated
Satyam Mishra added 2 commits October 18, 2025 10:38
@bedead (Author) commented Oct 18, 2025

@kichanyurd Screenshots: [screenshots]

It also works without passing the generative_model_name parameter: [screenshots]

@kichanyurd (Contributor) commented:

@bedead A realization came to me with a bit of a delay 😅

I think this pattern will be useful to virtually all NLP services.

So we need to ensure this is the right way we want to do it. It needs to be flexible enough so that we don't end up changing the pattern in a few weeks/months (consequently breaking the API for many users).

Please allow me a few more days to think it over! Appreciate your patience. 🙏

@bedead (Author) commented Oct 21, 2025

@kichanyurd I understand; it's important to think this through properly. I was also thinking that it would be more intuitive and reasonable to set models per agent rather than for the whole server.

Concept-wise, this would allow users to choose from multiple providers (which could be specified at the server level) and then select specific models per agent, since certain providers and models are better at certain tasks. For example, in my experience GPT models perform well at debugging and generating code templates, Gemini models are great at image manipulation and a bit of creative work, and Anthropic models are good at understanding and generating code.

The current implementation makes the provider, as well as the models, very static.

One approach used by most agentic frameworks is to initialise an LLM (with a provider and model name) and then pass it to a specific agent.
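The "initialise an LLM, then pass it to a specific agent" pattern mentioned above can be sketched as follows. All names here are hypothetical illustrations of the proposal, not Parlant's actual API.

```python
from dataclasses import dataclass

# Hypothetical types sketching per-agent model selection; these are
# illustrative names, not part of Parlant's real SDK.

@dataclass
class LLM:
    provider: str
    model_name: str

@dataclass
class Agent:
    name: str
    llm: LLM  # each agent carries its own configured model

# Different agents use models suited to their tasks, as described above.
coder = Agent(name="Coder", llm=LLM("openai", "gpt-4o"))
artist = Agent(name="Artist", llm=LLM("gemini", "gemini-2.5-flash"))

print(coder.llm.model_name)   # → gpt-4o
print(artist.llm.provider)    # → gemini
```

The point of the sketch is that the model binding moves from server construction time to agent construction time, so a single server can host agents backed by different providers.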

@kichanyurd (Contributor) commented:

@bedead, sorry for the late response here. We're totally swamped right now with work on v3.1 and trying to get SLMs under control to bring costs down as close to the floor as we can.

I like your idea of controlling the NLPService per agent. I've added it to our roadmap and will try to get it into v3.1 if I can.

I'll also leave this issue open for now until we can come up with a way to configure each NLPService in an elegant and uniform manner. I was thinking the Builder pattern might be a good direction to explore here.
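For context, a Builder-style configuration for an NLP service might look roughly like the sketch below. Every name here is hypothetical, intended only to illustrate the direction floated above, not to prescribe Parlant's eventual API.

```python
# Hypothetical Builder-pattern sketch for configuring an NLP service.
# Class and method names are illustrative assumptions, not Parlant's API.

class NLPServiceBuilder:
    def __init__(self, provider: str) -> None:
        self._provider = provider
        self._models: list[str] = []
        self._fallback = False

    def with_models(self, *models: str) -> "NLPServiceBuilder":
        self._models.extend(models)
        return self  # returning self enables fluent chaining

    def with_fallback(self) -> "NLPServiceBuilder":
        self._fallback = True
        return self

    def build(self) -> dict:
        # A real builder would return a configured service object;
        # a plain dict keeps this sketch self-contained.
        return {
            "provider": self._provider,
            "models": self._models,
            "fallback": self._fallback,
        }

config = (
    NLPServiceBuilder("gemini")
    .with_models("gemini-2.0-flash-lite", "gemini-2.5-flash")
    .with_fallback()
    .build()
)
print(config)
```

A builder like this would let each provider expose the same fluent surface while keeping provider-specific options out of a shared constructor signature, which addresses the "flexible enough not to break the API later" concern raised earlier in the thread.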

@bedead (Author) commented Nov 10, 2025

@kichanyurd No issues. If there are any specific areas that need help, you can ping me; I would love to contribute more to this project.


3 participants