feat(gemini): add specific multi-model selection and fallback support (issues #586, #591)
bedead wants to merge 10 commits into emcie-co:develop from
Conversation
Signed-off-by: Satyam Mishra <satyammishra9050@gmail.com>
Should I update the docs too?
Great job @bedead!
@mc-dorzo I was also thinking about adding this feature to the other NLP adapters. Should I add them as well, and create a separate PR for each?
@bedead please also see my comment in the issue you opened. Regarding other providers... that would be truly lovely :) And yes, separate PRs please! Great work mate - we appreciate your initiative a lot!
@kichanyurd Great that you are well now (I know, sickness drains a lot). Also, I just viewed your comment in the previous issue. I will make the change.
@bedead LGTM! Can you confirm this is well-tested and ready to go?
@kichanyurd SS:
@bedead A realization came to me with a bit of a delay 😅 I think this pattern will be useful to virtually all NLP services. So we need to ensure this is the right way we want to do it. It needs to be flexible enough so that we don't end up changing the pattern in a few weeks/months (consequently breaking the API for many users). Please allow me a few more days to think it over! Appreciate your patience. 🙏
@kichanyurd I understand. It's important to think about this properly. I was also thinking that it would be more intuitive and reasonable to set models per agent rather than for the whole server. Concept-wise, this would allow users to choose from multiple providers (which could be specified on the server) and then select specific models per agent, since certain providers and models are good at certain tasks. E.g. from my experience, GPT models perform well in debugging and generating code templates, whereas Gemini models are great with image manipulation and a bit of creative work, and Anthropic models are good at understanding code and generating new code. The current implementation makes the provider, as well as the models, very static. One approach used by most agentic frameworks is that an LLM can be initialised (with a provider and model name) and then passed to a specific agent.
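A rough sketch of that last idea in plain Python (all names here are hypothetical illustrations, not Parlant's actual API): the LLM client is constructed once with a provider and model name, then handed to whichever agent needs it, so each agent can use a different model:

```python
from dataclasses import dataclass


@dataclass
class LLMClient:
    """Hypothetical client bound to one provider/model pair."""
    provider: str  # e.g. "gemini", "openai", "anthropic"
    model: str     # e.g. "gemini-2.0-flash"

    def generate(self, prompt: str) -> str:
        # A real implementation would call the provider's SDK here;
        # this stub just echoes which model would handle the prompt.
        return f"[{self.provider}/{self.model}] {prompt}"


@dataclass
class Agent:
    name: str
    llm: LLMClient  # each agent carries its own model choice


# Different agents, different providers/models -- chosen per task.
coder = Agent("coder", LLMClient("openai", "gpt-4o"))
artist = Agent("artist", LLMClient("gemini", "gemini-2.0-flash"))

print(coder.llm.generate("write a template"))
```

The key point of the design is that model selection lives with the agent, not with the server as a whole.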
@bedead, sorry for the late response here. We're totally swamped right now with work on v3.1 and trying to get SLMs under control to bring costs down as close to the floor as we can. I like your idea of controlling I'll also leave this issue open for now until we can come up with an idea to configure each
@kichanyurd No issues. If there are any specific areas that need help, you can ping me; I would love to contribute more to this project.
Description
#586 #584 This PR extends the GeminiService provider to support multiple model names with automatic fallback handling.
Key Updates
Added support for passing model_name as:
Integrated FallbackSchematicGenerator to automatically switch to the next model in the list if one fails or is overloaded.
Kept backward compatibility: existing usages like p.NLPServices.gemini() or None continue to work.
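The fallback behaviour described above can be sketched roughly like this (illustrative names only; this is not the actual FallbackSchematicGenerator code): try each model in the configured order, and move on to the next one when a call fails or the model is overloaded.

```python
class ModelOverloaded(Exception):
    """Raised (in this sketch) when a model rejects the request."""


def generate_with_fallback(models, call):
    """Try each model in turn; return the first successful result."""
    last_error = None
    for model in models:
        try:
            return call(model)
        except ModelOverloaded as exc:
            last_error = exc  # remember the failure and try the next model
    raise RuntimeError("all models failed") from last_error


def fake_call(model):
    # Simulate the first model being overloaded so the fallback kicks in.
    if model == "gemini-1.5-pro":
        raise ModelOverloaded(model)
    return f"ok from {model}"


result = generate_with_fallback(["gemini-1.5-pro", "gemini-1.5-flash"], fake_call)
```

Here `result` comes from the second model because the first one raised; if every model in the list fails, the caller gets a single error carrying the last failure as its cause.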
Example Usage
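The example code did not survive extraction here. A plausible sketch based on the description above — note that only the zero-argument `p.NLPServices.gemini()` form is confirmed by this PR's text; the exact `model_name` signature shown below is an assumption:

```python
import parlant.sdk as p  # assumed import path

# Existing usage, unchanged (backward compatible):
nlp_default = p.NLPServices.gemini()

# Assumed: a single model name.
nlp_single = p.NLPServices.gemini(model_name="gemini-1.5-flash")

# Assumed: a list of models, tried in order via FallbackSchematicGenerator.
nlp_multi = p.NLPServices.gemini(
    model_name=["gemini-1.5-pro", "gemini-1.5-flash"],
)
```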
Files Modified
Files Added
examples/main.py