
Breaking Changes: Model Configuration and Environment Variables Update #453

miurla opened this issue Feb 24, 2025 · 8 comments

miurla commented Feb 24, 2025

Changes Overview

We have made two significant changes that require user action:

  1. Model Configuration Moved to JSON

    • All model configurations have been moved from environment variables to public/config/models.json
    • This includes:
      • Model IDs and names
      • Provider information
      • Tool calling configuration
      • Model status (enabled/disabled)
  2. Environment Variable Updates

    • Removed model-specific environment variables:
      • NEXT_PUBLIC_OLLAMA_MODEL
      • NEXT_PUBLIC_OLLAMA_TOOL_CALL_MODEL
      • NEXT_PUBLIC_AZURE_DEPLOYMENT_NAME
      • NEXT_PUBLIC_OPENAI_COMPATIBLE_MODEL
    • Renamed environment variable:
      • NEXT_PUBLIC_ENABLE_SAVE_CHAT_HISTORY → ENABLE_SAVE_CHAT_HISTORY

Required Actions

1. Update Model Configuration

Development Mode

Edit public/config/models.json directly with your model configurations:

{
  "models": [
    {
      "id": "model-id",
      "name": "Model Name",
      "provider": "Provider Name",
      "providerId": "provider-id",
      "enabled": true,
      "toolCallType": "native|manual",
      "toolCallModel": "tool-call-model-id" // only if toolCallType is "manual"
    }
  ]
}

Docker Mode

Using Prebuilt Image

When using the prebuilt image (ghcr.io/miurla/morphic:latest), create a models.json file alongside your .env.local and update your docker-compose.yml to mount it:

services:
  morphic:
    image: ghcr.io/miurla/morphic:latest
    env_file: .env.local
    volumes:
      - ./models.json:/app/public/config/models.json
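Since JSON does not allow comments, make sure the `//` annotation from the schema example above is not carried into the real file. A quick way to sanity-check the file before (re)starting the container is to run it through a JSON parser — a minimal sketch, assuming python3 is available (the model ids here are just examples):

```shell
# Write a minimal models.json following the schema above
# (model ids and names here are illustrative examples).
cat > models.json <<'EOF'
{
  "models": [
    {
      "id": "qwen2.5:latest",
      "name": "Qwen 2.5",
      "provider": "Ollama",
      "providerId": "ollama",
      "enabled": true,
      "toolCallType": "manual",
      "toolCallModel": "phi4"
    }
  ]
}
EOF

# json.tool exits non-zero on a parse error, so an invalid file
# (e.g. one containing // comments) fails loudly here
python3 -m json.tool models.json > /dev/null && echo "models.json OK"
```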
Building Your Own Image

When building your own Docker image from source:

  1. Clone the repository
  2. Edit public/config/models.json directly with your configurations
  3. Build and run the image using docker-compose

2. Update Environment Variables

  1. Remove the following variables from your .env.local:
    • NEXT_PUBLIC_OLLAMA_MODEL
    • NEXT_PUBLIC_OLLAMA_TOOL_CALL_MODEL
    • NEXT_PUBLIC_AZURE_DEPLOYMENT_NAME
    • NEXT_PUBLIC_OPENAI_COMPATIBLE_MODEL
  2. If you are using chat history storage, rename:
    • NEXT_PUBLIC_ENABLE_SAVE_CHAT_HISTORY to ENABLE_SAVE_CHAT_HISTORY
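Assuming your variables live in `.env.local` as standard `KEY=value` lines, both steps can be scripted. A sketch (demonstrated on a sample file — point the `sed` at your real `.env.local`; GNU sed syntax, on macOS use `sed -i ''`):

```shell
# Demo on a sample .env.local; run the sed against your real file instead.
printf '%s\n' \
  'NEXT_PUBLIC_OLLAMA_MODEL=qwen2.5:latest' \
  'NEXT_PUBLIC_AZURE_DEPLOYMENT_NAME=my-deployment' \
  'NEXT_PUBLIC_ENABLE_SAVE_CHAT_HISTORY=true' \
  'TAVILY_API_KEY=example-key' > .env.local

# Drop the deprecated model variables and rename the chat-history flag
sed -i \
  -e '/^NEXT_PUBLIC_OLLAMA_MODEL=/d' \
  -e '/^NEXT_PUBLIC_OLLAMA_TOOL_CALL_MODEL=/d' \
  -e '/^NEXT_PUBLIC_AZURE_DEPLOYMENT_NAME=/d' \
  -e '/^NEXT_PUBLIC_OPENAI_COMPATIBLE_MODEL=/d' \
  -e 's/^NEXT_PUBLIC_ENABLE_SAVE_CHAT_HISTORY=/ENABLE_SAVE_CHAT_HISTORY=/' \
  .env.local

cat .env.local
```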

Documentation

For detailed configuration instructions, please refer to:

Migration Guide

  1. For Development: Edit public/config/models.json directly with your model settings
    For Docker: Create models.json alongside .env.local with your model settings
  2. Update your .env.local by removing the deprecated variables
  3. If using chat history, rename the environment variable
  4. Restart your development server or Docker container

Please report any issues you encounter during the migration.


arsaboo commented Feb 24, 2025

I added the following in models.json:

    {
      "id": "qwen2.5:latest",
      "name": "Qwen 2.5",
      "provider": "Ollama",
      "providerId": "ollama",
      "enabled": true,
      "toolCallType": "manual",
      "toolCallModel": "phi4"
    },

But I still don't see it in the UI. deepseek-r1 is now working with Ollama.


arsaboo commented Feb 24, 2025

Also, this openrouter config did not work:

    {
      "id": "openai/o3-mini",
      "name": "O3 Mini",
      "provider": "OpenAI Compatible",
      "providerId": "openai-compatible",
      "enabled": true,
      "toolCallType": "native"
    }

I just see Undefined in the UI:

[screenshot]

I do have the following defined in my .env

OPENAI_COMPATIBLE_API_KEY=REDACTED
OPENAI_COMPATIBLE_API_BASE_URL="https://openrouter.ai/api/v1"


miurla commented Feb 24, 2025

@arsaboo

I've fixed the model configuration issue by:

  1. Improving model config loading in Docker environment
  2. Using Next.js middleware to correctly handle request URLs

Please check the description and try it out. Let me know if you have any issues.


arsaboo commented Feb 24, 2025

@miurla Just tried, and now it all works 🎉

[screenshot]


arsaboo commented Feb 24, 2025

I tried a few other providers and here's the error I am getting with Gemini 2.0 Flash:

Error in chat: Invalid arguments for tool search: Type validation failed: Value: {"max_results":5,"query":"DeepSeek R1","search_depth":"basic"}. Error message: [ { "code": "invalid_type", "expected": "array", "received": "undefined", "path": [ "include_domains" ], "message": "Required" }, { "code": "invalid_type", "expected": "array", "received": "undefined", "path": [ "exclude_domains" ], "message": "Required" } ]

The same query works fine with the other two Gemini models.


miurla commented Feb 24, 2025

> Invalid arguments for tool search

This error is related to the model's tool calling functionality. Gemini model's native tool calling is not yet stable. You can resolve this issue by setting the toolCallType to 'manual' in the model configuration.
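For reference, a Gemini entry with manual tool calling in public/config/models.json might look like the following (the id, name, provider, and providerId values here are illustrative assumptions, not verified against the repo):

```json
{
  "models": [
    {
      "id": "gemini-2.0-flash",
      "name": "Gemini 2.0 Flash",
      "provider": "Google Generative AI",
      "providerId": "google",
      "enabled": true,
      "toolCallType": "manual",
      "toolCallModel": "gemini-2.0-flash"
    }
  ]
}
```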

@warlock666

> Invalid arguments for tool search
>
> This error is related to the model's tool calling functionality. Gemini model's native tool calling is not yet stable. You can resolve this issue by setting the toolCallType to 'manual' in the model configuration.

Thanks for the fast response. I did try configuring it as manual, but I still get the same error. Gemini Flash for deep research is the best bang for the buck. Hopefully it works out.

@miurla
Copy link
Owner Author

miurla commented Mar 3, 2025

@warlock666 It looks like your error is still related to native tool calling. The cookie storing your model configuration might not have been updated properly.

Try this to reset the configuration:

  1. Select a different model first
  2. Then switch back to Gemini Flash

This should force the browser to update the stored configuration and resolve the error. The system might be using cached settings that still have toolCallType set to "native" instead of "manual".

[screenshot]
