Pre-loaded configurations #882

Open
sm18lr88 opened this issue Aug 5, 2024 · 0 comments · May be fixed by #919
Labels
upgrade New feature or request

Comments


sm18lr88 commented Aug 5, 2024

The setup for the locally hosted Docker version is still complicated (it used to be much simpler in earlier versions of Khoj), and the instructions are not very clear on how to load the various models, embedding engines, etc.

Consider offering pre-loaded LLM and embedding choices from a drop-down menu.

For example, a drop-down menu in the chat model settings where users can simply select the LLM provider and model, then enter their API key.
Likewise for embeddings: a drop-down list of the available models, rather than having to type the name in.

I keep trying this app from time to time, but the initial version is the only one I have ever managed to get working.

Thanks.

sm18lr88 added the upgrade (New feature or request) label Aug 5, 2024
sm18lr88 changed the title [IDEA] pre-loaded configurations → Pre-loaded configurations Aug 5, 2024
debanjum added a commit that referenced this issue Sep 19, 2024
Given how rapidly the LLM landscape is changing, providing a good default
set of options should help reduce decision fatigue when getting started

Improve initialization flow during first run
- Set Google, Anthropic chat models too
  Previously only Offline, OpenAI chat models could be set during init

- Add multiple chat models for each LLM provider
  Interactively set a comma-separated list of models for each provider

- Auto-add default chat models for each provider in non-interactive
  mode if the {OPENAI,GEMINI,ANTHROPIC}_API_KEY env var is set
  (see the sketch below)

- Do not ask for max_tokens or tokenizer for offline models during
  initialization. Use better defaults inferred in code instead

- Explicitly set the default chat model to use
  If unset, it implicitly defaults to the first chat model.
  Making it explicit reduces that confusion

Resolves #882
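
For context, here is a minimal Python sketch of the non-interactive behavior this commit describes: if a provider's API key environment variable is set, that provider's default chat models are registered without prompting, and the default chat model is chosen explicitly rather than falling back implicitly to whichever was registered first. The function name, provider keys, and model lists are illustrative assumptions, not Khoj's actual initialization code.

```python
import os

# Hypothetical per-provider defaults; the real model names and registration
# API live in Khoj's initialization code and may differ.
PROVIDER_DEFAULTS = {
    "openai": ("OPENAI_API_KEY", ["gpt-4o-mini", "gpt-4o"]),
    "google": ("GEMINI_API_KEY", ["gemini-1.5-flash", "gemini-1.5-pro"]),
    "anthropic": ("ANTHROPIC_API_KEY", ["claude-3-5-sonnet-20240620"]),
}


def auto_configure_chat_models():
    """Register default chat models for every provider whose API key env var
    is set, then choose the default chat model explicitly."""
    configured = []
    for provider, (env_var, models) in PROVIDER_DEFAULTS.items():
        if os.getenv(env_var):  # provider enabled via environment, no prompt needed
            configured.extend((provider, model) for model in models)
    # Make the default explicit instead of relying on insertion order.
    default = configured[0] if configured else None
    return configured, default


if __name__ == "__main__":
    models, default = auto_configure_chat_models()
    print("configured:", models)
    print("default:", default)
```

Running this with, e.g., only `OPENAI_API_KEY` set would register just the OpenAI defaults and select the first of them as the default chat model; with no keys set, nothing is configured and no prompts are shown.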
debanjum linked a pull request Sep 19, 2024 that will close this issue
debanjum added a commit that referenced this issue Sep 20, 2024
debanjum added a commit that referenced this issue Sep 20, 2024