diff --git a/integrations/llamafile.md b/integrations/llamafile.md
index f779e53b..fff09795 100644
--- a/integrations/llamafile.md
+++ b/integrations/llamafile.md
@@ -20,12 +20,14 @@ toc: true
 ### **Table of Contents**
 
 - [Overview](#overview)
-- [Download and run the model](#download-and-run-the-model)
-- [Usage](#usage)
+- [Download and run the model](#download-and-run-models)
+  - [Generative models](#generative-models)
+  - [Embedding models](#embedding-models)
+- [Usage](#usage-with-haystack)
 
 ## Overview
 
-[llamafile](https://github.com/Mozilla-Ocho/llamafile) is a project that aims to make open LLMs accessible to developers and users.
+[llamafile](https://github.com/Mozilla-Ocho/llamafile) is a project by Mozilla that aims to make open LLMs accessible to developers and users.
 
 To run LLMs locally, simply download a single-file executable ("llamafile") that contains both the model and the inference engine and runs locally on most computers.
 
@@ -151,4 +153,6 @@ result = rag_pipe.run({"text_embedder":{"text": query},
 
 print(result["generator"]["replies"][0])
 # According to the documents, the best food in the world is pizza.
-```
\ No newline at end of file
+```
+
+For a fun use case, explore this notebook: [Quizzes and Adventures with Character Codex and llamafile](https://github.com/deepset-ai/haystack-cookbook/blob/main/notebooks/charactercodex_llamafile.ipynb).
\ No newline at end of file
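
The second hunk touches the tail of the doc's "Usage with Haystack" pipeline example, whose full code is not part of this diff. For context, here is a minimal sketch of the pattern that section relies on: querying a locally running llamafile through Haystack's OpenAI-compatible generator. It assumes Haystack 2.x and a llamafile already started on the default port 8080; the exact llamafile file name, the `--server --nobrowser` flags, and the `LLaMA_CPP`/placeholder-key values are taken from llamafile's own examples, not from this diff.

```python
# Minimal sketch: query a local llamafile from Haystack 2.x.
# Assumes a llamafile was started beforehand, e.g.:
#   ./mistral-7b-instruct-v0.2.Q4_0.llamafile --server --nobrowser
# (example file name), which serves an OpenAI-compatible API on
# http://localhost:8080 by default.
from haystack.components.generators import OpenAIGenerator
from haystack.utils import Secret

generator = OpenAIGenerator(
    api_key=Secret.from_token("sk-no-key-required"),  # placeholder; llamafile ignores the key
    model="LLaMA_CPP",  # placeholder model name accepted by the llamafile server
    api_base_url="http://localhost:8080/v1",  # llamafile's default OpenAI-compatible endpoint
)

result = generator.run(prompt="In one sentence, what is a llamafile?")
print(result["replies"][0])
```

The same idea extends to the RAG pipeline shown in the hunk: the generator above is wired into a Haystack `Pipeline` alongside an embedder and retriever, which is why the diff's final snippet reads its answer from `result["generator"]["replies"][0]`.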