using llama3.1 as llm, no valid JSON? #45

Open
babuqi opened this issue Aug 14, 2024 · 7 comments

@babuqi commented Aug 14, 2024

{"type": "error", "data": "Community Report Extraction Error", "stack": "Traceback (most recent call last):\n File "/home/zippo/GraphRAG/ollama/repo/graphrag-local-ollama/graphrag/index/graph/extractors/community_reports/community_reports_extractor.py", line 58, in call\n await self._llm(\n File "/home/zippo/GraphRAG/ollama/repo/graphrag-local-ollama/graphrag/llm/openai/json_parsing_llm.py", line 34, in call\n result = await self._delegate(input, **kwargs)\n File "/home/zippo/GraphRAG/ollama/repo/graphrag-local-ollama/graphrag/llm/openai/openai_token_replacing_llm.py", line 37, in call\n return await self._delegate(input, **kwargs)\n File "/home/zippo/GraphRAG/ollama/repo/graphrag-local-ollama/graphrag/llm/openai/openai_history_tracking_llm.py", line 33, in call\n output = await self._delegate(input, **kwargs)\n File "/home/zippo/GraphRAG/ollama/repo/graphrag-local-ollama/graphrag/llm/base/caching_llm.py", line 104, in call\n result = await self._delegate(input, **kwargs)\n File "/home/zippo/GraphRAG/ollama/repo/graphrag-local-ollama/graphrag/llm/base/rate_limiting_llm.py", line 177, in call\n result, start = await execute_with_retry()\n File "/home/zippo/GraphRAG/ollama/repo/graphrag-local-ollama/graphrag/llm/base/rate_limiting_llm.py", line 159, in execute_with_retry\n async for attempt in retryer:\n File "/home/zippo/anaconda3/envs/GraphRAG/lib/python3.10/site-packages/tenacity/asyncio/init.py", line 166, in anext\n do = await self.iter(retry_state=self._retry_state)\n File "/home/zippo/anaconda3/envs/GraphRAG/lib/python3.10/site-packages/tenacity/asyncio/init.py", line 153, in iter\n result = await action(retry_state)\n File "/home/zippo/anaconda3/envs/GraphRAG/lib/python3.10/site-packages/tenacity/_utils.py", line 99, in inner\n return call(*args, **kwargs)\n File "/home/zippo/anaconda3/envs/GraphRAG/lib/python3.10/site-packages/tenacity/init.py", line 398, in \n self._add_action_func(lambda rs: rs.outcome.result())\n File "/home/zippo/anaconda3/envs/GraphRAG/lib/python3.10/concurrent/futures/_base.py", line 451, in result\n return self.__get_result()\n File "/home/zippo/anaconda3/envs/GraphRAG/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result\n raise self._exception\n File "/home/zippo/GraphRAG/ollama/repo/graphrag-local-ollama/graphrag/llm/base/rate_limiting_llm.py", line 165, in execute_with_retry\n return await do_attempt(), start\n File "/home/zippo/GraphRAG/ollama/repo/graphrag-local-ollama/graphrag/llm/base/rate_limiting_llm.py", line 147, in do_attempt\n return await self._delegate(input, **kwargs)\n File "/home/zippo/GraphRAG/ollama/repo/graphrag-local-ollama/graphrag/llm/base/base_llm.py", line 48, in call\n return await self._invoke_json(input, **kwargs)\n File "/home/zippo/GraphRAG/ollama/repo/graphrag-local-ollama/graphrag/llm/openai/openai_chat_llm.py", line 90, in _invoke_json\n raise RuntimeError(FAILED_TO_CREATE_JSON_ERROR)\nRuntimeError: Failed to generate valid JSON output\n", "source": "Failed to generate valid JSON output", "details": null}

@sepmein commented Aug 17, 2024

Same error; I failed at the last step, creating the community reports.

@adimarco

Likewise. Maybe this is because the 7B-parameter llama3.1 model I'm running locally just doesn't cut it? I plan to spin up a GPU in the cloud and test with a larger model.

@HRishabh95

Same error. Has anyone been able to solve it?

@jialanxin

I exported the Ollama Modelfile of llama3.1 and set the parameter "num_ctx" to 20480. Then the pipeline works.
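
For reference, a minimal sketch of that approach (the exported file name and the new model tag llama3.1-ctx20k are illustrative, not from this thread):

# dump the existing Modelfile, raise the context window, rebuild under a new name
ollama show --modelfile llama3.1 > Modelfile
echo "PARAMETER num_ctx 20480" >> Modelfile
ollama create llama3.1-ctx20k -f Modelfile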

@babuqi commented Sep 1, 2024

> I exported the Ollama Modelfile of llama3.1 and set the parameter "num_ctx" to 20480. Then the pipeline works.

How can I find this parameter?

@jialanxin

> > I exported the Ollama Modelfile of llama3.1 and set the parameter "num_ctx" to 20480. Then the pipeline works.
>
> How can I find this parameter?

See Ollama's docs:
https://github.com/ollama/ollama/blob/main/docs/modelfile.md

@babuqi commented Sep 5, 2024

> > > I exported the Ollama Modelfile of llama3.1 and set the parameter "num_ctx" to 20480. Then the pipeline works.
> >
> > How can I find this parameter?
>
> See Ollama's docs: https://github.com/ollama/ollama/blob/main/docs/modelfile.md

Thank you for your help! I created a Modelfile:

FROM llama3.1
PARAMETER num_ctx 20480

then created a new model from this Modelfile and used that model; the issue is solved.
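
For completeness, the build-and-use step would look roughly like this; the new model name and the settings.yaml snippet are assumptions about a typical graphrag-local-ollama setup, not something confirmed in this thread:

# build the new model from the Modelfile above
ollama create llama3.1-ctx20k -f ./Modelfile

# settings.yaml (assumed key layout): point the llm section at the new model
# llm:
#   model: llama3.1-ctx20k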
