Using llama3.1 as LLM, no valid JSON? #45
Comments
Same error here. I failed at the last step, create community.
Likewise. Maybe this is because the 7B-parameter llama3.1 model I'm running locally just doesn't cut it? I plan to spin up a GPU in the cloud and test with a larger model.
Same error. Anyone able to solve it?
I exported the Ollama Modelfile of llama3.1 and set the parameter "num_ctx" to 20480. Then the pipeline works.
How can I find this parameter?
See Ollama's docs.
Thank you for your help! Creating a new model from that Modelfile and using it solved the issue.
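The workaround above can be sketched as a few shell commands. This is a minimal sketch, not the repo's documented procedure: the model tag `llama3.1-ctx20k` is a made-up name, and 20480 is the context size the commenter reported working. It assumes the standard Ollama CLI (`ollama show --modelfile`, `ollama create`).

```shell
# Export the existing model's Modelfile
ollama show llama3.1 --modelfile > Modelfile

# Raise the context window so long community-report prompts aren't truncated
# (truncated prompts are a plausible cause of the invalid-JSON output)
echo "PARAMETER num_ctx 20480" >> Modelfile

# Build a new model from the edited Modelfile (tag name is arbitrary)
ollama create llama3.1-ctx20k -f Modelfile
```

Then point the GraphRAG config at the new model tag (e.g. the `model:` field of the llm section in `settings.yaml`) and re-run the indexing pipeline.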
{"type": "error", "data": "Community Report Extraction Error", "source": "Failed to generate valid JSON output", "details": null}

Traceback (most recent call last):
  File "/home/zippo/GraphRAG/ollama/repo/graphrag-local-ollama/graphrag/index/graph/extractors/community_reports/community_reports_extractor.py", line 58, in __call__
    await self._llm(
  File "/home/zippo/GraphRAG/ollama/repo/graphrag-local-ollama/graphrag/llm/openai/json_parsing_llm.py", line 34, in __call__
    result = await self._delegate(input, **kwargs)
  File "/home/zippo/GraphRAG/ollama/repo/graphrag-local-ollama/graphrag/llm/openai/openai_token_replacing_llm.py", line 37, in __call__
    return await self._delegate(input, **kwargs)
  File "/home/zippo/GraphRAG/ollama/repo/graphrag-local-ollama/graphrag/llm/openai/openai_history_tracking_llm.py", line 33, in __call__
    output = await self._delegate(input, **kwargs)
  File "/home/zippo/GraphRAG/ollama/repo/graphrag-local-ollama/graphrag/llm/base/caching_llm.py", line 104, in __call__
    result = await self._delegate(input, **kwargs)
  File "/home/zippo/GraphRAG/ollama/repo/graphrag-local-ollama/graphrag/llm/base/rate_limiting_llm.py", line 177, in __call__
    result, start = await execute_with_retry()
  File "/home/zippo/GraphRAG/ollama/repo/graphrag-local-ollama/graphrag/llm/base/rate_limiting_llm.py", line 159, in execute_with_retry
    async for attempt in retryer:
  File "/home/zippo/anaconda3/envs/GraphRAG/lib/python3.10/site-packages/tenacity/asyncio/__init__.py", line 166, in __anext__
    do = await self.iter(retry_state=self._retry_state)
  File "/home/zippo/anaconda3/envs/GraphRAG/lib/python3.10/site-packages/tenacity/asyncio/__init__.py", line 153, in iter
    result = await action(retry_state)
  File "/home/zippo/anaconda3/envs/GraphRAG/lib/python3.10/site-packages/tenacity/_utils.py", line 99, in inner
    return call(*args, **kwargs)
  File "/home/zippo/anaconda3/envs/GraphRAG/lib/python3.10/site-packages/tenacity/__init__.py", line 398, in <lambda>
    self._add_action_func(lambda rs: rs.outcome.result())
  File "/home/zippo/anaconda3/envs/GraphRAG/lib/python3.10/concurrent/futures/_base.py", line 451, in result
    return self.__get_result()
  File "/home/zippo/anaconda3/envs/GraphRAG/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
    raise self._exception
  File "/home/zippo/GraphRAG/ollama/repo/graphrag-local-ollama/graphrag/llm/base/rate_limiting_llm.py", line 165, in execute_with_retry
    return await do_attempt(), start
  File "/home/zippo/GraphRAG/ollama/repo/graphrag-local-ollama/graphrag/llm/base/rate_limiting_llm.py", line 147, in do_attempt
    return await self._delegate(input, **kwargs)
  File "/home/zippo/GraphRAG/ollama/repo/graphrag-local-ollama/graphrag/llm/base/base_llm.py", line 48, in __call__
    return await self._invoke_json(input, **kwargs)
  File "/home/zippo/GraphRAG/ollama/repo/graphrag-local-ollama/graphrag/llm/openai/openai_chat_llm.py", line 90, in _invoke_json
    raise RuntimeError(FAILED_TO_CREATE_JSON_ERROR)
RuntimeError: Failed to generate valid JSON output