❌ Errors occurred during the pipeline run, see logs for more details. #78

I am encountering this error when trying to run the `python -m graphrag.index --root ./ragtest` command. It is unable to invoke the LLM, as shown below. Does `ollama serve` need to be run before this script is executed? When I try running it, it says `Error: listen tcp 127.0.0.1:11434: bind: address already in use`. I never had that issue before running this program, and whenever I kill ollama it comes right back with a new PID. I have also done a curl to mistral and run `ollama run mistral`, and both work, so the LLM is reachable.
{"type": "error", "data": "Error Invoking LLM", "stack": "Traceback (most recent call last):\n File \"/home/user/.local/lib/python3.10/site-packages/httpx/_transports/default.py\", line 72, in map_httpcore_exceptions\n yield\n File \"/home/user/.local/lib/python3.10/site-packages/httpx/_transports/default.py\", line 377, in handle_async_request\n resp = await self._pool.handle_async_request(req)\n File \"/home/user/.local/lib/python3.10/site-packages/httpcore/_async/connection_pool.py\", line 216, in handle_async_request\n raise exc from None\n File \"/home/user/.local/lib/python3.10/site-packages/httpcore/_async/connection_pool.py\", line 196, in handle_async_request\n response = await connection.handle_async_request(\n File \"/home/user/.local/lib/python3.10/site-packages/httpcore/_async/connection.py\", line 101, in handle_async_request\n return await self._connection.handle_async_request(request)\n File \"/home/user/.local/lib/python3.10/site-packages/httpcore/_async/http11.py\", line 143, in handle_async_request\n raise exc\n File \"/home/user/.local/lib/python3.10/site-packages/httpcore/_async/http11.py\", line 113, in handle_async_request\n ) = await self._receive_response_headers(**kwargs)\n File \"/home/user/.local/lib/python3.10/site-packages/httpcore/_async/http11.py\", line 186, in _receive_response_headers\n event = await self._receive_event(timeout=timeout)\n File \"/home/user/.local/lib/python3.10/site-packages/httpcore/_async/http11.py\", line 224, in _receive_event\n data = await self._network_stream.read(\n File \"/home/user/.local/lib/python3.10/site-packages/httpcore/_backends/anyio.py\", line 32, in read\n with map_exceptions(exc_map):\n File \"/home/user/miniconda3/envs/graphrag-ollama-local/lib/python3.10/contextlib.py\", line 153, in __exit__\n self.gen.throw(typ, value, traceback)\n File \"/home/user/.local/lib/python3.10/site-packages/httpcore/_exceptions.py\", line 14, in map_exceptions\n raise to_exc(exc) from exc\nhttpcore.ReadTimeout\n\nThe above exception was the direct cause of the following exception:\n\nTraceback (most recent call last):\n File \"/home/user/.local/lib/python3.10/site-packages/openai/_base_client.py\", line 1571, in _request\n response = await self._client.send(\n File \"/home/user/.local/lib/python3.10/site-packages/httpx/_client.py\", line 1674, in send\n response = await self._send_handling_auth(\n File \"/home/user/.local/lib/python3.10/site-packages/httpx/_client.py\", line 1702, in _send_handling_auth\n response = await self._send_handling_redirects(\n File \"/home/user/.local/lib/python3.10/site-packages/httpx/_client.py\", line 1739, in _send_handling_redirects\n response = await self._send_single_request(request)\n File \"/home/user/.local/lib/python3.10/site-packages/httpx/_client.py\", line 1776, in _send_single_request\n response = await transport.handle_async_request(request)\n File \"/home/user/.local/lib/python3.10/site-packages/httpx/_transports/default.py\", line 376, in handle_async_request\n with map_httpcore_exceptions():\n File \"/home/user/miniconda3/envs/graphrag-ollama-local/lib/python3.10/contextlib.py\", line 153, in __exit__\n self.gen.throw(typ, value, traceback)\n File \"/home/user/.local/lib/python3.10/site-packages/httpx/_transports/default.py\", line 89, in map_httpcore_exceptions\n raise mapped_exc(message) from exc\nhttpx.ReadTimeout\n\nThe above exception was the direct cause of the following exception:\n\nTraceback (most recent call last):\n File 
\"/home/user/git/local-graphrag-demo/graphrag-local-ollama/graphrag/llm/base/base_llm.py\", line 53, in _invoke\n output = await self._execute_llm(input, **kwargs)\n File \"/home/user/git/local-graphrag-demo/graphrag-local-ollama/graphrag/llm/openai/openai_chat_llm.py\", line 55, in _execute_llm\n completion = await self.client.chat.completions.create(\n File \"/home/user/.local/lib/python3.10/site-packages/openai/resources/chat/completions.py\", line 1633, in create\n return await self._post(\n File \"/home/user/.local/lib/python3.10/site-packages/openai/_base_client.py\", line 1838, in post\n return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)\n File \"/home/user/.local/lib/python3.10/site-packages/openai/_base_client.py\", line 1532, in request\n return await self._request(\n File \"/home/user/.local/lib/python3.10/site-packages/openai/_base_client.py\", line 1590, in _request\n raise APITimeoutError(request=request) from err\nopenai.APITimeoutError: Request timed out.\n", "source": "Request timed out."

Comments

This is my settings.yaml configuration:

me too

1 similar comment

me too

Don't run any ollama model manually; just make sure ollama is running in the background. Give it a try, it may help. (A quick way to verify this is sketched at the end of the thread.)

I am getting the same error. Did you manage to resolve it afterwards? @jrodgers2000
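
On the advice above about keeping ollama running in the background: on Linux, the standard installer typically registers Ollama as a systemd service that restarts automatically, which would also explain the process reappearing with a new PID after being killed. Assuming such an install with the default service name, it can be checked and controlled like this:

```sh
# Confirm Ollama is running as a managed background service.
systemctl status ollama

# Restart it cleanly instead of killing the process.
sudo systemctl restart ollama

# Watch the server logs while the indexing pipeline runs.
journalctl -u ollama -f
```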