❌ Errors occurred during the pipeline run, see logs for more details. #78

Open
jrodgers2000 opened this issue Nov 6, 2024 · 5 comments

jrodgers2000 commented Nov 6, 2024

I am encountering this error when trying to run the python -m graphrag.index --root ./ragtest command. It is unable to invoke the LLM, as shown below. Does ollama serve need to be run before this script is executed? When I try running it, it says Error: listen tcp 127.0.0.1:11434: bind: address already in use. I never had that issue before running this program, and whenever I kill ollama it comes right back with a new PID. I have also done a curl to mistral and run ollama run mistral, and both work, so the LLM is reachable.

This is the first error message from the log when trying to run it.

{"type": "error", "data": "Error Invoking LLM", "stack": "Traceback (most recent call last):\n File \"/home/user/.local/lib/python3.10/site-packages/httpx/_transports/default.py\", line 72, in map_httpcore_exceptions\n yield\n File \"/home/user/.local/lib/python3.10/site-packages/httpx/_transports/default.py\", line 377, in handle_async_request\n resp = await self._pool.handle_async_request(req)\n File \"/home/user/.local/lib/python3.10/site-packages/httpcore/_async/connection_pool.py\", line 216, in handle_async_request\n raise exc from None\n File \"/home/user/.local/lib/python3.10/site-packages/httpcore/_async/connection_pool.py\", line 196, in handle_async_request\n response = await connection.handle_async_request(\n File \"/home/user/.local/lib/python3.10/site-packages/httpcore/_async/connection.py\", line 101, in handle_async_request\n return await self._connection.handle_async_request(request)\n File \"/home/user/.local/lib/python3.10/site-packages/httpcore/_async/http11.py\", line 143, in handle_async_request\n raise exc\n File \"/home/user/.local/lib/python3.10/site-packages/httpcore/_async/http11.py\", line 113, in handle_async_request\n ) = await self._receive_response_headers(**kwargs)\n File \"/home/user/.local/lib/python3.10/site-packages/httpcore/_async/http11.py\", line 186, in _receive_response_headers\n event = await self._receive_event(timeout=timeout)\n File \"/home/user/.local/lib/python3.10/site-packages/httpcore/_async/http11.py\", line 224, in _receive_event\n data = await self._network_stream.read(\n File \"/home/user/.local/lib/python3.10/site-packages/httpcore/_backends/anyio.py\", line 32, in read\n with map_exceptions(exc_map):\n File \"/home/user/miniconda3/envs/graphrag-ollama-local/lib/python3.10/contextlib.py\", line 153, in __exit__\n self.gen.throw(typ, value, traceback)\n File \"/home/user/.local/lib/python3.10/site-packages/httpcore/_exceptions.py\", line 14, in map_exceptions\n raise to_exc(exc) from exc\nhttpcore.ReadTimeout\n\nThe above exception was the direct cause of the following exception:\n\nTraceback (most recent call last):\n File \"/home/user/.local/lib/python3.10/site-packages/openai/_base_client.py\", line 1571, in _request\n response = await self._client.send(\n File \"/home/user/.local/lib/python3.10/site-packages/httpx/_client.py\", line 1674, in send\n response = await self._send_handling_auth(\n File \"/home/user/.local/lib/python3.10/site-packages/httpx/_client.py\", line 1702, in _send_handling_auth\n response = await self._send_handling_redirects(\n File \"/home/user/.local/lib/python3.10/site-packages/httpx/_client.py\", line 1739, in _send_handling_redirects\n response = await self._send_single_request(request)\n File \"/home/user/.local/lib/python3.10/site-packages/httpx/_client.py\", line 1776, in _send_single_request\n response = await transport.handle_async_request(request)\n File \"/home/user/.local/lib/python3.10/site-packages/httpx/_transports/default.py\", line 376, in handle_async_request\n with map_httpcore_exceptions():\n File \"/home/user/miniconda3/envs/graphrag-ollama-local/lib/python3.10/contextlib.py\", line 153, in __exit__\n self.gen.throw(typ, value, traceback)\n File \"/home/user/.local/lib/python3.10/site-packages/httpx/_transports/default.py\", line 89, in map_httpcore_exceptions\n raise mapped_exc(message) from exc\nhttpx.ReadTimeout\n\nThe above exception was the direct cause of the following exception:\n\nTraceback (most recent call last):\n File 
\"/home/user/git/local-graphrag-demo/graphrag-local-ollama/graphrag/llm/base/base_llm.py\", line 53, in _invoke\n output = await self._execute_llm(input, **kwargs)\n File \"/home/user/git/local-graphrag-demo/graphrag-local-ollama/graphrag/llm/openai/openai_chat_llm.py\", line 55, in _execute_llm\n completion = await self.client.chat.completions.create(\n File \"/home/user/.local/lib/python3.10/site-packages/openai/resources/chat/completions.py\", line 1633, in create\n return await self._post(\n File \"/home/user/.local/lib/python3.10/site-packages/openai/_base_client.py\", line 1838, in post\n return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)\n File \"/home/user/.local/lib/python3.10/site-packages/openai/_base_client.py\", line 1532, in request\n return await self._request(\n File \"/home/user/.local/lib/python3.10/site-packages/openai/_base_client.py\", line 1590, in _request\n raise APITimeoutError(request=request) from err\nopenai.APITimeoutError: Request timed out.\n", "source": "Request timed out."


jrodgers2000 commented Nov 6, 2024

This is my settings.yaml configuration:

llm:
  api_key: ${GRAPHRAG_API_KEY}
  type: openai_chat # or azure_openai_chat
  model: mistral
  model_supports_json: true # recommended if this is available for your model.
  # max_tokens: 4000
  # request_timeout: 180.0
  api_base: http://localhost:11434/v1
  # api_version: 2024-02-15-preview
  # organization: <organization_id>
  # deployment_name: <azure_model_deployment_name>
  # tokens_per_minute: 150_000 # set a leaky bucket throttle
  # requests_per_minute: 10_000 # set a leaky bucket throttle
  # max_retries: 10
  # max_retry_wait: 10.0
  # sleep_on_rate_limit_recommendation: true # whether to sleep when azure suggests wait-times
  # concurrent_requests: 25 # the number of parallel inflight requests that may be made

parallelization:
  stagger: 0.3
  # num_threads: 50 # the number of threads to use for parallel processing

async_mode: threaded # or asyncio

embeddings:
  ## parallelization: override the global parallelization settings for embeddings
  async_mode: threaded # or asyncio
  llm:
    api_key: ${GRAPHRAG_API_KEY}
    type: openai_embedding # or azure_openai_embedding
    model: nomic-embed-text
    api_base: http://localhost:11434/v1
    # api_version: 2024-02-15-preview
    # organization: <organization_id>
    # deployment_name: <azure_model_deployment_name>
    # tokens_per_minute: 150_000 # set a leaky bucket throttle
    # requests_per_minute: 10_000 # set a leaky bucket throttle
    # max_retries: 10
    # max_retry_wait: 10.0
    # sleep_on_rate_limit_recommendation: true # whether to sleep when azure suggests wait-times
    # concurrent_requests: 25 # the number of parallel inflight requests that may be made
    # batch_size: 16 # the number of documents to send in a single request
    # batch_max_tokens: 8191 # the maximum number of tokens to send in a single request
    # target: required # or optional
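
Since the failure is openai.APITimeoutError rather than a connection error, one untested suggestion, using only keys already present in the template above, is to uncomment request_timeout with a larger value and cap concurrent_requests so a single local Mistral instance is not saturated by parallel indexing calls. The values below are illustrative, not verified fixes:

llm:
  # ...same as above...
  request_timeout: 1800.0    # give slow local generations time to finish
  concurrent_requests: 1     # avoid queueing many parallel requests on one local model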

@syhsu42185

Me too.

@hebutBryant

Me too.

@yutianqiufeng

Don't run any Ollama model manually; just make sure Ollama is running in the background. Give that a try, it may help.
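
One detail that may explain the "address already in use" error and the process reappearing with a new PID: when Ollama is installed with the official Linux install script, it usually runs as a systemd service, so a manual ollama serve competes with that service for port 11434. Assuming that setup, the background server can be checked with standard commands:

systemctl status ollama                  # is the background Ollama service running?
curl http://localhost:11434/api/tags     # list the models the running server can see
ollama list                              # models available locally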

@lijiabao2

I am getting the same error. Did you manage to resolve it in the end? @jrodgers2000
