
use client or legacy under jupyter lab will have error: RuntimeError: asyncio.run() cannot be called from a running event loop #2748

Open

loverubywithout opened this issue Feb 24, 2025 · 1 comment
Labels: bug (Something isn't working), respond

@loverubywithout commented:
Bug description

g4f version: 0.4.6.2 or 0.4.7.3
When I use the client API or the legacy API, I encounter an error or a warning.
If I use AsyncClient instead, there is no error and no warning.


Run environment: command line

from g4f.client import Client as OpenAI

message, response_format, model_id = df_prompts.iloc[0, 1:]
provider, model = df_models.iloc[model_id, :]

client = OpenAI(
    provider=dict_providers[provider]
)

r = client.chat.completions.create(
    model=model,
    messages=message,
    response_format=response_format
)

I get the answer, but with this warning:

E:\Anaconda\envs\ai\lib\site-packages\curl_cffi\aio.py:137: RuntimeWarning:
    Proactor event loop does not implement add_reader family of methods required.
    Registering an additional selector thread for add_reader support.
    To avoid this warning use:
        asyncio.set_event_loop_policy(WindowsSelectorEventLoopPolicy())

  self.loop = _get_selector(loop if loop is not None else asyncio.get_running_loop())
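The warning itself suggests a fix. A minimal sketch of applying it, on the assumption that it runs before any event loop is created (note: `WindowsSelectorEventLoopPolicy` only exists on Windows, so the platform guard is required for portable code):

```python
import asyncio
import sys

# On Windows, Python 3.8+ defaults to the Proactor event loop, which does
# not implement the add_reader/add_writer family of methods that
# curl_cffi's selector needs. Switching to the selector-based policy, as
# the warning recommends, avoids the extra selector thread and the warning.
if sys.platform == "win32":
    asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())
```

Whether this fully removes the warning with curl_cffi is not confirmed here; the maintainer's reply below suggests it may not.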

Run environment: jupyter lab

legacy API

# dict_providers[provider] is g4f.Provider.DeepInfraChat.DeepInfraChat

r = g4f.ChatCompletion.create(
    provider = dict_providers[provider],
    model = 'deepseek-v3',
    messages = message,
    response_format = response_format,
)
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
Cell In[15], line 4
      1 message, response_format, model_id = df_prompts.iloc[0, 1:]
      2 provider, model = df_models.iloc[model_id, :]
----> 4 r = g4f.ChatCompletion.create(
      5     provider = dict_providers[provider],
      6     model = 'deepseek-v3',
      7     messages = message,
      8     response_format = response_format,
      9 )

File [e:\develop\github\ai\gpt4free\g4f\__init__.py:53](file:///E:/develop/github/ai/gpt4free/g4f/__init__.py#line=52), in ChatCompletion.create(model, messages, provider, stream, image, image_name, ignore_working, ignore_stream, **kwargs)
     49     kwargs["ignore_stream"] = True
     51 result = provider.get_create_function()(model, messages, stream=stream, **kwargs)
---> 53 return result if stream else concat_chunks(result)

File [e:\develop\github\ai\gpt4free\g4f\providers\helper.py:99](file:///E:/develop/github/ai/gpt4free/g4f/providers/helper.py#line=98), in concat_chunks(chunks)
     98 def concat_chunks(chunks: Iterator) -> str:
---> 99     return "".join([
    100         str(chunk) for chunk in chunks
    101         if chunk and not isinstance(chunk, Exception)
    102     ])

File [e:\develop\github\ai\gpt4free\g4f\providers\helper.py:99](file:///E:/develop/github/ai/gpt4free/g4f/providers/helper.py#line=98), in <listcomp>(.0)
     98 def concat_chunks(chunks: Iterator) -> str:
---> 99     return "".join([
    100         str(chunk) for chunk in chunks
    101         if chunk and not isinstance(chunk, Exception)
    102     ])

File [e:\develop\github\ai\gpt4free\g4f\providers\asyncio.py:45](file:///E:/develop/github/ai/gpt4free/g4f/providers/asyncio.py#line=44), in to_sync_generator(generator, stream)
     43 def to_sync_generator(generator: AsyncIterator, stream: bool = True) -> Iterator:
     44     if not stream:
---> 45         yield from asyncio.run(async_generator_to_list(generator))
     46         return
     48     loop = get_running_loop(check_nested=False)

File [E:\Anaconda\envs\ai\lib\asyncio\runners.py:33](file:///E:/Anaconda/envs/ai/lib/asyncio/runners.py#line=32), in run(main, debug)
      9 """Execute the coroutine and return the result.
     10 
     11 This function runs the passed coroutine, taking care of
   (...)
     30     asyncio.run(main())
     31 """
     32 if events._get_running_loop() is not None:
---> 33     raise RuntimeError(
     34         "asyncio.run() cannot be called from a running event loop")
     36 if not coroutines.iscoroutine(main):
     37     raise ValueError("a coroutine was expected, got {!r}".format(main))

RuntimeError: asyncio.run() cannot be called from a running event loop
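The traceback shows the mechanism: Jupyter's kernel already runs an event loop, and `asyncio.run()` refuses to start a nested one. A self-contained sketch (generic, not g4f-specific) of detecting which situation you are in:

```python
import asyncio

async def main():
    # stand-in for any coroutine, e.g. an async chat completion call
    return "done"

try:
    asyncio.get_running_loop()
    loop_running = True   # inside Jupyter / an async framework
except RuntimeError:
    loop_running = False  # plain script or REPL

if loop_running:
    # asyncio.run(main()) would raise the RuntimeError seen above;
    # schedule the coroutine on the existing loop instead (or await it).
    task = asyncio.ensure_future(main())
else:
    # No loop running: asyncio.run() is safe here.
    result = asyncio.run(main())
```

This is why the reporter's AsyncClient path works in Jupyter: `await` reuses the already-running loop instead of calling `asyncio.run()`.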

loverubywithout added the bug (Something isn't working) label Feb 24, 2025
@hlohaus (Collaborator) commented Feb 24, 2025:

Please disregard this warning; currently, there is no workaround when using curl_cffi on Windows. @loverubywithout
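For the Jupyter RuntimeError (as opposed to the curl_cffi warning), one generic workaround when a synchronous API must be called from a notebook is to run it in a worker thread, where no event loop is running and `asyncio.run()` is legal. A stdlib-only sketch, not part of g4f:

```python
import asyncio
import threading

def run_sync_in_thread(coro_factory):
    """Run a coroutine in a fresh event loop on a worker thread.

    Inside Jupyter the main thread already has a running loop, so
    asyncio.run() raises RuntimeError there. A new thread has no loop,
    so asyncio.run() succeeds. (Generic pattern; coro_factory is a
    hypothetical zero-argument callable returning a coroutine.)
    """
    result = {}

    def worker():
        result["value"] = asyncio.run(coro_factory())

    t = threading.Thread(target=worker)
    t.start()
    t.join()
    return result["value"]

async def demo():
    await asyncio.sleep(0)
    return 42

print(run_sync_in_thread(demo))  # → 42
```

Using the async API with `await`, as the reporter already found, remains the simpler option in notebooks.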
