OSError: [Errno 24] Too many open files #1030
Comments
The open files limit is 1024; you should increase it!
I already tried that. It only increases the time until this error occurs.
Hard to say if this is on our side (there's nothing in the logs that mentions uvicorn, or am I missing something?). It would also be interesting to check whether this happens without SSL enabled. If it only happens after 24h it's going to be hard to reproduce, so please try to come up with a reproducible example that triggers it without taking that long, so we can investigate. On my side I tried to reproduce with an app running pretty much like yours.
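To make the request concrete, a minimal sketch of the kind of reproducer that would help is below. It assumes a plain FastAPI app served by uvicorn over HTTP (no SSL) and uses httpx as the client; the module name, endpoint, and timings are made up and not taken from the reporter's actual application.

```python
# repro.py -- hypothetical minimal reproducer, not the reporter's real app.
import asyncio

import httpx
import uvicorn
from fastapi import FastAPI

app = FastAPI()


@app.get("/ping")
async def ping():
    return {"ok": True}


async def hammer(url: str, seconds: int = 60) -> None:
    """Deliberately open a fresh client (and therefore fresh sockets) for every request."""
    loop = asyncio.get_running_loop()
    deadline = loop.time() + seconds
    while loop.time() < deadline:
        async with httpx.AsyncClient() as client:
            await client.get(url)


if __name__ == "__main__":
    # Run the server in one terminal:
    #   uvicorn repro:app --port 8000
    # Then drive it from another, while watching fd/socket counts:
    #   python -c "import asyncio, repro; asyncio.run(repro.hammer('http://127.0.0.1:8000/ping'))"
    uvicorn.run(app, host="127.0.0.1", port=8000)
```

If descriptor counts climb steadily during the loop and never return to baseline afterwards, that would be a much faster signal than waiting 24 hours.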
I see this problem too. Some of my experience:
From what I can tell, some resource is leaking, but I can't point to the specific one.
What would help is the application or the endpoint you're suspecting is leaking.
Will close this as stale; feel free to reopen if the issue persists.
Hm... It looks like we experienced precisely the same issue, but in our case it takes a few days until the service enters a loop of such errors without recovering.
Same here, running uvicorn 0.20.0 / FastAPI 0.89.1 / Python 3.9 inside a container on ECS. It ran fine for a few days with some occasional load and then started erroring out.
Pardon me, I forgot to attach our spec.
I sense, @AranVinkItility, that we have something in common: we are both running containers on AWS.
I am having a similar issue. I am just experimenting with […]
My dependencies (using Poetry):
The docker image causing trouble is […].
Edit: After some further experimentation with open file limits I have managed to make it work, which suggests there is an issue with Docker and the local system defining their limits differently. Setting Docker's […]; however, bumping my local user's […].
Same here, [tool.poetry.dependencies]:
Regarding DazEdword's and my similar issue, it looks like it is bound to the usage of […]
I am facing the same issue as well.
2024-02-14 15:07:08,273 - ERROR - socket.accept() out of system resource
What is the solution for this issue?
Same issue here. Any update about this?
Same error here. Any updates? Shall we re-open the issue?
I am also facing the same issue.
Hi,
socket.accept() out of system resource
Hi, I had the same problem.
2024-05-17 11:13:45,915 - ERROR - base_events - default_exception_handler - line:1758 - socket.accept() out of system resource
@SynclabIO
I faced this issue a couple of months back; I forget exactly how I solved it. AFAIR this happens because a new file descriptor is created with every request, so if the number of concurrent requests is high, this issue appears. Remember that if you have multiple workers, the CPU can only serve that many concurrent requests; all extra requests pile up on each worker. I am not sure, but it seems every piled-up request also holds a new file descriptor, and it won't get closed until the request is served. One hack I remember is to increase the number of file descriptors the system can open. Try that.
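For reference, the "bump the limit" hack described above can also be done from inside the process with the standard library resource module (Unix only). This is a sketch of the workaround, not a fix; if descriptors are actually leaking it only postpones the error.

```python
# Sketch: raise this process's open-files limit (Unix/Linux only).
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"current RLIMIT_NOFILE: soft={soft}, hard={hard}")

# Raise the soft limit up to the hard limit; an unprivileged process cannot
# exceed the hard limit. Roughly equivalent to `ulimit -n <hard>` in the shell.
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
```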
Thank you for your reply. |
We reworked the code across the whole system so that it does not open the files.
Are you sure, @TheMadrasTechie? Hit the APIs on a server in one tab and in another tab monitor the open network sockets; it helps you understand what is going on when continuous requests are coming to the server.
To monitor network sockets opening: […]
To check the total number of fds or sockets allowed: […] Try to increase that limit if it is low.
To count the number of open files/file descriptors that are using port 8000:
lsof -i :8000 | wc -l
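The socket-monitoring commands did not survive the formatting above; as a rough Python equivalent, here is a sketch using the third-party psutil package. The PID is a placeholder you would replace with your uvicorn worker's actual PID; it is not something from this thread.

```python
# Sketch: watch fd and socket counts for a running uvicorn worker.
import time

import psutil


def watch(pid: int, interval: float = 5.0) -> None:
    proc = psutil.Process(pid)
    while True:
        fds = proc.num_fds()                     # total open file descriptors (POSIX only)
        conns = proc.connections(kind="inet")    # open TCP/UDP sockets of this process
        established = sum(1 for c in conns if c.status == psutil.CONN_ESTABLISHED)
        print(f"fds={fds} sockets={len(conns)} established={established}")
        time.sleep(interval)


if __name__ == "__main__":
    watch(pid=12345)  # hypothetical PID; look it up with e.g. `pgrep -f uvicorn`
```

If the established-connection count keeps rising while load is roughly constant, connections are not being closed; if fds rise but sockets do not, the leak is more likely regular files.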
We have a similar issue in a somewhat large FastAPI codebase. We have an entire issue meta discussing it here: signebedi/libreforms-fastapi#226
We were dynamically loading our config from file as a dependency injection. I wasn't surprised that this was leading to a "Too many open files" error for our dotenv config file. We cached the config contents using watchdog, and now we are experiencing the same issue for one of our log files...
We are running Python 3.11, Uvicorn 0.29.0, FastAPI 0.111.0.
My sense is that increasing the ulimit to solve this issue is an exhaustible strategy. More likely than not, there is an underlying problem that is causing the number of file handles to balloon out of control, and increasing the file handle limit only postpones the error. I wouldn't recommend it as a production solution.
I wonder if this is connected to the issue resolved in #2317. If so, my intuition says to try upgrading to Uvicorn 0.30.1... Thoughts?
@signebedi I'm not quite following why you think #2317 (or related changes) is connected to this. That kind of signal handling is about process signals. The issue here of accumulating connections should correspond to TCP FIN/ACK packets ("signals").
Thanks for chiming in! Eh, I'm far from an expert on uvicorn and just trying to understand why this issue seems to come up so often. I'll happily admit that my notes were haphazard and not well thought through, and I appreciate you taking the time to respond despite that. And, to be sure, updating to 0.30.1 didn't solve the problem. I've been monitoring file handles for a production instance of our codebase and am coming to the conclusion that this probably isn't an issue with uvicorn, at least in our case. I think it's a problem with file handles that are opened when we use FastAPI's dependency injection system with application logic that was originally written to be instantiated once, not passed as a dependency injection, and as a result it opens an irresponsible number of file handles... Hope this is helpful for others who are facing the same issue.
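To illustrate the pattern being described (not the actual libreforms-fastapi code; the file names and helper functions below are hypothetical), here is a sketch of how a per-request dependency can quietly accumulate open file handles, and a cached variant that avoids it:

```python
# Hypothetical illustration of a per-request file-handle leak via dependencies.
import logging
from functools import lru_cache

from fastapi import Depends, FastAPI

app = FastAPI()


def get_logger_leaky() -> logging.Logger:
    """Leaky pattern: a new FileHandler is created and attached on every request.

    Handlers are never removed from the logger, so each one keeps its file
    descriptor open for the lifetime of the process.
    """
    logger = logging.getLogger("app")
    logger.addHandler(logging.FileHandler("app.log"))  # hypothetical path
    return logger


@lru_cache(maxsize=1)
def get_logger_cached() -> logging.Logger:
    """Safer pattern: build the handler once and reuse it for every request."""
    logger = logging.getLogger("app")
    logger.addHandler(logging.FileHandler("app.log"))
    return logger


@app.get("/items")
async def items(logger: logging.Logger = Depends(get_logger_cached)):
    logger.info("handled a request")
    return {"ok": True}
```

With the leaky variant, the open-fd count grows by one per request and never drops, which matches the symptom of ulimit increases only delaying the crash.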
Checklist
- The bug is reproducible against the latest release and/or master.

Describe the bug
The server will hit some limit (I don't know what it is) and stop accepting any new requests.
It looks like some connections are always kept alive.
To reproduce
Host a server and keep making requests to it for around 24 hours.
Expected behavior
The connection should be closed after 60 seconds.
Actual behavior
I get
asyncio: socket.accept() out of system resource
and
OSError: [Errno 24] Too many open files
error logs when trying to make a new request to the server. I already made sure everything, including the browser, is closed, but it looks like some connections are still alive.
Debugging material
The number of ESTABLISHED connections increases over time. Running lsof in the container shows something like 500u or more after the server has been up for 24 hours. When it hits the limit, the server can no longer accept any new requests.
docker logs shows the output below. I have to down and up the docker container again, and then the number goes back down to only around 10u.
Running ulimit -a in the container: I already tried increasing the open files (-n) limit from 1024, but it didn't resolve the problem; it only increased the time until the problem happens.
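While debugging this kind of growth, one low-effort self-check is to have the app report its own open-descriptor count. This is a sketch assuming a Linux container where /proc is available; the endpoint path is made up for illustration.

```python
# Sketch: a debug endpoint that reports this process's open file descriptors.
import os

from fastapi import FastAPI

app = FastAPI()


@app.get("/debug/fds")
async def open_fd_count():
    # Each entry under /proc/<pid>/fd corresponds to one open descriptor
    # (files, sockets, pipes, ...) held by this process.
    fd_dir = f"/proc/{os.getpid()}/fd"
    return {"open_fds": len(os.listdir(fd_dir))}
```

Polling this endpoint alongside lsof makes it easy to see whether the count returns to baseline after load stops or keeps ratcheting up.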
Environment
Run the server with the following command: […]
uvicorn --version: […]
python --version: […]
cat /etc/os-release in the container: […]
Important