
Log files cut off for large flights #2

Open
wturk opened this issue Jun 2, 2021 · 2 comments
Assignees: maxwelld90
Labels: bug (Something isn't working)

Comments

wturk commented Jun 2, 2021

When trying to download the log file for a large flight (more than 300 sessions), the log file is cut off before the end of the file. On the server side, the HTTP worker times out and exits ([CRITICAL] WORKER TIMEOUT). This is also the moment when the incomplete file starts downloading in the browser.

maxwelld90 (Member) commented Jun 2, 2021

Just to add a bit more context here: the log file in its entirety is 489MB, but the TIMEOUT occurs when roughly 338MB has been downloaded (it varies, but it is around this range). While this issue is being investigated, the guide below can be used to download the log in its entirety.

This involves issuing commands on your system (host) and within the MongoDB Docker container for LogUI. Commands issued on your host are prefixed with a dollar sign ($); commands issued inside the Docker container are prefixed with a hash (#).

1. Start a new shell in the MongoDB container for LogUI.

$ docker exec -it logui_mongo_1 sh

2. Run the following command when inside the MongoDB container shell.

# mongoexport --uri "mongodb://mongo:<password>@127.0.0.1:27017/logui-db?authsource=admin" -c <65c857ba-efb2-4f8e-9c4c-01fd35639e36> -o /data/loguidump.json --jsonArray

Some things to note here:

  • <password> should be replaced with the DATABASE_PASSWORD property from your setup's .env file.
  • <65c857ba-efb2-4f8e-9c4c-01fd35639e36> should be replaced with the flight ID of the flight you wish to download data for.
  • For both, remove the <> brackets! For example, if DATABASE_PASSWORD is abc123, use mongo:abc123.

Depending on the number of events, this may take a while! It creates a file in your container called loguidump.json.
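
As a worked example, using the sample password abc123 and the flight ID placeholder from above (substitute your own values), the full command would read:

# mongoexport --uri "mongodb://mongo:abc123@127.0.0.1:27017/logui-db?authsource=admin" -c 65c857ba-efb2-4f8e-9c4c-01fd35639e36 -o /data/loguidump.json --jsonArray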

3. Once complete, leave the container shell.

# exit

4. You can then copy the file that was created to your host's filesystem.

$ docker cp logui_mongo_1:/data/loguidump.json ~/loguidump.json

This copies loguidump.json into your home directory. Replace ~/loguidump.json with the path where you wish this file to be saved on your filesystem.

5. Once this is done, log back into the MongoDB container shell.

$ docker exec -it logui_mongo_1 sh

6. Delete the file in the container's filesystem, and exit.

# rm /data/loguidump.json
# exit

That should be it! That will give you all of the data for the given flight.
This is a temporary workaround until we can figure out a permanent solution to the issue.
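
For anyone who then needs to work with the dump: because of the --jsonArray flag, the entire file is one JSON array of event documents, so it can be read with a single json.load call. A minimal Python sketch (the field names below are invented for illustration; real LogUI events will differ):

```python
import json

# mongoexport --jsonArray writes the whole collection as one JSON array.
# We first write a tiny sample dump so the snippet is self-contained;
# the event field names here are made up for illustration only.
sample = [
    {"_id": {"$oid": "0" * 24}, "eventType": "click"},
    {"_id": {"$oid": "1" * 24}, "eventType": "scroll"},
]
with open("loguidump_sample.json", "w", encoding="utf-8") as f:
    json.dump(sample, f)

# Loading the dump back: json.load parses the entire array in one go.
with open("loguidump_sample.json", encoding="utf-8") as f:
    events = json.load(f)

print(f"Loaded {len(events)} events")  # Loaded 2 events
```

For a real 489MB dump you may prefer a streaming JSON parser to avoid holding everything in memory, but for most flights a plain json.load is fine.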

maxwelld90 added the bug label Jun 2, 2021
maxwelld90 self-assigned this Jun 2, 2021
maxwelld90 (Member) commented

One potential solution.

So the Gunicorn worker process times out, and when it does, the still-in-progress download is terminated. How far the download gets before that point will obviously depend on your connection speed, but the timeout itself seems to fire after a consistent period. I could raise the TIMEOUT value for worker processes, but I'd rather not: too long a timeout could leave threads just sitting there eating up resources.

A different worker class may be the solution:
https://stackoverflow.com/a/61388259

We could try using the gevent worker class so that long-running HTTP responses are served cooperatively rather than blocking a sync worker. But would that break async callbacks to the server if a lot of people are using it at once?
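
For reference, switching worker classes in Gunicorn is a small config change. A sketch only, with assumed values (per the Gunicorn docs, the timeout for async workers catches genuinely dead workers rather than killing slow responses, which is why this could help here):

```python
# gunicorn.conf.py -- hypothetical config for the LogUI web server.
# With the gevent worker class, a long-running download no longer trips
# the worker timeout: async workers heartbeat back to the arbiter
# instead of being killed for staying silent during a response.
worker_class = "gevent"
workers = 4
timeout = 30  # still catches workers that have actually died
```

Note that gevent must be installed in the server's environment (pip install gevent) for this worker class to be available.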
