This is entered at the request of vladem in #630.
Mountpoint for Amazon S3 version
1.10.0
AWS Region
us-east-1
Describe the running environment
Running Debian bookworm on a t2.nano ec2 instance in us-east-1. No docker / other container involved.
Mountpoint options
mount-s3 --debug --log-directory <directory> --read-only <bucket> <directory>
(Normally it runs without the '--debug' and '--log-directory' options, but in #630 I was asked to run with them. The error happens in either case.)
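One quick way to skim the --debug output without reading the whole log, assuming the log files land under the path given to --log-directory (which is what the flag is for; the placeholder below is that same directory):
$ ls -lh <directory>
$ grep -riE 'warn|error' <directory>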
What happened?
$ cp <file in s3 mount point> <directory>
cp: error reading '<filename>': Transport endpoint is not connected
cp: failed to close '<filename>': Transport endpoint is not connected
$ sudo dmesg -T | egrep -i 'Out of memory'
[Mon Oct 28 20:46:38 2024] Out of memory: Killed process 3470 (mount-s3) total-vm:925588kB, anon-rss:282932kB, file-rss:0kB, shmem-rss:0kB, UID:33 pgtables:752kB oom_score_adj:0
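For a rough picture of how quickly mount-s3's memory grows while reproducing the cp, a small sampling loop can be used (a sketch, assuming the process is named mount-s3, as in the dmesg line above):
$ while pgrep -x mount-s3 > /dev/null; do
>     ps -o pid,rss,vsz,comm -C mount-s3    # RSS and VSZ are reported in KiB
>     sleep 1
> done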
When normal processes are running (including mountpoint-s3), this is the state of RAM:
$ free
               total        used        free      shared  buff/cache   available
Mem:          465116      167108      209720        2040      102356      298008
Swap:              0           0           0
Relevant log output
I'll attach a file because it's large.
Hey! Thanks for opening this issue. I can see that you're using a t2.nano instance, which has 512MiB of RAM. This explains the OOM: Mountpoint currently enforces a minimum memory target of 512MiB, which on this instance allows it to consume the entire RAM.
We'll continue looking into it and will provide updates here when we have them.
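One possible stop-gap on such a small instance, since the free output above shows no swap configured (a sketch, not an official recommendation from the Mountpoint team; the 1G size is illustrative):
$ sudo fallocate -l 1G /swapfile    # use dd instead if the filesystem doesn't support fallocate for swap
$ sudo chmod 600 /swapfile
$ sudo mkswap /swapfile
$ sudo swapon /swapfile
$ free                              # the Swap line should now show non-zero totals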