Lack of application error handling at the fuse level #96
Comments
Hi, this is explicitly stated in the README, and it is implemented this way for performance reasons. S3 services are relatively slow (high-latency), so we have to rely on parallelism and asynchrony to achieve high performance when copying small files and similar workloads. Use fsync() if you want to guarantee that changes are flushed to the server.
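A minimal sketch of that advice in Go (the mount path and file name are placeholders, not taken from the issue): writing through the mount and calling Sync(), which issues fsync(2), so a failed flush to the server surfaces as an error in the application instead of being deferred.

```go
package main

import (
	"log"
	"os"
)

func main() {
	// Hypothetical path under a GeeseFS mount point.
	f, err := os.Create("/mnt/geesefs/example.txt")
	if err != nil {
		log.Fatal(err)
	}

	if _, err := f.Write([]byte("hello\n")); err != nil {
		log.Fatal(err)
	}

	// Sync() issues fsync(2) on the file descriptor; per the comment above,
	// this is how an application can guarantee the data has reached the
	// server and see an error if the upload failed.
	if err := f.Sync(); err != nil {
		log.Fatalf("flush to S3 failed: %v", err)
	}

	if err := f.Close(); err != nil {
		log.Fatal(err)
	}
}
```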
Hi there.
Further investigation showed that the problem is related to --multipart-copy-threshold.
Doing that, you just avoid calling the UploadPartCopy method of the S3 API. The underlying problem is between GeeseFS and OpenStack Swift. The documentation describes the condition under which the S3 server returns 412; in brief, it happens with a certain combination of two headers: x-amz-copy-source-if-none-match and x-amz-copy-source-if-modified-since. Colleagues of mine investigated the issue and made GeeseFS work with modern OpenStack Swift by stripping the X-Amz-Copy-Source-If-Match header (I'll double-check, since it is not one of the two listed above) on the way to the API.
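For context, a minimal aws-sdk-go sketch of an UploadPartCopy request carrying those conditional headers (bucket, key, ETag, and upload ID are placeholders; this is not GeeseFS's actual code). The workaround described above amounts to leaving CopySourceIfMatch unset.

```go
package main

import (
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := s3.New(sess)

	// UploadPartCopy with conditional copy-source headers. Per the report
	// above, some combinations of x-amz-copy-source-if-* headers make
	// OpenStack Swift's S3 layer answer 412 Precondition Failed.
	input := &s3.UploadPartCopyInput{
		Bucket:     aws.String("dst-bucket"), // hypothetical names
		Key:        aws.String("dst-key"),
		UploadId:   aws.String("upload-id"),
		PartNumber: aws.Int64(1),
		CopySource: aws.String("src-bucket/src-key"),

		// Conditional headers mentioned in the comment; the reported
		// workaround was to stop sending X-Amz-Copy-Source-If-Match,
		// i.e. leave that field unset.
		CopySourceIfMatch:         aws.String(`"etag-of-source"`),
		CopySourceIfModifiedSince: aws.Time(time.Now().Add(-time.Hour)),
	}

	if _, err := svc.UploadPartCopy(input); err != nil {
		// With an affected Swift backend, this is where the 412 shows up.
		log.Printf("UploadPartCopy failed: %v", err)
	}
}
```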
There is a lack of application error handling at the FUSE level. As a result, there is a mismatch between the data in the bucket and the data seen at the FUSE level.
How to reproduce.
In the first screenshot, the 'mv' command is executed and a listing is made afterwards.
A listing performed by a second, alternative client shows that the name change was not applied in the S3 bucket, and no error notification was received at the operating-system level.
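To make the reproduction concrete without the screenshots, a sketch of what the 'mv' step does from the application's point of view (paths are placeholders): rename(2) returns success locally even though, per the report, the change never reaches the bucket.

```go
package main

import (
	"log"
	"os"
)

func main() {
	// Hypothetical paths under a GeeseFS mount point.
	oldPath := "/mnt/geesefs/old-name.bin"
	newPath := "/mnt/geesefs/new-name.bin"

	// This is what `mv` does under the hood: a rename(2) syscall.
	// The complaint above is that this call reports success even when the
	// asynchronous server-side operation later fails, so the application
	// never learns that the bucket still holds the old name.
	if err := os.Rename(oldPath, newPath); err != nil {
		log.Fatalf("rename failed: %v", err)
	}
	log.Println("rename reported success at the FUSE level")
}
```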