http_client_sync example hangs on uploading ~2GB multipart data #2823
In which line does it hang? Are you certain about the server side; does it respond with a proper HTTP response after receiving the 2GB request? If it blocks on the following line, it means it is waiting for the server's response, and there might be an issue on that end:

// Receive the HTTP response
http::read(stream, buffer, res);
@ashtum
It never returns after this. At the start, netstat shows this:

[netstat output attached as an image in the original issue]

After a few seconds, when the server says 200 OK, netstat shows this:

[netstat output attached as an image in the original issue]

The hashes of the source file and the one downloaded from the server (after the 200 OK status) match. Also, if I analyse the network communication of the test exe through Procmon, it appears to be stuck in some infinite loop:

[Procmon capture attached as an image in the original issue]

This keeps on going until I force-exit the exe, and thereafter the netstat entries disappear.
I'm unable to replicate this issue on my end. Could you please provide the server-side code?
Hi, I don't think I can provide the server-side code, as it's already being used in production. To add to that, when I use Postman, a 3GB upload works fine with the same API and the same server. Also, could you provide the test server that you used when trying to replicate this? Thanks.
Hello,
There is a slight difference between the Beast server and our production server: the Beast server gives proper feedback on the socket and the client exits gracefully, whereas our server somehow does not give that feedback on the socket and the client gets stuck. Is there any specific limit on Windows which is causing this behaviour, and is it configurable by any chance?

beast_http_server_fast_mod.cpp
beast_client.cpp

Thanks.
The issue arises from a narrowing conversion occurring in a line of code in Asio, which converts the 64-bit buffer size to a narrower 32-bit type. The write function in Asio itself limits the buffer size sent to the underlying system call on each round, but that narrowing misbehaves for buffers of 2GB and larger. I'll address this during the time window for bug fixes designated for the upcoming Boost 1.85 release. By the way, the asynchronous API is functioning properly; you may consider utilizing it until this issue is resolved.
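As a hedged illustration of why sizes at or above 2GB misbehave once narrowed to a 32-bit signed integer (this is not Asio's actual code, just the arithmetic involved):

#include <cstddef>
#include <iostream>

int main()
{
    std::size_t len = 2ull * 1024 * 1024 * 1024; // 2 GB body, fits in 64 bits
    int narrowed = static_cast<int>(len);        // hypothetical 32-bit narrowing
    // On typical two's-complement platforms this prints:
    // 2147483648 -> -2147483648
    std::cout << len << " -> " << narrowed << '\n';
}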
Using the async API would not be possible at the moment, so we will wait for a proper fix for this. You would want to revisit the async API as well for the 1.85 release. Thanks.
Yes, indeed, all variations of HTTP write operations encounter the same issue, as they utilize the same underlying Asio write operation. I raised an issue on the Asio repository: chriskohlhoff/asio#1443. Let's see what the author's response will be; ideally, we expect these types of bugs to be resolved in Asio itself. Meanwhile, you might consider using the following code as a workaround for synchronous HTTP writes:

template<
    class SyncWriteStream,
    bool isRequest, class Body, class Fields>
std::size_t
write2(
    SyncWriteStream& stream,
    boost::beast::http::serializer<isRequest, Body, Fields>& sr,
    boost::system::error_code& ec)
{
    static_assert(boost::beast::is_sync_write_stream<SyncWriteStream>::value,
        "SyncWriteStream type requirements not met");

    // Emit header and body together rather than as separate steps
    sr.split(false);

    std::size_t bytes_transferred = 0;
    if(! sr.is_done())
    {
        // write_lambda writes each buffer sequence produced by
        // the serializer to the stream in full
        boost::beast::http::detail::write_lambda<SyncWriteStream> f{stream};
        do
        {
            // Produce the next batch of buffers and write them
            sr.next(ec, f);
            bytes_transferred += f.bytes_transferred;
            if(ec)
                return bytes_transferred;
            sr.consume(f.bytes_transferred);
        }
        while(! sr.is_done());
    }
    else
    {
        ec = {};
    }
    return bytes_transferred;
}

template<
    class SyncWriteStream,
    bool isRequest, class Body, class Fields>
std::size_t
write2(
    SyncWriteStream& stream,
    boost::beast::http::serializer<isRequest, Body, Fields>& sr)
{
    static_assert(boost::beast::is_sync_write_stream<SyncWriteStream>::value,
        "SyncWriteStream type requirements not met");
    boost::beast::error_code ec;
    auto const bytes_transferred = write2(stream, sr, ec);
    if(ec)
        BOOST_THROW_EXCEPTION(boost::beast::system_error{ec});
    return bytes_transferred;
}
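A hedged usage sketch of the workaround above; stream and req are hypothetical placeholders for a connected synchronous stream and a fully prepared request:

// Hypothetical usage: wrap the prepared request in a serializer
// and send it with write2 instead of http::write.
boost::beast::http::request_serializer<boost::beast::http::string_body> sr{ req };
write2(stream, sr); // throwing overload; reports errors via boost::beast::system_error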
Did this fix make it into the 1.85 release?
I would suggest using http::serializer::limit to set an upper bound on the size of the buffers the serializer returns in each round:

http::response_serializer<http::string_body> sr{ response };
sr.limit(1 << 24);
http::write(stream, sr);

I'm still waiting for a response from Asio's author on this issue. Preferably, this issue should be resolved in Asio itself, as it has platform-specific knowledge of the limitations (or it could provide an interface for querying these limitation values).
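For the upload direction in this issue, the same pattern would apply with a request serializer; a sketch, assuming req and stream are the prepared request and connected stream:

// Cap each batch of buffers at 16 MiB so no single system call
// gets a length anywhere near the 32-bit limit.
http::request_serializer<http::string_body> sr{ req };
sr.limit(1 << 24);
http::write(stream, sr); // loops internally until the serializer is done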
Original report:

Version of Beast
1.71, but it does not matter; I tried 1.81 with the same behaviour.
OS: Windows
When I try to upload 2GB of data using multipart form data, the test case hangs, although the data gets uploaded successfully as reported by the server.
Here is the reworked http_client_sync.cpp example, tweaked to reproduce this (attached in the original issue; a hedged sketch of its shape follows below).
Here is a sample req that gets generated; it works for file sizes of ~1.5GB but fails for 2GB+ data.
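For readers without the attachment, a minimal sketch of the kind of synchronous upload that triggers the hang. This is not the attached reproduction; the host, target, and boundary are hypothetical placeholders:

#include <boost/asio/ip/tcp.hpp>
#include <boost/beast/core.hpp>
#include <boost/beast/http.hpp>

namespace beast = boost::beast;
namespace http = beast::http;
namespace net = boost::asio;
using tcp = net::ip::tcp;

int main()
{
    net::io_context ioc;
    tcp::resolver resolver{ioc};
    beast::tcp_stream stream{ioc};
    stream.connect(resolver.resolve("example.test", "80"));

    // Build a multipart/form-data POST with a body of roughly 2 GB
    http::request<http::string_body> req{http::verb::post, "/upload", 11};
    req.set(http::field::host, "example.test");
    req.set(http::field::content_type,
        "multipart/form-data; boundary=----boundary");
    req.body() =
        "------boundary\r\n"
        "Content-Disposition: form-data; name=\"file\"; filename=\"big.bin\"\r\n"
        "Content-Type: application/octet-stream\r\n"
        "\r\n";
    req.body().append(2ull * 1024 * 1024 * 1024, 'x'); // ~2 GB payload
    req.body() += "\r\n------boundary--\r\n";
    req.prepare_payload(); // sets Content-Length

    http::write(stream, req); // reported to hang here for 2 GB+ bodies on Windows

    beast::flat_buffer buffer;
    http::response<http::string_body> res;
    http::read(stream, buffer, res); // never reached while the hang persists
}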
Any suggestions?