
Feature: Connection Timeout #1503

Open · reneme wants to merge 4 commits into master from feature/connection_timeout

Conversation

@reneme (Contributor) commented Sep 24, 2020

This introduces a "connection timeout" to distinguish between an "overall timeout" and a timeout that covers only the "network connection establishment". To keep the API backward compatible (and to adhere to the "principle of least astonishment"), the "connection timeout" defaults to whatever the "global timeout" is set to. Only if a user of the library explicitly sets the "connection timeout" is it actually honoured.

http_client_config

The http_client_config now has a new set_connection_timeout() method along with the necessary internals. Furthermore, it implements the "default to global timeout" behaviour mentioned above.
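The "default to global timeout" behaviour can be sketched roughly as follows. This is an illustrative model only, not the actual cpprestsdk internals; the member names and the "was it explicitly set" flag are assumptions made for the example:

```cpp
#include <chrono>

// Sketch of the fallback semantics described above: the connection timeout
// defaults to the global timeout unless it was explicitly set by the user.
// Names are illustrative, not the real http_client_config implementation.
class http_client_config_sketch
{
    std::chrono::microseconds m_timeout{std::chrono::seconds(30)};
    std::chrono::microseconds m_connection_timeout{0};
    bool m_connection_timeout_set = false;

public:
    void set_timeout(std::chrono::microseconds t) { m_timeout = t; }

    void set_connection_timeout(std::chrono::microseconds t)
    {
        m_connection_timeout = t;
        m_connection_timeout_set = true;
    }

    std::chrono::microseconds timeout() const { return m_timeout; }

    // Falls back to the global timeout when no connection timeout was set.
    std::chrono::microseconds connection_timeout() const
    {
        return m_connection_timeout_set ? m_connection_timeout : m_timeout;
    }
};
```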

WinHTTP-based client

The adaptation is simple, as WinHTTP provides WinHttpSetTimeouts, which allows individual timeouts for resolve, connect, send and receive. This change sets the "connection timeout" for resolve and connect and leaves the existing "global timeout" for send and receive.
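The mapping onto the four WinHttpSetTimeouts slots can be sketched in a portable way (WinHttpSetTimeouts itself takes resolve, connect, send and receive timeouts in milliseconds; the helper below only models which configured timeout lands in which slot and is not the actual client code):

```cpp
#include <chrono>

// Which of the two configured timeouts ends up in which WinHTTP slot.
struct winhttp_timeouts
{
    int resolve_ms, connect_ms, send_ms, receive_ms;
};

// Sketch of the split described above: the "connection timeout" covers
// resolve + connect, the "global timeout" covers send + receive.
// Equivalent in spirit to:
//   WinHttpSetTimeouts(hSession, conn, conn, glob, glob);
winhttp_timeouts map_timeouts(std::chrono::milliseconds connection_timeout,
                              std::chrono::milliseconds global_timeout)
{
    const int conn = static_cast<int>(connection_timeout.count());
    const int glob = static_cast<int>(global_timeout.count());
    return {conn, conn, glob, glob};
}
```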

ASIO-based client

The ASIO implementation is a bit more involved, as it handles timeouts with a custom wrapper class (timeout_timer) around a boost::asio::steady_timer. I extended the state machine of this class to distinguish between connecting and communicating states (previously there was a single state, simply called started). Also, I refactored the "resetting" of the steady_timer to encapsulate the choice between the "connection timeout" and the "global timeout" depending on the timeout_timer's state.

I removed the m_timer.start() calls and added calls to m_timer.start_connecting() and m_timer.start_communicating() in strategic places. Unfortunately, those needed to go to slightly different locations than before. I would appreciate a thorough review here. More details on those locations are given in the in-code review comments.
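The extended state machine can be sketched as follows. The state and method names follow the PR text; the internals (re-arming logic, deadline representation) are illustrative assumptions, not the real ASIO-based timeout_timer:

```cpp
#include <chrono>

// Sketch of the timeout_timer state machine described above: the timer is
// armed with the "connection timeout" while connecting and with the "global
// timeout" once communicating.
class timeout_timer_sketch
{
public:
    enum class state { created, connecting, communicating, stopped };

    timeout_timer_sketch(std::chrono::milliseconds connection_timeout,
                         std::chrono::milliseconds global_timeout)
        : m_connection_timeout(connection_timeout), m_global_timeout(global_timeout)
    {
    }

    void start_connecting() { m_state = state::connecting; reset(); }

    // Legal without a prior start_connecting(), e.g. for reused connections.
    void start_communicating() { m_state = state::communicating; reset(); }

    void stop() { m_state = state::stopped; }

    std::chrono::milliseconds current_deadline() const { return m_deadline; }
    state current_state() const { return m_state; }

private:
    // Re-arms the (here: modeled) timer with the timeout matching the state.
    void reset()
    {
        m_deadline = (m_state == state::connecting) ? m_connection_timeout : m_global_timeout;
    }

    state m_state = state::created;
    std::chrono::milliseconds m_connection_timeout, m_global_timeout, m_deadline{0};
};
```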

WinRT-based client

Unfortunately, I'm not familiar with that implementation, so I didn't touch it. Help would be appreciated.

@reneme reneme force-pushed the feature/connection_timeout branch from c6ee3fc to 433db52 Compare September 24, 2020 14:56
@@ -579,8 +580,6 @@ class asio_context final : public request_context, public std::enable_shared_fro

request_stream << CRLF;

m_context->m_timer.start();

@reneme (Author):

This call moved to the call site of the surrounding method (ssl_proxy_tunnel::start_proxy_connect()), i.e. asio_context::start_request(). This is the only location where ssl_proxy_tunnel::start_proxy_connect() is called if a proxy-tunnel is required. asio_context::start_request() also initiates the ordinary connection (without a proxy-tunnel). Therefore, I found it more consistent to move the call to start_connecting() there.

{
ctx->m_timer.start();
}

@reneme (Author) commented Sep 24, 2020:

That is not needed anymore, as the conditional below should clearly distinguish the states of the timer:

a) The connection is reused (or an established proxy tunnel): In both cases the call to ctx->write_request() will eventually call m_timer.start_communicating(). Note that it is legal to call start_communicating() without a prior call to start_connecting() (e.g. when reusing an already established connection).

b) Otherwise, no connection is established yet, and a call to m_timer.start_connecting() is necessary. See just a few lines below...

@@ -917,6 +912,8 @@ class asio_context final : public request_context, public std::enable_shared_fro
// 'start_http_request_flow'
std::shared_ptr<ssl_proxy_tunnel> ssl_tunnel =
std::make_shared<ssl_proxy_tunnel>(shared_from_this(), start_http_request_flow);

m_timer.start_connecting();
@reneme (Author):

This replaces the above-mentioned call inside ssl_proxy_tunnel::start_proxy_connect(). All calls to m_timer.start_connecting() are now consistently placed inside asio_context::start_request().

@@ -1091,6 +1088,7 @@ class asio_context final : public request_context, public std::enable_shared_fro
}
else
{
m_timer.start_communicating();
@reneme (Author):

This is inside the write_request() that starts sending the request to the server. Just before the respective async_write() we call m_timer.start_communicating() to switch the timeout_timer state from the "connection timeout" to the "global timeout".

Note that this method might also initiate a TLS handshake if required (see conditional above). In that case, the request is eventually sent inside handle_handshake() and not here.
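The two paths described above can be sketched as a small control-flow model. The function names follow the PR discussion; the bodies are stubs that only record the order of events, not the real client code:

```cpp
#include <string>

// Records the order of events for illustration.
struct flow_log
{
    std::string steps;
};

// With TLS, the request is eventually sent from handle_handshake(),
// and that is where the timer switches to the "global timeout".
void handle_handshake(flow_log& log) { log.steps += "start_communicating;send;"; }

// Sketch of write_request(): either hand off to the TLS handshake first,
// or switch the timer and send right away (just before async_write).
void write_request(bool needs_tls_handshake, flow_log& log)
{
    if (needs_tls_handshake)
    {
        log.steps += "handshake;";
        handle_handshake(log);
    }
    else
    {
        log.steps += "start_communicating;send;";
    }
}
```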

@@ -1101,6 +1099,7 @@ class asio_context final : public request_context, public std::enable_shared_fro
{
if (!ec)
{
m_timer.start_communicating();
@reneme (Author):

As stated above, if a TLS handshake was required, this will now eventually send the request. As before, m_timer.start_communicating() switches the timeout_timer over to the "global timeout", just before sending the application payload data.

@reneme reneme marked this pull request as ready for review September 24, 2020 15:32
@reneme reneme force-pushed the feature/connection_timeout branch from f52858b to 4243356 Compare September 25, 2020 09:53