We are currently setting the TCP pacing maxrate to the max rate of the uplink. This may be fine, but once Ethernet flow control (PAUSE frames) is turned off on all ports, we'll want to keep a close eye on discards at sites. If we see discards, we may want to reduce the pacing maxrate to something below the uplink's capacity.
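For concreteness, a sketch of the kind of setting being discussed, assuming the pacing cap is applied with fq's maxrate parameter on the uplink-facing interface; the interface name and rates below are illustrative, not the actual site configuration:

```sh
# Illustrative only: cap fq pacing at the uplink rate on eth0.
# The real interface name and rate at a site will differ.
tc qdisc replace dev eth0 root fq maxrate 10gbit

# If discards show up, a lower cap could be tried, e.g.:
#   tc qdisc change dev eth0 root fq maxrate 9gbit

# Watch the qdisc's drop/throttle counters on the host side.
tc -s qdisc show dev eth0
```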
A question raised by @robertodauria is whether this setting is per-TCP flow or for the entire device. If it's per-TCP flow, then this is not exactly what we want at 1g sites.
From the fq qdisc documentation (tc-fq):

    TCP pacing is good for flows having idle times, as the congestion window permits TCP stack to queue a possibly large number of packets. This removes the 'slow start after idle' choice, badly hitting large BDP flows and applications delivering chunks of data such as video streams.

    maxrate
        Maximum sending rate of a flow. Default is unlimited. Application specific setting via SO_MAX_PACING_RATE is ignored only if it is larger than this value.
An initial glance at the documentation suggests that the maxrate setting is per flow, which would subvert the intention behind setting it in the first place. It also makes me question whether setting maxrate on an interface is doing what we want.
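One way to check the per-flow reading empirically, as a sketch rather than a procedure for production sites: on a test host, set maxrate well below the link rate and run two parallel TCP streams. If maxrate caps each flow, the aggregate should be able to exceed it; if it limits the whole device, the aggregate should stay pinned near maxrate. The interface name, rate, and server below are placeholders.

```sh
# Hypothetical test box with a 1 Gbit/s NIC; names and rates are placeholders.
tc qdisc replace dev eth0 root fq maxrate 400mbit

# Two parallel TCP streams to an iperf3 server.
# Per-flow cap     -> aggregate can approach ~800 Mbit/s.
# Per-device limit -> aggregate stays around ~400 Mbit/s.
iperf3 -c iperf-server.example.net -P 2 -t 30
```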