
Iperf3.18 - packet count for burst mode seems broken when sending UDP traffic (on Windows) #1820

Closed
fitchtravis opened this issue Jan 14, 2025 · 4 comments · Fixed by #1821

Comments

@fitchtravis

fitchtravis commented Jan 14, 2025

NOTE: The iperf3 issue tracker is for registering bugs, enhancement
requests, or submissions of code. It is not a means for asking
questions about building or using iperf3. Those are best directed
towards the Discussions section for this project at
https://github.com/esnet/iperf/discussions
or to the iperf3 mailing list at [email protected].
A list of frequently-asked questions
regarding iperf3 can be found at http://software.es.net/iperf/faq.html.

Context

  • Version of iperf3:

Using a build from https://github.com/ar51an/iperf3-win-builds

iperf 3.18 (cJSON 1.7.15)
CYGWIN_NT-10.0-20348 MELPRTG01PRD 3.5.4-1.x86_64 2024-08-25 16:52 UTC x86_64
Optional features available: CPU affinity setting, support IPv4 don't fragment, POSIX threads
  • Hardware:
    System Manufacturer: VMware, Inc.
    System Model: VMware Virtual Platform
    Processor(s): 1 Processor(s) Installed.
    [01]: Intel64 Family 6 Model 85 Stepping 4 GenuineIntel ~2095 Mhz
    Total Physical Memory: 32,767 MB

  • Operating system (and distribution, if any):

OS Name: Microsoft Windows Server 2022 Datacenter
OS Version: 10.0.20348 N/A Build 20348

Please note: iperf3 is supported on Linux, FreeBSD, and macOS.
Support may be provided on a best-effort basis to other UNIX-like
platforms. We cannot provide support for building and/or running
iperf3 on Windows, iOS, or Android.

  • Other relevant information (for example, non-default compilers,
    libraries, cross-compiling, etc.):

Please fill out one of the "Bug Report" or "Enhancement Request"
sections, as appropriate. Note that submissions of bug fixes, new
features, etc. should be done as a pull request at
https://github.com/esnet/iperf/pulls

Bug Report

  • Expected Behavior

When using -b 100M/100, we expect to see iperf3 sending at 10Mbps with a burst of 100 packets.

Identical issue to #1192.

Without the burst mode option, tests run normally.

With burst mode, iperf3 seems to ignore the bandwidth throttle limit.

  • Actual Behavior

iPerf3 is sending as much data as possible.

PS C:\Temp\iperf-3.18-win64> .\iperf3.exe --get-server-output -c 10.y.y.11 -b 100M/100 -t 5 -u
Connecting to host 10.y.y.11, port 5201
[  5] local 10.x.x.242 port 65202 connected to 10.y.y.11 port 5201
[ ID] Interval           Transfer     Bitrate         Total Datagrams
[  5]   0.00-1.01   sec   165 MBytes  1.36 Gbits/sec  128257
[  5]   1.01-2.01   sec   191 MBytes  1.61 Gbits/sec  149129
[  5]   2.01-3.00   sec   207 MBytes  1.76 Gbits/sec  161664
[  5]   3.00-4.01   sec   217 MBytes  1.81 Gbits/sec  169560
[  5]   4.01-5.00   sec   219 MBytes  1.85 Gbits/sec  170887
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-5.00   sec  1000 MBytes  1.68 Gbits/sec  0.000 ms  0/779497 (0%)  sender
[  5]   0.00-5.56   sec   422 MBytes   637 Mbits/sec  0.111 ms  450227/779494 (58%)  receiver

Server output:

-----------------------------------------------------------
Server listening on 5201 (test #35)
-----------------------------------------------------------
Accepted connection from 10.x.x.242, port 54742
[  5] local 10.y.y.11 port 5201 connected to 10.x.x.242 port 65202
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-1.00   sec  50.6 MBytes   424 Mbits/sec  0.014 ms  72697/112137 (65%)
[  5]   1.00-2.01   sec  84.2 MBytes   702 Mbits/sec  0.008 ms  80335/145963 (55%)
[  5]   2.01-3.01   sec  87.7 MBytes   737 Mbits/sec  0.015 ms  76942/145311 (53%)
[  5]   3.01-4.00   sec  89.6 MBytes   755 Mbits/sec  0.007 ms  74864/144734 (52%)
[  5]   4.00-5.02   sec  88.9 MBytes   733 Mbits/sec  0.012 ms  101676/171019 (59%)
[  5]   5.02-5.56   sec  21.3 MBytes   329 Mbits/sec  0.111 ms  43713/60330 (72%)
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[SUM]  0.0- 5.6 sec  1 datagrams received out-of-order
[  5]   0.00-5.56   sec   422 MBytes   637 Mbits/sec  0.111 ms  450227/779494 (58%)  receiver


iperf Done.
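A rough sanity check of how far the observed traffic is from the `-b 100M` target, using only the figures from the sender summary above (the datagram size is inferred from 1000 MBytes over 779497 datagrams; the helper name is illustrative):

```python
def expected_datagrams(target_bps, duration_s, datagram_bytes):
    """Datagrams a correctly throttled sender would emit over the test."""
    return int(target_bps * duration_s / 8 / datagram_bytes)

# Inferred from the sender summary: 1000 MBytes / 779497 datagrams ~= 1345 bytes.
datagram_bytes = 1000 * 1024 * 1024 / 779497

expected = expected_datagrams(100e6, 5, datagram_bytes)   # ~46,000 datagrams
observed = 779497
print(expected, round(observed / expected, 1))            # sender exceeded target ~17x
```

So the sender emitted roughly 17 times more datagrams than a 100 Mbps throttle allows, consistent with the throttle being ignored entirely rather than merely overshooting.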
  • Steps to Reproduce

Just run iperf3 with these switches '-b 100M/100 -t 5 -u'


@fitchtravis fitchtravis changed the title Iperf3.18 - packet count for burst mode seems broken when sending UDP traffic Iperf3.18 - packet count for burst mode seems broken when sending UDP traffic (on Windows) Jan 14, 2025
@davidBar-On
Contributor

Submitted #1821 with a proposed fix for the issue.

@fitchtravis, just out of curiosity, what is the reason you are using burst? I am asking because my understanding is that the main use of burst was to improve performance, as before the multi-thread iperf3 version a select() was called after each sending cycle. However, with the multi-thread version select() is not called, so burst should not impact performance.
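Regardless of the performance question, the burst option should not change the average rate: a correct sender emits `burst` packets back-to-back, then waits until the cumulative rate matches the target. A simplified sketch of that idea (not iperf3's actual source; names and the instantaneous-send assumption are illustrative):

```python
def simulate_burst_pacing(target_bps, burst, packet_bytes, duration_s):
    """Average bitrate achieved by an idealized burst sender with a working throttle."""
    sent_bits = 0
    clock = 0.0
    while clock < duration_s:
        # Send one burst back-to-back (idealized as instantaneous).
        sent_bits += burst * packet_bytes * 8
        # A correct throttle waits until cumulative rate == target,
        # so the clock advances to sent_bits / target_bps.
        clock = sent_bits / target_bps
    return sent_bits / clock
```

With a working throttle, `simulate_burst_pacing(100e6, 100, 1400, 5)` stays at the 100 Mbps target no matter the burst size; the bug here is that the wait step is effectively skipped.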

@fitchtravis
Author

@davidBar-On. We have an issue with our MS Teams audio/video traffic getting dropped across our WAN due to how QoS has been configured. I was hoping to use burst mode incrementally, to determine at which point the QoS profiles start dropping packets.

@davidBar-On
Contributor

@fitchtravis, thanks for the explanation. If you know how to build iperf3 for Windows, you can use the PR code (it may be a while until the fix lands in an official release).

@fitchtravis
Author

@davidBar-On. Thanks for the update.
