eflatency: Optionally echo the packet in the pong reply and support VLAN tags #238
osresearch wants to merge 3 commits into Xilinx-CNS:master
Conversation
jfeather-amd
left a comment
Thanks for the contribution, @osresearch!
This review just covers things I found by inspection, and I plan on doing some testing in the coming days, but overall I quite like these changes! I do have one concern regarding performance which is perhaps a non-issue; I will confer with other members of the team about it. There are also quite a few style nit-picks. Please feel free to disregard these if you want; I can apply them in the merging process if you would rather focus on the code itself.
```diff
-  switch( EF_EVENT_TYPE(evs[i]) ) {
+  while( vi->i < vi->n_ev )
+  {
+    const ef_event * const ev = &vi->evs[vi->i++];
```
This certainly makes what's going on a lot easier to parse! The previous dance of incrementing `vi->i` early if we're returning, rather than continuing to process the remaining events, was quite obtuse.
src/tests/ef_vi/eflatency.c
```c
if (cfg_validate && cfg_payload_len > 0)
{
  const uint8_t rx_pattern = rx_vi->rx_pkt[HEADER_SIZE];
  if (pattern != rx_pattern)
```
I wonder if, when validating, it's worth checking the whole packet using `memcmp()` for example, or, if more detail about the first differing octet is wanted, whether a custom loop would suffice.
The first version of the patch (as you noted above) only set the first byte... I can add a loop to check the rest of them.
Thanks for the feedback on the patch. I'll make the style corrections and push an updated version.
Force-pushed from bf7d196 to 732e00f.
jfeather-amd
left a comment
Thanks for addressing my comments so quickly! I'm still looking at some tests for this, and have kicked off a test run to go overnight.
```c
    // sfc driver
    vi->rx_pkt = pkt_bufs[EF_EVENT_RX_RQ_ID(*ev)]->dma_buf;
    return;
  case EF_EVENT_TYPE_RX_REF:
    // efct driver
    vi->rx_pkt = efct_vi_rxpkt_get(&vi->vi, ev->rx_ref.pkt_id);
```
Ah, good question! Because handle_rx_ref() calls efct_vi_rxpkt_release(), our app's reference to this has been released. The data may still be valid (e.g., if something else still has a reference to this), but I don't believe it's safe to use at this point.
Indeed, checking my own knowledge here against the user guide:
> Once released, the packet identifier and any pointers to the packet data must be considered invalid.
```diff
-  for( i = 0; i < cfg_iter; ++i ) {
+  for( i = 0; i < cfg_iter; ++i, pattern++ ) {
+    memset(&tx_pkt[HEADER_SIZE], pattern, cfg_payload_len);
+    checksum_udp_pkt(tx_pkt);
```
> The memset() and checksum_udp_pkt() calls are outside of the timing loop for the ping process

This looks like it's only half true, although I hadn't noticed the nuance before! We call `gettimeofday(&start, NULL);` above the loop, and internally (per iteration) call `uint64_t start = ci_frc64_get();`, so I would expect the "full" measurement to see an increase, but perhaps the per-iteration one won't. I would definitely want to verify this behaviour before accepting this, though, as numbers changing for whatever reason can be quite a nasty surprise to end users!
```c
  const uint8_t rx_pattern = rx_vi->rx_pkt[HEADER_SIZE];
  if( pattern != rx_pattern )
    fprintf(stderr, "expected pong %02x got %02x\n", pattern, rx_pattern);
}
```
After you pointed out the memory operations were outside of the timing loop, I spotted that this bit of code isn't. I would be interested to see if your 150ns time increase changes if this is moved after `uint64_t stop = ci_frc64_get();`.
Hi @osresearch, sorry for the delay in getting back to you on this! I just finished looking into performance testing this patch and found that there seems to be a significant enough regression that I am hesitant to merge it in its current state. I would like to think for a while longer about how to progress this PR, as I do think this would be a nice change to have! Some options to consider are:

Although I haven't thought for long enough to decide which one of these would be most appropriate.
Thanks for doing the performance testing on the patch, @jfeather-amd. Can you describe where the slowdowns seem to be? In the non-VLAN, non-echo, non-validating case (the default), my latency deltas were in the noise on the X2 and X3 cards, so I'm very curious about your methodology so that I can replicate the results in my future testing.
I've re-run tests on the X3 cards with better isolation and pinning the […]

However, this performance regression appears to be an issue with the way […]
Hello @osresearch, sorry for not replying sooner. We are busy! The purposes why we use […]

Would you like to contact our customer support team (and sales team) so we can better understand your motivation and use cases and figure out what we can do about them? You would need to email support-nic@amd.com.
This patch adds the option of having the `pong` node copy the contents of the `ping` message into the reply, which adds a little more realism to the `eflatency` test since it requires the receiver to read the contents of the message, not just receive the notice that a message has arrived.

Additionally, it adds the option of 802.1Q VLAN tagging for `eflatency` tests that traverse switches, making it possible to benchmark those switches as well.

It also cleans up a bit of the logic by removing some magic sizes, using `sizeof()` on the various Ethernet headers.