fix: Mark all packets TX'ed before PTO as lost #2129
base: main
Conversation
We previously only marked one or two packets as lost when a PTO fired. That meant we potentially didn't retransmit (RTX) all the data that we could have, i.e., data carried in lost packets that we never marked as lost.

This also changes the probing code to suppress redundant keep-alives: PINGs that we send for other reasons now double as keep-alives, where previously they did not.

Broken out of mozilla#1998
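To make the first change concrete, here is a minimal, self-contained sketch of the difference. The `SentPacket` type and the function names are hypothetical stand-ins for illustration, not neqo's actual recovery code:

```rust
// Hypothetical types, for illustration only.
#[derive(Clone, Debug)]
struct SentPacket {
    pn: u64,        // packet number
    time_sent: u64, // when the packet was transmitted (ms, for illustration)
}

/// Old behaviour: only the first `count` outstanding packets are declared lost,
/// so data carried by later packets is not queued for retransmission yet.
fn pto_lost_limited(unacked: &[SentPacket], count: usize) -> Vec<SentPacket> {
    unacked.iter().take(count).cloned().collect()
}

/// New behaviour described by this PR: every packet transmitted before the PTO
/// fired is declared lost, so all of its data becomes eligible for RTX now.
fn pto_lost_all(unacked: &[SentPacket], pto_fired_at: u64) -> Vec<SentPacket> {
    unacked
        .iter()
        .filter(|p| p.time_sent < pto_fired_at)
        .cloned()
        .collect()
}

fn main() {
    let unacked = vec![
        SentPacket { pn: 1, time_sent: 10 },
        SentPacket { pn: 2, time_sent: 20 },
        SentPacket { pn: 3, time_sent: 30 },
    ];
    // With the old limit of two, packet 3's data would not be retransmitted yet.
    assert_eq!(pto_lost_limited(&unacked, 2).len(), 2);
    // With the new behaviour, all three packets are declared lost.
    assert_eq!(pto_lost_all(&unacked, 100).len(), 3);
}
```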
QUIC Interop Runner (client vs. server): failed, succeeded, and unsupported interop tests for neqo-latest as client and as server; see "All results" in the linked report for details.
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files (Coverage Diff):

| | main | #2129 | +/- |
| --- | --- | --- | --- |
| Coverage | 95.35% | 95.35% | |
| Files | 112 | 112 | |
| Lines | 36336 | 36332 | -4 |
| Hits | 34648 | 34646 | -2 |
| Misses | 1688 | 1686 | -2 |

☔ View full report in Codecov by Sentry.
Benchmark results: Performance differences relative to 55e3a93.

- coalesce_acked_from_zero 1+1 entries: 💚 Performance has improved. time: [99.198 ns 99.479 ns 99.765 ns]; change: [-12.409% -12.000% -11.589%] (p = 0.00 < 0.05)
- coalesce_acked_from_zero 3+1 entries: 💚 Performance has improved. time: [117.52 ns 117.86 ns 118.22 ns]; change: [-33.120% -32.712% -32.230%] (p = 0.00 < 0.05)
- coalesce_acked_from_zero 10+1 entries: 💚 Performance has improved. time: [116.99 ns 117.71 ns 118.92 ns]; change: [-39.495% -35.200% -32.567%] (p = 0.00 < 0.05)
- coalesce_acked_from_zero 1000+1 entries: 💚 Performance has improved. time: [98.619 ns 98.768 ns 98.937 ns]; change: [-31.113% -30.475% -29.761%] (p = 0.00 < 0.05)
- RxStreamOrderer::inbound_frame(): No change in performance detected. time: [111.35 ms 111.48 ms 111.71 ms]; change: [-0.0512% +0.0846% +0.3009%] (p = 0.44 > 0.05)
- transfer/pacing-false/varying-seeds: Change within noise threshold. time: [25.833 ms 26.894 ms 27.980 ms]; change: [-10.698% -5.9022% -0.4362%] (p = 0.03 < 0.05)
- transfer/pacing-true/varying-seeds: No change in performance detected. time: [35.031 ms 36.861 ms 38.698 ms]; change: [-10.008% -3.7693% +2.1590%] (p = 0.26 > 0.05)
- transfer/pacing-false/same-seed: No change in performance detected. time: [25.469 ms 26.355 ms 27.219 ms]; change: [-7.7928% -3.6344% +0.9172%] (p = 0.11 > 0.05)
- transfer/pacing-true/same-seed: No change in performance detected. time: [39.520 ms 41.484 ms 43.481 ms]; change: [-11.328% -5.1375% +1.1522%] (p = 0.11 > 0.05)
- 1-conn/1-100mb-resp (aka. Download)/client: No change in performance detected. time: [112.63 ms 115.70 ms 121.44 ms]; thrpt: [823.44 MiB/s 864.28 MiB/s 887.87 MiB/s]; change: time: [-2.9182% -0.1897% +6.7346%] (p = 0.92 > 0.05), thrpt: [-6.3097% +0.1901% +3.0059%]
- 1-conn/10_000-parallel-1b-resp (aka. RPS)/client: No change in performance detected. time: [311.80 ms 315.21 ms 318.54 ms]; thrpt: [31.393 Kelem/s 31.725 Kelem/s 32.072 Kelem/s]; change: time: [-3.1737% -1.6055% +0.0018%] (p = 0.05 > 0.05), thrpt: [-0.0018% +1.6317% +3.2777%]
- 1-conn/1-1b-resp (aka. HPS)/client: Change within noise threshold. time: [33.981 ms 34.150 ms 34.336 ms]; thrpt: [29.124 elem/s 29.283 elem/s 29.428 elem/s]; change: time: [+0.4621% +1.2135% +1.9498%] (p = 0.00 < 0.05), thrpt: [-1.9125% -1.1989% -0.4600%]

Client/server transfer results: Transfer of 33554432 bytes over loopback.
@martinthomson I'd appreciate a review, since the code I am touching is pretty complex.
This makes sense to me. Thanks for extracting it into a smaller pull request.
I am in favor of waiting for Martin's review.
Do we not have tests for this? Should we?
.pto_packets(PtoState::pto_packet_count(*pn_space))
.cloned(),
);
lost.extend(space.pto_packets().cloned());
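For readers unfamiliar with this part of the code, the shape of the change is roughly the following. The types and the surrounding loop are hypothetical stand-ins, not neqo's actual loss-recovery implementation:

```rust
#[derive(Clone, Debug)]
struct SentPacket(u64); // hypothetical stand-in for a tracked packet

struct PacketNumberSpace {
    sent: Vec<SentPacket>,
}

impl PacketNumberSpace {
    /// All packets still outstanding when the PTO fired.
    fn pto_packets(&self) -> impl Iterator<Item = &SentPacket> {
        self.sent.iter()
    }
}

fn on_pto(spaces: &[PacketNumberSpace]) -> Vec<SentPacket> {
    let mut lost = Vec::new();
    for space in spaces {
        // The change above drops the `pto_packet_count(..)` limit: every
        // outstanding packet in the space is cloned into `lost`, so all of its
        // frames can be queued for retransmission.
        lost.extend(space.pto_packets().cloned());
    }
    lost
}

fn main() {
    let spaces = [PacketNumberSpace {
        sent: vec![SentPacket(1), SentPacket(2)],
    }];
    assert_eq!(on_pto(&spaces).len(), 2);
}
```

The `.cloned()` is what the cost question below refers to: declaring more packets lost means cloning more per-packet state on each PTO.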
Do we still need pto_packet_count if this is the decision?
The other question I have is whether this is necessary. We're cloning all of the information so that we can process the loss, which means more work on a PTO. Maybe PTO is rare enough that this doesn't matter, but one of the reasons for the limit on number was to avoid the extra work.
> Do we still need pto_packet_count if this is the decision?

We do still need it to limit the number of packets we send on PTO.

> The other question I have is whether this is necessary. We're cloning all of the information so that we can process the loss, which means more work on a PTO. Maybe PTO is rare enough that this doesn't matter, but one of the reasons for the limit on number was to avoid the extra work.

I've been wondering if it would be sufficient to mark n packets per space as lost, instead of all.
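As a hedged illustration of the distinction drawn here (the count still caps what we send, not what we declare lost); the probe-count policy below is an assumption for the example, not neqo's actual `PtoState::pto_packet_count`:

```rust
// Illustrative only: the real probe-count policy lives in neqo's PtoState.
fn pto_packet_count(space_index: usize) -> usize {
    // Assume one probe for the first space and two otherwise; hypothetical.
    if space_index == 0 {
        1
    } else {
        2
    }
}

fn send_probes(space_index: usize, mut send_ping: impl FnMut()) {
    // Sending on PTO stays capped by pto_packet_count ...
    for _ in 0..pto_packet_count(space_index) {
        send_ping();
    }
    // ... while loss declaration (see the diff above) now covers every
    // outstanding packet in the space.
}

fn main() {
    let mut sent = 0;
    send_probes(1, || sent += 1);
    assert_eq!(sent, 2);
}
```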
There are tests in #2128, but this PR alone doesn't make them succeed yet.