<?xml version="1.0" encoding="US-ASCII"?> <?xml version="1.0" encoding="UTF-8"?>
<?xml-model href="rfc7991bis.rnc"?>
<!DOCTYPE rfc [
<!ENTITY nbsp "&#160;">
<!ENTITY zwsp "&#8203;">
<!ENTITY nbhy "&#8209;">
<!ENTITY wj "&#8288;">
]>
<rfc
xmlns:xi="http://www.w3.org/2001/XInclude"
category="std"
consensus="true"
docName="draft-ietf-tcpm-prr-rfc6937bis-21"
ipr="trust200902"
obsoletes="6937"
updates=""
submissionType="IETF"
xml:lang="en"
tocInclude="true"
tocDepth="4"
symRefs="true"
sortRefs="true"
version="3"
>
<front>
<title abbrev="Proportional Rate Reduction">Proportional Rate Reduction</title>
<seriesInfo name="RFC" value="9937"/>
<author fullname="Matt Mathis" initials="M." surname="Mathis"> <author fullname="Matt Mathis" initials="M." surname="Mathis">
<address> <address>
<email>ietf@mattmathis.net</email> <email>ietf@mattmathis.net</email>
</address> </address>
</author> </author>
<author fullname="Neal Cardwell" initials="N." surname="Cardwell"> <author fullname="Neal Cardwell" initials="N." surname="Cardwell">
<organization>Google, Inc.</organization> <organization>Google, Inc.</organization>
<address> <address>
</address>
</author>
<author fullname="Nandita Dukkipati" initials="N." surname="Dukkipati">
<organization>Google, Inc.</organization>
<address>
<email>nanditad@google.com</email>
</address>
</author>
<date month="June" day="22" year="2025" /> <date month="November" year="2025" />
<area>WIT</area>
<workgroup>tcpm</workgroup>
<!-- [rfced] Please insert any keywords (beyond those that appear in
the title) for use on https://www.rfc-editor.org/search. -->
<keyword>example</keyword>
<abstract>
<t>This document specifies a Standards Track version of the Proportional Rate Reduction (PRR) algorithm that obsoletes the Experimental version described in RFC 6937. PRR regulates the amount of data sent by TCP or other transport protocols during fast recovery. PRR accurately regulates the actual flight size through recovery such that at the end of recovery it will be as close as possible to the slow start threshold (ssthresh), as determined by the congestion control algorithm.
</t>
</abstract>
</front>
<middle>
<section title="Introduction"> <section><name>Introduction</name>
<t>Van Jacobson's packet conservation principle <xref target="Jacobson88" /> defines a self clock process wherein N data segments delivered to the receiver generate acknowledgments that the data sender uses as the clock to trigger sending another N data segments into the network.</t>
<!-- [rfced] "Reno" is not used in RFC 5681, except in titles in the
References section. Please review and let us know if/how this citation
should be updated. Note that there are multiple occurrences of this
throughout the document.
Original:
Congestion control algorithms like Reno [RFC5681] and CUBIC [RFC9438]
are built on the conceptual foundation of this self clock process.
-->
<t>Congestion control algorithms like Reno <xref target="RFC5681" /> and CUBIC <xref target="RFC9438" /> are built on the conceptual foundation of this self clock process. They control the sending process of a transport protocol connection by using a congestion window ("cwnd") to limit "inflight", the volume of data that a connection estimates is in flight in the network at a given time. Furthermore, these algorithms require that transport protocol connections reduce their cwnd in response to packet losses. Fast recovery (see <xref target="RFC5681" /> and <xref target="RFC6675" />) is the algorithm for making this cwnd reduction using feedback from acknowledgments. Its stated goal is to maintain a sender's self clock by relying on returning ACKs during recovery to clock more data into the network. Without Proportional Rate Reduction (PRR), fast recovery typically adjusts the window by waiting for a large fraction of a round-trip time (RTT) (one half round-trip time of ACKs for Reno <xref target="RFC5681" /> or 30% of a round-trip time for CUBIC <xref target="RFC9438" />) to pass before sending any data.</t>
<t><xref target="RFC6675" /> makes fast recovery with Selective Acknowledgment (SACK) <xref target="RFC2018" /> more accurate by computing "pipe", a sender-side estimate of the number of bytes still outstanding in the network. With <xref target="RFC6675" />, fast recovery is implemented by sending data as necessary on each ACK to allow pipe to rise to match ssthresh, the target window size for fast recovery, as determined by the congestion control algorithm. This protects fast recovery from timeouts in many cases where there are heavy losses. However, <xref target="RFC6675" /> has two significant drawbacks. First, because it makes a large multiplicative decrease in cwnd at the start of fast recovery, it can cause a timeout if the entire second half of the window of data or ACKs are lost. Second, a single ACK carrying a SACK option that implies a large quantity of missing data can cause a step discontinuity in the pipe estimator, which can cause Fast Retransmit to send a large burst of data.</t>
<t>PRR regulates the transmission process during fast recovery in a manner that avoids these excess window adjustments, such that transmissions progress smoothly, and at the end of recovery, the actual window size will be as close as possible to ssthresh. </t>
<t>PRR's approach is inspired by Van Jacobson's packet conservation principle. As much as possible, PRR relies on the self clock process and is only slightly affected by the accuracy of estimators, such as the estimate of the volume of in-flight data. This is what gives the algorithm its precision in the presence of events that cause uncertainty in other estimators.</t>
<t> When inflight is above ssthresh, PRR reduces inflight smoothly toward ssthresh by clocking out transmissions at a rate that is in proportion to both the delivered data and ssthresh. </t>
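<t>As a rough, non-normative sketch of this proportional clocking (adapted from the pseudocode of RFC 6937; the normative per-ACK steps appear in the "Algorithm" section): for every ACK, the cumulative amount of data PRR permits to be sent is kept proportional to the cumulative amount of data delivered, scaled by the ratio of ssthresh to RecoverFS. For example, if ssthresh is 70% of RecoverFS, PRR sends roughly 7 segments for every 10 segments reported as delivered.</t>
<sourcecode type="pseudocode"><![CDATA[
// Sketch only: proportional part of PRR, when inflight > ssthresh.
// prr_delivered, prr_out, ssthresh, and RecoverFS are defined in
// the "Definitions" and "Algorithm" sections.
SndCnt = CEIL(prr_delivered * ssthresh / RecoverFS) - prr_out
]]></sourcecode>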
<!--[rfced] To have the abbreviation directly match the expanded form, may
we update this text as follows?
Original:
As a baseline, to be cautious when there may be
considerable congestion, PRR uses its Conservative Reduction Bound
(PRR-CRB), which is strictly packet conserving. When recovery seems
to be progressing well, PRR uses its Slow Start Reduction Bound (PRR-
SSRB), which is more aggressive than PRR-CRB by at most one segment
per ACK.
Perhaps:
As a baseline, to be cautious when there may be
considerable congestion, PRR uses its Conservative Reduction Bound
(CRB), which is strictly packet conserving. When recovery seems
to be progressing well, PRR uses its Slow Start Reduction Bound (SSRB),
which is more aggressive than PRR-CRB by at most one segment
per ACK.
-->
<t>When inflight is less than ssthresh, PRR adaptively chooses between one of two Reduction Bounds to limit the total window reduction due to all mechanisms, including transient application stalls and the losses themselves. As a baseline, to be cautious when there may be considerable congestion, PRR uses its Conservative Reduction Bound (PRR-CRB), which is strictly packet conserving. When recovery seems to be progressing well, PRR uses its Slow Start Reduction Bound (PRR-SSRB), which is more aggressive than PRR-CRB by at most one segment per ACK. PRR-CRB meets the Strong Packet Conservation Bound described in <xref target="conservative" />; however, when used in real networks as the sole approach, it does not perform as well as the algorithm described in <xref target="RFC6675" />, which proves to be more aggressive in a significant number of cases. PRR-SSRB offers a compromise by allowing a connection to send one additional segment per ACK, relative to PRR-CRB, in some situations. Although PRR-SSRB is less aggressive than <xref target="RFC6675" /> (transmitting fewer segments or taking more time to transmit them), it outperforms due to the lower probability of additional losses during recovery.</t>
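<t>A minimal, non-normative sketch of how the two Reduction Bounds differ, adapted from the RFC 6937 pseudocode, with the SafeACK heuristic described in this document selecting between them (the normative per-ACK steps appear in the "Algorithm" section):</t>
<sourcecode type="pseudocode"><![CDATA[
// Sketch only: Reduction Bounds, used when inflight < ssthresh.
if (SafeACK)   // recovery is making good progress: PRR-SSRB
    limit = MAX(prr_delivered - prr_out, DeliveredData) + SMSS
else           // be cautious: PRR-CRB, strictly packet conserving
    limit = prr_delivered - prr_out
// Attempt to catch up to ssthresh, as permitted by the bound:
SndCnt = MIN(ssthresh - inflight, limit)
]]></sourcecode>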
<t>The original definition of the packet conservation principle <xref target="Jacobson88" /> treated packets that are presumed to be lost (e.g., marked as candidates for retransmission) as having left the network. This idea is reflected in the inflight estimator used by PRR, but it is distinct from the Strong Packet Conservation Bound as described in <xref target="conservative" />, which is defined solely on the basis of data arriving at the receiver.
</t>
<t>This document specifies several main changes from the earlier version of PRR in <xref target="RFC6937" />. First, it introduces a new adaptive heuristic that replaces a manual configuration parameter that determined how conservative PRR was when inflight was less than ssthresh (whether to use PRR-CRB or PRR-SSRB). Second, the algorithm specifies behavior for non-SACK connections (connections that have not negotiated SACK <xref target="RFC2018" /> support via the "SACK-permitted" option). Third, the algorithm ensures a smooth sending process even when the sender has experienced high reordering and starts loss recovery after a large amount of sequence space has been SACKed. Finally, this document also includes additional discussion about the integration of PRR with congestion control and loss detection algorithms.
</t>
<t>PRR has extensive deployment experience in multiple TCP implementations since the first widely deployed TCP PRR implementation in 2011 <xref target="First_TCP_PRR" />.</t>
</section>
<section title="Conventions"> <section><name>Conventions</name>
<t>
<t> The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD" The key words "<bcp14>MUST</bcp14>", "<bcp14>MUST NOT</bcp14>", "<bcp14>REQU
, "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this IRED</bcp14>", "<bcp14>SHALL</bcp14>", "<bcp14>SHALL
document are to be interpreted as described in BCP 14 <xref target="RFC2119" /> NOT</bcp14>", "<bcp14>SHOULD</bcp14>", "<bcp14>SHOULD NOT</bcp14>", "<bcp14>
<xref target="RFC8174" /> when, and only when, they appear in all capitals, as RECOMMENDED</bcp14>", "<bcp14>NOT RECOMMENDED</bcp14>",
shown here.</t> "<bcp14>MAY</bcp14>", and "<bcp14>OPTIONAL</bcp14>" in this document are to
be interpreted as
</section> described in BCP&nbsp;14 <xref target="RFC2119"/> <xref target="RFC8174"/>
when, and only when, they appear in all capitals, as shown here.
<section title="Document and WG Information"> </t>
<section title="Definitions"> <section><name>Definitions</name>
<t>The following terms, parameters, and state variables are used as they are defined in earlier documents:</t>
<dl spacing="normal" newline="false">
<dt>SND.UNA:</dt><dd>The oldest unacknowledged sequence number. This is defined in <xref target="RFC9293" section="3.4"/>.</dd>
<dt>SND.NXT:</dt><dd>The next sequence number to be sent. This is defined in <xref target="RFC9293" section="3.4"/>.</dd>
<dt>duplicate ACK:</dt><dd>An acknowledgment is considered a "duplicate ACK" or "duplicate acknowledgment" when (a) the receiver of the ACK has outstanding data, (b) the incoming acknowledgment carries no data, (c) the SYN and FIN bits are both off, (d) the acknowledgment number is equal to SND.UNA, and (e) the advertised window in the incoming acknowledgment equals the advertised window in the last incoming acknowledgment. This is defined in <xref target="RFC5681" section="2"/>.</dd>
<dt>FlightSize:</dt><dd>The amount of data that has been sent but not yet cumulatively acknowledged. This is defined in <xref target="RFC5681" section="2"/>.</dd>
<dt>Receiver Maximum Segment Size (RMSS):</dt><dd>The RMSS is the size of the largest segment the receiver is willing to accept. This is the value specified in the MSS option sent by the receiver during connection startup (see <xref target="RFC9293" section="3.7.1"/>). Or, if the MSS option is not used, it is the default of 536 bytes for IPv4 or 1220 bytes for IPv6 (see <xref target="RFC9293" section="3.7.1"/>). The size does not include the TCP/IP headers and options. The RMSS is defined in <xref target="RFC5681" section="2"/> and <xref target="RFC9293" section="3.8.6.3"/>.</dd>
<dt>Sender Maximum Segment Size (SMSS):</dt><dd>The SMSS is the size of the largest segment that the sender can transmit. This value can be based on the Maximum Transmission Unit (MTU) of the network, the path MTU discovery <xref target="RFC1191" /> <xref target="RFC8201" /> <xref target="RFC4821" /> algorithm, RMSS, or other factors. The size does not include the TCP/IP headers and options. This is defined in <xref target="RFC5681" section="2"/>.</dd>
<dt>Receiver Window (rwnd):</dt><dd>The most recently received advertised receiver window, in bytes. At any given time, a connection <bcp14>MUST NOT</bcp14> send data with a sequence number higher than the sum of SND.UNA and rwnd. This is defined in <xref target="RFC5681" section="2"/>.</dd>
<dt>Congestion Window (cwnd):</dt><dd>A state variable that limits the amount of data a connection can send. At any given time, a connection <bcp14>MUST NOT</bcp14> send data if inflight (see below) matches or exceeds cwnd. This is defined in <xref target="RFC5681" section="2"/>.</dd>
<dt>Slow Start Threshold (ssthresh):</dt><dd>The slow start threshold (ssthresh) state variable is used to determine whether the slow start or congestion avoidance algorithm is used to control data transmission. During fast recovery, ssthresh is the target window size for a fast recovery episode, as determined by the congestion control algorithm. This is defined in <xref target="RFC5681" section="3.1"/>.</dd>
</dl>
<t>PRR defines additional variables and terms:</t>
<dl spacing="normal" newline="false">
<dt>Delivered Data (DeliveredData):</dt><dd>The data sender's best estimate of the total number of bytes that the current ACK indicates have been delivered to the receiver since the previously received ACK.</dd>
<dt>In-Flight Data (inflight):</dt><dd>The data sender's best estimate of the number of unacknowledged bytes in flight in the network, i.e., bytes that were sent and neither lost nor received by the data receiver.</dd>
<dt>Recovery Flight Size (RecoverFS):</dt><dd>The number of bytes the sender estimates might possibly be delivered over the course of the current PRR episode.</dd>
<dt>SafeACK:</dt><dd>A local boolean variable indicating that the current ACK indicates the recovery is making good progress and the sender can send more aggressively, increasing inflight, if appropriate.</dd>
<dt>SndCnt:</dt><dd>A local variable indicating exactly how many bytes should be sent in response to each ACK.</dd>
<dt>Voluntary window reductions:</dt><dd>Choosing not to send data in response to some ACKs, for the purpose of reducing the sending window size and data rate.</dd>
</dl>
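<t>As an illustrative example of how these variables relate on a single ACK (hypothetical byte counts, assuming SACK is in use): if an ACK advances SND.UNA by 1,000 bytes and newly SACKs 2,000 bytes, then:</t>
<sourcecode type="pseudocode"><![CDATA[
// Sketch only, with hypothetical numbers:
DeliveredData = 1000 + 2000    // cumulatively ACKed + newly SACKed
prr_delivered += DeliveredData // running total for this PRR episode
]]></sourcecode>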
</section>
<section title="Changes Relative to RFC 6937"> <section><name>Changes Relative to RFC 6937</name>
<t>The largest change since <xref target="RFC6937" /> is the introduction of a new heuristic that uses good recovery progress (for TCP, when the latest ACK advances SND.UNA and does not indicate that a prior fast retransmit has been lost) to select the Reduction Bound (PRR-CRB or PRR-SSRB). <xref target="RFC6937" /> left the choice of Reduction Bound to the discretion of the implementer but recommended to use PRR-SSRB by default. For all of the environments explored in earlier PRR research, the new heuristic is consistent with the old recommendation.</t>
<t>
The paper "An Internet-Wide Analysis of Traffic Policing" <xref target="Flach2016policing"/>
uncovered a crucial situation not previously explored, where both Reduction Bounds perform very poorly but for different reasons. Under many configurations, token bucket traffic policers can suddenly start discarding a large fraction of the traffic when tokens are depleted, without any warning to the end systems. The transport congestion control has no opportunity to measure the token rate and sets ssthresh based on the previously observed path performance. This value for ssthresh may cause a data rate that is substantially larger than the token replenishment rate, causing high loss. Under these conditions, both Reduction Bounds perform very poorly. PRR-CRB is too timid, sometimes causing very long recovery times at smaller than necessary windows, and PRR-SSRB is too aggressive, often causing many retransmissions to be lost for multiple rounds. Both cases lead to prolonged recovery, decimating application latency and/or goodput. </t>
<t>Investigating these environments led to the development of a "SafeACK" heuristic to dynamically switch between Reduction Bounds: by default, conservatively use PRR-CRB and only switch to PRR-SSRB when ACKs indicate the recovery is making good progress (SND.UNA is advancing without detecting any new losses). The SafeACK heuristic was experimented with in Google's Content Delivery Network (CDN) <xref target="Flach2016policing"/> and has been implemented in Linux TCP since 2015. </t>
<t>This SafeACK heuristic is only invoked where losses, application-limited behavior, or other events cause the current estimate of in-flight data to fall below ssthresh. The high loss rates that make the heuristic essential are only common in the presence of heavy losses, such as traffic policers <xref target="Flach2016policing"/>. In these environments, the heuristic performs better than either bound by itself. </t>
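<t>A minimal sketch of the heuristic's per-ACK condition, assuming the loss detection algorithm reports whether the ACK newly marked any data as lost (the precise condition is given in the "Algorithm" section):</t>
<sourcecode type="pseudocode"><![CDATA[
// Sketch only: SafeACK selects the Reduction Bound for this ACK.
SafeACK = (ACK advances SND.UNA) AND (no data newly marked lost)
// SafeACK true  -> use PRR-SSRB (recovery progressing well)
// SafeACK false -> use PRR-CRB  (strictly packet conserving)
]]></sourcecode>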
<t>Another PRR algorithm change improves the sending process when the sender enters recovery after a large portion of sequence space has been SACKed. This scenario could happen when the sender has previously detected reordering, for example, by using <xref target="RFC8985"/>. In the previous version of PRR, RecoverFS did not properly account for sequence ranges SACKed before entering fast recovery, which caused PRR to initially send too slowly. With the change, PRR properly accounts for sequence ranges SACKed before entering fast recovery.</t>
<t>Yet another change is to force a fast retransmit upon the first ACK that triggers the recovery. Previously, PRR might not allow a fast retransmit (i.e., SndCnt is 0) on the first ACK in fast recovery, depending on the loss situation. Forcing a fast retransmit is important to maintain the ACK clock and avoid potential retransmission timeout (RTO) events. The forced fast retransmit only happens once during the entire recovery and still follows the packet conservation principles in PRR. This heuristic has been implemented since the first widely deployed TCP PRR implementation in 2011 <xref target="First_TCP_PRR" />. </t>
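<t>A minimal sketch of one plausible formulation of this rule (SndCnt and SMSS are defined above; prr_out, the total bytes sent so far in the recovery, is defined in the "Algorithm" section, which contains the normative pseudocode):</t>
<sourcecode type="pseudocode"><![CDATA[
// Sketch only: on the ACK that triggers fast recovery, make sure at
// least one segment can be (re)transmitted, exactly once per episode.
if (prr_out == 0 AND SndCnt == 0)
    SndCnt = SMSS
]]></sourcecode>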
<t> In another change, upon exiting recovery, a data sender sets cwnd to ssthresh. This is important for robust performance. Without setting cwnd to ssthresh at the end of recovery and with application-limited sender behavior and some loss patterns, cwnd could end fast recovery well below ssthresh, leading to bad performance. The performance could, in some cases, be worse than <xref target="RFC6675" /> recovery, which simply sets cwnd to ssthresh at the start of recovery. This behavior of setting cwnd to ssthresh at the end of recovery has been implemented since the first widely deployed TCP PRR implementation in 2011 <xref target="First_TCP_PRR" /> and is similar to <xref target="RFC6675" />, which specifies setting cwnd to ssthresh at the start of recovery. </t>
<!--[rfced] To avoid awkward hyphenation of an RFC citation, may we
rephrase the latter part of this sentence as follows?
Original:
Since [RFC6937] was written, PRR has also been adapted to perform
multiplicative window reduction for non-loss based congestion control
algorithms, such as for [RFC3168] style Explicit Congestion
Notification (ECN).
Perhaps:
Since [RFC6937] was written, PRR has also been adapted to perform
multiplicative window reduction for non-loss-based congestion control
algorithms, such as for Explicit Congestion Notification (ECN) as
described in [RFC3168].
-->
<t>
Since <xref target="RFC6937" /> was written, PRR has also been adapted to perform multiplicative window reduction for non-loss-based congestion control algorithms, such as for <xref target="RFC3168" /> style Explicit Congestion Notification (ECN). This can be done by using some parts of the loss recovery state machine (in particular, the RecoveryPoint from <xref target="RFC6675" />) to invoke the PRR ACK processing for exactly one round trip worth of ACKs. However, note that using PRR for cwnd reductions for ECN <xref target="RFC3168" /> has been observed, with some approaches to Active Queue Management (AQM), to cause an excess cwnd reduction during ECN-triggered congestion episodes, as noted in <xref target="VCC" />.
</t>
</section>
<section title="Relationships to other standards"> <section><name>Relationships to Other Standards</name>
<t>PRR <bcp14>MAY</bcp14> be used in conjunction with any congestion control algorithm that intends to make a multiplicative decrease in its sending rate over approximately the time scale of one round-trip time, as long as the current volume of in-flight data is limited by a congestion window (cwnd) and the target volume of in-flight data during that reduction is a fixed value given by ssthresh. In particular, PRR is applicable to both Reno <xref target="RFC5681" /> and CUBIC <xref target="RFC9438" /> congestion control. PRR is described as a modification to "A Conservative Loss Recovery Algorithm Based on Selective Acknowledgment (SACK) for TCP" <xref target="RFC6675" />. It is most accurate with SACK <xref target="RFC2018" /> but does not require SACK.</t>
<t>PRR can be used in conjunction with a wide array of loss detection algorithms. This is because PRR does not have any dependencies on the details of how a loss detection algorithm estimates which packets have been delivered and which packets have been lost. Upon the reception of each ACK, PRR simply needs the loss detection algorithm to communicate how many packets have been marked as lost and how many packets have been marked as delivered. Thus, PRR <bcp14>MAY</bcp14> be used in conjunction with the loss detection algorithms specified or described in the following documents: Reno <xref target="RFC5681" />, NewReno <xref target="RFC6582" />, SACK <xref target="RFC6675" />, Forward Acknowledgment (FACK) <xref target="FACK" />, and Recent Acknowledgment Tail Loss Probe (RACK-TLP) <xref target="RFC8985" />. Because of the performance properties of RACK-TLP, including resilience to tail loss, reordering, and lost retransmissions, it is <bcp14>RECOMMENDED</bcp14> that PRR is implemented together with RACK-TLP loss recovery <xref target="RFC8985"/>.
</t>
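<t>A minimal sketch of this interface, with hypothetical variable names chosen for illustration (PRR does not depend on how the loss detection algorithm derives these values):</t>
<sourcecode type="pseudocode"><![CDATA[
// Per incoming ACK, the loss detection algorithm reports
// (names are illustrative):
//   newly_delivered   // data newly cumulatively ACKed or SACKed
//   newly_lost        // data newly marked as lost
// PRR uses only these totals to update DeliveredData and its
// inflight estimate; no other details of the detector are needed.
DeliveredData = newly_delivered
]]></sourcecode>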
<t>The SafeACK heuristic came about as a result of robust Lost Retransmission Detection under development in an early precursor to <xref target="RFC8985"/>. Without Lost Retransmission Detection, policers that cause very high loss rates are at very high risk of causing retransmission timeouts because Reno <xref target="RFC5681" />, CUBIC <xref target="RFC9438" />, and <xref target="RFC6675" /> can send retransmissions significantly above the policed rate. </t>
</section>
<section title="Algorithm"> <section><name>Algorithm</name>
<section title="Initialization Steps"> <section><name>Initialization Steps</name>
<t>
At the beginning of a congestion control response episode initiated by the congestion control algorithm, a data sender using PRR <bcp14>MUST</bcp14> initialize the PRR state.</t>
<t>The timing of the start of a congestion control response episode is entirely up to the congestion control algorithm, and (for example) could correspond to the start of a fast recovery episode, or a once-per-round-trip reduction when lost retransmits or lost original transmissions are detected after fast recovery is already in progress.</t>
<t>The PRR initialization allows a congestion control algorithm, CongCtrlAlg(), that might set ssthresh to something other than FlightSize/2 (including, e.g., CUBIC <xref target="RFC9438" />). </t>
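<t>For illustration, the congestion control algorithm plugs into PRR only through the ssthresh value returned by CongCtrlAlg(); the following values are non-normative sketches of what the cited algorithms specify:</t>
<sourcecode type="pseudocode"><![CDATA[
// Illustrative examples of CongCtrlAlg():
//   Reno  [RFC5681]: ssthresh = MAX(FlightSize / 2, 2 * SMSS)
//   CUBIC [RFC9438]: ssthresh = cwnd * 0.7
]]></sourcecode>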
<t> A key step of PRR initialization is computing Recovery Flight Size (RecoverFS), the number of bytes the data sender estimates might possibly be delivered over the course of the PRR episode. This can be thought of as the sum of the following values at the start of the episode: inflight, the bytes cumulatively acknowledged in the ACK triggering recovery, the bytes SACKed in the ACK triggering recovery, and the bytes between SND.UNA and SND.NXT that have been marked lost. The RecoverFS includes losses because losses are marked using heuristics, so some packets previously marked as lost may ultimately be delivered (without being retransmitted) during recovery. PRR uses RecoverFS to compute a smooth sending rate. Upon entering fast recovery, PRR initializes RecoverFS, and RecoverFS remains constant during a given fast recovery episode.</t>
<t>The full sequence of PRR algorithm initialization steps is as follows:</t>
<sourcecode type="pseudocode"><![CDATA[
ssthresh = CongCtrlAlg()      // Target flight size in recovery
prr_delivered = 0             // Total bytes delivered in recovery
prr_out = 0                   // Total bytes sent in recovery
RecoverFS = SND.NXT - SND.UNA
// Bytes SACKed before entering recovery will not be
// marked as delivered during recovery:
RecoverFS -= (bytes SACKed in scoreboard)
// Include the (common) case of selectively ACKed bytes:
RecoverFS += (bytes newly SACKed)
// Include the (rare) case of cumulatively ACKed bytes:
RecoverFS += (bytes newly cumulatively acknowledged)
]]></sourcecode> ]]></sourcecode>
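<t>The following non-normative Python sketch illustrates one way the initialization above might look in an implementation. The PRRState class and the scoreboard-derived inputs (bytes_sacked, newly_sacked, newly_acked) are illustrative names chosen for this example, not part of this specification.</t>
<sourcecode type="python"><![CDATA[
# Non-normative sketch of PRR state initialization.
# Field and parameter names are illustrative, not normative.

class PRRState:
    def __init__(self, ssthresh, snd_nxt, snd_una,
                 bytes_sacked, newly_sacked, newly_acked):
        self.ssthresh = ssthresh       # target flight size in recovery
        self.prr_delivered = 0         # total bytes delivered in recovery
        self.prr_out = 0               # total bytes sent in recovery
        # Estimate of the data that might be delivered in this episode:
        recover_fs = snd_nxt - snd_una
        recover_fs -= bytes_sacked     # SACKed before recovery started
        recover_fs += newly_sacked     # SACKed by the triggering ACK
        recover_fs += newly_acked      # cumulatively ACKed by that ACK
        self.recover_fs = recover_fs   # constant for the whole episode
]]></sourcecode>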
</section>
<section><name>Per-ACK Steps</name>
<t>On every ACK starting or during fast recovery, excluding the ACK that concludes a PRR episode, PRR executes the following steps.</t>
<t>First, the sender computes DeliveredData, the data sender's best estimate of the total number of bytes that the current ACK indicates have been delivered to the receiver since the previously received ACK. With SACK, DeliveredData can be computed precisely as the change in SND.UNA plus the (signed) change in the number of bytes SACKed in the scoreboard. Thus, in the special case when there are no SACKed sequence ranges in the scoreboard before or after the ACK, DeliveredData is simply the change in SND.UNA. In recovery without SACK, DeliveredData is estimated to be 1 SMSS on receiving a duplicate ACK, and on a subsequent partial or full ACK DeliveredData is the change in SND.UNA, minus 1 SMSS for each preceding duplicate ACK. Note that without SACK, a poorly behaved receiver that returns extraneous duplicate ACKs (as described in <xref target='Savage99' />) could attempt to artificially inflate DeliveredData. As a mitigation, if not using SACK, then PRR disallows incrementing DeliveredData when the total bytes delivered in a PRR episode would exceed the estimated data outstanding upon entering recovery (RecoverFS).</t>
<t>Next, the sender computes inflight, the data sender's best estimate of the number of bytes that are in flight in the network. To calculate inflight, connections with SACK enabled and using loss detection <xref target="RFC6675"/> <bcp14>MAY</bcp14> use the "pipe" algorithm as specified in <xref target="RFC6675"/>. SACK-enabled connections using RACK-TLP loss detection <xref target="RFC8985"/> or other loss detection algorithms <bcp14>MUST</bcp14> calculate inflight by starting with SND.NXT - SND.UNA, subtracting out bytes SACKed in the scoreboard, subtracting out bytes marked lost in the scoreboard, and adding bytes in the scoreboard that have been retransmitted since they were last marked lost. For non-SACK-enabled connections, instead of subtracting out bytes SACKed in the SACK scoreboard, senders <bcp14>MUST</bcp14> subtract out min(RecoverFS, 1 SMSS for each preceding duplicate ACK in the fast recovery episode); the min() with RecoverFS protects against misbehaving receivers <xref target='Savage99' />.</t>
<t>Next, the sender computes SafeACK, a local boolean variable indicating that the current ACK reported good progress. SafeACK is true only when the ACK has cumulatively acknowledged new data and the ACK does not indicate further losses. For example, an ACK triggering a "rescue" retransmission (<xref target="RFC6675" section="4"/>, NextSeg() condition 4) may indicate further losses. When both conditions hold, the recovery is making good progress and the sender can send more aggressively, increasing inflight, if appropriate.</t>
<t>Finally, the sender uses DeliveredData, inflight, SafeACK, and other PRR state to compute SndCnt, a local variable indicating exactly how many bytes should be sent in response to each ACK, and then uses SndCnt to update cwnd.</t>
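<t>As a non-normative illustration of the two estimates described above, the following Python sketch computes DeliveredData and a RACK-TLP-style inflight estimate from scoreboard-derived quantities. The parameter names are assumptions of this example, not terms defined by this specification.</t>
<sourcecode type="python"><![CDATA[
# Non-normative sketch: per-ACK estimates for a SACK-enabled connection.
# The scoreboard-derived inputs are illustrative names, not normative.

def delivered_data(prev_snd_una, snd_una, prev_sacked_bytes, sacked_bytes):
    # Change in SND.UNA plus the (signed) change in SACKed bytes.
    return (snd_una - prev_snd_una) + (sacked_bytes - prev_sacked_bytes)

def estimate_inflight(snd_nxt, snd_una, sacked_bytes, lost_bytes,
                      retrans_since_marked_lost):
    # Start with SND.NXT - SND.UNA, remove SACKed and marked-lost bytes,
    # and add back bytes retransmitted since they were marked lost.
    return ((snd_nxt - snd_una) - sacked_bytes - lost_bytes
            + retrans_since_marked_lost)
]]></sourcecode>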
<t>The full sequence of per-ACK PRR algorithm steps is as follows:</t>
<sourcecode type="pseudocode"><![CDATA[
   if (DeliveredData is 0)
      Return
   prr_delivered += DeliveredData
   inflight = (estimated volume of in-flight data)
   SafeACK = (SND.UNA advances and no further loss indicated)
   if (inflight > ssthresh) {
      // Proportional Rate Reduction
      // This uses integer division, rounding up:
      #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
      SndCnt = DIV_ROUND_UP(prr_delivered * ssthresh, RecoverFS)
               - prr_out
   } else {
      // Reduction Bound: packet conserving by default (PRR-CRB)
      SndCnt = prr_delivered - prr_out
      if (SafeACK) {
         // PRR-SSRB: one extra SMSS when the ACK shows good progress
         SndCnt += SMSS
      }
      // Attempt to catch up, as permitted
      SndCnt = MIN(ssthresh - inflight, SndCnt)
   }
   if (prr_out is 0 AND SndCnt is 0) {
      // Force a fast retransmit upon entering recovery
      SndCnt = SMSS
   }
   cwnd = inflight + SndCnt
]]></sourcecode>
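<t>The following non-normative Python sketch mirrors the per-ACK pseudocode above, using the PRRState object from the earlier sketch. All names are illustrative; the normative description is the pseudocode and prose above.</t>
<sourcecode type="python"><![CDATA[
# Non-normative sketch of the per-ACK PRR computation.
# "state" is the PRRState object from the initialization sketch.

def prr_on_ack(state, delivered_data, inflight, safe_ack, smss):
    if delivered_data == 0:
        return None                       # nothing to do for this ACK
    state.prr_delivered += delivered_data
    if inflight > state.ssthresh:
        # Proportional Rate Reduction: integer division, rounding up.
        numer = state.prr_delivered * state.ssthresh
        sndcnt = ((numer + state.recover_fs - 1) // state.recover_fs
                  - state.prr_out)
    else:
        # Reduction Bound: packet conserving by default (PRR-CRB) ...
        sndcnt = state.prr_delivered - state.prr_out
        if safe_ack:
            sndcnt += smss                # ... plus one SMSS (PRR-SSRB)
        # Attempt to catch up, as permitted.
        sndcnt = min(state.ssthresh - inflight, sndcnt)
    if state.prr_out == 0 and sndcnt == 0:
        sndcnt = smss                     # force a fast retransmit
    return inflight + sndcnt              # new cwnd
]]></sourcecode>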
<t>After the sender computes SndCnt and uses it to update cwnd, the sender transmits more data. Note that the decision of which data to send (e.g., retransmit missing data or send more new data) is out of scope for this document.</t>
</section>
<section><name>Per-Transmit Steps</name>
<t>On any data transmission or retransmission, PRR executes the following:</t>
<sourcecode type="pseudocode"><![CDATA[
   prr_out += (data sent)
]]></sourcecode>
</section>
<section><name>Completion Steps</name>
<t>A PRR episode ends either upon completing fast recovery or before initiating a new PRR episode due to a new congestion control response episode.</t>
<t>On the completion of a PRR episode, PRR executes the following:</t>
<sourcecode type="pseudocode"><![CDATA[
   cwnd = ssthresh
]]></sourcecode>
<t>Note that this step, which sets cwnd to ssthresh, can in some scenarios allow a burst of back-to-back segments into the network.</t>
<t>It is <bcp14>RECOMMENDED</bcp14> that implementations use pacing to reduce the burstiness of data traffic. This recommendation is consistent with current practice to mitigate bursts (e.g., <xref target="I-D.welzl-iccrg-pacing" />), including pacing transmission bursts after restarting from idle.</t>
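<t>As a non-normative illustration of the kind of pacing this recommendation has in mind, the following Python sketch spreads cwnd over roughly one smoothed RTT. The choice of cwnd/SRTT as the pacing rate, and the names used, are assumptions of this example rather than requirements of PRR.</t>
<sourcecode type="python"><![CDATA[
# Non-normative sketch: pace cwnd over roughly one SRTT to avoid
# line-rate bursts (e.g., after cwnd is reset to ssthresh).

def pacing_interval_s(cwnd_bytes, smss_bytes, srtt_s):
    pacing_rate = cwnd_bytes / srtt_s     # bytes per second
    return smss_bytes / pacing_rate       # seconds between segments

# Example: cwnd = 10 * 1460 bytes and SRTT = 100 ms give roughly one
# 1460-byte segment every 10 ms.
]]></sourcecode>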
</section>
</section> <!-- Algorithm -->
<section><name>Properties</name>
<t>The following properties are common to both PRR-CRB and PRR-SSRB, except as noted:</t>
<t>PRR attempts to maintain the sender's ACK clocking across recovery events, including burst losses. By contrast, <xref target="RFC6675" /> can send large, unclocked bursts following burst losses.</t>
<t>Normally, PRR will spread voluntary window reductions out evenly across a full RTT. This has the potential to generally reduce the burstiness of Internet traffic and could be considered a type of soft pacing. Hypothetically, any pacing increases the probability that different flows are interleaved, reducing the opportunity for ACK compression and other phenomena that increase traffic burstiness. However, these effects have not been quantified.</t>
<t>If there are minimal losses, PRR will converge to exactly the target window chosen by the congestion control algorithm. Note that as the sender approaches the end of recovery, prr_delivered will approach RecoverFS and SndCnt will be computed such that prr_out approaches ssthresh.</t>
<t>Implicit window reductions, due to multiple isolated losses during recovery, cause later voluntary reductions to be skipped. For small numbers of losses, the window size ends at exactly the window chosen by the congestion control algorithm.</t>
<t>For burst losses, earlier voluntary window reductions can be undone by sending extra segments in response to ACKs arriving later during recovery. Note that as long as some voluntary window reductions are not undone, and there is no application stall, the final value for inflight will be the same as ssthresh.</t>
<t>PRR using either Reduction Bound improves the situation when there are application stalls, e.g., when the sending application does not queue data for transmission quickly enough or the receiver stops advancing its receive window.
skipping to change at line 483 skipping to change at line 391
opportunities to send due to stalls are treated like banked voluntary window reductions; specifically, they cause prr_delivered - prr_out to be significantly positive. If the application catches up while the sender is still in recovery, the sender will send a partial window burst to grow inflight to catch up to exactly where it would have been had the application never stalled. Although such a burst could negatively impact the given flow or other sharing flows, this is exactly what happens every time there is a partial-RTT application stall while not in recovery. PRR makes partial-RTT stall behavior uniform in all states. Changing this behavior is out of scope for this document.</t>
<t>PRR with Reduction Bound is less sensitive to errors in the inflight estimator. While in recovery, inflight is intrinsically an estimator, using incomplete information to estimate whether un-SACKed segments are actually lost or merely out of order in the network. Under some conditions, inflight can have significant errors; for example, inflight is underestimated when a burst of reordered data is prematurely assumed to be lost and marked for retransmission. If transmissions are regulated directly by inflight, as they are with <xref target="RFC6675" />, a step discontinuity in the inflight estimator causes a burst of data, which cannot be retracted once the inflight estimator is corrected a few ACKs later. For PRR dynamics, inflight merely determines which algorithm, PRR or the Reduction Bound, is used to compute SndCnt from DeliveredData. While inflight is underestimated, the algorithms differ by at most 1 segment per ACK. Once inflight is updated, they converge to the same final window at the end of recovery.</t>
<t>Under all conditions and sequences of events during recovery, PRR-CRB strictly bounds the data transmitted to be equal to or less than the amount of data delivered to the receiver. This Strong Packet Conservation Bound is the most aggressive algorithm that does not lead to additional forced losses in some environments. It has the property that if there is a standing queue at a bottleneck with no cross traffic, the queue will maintain exactly constant length for the duration of the recovery, except for +1/-1 fluctuation due to differences in packet arrival and exit times. See <xref target="conservative" /> for a detailed discussion of this property.</t>
<t>Although the Strong Packet Conservation Bound is very appealing for a number of reasons, earlier measurements (in <xref target="RFC6675" section="6"/>) demonstrate that it is less aggressive and does not perform as well as <xref target="RFC6675" />, which permits bursts of data when there are bursts of losses. PRR-SSRB is a compromise that permits a sender to send one extra segment per ACK, as compared to the Packet Conserving Bound, when the ACK indicates the recovery is making good progress without further losses. From the perspective of a strict Packet Conserving Bound, PRR-SSRB does indeed open the window during recovery; however, it is significantly less aggressive than <xref target="RFC6675" /> in the presence of burst losses. The "half window of silence" experienced by the conservative loss recovery algorithm of <xref target="RFC6675" /> may temporarily reduce queue pressure when congestion control does not reduce the congestion window entering recovery to avoid further losses. The goal of PRR is to minimize the opportunities to lose the self clock by smoothly controlling inflight toward the target set by the congestion control algorithm. It is the responsibility of the congestion control algorithm, not PRR, to avoid a full queue.
</t>
</section>
<section><name>Examples</name>
<t>This section illustrates the PRR and <xref target="RFC6675" /> algorithms by showing their different behaviors for two example scenarios: a connection experiencing either a single loss or a burst of 15 consecutive losses. All cases use bulk data transfers (no application pauses), Reno congestion control <xref target="RFC5681" />, and cwnd = FlightSize = inflight = 20 segments, so ssthresh will be set to 10 at the beginning of recovery. The scenarios use standard Fast Retransmit <xref target="RFC5681" /> and Limited Transmit <xref target="RFC3042" />, so the sender will send two new segments followed by one retransmit in response to the first three duplicate ACKs following the losses.</t>
<t>Each of the diagrams below shows the per-ACK response to the first round trip for the two recovery algorithms when the zeroth segment is lost. The top line ("ack#") indicates the transmitted segment number triggering the ACKs, with an X for the lost segment. The "cwnd" and "inflight" lines indicate the values of cwnd and inflight, respectively, for these algorithms after processing each returning ACK but before further (re)transmission. The "sent" line indicates how much "N"ew or "R"etransmitted data would be sent. Note that the algorithms for deciding which data to send are out of scope of this document.</t>
<figure><artwork><![CDATA[ <figure><artwork><![CDATA[
RFC 6675 RFC 6675
a X 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 a X 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22
c 20 20 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 c 20 20 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10
i 19 19 18 18 17 16 15 14 13 12 11 10 9 9 9 9 9 9 9 9 9 9 i 19 19 18 18 17 16 15 14 13 12 11 10 9 9 9 9 9 9 9 9 9 9
s N N R N N N N N N N N N N s N N R N N N N N N N N N N
PRR PRR
a X 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 a X 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22
skipping to change at line 544 skipping to change at line 466
SndCnt = ceil(prr_delivered * ssthresh / RecoverFS) - prr_out; SndCnt = ceil(prr_delivered * ssthresh / RecoverFS) - prr_out;
cwnd = inflight + SndCnt; cwnd = inflight + SndCnt;
} }
print pkt_num, cwnd, inflight, SndCnt; print pkt_num, cwnd, inflight, SndCnt;
prr_out += SndCnt; prr_out += SndCnt;
inflight += SndCnt; inflight += SndCnt;
} }
} }
--> -->
<t>In this first example, ACK#1 through ACK#19 contain SACKs for the original flight of data, ACK#20 and ACK#21 carry SACKs for the limited transmits triggered by the first and second SACKed segments, and ACK#22 carries the full cumulative ACK covering all data up through the limited transmits. ACK#22 completes the fast recovery episode and thus completes the PRR episode.</t>
<t>Note that both algorithms send the same total amount of data, and both algorithms complete the fast recovery episode with a cwnd matching the ssthresh of 10. <xref target="RFC6675" /> experiences a "half window of silence" while PRR spreads the voluntary window reduction across an entire RTT.</t>
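<t>The shape of the PRR trajectory in this single-loss case can be approximated with a short simulation such as the following non-normative Python sketch. It models only the SACKed round trip (no Limited Transmit, segment-sized counting), so its output illustrates the proportional phase and convergence to ssthresh rather than reproducing the figure above exactly.</t>
<sourcecode type="python"><![CDATA[
# Non-normative sketch: approximate PRR's proportional phase for a
# single loss with cwnd = 20 segments, ssthresh = 10, SMSS = 1 unit.

SMSS = 1
ssthresh = 10
recover_fs = 20
prr_delivered = 0
prr_out = 0
inflight = 19                  # one segment was lost

for ack in range(1, 20):       # one SACK per surviving segment
    prr_delivered += SMSS
    inflight -= SMSS           # the SACKed segment left the network
    if inflight > ssthresh:
        numer = prr_delivered * ssthresh
        sndcnt = (numer + recover_fs - 1) // recover_fs - prr_out
    else:
        sndcnt = max(0, prr_delivered - prr_out)
        sndcnt = min(ssthresh - inflight, sndcnt)
    if prr_out == 0 and sndcnt == 0:
        sndcnt = SMSS          # force the fast retransmit
    cwnd = inflight + sndcnt
    print(ack, cwnd, inflight, sndcnt)
    prr_out += sndcnt
    inflight += sndcnt
# The loop sends roughly one segment per two ACKs and finishes with
# inflight equal to ssthresh.
]]></sourcecode>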
<t>Next, consider an example scenario with the same initial conditions, except that the first 15 packets (0-14) are lost. During the remainder of the lossy round trip, only 5 ACKs are returned to the sender. The following examines each of these algorithms in succession.</t>
<figure><artwork><![CDATA[ <figure><artwork><![CDATA[
RFC 6675 RFC 6675
a X X X X X X X X X X X X X X X 15 16 17 18 19 a X X X X X X X X X X X X X X X 15 16 17 18 19
c 20 20 10 10 10 c 20 20 10 10 10
skipping to change at line 565 skipping to change at line 487
i 19 19 4 9 9 i 19 19 4 9 9
s N N 6R R R s N N 6R R R
PRR PRR
a X X X X X X X X X X X X X X X 15 16 17 18 19 a X X X X X X X X X X X X X X X 15 16 17 18 19
c 20 20 5 5 5 c 20 20 5 5 5
i 19 19 4 4 4 i 19 19 4 4 4
s N N R R R s N N R R R
a: ack#; c: cwnd; i: inflight; s: sent a: ack#; c: cwnd; i: inflight; s: sent
]]></artwork></figure> ]]></artwork></figure>
<t>In this specific situation, <xref target="RFC6675" /> is more aggressive because once Fast Retransmit is triggered (on the ACK for segment 17), the sender immediately retransmits sufficient data to bring inflight up to cwnd. Earlier measurements (in <xref target="RFC6675" section="6"/>) indicate that <xref target="RFC6675" /> significantly outperforms PRR <xref target="RFC6937" /> using only PRR-CRB and some other similarly conservative algorithms that were tested, showing that it is common for the actual losses to exceed the cwnd reduction determined by the congestion control algorithm.</t>
<t>Under such heavy losses, during the first round trip of fast recovery, PRR uses the PRR-CRB to follow the packet conservation principle. Since the total losses bring inflight below ssthresh, data is sent such that the total data transmitted, prr_out, follows the total data delivered to the receiver as reported by returning ACKs. Transmission is controlled by the sending limit, which is set to prr_delivered - prr_out.</t>
<t>While not shown in the figure above, once the fast retransmits sent starting at ACK#17 are delivered and elicit ACKs that advance SND.UNA, PRR enters PRR-SSRB and increases the window by exactly 1 segment per ACK until inflight rises to ssthresh during recovery. For heavy losses when cwnd is large, PRR-SSRB recovers the losses exponentially faster than PRR-CRB. Although increasing the window during recovery seems ill advised, it is important to remember that this is actually less aggressive than permitted by <xref target="RFC6675" />, which sends the same quantity of additional data as a single burst in response to the ACK that triggered Fast Retransmit.</t>
<t>For less severe loss events, where the total losses are smaller than the difference between FlightSize and ssthresh, PRR-CRB and PRR-SSRB are not invoked, since PRR stays in the Proportional Rate Reduction mode.</t>
</section>
<section><name>Adapting PRR to Other Transport Protocols</name>
<t>The main PRR algorithm and Reduction Bounds can be adapted to any transport protocol that can support <xref target="RFC6675" />. In one major implementation (Linux TCP), PRR has been the fast recovery algorithm for its default and supported congestion control modules since its introduction in 2011 <xref target="First_TCP_PRR" />.</t>
<t>The SafeACK heuristic can be generalized as any ACK of a retransmission that does not cause some other segment to be marked for retransmission.</t>
</section>
<section title="Measurement Studies"> <section><name>Measurement Studies</name>
<t> <t>
For <xref target="RFC6937" /> a companion paper <xref target="IMC11" /> evaluate d <xref target="RFC3517" /> and various experimental PRR versions in a large-sca le measurement study. At the time of publication, the legacy algorithms used in that study are no longer present in the code base used in that study, making su ch comparisons difficult without recreating historical algorithms. Readers int erested in the measurement study should review section 5 of <xref target="RFC693 7" /> and the IMC paper <xref target="IMC11" />. For <xref target="RFC6937" />, a companion paper <xref target="IMC11" /> evaluat ed <xref target="RFC3517" /> and various experimental PRR versions in a large-sc ale measurement study. At the time of publication, the legacy algorithms used i n that study are no longer present in the code base used in that study, making s uch comparisons difficult without recreating historical algorithms. Readers in terested in the measurement study should review <xref target="RFC6937" section=" 5"/> and the IMC paper <xref target="IMC11" />.
</t> </t>
</section> </section>
<section title="Operational Considerations"> <section><name>Operational Considerations</name>
<section title="Incremental Deployment"> <section><name>Incremental Deployment</name>
<t> <t>
PRR is incrementally deployable, because it utilizes only existing transport pro tocol mechanisms for data delivery acknowledgment and the detection of lost data . PRR only requires only changes to the transport protocol implementation at the data sender; it does not require any changes at data receivers or in networks. This allows data senders using PRR to work correctly with any existing data rece ivers or networks. PRR does not require any changes to or assistance from router s, switches, or other devices in the network. PRR is incrementally deployable, because it utilizes only existing transport pro tocol mechanisms for data delivery acknowledgment and the detection of lost data . PRR only requires changes to the transport protocol implementation at the data sender; it does not require any changes at data receivers or in networks. This allows data senders using PRR to work correctly with any existing data receivers or networks. PRR does not require any changes to or assistance from routers, sw itches, or other devices in the network.
</t> </t>
</section> </section>
<section title="Fairness"> <section><name>Fairness</name>
<t> <t>
PRR is designed to maintain the fairness properties of the congestion control al gorithm with which it is deployed. PRR only operates during a congestion control response episode, such as fast recovery or response to <xref target="RFC3168" / > ECN, and only makes short-term, per-acknowledgment decisions to smoothly regul ate the volume of in-flight data during an episode such that at the end of the episode it will be as close as possible to the slow start threshold (ssthresh), as determined by the congestion control algorithm. PRR does not modify the conge stion control cwnd increase or decrease mechanisms outside of congestion control response episodes. PRR is designed to maintain the fairness properties of the congestion control al gorithm with which it is deployed. PRR only operates during a congestion control response episode, such as fast recovery or response to ECN <xref target="RFC316 8" />, and only makes short-term, per-acknowledgment decisions to smoothly regul ate the volume of in-flight data during an episode such that at the end of the episode it will be as close as possible to the slow start threshold (ssthresh), as determined by the congestion control algorithm. PRR does not modify the conge stion control cwnd increase or decrease mechanisms outside of congestion control response episodes.
</t> </t>
</section> </section>
<section title="Protecting the Network Against Excessive Queuing and Packet Loss "> <section><name>Protecting the Network Against Excessive Queuing and Packet Loss< /name>
<t>Over long time scales, PRR is designed to maintain the queuing and packet los s properties of the congestion control algorithm with which it is deployed. As n oted above, PRR only operates during a congestion control response episode, such as fast recovery or response to ECN, and only makes short-term, per-acknowledgm ent decisions to smoothly regulate the volume of in-flight data during an episo de such that at the end of the episode it will be as close as possible to the sl ow start threshold (ssthresh), as determined by the congestion control algorithm . </t> <t>Over long time scales, PRR is designed to maintain the queuing and packet los s properties of the congestion control algorithm with which it is deployed. As n oted above, PRR only operates during a congestion control response episode, such as fast recovery or response to ECN, and only makes short-term, per-acknowledgm ent decisions to smoothly regulate the volume of in-flight data during an episo de such that at the end of the episode it will be as close as possible to the sl ow start threshold (ssthresh), as determined by the congestion control algorithm . </t>
<t> Over short time scales, PRR is designed to cause lower packet loss rates tha n preceding approaches like <xref target="RFC6675" />. At a high level, PRR is i nspired by the packet conservation principle, and, as much as possible, PRR reli es on the self clock process. By contrast, with <xref target="RFC6675" /> a sing le ACK carrying a SACK option that implies a large quantity of missing data can cause a step discontinuity in the pipe estimator, which can cause Fast Retransmi t to send a large burst of data that is much larger than the volume of delivered data. PRR avoids such bursts by basing transmission decisions on the volume of delivered data rather than the volume of lost data. Furthermore, as noted above, PRR-SSRB is less aggressive than <xref target="RFC6675" /> (transmitting fewer segments or taking more time to transmit them), and it outperforms due to the lo wer probability of additional losses during recovery.</t> <t> Over short time scales, PRR is designed to cause lower packet loss rates tha n preceding approaches like <xref target="RFC6675" />. At a high level, PRR is i nspired by the packet conservation principle, and as much as possible, PRR relie s on the self clock process. By contrast, with <xref target="RFC6675" />, a sing le ACK carrying a SACK option that implies a large quantity of missing data can cause a step discontinuity in the pipe estimator, which can cause Fast Retransmi t to send a large burst of data that is much larger than the volume of delivered data. PRR avoids such bursts by basing transmission decisions on the volume of delivered data rather than the volume of lost data. Furthermore, as noted above, PRR-SSRB is less aggressive than <xref target="RFC6675" /> (transmitting fewer segments or taking more time to transmit them), and it outperforms due to the lo wer probability of additional losses during recovery.</t>
</section> </section>
</section> </section>
<section title="Acknowledgements"> <section anchor="IANA"><name>IANA Considerations</name>
<t>This document is based in part on previous work by Janey C. Hoe (see section <t>This document has no IANA actions.</t>
3.2, "Recovery from Multiple Packet Losses", of <xref target="Hoe96Startup" />)
and Matt Mathis, Jeff Semke, and Jamshid Mahdavi <xref target="RHID" />, and inf
luenced by several discussions with John Heffner.</t>
<t>Monia Ghobadi and Sivasankar Radhakrishnan helped analyze the experiments. Il
po Jarvinen reviewed the initial implementation. Mark Allman, Richard Scheffeneg
ger, Markku Kojo, Mirja Kuehlewind, Gorry Fairhurst, Russ Housley, Paul Aitken,
Daniele Ceccarelli, and Mohamed Boucadair improved the document through their in
sightful reviews and suggestions.</t>
</section>
<section anchor="IANA">
<!-- All drafts are required to have an IANA considerations section. See RFC
8126 for a guide.-->
<name>IANA Considerations</name>
<t>This memo includes no request to IANA.</t>
</section> </section>
<section title="Security Considerations"> <section><name>Security Considerations</name>
<t>PRR does not change the risk profile for transport protocols.</t> <t>PRR does not change the risk profile for transport protocols.</t>
<t>Implementers that change PRR from counting bytes to segments have to be cauti ous about the effects of ACK splitting attacks <xref target='Savage99' />, where the receiver acknowledges partial segments for the purpose of confusing the sen der's congestion accounting.</t> <t>Implementers that change PRR from counting bytes to segments have to be cauti ous about the effects of ACK splitting attacks <xref target='Savage99' />, where the receiver acknowledges partial segments for the purpose of confusing the sen der's congestion accounting.</t>
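<t>The following non-normative Python sketch illustrates why byte counting is robust against ACK splitting: however finely a receiver splits its acknowledgments, the sum of DeliveredData is bounded by the bytes actually sent, whereas naive per-ACK crediting of one SMSS is not. The scenario and names are illustrative only.</t>
<sourcecode type="python"><![CDATA[
# Non-normative sketch: ACK splitting inflates per-ACK crediting but
# not byte counting.  One 1460-byte segment is ACKed in 10 slivers.

SMSS = 1460
acked_edges = [146 * i for i in range(1, 11)]   # 146, 292, ..., 1460

bytes_credited = 0
segments_credited = 0
prev_una = 0
for una in acked_edges:
    bytes_credited += una - prev_una            # DeliveredData in bytes
    segments_credited += SMSS                   # naive per-ACK crediting
    prev_una = una

print(bytes_credited)      # 1460: bounded by what was actually sent
print(segments_credited)   # 14600: inflated tenfold by ACK splitting
]]></sourcecode>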
</section>
</middle>
<back>
<references title="Normative References"> <displayreference target="I-D.mathis-tcp-ratehalving" to="TCP-RH"/>
<xi:include href="https://xml2rfc.ietf.org/public/rfc/bibxml/reference.RFC.1191. <displayreference target="I-D.welzl-iccrg-pacing" to="PACING"/>
xml" /> <references><name>References</name>
<xi:include href="https://xml2rfc.ietf.org/public/rfc/bibxml/reference.RFC.2018. <references><name>Normative References</name>
xml" /> <xi:include href="https://bib.ietf.org/public/rfc/bibxml/reference.RFC.1191.xml"
<xi:include href="https://xml2rfc.ietf.org/public/rfc/bibxml/reference.RFC.2119. />
xml" /> <xi:include href="https://bib.ietf.org/public/rfc/bibxml/reference.RFC.2018.xml"
<xi:include href="https://xml2rfc.ietf.org/public/rfc/bibxml/reference.RFC.6582. />
xml" /> <xi:include href="https://bib.ietf.org/public/rfc/bibxml/reference.RFC.2119.xml"
<xi:include href="https://xml2rfc.ietf.org/public/rfc/bibxml/reference.RFC.4821. />
xml" /> <xi:include href="https://bib.ietf.org/public/rfc/bibxml/reference.RFC.6582.xml"
<xi:include href="https://xml2rfc.ietf.org/public/rfc/bibxml/reference.RFC.5681. />
xml" /> <xi:include href="https://bib.ietf.org/public/rfc/bibxml/reference.RFC.4821.xml"
<xi:include href="https://xml2rfc.ietf.org/public/rfc/bibxml/reference.RFC.6675. />
xml" /> <xi:include href="https://bib.ietf.org/public/rfc/bibxml/reference.RFC.5681.xml"
<xi:include href="https://xml2rfc.ietf.org/public/rfc/bibxml/reference.RFC.8174. />
xml" /> <xi:include href="https://bib.ietf.org/public/rfc/bibxml/reference.RFC.6675.xml"
<xi:include href="https://xml2rfc.ietf.org/public/rfc/bibxml/reference.RFC.8201. />
xml" /> <xi:include href="https://bib.ietf.org/public/rfc/bibxml/reference.RFC.8174.xml"
<xi:include href="https://xml2rfc.ietf.org/public/rfc/bibxml/reference.RFC.8985. />
xml" /> <xi:include href="https://bib.ietf.org/public/rfc/bibxml/reference.RFC.8201.xml"
<xi:include href="https://xml2rfc.ietf.org/public/rfc/bibxml/reference.RFC.9293. />
xml" /> <xi:include href="https://bib.ietf.org/public/rfc/bibxml/reference.RFC.8985.xml"
<xi:include href="https://xml2rfc.ietf.org/public/rfc/bibxml/reference.RFC.9438. />
xml" /> <xi:include href="https://bib.ietf.org/public/rfc/bibxml/reference.RFC.9293.xml"
/>
<xi:include href="https://bib.ietf.org/public/rfc/bibxml/reference.RFC.9438.xml"
/>
</references> </references>
<references><name>Informative References</name>
<xi:include href="https://bib.ietf.org/public/rfc/bibxml/reference.RFC.3042.xml"/>
<xi:include href="https://bib.ietf.org/public/rfc/bibxml/reference.RFC.3168.xml"/>
<xi:include href="https://bib.ietf.org/public/rfc/bibxml/reference.RFC.3517.xml"/>
<xi:include href="https://bib.ietf.org/public/rfc/bibxml/reference.RFC.6937.xml"/>
<reference anchor="First_TCP_PRR" target="https://git.kernel.org/pub/scm/linux/ kernel/git/torvalds/linux.git/commit/?id=a262f0cdf1f2916ea918dc329492abb5323d9a6 c" quoteTitle="true"> <reference anchor="First_TCP_PRR" target="https://git.kernel.org/pub/scm/linux/ kernel/git/torvalds/linux.git/commit/?id=a262f0cdf1f2916ea918dc329492abb5323d9a6 c" quoteTitle="true">
<front> <front>
<title>Proportional Rate Reduction for TCP.</title> <title>Proportional Rate Reduction for TCP.</title>
<author> <author>
<organization showOnFrontPage="true"/> <organization showOnFrontPage="true"/>
</author> </author>
<date month="August" year="2011"/> <date month="August" year="2011"/>
</front> </front>
<refcontent>commit a262f0cdf1f2916ea918dc329492abb5323d9a6c</refcontent> <refcontent>commit a262f0cdf1f2916ea918dc329492abb5323d9a6c</refcontent>
skipping to change at line 689 skipping to change at line 653
<organization />
</author>
<author initials='Y' surname='Cheng' fullname='Yuchung Cheng'>
<organization />
</author>
<author initials="M" surname="Ghobadi" fullname="Monia Ghobadi">
<organization />
</author>
<date month='November' year='2011' />
</front>
<refcontent>IMC '11: Proceedings of the 2011 ACM SIGCOMM Conference on Internet Measurement Conference, pp. 155-170</refcontent>
<seriesInfo name="DOI" value="10.1145/2068816.2068832"/>
</reference>
<reference anchor='Flach2016policing'>
<front> <front>
<title>An Internet-Wide Analysis of Traffic Policing</title> <title>An Internet-Wide Analysis of Traffic Policing</title>
<author initials='T' surname='Flach' fullname='Tobias Flach'> <author initials='T' surname='Flach' fullname='Tobias Flach'>
<organization /></author> <organization /></author>
<author initials='P' surname='Papageorge' fullname='Pavlos Papageorge'> <author initials='P' surname='Papageorge' fullname='Pavlos Papageorge'>
<organization /></author> <organization /></author>
<author initials='A' surname='Terzis' fullname='Andreas Terzis'> <author initials='A' surname='Terzis' fullname='Andreas Terzis'>
<organization /></author> <organization /></author>
<author initials='L' surname='Pedrosa' fullname='Luis Pedrosa'> <author initials='L' surname='Pedrosa' fullname='Luis Pedrosa'>
<organization /></author> <organization /></author>
<author initials='Y' surname='Cheng' fullname='Yuchung Cheng'> <author initials='Y' surname='Cheng' fullname='Yuchung Cheng'>
<organization /></author> <organization /></author>
<author initials='T' surname='Karim' fullname='Tayeb Karim'>
<organization /></author> <organization /></author>
<author initials='E' surname='Katz-Bassett' fullname='Ethan Katz-Bassett'>
<organization /></author> <organization /></author>
<author initials='R' surname='Govindan' fullname='R. Govindan'> <author initials='R' surname='Govindan' fullname='R. Govindan'>
<organization /></author> <organization /></author>
<date month='August' year='2016' /> <date month='August' year='2016' />
</front> </front>
<seriesInfo name="ACM SIGCOMM" value='SIGCOMM2016' /> <refcontent>SIGCOMM '16: Proceedings of the 2016 ACM SIGCOMM Conference, pp. 468
-482</refcontent>
<seriesInfo name="DOI" value="10.1145/2934872.2934873"/>
</reference> </reference>
<reference anchor='Hoe96Startup'> <reference anchor='Hoe96Startup'>
<front> <front>
<title>Improving the Start-up Behavior of a Congestion Control Scheme for TCP</title>
<author initials='J' surname='Hoe' fullname='Janey C. Hoe'> <author initials='J' surname='Hoe' fullname='Janey C. Hoe'>
<organization /></author> <organization /></author>
<date month='August' year='1996' /> <date month='August' year='1996' />
</front> </front>
<seriesInfo name="ACM SIGCOMM" value='SIGCOMM1996' /> <refcontent>SIGCOMM '96: Conference Proceedings on Applications, Technologies, A
rchitectures, and Protocols for Computer Communications, pp. 270-280</refcontent
>
<seriesInfo name="DOI" value="10.1145/248157.248180"/>
</reference> </reference>
<reference anchor='FACK' target='https://dl.acm.org/doi/pdf/10.1145/248157.24818 1'> <reference anchor='FACK' target='https://dl.acm.org/doi/pdf/10.1145/248157.24818 1'>
<front> <front>
<title>Forward Acknowledgment: Refining TCP Congestion Control</title> <title>Forward Acknowledgment: Refining TCP Congestion Control</title>
<author initials='M.' surname='Mathis' fullname='Matthew Mathis'> <author initials='M.' surname='Mathis' fullname='Matthew Mathis'>
<organization /></author> <organization /></author>
<author initials='J.' surname='Mahdavi' fullname='Jamshid Mahdavi'> <author initials='J.' surname='Mahdavi' fullname='Jamshid Mahdavi'>
<organization /></author> <organization /></author>
<date month='August' year='1996' /> <date month='August' year='1996' />
</front> </front>
<refcontent>ACM SIGCOMM Computer Communication Review, vol. 26, no. 4, pp. 281-291</refcontent>
<seriesInfo name="DOI" value="10.1145/248157.248181"/>
</reference>
<!-- draft-mathis-tcp-ratehalving (expired) -->
<xi:include href="https://datatracker.ietf.org/doc/bibxml3/reference.I-D.mathis-tcp-ratehalving.xml"/>
<reference anchor="I-D.welzl-iccrg-pacing" target="https://datatracker.ietf.org/ <!-- [I-D.welzl-iccrg-pacing]
doc/html/draft-welzl-iccrg-pacing"> draft-welzl-iccrg-pacing-03
<front> IESG State: I-D Exists as of 10/24/25
<title>Pacing in Transport Protocols</title> -->
<author initials="M." surname="Welzl" fullname="Michael Welzl"> <xi:include href="https://datatracker.ietf.org/doc/bibxml3/reference.I-D.welzl-i
<organization>University of Oslo</organization> ccrg-pacing.xml"/>
</author>
<author initials="W." surname="Eddy" fullname="Wesley Eddy">
<organization>MTI Systems</organization>
</author>
<author initials="V." surname="Goel" fullname="Vidhi Goel">
<organization>Apple Inc.</organization>
</author>
<author initials="M." surname="Txen" fullname="Michael Txen">
<organization>Mnster University of Applied Sciences</organization>
</author>
<date month="March" day="3" year="2025"/>
<abstract>
<t> Applications or congestion control mechanisms can produce bursty traffic whi
ch can cause unnecessary queuing and packet loss. To reduce the burstiness of tr
affic, the concept of evenly spacing out the traffic from a data sender over a r
ound-trip time known as "pacing" has been used in many transport protocol implem
entations. This document gives an overview of pacing and how some known pacing i
mplementations work. </t>
</abstract>
</front>
<seriesInfo name="Internet-Draft" value="draft-welzl-iccrg-pacing"/>
</reference>
<reference anchor='VCC' target='http://www.ee.technion.ac.il/~isaac/p/sigcomm16_ vcc_extended.pdf'> <reference anchor='VCC' target='http://www.ee.technion.ac.il/~isaac/p/sigcomm16_ vcc_extended.pdf'>
<front> <front>
<title>Virtualized Congestion Control (Extended Version)</title> <title>Virtualized Congestion Control (Extended Version)</title>
<author initials='B' surname='Cronkite-Ratcliff' fullname='Bryce Cronkite-Ratcli ff'></author> <author initials='B' surname='Cronkite-Ratcliff' fullname='Bryce Cronkite-Ratcli ff'></author>
<author initials='A' surname='Bergman' fullname='Aran Bergman'></author> <author initials='A' surname='Bergman' fullname='Aran Bergman'></author>
<author initials='S' surname='Vargaftik' fullname='Shay Vargaftik'></author> <author initials='S' surname='Vargaftik' fullname='Shay Vargaftik'></author>
<author initials='M' surname='Ravi' fullname='Madhusudhan Ravi'></author> <author initials='M' surname='Ravi' fullname='Madhusudhan Ravi'></author>
<author initials='N' surname='McKeown' fullname='Nick McKeown'></author> <author initials='N' surname='McKeown' fullname='Nick McKeown'></author>
<author initials='I' surname='Abraham' fullname='Ittai Abraham'></author> <author initials='I' surname='Abraham' fullname='Ittai Abraham'></author>
<author initials='I' surname='Keslassy' fullname='Isaac Keslassy'></author> <author initials='I' surname='Keslassy' fullname='Isaac Keslassy'></author>
<date year='2016' month='August' /> <date year='2016' month='August' />
</front> </front>
<seriesInfo name="DOI" value="10.1145/2934872.2934889"/>
<refcontent>SIGCOMM '16: Proceedings of the 2016 ACM SIGCOMM Conference, pp. 230-243</refcontent>
</reference> </reference>
<!-- REMOVED <!-- REMOVED
<reference anchor='RHweb' target='http://www.psc.edu/networking/papers/FACKnotes /current/'> <reference anchor='RHweb' target='http://www.psc.edu/networking/papers/FACKnotes /current/'>
<front> <front>
<title>TCP Rate-Halving with Bounding Parameters</title> <title>TCP Rate-Halving with Bounding Parameters</title>
<author initials='M.' surname='Mathis' fullname='Matthew Mathis'> <author initials='M.' surname='Mathis' fullname='Matthew Mathis'>
<organization /></author> <organization /></author>
<author initials='J.' surname='Mahdavi' fullname='Jamshid Mahdavi'> <author initials='J.' surname='Mahdavi' fullname='Jamshid Mahdavi'>
<organization /></author> <organization /></author>
<reference anchor='CUBIC'> <reference anchor='CUBIC'>
<front> <front>
<title>CUBIC: A new TCP-friendly high-speed TCP variant</title> <title>CUBIC: A new TCP-friendly high-speed TCP variant</title>
<author initials='I.' surname='Rhee' fullname='Injong Rhee'> <author initials='I.' surname='Rhee' fullname='Injong Rhee'>
<organization /></author> <organization /></author>
<author initials='L.' surname='Xu' fullname='L Xu'> <author initials='L.' surname='Xu' fullname='L Xu'>
<organization /></author> <organization /></author>
<date month='February' year='2005' /> <date month='February' year='2005' />
<abstract><t></t></abstract>
</front>
<seriesInfo name='PFLDnet' value='2005' />
</reference> </reference>
--> -->
<!-- Van 88 --> <!-- Van 88 -->
<reference anchor='Jacobson88'> <reference anchor='Jacobson88'>
<front> <front>
<title>Congestion Avoidance and Control</title> <title>Congestion Avoidance and Control</title>
<author initials='V' surname='Jacobson' > <organization /></author> <author initials='V' surname='Jacobson' > <organization /></author>
<date year='1988' month='August' /> <date year='1988' month='August' />
</front> </front>
<seriesInfo name='SIGCOMM Comput. Commun. Rev.' value="18(4)" /> <refcontent>Symposium proceedings on Communications architectures and protocols
(SIGCOMM '88), pp. 314-329</refcontent>
<seriesInfo name="DOI" value="10.1145/52325.52356"/>
</reference> </reference>
<!-- ACK splitting attacks --> <!-- ACK splitting attacks -->
<reference anchor='Savage99'> <reference anchor='Savage99'>
<front> <front>
<title>TCP congestion control with a misbehaving receiver</title> <title>TCP Congestion Control with a Misbehaving Receiver</title>
<author initials='S' surname='Savage' > <organization /></author> <author initials='S' surname='Savage' > <organization /></author>
<author initials='N' surname='Cardwell' > <organization /></author> <author initials='N' surname='Cardwell' > <organization /></author>
<author initials='D' surname='Wetherall' > <organization /></author> <author initials='D' surname='Wetherall' > <organization /></author>
<author initials='T' surname='Anderson' > <organization /></author> <author initials='T' surname='Anderson' > <organization /></author>
<date year='1999' month='October' /> <date year='1999' month='October' />
</front> </front>
<seriesInfo name='SIGCOMM Comput. Commun. Rev.' value="29(5)" /> <refcontent>ACM SIGCOMM Computer Communication Review, vol. 29, no. 5, pp. 71-78
</refcontent>
<seriesInfo name="DOI" value="10.1145/505696.505704"/>
</reference> </reference>
<!-- REMOVED <!-- REMOVED
<!- - draft-mathis-tcpm-tcp-laminar (Expired) - -> <!- - draft-mathis-tcpm-tcp-laminar (Expired) - ->
<reference anchor='Laminar'> <reference anchor='Laminar'>
<front> <front>
<title>Laminar TCP and the case for refactoring TCP congestion control</title> <title>Laminar TCP and the case for refactoring TCP congestion control</title>
<author initials='M' surname='Mathis' fullname='Matt Mathis'> <author initials='M' surname='Mathis' fullname='Matt Mathis'>
<organization /> <organization />
</author> </author>
<date month='July' day='16' year='2012' /> <date month='July' day='16' year='2012' />
</front> </front>
<seriesInfo name='Work in' value='Progress' /> <seriesInfo name='Work in' value='Progress' />
</reference> </reference>
--> -->
</references> </references>
</references>
<section anchor="conservative" title="Strong Packet Conservation Bound"> <section anchor="conservative"><name>Strong Packet Conservation Bound</name>
<t> <t>
PRR-CRB is based on a conservative, philosophically pure, and aesthetically appealing Strong Packet Conservation Bound, described here.  Although inspired by the packet conservation principle <xref target="Jacobson88" />, it differs in how it treats segments that are missing and presumed lost.  Under all conditions and sequences of events during recovery, PRR-CRB strictly bounds the data transmitted to be equal to or less than the amount of data delivered to the receiver.
Note that the effects of presumed losses are included in the inflight calculation but do not affect the outcome of PRR-CRB once inflight has fallen below ssthresh.</t>
<t>This Strong Packet Conservation Bound is the most aggressive algorithm that does not lead to additional forced losses in some environments.  It has the property that if there is a standing queue at a bottleneck that is carrying no other traffic, the queue will maintain exactly constant length for the entire duration of the recovery, except for +1/-1 fluctuation due to differences in packet arrival and exit times.  Any less aggressive algorithm will result in a declining queue at the bottleneck.  Any more aggressive algorithm will result in an increasing queue or additional losses if it is a full drop tail queue.</t>
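<t>Before the thought experiment below, the bound itself can be pictured with a small, non-normative sketch.  The counter names follow the spirit of PRR's prr_delivered (cumulative data delivered to the receiver during recovery) and prr_out (cumulative data sent during recovery); the function is an illustrative assumption, not part of the specification.</t>
<sourcecode type="python">
# Non-normative sketch of the Strong Packet Conservation Bound (PRR-CRB).
# prr_delivered: cumulative bytes delivered to the receiver during recovery.
# prr_out:       cumulative bytes transmitted by the sender during recovery.

def crb_send_allowance(prr_delivered: int, prr_out: int) -> int:
    """Most data the sender may transmit now without violating the bound."""
    return max(prr_delivered - prr_out, 0)

# Invariant maintained under every ACK sequence: prr_out never exceeds
# prr_delivered, i.e., data transmitted never exceeds data delivered.
</sourcecode>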
<t>This property is demonstrated with a thought experiment:</t> <t>This property is demonstrated with a thought experiment:</t>
<t> <t>
Imagine a network path that has insignificant delays in both directions, except for the processing time and queue at a single bottleneck in the forward path.  In particular, when a packet is "served" at the head of the bottleneck queue, the following events happen in much less than one bottleneck packet time: the packet arrives at the receiver; the receiver sends an ACK that arrives at the sender; the sender processes the ACK and sends some data; the data is queued at the bottleneck. </t>
<t> <t>
If SndCnt is set to DeliveredData and nothing else is inhibiting sending data,
then clearly the data arriving at the bottleneck queue will exactly replace the
data that was served at the head of the queue, so the queue will have a
constant length.  If the queue is drop tail and full, then the queue will stay
exactly full.  Losses or reordering on the ACK path only cause wider
fluctuations in the queue size but do not raise its peak size, independent of
whether the data is in order or out of order (including loss recovery from an
earlier RTT).  Any more aggressive algorithm that sends additional data will
overflow the drop tail queue and cause loss.  Any less aggressive algorithm
will under-fill the queue.  Therefore, setting SndCnt to DeliveredData is the
most aggressive algorithm that does not cause forced losses in this simple
network.  Relaxing the assumptions (e.g., making delays more authentic and
adding more flows, delayed ACKs, etc.)&nbsp;is likely to increase the
fine-grained fluctuations in queue size but does not change its basic
behavior.</t>
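<t>A minimal, non-normative Python sketch of this thought experiment (the function name and parameters are illustrative assumptions, not part of the specification): serving one packet at the bottleneck immediately yields an ACK, and the sender responds with SndCnt worth of new data.  With exactly one packet sent per packet delivered, the modeled queue stays constant; smaller responses drain it and larger ones grow it, matching the less and more aggressive cases above.</t>
<sourcecode type="python">
# Non-normative model of the single-bottleneck thought experiment.
# All delays other than the bottleneck queue are assumed negligible.

def bottleneck_queue(rounds=20, initial_queue=10.0, sndcnt_per_delivered=1.0):
    """Return the queue length observed after each served packet.

    sndcnt_per_delivered == 1.0 models SndCnt = DeliveredData (the Strong
    Packet Conservation Bound); smaller values model a less aggressive
    sender, larger values a more aggressive one.
    """
    queue = initial_queue
    history = []
    for _ in range(rounds):
        queue -= 1.0                    # bottleneck serves one packet
        queue += sndcnt_per_delivered   # the resulting ACK clocks out new data
        history.append(queue)
    return history

print(bottleneck_queue())                          # constant queue length
print(bottleneck_queue(sndcnt_per_delivered=0.5))  # declining queue
print(bottleneck_queue(sndcnt_per_delivered=1.5))  # growing queue (or loss at a full drop tail queue)
</sourcecode>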
<t>Note that the congestion control algorithm implements a broader notion of optimal that includes appropriately sharing the network.  Typical congestion control algorithms are likely to reduce the data sent relative to the Packet Conserving Bound implemented by PRR, bringing TCP's actual window down to ssthresh.</t>
</section> </section>
<section numbered="false"><name>Acknowledgments</name>
<t>This document is based in part on previous work by <contact fullname="Janey C. Hoe"/> (see "Recovery from Multiple Packet Losses", Section 3.2 of <xref target="Hoe96Startup" />), <contact fullname="Matt Mathis"/>, <contact fullname="Jeff Semke"/>, and <contact fullname="Jamshid Mahdavi"/> <xref target="I-D.mathis-tcp-ratehalving" /> and influenced by several discussions with <contact fullname="John Heffner"/>.</t>
<t><contact fullname="Monia Ghobadi"/> and <contact fullname="Sivasankar Radhakrishnan"/> helped analyze the experiments. <contact fullname="Ilpo Jarvinen"/> reviewed the initial implementation. <contact fullname="Mark Allman"/>, <contact fullname="Richard Scheffenegger"/>, <contact fullname="Markku Kojo"/>, <contact fullname="Mirja Kuehlewind"/>, <contact fullname="Gorry Fairhurst"/>, <contact fullname="Russ Housley"/>, <contact fullname="Paul Aitken"/>, <contact fullname="Daniele Ceccarelli"/>, and <contact fullname="Mohamed Boucadair"/> improved the document through their insightful reviews and suggestions.</t>
</section>
</back> </back>
<!-- [rfced] Some author comments are present in the XML. Please confirm that
no updates related to these comments are outstanding. Note that the
comments will be deleted prior to publication.
-->
<!-- [rfced] Abbreviations
a) FYI - We have added expansions for the following abbreviations
per Section 3.6 of RFC 7322 ("RFC Style Guide"). Please review each
expansion in the document carefully to ensure correctness.
Content Delivery Network (CDN)
Forward Acknowledgment (FACK)
Recent Acknowledgment Tail Loss Probe (RACK-TLP)
b) Both the expansion and the acronym for the following term are used
throughout the document. Would you like to update to use the expansion upon
first usage and the acronym for the rest of the document?
round-trip time (RTT)
-->
<!--[rfced] Throughout the text, the following terminology appears to be used
inconsistently. May we update each to the form on the right?
Fast Retransmit > fast retransmit
limited transmit > Limited Transmit
-->
<!-- [rfced] Please review the "Inclusive Language" portion of the online
Style Guide <https://www.rfc-editor.org/styleguide/part2/#inclusive_language>
and let us know if any changes are needed. Updates of this nature typically
result in more precise language, which is helpful for readers.
Note that our script did not flag any words in particular, but this should
still be reviewed as a best practice.
-->
</rfc> </rfc>