<?xml version="1.0" encoding="UTF-8"?>
<!-- This specifies a new standards-track PRR that obsoletes experimental RFC 6937. -->
<?xml-model href="rfc7991bis.rnc"?>

<!DOCTYPE rfc [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">
]>

<rfc xmlns:xi="http://www.w3.org/2001/XInclude" category="std" consensus="true" docName="draft-ietf-tcpm-prr-rfc6937bis-21" number="9937" ipr="trust200902" obsoletes="6937" updates="" submissionType="IETF" xml:lang="en" tocInclude="true" tocDepth="4" symRefs="true" sortRefs="true"
      version="3"
>

 <!-- ***** FRONT MATTER ***** -->

 <front>
  <title abbrev="Proportional Rate Reduction">Proportional Rate Reduction</title>
  <seriesInfo name="RFC" value="9937"/>

<author fullname="Matt Mathis" initials="M." surname="Mathis">
   <address>
   <email>ietf@mattmathis.net</email>
   </address>
</author>

<author fullname="Neal Cardwell" initials="N." surname="Cardwell">
   <organization>Google, Inc.</organization>
   <address>
   <email>ncardwell@google.com</email>
   </address>
</author>

<author fullname="Yuchung Cheng" initials="Y." surname="Cheng">
   <organization>Google, Inc.</organization>
   <address>
   <email>ycheng@google.com</email>
   </address>
</author>

<author fullname="Nandita Dukkipati" initials="N." surname="Dukkipati">
   <organization>Google, Inc.</organization>
   <address>
   <email>nanditad@google.com</email>
   </address>
</author>

<date month="November" year="2025" />

<area>WIT</area>
<workgroup>tcpm</workgroup>

<!-- [rfced] Please insert any keywords (beyond those that appear in
the title) for use on https://www.rfc-editor.org/search. -->

<keyword>TCP</keyword>
<keyword>congestion control</keyword>
<keyword>fast recovery</keyword>

<abstract>
<t>This document specifies a Standards Track version of the Proportional Rate Reduction (PRR) algorithm that obsoletes the Experimental version described in RFC 6937.   PRR regulates the amount of data sent by TCP or other transport protocols during fast recovery.  PRR accurately regulates the actual flight size through recovery such that at the end of recovery it will be as close as possible to the slow start threshold (ssthresh), as determined by the congestion control algorithm.
</t>
   </abstract>
 </front>
<middle>

<section><name>Introduction</name>

<t>Van Jacobson's packet conservation principle <xref target="Jacobson88" /> defines a self clock process wherein N data segments delivered to the receiver generate acknowledgments that the data sender uses as the clock to trigger sending another N data segments into the network.</t>

<!-- [rfced] "Reno" is not used in RFC 5681, except in titles in the
References section. Please review and let us know if/how this citation
should be updated. Note that there are multiple occurrences of this
throughout the document.

Original:
   Congestion control algorithms like Reno [RFC5681] and CUBIC [RFC9438]
   are built on the conceptual foundation of this self clock process.
-->

<t>Congestion control algorithms like Reno <xref target="RFC5681" /> and CUBIC <xref target="RFC9438" /> are built on the conceptual foundation of this self clock process. They control the sending process of a transport protocol connection by using a congestion window ("cwnd") to limit "inflight", the volume of data that a connection estimates is in flight in the network at a given time. Furthermore, these algorithms require that transport protocol connections reduce their cwnd in response to packet losses. Fast recovery (see <xref target="RFC5681" /> and <xref target="RFC6675" />) is the algorithm for making this cwnd reduction using feedback from acknowledgments.  Its stated goal is to maintain a sender's self clock by relying on returning ACKs during recovery to clock more data into the network. Without Proportional Rate Reduction (PRR), fast recovery typically adjusts the window by waiting for a large fraction of a round-trip time (RTT) (one half round-trip time of ACKs for Reno <xref target="RFC5681" /> or 30% of a round-trip time for CUBIC <xref target="RFC9438" />) to pass before sending any data.</t>

<t><xref target="RFC6675" /> makes fast recovery with Selective Acknowledgment (SACK) <xref target="RFC2018" /> more accurate by computing "pipe", a sender-side estimate of the number of bytes still outstanding in the network.   With <xref target="RFC6675" />, fast recovery is implemented by sending data as necessary on each ACK to allow pipe to rise to match ssthresh, the target window size for fast recovery, as determined by the congestion control algorithm.  This protects fast recovery from timeouts in many cases where there are heavy losses. However, <xref target="RFC6675" /> has two significant drawbacks. First, because it makes a large multiplicative decrease in cwnd at the start of fast recovery, it can cause a timeout if the entire second half of the window of data or ACKs are lost.  Second, a single ACK carrying a SACK option that implies a large quantity of missing data can cause a step discontinuity in the pipe estimator, which can cause Fast Retransmit to send a large burst of data.</t>

<t>PRR regulates the transmission process during fast recovery in a manner that avoids these excess window adjustments, such that transmissions progress smoothly, and at the end of recovery, the actual window size will be as close as possible to ssthresh.  </t>

<t>PRR's approach is inspired by Van Jacobson's packet conservation principle.  As much as possible, PRR relies on the self clock process and is only slightly affected by the accuracy of estimators, such as the estimate of the volume of in-flight data.   This is what gives the algorithm its precision in the presence of events that cause uncertainty in other estimators.</t>

<t> When inflight is above ssthresh, PRR reduces inflight smoothly toward ssthresh by clocking out transmissions at a rate that is in proportion to both the delivered data and ssthresh. </t>

<!--[rfced] To have the abbreviation directly match the expanded form, may
we update this text as follows?

Original:
   As a baseline, to be cautious when there may be
   considerable congestion, PRR uses its Conservative Reduction Bound
   (PRR-CRB), which is strictly packet conserving.  When recovery seems
   to be progressing well, PRR uses its Slow Start Reduction Bound (PRR-
   SSRB), which is more aggressive than PRR-CRB by at most one segment
   per ACK.

Perhaps:
   As a baseline, to be cautious when there may be
   considerable congestion, PRR uses its Conservative Reduction Bound
   (CRB), which is strictly packet conserving.  When recovery seems
   to be progressing well, PRR uses its Slow Start Reduction Bound (SSRB),
   which is more aggressive than PRR-CRB by at most one segment
   per ACK.
-->

<t>When inflight is less than ssthresh, PRR adaptively chooses between one of two Reduction Bounds to limit the total window reduction due to all mechanisms, including transient application stalls and the losses themselves. As a baseline, to be cautious when there may be considerable congestion, PRR uses its Conservative Reduction Bound (PRR-CRB), which is strictly packet conserving. When recovery seems to be progressing well, PRR uses its Slow Start Reduction Bound (PRR-SSRB), which is more aggressive than PRR-CRB by at most one segment per ACK.  PRR-CRB meets the Strong Packet Conservation Bound described in <xref target="conservative" />; however, when used in real networks as the sole approach, it does not perform as well as the algorithm described in <xref target="RFC6675" />, which proves to be more aggressive in a significant number of cases.  PRR-SSRB offers a compromise by allowing a connection to send one additional segment per ACK, relative to PRR-CRB, in some situations. Although PRR-SSRB is less aggressive than <xref target="RFC6675" /> (transmitting fewer segments or taking more time to transmit them), it outperforms due to the lower probability of additional losses during recovery.</t>

<t>The original definition of the packet conservation principle <xref target="Jacobson88" />  treated packets that are presumed to be lost (e.g., marked as candidates for retransmission) as having left the network. This idea is reflected in the inflight estimator used by PRR, but it is distinct from the Strong Packet Conservation Bound as described in <xref target="conservative" />, which is defined solely on the basis of data arriving at the receiver.
</t>

<t>This document specifies several main changes from the earlier version of PRR in <xref target="RFC6937" />. First, it introduces a new adaptive heuristic that replaces a manual configuration parameter that determined how conservative PRR was when inflight was less than ssthresh (whether to use PRR-CRB or PRR-SSRB). Second, the algorithm specifies behavior for non-SACK connections (connections that have not negotiated SACK support via the "SACK-permitted" option <xref target="RFC2018" />). Third, the algorithm ensures a smooth sending process even when the sender has experienced high reordering and starts loss recovery after a large amount of sequence space has been SACKed.  Finally, this document also includes additional discussion about the integration of PRR with congestion control and loss detection algorithms.
</t>

<t>PRR has extensive deployment experience in multiple TCP implementations since the first widely deployed TCP PRR implementation in 2011 <xref target="First_TCP_PRR" />.</t>

</section>

<section><name>Conventions</name>
        <t>
    The key words "<bcp14>MUST</bcp14>", "<bcp14>MUST NOT</bcp14>", "<bcp14>REQUIRED</bcp14>", "<bcp14>SHALL</bcp14>", "<bcp14>SHALL
    NOT</bcp14>", "<bcp14>SHOULD</bcp14>", "<bcp14>SHOULD NOT</bcp14>", "<bcp14>RECOMMENDED</bcp14>", "<bcp14>NOT RECOMMENDED</bcp14>",
    "<bcp14>MAY</bcp14>", and "<bcp14>OPTIONAL</bcp14>" in this document are to be interpreted as
    described in BCP&nbsp;14 <xref target="RFC2119"/> <xref target="RFC8174"/>
    when, and only when, they appear in all capitals, as shown here.</t>

</section>

<section><name>Document and WG Information</name>

<t><em>RFC Editor: please advise on how we can specify the "Janey C. Hoe" name in the  "Acknowledgements" section as XML that would be correctly translated by xm2rfc into plain ASCII txt output with a single space after the "C." (interpreting the "C." as an initial) rather than a double space after the "C." (interpreting the "C." as the end of the sentence).</em></t>

<t><em>RFC Editor: please remove this section before publication</em></t>

<t>Formatted: 2025-06-22 19:27:52-07:00</t>

<t>Please send all comments, questions and feedback to tcpm@ietf.org</t>

<t>About revision 00:</t>

<t>The introduction above was drawn from draft-mathis-tcpm-rfc6937bis-00.
All of the text below was copied verbatim from RFC 6937, to facilitate comparison between RFC 6937 and this document as it evolves.</t>

<t>About revision 01:</t>
<ul>
<li>Recast the RFC 6937 introduction as background </li>
<li>Made "Changes From RFC 6937" an explicit section</li>
<li>Made Relationships to other standards more explicit</li>
<li>Added a generalized SafeACK heuristic</li>
<li>Provided hints for non TCP implementations</li>
<li>Added language about detecting ACK splitting, but have no advice on actions (yet)</li>
</ul>

<t>About revision 02:</t>
<ul>
<li>Companion RACK loss detection RECOMMENDED </li>
<li>Non-SACK accounting in the pseudo code </li>
<li>cwnd computation in the pseudo code </li>
<li>Force fast retransmit at the beginning of fast recovery </li>
<li>Remove deprecated Rate-Halving text </li>
<li>Fixed bugs in the example traces</li>
</ul>

<t>About revision 03 and 04:</t>
<ul>
<li> Clarify when and how SndCnt becomes 0 </li>
<li> Improve algorithm to smooth the sending rate under higher reordering cases </li>
</ul>

<t>About revision 05:</t>
<ul>
<li> Revert the RecoverFS text and pseudocode to match the behavior in draft revision 03 and more closely match Linux TCP PRR</li>
</ul>

<t>About revision 06:</t>
<ul>
<li> Update RecoverFS to be initialized as: RecoverFS = pipe. </li>
</ul>

<t>About revision 07:</t>
<ul>
<li> Restored the revision 04 prose description for the rationale for initializing RecoverFS as: RecoverFS = pipe. </li>
<li> Added reference to <xref target="Hoe96Startup" /> in acknowledgements</li>
</ul>

<t>About revision 08:</t>
<ul>
<li> Inserted missing reference to <xref target="RFC9293" /></li>
<li> Recategorized "voluntary window reductions" as a phrase introduced by PRR </li>
</ul>

<t>About revision 09:</t>
<ul>
<li> Document the setting of cwnd = ssthresh when the sender completes a PRR episode, based on Linux TCP PRR experience and the mailing list discussion in the TCPM mailing list thread: "draft-ietf-tcpm-prr-rfc6937bis-03: set cwnd to ssthresh exiting fast recovery?". Mention the potential for bursts as a result of setting cwnd = ssthresh. Say that pacing is RECOMMENDED to deal with this.</li>
<li> Revised RecoverFS initialization to handle fast recoveries with mixes of real and spurious loss detection events (due to reordering), and incorporate consideration for a potentially large volume of data that is SACKed before fast recovery starts.</li>
<li>Fixed bugs in the definition of DeliveredData (reverted to definition from RFC 6937).</li>
<li>Clarified PRR triggers initialization based on start of congestion control reduction, not loss recovery, since congestion control may reduce ssthresh for each round trip with new losses in recovery.</li>
<li>Fixed bugs in PRR examples.</li>
</ul>

<t>About revision 10:</t>
<ul>
<li>Minor typo fixes and wordsmithing.</li>
</ul>

<t>About revision 11:</t>
<ul>
<li>Based on comments at the TCPM session at IETF 120, clarified the scope of congestion control algorithms for which PRR can be used, and clarified that it can be used for Reno or CUBIC.</li>
</ul>

<t>About revision 12:</t>
<ul>
<li>Added "About revision 11" and "About revision 12" sections.</li>
<li>Added a clarification about the applicability to CUBIC in the algorithm section.</li>
</ul>

<t>About revision 13:</t>
<ul>
<li>Switch from using the RFC 6675 "pipe" concept to an "inflight" concept that is independent of loss detection algorithm, and thus is usable with RACK-TLP loss detection <xref target="RFC8985"/></li>
</ul>

<t>About revision 14:</t>
<ul>
<li>Numerous editorial changes based on 2025-04-15  review from WIT area director Gorry Fairhurst.</li>
<li>Added a note to the RFC Editor to remove this "Document and WG Information" section before publication.</li>
<li>Rephrased all sentences with "we" or "our" to remove those words.</li>
<li>Updated the RFC2119 MUST/SHOULD/MAY/... text to use the latest boilerplate text from RFC8174, and moved this text into a separate section.</li>
<li>Ensured that each term in the "Definitions" section is listed with (a) the term, (b) an actual in-line definition, and (c) the citation of the original source reference, where appropriate.</li>
<li>Added missing definitions for terms used in the document:  cwnd, rwnd, ssthresh, SND.NXT, RMSS</li>
<li>In the "Relationships to other standards", after the paragraph about the congestion control algorithms with which PRR can be used, added a paragraph about PRR's independence from loss detection algorithm details and an explicit list of loss detection algorithms with which PRR can be used.</li>
<li>Where appropriate, changed "TCP" to a more generic phrase, like: "transport protocol", "connection", or "sender", depending on the context. Left "TCP" in place where that was the precise term that was appropriate in the context, given the protocol or packet header details. There are now no references to "TCP" in between the definition of SMSS and the "Adapting PRR to other transport protocols" section. The "Algorithm", "Examples", and "Properties" sections no longer mention "TCP".</li>
<li>Corrected the two occurrences of "MSS" in the pseudocode to use "SMSS", since "SMSS" has a definition and is consistent with the Reno (RFC5681) and CUBIC (RFC9438) documents.</li>
<li>Clarified the recommendation to use pacing to avoid bursts, and moved this into its own paragraph to make it easier for the reader to see.</li>
</ul>

<t>About revision 15:</t>
<ul>
<li>Fixed the description of the initialization of RecoverFS to match the latest RecoverFS pseudocode</li>
<li> Add a note that in the first example both algorithms (RFC6675 and PRR) complete the fast recovery episode with a cwnd  matching the ssthresh of 20.</li>
<li>Revised order of 2nd and 4th co-author</li>
<li>Numerous editorial changes based on 2025-05-27 last call Genart review from Russ Housley, including the following changes.</li>
<li>Fixed abstract and intro sections that said that this document "updates" the experimental PRR algorithm to clarify that this document obsoletes the experimental PRR RFC</li>
<li>To address the feedback 'The 7th paragraph of Section 5 begins with "A final change"; yet the
8th paragraph talks about another adaptation to PRR', reworded the "A final change" phrase.</li>
<li>Moved the paragraph about measurement studies to a new "Measurement Studies" section, to address the feedback: 'The last paragraph of Section 5 is not really about changes since the publication of RFC 6937'</li>
<li>Fixed various minor editorial issues identified in the review</li>
</ul>

<t>About revision 16:</t>
<ul>
<li>Revised the description and caption for the figures to try to improve clarity.</li>
</ul>

<t>About revision 17:</t>
<ul>
<li>Moved the explanation of "Van Jacobson's packet conservation principle" to be before the first use of the concept in the phrase "strictly packet conserving".</li>
<li>Numerous editorial changes based on the 29 suggestions in the 2025-06-03 perfmetrdir review from Paul Aitken ("perfmetrdir review of draft-ietf-tcpm-prr-rfc6937bis-16"), including the following larger-scale changes.</li>
<li>Ensured that all references to RFCs (mainly RFC6675 and RFC6937) used proper xref tags.</li>
<li> Moved the "Definitions" section to be immediately before the "Background" section, so that more terms are defined before being used.</li>
</ul>

<t>About revision 18:</t>
<ul>
<li>Several editorial changes based on the 2025-06-04 Opsdir review from Daniele Ceccarelli  ("draft-ietf-tcpm-prr-rfc6937bis-16 ietf last call Opsdir review"), including the following larger-scale changes.</li>
<li>Moved the content in the "Background" section into the "Introduction" section and revised the content to ensure that each passage only uses terms and concepts already described by the earlier text.</li>
<li>Made things simpler and more consistent by replacing a few "Reduction Bound algorithms" with "Reduction Bounds". In revision 16 we already had the simpler "Reduction Bounds" phrasing in four spots, so this makes the text more self-consistent.</li>
</ul>

<t>About revision 19:</t>
<ul>
<li>Fix a nit in the abstract caught by "idnits" online tool: 'The abstract seems to contain references ([RFC6937]), which it shouldn't.  Please replace those with straight textual mentions of the documents in question.'</li>
<li>Several editorial changes based on the suggestions in the 2025-06-12 perfmetrdir review from Paul Aitken (tcpm thread: "perfmetrdir review of draft-ietf-tcpm-prr-rfc6937bis-16").</li>
</ul>

<t>About revision 20:</t>
<ul>
<li>Several editorial changes based on the suggestions in the 2025-06-13 review from Mohamed Boucadair (tcpm thread: "Mohamed Boucadair's Yes on draft-ietf-tcpm-prr-rfc6937bis-19: (with COMMENT)"), including the following larger changes.</li>
<li>Changed the "Proportional Rate Reduction for TCP" title to "Proportional Rate Reduction"</li>
<li>Added an "Operational Considerations" section.</li>
<li>Moved the prose description of the computation of DeliveredData, inflight, RecoverFS, etc, from the "Definitions" section to the "Algorithm" section.</li>
<li>Moved the example section so that it is immediately after the discussion about properties, rather than immediately before.</li>
</ul>

<t>About revision 21:</t>
<ul>
<li>Fix a typo from revision 20 where an extra/old sentence about multiple implementations was accidentally left in the document.</li>
</ul>
</section>

<section><name>Definitions</name>

<t>The following terms, parameters, and state variables are used as they are defined in earlier documents:</t>

<dl spacing="normal" newline="false">
  <dt>SND.UNA:</dt><dd>The oldest unacknowledged sequence number. This is
  defined in <xref target="RFC9293" section="3.4"/>.</dd>
  <dt>SND.NXT:</dt><dd>The next sequence number to be sent.  This is defined
  in <xref target="RFC9293" section="3.4"/>.</dd>
  <dt>duplicate ACK:</dt><dd>An acknowledgment is considered a "duplicate
  ACK" or "duplicate acknowledgment" when (a) the receiver of the ACK has
  outstanding data, (b) the incoming acknowledgment carries no data, (c) the
  SYN and FIN bits are both off, (d) the acknowledgment number is equal to
  SND.UNA, and (e) the advertised window in the incoming acknowledgment equals
  the advertised window in the last incoming acknowledgment. This is defined
  in <xref target="RFC5681" section="2"/>.</dd>
  <dt>FlightSize:</dt><dd>The amount of data that has been sent but not yet
  cumulatively acknowledged. This is defined in <xref target="RFC5681"
  section="2"/>.</dd>
  <dt>Receiver Maximum Segment Size (RMSS):</dt><dd>The RMSS is the size of
  the largest segment the receiver is willing to accept. This is the value
  specified in the MSS option sent by the receiver during connection startup
  (see <xref target="RFC9293" section="3.7.1"/>). Or, if the MSS option is not
  used, it is the default of 536 bytes for IPv4 or 1220 bytes for IPv6 (see
  <xref target="RFC9293" section="3.7.1"/>). The size does not include the
  TCP/IP headers and options. The RMSS is defined in <xref target="RFC5681"
  section="2"/> and <xref target="RFC9293" section="3.8.6.3"/>.</dd>
  <dt>Sender Maximum Segment Size (SMSS):</dt><dd>The SMSS is the size of the
  largest segment that the sender can transmit.  This value can be based on
  the Maximum Transmission Unit (MTU) of the network, the path MTU discovery
  <xref target="RFC1191" /> <xref target="RFC8201" /> <xref target="RFC4821" />
  algorithm, RMSS, or other factors.  The size does not include the TCP/IP
  headers and options. This is defined in <xref target="RFC5681"
  section="2"/>.</dd>
  <dt>Receiver Window (rwnd):</dt><dd>The most recently received advertised
  receiver window, in bytes.  At any given time, a connection <bcp14>MUST
  NOT</bcp14> send data with a sequence number higher than the sum of SND.UNA
  and rwnd. This is defined in <xref target="RFC5681" section="2"/>.</dd>
  <dt>Congestion Window (cwnd):</dt><dd>A state variable that limits the
  amount of data a connection can send.  At any given time, a connection
  <bcp14>MUST NOT</bcp14> send data if inflight (see below) matches or exceeds
  cwnd. This is defined in <xref target="RFC5681" section="2"/>.</dd>
  <dt>Slow Start Threshold (ssthresh):</dt><dd>The slow start threshold
  (ssthresh) state variable is used to determine whether the slow start or
  congestion avoidance algorithm is used to control data transmission. During
  fast recovery, ssthresh is the target window size for a fast recovery
  episode, as determined by the congestion control algorithm. This is defined
  in <xref target="RFC5681" section="3.1"/>.</dd>
</dl>

<t>PRR defines additional variables and terms:</t>

<dl spacing="normal" newline="false">
  <dt>Delivered Data (DeliveredData):</dt><dd>The data sender's best estimate
  of the total number of bytes that the current ACK indicates have been
  delivered to the receiver since the previously received ACK.</dd>
  <dt>In-Flight Data (inflight):</dt><dd>The data sender's best estimate of
  the number of unacknowledged bytes in flight in the network, i.e., bytes
  that were sent and neither lost nor received by the data receiver.</dd>
  <dt>Recovery Flight Size (RecoverFS):</dt><dd>The number of bytes the sender
  estimates might possibly be delivered over the course of the current PRR
  episode.</dd>
  <dt>SafeACK:</dt><dd>A local boolean variable indicating that the current
  ACK indicates the recovery is making good progress and the sender can send
  more aggressively, increasing inflight, if appropriate.</dd>
  <dt>SndCnt:</dt><dd>A local variable indicating exactly how many bytes
  should be sent in response to each ACK.</dd>
  <dt>Voluntary window reductions:</dt><dd>Choosing not to send data in
  response to some ACKs, for the purpose of reducing the sending window size
  and data rate.</dd>
</dl>
</section>

<section><name>Changes Relative to RFC 6937</name>

<t>The largest change since <xref target="RFC6937" /> is the introduction of a new heuristic that uses good recovery progress (for TCP, when the latest ACK advances SND.UNA and does not indicate that a prior fast retransmit has been lost) to select the Reduction Bound (PRR-CRB or PRR-SSRB).  <xref target="RFC6937" /> left the choice of Reduction Bound to the discretion of the implementer but recommended using PRR-SSRB by default.  For all of the environments explored in earlier PRR research, the new heuristic is consistent with the old recommendation.</t>

<t>
The paper "An Internet-Wide Analysis of Traffic Policing" <xref target="Flach2016policing"/>
uncovered a crucial situation not previously explored, where both Reduction Bounds perform very poorly but for different reasons.  Under many configurations, token bucket traffic policers can suddenly start discarding a large fraction of the traffic when tokens are depleted, without any warning to the end systems.  The transport congestion control has no opportunity to measure the token rate and sets ssthresh based on the previously observed path performance.  This value for ssthresh may cause a data rate that is substantially larger than the token replenishment rate, causing high loss. Under these conditions, both Reduction Bounds perform very poorly.   PRR-CRB is too timid, sometimes causing very long recovery times at smaller than necessary windows, and PRR-SSRB is too aggressive, often causing many retransmissions to be lost for multiple rounds. Both cases lead to prolonged recovery, severely degrading application latency and/or goodput. </t>

<t>Investigating these environments led to the development of a "SafeACK" heuristic to dynamically switch between Reduction Bounds: by default, conservatively use PRR-CRB and only switch to PRR-SSRB when ACKs indicate the recovery is making good progress (SND.UNA is advancing without detecting any new losses). The SafeACK heuristic was experimented with in Google's Content Delivery Network (CDN) <xref target="Flach2016policing"/> and has been implemented in Linux TCP since 2015. </t>

<t>This SafeACK heuristic is only invoked where losses, application-limited behavior, or other events cause the current estimate of in-flight data to fall below ssthresh.  The high loss rates that make the heuristic essential are only common in the presence of heavy losses, such as those caused by traffic policers <xref target="Flach2016policing"/>.  In these environments, the heuristic performs better than either bound by itself. </t>

<t>Another PRR algorithm change improves the sending process when the sender enters recovery after a large portion of sequence space has been SACKed. This scenario could happen when the sender has previously detected reordering, for example, by using <xref target="RFC8985"/>. In the previous version of PRR, RecoverFS did not properly account for sequence ranges SACKed before entering fast recovery, which caused PRR to initially send too slowly. With the change, PRR properly accounts for sequence ranges SACKed before entering fast recovery.</t>

<t>Yet another change is to force a fast retransmit upon the first ACK that triggers the recovery. Previously, PRR might not allow a fast retransmit (i.e., SndCnt is 0) on the first ACK in fast recovery, depending on the loss situation. Forcing a fast retransmit is important to maintain the ACK clock and avoid potential retransmission timeout (RTO) events. The forced fast retransmit only happens once during the entire recovery and still follows the packet conservation principles in PRR. This heuristic has been implemented since the first widely deployed TCP PRR implementation in 2011 <xref target="First_TCP_PRR" />. </t>

<t> In another change, upon exiting recovery, a data sender sets cwnd to ssthresh. This is important for robust performance. Without setting cwnd to ssthresh at the end of recovery, and with application-limited sender behavior and some loss patterns, cwnd could end fast recovery well below ssthresh, leading to bad performance. The performance could, in some cases, be worse than <xref target="RFC6675" /> recovery, which simply sets cwnd to ssthresh at the start of recovery. This behavior of setting cwnd to ssthresh at the end of recovery has been implemented since the first widely deployed TCP PRR implementation in 2011 <xref target="First_TCP_PRR" /> and is similar to <xref target="RFC6675" />, which specifies setting cwnd to ssthresh at the start of recovery. </t>

<!--[rfced] To avoid awkward hyphenation of an RFC citation, may we
rephrase the latter part of this sentence as follows?

Original:
   Since [RFC6937] was written, PRR has also been adapted to perform
   multiplicative window reduction for non-loss based congestion control
   algorithms, such as for [RFC3168] style Explicit Congestion
   Notification (ECN).

Perhaps:
   Since [RFC6937] was written, PRR has also been adapted to perform
   multiplicative window reduction for non-loss-based congestion control
   algorithms, such as for Explicit Congestion Notification (ECN) as
   described in [RFC3168].
-->

<t>
Since <xref target="RFC6937" /> was written, PRR has also been adapted to perform multiplicative window reduction for non-loss-based congestion control algorithms, such as for Explicit Congestion Notification (ECN) as described in <xref target="RFC3168" />.   This can be done by using some parts of the loss recovery state machine (in particular, the RecoveryPoint from <xref target="RFC6675" />) to invoke the PRR ACK processing for exactly one round trip worth of ACKs. However, note that using PRR for cwnd reductions for ECN has been observed, with some approaches to Active Queue Management (AQM), to cause an excess cwnd reduction during ECN-triggered congestion episodes, as noted in <xref target="VCC" />.
</t>
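
<t>As a non-normative illustration, such an adaptation could be structured as sketched below. The use of a RecoveryPoint variable in the style of <xref target="RFC6675" /> is an assumption of this sketch, not a requirement of this document:</t>

<sourcecode type="pseudocode"><![CDATA[
   // Illustrative sketch: invoking PRR for an ECN-signaled reduction
   // over exactly one round trip worth of ACKs.
   On an ACK carrying an ECN congestion signal, outside of an episode:
      ssthresh = CongCtrlAlg()      // multiplicative decrease target
      RecoveryPoint = SND.NXT       // bound the episode to one round trip
      (run the PRR initialization steps)

   On each subsequent ACK:
      if (SND.UNA < RecoveryPoint)
         (run the PRR per-ACK steps)
      else
         (run the PRR completion steps)   // cwnd = ssthresh
]]></sourcecode>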

</section>
<section><name>Relationships to Other Standards</name>

<t>PRR <bcp14>MAY</bcp14> be used in conjunction with any congestion control algorithm that intends to make a multiplicative decrease in its sending rate over approximately the time scale of one round-trip time, as long as the current volume of in-flight data is limited by a congestion window (cwnd) and the target volume of in-flight data during that reduction is a fixed value given by ssthresh. In particular, PRR is applicable to both Reno <xref target="RFC5681" /> and CUBIC <xref target="RFC9438" /> congestion control. PRR is described as a modification to "A Conservative Loss Recovery Algorithm Based on Selective Acknowledgment (SACK) for TCP" <xref target="RFC6675" />.   It is most accurate with SACK <xref target="RFC2018" /> but does not require SACK.</t>

<t>PRR can be used in conjunction with a wide array of loss detection algorithms. This is because PRR does not have any dependencies on the details of how a loss detection algorithm estimates which packets have been delivered and which packets have been lost. Upon the reception of each ACK, PRR simply needs the loss detection algorithm to communicate how many packets have been marked as lost and how many packets have been marked as delivered.  Thus, PRR <bcp14>MAY</bcp14> be used in conjunction with the loss detection algorithms specified or described in the following documents: Reno <xref target="RFC5681" />, NewReno <xref target="RFC6582" />, SACK <xref target="RFC6675" />, Forward Acknowledgment (FACK) <xref target="FACK" />, and Recent Acknowledgment Tail Loss Probe (RACK-TLP) <xref target="RFC8985" />. Because of the performance properties of RACK-TLP, including resilience to tail loss, reordering, and lost retransmissions, it is <bcp14>RECOMMENDED</bcp14> that PRR be implemented together with RACK-TLP loss recovery <xref target="RFC8985"/>.
</t>
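
<t>As a non-normative illustration, the per-ACK information that PRR needs from the loss detection algorithm can be summarized as follows (the names are illustrative, not required by this document):</t>

<sourcecode type="pseudocode"><![CDATA[
   // Per-ACK inputs that PRR needs from loss detection (illustrative):
   newly_delivered = bytes newly cumulatively ACKed + bytes newly SACKed
   newly_lost      = bytes newly marked as lost by this ACK
   // PRR itself does not depend on how these quantities were estimated.
]]></sourcecode>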

<t>The SafeACK heuristic came about as a result of robust Lost Retransmission Detection under development in an early precursor to <xref target="RFC8985"/>.  Without Lost Retransmission Detection, policers that cause very high loss rates are at very high risk of causing retransmission timeouts because Reno <xref target="RFC5681" />,  CUBIC <xref target="RFC9438" />, and <xref target="RFC6675" /> can send retransmissions significantly above the policed rate. </t>

</section>

<section><name>Algorithm</name>

<section><name>Initialization Steps</name>

<t>
At the beginning of a congestion control response episode initiated by the congestion control algorithm, a data sender using PRR <bcp14>MUST</bcp14> initialize the PRR state.</t>

<t>The timing of the start of a congestion control response episode is entirely up to the congestion control algorithm, and (for example) could correspond to the start of a fast recovery episode, or a once-per-round-trip reduction when lost retransmits or lost original transmissions are detected after fast recovery is already in progress.</t>

<t>The PRR initialization allows a congestion control algorithm, CongCtrlAlg(), that might set ssthresh to something other than FlightSize/2 (including, e.g., CUBIC <xref target="RFC9438" />). </t>

<t> A key step of PRR initialization is computing Recovery Flight Size (RecoverFS), the number of bytes the data sender estimates might possibly be delivered over the course of the PRR episode. This can be thought of as the sum of the following values at the start of the episode: inflight, the bytes cumulatively acknowledged in the ACK triggering recovery, the bytes SACKed in the ACK triggering recovery, and the bytes between SND.UNA and SND.NXT that have been marked lost. The RecoverFS includes losses because losses are marked using heuristics, so some packets previously marked as lost may ultimately be delivered (without being retransmitted) during recovery. PRR uses RecoverFS to compute a smooth sending rate. Upon entering fast recovery, PRR initializes RecoverFS, and RecoverFS remains constant during a given fast recovery episode.</t>

<t>The full sequence of PRR algorithm initialization steps is as follows:</t>

<sourcecode type="pseudocode"><![CDATA[
   ssthresh = CongCtrlAlg()      // Target flight size in recovery
   prr_delivered = 0             // Total bytes delivered in recovery
   prr_out = 0                   // Total bytes sent in recovery
   RecoverFS = SND.NXT - SND.UNA
   // Bytes SACKed before entering recovery will not be
   // marked as delivered during recovery:
   RecoverFS -= (bytes SACKed in scoreboard)
   // Include the (common) case of selectively ACKed bytes:
   RecoverFS += (bytes newly SACKed)
   // Include the (rare) case of cumulatively ACKed bytes:
   RecoverFS += (bytes newly cumulatively acknowledged)
]]></sourcecode>

</section>

<section><name>Per-ACK Steps</name>

<t>On every ACK starting or during fast recovery, excluding the ACK that concludes a PRR episode, PRR executes the following steps.</t>

<!--[rfced] To improve readability, may we add parentheses in this
sentence? Please review and let us know if thus suggested update
retains the intended meaning.

Original:
   In recovery without SACK, DeliveredData is estimated to be
   1 SMSS on receiving a duplicate ACK, and on a subsequent partial or
   full ACK DeliveredData is the change in SND.UNA, minus 1 SMSS for
   each preceding duplicate ACK.

Perhaps:
   In recovery without SACK, DeliveredData is estimated to be
   1 SMSS on receiving a duplicate ACK (and the change is in SND.UNA on
   a subsequent partial or full ACK DeliveredData), minus 1 SMSS for
   each preceding duplicate ACK.
-->

<t>First, the sender computes DeliveredData, the data sender's best estimate of the total number of bytes that the current ACK indicates have been delivered to the receiver since the previously received ACK. With SACK, DeliveredData can be computed precisely as the change in SND.UNA, plus the (signed) change in the amount of SACKed data. Thus, in the special case when there are no SACKed sequence ranges in the scoreboard before or after the ACK, DeliveredData is the change in SND.UNA. In recovery without SACK, DeliveredData is estimated to be 1 SMSS on receiving a duplicate ACK, and on a subsequent partial or full ACK DeliveredData is the change in SND.UNA, minus 1 SMSS for each preceding duplicate ACK. Note that without SACK, a poorly behaved receiver that returns extraneous duplicate ACKs (as described in <xref target='Savage99' />) could attempt to artificially inflate DeliveredData. As a mitigation, if not using SACK, then PRR disallows incrementing DeliveredData when the total bytes delivered in a PRR episode would exceed the estimated data outstanding upon entering recovery (RecoverFS).</t>
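
<t>As a non-normative illustration, the DeliveredData computation described above can be sketched as follows (prev_SND.UNA and prev_sacked are illustrative names for state saved from the previous ACK):</t>

<sourcecode type="pseudocode"><![CDATA[
   // Illustrative sketch of computing DeliveredData.
   // With SACK:
   DeliveredData = (SND.UNA - prev_SND.UNA)      // newly cumulatively ACKed
                 + (sacked_bytes - prev_sacked)  // signed change in SACKed data
   // Without SACK:
   if (ACK is a duplicate ACK)
      DeliveredData = 1 SMSS
   else   // partial or full ACK
      DeliveredData = (SND.UNA - prev_SND.UNA)
                    - (1 SMSS per preceding duplicate ACK)
]]></sourcecode>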

<t>Next, the sender computes inflight, the data sender's best estimate of the number of bytes that are in flight in the network. To calculate inflight, connections with SACK enabled and using <xref target="RFC6675"/> loss detection <bcp14>MAY</bcp14> use the "pipe" algorithm as specified in <xref target="RFC6675"/>. SACK-enabled connections using RACK-TLP loss detection <xref target="RFC8985"/> or other loss detection algorithms <bcp14>MUST</bcp14> calculate inflight by starting with SND.NXT - SND.UNA, subtracting out bytes SACKed in the scoreboard, subtracting out bytes marked lost in the scoreboard, and adding bytes in the scoreboard that have been retransmitted since they were last marked lost. For non-SACK-enabled connections, instead of subtracting out bytes SACKed in the SACK scoreboard, senders <bcp14>MUST</bcp14> subtract out: min(RecoverFS, 1 SMSS for each preceding duplicate ACK in the fast recovery episode); the min() with RecoverFS is to protect against misbehaving receivers <xref target='Savage99' />.</t>
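
<t>As a non-normative illustration, the inflight calculation for a SACK-enabled connection using RACK-TLP or another loss detection algorithm can be sketched as:</t>

<sourcecode type="pseudocode"><![CDATA[
   // Illustrative sketch of the inflight estimate (SACK-enabled case):
   inflight = (SND.NXT - SND.UNA)
            - (bytes SACKed in the scoreboard)
            - (bytes marked lost in the scoreboard)
            + (bytes retransmitted since last marked lost)
]]></sourcecode>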

<t>Next, the sender computes SafeACK, a local boolean variable indicating that the current ACK reported good progress. SafeACK is true only when the ACK has cumulatively acknowledged new data and the ACK does not indicate further losses. For example, an ACK triggering a "rescue" retransmission (<xref target="RFC6675" section="4"/>, NextSeg() condition 4) may indicate further losses. Both conditions indicate the recovery is making good progress and the sender can send more aggressively, increasing inflight, if appropriate. </t>

<t>Finally, the sender uses DeliveredData, inflight, SafeACK, and other PRR state to compute SndCnt, a local variable indicating exactly how many bytes should be sent in response to each ACK, and then uses SndCnt to update cwnd.</t>

<t>The full sequence of per-ACK PRR algorithm steps is as follows:</t>

<sourcecode type="pseudocode"><![CDATA[
   if (DeliveredData is 0)
      Return

   prr_delivered += DeliveredData
   inflight = (estimated volume of in-flight data)
   SafeACK = (SND.UNA advances and no further loss indicated)
   if (inflight > ssthresh) {
      // Proportional Rate Reduction
      // This uses integer division, rounding up:
      #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
      out = DIV_ROUND_UP(prr_delivered * ssthresh, RecoverFS)
      SndCnt = out - prr_out
   } else {
      // PRR-CRB by default
      SndCnt = MAX(prr_delivered - prr_out, DeliveredData)
      if (SafeACK) {
         // PRR-SSRB when recovery is making good progress
         SndCnt += SMSS
      }
      // Attempt to catch up, as permitted
      SndCnt = MIN(ssthresh - inflight, SndCnt)
   }

   if (prr_out is 0 AND SndCnt is 0) {
      // Force a fast retransmit upon entering recovery
      SndCnt = SMSS
   }
   cwnd = inflight + SndCnt

]]></sourcecode>

<t>After the sender computes SndCnt and uses it to update cwnd, the sender transmits more data. Note that the decision of which data to send (e.g., retransmit missing data or send more new data) is out of scope for this document.</t>

</section>

<section><name>Per-Transmit Steps</name>

<t>On any data transmission or retransmission, PRR executes the following:</t>

<sourcecode type="pseudocode"><![CDATA[
   prr_out += (data sent)
]]></sourcecode>

</section>

<section><name>Completion Steps</name>

<t> A PRR episode ends upon either completing fast recovery or before initiating a new PRR episode due to a new congestion control response episode. </t>

<t>On the completion of a PRR episode, PRR executes the following:</t>

<sourcecode type="pseudocode"><![CDATA[
   cwnd = ssthresh
]]></sourcecode>

<t> Note that this step that sets cwnd to ssthresh can potentially, in some scenarios, allow a burst of back-to-back segments into the network. </t>

<t>It is <bcp14>RECOMMENDED</bcp14> that implementations use pacing to reduce the burstiness of data traffic. This recommendation is consistent with current practice to mitigate bursts (e.g., <xref target="I-D.welzl-iccrg-pacing" />), including pacing transmission bursts after restarting from idle. </t>
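
<t>A minimal, non-normative sketch of one common pacing approach, assuming the sender maintains a smoothed round-trip time estimate (srtt), is:</t>

<sourcecode type="pseudocode"><![CDATA[
   // Illustrative pacing sketch (not a normative part of this document):
   pacing_rate      = pacing_gain * cwnd / srtt  // pacing_gain, e.g., slightly above 1
   inter_packet_gap = packet_size / pacing_rate
   // Space consecutive transmissions by inter_packet_gap
   // instead of sending them back to back.
]]></sourcecode>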

</section>

</section> <!-- Algorithm -->

<section><name>Properties</name>

<t>The following properties are common to both PRR-CRB and PRR-SSRB, except as noted:</t>

<t>PRR attempts to maintain the sender's ACK clocking across recovery events, including burst losses. By contrast, <xref target="RFC6675" /> can send large, unclocked bursts following burst losses.</t>

<t>Normally, PRR will spread voluntary window reductions out evenly across a full RTT.  This has the potential to generally reduce the burstiness of Internet traffic and could be considered to be a type of soft pacing.   Hypothetically, any pacing increases the probability that different flows are interleaved, reducing the opportunity for ACK compression and other phenomena that increase traffic burstiness. However, these effects have not been quantified.</t>

<t>If there are minimal losses, PRR will converge to exactly the target window chosen by the congestion control algorithm. Note that as the sender approaches the end of recovery, prr_delivered will approach RecoverFS and SndCnt will be computed such that prr_out approaches ssthresh.</t>
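
<t>For example, using the values from the first example in the Examples section (ssthresh = 10 segments and RecoverFS = 20 segments), once prr_delivered reaches RecoverFS the proportional rule yields a cumulative prr_out of ceil(20 * 10 / 20) = 10 segments, which equals ssthresh.</t>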

<t>Implicit window reductions, due to multiple isolated losses during recovery, cause later voluntary reductions to be skipped.  For small numbers of losses, the window size ends at exactly the window chosen by the congestion control algorithm.</t>

<t>For burst losses, earlier voluntary window reductions can be undone by sending extra segments in response to ACKs arriving later during recovery.    Note that as long as some voluntary window reductions are not undone, and there is no application stall, the final value for inflight will be the same as ssthresh.</t>

<t>PRR using either Reduction Bound improves the situation when there are
application stalls, e.g., when the sending application does not queue data for
transmission quickly enough or the receiver stops advancing its receive window.
When there is an application stall early during recovery, prr_out will
fall behind the sum of transmissions allowed by SndCnt.   The missed
opportunities to send due to stalls are treated like banked voluntary window
reductions; specifically, they cause prr_delivered - prr_out to be significantly positive.  If the application catches up while the sender is still in recovery, the sender will send a partial window burst to grow inflight to catch up to exactly where it would have been had the application never stalled.   Although such a burst could negatively impact the given flow or other sharing flows, this is exactly what happens every time there is a partial-RTT application stall while not in recovery.   PRR makes partial-RTT stall behavior uniform in all states.  Changing this behavior is out of scope for this document.</t>

<t>PRR with Reduction Bound is less sensitive to errors in the inflight estimator.
While in recovery, inflight is intrinsically an estimator, using incomplete
information to estimate if un-SACKed segments are actually lost or merely out
of order in the network.   Under some conditions, inflight can have significant errors; for example, inflight is underestimated when a burst of reordered data is prematurely assumed to be lost and marked for retransmission. If the transmissions are regulated directly by inflight as they are with <xref target="RFC6675" />, a step discontinuity in the inflight estimator causes a burst of data, which cannot be retracted once the inflight estimator is corrected a few ACKs later.   For PRR dynamics, inflight merely determines which algorithm, PRR or the Reduction Bound, is used to compute SndCnt from DeliveredData.  While inflight is underestimated, the algorithms are different by at most 1 segment per ACK.  Once inflight is updated, they converge to the same final window at the end of recovery.</t>

<t>Under all conditions and sequences of events during recovery, PRR-CRB strictly bounds the data transmitted to be equal to or less than the amount of data delivered to the receiver.   This Strong Packet Conservation Bound is the most aggressive algorithm that does not lead to additional forced losses in some environments.   It has the property that if there is a standing queue at a bottleneck with no cross traffic, the queue will maintain exactly constant length for the duration of the recovery, except for +1/-1 fluctuation due to differences in packet arrival and exit times.  See  <xref target="conservative" /> for a detailed discussion of this property.</t>

<!-- [rfced] May we clarify "[RFC6675] 'half window of silence'" as follows?

Original:
   The [RFC6675] "half window of silence" may temporarily
   reduce queue pressure when congestion control does not reduce the
   congestion window entering recovery to avoid further losses.

Perhaps:
   The "half window of silence" that a SACK-based Conservative Loss
   Recovery Algorithm [RFC6675] experiences may temporarily
   reduce queue pressure when congestion control does not reduce the
   congestion window entering recovery to avoid further losses.
-->

<t>Although the Strong Packet Conservation Bound is very appealing for a number of reasons, earlier measurements (in <xref target="RFC6675" section="6"/>) demonstrate that it is less aggressive and does not perform as well as <xref target="RFC6675" />, which permits bursts of data when there are bursts of losses.   PRR-SSRB is a compromise that permits a sender to send one extra segment per ACK as compared to the Packet Conserving Bound when the ACK indicates the recovery is in good progress without further losses.  From the perspective of a strict Packet Conserving Bound, PRR-SSRB does indeed open the window during recovery; however, it is significantly less aggressive than <xref target="RFC6675" /> in the presence of burst losses. The <xref target="RFC6675" /> "half window of silence" may temporarily reduce queue pressure when congestion control does not reduce the congestion window entering recovery to avoid further losses. The goal of PRR is to minimize the opportunities to lose the self clock by smoothly controlling inflight toward the target set by the congestion control. It is the congestion control's responsibility to avoid a full queue, not PRR.
</t>

</section>

<section><name>Examples</name>

<t>This section illustrates the PRR and <xref target="RFC6675" /> algorithms by showing their different behaviors for two example scenarios: a connection experiencing either a single loss or a burst of 15 consecutive losses. All cases use bulk data transfers (no application pauses), Reno congestion control <xref target="RFC5681" />, and cwnd = FlightSize = inflight = 20 segments, so ssthresh will be set to 10 at the beginning of recovery.   The scenarios use standard Fast Retransmit <xref target="RFC5681" /> and Limited Transmit <xref target="RFC3042" />, so the sender will send two new segments followed by one retransmit in response to the first three duplicate ACKs following the losses.</t>

<t>Each of the diagrams below shows the per ACK response to the first round trip for the two recovery algorithms when the zeroth segment is lost.   The top line ("ack#") indicates the transmitted segment number triggering the ACKs, with an X for the lost segment.  The "cwnd" and "inflight" lines indicate the values of cwnd and inflight, respectively, for these algorithms after processing each returning ACK but before further (re)transmission.  The "sent" line indicates how much "N"ew or "R"etransmitted data would be sent.  Note that the algorithms for deciding which data to send are out of scope of this document.</t>

<figure><artwork><![CDATA[

RFC 6675
a X  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22
c   20 20 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10
i   19 19 18 18 17 16 15 14 13 12 11 10  9  9  9  9  9  9  9  9  9  9
s    N  N  R                             N  N  N  N  N  N  N  N  N  N

PRR
a X  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22
c   20 20 19 18 18 17 17 16 16 15 15 14 14 13 13 12 12 11 11 10 10 10
i   19 19 18 18 17 17 16 16 15 15 14 14 13 13 12 12 11 11 10 10  9  9
s    N  N  R     N     N     N     N     N     N     N     N     N  N

a: ack#;  c: cwnd;  i: inflight;  s: sent
]]></artwork></figure>
<!-- An AWK script to simulate the PRR case above and generate the PRR table, starting on ACK number 3:
###
#!/usr/bin/awk -f

function ceil(x, y) {
    y = int(x);
    return (x > y ? y + 1 : y);
}

BEGIN {
    pkt_num = 3;
    flightsize = 20;
    inflight = 19;
    ssthresh = flightsize/2;
    RecoverFS = flightsize;
    prr_delivered = 0;
    prr_out = 0;

    print "pkt_num cwnd inflight SndCnt";
    for (pkt_num = 3; pkt_num <= 22; pkt_num++) {
        inflight -=1 ;
        prr_delivered += 1;
        if (prr_delivered >= flightsize) {
            cwnd = ssthresh;
            SndCnt = cwnd - inflight;
        } else {
            SndCnt = ceil(prr_delivered * ssthresh / RecoverFS) - prr_out;
            cwnd = inflight + SndCnt;
        }
        print pkt_num, cwnd, inflight, SndCnt;
        prr_out += SndCnt;
        inflight += SndCnt;
    }
}
-->

<t>In this first example, ACK#1 through ACK#19 contain SACKs for the original flight of data, ACK#20 and ACK#21 carry SACKs for the limited transmits triggered by the first and second SACKed segments, and ACK#22 carries the full cumulative ACK covering all data up through the limited transmits. ACK#22 completes the fast recovery episode and thus completes the PRR episode.</t>

<t>Note that both algorithms send the same total amount of data, and both algorithms complete the fast recovery episode with a cwnd matching the ssthresh of 10.  <xref target="RFC6675" /> experiences a "half window of silence" while PRR spreads the voluntary window reduction across an entire RTT.</t>

<t>Next, consider an example scenario with the same initial conditions, except that the first 15 packets (0-14) are lost.   During the remainder of the lossy round trip, only 5 ACKs are returned to the sender.   The following examines each of these algorithms in succession.
</t>

<figure><artwork><![CDATA[

RFC 6675
a X  X  X  X  X  X  X  X  X  X  X  X  X  X  X  15 16 17 18 19
c                                              20 20 10 10 10
i                                              19 19  4  9  9
s                                               N  N 6R  R  R

PRR
a X  X  X  X  X  X  X  X  X  X  X  X  X  X  X  15 16 17 18 19
c                                              20 20  5  5  5
i                                              19 19  4  4  4
s                                               N  N  R  R  R

a: ack#;  c: cwnd;  i: inflight;  s: sent
]]></artwork></figure>

<t>In this specific situation, <xref target="RFC6675" /> is more aggressive because once Fast Retransmit is triggered (on the ACK for segment 17), the sender immediately retransmits sufficient data to bring inflight up to cwnd.  Earlier measurements (in <xref target="RFC6675" section="6"/>) indicate that <xref target="RFC6675" /> significantly outperforms PRR <xref target="RFC6937" /> using only PRR-CRB and some other similarly conservative algorithms that were tested, showing that it is significantly common for the actual losses to exceed the cwnd reduction determined by the congestion control algorithm. </t>

<t>Under such heavy losses, during the first round trip of fast recovery, PRR uses the PRR-CRB to follow the packet conservation principle.   Since the total losses bring inflight below ssthresh, data is sent such that the total data transmitted, prr_out, follows the total data delivered to the receiver as reported by returning ACKs. Transmission is controlled by the sending limit, which is set to prr_delivered - prr_out. </t>

<t>While not shown in the figure above, once the fast retransmits sent starting at ACK#17 are delivered and elicit ACKs that increment the SND.UNA, PRR enters PRR-SSRB and  increases the window by exactly 1 segment per ACK until inflight rises to ssthresh during recovery.  On heavy losses when cwnd is large, PRR-SSRB recovers the losses exponentially faster than PRR-CRB. Although increasing the window during recovery seems to be ill advised, it is important to remember that this is actually less aggressive than permitted by <xref target="RFC6675" />, which sends the same quantity of additional data as a single burst in response to the ACK that triggered Fast Retransmit.</t>

<t>For less severe loss events, where the total losses are smaller than the difference between FlightSize and ssthresh, PRR-CRB and PRR-SSRB are not invoked since PRR stays in the Proportional Rate Reduction mode. </t>

</section>

<section><name>Adapting PRR to Other Transport Protocols</name>

<t>The main PRR algorithm and Reduction Bounds can be adapted to any transport that can support <xref target="RFC6675" />. In one major implementation (Linux TCP), PRR has been the fast recovery algorithm for its default and supported congestion control modules since its introduction in 2011 <xref target="First_TCP_PRR" />. </t>

<t>The SafeACK heuristic can be generalized as any ACK of a retransmission that does not cause some other segment to be marked for retransmission.  </t>
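
<t>As a non-normative illustration of this generalization, the following sketch expresses the test with hypothetical per-ACK fields (acks_retransmission and newly_marked_lost are illustrative names, not part of any specification):</t>

<sourcecode type="python"><![CDATA[
# Non-normative sketch of the generalized SafeACK test described above.
from collections import namedtuple

# Hypothetical per-ACK bookkeeping; the field names are illustrative only.
AckInfo = namedtuple("AckInfo", ["acks_retransmission", "newly_marked_lost"])

def is_safe_ack(ack):
    # An ACK of a retransmission that does not cause some other segment
    # to be marked for retransmission.
    return ack.acks_retransmission and ack.newly_marked_lost == 0

print(is_safe_ack(AckInfo(True, 0)))   # True: safe to be less conservative
print(is_safe_ack(AckInfo(True, 2)))   # False: further losses indicated
]]></sourcecode>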

</section>

<section><name>Measurement Studies</name>

<t>
For <xref target="RFC6937" />, a companion paper <xref target="IMC11" /> evaluated <xref target="RFC3517" /> and various experimental PRR versions in a large-scale measurement study.  At the time of publication, the legacy algorithms used in that study are no longer present in the code base, making such comparisons difficult without recreating historical algorithms.   Readers interested in the measurement study should review <xref target="RFC6937" section="5"/> and the IMC paper <xref target="IMC11" />.
</t>

</section>

<section><name>Operational Considerations</name>

<section><name>Incremental Deployment</name>
<t>
PRR is incrementally deployable, because it utilizes only existing transport protocol mechanisms for data delivery acknowledgment and the detection of lost data. PRR requires only changes to the transport protocol implementation at the data sender; it does not require any changes at data receivers or in networks. This allows data senders using PRR to work correctly with any existing data receivers or networks. PRR does not require any changes to or assistance from routers, switches, or other devices in the network.
</t>
</section>

<section><name>Fairness</name>
<t>
PRR is designed to maintain the fairness properties of the congestion control algorithm with which it is deployed. PRR only operates during a congestion control response episode, such as fast recovery or response to ECN <xref target="RFC3168" />, and only makes short-term, per-acknowledgment decisions to smoothly regulate the volume of in-flight data during an episode such that at the end of the episode it will be as close as possible to the slow start threshold (ssthresh), as determined by the congestion control algorithm. PRR does not modify the congestion control cwnd increase or decrease mechanisms outside of congestion control response episodes.
</t>
</section>

<section><name>Protecting the Network Against Excessive Queuing and Packet Loss</name>
<t>Over long time scales, PRR is designed to maintain the queuing and packet loss properties of the congestion control algorithm with which it is deployed. As noted above, PRR only operates during a congestion control response episode, such as fast recovery or response to ECN, and only makes short-term, per-acknowledgment decisions to smoothly regulate the  volume of in-flight data during an episode such that at the end of the episode it will be as close as possible to the slow start threshold (ssthresh), as determined by the congestion control algorithm. </t>

<t> Over short time scales, PRR is designed to cause lower packet loss rates than preceding approaches like <xref target="RFC6675" />. At a high level, PRR is inspired by the packet conservation principle, and as much as possible, PRR relies on the self clock process. By contrast, with <xref target="RFC6675" />, a single ACK carrying a SACK option that implies a large quantity of missing data can cause a step discontinuity in the pipe estimator, which can cause Fast Retransmit to send a large burst of data that is much larger than the volume of delivered data. PRR avoids such bursts by basing transmission decisions on the volume of delivered data rather than the volume of lost data. Furthermore, as noted above, PRR-SSRB is less aggressive than <xref target="RFC6675" /> (transmitting fewer segments or taking more time to transmit them), and it outperforms <xref target="RFC6675" /> due to the lower probability of additional losses during recovery.</t>
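
<t>The contrast can be made concrete with the numbers from the heavy-loss example figure earlier in this document.  The following sketch is non-normative, and the variable names are illustrative:</t>

<sourcecode type="python"><![CDATA[
# Non-normative sketch using the heavy-loss example above (segments 0-14
# lost, ssthresh = 10 segments). On the ACK of segment 17, the pipe
# estimator drops sharply; a rule that fills cwnd - pipe sends a burst,
# while a delivered-data-based rule does not.

MSS = 1000
cwnd = 10 * MSS          # cwnd after the reduction, in the RFC 6675 row
pipe = 4 * MSS           # pipe estimate after 15 segments are marked lost
delivered_by_this_ack = 1 * MSS

pipe_filling_send = max(0, cwnd - pipe)        # 6 segments in one burst
delivered_based_send = delivered_by_this_ack   # PRR-CRB: 1 segment

print(pipe_filling_send // MSS, delivered_based_send // MSS)
]]></sourcecode>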
</section>

</section>


<section anchor="IANA"><name>IANA Considerations</name>
<t>This document has no IANA actions.</t>
</section>

<section><name>Security Considerations</name>
<t>PRR does not change the risk profile for transport protocols.</t>

<t>Implementers that change PRR from counting bytes to segments have to be cautious about the effects of ACK splitting attacks <xref target='Savage99' />, where the receiver acknowledges partial segments for the purpose of confusing the sender's congestion accounting.</t>
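
<t>The following non-normative sketch, with hypothetical numbers, shows why byte counting is robust to this attack while counting ACKs (or segments) inflates the delivery estimate:</t>

<sourcecode type="python"><![CDATA[
# Non-normative sketch: a misbehaving receiver splits the acknowledgment
# of one 1000-byte segment into ten 100-byte ACKs (hypothetical numbers).

MSS = 1000
split_acks = [100] * 10   # bytes newly acknowledged by each tiny ACK

# Counting ACKs credits ten "deliveries"; counting bytes credits exactly
# one MSS, as intended.
ack_count = len(split_acks)
bytes_delivered = sum(split_acks)

print("ACKs counted:", ack_count)
print("MSS-equivalents delivered:", bytes_delivered / MSS)
]]></sourcecode>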

</section>

</middle>

<back>
  <displayreference target="I-D.mathis-tcp-ratehalving" to="TCP-RH"/>
  <displayreference target="I-D.welzl-iccrg-pacing" to="PACING"/>
<references><name>References</name>
<references><name>Normative References</name>
<xi:include href="https://bib.ietf.org/public/rfc/bibxml/reference.RFC.1191.xml" />
<xi:include href="https://bib.ietf.org/public/rfc/bibxml/reference.RFC.2018.xml" />
<xi:include href="https://bib.ietf.org/public/rfc/bibxml/reference.RFC.2119.xml" />
<xi:include href="https://bib.ietf.org/public/rfc/bibxml/reference.RFC.6582.xml" />
<xi:include href="https://bib.ietf.org/public/rfc/bibxml/reference.RFC.4821.xml" />
<xi:include href="https://bib.ietf.org/public/rfc/bibxml/reference.RFC.5681.xml" />
<xi:include href="https://bib.ietf.org/public/rfc/bibxml/reference.RFC.6675.xml" />
<xi:include href="https://bib.ietf.org/public/rfc/bibxml/reference.RFC.8174.xml" />
<xi:include href="https://bib.ietf.org/public/rfc/bibxml/reference.RFC.8201.xml" />
<xi:include href="https://bib.ietf.org/public/rfc/bibxml/reference.RFC.8985.xml" />
<xi:include href="https://bib.ietf.org/public/rfc/bibxml/reference.RFC.9293.xml" />
<xi:include href="https://bib.ietf.org/public/rfc/bibxml/reference.RFC.9438.xml" />
</references>


<!--[rfced] FYI - We found free access versions of these references in the ACM
Digital Library and added DOIs and URLs to these references.

Current:
   [Flach2016policing]
              Flach, T., Papageorge, P., Terzis, A., Pedrosa, L., Cheng,
              Y., Karim, T., Katz-Bassett, E., and R. Govindan, "An
              Internet-Wide Analysis of Traffic Policing", SIGCOMM '16:
              Proceedings of the 2016 ACM SIGCOMM Conference, pp.
              468-482, DOI 10.1145/2934872.2934873, August 2016,
              <https://doi.org/10.1145/2934872.2934873>.

   [Hoe96Startup]
              Hoe, J., "Improving the Start-up Behavior of a Congestion
              Control Scheme for TCP", SIGCOMM '96: Conference
              Proceedings on Applications, Technologies, Architectures,
              and Protocols for Computer Communications, pp. 270-280,
              DOI 10.1145/248157.248180, August 1996,
              <https://doi.org/10.1145/248157.248180>.

   [IMC11]    Dukkipati, N., Mathis, M., Cheng, Y., and M. Ghobadi,
              "Proportional Rate Reduction for TCP", IMC '11:
              Proceedings of the 2011 ACM SIGCOMM Conference on Internet
              Measurement Conference, pp. 155-170,
              DOI 10.1145/2068816.2068832, November 2011,
              <https://doi.org/10.1145/2068816.2068832>.

   [Jacobson88]
              Jacobson, V., "Congestion Avoidance and Control",
              Symposium proceedings on Communications architectures and
              protocols (SIGCOMM '88), pp. 314-329,
              DOI 10.1145/52325.52356, August 1988,
              <https://doi.org/10.1145/52325.52356>.

   [Savage99] Savage, S., Cardwell, N., Wetherall, D., and T. Anderson,
              "TCP Congestion Control with a Misbehaving Receiver", ACM
              SIGCOMM Computer Communication Review, vol. 29, no. 5, pp.
              71-78, DOI 10.1145/505696.505704, October 1999,
              <https://doi.org/10.1145/505696.505704>.

   [VCC]      Cronkite-Ratcliff, B., Bergman, A., Vargaftik, S., Ravi,
              M., McKeown, N., Abraham, I., and I. Keslassy,
              "Virtualized Congestion Control (Extended Version)",
              SIGCOMM '16: Proceedings of the 2016 ACM SIGCOMM
              Conference pp. 230-243, DOI 10.1145/2934872.2934889,
              August 2016, <http://www.ee.technion.ac.il/~isaac/p/
              sigcomm16_vcc_extended.pdf>.

-->

<references><name>Informative References</name>
<xi:include href="https://bib.ietf.org/public/rfc/bibxml/reference.RFC.3042.xml" />
<xi:include href="https://bib.ietf.org/public/rfc/bibxml/reference.RFC.3168.xml" />
<xi:include href="https://bib.ietf.org/public/rfc/bibxml/reference.RFC.3517.xml" />
<xi:include href="https://bib.ietf.org/public/rfc/bibxml/reference.RFC.6937.xml" />

 <reference anchor="First_TCP_PRR" target="https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=a262f0cdf1f2916ea918dc329492abb5323d9a6c" quoteTitle="true">
        <front>
          <title>Proportional Rate Reduction for TCP.</title>
          <author>
            <organization showOnFrontPage="true"/>
          </author>
          <date month="August" year="2011"/>
        </front>
        <refcontent>commit a262f0cdf1f2916ea918dc329492abb5323d9a6c</refcontent>
 </reference>

<reference anchor='IMC11'>
<front>
<title>Proportional Rate Reduction for TCP</title>
<author initials='N' surname='Dukkipati' fullname='Nandita Dukkipati'>
    <organization />
</author>
<author initials='M' surname='Mathis' fullname='Matt Mathis'>
    <organization />
</author>
<author initials='Y' surname='Cheng' fullname='Yuchung Cheng'>
    <organization />
</author>
<author initials="M" surname="Ghobadi" fullname="Monia Ghobadi">
    <organization />
</author>
<date month='November' year='2011' />
</front>
<refcontent>IMC '11: Proceedings of the 2011 ACM SIGCOMM Conference on Internet Measurement Conference, pp. 155-170</refcontent>
<seriesInfo name="DOI" value="10.1145/2068816.2068832"/>
</reference>

<reference anchor='Flach2016policing'>
<front>
<title>An Internet-Wide Analysis of Traffic Policing</title>
<author initials='T' surname='Flach' fullname='Tobias Flach'>
    <organization /></author>
<author initials='P' surname='Papageorge' fullname='Pavlos Papageorge'>
    <organization /></author>
<author initials='A' surname='Terzis' fullname='Andreas Terzis'>
    <organization /></author>
<author initials='L' surname='Pedrosa' fullname='Luis Pedrosa'>
    <organization /></author>
<author initials='Y' surname='Cheng' fullname='Yuchung Cheng'>
    <organization /></author>
<author initials='T' surname='Karim' fullname='Tayeb Al Karim'>
    <organization /></author>
<author initials='E' surname='Katz-Bassett' fullname='Ethan B Katz-Bassett'>
    <organization /></author>
<author initials='R' surname='Govindan' fullname='R. Govindan'>
    <organization /></author>
<date month='August' year='2016' />
</front>
<refcontent>SIGCOMM '16: Proceedings of the 2016 ACM SIGCOMM Conference, pp. 468-482</refcontent>
<seriesInfo name="DOI" value="10.1145/2934872.2934873"/>
</reference>

<reference anchor='Hoe96Startup'>
<front>
<title>Improving the Start-up Behavior of a Congestion Control Scheme for TCP</title>
<author initials='J' surname='Hoe' fullname='Janey C. Hoe'>
    <organization /></author>
<date month='August' year='1996' />
</front>
<refcontent>SIGCOMM '96: Conference Proceedings on Applications, Technologies, Architectures, and Protocols for Computer Communications, pp. 270-280</refcontent>
<seriesInfo name="DOI" value="10.1145/248157.248180"/>
</reference>

<reference anchor='FACK' target='https://dl.acm.org/doi/pdf/10.1145/248157.248181'>
<front>
<title>Forward Acknowledgment: Refining TCP Congestion Control</title>
<author initials='M.' surname='Mathis' fullname='Matthew Mathis'>
    <organization /></author>
<author initials='J.' surname='Mahdavi' fullname='Jamshid Mahdavi'>
    <organization /></author>
<date month='August' year='1996' />
</front>
<refcontent>ACM SIGCOMM Computer Communication Review, vol. 26, no. 4, pp. 281-291</refcontent>
<seriesInfo name="DOI" value="10.1145/248157.248181"/>
</reference>

<!-- [RHID]
draft-mathis-tcp-ratehalving-00
IESG State: Expired as of 10/24/25
-->
<xi:include href="https://datatracker.ietf.org/doc/bibxml3/reference.I-D.mathis-tcp-ratehalving.xml"/>

<!-- [I-D.welzl-iccrg-pacing]
draft-welzl-iccrg-pacing-03
IESG State: I-D Exists as of 10/24/25
-->
<xi:include href="https://datatracker.ietf.org/doc/bibxml3/reference.I-D.welzl-iccrg-pacing.xml"/>

<reference anchor='VCC' target='http://www.ee.technion.ac.il/~isaac/p/sigcomm16_vcc_extended.pdf'>
<front>
<title>Virtualized Congestion Control (Extended Version)</title>
<author initials='B' surname='Cronkite-Ratcliff' fullname='Bryce Cronkite-Ratcliff'></author>
<author initials='A' surname='Bergman' fullname='Aran Bergman'></author>
<author initials='S' surname='Vargaftik' fullname='Shay Vargaftik'></author>
<author initials='M' surname='Ravi' fullname='Madhusudhan Ravi'></author>
<author initials='N' surname='McKeown' fullname='Nick McKeown'></author>
<author initials='I' surname='Abraham' fullname='Ittai Abraham'></author>
<author initials='I' surname='Keslassy' fullname='Isaac Keslassy'></author>
<date year='2016' month='August' />
</front>
<seriesInfo name="DOI" value="10.1145/2934872.2934889"/>
<refcontent>SIGCOMM '16: Proceedings of the 2016 ACM SIGCOMM Conference, pp. 230-243</refcontent>
</reference>

<!-- REMOVED
<reference anchor='RHweb' target='http://www.psc.edu/networking/papers/FACKnotes/current/'>
<front>
<title>TCP Rate-Halving with Bounding Parameters</title>
<author initials='M.' surname='Mathis' fullname='Matthew Mathis'>
    <organization /></author>
<author initials='J.' surname='Mahdavi' fullname='Jamshid Mahdavi'>
    <organization /></author>
<date month='December' year='1997' />
</front>
<seriesInfo name='Web' value='publication'/>
</reference>
-->

<!-- REMOVED
<reference anchor='CUBIC'>
<front>
<title>CUBIC: A new TCP-friendly high-speed TCP variant</title>

<author initials='I.' surname='Rhee' fullname='Injong Rhee'>
    <organization /></author>
<author initials='L.' surname='Xu' fullname='L Xu'>
    <organization /></author>

<date month='February' year='2005' />

<abstract><t></t></abstract>

</front>

<seriesInfo name='PFLDnet' value='2005' />
</reference>
-->

<!-- Van 88 -->
<reference anchor='Jacobson88'>
<front>
<title>Congestion Avoidance and Control</title>
<author initials='V' surname='Jacobson' > <organization /></author>
<date year='1988' month='August' />
</front>
<refcontent>Symposium proceedings on Communications architectures and protocols (SIGCOMM '88), pp. 314-329</refcontent>
<seriesInfo name="DOI" value="10.1145/52325.52356"/>
</reference>

<!-- ACK splitting attacks  -->
<reference anchor='Savage99'>
<front>
<title>TCP Congestion Control with a Misbehaving Receiver</title>
<author initials='S' surname='Savage' > <organization /></author>
<author initials='N' surname='Cardwell' > <organization /></author>
<author initials='D' surname='Wetherall' > <organization /></author>
<author initials='T' surname='Anderson' > <organization /></author>
<date year='1999' month='October' />
</front>
<refcontent>ACM SIGCOMM Computer Communication Review, vol. 29, no. 5, pp. 71-78</refcontent>
<seriesInfo name="DOI" value="10.1145/505696.505704"/>
</reference>

<!-- REMOVED
<!- - draft-mathis-tcpm-tcp-laminar (Expired) - ->
<reference anchor='Laminar'>
<front>
<title>Laminar TCP and the case for refactoring TCP congestion control</title>
<author initials='M' surname='Mathis' fullname='Matt Mathis'>
    <organization />
</author>
<date month='July' day='16' year='2012' />
</front>
<seriesInfo name='Work in' value='Progress' />
</reference>
-->

</references>
</references>

<section anchor="conservative"><name>Strong Packet Conservation Bound</name>

<t>
PRR-CRB is based on a conservative, philosophically pure, and aesthetically appealing Strong Packet Conservation Bound, described here.   Although inspired by the packet conservation principle <xref target="Jacobson88" />, it differs in how it treats segments that are missing and presumed lost.   Under all conditions and sequences of events during recovery, PRR-CRB strictly bounds the data transmitted to be equal to or less than the amount of data delivered to the receiver.
Note that the effects of presumed losses are included in the inflight calculation but do not affect the outcome of PRR-CRB once inflight has fallen below ssthresh.</t>

<t>This Strong Packet Conservation Bound is the most aggressive algorithm that does not lead to additional forced losses in some environments.   It has the property that if there is a standing queue at a bottleneck that is carrying no other traffic, the queue will maintain exactly constant length for the entire  duration of the recovery, except for +1/-1 fluctuation due to differences in packet arrival and exit times.    Any less aggressive algorithm will result in a declining queue at the bottleneck.  Any more aggressive algorithm will result in an increasing queue or additional losses if it is a full drop tail queue.</t>

<t>This property is demonstrated with a thought experiment:</t>

<t>
Imagine a network path that has insignificant delays in both directions, except for the processing time and queue at a single bottleneck in the forward path.  In particular, when a packet is "served" at the head of the bottleneck queue, the following events happen in much less than one bottleneck packet time: the packet arrives at the receiver; the receiver sends an ACK that arrives at the sender; the sender processes the ACK and sends some data; the data is queued at the bottleneck.  </t>

<t>
If SndCnt is set to DeliveredData and nothing else is inhibiting sending data,
then clearly the data arriving at the bottleneck queue will exactly replace the
data that was served at the head of the queue, so the queue will have a
constant length.  If the queue is drop tail and full, then the queue will stay
exactly full. Losses or reordering on the ACK path only cause wider
fluctuations in the queue size but do not raise its peak size, independent of
whether the data is in order or out of order (including loss recovery from an earlier RTT).  Any more aggressive algorithm that sends additional data will overflow the drop tail queue and cause loss.  Any less aggressive algorithm will under-fill the queue.  Therefore, setting SndCnt to DeliveredData is the most aggressive algorithm that does not cause forced losses in this simple network.  Relaxing the assumptions (e.g., making delays more authentic and adding more flows, delayed ACKs, etc.)&nbsp;is likely to increase the fine-grained fluctuations in queue size but does not change its basic behavior.</t>
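
<t>The thought experiment can be checked with a small, non-normative simulation; the parameter values are illustrative:</t>

<sourcecode type="python"><![CDATA[
# Non-normative simulation of the thought experiment above: with SndCnt set
# to DeliveredData, every packet served at the bottleneck is replaced by
# exactly one new packet, so the queue length stays constant.

queue_len = 10                 # packets in the bottleneck queue (illustrative)
history = []

for _ in range(20):            # serve 20 packets at the bottleneck
    queue_len -= 1             # head packet is served; its ACK returns
    delivered_data = 1
    sndcnt = delivered_data    # SndCnt = DeliveredData
    queue_len += sndcnt        # the newly sent packet joins the queue
    history.append(queue_len)

print(history)                 # [10, 10, ..., 10]: constant queue length
]]></sourcecode>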

<t>Note that the congestion control algorithm implements a broader notion of optimality that includes appropriately sharing the network.  Typical congestion control algorithms are likely to reduce the data sent relative to the Packet Conservation Bound implemented by PRR, bringing TCP's actual window down to ssthresh.</t>

</section>

<section numbered="false"><name>Acknowledgments</name>
<t>This document is based in part on previous work by <contact fullname="Janey C. Hoe"/> (see "Recovery from Multiple Packet Losses", Section 3.2 of <xref target="Hoe96Startup" />), <contact fullname="Matt Mathis"/>, <contact fullname="Jeff Semke"/>, and <contact fullname="Jamshid Mahdavi"/> <xref target="I-D.mathis-tcp-ratehalving" /> and influenced by several discussions with <contact fullname="John Heffner"/>.</t>

<t><contact fullname="Monia Ghobadi"/> and <contact fullname="Sivasankar Radhakrishnan"/> helped analyze the experiments. <contact fullname="Ilpo Jarvinen"/> reviewed the initial implementation. <contact fullname="Mark Allman"/>, <contact fullname="Richard Scheffenegger"/>, <contact fullname="Markku Kojo"/>, <contact fullname="Mirja Kuehlewind"/>, <contact fullname="Gorry Fairhurst"/>, <contact fullname="Russ Housley"/>, <contact fullname="Paul Aitken"/>, <contact fullname="Daniele Ceccarelli"/>, and <contact fullname="Mohamed Boucadair"/> improved the document through their insightful reviews and suggestions.</t>

</section>

</back>

<!-- [rfced] Some author comments are present in the XML. Please confirm that
no updates related to these comments are outstanding. Note that the
comments will be deleted prior to publication.
-->

<!-- [rfced] Abbreviations

a) FYI - We have added expansions for the following abbreviations
per Section 3.6 of RFC 7322 ("RFC Style Guide"). Please review each
expansion in the document carefully to ensure correctness.

 Content Delivery Network (CDN)
 Forward Acknowledgment (FACK)
 Recent Acknowledgment Tail Loss Probe (RACK-TLP)

b) Both the expansion and the acronym for the following term are used
throughout the document. Would you like to update to use the expansion upon
first usage and the acronym for the rest of the document?

round-trip time (RTT)
-->

<!--[rfced] Throughout the text, the following terminology appears to be used
inconsistently. May we update each to the form on the right?

 Fast Retransmit > fast retransmit
 limited transmit > Limited Transmit
-->

<!-- [rfced] Please review the "Inclusive Language" portion of the online
Style Guide <https://www.rfc-editor.org/styleguide/part2/#inclusive_language>
and let us know if any changes are needed.  Updates of this nature typically
result in more precise language, which is helpful for readers.

Note that our script did not flag any words in particular, but this should
still be reviewed as a best practice.
-->
</rfc>