<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE rfc SYSTEM "rfc2629-xhtml.ent">
<?rfc toc="yes"?>
<?rfc tocompact="yes"?>
<?rfc tocdepth="3"?>
<?rfc tocindent="yes"?>
<?rfc symrefs="yes"?>
<?rfc sortrefs="yes"?>
<?rfc comments="yes"?>
<?rfc inline="yes"?>
<?rfc compact="yes"?>
<?rfc subcompact="no"?>

<rfc xmlns:xi="http://www.w3.org/2001/XInclude" category="info" docName="draft-ietf-spring-segment-routing-msdc-11"
     ipr="trust200902" obsoletes="" updates="" submissionType="IETF"
     consensus="true" number="9999" xml:lang="en" tocInclude="true" symRefs="true" sortRefs="true" version="3">

  <!-- xml2rfc v2v3 conversion 2.23.0 -->

  <front>
    <title abbrev="BGP-Prefix SID in large-scale DCs">BGP-Prefix Segment in
    Large-Scale Data Centers</title>
    <seriesInfo name="Internet-Draft" value="draft-ietf-spring-segment-routing-msdc-11"/> name="RFC" value="9999"/>
    <author fullname="Clarence Filsfils" initials="C." role="editor" surname="Filsfils">
      <organization>Cisco Systems, Inc.</organization>
      <address>
        <postal>
          <street/>
          <city>Brussels</city>
          <region/>
          <code/>
          <country>BE</country>
        </postal>
        <email>cfilsfil@cisco.com</email>
      </address>
    </author>
    <author fullname="Stefano Previdi" initials="S." surname="Previdi">
      <organization>Cisco Systems, Inc.</organization>
      <address>
        <postal>
          <street/>
          <city/>
          <code/>
          <country>Italy</country>
        </postal>
        <email>stefano@previdi.net</email>
      </address>
    </author>
    <author fullname="Gaurav Dawra" initials="G." surname="Dawra">
      <organization>LinkedIn</organization>
      <address>
        <postal>
          <street/>
          <city/>
          <code/>
          <country>USA</country>
        </postal>
        <email>gdawra.ietf@gmail.com</email>
      </address>
    </author>
    <author fullname="Ebben Aries" initials="E." surname="Aries">
      <organization>Juniper Networks</organization>
      <address>
        <postal>
          <street>1133 Innovation Way</street>
          <city>Sunnyvale</city>
          <code>CA 94089</code>
          <country>US</country>
        </postal>
        <email>exa@juniper.net</email>
      </address>
    </author>
    <author fullname="Petr Lapukhov" initials="P." surname="Lapukhov">
      <organization>Facebook</organization>
      <address>
        <postal>
          <street/>
          <city/>
          <code/>
          <country>US</country>
        </postal>
        <email>petr@fb.com</email>
      </address>
    </author>
    <date year="2018"/> month="July" year="2019"/>
    <workgroup>Network Working Group</workgroup>
    <abstract>
      <t>This document describes the motivation and benefits for applying
      segment routing in BGP-based large-scale data-centers. It describes the
      design to deploy segment routing in those data-centers, for both the
      MPLS and IPv6 dataplanes.</t>
    </abstract>
  </front>
  <middle>
    <section anchor="INTRO" numbered="true" toc="default">
      <name>Introduction</name>
      <t>Segment Routing (SR), as described in <xref target="I-D.ietf-spring-segment-routing" format="default"/>, leverages the source routing
      paradigm. A node steers a packet through an ordered list of
      instructions, called segments. A segment can represent any instruction,
      topological or service-based. A segment can have a semantic local to an
      SR node or global within an SR domain. SR allows a flow to be enforced
      through any topological path while maintaining per-flow state only at
      the ingress node to the SR domain. Segment Routing can be applied to the
      MPLS and IPv6 data-planes.</t>
      <t>The use-cases described in this document should be considered in the
      context of the BGP-based large-scale data-center (DC) design described
      in <xref target="RFC7938" format="default"/>. This document extends it by applying SR
      both with IPv6 and MPLS dataplane.</t>
    </section>
    <section anchor="LARGESCALEDC" numbered="true" toc="default">
      <name>Large Scale Data Center Network Design Summary</name>
      <t>This section provides a brief summary of the informational document
      <xref target="RFC7938" format="default"/> that outlines a practical network design
      suitable for data-centers of various scales:</t>
      <ul spacing="normal">
        <li>Data-center networks have highly symmetric topologies with
          multiple parallel paths between two server attachment points. The
          well-known Clos topology is most popular among the operators (as
          described in <xref target="RFC7938" format="default"/>). In a Clos topology, the
          minimum number of parallel paths between two elements is determined
          by the "width" of the "Tier-1" stage. See <xref target="FIGLARGE" format="default"/>
          below for an illustration of the concept.</li>
        <li>Large-scale data-centers commonly use a routing protocol, such as
          BGP-4 <xref target="RFC4271" format="default"/> in order to provide endpoint
          connectivity. Recovery after a network failure is therefore driven
          either by local knowledge of directly available backup paths or by
          distributed signaling between the network devices.</li>
        <li>Within data-center networks, traffic is load-shared using the
          Equal Cost Multipath (ECMP) mechanism. With ECMP, every network
          device implements a pseudo-random decision, mapping packets to one
          of the parallel paths by means of a hash function calculated over
          certain parts of the packet, typically a combination of various
          packet header fields (see the non-normative sketch after this
          list).</li>
      </ul>
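      <t>As a non-normative illustration of the per-flow hashing described
      above, the following Python sketch maps a flow's 5-tuple onto one of
      the parallel paths. The hashed fields and the hash function are
      illustrative assumptions; actual devices compute a vendor-specific
      hash in hardware.</t>
      <sourcecode type="python"><![CDATA[
# Illustrative sketch only: per-flow ECMP selection by hashing the
# 5-tuple.  Real devices hash vendor-specific fields in hardware.
import hashlib

def ecmp_path(src_ip, dst_ip, proto, src_port, dst_port, paths):
    """Deterministically map one flow to one of the parallel paths."""
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    return paths[int.from_bytes(digest[:4], "big") % len(paths)]

# Every packet of one flow follows the same path; distinct flows spread.
print(ecmp_path("198.51.100.1", "198.51.100.2", 6, 33333, 80,
                ["Node3", "Node4"]))
]]></sourcecode>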
      <t>The following is a schematic of a five-stage Clos topology, with four
      devices in the "Tier-1" stage. Notice that the number of paths between
      Node1 and Node12 equals four: the paths have to cross all of the Tier-1
      devices. At the same time, the number of paths between Node1 and Node2
      equals two, and the paths only cross Tier-2 devices. Other topologies
      are possible, but for simplicity only the topologies that have a single
      path from Tier-1 to Tier-3 are considered below. The rest could be
      treated similarly, with a few modifications to the logic.</t>
      <section anchor="REFDESIGN" numbered="true" toc="default">
        <name>Reference design</name>
        <figure anchor="FIGLARGE">
          <name>5-stage Clos topology</name>
          <artwork name="" type="" align="left" alt=""><![CDATA[                                Tier-1
                               +-----+
                               |NODE |
                            +->|  5  |--+
                            |  +-----+  |
                    Tier-2  |           |   Tier-2
                   +-----+  |  +-----+  |  +-----+
     +------------>|NODE |--+->|NODE |--+--|NODE |-------------+
     |       +-----|  3  |--+  |  6  |  +--|  9  |-----+       |
     |       |     +-----+     +-----+     +-----+     |       |
     |       |                                         |       |
     |       |     +-----+     +-----+     +-----+     |       |
     | +-----+---->|NODE |--+  |NODE |  +--|NODE |-----+-----+ |
     | |     | +---|  4  |--+->|  7  |--+--|  10 |---+ |     | |
     | |     | |   +-----+  |  +-----+  |  +-----+   | |     | |
     | |     | |            |           |            | |     | |
   +-----+ +-----+          |  +-----+  |          +-----+ +-----+
   |NODE | |NODE | Tier-3   +->|NODE |--+   Tier-3 |NODE | |NODE |
   |  1  | |  2  |             |  8  |             | 11  | |  12 |
   +-----+ +-----+             +-----+             +-----+ +-----+
     | |     | |                                     | |     | |
     A O     B O            <- Servers ->            Z O     O O
]]></artwork>
        </figure>
        <t>In the reference topology illustrated in <xref target="FIGLARGE" format="default"/>,
        the following is assumed:</t>
        <ul spacing="normal">
          <li>
            <t>Each node is its own AS (Node X has AS X). 4-byte AS numbers
            are recommended (<xref target="RFC6793" format="default"/>).</t>
            <ul spacing="normal">
              <li>For simple and efficient route propagation filtering,
                Node5, Node6, Node7 and Node8 use the same AS, Node3 and Node4
                use the same AS, Node9 and Node10 use the same AS.</li>
              <li>In case 2-byte autonomous system numbers are used, and
                for efficient usage of the scarce 2-byte Private Use AS pool,
                different Tier-3 nodes might use the same AS.</li>
              <li>Without loss of generality, this document simplifies these
                details and assumes that each node has its own AS.</li>
            </ul>
          </li>
          <li>Each node peers with its neighbors with a BGP session. If not
            specified, eBGP is assumed. In a specific use-case, iBGP will be
            used; this will be called out explicitly where it applies.</li>
          <li>
            <t>Each node originates the IPv4 address of its loopback interface
            into BGP and announces it to its neighbors. </t>
            <ul spacing="normal">
              <li>The loopback of Node X is 192.0.2.x/32.</li>
            </ul>
          </li>
        </ul>
        <t>In this document, the Tier-1, Tier-2 and Tier-3 nodes are referred
        to respectively as Spine, Leaf and ToR (top of rack) nodes. When a ToR
        node acts as a gateway to the "outside world", it is referred to as a
        border node.</t>
      </section>
    </section>
    <section anchor="OPENPROBS" numbered="true" toc="default">
      <name>Some open problems in large data-center networks</name>
      <t>The data-center network design summarized above provides means for
      moving traffic between hosts with reasonable efficiency. There are a few
      open performance and reliability problems that arise in such a design:
      </t>
      <ul spacing="normal">
        <li>ECMP routing is most commonly realized per-flow. This means that
          large, long-lived "elephant" flows may affect the performance of
          smaller, short-lived "mouse" flows and reduce the efficiency of
          per-flow load-sharing. In other words, per-flow ECMP does not
          perform efficiently when the flow lifetime distribution is
          heavy-tailed. Furthermore, due to hash-function inefficiencies it is
          possible to have frequent flow collisions, where more flows get
          placed on one path over the others.</li>
        <li>Shortest-path routing with ECMP implements an oblivious routing
          model, which is not aware of the network imbalances. If the network
          symmetry is broken, for example due to link failures, utilization
          hotspots may appear. For example, if a link fails between Tier-1 and
          Tier-2 devices (e.g. Node5 and Node9), Tier-3 devices Node1 and
          Node2 will not be aware of that, since there are other paths
          available from the perspective of Node3. They will continue sending
          roughly equal traffic to Node3 and Node4 as if the failure didn't
          exist, which may cause a traffic hotspot.</li>
        <li>Isolating faults in the network with multiple parallel paths and
          ECMP-based routing is non-trivial due to lack of determinism.
          Specifically, the connections from HostA to HostB may take a
          different path every time a new connection is formed, thus making
          consistent reproduction of a failure much more difficult. This
          complexity scales linearly with the number of parallel paths in the
          network, and stems from the random nature of path selection by the
          network devices.</li>
      </ul>
      <t>The following sections explain how to apply SR in the DC, for both
      the MPLS and IPv6 data-planes.</t>
    </section>
    <section anchor="APPLYSR" numbered="true" toc="default">
      <name>Applying Segment Routing in the DC with MPLS dataplane</name>
      <section anchor="BGPREFIXSEGMENT" numbered="true" toc="default">
        <name>BGP Prefix Segment (BGP-Prefix-SID)</name>
        <t>A BGP Prefix Segment is a segment associated with a BGP prefix. A
        BGP Prefix Segment is a network-wide instruction to forward the packet
        along the ECMP-aware best path to the related prefix.</t>
        <t>The BGP Prefix Segment is defined as the BGP-Prefix-SID Attribute
        in <xref target="I-D.ietf-idr-bgp-prefix-sid" format="default"/> which contains an
        index. Throughout this document the BGP Prefix Segment Attribute is
        referred to as the BGP-Prefix-SID and the encoded index as the
        label-index.</t>
        <t>In this document, the network design decision has been made to
        assume that all the nodes are allocated the same SRGB (Segment Routing
        Global Block), e.g. [16000, 23999]. This provides operational
        simplification as explained in <xref target="SINGLESRGB" format="default"/>, but this
        is not a requirement.</t>
        <t>For illustration purposes, when considering an MPLS data-plane, it
        is assumed that the label-index allocated to prefix 192.0.2.x/32 is X.
        As a result, a local label (16000+X) is allocated for prefix
        192.0.2.x/32 by each node throughout the DC fabric.</t>
        <t>When the IPv6 data-plane is considered, it is assumed that Node X is
        allocated IPv6 address (segment) 2001:DB8::X.</t>
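        <t>The following non-normative Python sketch summarizes these
        allocation conventions (common SRGB, label-index X for prefix
        192.0.2.x/32, and IPv6 segment 2001:DB8::X); the helper names are
        illustrative only.</t>
        <sourcecode type="python"><![CDATA[
# Illustrative sketch of the allocation conventions assumed in this
# document: every node is configured with the same SRGB [16000, 23999].
SRGB = range(16000, 24000)

def local_label(label_index, srgb=SRGB):
    """Label that each node allocates for a prefix with this label-index."""
    label = srgb.start + label_index
    assert label in srgb, "label-index falls outside the SRGB"
    return label

def ipv6_segment(x):
    """IPv6 segment assumed for Node X."""
    return f"2001:DB8::{x}"

assert local_label(11) == 16011      # 192.0.2.11/32 -> 16011 fabric-wide
print(local_label(11), ipv6_segment(11))
]]></sourcecode>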
      </section>
      <section anchor="eBGP8277" numbered="true" toc="default">
        <name>eBGP Labeled Unicast (RFC8277)</name>
        <t>Referring to <xref target="FIGLARGE" format="default"/> and <xref target="RFC7938" format="default"/>, the following design modifications are
        introduced:</t>
        <ul spacing="normal">
          <li>Each node peers with its neighbors via an eBGP session with
            extensions defined in <xref target="RFC8277" format="default"/> (named "eBGP8277"
            throughout this document) and with the BGP-Prefix-SID attribute
            extension as defined in <xref target="I-D.ietf-idr-bgp-prefix-sid" format="default"/>.</li>
          <li>The forwarding plane at Tier-2 and Tier-1 is MPLS.</li>
          <li>The forwarding plane at Tier-3 is either IP2MPLS (if the host
            sends IP traffic) or MPLS2MPLS (if the host sends MPLS-
            encapsulated traffic).</li>
        </ul>
        <t><xref target="FIGSMALL" format="default"/> zooms into a path from server A to server
        Z within the topology of <xref target="FIGLARGE" format="default"/>.</t>
        <figure anchor="FIGSMALL">
          <name>Path from A to Z via nodes 1, 4, 7, 10 and 11</name>
          <artwork name="" type="" align="left" alt=""><![CDATA[
                   +-----+     +-----+     +-----+
       +---------->|NODE |     |NODE |     |NODE |
       |           |  4  |--+->|  7  |--+--|  10 |---+
       |           +-----+     +-----+     +-----+   |
       |                                             |
   +-----+                                         +-----+
   |NODE |                                         |NODE |
   |  1  |                                         | 11  |
   +-----+                                         +-----+
     |                                              |
     A                    <- Servers ->             Z
]]></artwork>
        </figure>
        <t>Referring to <xref target="FIGLARGE" format="default"/> and <xref target="FIGSMALL" format="default"/> and assuming the IP address with the AS and
        label-index allocation previously described, the following sections
        detail the control plane operation and the data plane states for the
        prefix 192.0.2.11/32 (loopback of Node11)</t>
        <section anchor="CONTROLPLANE" numbered="true" toc="default">
          <name>Control Plane</name>
          <t>Node11 originates 192.0.2.11/32 in BGP and allocates to it a
          BGP-Prefix-SID with label-index: index11 <xref target="I-D.ietf-idr-bgp-prefix-sid" format="default"/>.</t>
<ul empty="true">
<li><t>Node11 sends the following eBGP8277 update to Node10:</t>
<dl spacing="compact">
<dt>IP Prefix:</dt><dd>192.0.2.11/32</dd>
<dt>Label:</dt><dd>Implicit-Null</dd>
<dt>Next-hop:</dt><dd>Node11's interface address on the link to Node10</dd>
<dt>AS Path:</dt><dd>{11}</dd>
<dt>BGP-Prefix-SID:</dt><dd>Label-Index 11</dd>
</dl>
</li>
</ul>

          <t>Node10 receives the above update. As it is SR capable, Node10 is
          able to interpret the BGP-Prefix-SID and hence understands that it
          should allocate the label from its own SRGB block, offset by the
          Label-Index received in the BGP-Prefix-SID (16000+11 hence 16011) to
          the NLRI instead of allocating a non-deterministic label out of a
          dynamically allocated portion of the local label space. The
          implicit-null label in the NLRI tells Node10 that it is the
          penultimate hop and must pop the top label on the stack before
          forwarding traffic for this prefix to Node11.</t>
<ul empty="true">
<li><t>Then, Node10 sends the following eBGP8277 update to Node7:</t>
<dl spacing="compact">
<dt>IP Prefix:</dt><dd>192.0.2.11/32</dd>
<dt>Label:</dt><dd>16011</dd>
<dt>Next-hop:</dt><dd>Node10's interface address on the link to Node7</dd>
<dt>AS Path:</dt><dd>{10, 11}</dd>
<dt>BGP-Prefix-SID:</dt><dd>Label-Index 11</dd>
</dl>
</li>
</ul>
          <t>Node7 receives the above update. As it is SR capable, Node7 is
          able to interpret the BGP-Prefix-SID and hence allocates the local
          (incoming) label 16011 (16000 + 11) to the NLRI (instead of
          allocating a "dynamic" local label from its label
          manager). Node7 uses the label in the received eBGP8277 NLRI as the
          outgoing label (the index is only used to derive the local/incoming
          label).</t>
<ul empty="true">
<li><t>Node7 sends the following eBGP8277 update to Node4:</t>
<dl spacing="compact">
<dt>IP Prefix:</dt><dd>192.0.2.11/32</dd>
<dt>Label:</dt><dd>16011</dd>
<dt>Next-hop:</dt><dd>Node7's interface address on the link to Node4</dd>
<dt>AS Path:</dt><dd>{7, 10, 11}</dd>
<dt>BGP-Prefix-SID:</dt><dd>Label-Index 11</dd>
</dl>
</li>
</ul>
          <t>Node4 receives the above update. As it is SR capable, Node4 is
          able to interpret the BGP-Prefix-SID and hence allocates the local
          (incoming) label 16011 to the NLRI (instead of allocating a
          "dynamic" local label from its label manager). Node4
          uses the label in the received eBGP8277 NLRI as outgoing label (the
          index is only used to derive the local/incoming label).</t>
<ul empty="true">
<li><t>Node4 sends the following eBGP8277 update to Node1:</t>
<dl spacing="compact">
<dt>IP Prefix:</dt><dd>192.0.2.11/32</dd>
<dt>Label:</dt><dd>16011</dd>
<dt>Next-hop:</dt><dd>Node4's interface address on the link to Node1</dd>
<dt>AS Path:</dt><dd>{4, 7, 10, 11}</dd>
<dt>BGP-Prefix-SID:</dt><dd>Label-Index 11</dd>
</dl>
</li>
</ul>

          <t>Node1 receives the above update. As it is SR capable, Node1 is
          able to interpret the BGP-Prefix-SID and hence allocates the local
          (incoming) label 16011 to the NLRI (instead of allocating a
          "dynamic" local label from its label manager). Node1
          uses the label in the received eBGP8277 NLRI as outgoing label (the
          index is only used to derive the local/incoming label).</t>
        </section>
        <section anchor="DATAPLANE" numbered="true" toc="default">
          <name>Data Plane</name>
          <t>Referring to <xref target="FIGLARGE" format="default"/>, and assuming all nodes
          apply the same advertisement rules described above and all nodes
          have the same SRGB (16000-23999), here are the IP/MPLS forwarding
          tables for prefix 192.0.2.11/32 at Node1, Node4, Node7 and
          Node10.</t>
          <figure anchor="NODE1FIB">
          <table anchor="NODE1FIB" align="center">
            <name>Node1 Forwarding Table</name>
            <artwork align="center" name="" type="" alt=""><![CDATA[-----------------------------------------------
Incoming
            <thead>
              <tr>
                <th align="center">Incoming label    | outgoing label | Outgoing or IP destination |                | Interface
------------------+----------------+-----------
     16011        |      16011     | ECMP{3, 4}
  192.0.2.11/32   |      16011     | ECMP{3, 4}
------------------+----------------+-----------]]></artwork>
          </figure>
          <figure anchor="NODE4FIB"> destination</th>
                <th align="center">Outgoing label</th>
                <th align="center">Outgoing Interface</th>
              </tr>
            </thead>
            <tbody>
              <tr>
               <td align="center">16011</td>
               <td align="center">16011</td>
               <td align="center">ECMP{3, 4}</td>
              </tr>
              <tr>
               <td align="center">192.0.2.11/32</td>
               <td align="center">16011</td>
               <td align="center">ECMP{3, 4}</td>
              </tr>
            </tbody>
          </table>

          <table anchor="NODE4FIB" align="center">
            <name>Node4 Forwarding Table</name>
            <artwork align="center" name="" type="" alt=""><![CDATA[
-----------------------------------------------
Incoming label    | outgoing
            <thead>
              <tr>
                <th align="center">Incoming label | Outgoing or IP destination |                | Interface
------------------+----------------+-----------
     16011        |      16011     | ECMP{7, 8}
  192.0.2.11/32   |      16011     | ECMP{7, 8}
------------------+----------------+-----------]]></artwork>
          </figure>
          <figure anchor="NODE7FIB"> destination</th>
                <th align="center">Outgoing label</th>
                <th align="center">Outgoing Interface</th>
              </tr>
            </thead>
            <tbody>
              <tr>
               <td align="center">16011</td>
               <td align="center">16011</td>
               <td align="center">ECMP{7, 8}</td>
              </tr>
              <tr>
               <td align="center">192.0.2.11/32</td>
               <td align="center">16011</td>
               <td align="center">ECMP{7, 8}</td>
              </tr>
            </tbody>
          </table>

          <table anchor="NODE7FIB" align="center">
            <name>Node7 Forwarding Table</name>
            <artwork align="center" name="" type="" alt=""><![CDATA[
-----------------------------------------------
Incoming
            <thead>
              <tr>
                <th align="center">Incoming label    | outgoing label | Outgoing or IP destination |                | Interface
------------------+----------------+-----------
     16011        |      16011     |    10
  192.0.2.11/32   |      16011     |    10
------------------+----------------+-----------]]></artwork>
          </figure>
          <artwork align="center" name="" type="" alt=""><![CDATA[
-----------------------------------------------
Incoming label    | outgoing destination</th>
                <th align="center">Outgoing label</th>
                <th align="center">Outgoing Interface</th>
              </tr>
            </thead>
            <tbody>
              <tr>
               <td align="center">16011</td>
               <td align="center">16011</td>
               <td align="center">10</td>
              </tr>
              <tr>
               <td align="center">192.0.2.11/32</td>
               <td align="center">16011</td>
               <td align="center">10</td>
              </tr>
            </tbody>
          </table>

          <table align="center">
            <name>Node10 Forwarding Table</name>
            <thead>
              <tr>
                <th align="center">Incoming label | Outgoing or IP destination |                | Interface
------------------+----------------+-----------
     16011        |      POP       |    11
  192.0.2.11/32   |      N/A       |    11
------------------+----------------+-----------]]></artwork> destination</th>
                <th align="center">Outgoing label</th>
                <th align="center">Outgoing Interface</th>
              </tr>
            </thead>
            <tbody>
              <tr>
               <td align="center">16011</td>
               <td align="center">POP</td>
               <td align="center">11</td>
              </tr>
              <tr>
               <td align="center">192.0.2.11/32</td>
               <td align="center">N/A</td>
               <td align="center">11</td>
              </tr>
            </tbody>
          </table>
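          <t>The following non-normative Python sketch walks a labeled packet
          through the tables above along the path of <xref target="FIGSMALL" format="default"/>; in practice the ECMP hash selects one member of
          each set.</t>
          <sourcecode type="python"><![CDATA[
# Illustrative walk of the forwarding tables above for a packet
# entering Node1 with top label 16011.
fib = {
    "Node1":  {16011: ("SWAP", 16011, ["Node3", "Node4"])},
    "Node4":  {16011: ("SWAP", 16011, ["Node7", "Node8"])},
    "Node7":  {16011: ("SWAP", 16011, ["Node10"])},
    "Node10": {16011: ("POP",  None,  ["Node11"])},  # penultimate hop
}
# Follow Figure 2 (Node1 -> 4 -> 7 -> 10 -> 11); the ECMP hash would
# normally pick among the members of each set.
choice = {"Node1": "Node4", "Node4": "Node7",
          "Node7": "Node10", "Node10": "Node11"}

node, label = "Node1", 16011
while label is not None:
    action, out_label, members = fib[node][label]
    assert choice[node] in members
    print(f"{node}: {action}, forward to {choice[node]}")
    node, label = choice[node], out_label
print(f"{node} performs a plain IP lookup on the exposed packet")
]]></sourcecode>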
        </section>
        <section anchor="VARIATIONS" numbered="true" toc="default">
          <name>Network Design Variation</name>
          <t>A network design choice could consist of switching all the
          traffic through Tier-1 and Tier-2 as MPLS traffic. In this case, one
          could filter away the IP entries at Node4, Node7 and Node10. This
          might be beneficial in order to optimize the forwarding table
          size.</t>
          <t>A network design choice could consist of allowing the hosts to
          send MPLS-encapsulated traffic based on the Egress Peer Engineering
          (EPE) use-case as defined in <xref target="I-D.ietf-spring-segment-routing-central-epe" format="default"/>. For example,
          applications at HostA would send their Z-destined traffic to Node1
          with an MPLS label stack where the top label is 16011 and the next
          label is an EPE peer segment (<xref target="I-D.ietf-spring-segment-routing-central-epe" format="default"/>) at Node11
          directing the traffic to Z.</t>
        </section>
        <section anchor="FABRIC" numbered="true" toc="default">
          <name>Global BGP Prefix Segment through the fabric</name>
          <t>When the previous design is deployed, the operator enjoys global
          BGP-Prefix-SID and label allocation throughout the DC fabric.</t>
          <t>A few examples follow:</t>
          <ul spacing="normal">
            <li>Normal forwarding to Node11: a packet with top label 16011
              received by any node in the fabric will be forwarded along the
              ECMP-aware BGP best-path towards Node11, and the label 16011 is
              penultimate-popped at Node10 (or at Node9).</li>
            <li>Traffic-engineered path to Node11: an application on a host
              behind Node1 might want to restrict its traffic to paths via the
              Spine node Node5. The application achieves this by sending its
              packets with a label stack of {16005, 16011}. BGP Prefix SID
              16005 directs the packet up to Node5 along the path (Node1,
              Node3, Node5). BGP-Prefix-SID 16011 then directs the packet down
              to Node11 along the path (Node5, Node9, Node11).</li>
          </ul>
        </section>
        <section anchor="INCRDEP" numbered="true" toc="default">
          <name>Incremental Deployments</name>
          <t>The design previously described can be deployed incrementally.
          Let us assume that Node7 does not support the BGP-Prefix-SID and let
          us show how the fabric connectivity is preserved.</t>
          <t>From a signaling viewpoint, nothing would change: even though
          Node7 does not support the BGP-Prefix-SID, it does propagate the
          attribute unmodified to its neighbors.</t>
          <t>From a label allocation viewpoint, the only difference is that
          Node7 would allocate a dynamic (random) label to the prefix
          192.0.2.11/32 (e.g. 123456) instead of the "hinted" label as
          instructed by the BGP-Prefix-SID. The neighbors of Node7 adapt
          automatically as they always use the label in the BGP8277 NLRI as
          outgoing label.</t>
          <t>Node4 does understand the BGP-Prefix-SID and hence allocates the
          indexed label in the SRGB (16011) for 192.0.2.11/32.</t>
          <t>As a result, all the data-plane entries across the network would
          be unchanged except the entries at Node7 and its neighbor Node4 as
          shown in the tables below.</t>
          <t>The key point is that the end-to-end Label Switched Path (LSP) is
          preserved because the outgoing label is always derived from the
          received label within the BGP8277 NLRI. The index in the
          BGP-Prefix-SID is only used as a hint on how to allocate the local
          label (the incoming label) but never for the outgoing label.</t>
          <figure anchor="NODE7FIBINC">
<table anchor="NODE7FIBINC" align="center">
              <name>Node7 Forwarding Table</name>
            <artwork align="center" name="" type="" alt=""><![CDATA[------------------------------------------
Incoming
              <thead>
                <tr>
                  <th align="center">Incoming label     | outgoing | Outgoing or IP destination  |  label   | Interface
-------------------+----------------------
     12345         |  16011   |   10
]]></artwork>
          </figure>
          <figure anchor="NODE4FIBINC"> destination</th>
                  <th align="center">Outgoing label</th>
                  <th align="center">Outgoing interface</th>
                        </tr>
              </thead>
              <tbody>
                <tr>
                  <td align="center">12345</td>
                  <td align="center">16011</td>
                  <td align="center">10</td>
                        </tr>
              </tbody>
</table>
<table anchor="NODE4FIBINC" align="center">
              <name>Node4 Forwarding Table</name>
            <artwork align="center" name="" type="" alt=""><![CDATA[------------------------------------------
Incoming
              <thead>
                <tr>
                  <th align="center">Incoming label     | outgoing | Outgoing or IP destination  |  label   | Interface
-------------------+----------------------
     16011         |  12345   |   7
]]></artwork>
          </figure> destination</th>
                  <th align="center">Outgoing label</th>
                  <th align="center">Outgoing interface</th>
                </tr>
              </thead>
              <tbody>
                <tr>
                  <td align="center">16011</td>
                  <td align="center">12345</td>
                  <td align="center">7</td>
                </tr>
              </tbody>
</table>
          <t>The BGP-Prefix-SID can thus be deployed incrementally one node at
          a time.</t>
          <t>When deployed together with a homogeneous SRGB (same SRGB across
          the fabric), the operator incrementally enjoys the global prefix
          segment benefits as the deployment progresses through the
          fabric.</t>
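          <t>A non-normative Python sketch of this incremental-deployment
          property follows: the LSP survives the non-SR node because each
          node's outgoing label is always the label received in the BGP8277
          NLRI, while the BGP-Prefix-SID index only hints at the local
          (incoming) label. The dynamic label value is the example value used
          above.</t>
          <sourcecode type="python"><![CDATA[
# Illustrative sketch: label allocation along Node10 -> Node7 -> Node4
# when Node7 does not support the BGP-Prefix-SID.
import itertools

SRGB_BASE = 16000
dynamic_labels = itertools.count(123456)  # non-SR nodes pick any free label

def allocate_incoming(sr_capable, label_index):
    return SRGB_BASE + label_index if sr_capable else next(dynamic_labels)

nlri_label = SRGB_BASE + 11                # label advertised by Node10
for node, sr_capable in [("Node7", False), ("Node4", True)]:
    incoming = allocate_incoming(sr_capable, 11)
    print(f"{node}: incoming {incoming} -> outgoing {nlri_label}")
    nlri_label = incoming                  # advertised to the next neighbor
# Node7: incoming 123456 -> outgoing 16011
# Node4: incoming 16011 -> outgoing 123456
]]></sourcecode>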
        </section>
      </section>
      <section anchor="iBGP3107" numbered="true" toc="default">
        <name>iBGP Labeled Unicast (RFC8277)</name>
        <t>The same exact design as eBGP8277 is used with the following
        modifications:</t>
        <ul empty="true" spacing="normal">
          <li>All nodes use the same AS number.</li>
          <li>Each node peers with its neighbors via an internal BGP session
            (iBGP) with extensions defined in <xref target="RFC8277" format="default"/> (named
            "iBGP8277" throughout this document).</li>
          <li>Each node acts as a route-reflector for each of its neighbors
            and with the next-hop-self option. Next-hop-self is a well-known
            operational feature which consists of rewriting the next-hop of a
            BGP update prior to sending it to the neighbor. It is common
            practice to apply next-hop-self behavior towards iBGP peers
            for eBGP-learned routes. In the case outlined in this section, it
            is proposed to use the next-hop-self mechanism also for
            iBGP-learned routes.</li>
          <li>
            <figure anchor="IBGPFIG">
              <name>iBGP Sessions with Reflection and Next-Hop-Self</name>
              <artwork name="" type="" align="left" alt=""><![CDATA[
                               Cluster-1
                            +-----------+
                            |  Tier-1   |
                            |  +-----+  |
                            |  |NODE |  |
                            |  |  5  |  |
                 Cluster-2  |  +-----+  |  Cluster-3
                +---------+ |           | +---------+
                | Tier-2  | |           | |  Tier-2 |
                | +-----+ | |  +-----+  | | +-----+ |
                | |NODE | | |  |NODE |  | | |NODE | |
                | |  3  | | |  |  6  |  | | |  9  | |
                | +-----+ | |  +-----+  | | +-----+ |
                |         | |           | |         |
                |         | |           | |         |
                | +-----+ | |  +-----+  | | +-----+ |
                | |NODE | | |  |NODE |  | | |NODE | |
                | |  4  | | |  |  7  |  | | |  10 | |
                | +-----+ | |  +-----+  | | +-----+ |
                +---------+ |           | +---------+
                            |           |
                            |  +-----+  |
                            |  |NODE |  |
          Tier-3            |  |  8  |  |         Tier-3
      +-----+ +-----+       |  +-----+  |      +-----+ +-----+
      |NODE | |NODE |       +-----------+      |NODE | |NODE |
      |  1  | |  2  |                          | 11  | |  12 |
      +-----+ +-----+                          +-----+ +-----+
                            ]]></artwork>
            </figure>
          </li>
          <li>
            <t>For simple and efficient route propagation filtering and as
            illustrated in <xref target="IBGPFIG" format="default"/>: </t>
            <ul spacing="normal">
              <li>Node5, Node6, Node7 and Node8 use the same Cluster ID
                (Cluster-1)</li>
              <li>Node3 and Node4 use the same Cluster ID (Cluster-2)</li>
              <li>Node9 and Node10 use the same Cluster ID (Cluster-3)</li>
            </ul>
          </li>
          <li>The control-plane behavior is mostly the same as described in
            the previous section: the only difference is that the eBGP8277
            path propagation is simply replaced by an iBGP8277 path reflection
            with next-hop changed to self.</li>
          <li>The data-plane tables are exactly the same.</li>
        </ul>
      </section>
    </section>
    <section anchor="IPV6" numbered="true" toc="default">
      <name>Applying Segment Routing in the DC with IPv6 dataplane</name>
      <t>The design described in <xref target="RFC7938" format="default"/> is reused with a
      single modification. It is highlighted using the example of the
      reachability to Node11 via spine node Node5.</t>
      <t>Node5 originates 2001:DB8::5/128 with the attached BGP-Prefix-SID for
      IPv6 packets destined to segment 2001:DB8::5 (<xref target="I-D.ietf-idr-bgp-prefix-sid" format="default"/>).</t>
      <t>Node11 originates 2001:DB8::11/128 with the attached BGP-Prefix-SID
      advertising the support of the SRH for IPv6 packets destined to segment
      2001:DB8::11.</t>
      <t>The control-plane and data-plane processing of all the other nodes in
      the fabric is unchanged. Specifically, the routes to 2001:DB8::5 and
      2001:DB8::11 are installed in the FIB along the eBGP best-path to Node5
      (spine node) and Node11 (ToR node) respectively.</t>
      <t>An application on HostA which needs to send traffic to HostZ via only
      Node5 (spine node) can do so by sending IPv6 packets with a Segment
      Routing header (SRH, <xref target="I-D.ietf-6man-segment-routing-header" format="default"/>). The destination
      address and active segment are set to 2001:DB8::5. The next and last
      segment is set to 2001:DB8::11.</t>
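      <t>The following non-normative Python sketch shows the resulting SRH
      contents: the segment list is encoded in reverse order and the IPv6
      destination address carries the active segment. The dictionary layout
      is illustrative, not a packet encoding.</t>
      <sourcecode type="python"><![CDATA[
# Illustrative sketch of the SRH for HostA -> HostZ via Node5.
segments = ["2001:DB8::5", "2001:DB8::11"]     # path order: Node5, Node11

srh = {
    "segment_list": list(reversed(segments)),  # last segment listed first
    "segments_left": len(segments) - 1,        # index of the active segment
}
dst_addr = segments[0]                         # active segment: 2001:DB8::5

print(dst_addr, srh)
]]></sourcecode>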
      <t>The application must only use IPv6 addresses that have been
      advertised as capable of SRv6 segment processing (e.g. for which the
      BGP prefix segment capability has been advertised). How applications
      learn this (e.g.: centralized controller and orchestration) is outside
      the scope of this document.</t>
    </section>
    <section anchor="COMMHOSTS" numbered="true" toc="default">
      <name>Communicating path information to the host</name>
      <t>There are two general methods for communicating path information to
      the end-hosts: "proactive" and "reactive", also known as "push" and
      "pull" models. There are multiple ways to implement either of these
      methods. Here, it is noted that one way could be using a centralized
      controller: the controller either tells the hosts of the prefix-to-path
      mappings beforehand and updates them as needed (network event driven
      push), or responds to the hosts making requests for a path to a specific
      destination (host event driven pull). It is also possible to use a
      hybrid model, i.e., pushing some state from the controller in response
      to particular network events, while the host pulls other state on
      demand.</t>
      <t>It is also noted that when disseminating network-related data to the
      end-hosts, a trade-off is made to balance the amount of information
      versus the level of visibility in the network state. This applies both to push
      and pull models. In the extreme case, the host would request path
      information on every flow, and keep no local state at all. On the other
      end of the spectrum, information for every prefix in the network along
      with available paths could be pushed and continuously updated on all
      hosts.</t>
    </section>
    <section anchor="BENEFITS" numbered="true" toc="default">
      <name>Additional Benefits</name>
      <section anchor="MPLSIMPLE" numbered="true" toc="default">
        <name>MPLS Dataplane with operational simplicity</name>
        <t>As required by <xref target="RFC7938" format="default"/>, no new signaling protocol
        is introduced. The BGP-Prefix-SID is a lightweight extension to BGP
        Labeled Unicast <xref target="RFC8277" format="default"/>. It applies either to eBGP or
        iBGP based designs.</t>
        <t>Specifically, LDP and RSVP-TE are not used. These protocols would
        drastically impact the operational complexity of the Data Center and
        would not scale. This is in line with the requirements expressed in
        <xref target="RFC7938" format="default"/>.</t>
        <t>Provided the same SRGB is configured on all nodes, all nodes use
        the same MPLS label for a given IP prefix. This is simpler from an
        operation standpoint, as discussed in <xref target="SINGLESRGB" format="default"/>.</t>
      </section>
      <section anchor="MINFIB" numbered="true" toc="default">
        <name>Minimizing the FIB table</name>
        <t>The designer may decide to switch all the traffic at Tier-1 and
        Tier-2 based on MPLS, hence drastically decreasing the IP table size
        at these nodes.</t>
        <t>This is easily accomplished by encapsulating the traffic either
        directly at the host or the source ToR node by pushing the
        BGP-Prefix-SID of the destination ToR for intra-DC traffic, or the
        BGP-Prefix-SID of the border node for inter-DC or
        DC-to-outside-world traffic.</t>
      </section>
      <section anchor="EPE" numbered="true" toc="default">
        <name>Egress Peer Engineering</name>
        <t>It is straightforward to combine the design illustrated in this
        document with the Egress Peer Engineering (EPE) use-case described in
        <xref target="I-D.ietf-spring-segment-routing-central-epe" format="default"/>.</t>
        <t>In such a case, the operator is able to engineer its outbound traffic
        on a per host-flow basis, without incurring any additional state at
        intermediate points in the DC fabric.</t>
        <t>For example, the controller only needs to inject a per-flow state
        on HostA to force it to send its traffic destined to a specific
        Internet destination D via a selected border node (say Node12 in <xref target="FIGLARGE" format="default"/> instead of another border node, Node11) and a
        specific egress peer of Node12 (say peer AS 9999 of local PeerNode
        segment 9999 at Node12 instead of any other peer which provides a path
        to the destination D). Any packet matching this state at HostA would
        be encapsulated with SR segment list (label stack) {16012, 9999}.
        16012 would steer the flow through the DC fabric, leveraging any ECMP,
        along the best path to border node Node12. Once the flow gets to
        border node Node12, the active segment is 9999 (because of PHP on the
        upstream neighbor of Node12). This EPE PeerNode segment forces border
        node Node12 to forward the packet to peer AS 9999, without any IP
        lookup at the border node. There is no per-flow state for this
        engineered flow in the DC fabric. A benefit of segment routing is the
        per-flow state is only required at the source.</t>
        <t>As well as allowing full traffic-engineering control, such a design
        also offers FIB table minimization benefits as the Internet-scale FIB
        at border node Node12 is not required if all FIB lookups are avoided
        there by using EPE.</t>
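        <t>A non-normative Python sketch of the host-side policy described
        above follows; the per-flow state exists only at the source host, and
        the helper names are illustrative.</t>
        <sourcecode type="python"><![CDATA[
# Illustrative sketch: HostA's per-flow EPE policy towards destination D.
SRGB_BASE = 16000

def epe_label_stack(border_node_index, peer_node_segment):
    """Prefix-SID of the border node, then its EPE PeerNode segment."""
    return [SRGB_BASE + border_node_index, peer_node_segment]

# The only state is at the source: flows to D exit via Node12, peer AS 9999.
flow_policy = {("HostA", "D"): epe_label_stack(12, 9999)}
print(flow_policy[("HostA", "D")])   # [16012, 9999]
]]></sourcecode>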
      </section>
      <section anchor="ANYCAST" numbered="true" toc="default">
        <name>Anycast</name>
        <t>The design presented in this document preserves the availability
        and load-balancing properties of the base design presented in <xref target="I-D.ietf-spring-segment-routing" format="default"/>.</t>
        <t>For example, one could assign an anycast loopback 192.0.2.20/32 and
        associate segment index 20 to it on the border Node11 and Node12 (in
        addition to their node-specific loopbacks). Doing so, the EPE
        controller could express a default "go-to-the-Internet via any border
        node" policy as segment list {16020}. Indeed, from any host in the DC
        fabric or from any ToR node, 16020 steers the packet towards the
        border Node11 or Node12 leveraging ECMP where available along the best
        paths to these nodes.</t>
      </section>
    </section>
    <section anchor="SINGLESRGB" numbered="true" toc="default">
      <name>Preferred SRGB Allocation</name>
      <t>In the MPLS case, it is recommended to use the same SRGB on all nodes.</t>
      <t>Using a different SRGB on each node likely increases the complexity
      of the solution, both from an operational viewpoint and from a
      controller viewpoint.</t>
      <t>From an operation viewpoint, it is much simpler to have the same
      global label at every node for the same destination (the MPLS
      troubleshooting is then similar to the IPv6 troubleshooting where this
      global property is a given).</t>
      <t>From a controller viewpoint, this allows us to construct simple
      policies applicable across the fabric.</t>
      <t>Let us consider two applications A and B respectively connected to
      Node1 and Node2 (ToR nodes). A has two flows FA1 and FA2 destined to Z.
      B has two flows FB1 and FB2 destined to Z. The controller wants FA1 and
      FB1 to be load-shared across the fabric while FA2 and FB2 must be
      respectively steered via Node5 and Node8.</t>
      <t>Assuming a consistent unique SRGB across the fabric as described in
      the document, the controller can simply do it by instructing A and B to
      use {16011} respectively for FA1 and FB1 and by instructing A and B to
      use {16005 16011} and {16008 16011} respectively for FA2 and FB2.</t>
      <t>Let us assume a design where the SRGB is different at every node and
      where the SRGB of each node is advertised using the Originator SRGB TLV
      of the BGP-Prefix-SID as defined in <xref target="I-D.ietf-idr-bgp-prefix-sid" format="default"/>: SRGB of Node K starts at value
      K*1000 and the SRGB length is 1000 (e.g. Node1's SRGB is [1000,
      1999], Node2's SRGB is [2000, 2999], ...).</t>
      <t>In this case, not only would the controller need to collect and store
      all of these different SRGBs (e.g., through the Originator SRGB
      TLV of the BGP-Prefix-SID), it would also need to adapt the
      policy for each host. Indeed, the controller would instruct A to use
      {1011} for FA1 while it would have to instruct B to use {2011} for FB1
      (while with the same SRGB, both policies are the same {16011}).</t>
      <t>Even worse, the controller would instruct A to use {1005, 5011} for
      FA2 while it would instruct B to use {2008, 8011} for FB2 (while with
      the same SRGB, the second segment is the same across both policies:
      16011). When combining segments to create a policy, one needs to
      carefully update the label of each segment. This is obviously more
      error-prone, more complex and more difficult to troubleshoot.</t>
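      <t>The following non-normative Python sketch contrasts the two designs
      for the policy "via NodeV, then to Node11"; the helper names are
      illustrative.</t>
      <sourcecode type="python"><![CDATA[
# Illustrative comparison of segment-list computation with a common
# SRGB versus a per-node SRGB (Node K's SRGB starting at K*1000).
def stack_common_srgb(via, dest, base=16000):
    return [base + via, base + dest]

def stack_per_node_srgb(src, via, dest):
    # Each label must come from the SRGB of the node that processes it.
    return [src * 1000 + via, via * 1000 + dest]

assert stack_common_srgb(5, 11) == [16005, 16011]     # FA2, any source
assert stack_common_srgb(8, 11) == [16008, 16011]     # FB2, any source
assert stack_per_node_srgb(1, 5, 11) == [1005, 5011]  # FA2 from Node1
assert stack_per_node_srgb(2, 8, 11) == [2008, 8011]  # FB2 from Node2
print("segment lists verified")
]]></sourcecode>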
    </section>
    <section anchor="IANA" numbered="true" toc="default">
      <name>IANA Considerations</name>
      <t>This document does not make any IANA request.</t>
    </section>
    <section anchor="MANAGE" numbered="true" toc="default">
      <name>Manageability Considerations</name>
      <t>The design and deployment guidelines described in this document are
      based on the network design described in <xref target="RFC7938" format="default"/>.</t>
      <t>The deployment model assumed in this document is based on a single
      domain where the interconnected DCs are part of the same administrative
      domain (which, of course, is split into different autonomous systems).
      The operator has full control of the whole domain and the usual
      operational and management mechanisms and procedures are used in order
      to prevent any information related to internal prefixes and topology
      from being leaked outside the domain.</t>
      <t>As recommended in <xref target="I-D.ietf-spring-segment-routing" format="default"/>,
      the same SRGB should be allocated in all nodes in order to facilitate
      the design, deployment and operations of the domain.</t>
      <t>When EPE (<xref target="I-D.ietf-spring-segment-routing-central-epe" format="default"/>) is used (as
      explained in <xref target="EPE" format="default"/>, the same operational model is
      assumed. EPE information is originated and propagated throughout the
      domain towards an internal server and unless explicitly configured by
      the operator, no EPE information is leaked outside the domain
      boundaries.</t>
    </section>
    <section anchor="SEC" numbered="true" toc="default">
      <name>Security Considerations</name>
      <t>This document proposes to apply Segment Routing to a well-known
      scalability requirement expressed in <xref target="RFC7938" format="default"/> using the
      BGP-Prefix-SID as defined in <xref target="I-D.ietf-idr-bgp-prefix-sid" format="default"/>.</t>
      <t>It has to be noted, as described in <xref target="MANAGE" format="default"/>, that the
      design illustrated in <xref target="RFC7938" format="default"/> and in this document
      refers to a deployment model where all nodes are under the same
      administration. In this context, it is assumed that the operator doesn't
      want to leak outside of the domain any information related to internal
      prefixes and topology. The internal information includes Prefix-SID and
      EPE information. In order to prevent such leaking, the standard BGP
      mechanisms (filters) are applied on the boundary of the domain.</t>
      <t>Therefore, the solution proposed in this document does not introduce
      any additional security concerns beyond those expressed in <xref target="RFC7938" format="default"/> and <xref target="I-D.ietf-idr-bgp-prefix-sid" format="default"/>. It
      is assumed that the security and confidentiality of the prefix and
      topology information is preserved by outbound filters at each peering
      point of the domain as described in <xref target="MANAGE" format="default"/>.</t>
    </section>
    <section anchor="Acknowledgements" numbered="true" toc="default">
      <name>Acknowledgements</name>
      <t>The authors would like to thank Benjamin Black, Arjun Sreekantiah,
      Keyur Patel, Acee Lindem and Anoop Ghanwani for their comments and
      review of this document.</t>
    </section>
    <section anchor="Contributors" numbered="true" toc="default">
      <name>Contributors</name>
      <artwork name="" type="" align="left" alt=""><![CDATA[Gaya
<artwork><![CDATA[
Gaya Nagarajan
Facebook
US

Email: gaya@fb.com]]></artwork>
      <artwork name="" type="" align="left" alt=""><![CDATA[Gaurav gaya@fb.com

Gaurav Dawra
Cisco Systems
US

Email: gdawra.ietf@gmail.com]]></artwork>
      <artwork name="" type="" align="left" alt=""><![CDATA[Dmitry gdawra.ietf@gmail.com

Dmitry Afanasiev
Yandex
RU

Email: fl0w@yandex-team.ru]]></artwork>
      <artwork name="" type="" align="left" alt=""><![CDATA[Tim fl0w@yandex-team.ru

Tim Laberge
Cisco
US

Email: tlaberge@cisco.com]]></artwork>
      <artwork name="" type="" align="left" alt=""><![CDATA[Edet tlaberge@cisco.com

Edet Nkposong
Salesforce.com Inc.
US

Email: enkposong@salesforce.com]]></artwork>
      <artwork name="" type="" align="left" alt=""><![CDATA[Mohan enkposong@salesforce.com

Mohan Nanduri
Microsoft
US

Email: mnanduri@microsoft.com]]></artwork>
      <artwork name="" type="" align="left" alt=""><![CDATA[James mnanduri@microsoft.com

James Uttaro
ATT
US

Email: ju1738@att.com]]></artwork>
      <artwork name="" type="" align="left" alt=""><![CDATA[Saikat ju1738@att.com

Saikat Ray
Unaffiliated
US

Email: raysaikat@gmail.com]]></artwork>
      <artwork name="" type="" align="left" alt=""><![CDATA[Jon raysaikat@gmail.com

Jon Mitchell
Unaffiliated
US

Email: jrmitche@puck.nether.net]]></artwork> jrmitche@puck.nether.net
]]></artwork>

    </section>
  </middle>
  <back>
    <references>
      <name>References</name>
      <references>
        <name>Normative References</name>

        <reference anchor="RFC2119"
		   target="https://www.rfc-editor.org/info/rfc2119"
		   xml:base="https://xml2rfc.tools.ietf.org/public/rfc/bibxml/reference.RFC.2119.xml">
          <front>
            <title>Key words for use in RFCs to Indicate Requirement
	    Levels</title>
            <seriesInfo name="DOI" value="10.17487/RFC2119"/>
            <seriesInfo name="RFC" value="2119"/>
            <seriesInfo name="BCP" value="14"/>
            <author initials="S." surname="Bradner" fullname="S. Bradner">
              <organization/>
            </author>
            <date year="1997" month="March"/>
            <abstract>
              <t>In many standards track documents several words are used to
	      signify the requirements in the specification.  These words are
	      often capitalized. This document defines these words as they
	      should be interpreted in IETF documents.  This document
	      specifies an Internet Best Current Practices for the Internet
	      Community, and requests discussion and suggestions for improvements.</t>
            </abstract>
          </front>
        </reference>
        <reference anchor="RFC8277"
		   target="https://www.rfc-editor.org/info/rfc8277"
		   xml:base="https://xml2rfc.tools.ietf.org/public/rfc/bibxml/reference.RFC.8277.xml">
          <front>
            <title>Using BGP to Bind MPLS Labels to Address Prefixes</title>
            <seriesInfo name="DOI" value="10.17487/RFC8277"/>
            <seriesInfo name="RFC" value="8277"/>
            <author initials="E." surname="Rosen" fullname="E. Rosen">
              <organization/>
            </author>
            <date year="2017" month="October"/>
            <abstract>
              <t>This document specifies a set of procedures for using BGP to
	      advertise that a specified router has bound a specified MPLS
	      label (or a specified sequence of MPLS labels organized as a
	      contiguous part of a label stack) to a specified address prefix.
	      This can be done by sending a BGP UPDATE message whose Network
	      Layer Reachability Information field contains both the prefix
	      and the MPLS label(s) and whose Next Hop field identifies the
	      node at which said prefix is bound to said label(s).  This
	      document obsoletes RFC 3107.</t>
            </abstract>
          </front>
        </reference>
        <reference anchor="RFC4271"
		   target="https://www.rfc-editor.org/info/rfc4271"
		   xml:base="https://xml2rfc.tools.ietf.org/public/rfc/bibxml/reference.RFC.4271.xml">
          <front>
            <title>A Border Gateway Protocol 4 (BGP-4)</title>
            <seriesInfo name="DOI" value="10.17487/RFC4271"/>
            <seriesInfo name="RFC" value="4271"/>
            <author initials="Y." surname="Rekhter" fullname="Y. Rekhter" role="editor">
              <organization/>
            </author>
            <author initials="T." surname="Li" fullname="T. Li" role="editor">
              <organization/>
            </author>
            <author initials="S." surname="Hares" fullname="S. Hares" role="editor">
              <organization/>
            </author>
            <date year="2006" month="January"/>
            <abstract>
              <t>This document discusses the Border Gateway Protocol (BGP),
	      which is an inter-Autonomous System routing protocol.</t>
              <t>The primary function of a BGP speaking system is to exchange
	      network reachability information with other BGP systems.  This
	      network reachability information includes information on the
	      list of Autonomous Systems (ASes) that reachability information
	      traverses. This information is sufficient for constructing a
	      graph of AS connectivity for this reachability from which
	      routing loops may be pruned, and, at the AS level, some policy
	      decisions may be enforced.</t>
              <t>BGP-4 provides a set of mechanisms for supporting Classless
	      Inter-Domain Routing (CIDR).  These mechanisms include support
	      for advertising a set of destinations as an IP prefix, and
	      eliminating the concept of network "class" within BGP.  BGP-4
	      also introduces mechanisms that allow aggregation of routes,
	      including aggregation of AS paths.</t>
              <t>This document obsoletes RFC 1771.  [STANDARDS-TRACK]</t>
            </abstract>
          </front>
        </reference>

        <reference anchor="RFC7938"
		   target="https://www.rfc-editor.org/info/rfc7938"
		   xml:base="https://xml2rfc.tools.ietf.org/public/rfc/bibxml/reference.RFC.7938.xml">
          <front>
            <title>Use of BGP for Routing in Large-Scale Data Centers</title>
            <seriesInfo name="DOI" value="10.17487/RFC7938"/>
            <seriesInfo name="RFC" value="7938"/>
            <author initials="P." surname="Lapukhov" fullname="P. Lapukhov">
              <organization/>
            </author>
            <author initials="A." surname="Premji" fullname="A. Premji">
              <organization/>
            </author>
            <author initials="J." surname="Mitchell" fullname="J. Mitchell" role="editor">
              <organization/>
            </author>
            <date year="2016" month="August"/>
            <abstract>
              <t>Some network operators build and operate data centers that
	      support over one hundred thousand servers.  In this document,
	      such data centers are referred to as "large-scale" to
	      differentiate them from smaller infrastructures.  Environments
	      of this scale have a unique set of network requirements with an
	      emphasis on operational simplicity and network stability.  This
	      document summarizes operational experience in designing and
	      operating large-scale data centers using BGP as the only routing
	      protocol.  The intent is to report on a proven and stable
	      routing design that could be leveraged by others in the
	      industry.</t>
            </abstract>
          </front>
        </reference>
        <reference anchor="I-D.ietf-spring-segment-routing"
		   target="http://www.ietf.org/internet-drafts/draft-ietf-spring-segment-routing-15.txt">
          <front>
            <title>Segment Routing Architecture</title>
            <seriesInfo name="Internet-Draft"
			value="draft-ietf-spring-segment-routing-15"/>
            <author initials="C" surname="Filsfils" fullname="Clarence Filsfils">
              <organization/>
            </author>
            <author initials="S" surname="Previdi" fullname="Stefano Previdi">
              <organization/>
            </author>
            <author initials="L" surname="Ginsberg" fullname="Les Ginsberg">
              <organization/>
            </author>
            <author initials="B" surname="Decraene" fullname="Bruno Decraene">
              <organization/>
            </author>
            <author initials="S" surname="Litkowski" fullname="Stephane Litkowski">
              <organization/>
            </author>
            <author initials="R" surname="Shakir" fullname="Rob Shakir">
              <organization/>
            </author>
            <date month="January" day="25" year="2018"/>
            <abstract>
              <t>Segment Routing (SR) leverages the source routing paradigm.
	      A node steers a packet through an ordered list of instructions,
	      called segments.  A segment can represent any instruction,
	      topological or service-based.  A segment can have a semantic
	      local to an SR node or global within an SR domain.  SR allows a
	      flow to be enforced through any topological path while
	      maintaining per-flow state only at the ingress nodes to the SR
	      domain.
	      Segment Routing can be directly applied to the MPLS
	      architecture with no change on the forwarding plane.  A segment
	      is encoded as an MPLS label.  An ordered list of segments is
	      encoded as a stack of labels. The segment to process is on the
	      top of the stack.  Upon completion of a segment, the related
	      label is popped from the stack.  Segment Routing can be applied
	      to the IPv6 architecture, with a new type of routing header.  A
	      segment is encoded as an IPv6 address.  An ordered list of
	      segments is encoded as an ordered list of IPv6 addresses in the
	      routing header.  The active segment is indicated by the
	      Destination Address of the packet.  The next active segment is
	      indicated by a pointer in the new routing header.</t>
            </abstract>
          </front>
        </reference>

        <reference anchor="I-D.ietf-idr-bgp-prefix-sid"
		   target="http://www.ietf.org/internet-drafts/draft-ietf-idr-bgp-prefix-sid-27.txt">
          <front>
            <title>Segment Routing Prefix SID extensions for BGP</title>
            <seriesInfo name="Internet-Draft"
			value="draft-ietf-idr-bgp-prefix-sid-27"/>
            <author initials="S" surname="Previdi" fullname="Stefano Previdi">
              <organization/>
            </author>
            <author initials="C" surname="Filsfils" fullname="Clarence Filsfils">
              <organization/>
            </author>
            <author initials="A" surname="Lindem" fullname="Acee Lindem">
              <organization/>
            </author>
            <author initials="A" surname="Sreekantiah" fullname="Arjun Sreekantiah">
              <organization/>
            </author>
            <author initials="H" surname="Gredler" fullname="Hannes Gredler">
              <organization/>
            </author>
            <date month="June" day="26" year="2018"/>
            <abstract>
              <t>Segment Routing (SR) leverages the source routing paradigm.
	      A node steers a packet through an ordered list of instructions,
	      called segments.  A segment can represent any instruction,
	      topological or service-based.  The ingress node prepends an SR
	      header to a packet containing a set of segment identifiers
	      (SID).  Each SID represents a topological or a service-based
	      instruction.  Per-flow state is maintained only on the ingress
	      node of the SR domain.  An SR domain is defined as a single
	      administrative domain for global SID assignment.  This document
	      defines an optional, transitive BGP attribute for announcing BGP
	      Prefix Segment Identifiers (BGP Prefix-SID) information and the
	      specification for SR-MPLS SIDs.</t>
            </abstract>
          </front>
        </reference>
        <reference anchor="I-D.ietf-spring-segment-routing-central-epe"
		   target="http://www.ietf.org/internet-drafts/draft-ietf-spring-segment-routing-central-epe-10.txt">
          <front>
            <title>Segment Routing Centralized BGP Egress Peer
	    Engineering</title>
            <seriesInfo name="Internet-Draft"
			value="draft-ietf-spring-segment-routing-central-epe-10"/>
            <author initials="C" surname="Filsfils" fullname="Clarence Filsfils">
              <organization/>
            </author>
            <author initials="S" surname="Previdi" fullname="Stefano Previdi">
              <organization/>
            </author>
            <author initials="G" surname="Dawra" fullname="Gaurav Dawra">
              <organization/>
            </author>
            <author initials="E" surname="Aries" fullname="Ebben Aries">
              <organization/>
            </author>
            <author initials="D" surname="Afanasiev" fullname="Dmitry Afanasiev">
              <organization/>
            </author>
            <date month="December" day="21" year="2017"/>
            <abstract>
              <t>Segment Routing (SR) leverages source routing.  A node steers
	      a packet through a controlled set of instructions, called
	      segments, by prepending the packet with an SR header.  A segment
	      can represent any instruction, topological or service-based.  SR
	      allows a flow to be enforced through any topological path while
	      maintaining per-flow state only at the ingress node of the SR
	      domain.  The Segment Routing architecture can be directly
	      applied to the MPLS dataplane with no change on the forwarding
	      plane.  It requires a minor extension to the existing link-state
	      routing protocols.  This document illustrates the application of
	      Segment Routing to solve the BGP Egress Peer Engineering
	      (BGP-EPE) requirement.  The SR-based BGP-EPE solution allows a
	      centralized (Software Defined Network, SDN) controller to
	      program any egress peer policy at ingress border routers or at
	      hosts within the domain.</t>
            </abstract>
          </front>
        </reference>
      </references>

      <references>
        <name>Informative References</name>
        <reference anchor="RFC6793"
		   target="https://www.rfc-editor.org/info/rfc6793"
		   xml:base="https://xml2rfc.tools.ietf.org/public/rfc/bibxml/reference.RFC.6793.xml">
          <front>
            <title>BGP Support for Four-Octet Autonomous System (AS) Number
	    Space</title>
            <seriesInfo name="DOI" value="10.17487/RFC6793"/>
            <seriesInfo name="RFC" value="6793"/>
            <author initials="Q." surname="Vohra" fullname="Q. Vohra">
              <organization/>
            </author>
            <author initials="E." surname="Chen" fullname="E. Chen">
              <organization/>
            </author>
            <date year="2012" month="December"/>
            <abstract>
              <t>The Autonomous System number is encoded as a two-octet entity
	      in the base BGP specification.  This document describes
	      extensions to BGP to carry the Autonomous System numbers as
	      four-octet entities.  This document obsoletes RFC 4893 and
	      updates RFC 4271.</t>
            </abstract>
          </front>
        </reference>
        <reference anchor="I-D.ietf-6man-segment-routing-header"
		   target="http://www.ietf.org/internet-drafts/draft-ietf-6man-segment-routing-header-21.txt">
          <front>
            <title>IPv6 Segment Routing Header (SRH)</title>
            <seriesInfo name="Internet-Draft"
			value="draft-ietf-6man-segment-routing-header-21"/>
            <author initials="C" surname="Filsfils" fullname="Clarence Filsfils">
              <organization/>
            </author>
            <author initials="D" surname="Dukes" fullname="Darren Dukes">
              <organization/>
            </author>
            <author initials="S" surname="Previdi" fullname="Stefano Previdi">
              <organization/>
            </author>
            <author initials="J" surname="Leddy" fullname="John Leddy">
              <organization/>
            </author>
            <author initials="S" surname="Matsushima" fullname="Satoru Matsushima">
              <organization/>
            </author>
            <author initials="d" surname="daniel.voyer@bell.ca"
		    fullname="daniel.voyer@bell.ca">
              <organization/>
            </author>
            <date month="June" day="13" year="2019"/>
            <abstract>
              <t>Segment Routing can be applied to the IPv6 data plane using a
	      new type of Routing Extension Header.  This document describes
	      the Segment Routing Extension Header and how it is used by
	      Segment Routing capable nodes.</t>
            </abstract>
          </front>
        </reference>

      </references>
    </references>
  </back>
</rfc>