Reported by Bill Fenner/Xerox PARC

Minutes of the Inter-Domain Multicast Routing Working Group (IDMR)

The IDMR Working Group met twice at the 33rd IETF in Stockholm.  The first session was held on Tuesday, 18 July, and the second session on Wednesday, 19 July.

First Session

The first session was shorter than expected due to some agenda juggling.  Tony Ballardie, IDMR co-chair, mentioned the new IDMR Working Group URL:

    http://www.cs.ucl.ac.uk/ietf/idmr

Ken Carlberg's Presentation

Ken Carlberg, of SAIC, provided a further report on Harris's simulation efforts.  Harris is simulating the performance of CBT and PIM for a Distributed Interactive Simulation (DIS -- think ``interactive video gaming'') environment.  (Steve Batsell gave a presentation at Danvers on DIS requirements.)  Ken said that Harris has put out a technical report describing their simulation and results, but it is not yet publicly available.  Contact Bibb Cain or Steve Batsell for availability of the report.

The results Ken presented used two different topologies: the AAI topology (which is basically a single loop), and an artificial mesh topology with an average node degree of 3.  Several measurements were simulated:

   o End-to-end delay using PIM-sparse and source-based trees was 15% (AAI) to 30% (mesh) lower than with CBT.

   o Bandwidth efficiency (protocol overhead) was approximately equal for PIM-sparse, using both shared and source-based trees, and CBT.  PIM-dense protocol overhead was much higher.

   o PIM-sparse with unicast to the RP on the AAI has end-to-end delays that are 60% higher than with CBT, and uses 50% more bandwidth.

   o Routing table size was significantly smaller using CBT than PIM.  With 20,000 groups, 200 senders to each group, and 50% group membership, PIM-sparse with source-based trees uses 3,010,000 routing table entries, but CBT uses only 10,000.

Based upon these simulation results, Harris recommends:

   o CBT should be used to support DIS applications.

   o PIM should be modified so that the decision between source-based tree mode and shared-tree mode is made by the group initiator rather than by the receivers.  Classifying a group for its whole lifetime reduces the complexity of switching trees and can reduce routing table size.
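As a rough illustration of the routing-state result above (a simplified model, not Harris's exact accounting -- their report has the real numbers), CBT keeps one shared-tree entry per joined group, while PIM with source-based trees keeps one entry per sender per joined group:

    # Back-of-the-envelope model; the parameters come from the simulation
    # description above, everything else is simplified.
    groups = 20000
    senders_per_group = 200
    membership = 0.5                    # fraction of groups a router joins

    joined = int(groups * membership)   # 10,000 groups with local state
    cbt_entries = joined                # CBT: one (*,G) entry per group
    pim_entries = joined * senders_per_group  # PIM: one (S,G) per sender

    print(cbt_entries)   # 10,000 -- matches the CBT figure reported
    print(pim_entries)   # 2,000,000 -- same order as the 3,010,000 reported

This simple model lands within a factor of two of the simulated PIM figure; the remaining difference presumably comes from state the model ignores, such as shared-tree entries.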
Eric Crawley's Presentation

The second presenter was Eric Crawley, from Bay Networks.  Eric talked about experience gained from implementing CBT in a router.

Eric suggested a new mode for CBT, called ``native mode.''  When all the multicast routers on a subnet use CBT, there is no need to encapsulate traffic with the CBT header, thus saving extra traffic on a shared subnet.  The DR Selection algorithm increases join times; it can be optimized in the case where a router knows that it is the only CBT router on the network, but it would be nice to find a faster algorithm for the general case.  CBT also takes less room in forwarding tables, since there is only one entry per group instead of one per source and group.  The tree-setup phase of CBT also means that resources can be allocated while building the tree; if resource reservation fails, alternate tree paths may be tried without modifications to the core protocol.  Eric plans to collect data comparing the performance of DVMRP and CBT on the same platform.

Ross Finlayson's Presentation

The final presenter was Ross Finlayson, who talked about automatic multicast tunnelling through a firewall.  Ross implemented a system that uses `sd', the session directory, to identify candidate groups to be relayed across the firewall.  The portion inside the firewall periodically sends a `ping' to each candidate group, with a TTL high enough to reach all internal nodes.  If it gets a response to its `ping', there is an internal member, and it notifies the portion outside the firewall to start forwarding traffic.  The firewall uses a set of application-level gateways to forward traffic, so that other data on the same multicast address cannot get through.  The decision was made to trust the MBONE tools (`vat', `wb', `sd', etc.), as no commands are ever transmitted directly, just multimedia data.
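A minimal sketch of that membership probe, assuming members answer with a unicast reply (the group, port, TTL, and payload here are all hypothetical; the real relay probes candidate groups learned from `sd' with its own message format):

    import socket
    import struct

    GROUP, PORT = "239.1.2.3", 12345   # hypothetical candidate group
    INTERNAL_TTL = 16                  # reaches all internal nodes, no further

    def has_internal_member(timeout=5.0):
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL,
                     struct.pack('b', INTERNAL_TTL))
        s.settimeout(timeout)
        s.sendto(b"ping", (GROUP, PORT))   # scoped probe onto the group
        try:
            s.recvfrom(1024)               # any reply implies an internal member
            return True
        except socket.timeout:
            return False

    # On True, the inside half tells the outside half to start forwarding.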
Second Session

Bill Fenner's Presentation

Bill Fenner from Xerox PARC spoke about Hierarchical Multicast Routing for the MBONE.  The MBONE is growing at an extreme rate, and it is still using a single instance of DVMRP, a RIP-like protocol that was designed to handle at most a couple of hundred routes.  It is currently handling thousands of routes, and the routing tables continue to grow.  Moving to hierarchical routing will solve many problems: it isolates routing problems, such as route instability, to the domain in which they occur, and it allows the testing of new multicast routing protocols without interfering with global multicast routing.

To ease the transition to hierarchical routing, the requirements for Level 1 (intra-domain) routers are kept small:

   o Participate in a Domain-Wide Report protocol, similar to IGMP.  The DWR protocol allows domain membership to be known by the Level 2 (inter-domain) routers.

   o Supply the Level 2 border routers with a list of address prefixes describing the networks inside the domain.  This allows the Level 2 routers to determine whether a multicast packet is from an internal source or is transit from another domain (see the sketch after this list).

   o Deliver copies of all internally-generated traffic with sufficient scope to all Level 2 border routers.

   o Accept and propagate a ``default'' multicast route.
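A minimal sketch of the internal-vs-transit check implied by the second requirement (the prefixes and the function name are hypothetical; how a real border router stores and matches prefixes is an implementation matter):

    import ipaddress

    # Hypothetical prefix list a Level 1 domain hands its border routers.
    internal = [ipaddress.ip_network(p)
                for p in ("192.0.2.0/24", "198.51.100.0/23")]

    def is_internal_source(src):
        """True if the packet's source lies inside this domain."""
        addr = ipaddress.ip_address(src)
        return any(addr in net for net in internal)

    print(is_internal_source("192.0.2.7"))    # True: inject at Level 2
    print(is_internal_source("203.0.113.9"))  # False: transit traffic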
The initial Level 2 (inter-domain) protocol will be a modification of DVMRP.  In hierarchical DVMRP, each domain is assigned an identifier, and packets are routed at Level 2 on their domain identifier.  Level 2 routers use a special all-border-routers multicast group in the Level 1 multicast domain for transit traffic, and to allow injection of packets from each border router.  This can cause duplicate traffic, as the encapsulated and non-encapsulated forms of a packet may traverse the same link.  However, it is expected that proper placement of domain boundaries can solve this problem, and that a domain will generally be either transit (in which case the traffic is solely encapsulated) or leaf (in which case the traffic is solely un-encapsulated).

A paper by Ajit Thyagarajan and Steve Deering that describes hierarchical DVMRP in more detail is available from:

    ftp://ftp.parc.xerox.com/pub/net-research/mbone/hierarchical-dvmrp.ps.Z

When PIM is used in a Level 1 domain, a subset of the border routers is designated as RPs for groups with no global RP.  When there is an internal source, it sends a Register to one of these border routers, which informs the other border routers, and they join the distribution tree for the group.  This allows the injection of internal traffic to all L2 border routers.  For incoming traffic, the internal receivers join toward the same RP, which is adjacent to the L2 border router, and data flows down the (*,G) tree.

With PIM at Level 2, the topology actually appears to be a single large PIM domain, using the hierarchy inherent in unicast routing.  This is because it is expected that the hierarchy boundaries will be the same for unicast and multicast routing, and since the routers already have the unicast routing table, there is no need to add another one paralleling it.

Note that some of this functionality will change as the Level 2 protocol evolves (e.g., changing from DVMRP to PIM-sparse).  Changes can be isolated to multicast border routers (MBRs), placed at the L1/L2 boundary.  (In fact, MBRs will probably be implemented on the same machine as the L2 router, but that is not a requirement.)  It is unclear whether a single L1<>L2 interaction can be defined.  It may be that four will suffice (Dense<>Dense, Dense<>Sparse, Sparse<>Sparse, Sparse<>Dense), but it is not yet clear whether that is either necessary or sufficient.  Further research is required on this interaction.

Tony Ballardie's Presentation

Tony Ballardie from UCL then spoke on hierarchical routing as it applies to CBT.  When using CBT as a Level 2 protocol, the Level 2 border routers must have a way to choose or discover the Level 2 cores for a group.  This may be via domain-wide reports or some other mechanism.  The Level 2 tree must be built before data can be forwarded; this may result in some data loss, depending on the size of queues and the speed of creating the tree branch.  This is considered acceptable, since it is better than flooding the data everywhere.

When the Level 1 protocol is CBT, the injecting border router unicasts a CBT-encapsulated copy of the packet to an internal core if there are internal members.  The address of the internal core is included in the domain-wide report.  All groups have internal cores, whether they are inter-domain or intra-domain.  In addition, if the packet must transit this domain, the border router sends it on the all-border-routers group, using the Level 2 DVMRP encapsulation.  For sources within a Level 1 CBT cloud, the ``master core'' forwards the packet to the core of the Level 2 all-border-routers group.  There was some discussion as to how the intra-domain core knows what kind of encapsulation a packet should use; it was resolved that this issue needs more investigation, as it is unwise to require all internal cores to have knowledge of the Level 2 protocol.

Deborah Estrin's Presentation

Deborah Estrin presented a PIM specification update over the MBONE from Los Angeles.  The MBONE quality from LA to Stockholm was extremely impressive.  Recent changes in the PIM specification include:

   o Separation of the document into individual Sparse mode and Dense mode documents.

   o Major editorial changes and updated figures.

   o New RP mechanisms integrated.

   o Changes in packet formats.

The new RP mechanism was described at the Danvers IETF and is now incorporated into the specification.  There were several new messages (Register-Ack, Candidate-RP-Advertisement, and Poll), changed messages (Join/Prune, Register, RP-Reachability), and obsolete messages (Register-Stop).  The updated specification will be submitted as an Internet-Draft; until then, an interim version is available from:

    ftp://catarina.usc.edu/pub/pim/PIM-SM.ps

Also see PIM-SM.diffs in the same directory for details of the specification changes.

Open Discussion

An open discussion followed on how the IDMR Working Group should advance.  It is clear that the working group charter needs to be revised, as all of the milestones listed have passed.  It was suggested that PIM and CBT should be submitted as Experimental RFCs.  Hierarchical routing looks promising as a way to solve some of the routing problems, but it is still too early to know what problems will crop up with it (e.g., will the L1<>L2 interactions end up being an n-squared problem?).
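As a rough illustration of that concern (the protocol list below is hypothetical): if every pair of Level 1 and Level 2 protocols needs its own interaction definition, the count grows quadratically with the number of protocols, whereas pairing by mode caps it at the four combinations mentioned above:

    # Illustrative count of ordered L1<>L2 interaction definitions.
    protocols = ["DVMRP", "PIM-dense", "PIM-sparse", "CBT"]  # hypothetical
    modes = ["Dense", "Sparse"]

    print(len(protocols) ** 2)  # 16: one definition per protocol pair
    print(len(modes) ** 2)      # 4: Dense<>Dense, Dense<>Sparse,
                                #    Sparse<>Sparse, Sparse<>Dense

Whether mode-level pairing is in fact necessary or sufficient is exactly the open question noted above.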