Minutes of the Policy Framework WG Meeting, 44th IETF (Minneapolis, MN)
Thursday, March 18, 1999

Minutes reported by John Strassner and Ed Ellesson (thanks also to Andrea Westerinen for submitting her notes for inclusion).

Agenda (first session):
--------------------------------
1. Introduction and agenda bashing       Ed Ellesson
2. Core Schema update                    Bob Moore
3. Directory containment proposal        Lee Rafalow
4. Proposal for simplified policy rules  Raju Rajan
5. PolicyValidityPeriod                  Silvano Gai
6. Policy Execution Engine               David Black
7. Roles                                 Andrea Westerinen, Silvano Gai

Bob Moore - Core Schema Update
-------------------------------------------------
Bob Moore summarized what has been agreed on in the schema. This represents work from both the DMTF and the IETF; the document is under IETF revision control. Bob started with a picture of the IETF policy model, which is a simplification of the policy model being developed in the DMTF (shown in Bob's attached presentation). It is part of CIM. There are four main objects. PolicyGroup provides administrative grouping of policies. PolicyRule contains the semantics of a policy; namely, a policy consists of a set of conditions that, if satisfied, enable a corresponding set of actions to execute. PolicyCondition and PolicyAction are base classes for defining conditions and actions for a policy rule. By base class, we mean that these represent general concepts only; application-specific uses (e.g., IPSec, DHCP) are expected to subclass them to add their own semantics. The biggest challenge in defining an LDAP schema was how to represent associations and aggregations in LDAP. Recall that associations and aggregations define different types of relationships between objects, and LDAP is not able to represent such relationships directly.
Therefore, we have to map each relationship and its semantics to something that can be represented in LDAP (e.g., auxiliary classes, pairs of attributes, DIT containment, etc.). Bob then produced a diagram showing the mapping we have defined to go from CIM to LDAP. In the center of the picture is the policyRule class, a structural class. In the upper left is the policyGroup class. We have introduced two auxiliary classes, policyGroupContainmentAuxClass and policyRuleContainmentAuxClass; these auxiliary classes realize a set of associations that are in the CIM model. From the policyRule class, there are DN pointers to the policyCondition and policyAction classes. There is also an additional DN pointer to the policyTimePeriodCondition class, which is a special type of policy condition. VendorPolicyCondition and VendorPolicyAction provide a standard escape mechanism for vendor-specific extensions to the core policy schema (e.g., for representing information that has not been modeled with specific attributes or relationships). There are three types of relationships in this picture. Blue represents LDAP inheritance. Red is a DN pointer to a structural class. Green is a special type of DN pointer. There are two instances of this, policyRuleConditionList and policyRuleActionList, each with a specialized string syntax: policyRuleConditionList entries have the format n:+:DN or n:-:DN, and policyRuleActionList entries have the format n:DN. Looking at policyRuleActionList, we use this syntax to capture ordering. This is necessary because LDAP does not provide ordered return of results, so we use the "n" portion of the format to indicate the relative position of the policyRuleAction with respect to the other policyRuleActions in the list. The policyRuleConditionList format is similar, except that here the group number defines how to group the policyRuleConditions according to CNF or DNF, and the "+" or "-" says not negated or negated.
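The ordering and grouping semantics above can be sketched in Python. This is an illustrative reading of the n:DN and n:+:DN / n:-:DN syntaxes described in the minutes, not code from the schema work itself; the DNs, helper names, and the DNF-only evaluator are invented for the example.

```python
# Sketch (hypothetical names): parsing the specialized string syntaxes and
# evaluating a DNF grouping of conditions.

def parse_action_list(values):
    """Parse 'n:DN' entries; return DNs sorted by their order number n,
    since LDAP itself does not return attribute values in order."""
    parsed = [v.split(":", 1) for v in values]
    return [dn for _, dn in sorted(parsed, key=lambda p: int(p[0]))]

def parse_condition_list(values):
    """Parse 'n:+:DN' / 'n:-:DN' entries into (groupNum, negated, DN)."""
    out = []
    for v in values:
        group, sign, dn = v.split(":", 2)
        out.append((int(group), sign == "-", dn))
    return out

def evaluate_dnf(entries, results):
    """DNF: OR across groups, AND within a group; '-' negates a term.
    `results` maps a condition DN to its boolean evaluation."""
    groups = {}
    for group, negated, dn in entries:
        term = not results[dn] if negated else results[dn]
        groups.setdefault(group, []).append(term)
    return any(all(terms) for terms in groups.values())

actions = parse_action_list(["2:cn=markDSCP,o=x", "1:cn=police,o=x"])
print(actions)  # -> ['cn=police,o=x', 'cn=markDSCP,o=x'], ordered by n

conds = parse_condition_list(
    ["1:+:cn=src,o=x", "1:-:cn=time,o=x", "2:+:cn=user,o=x"])
print(evaluate_dnf(conds, {"cn=src,o=x": True,
                           "cn=time,o=x": False,
                           "cn=user,o=x": False}))  # -> True (group 1 holds)
```

Note how the "-" sign gives NOT for free inside a group, which is the efficiency point made next.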
This is a more efficient representation for including the NOT operator than having a separate set of classes to represent NOT. Please refer to the draft for additional comments.

Lee Rafalow - Directory Containment Proposal
----------------------------------------------------------------
Where are the policy class instances coming from? The problem is that there is no defined structure for the DIT. There are two fundamentally different ways to organize policy: arbitrarily (which we won't consider) and by using a methodology to structure the policy. Three common examples of the latter are structuring by the administrator, by using a tool, and by using specific classes and entries to shape the DIT. The key is to find a starting point to look for our policy instances. References can be through jurisdiction and through location. Jurisdictional aspect: what is the set of policies that apply to a given PDP? Locational aspect: what are the policies that apply to a given "location"? These can be found by DN, by a special attribute, or by other means. The key point is that if this is not defined, we will have serious interoperability and possibly performance problems. To retrieve policies, there are two basic approaches: traverse the DIT, following DN pointers and collecting entries one at a time, or retrieve all entries and put them in a local store. How do you retrieve policies efficiently? We would like to introduce a policy container (similar to the linkedContainer of DEN). To do a bulk retrieval, the container could contain policies directly and/or the container could retrieve all objects from each policyLocation attribute (see Lee's picture). At runtime it is not deterministic how many LDAP calls will be made, but the advantage of this method is that you reduce the number of retrievals from the total number of conditions, actions, and rules to a much smaller number (in Lee's example, to 3 instead of 50, or 100, or larger).
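The trade-off Lee described can be caricatured with a toy in-memory "directory": chasing DN pointers costs one search per entry, while a container naming a few locations needs only one bulk (subtree) search per location. The dict, DNs, and call counts here are all hypothetical, purely to make the scaling point concrete.

```python
# Toy model (hypothetical data): a dict stands in for an LDAP server,
# mapping each entry's DN to the list of DNs it points at.
directory = {
    "cn=rule1": ["cn=cond1", "cn=act1"],
    "cn=cond1": [], "cn=act1": [],
    "cn=rule2": ["cn=cond2", "cn=act1"],
    "cn=cond2": [],
}

def fetch_by_traversal(roots):
    """One simulated LDAP read per entry, following DN pointers."""
    calls, seen, todo = 0, set(), list(roots)
    while todo:
        dn = todo.pop()
        if dn in seen:
            continue
        seen.add(dn)
        calls += 1                      # one search per entry
        todo.extend(directory[dn])
    return calls

def fetch_by_container(container_locations):
    """One simulated subtree search per location listed in the container."""
    return len(container_locations)

print(fetch_by_traversal(["cn=rule1", "cn=rule2"]))  # -> 5, grows with entries
print(fetch_by_container(["ou=policies"]))           # -> 1, stays small
```

With hundreds of conditions and actions, the first count keeps growing while the second stays proportional to the handful of locations the container names, which is the "3 instead of 50 or 100" observation.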
There was some semantic confusion over whether or not this is "bulk" retrieval; Lee will add better wording in the formal submission. There were concerns over access control: if the policies are spread over the DIT, then behavior may be somewhat dependent on the particular directory implementation and how it handles access control on objects (e.g., some directories inherit access control mechanisms, others don't). We may want to investigate the use of aliases, and their use of suffixes for subtree retrieval. Discussion on the list is encouraged. Consensus was that directory containment search optimization was required. Skip Booth had several comments from the perspective of IPSec policy, so Skip and Lee will work together to harmonize their approaches and send the result to the list.

Raju Rajan - Suggestions for Simplifying the PolicyRule Structure
----------------------------------------------------------------------
We can have a PolicyGroup point to multiple PolicyRules, where each PolicyRule points to multiple PolicyConditions, PolicyActions, and PolicyTimePeriodConditions. Note that the PolicyActions are ordered, and that the PolicyConditions are combined to yield a single boolean expression. There was concern that this gets very complicated; an IPSec example was used. Advantages of complex rules: they are flexible (classes may be subclassed for future extensions); PolicyConditions can be mixed and combined using CNF/DNF; and condition and action objects can be instantiated and reused, for space and query efficiency. So you might have PolicyRule1 that uses UserCondition1, IPSecAction1, and DiffServAction1, and PolicyRule2 that uses UserCondition2, DiffServAction1, and ApplicationCondition2. This enables us to reuse conditions and actions as well as define new conditions and actions on a rule-by-rule basis. Now, how can having more objects lead to space and query efficiency? Because as the set of rules, conditions, and actions grows, you start reusing more and more of each of these objects.
And you can structure common queries to retrieve common sets of rules. Simple policy rules: a single object incorporating a simple set of conditions and actions. This provides simpler parsing and correctness checking; there is no fancy recombination of actions using arbitrary boolean expressions, and the query can then be optimized for this. So the conclusion is that we need to provide for both, since the choice depends on the usage environment (number of PolicyRules, need for reuse of condition and action objects, and need for query optimization). Raju's point is that the current document subclasses simple rules from complex rules, which he argued is plain wrong. (Note: objections from several people besides the current authors.) Option 1: no need for the simple rule class. Option 2: change the inheritance hierarchy, as follows:

    Top -> PolicyRule -> SimplePolicyRule
                      -> ComplexPolicyRule

Now, the problem is, must conditions and actions be defined twice, once for simple and once for complex policy rules? The proposal avoids this by using auxiliary classes, which enables them to be attached either to PolicyCondition or to the SimplePolicyRule class. Two questions are on the table. First, do we need a SimplePolicyRule object? Second, if we do, how do we implement it? Option 1 is not viable: the DHC working group is already using the proposal in the current draft, and we must avoid instantiating multiple objects. Option 2 limits flexibility. There was concern over how we can combine the ordered actions of multiple PolicyRules in a PolicyGroup. Are we trying to define a schema or an LDAP optimization? More people are in favor of investigating the simplePolicyCondition class, so we should take this to the list.

Silvano Gai - PolicyValidityPeriod
----------------------------------------------
There are three types of conditions: transitions, states, and packet conditions. Transitions are instantaneous events in time. States represent a period of time wherein a set of conditions are satisfied.
Packet conditions are particular types of conditions that apply to filtering packets. There are two types of actions: simple actions and download (e.g., install) actions. States and transitions: when you write a condition, you must be very clear (for interoperability reasons) about whether the condition represents a state or a transition. By definition, no two instantaneous transitions can intersect; this is not true for states. Silvano's diagram shows three states (before the first transition, between the two transitions, and after the second transition). The fact that it is 4 PM or 5 PM, the fact that the user John logs on or off, or the fact that a network became congested is a transition. The point is that saying "IF time == 4 PM AND DSCP == 7" is meaningless, whereas "IF time BETWEEN 4 PM AND 5 PM, AND DSCP == 7" makes perfect sense. As another example, the fact that it is between 4 PM and 5 PM, or that John is logged on, is a state. We need to be very clear when we write a policy condition to ensure that we differentiate between transitions and states. The next major point is that there is no way you will be able to evaluate a packet condition in the PDP; you will need to define and evaluate it in the PEP. This is defined in an RFC: the PDP does not have the mechanisms to test this, the PEP does. Why differentiate between state and transition? Because most people assumed that the PDP would evaluate all conditions, and because the draft implies that the PDP is the only place where conditions can be evaluated. Packet conditions are evaluated when the packet arrives; you need to evaluate the packet condition in the forwarding plane of the device. If the PDP and the PEP are physically separated, obviously you won't make a call to a separate device on a per-packet basis. Silvano is NOT trying to modify the core schema, but in looking forward to the design of the QoS Schema, we need to include these concepts.
What Silvano is getting at is this: if we admit that there are different types of conditions, then how can we ensure interoperability? If the condition is kept opaque, then it is very hard to do conflict detection. (Refer to Silvano's "Legal combinations" slide.) If you try to AND two conditions that are both transitions, it doesn't make sense (e.g., "if it is 4 PM and if it is 5 PM"), because two transitions can never happen at the same time. The point is that we need more than just generic CNF/DNF expression of policy; we need to include the different types of conditions, and ensure that only legal combinations of them are allowed to be expressed in the PolicyRule. (Refer to Silvano's "Examples" slide.) He has defined the following policies:

    if (4 PM <= time <= 5 PM) && (IPSA = 13.19.1.1) then DSCP = 5

    if time == 4 PM then install(MFC(IPSA == 13.19.1.1), Action(DSCP = 5))
    if time == 5 PM then deinstall(MFC(IPSA == 13.19.1.1), Action(DSCP = 5))

Note that the second and third policies seem to be equivalent to the first, but in fact they aren't. This is made obvious by considering the case where you boot at 4:30: the second rule doesn't fire. Conclusion: if we want to write a QoS schema and ensure that we have interoperable policies, then we should consider this taxonomy. Most importantly, actions cannot be opaque, because then conflict detection is impossible. Do you see policy as a way of programming the network, or is it more restricted? Raju thinks that we should step up a level, and consider policy with respect to how an administrator will apply it. Also, this seems to be the start of a document that presents a set of guidelines to the QoS schema developer. One difference: if the condition is carried inside the action, the PDP can decide where the condition should be evaluated. So the question is, should we use the schema to define where conditions are evaluated?
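Silvano's boot-at-4:30 example can be sketched in a few lines. The semantics here are assumed for illustration only: a state-based rule is re-evaluated whenever the policy system consults it, while a transition rule fires only at the exact instant of its event, so a device that was down at 4 PM never installs the filter.

```python
# Hypothetical sketch of state vs. transition semantics (hours as numbers,
# 16 == 4 PM, 17 == 5 PM).

def state_rule(now_hour):
    # if (4 PM <= time <= 5 PM) then DSCP = 5 -- true across the interval
    return 5 if 16 <= now_hour <= 17 else None

installed = {}

def transition_rule(event_hour):
    # fires only at the instant of the transition
    if event_hour == 16:
        installed["filter"] = 5        # install(MFC(...), Action(DSCP = 5))
    if event_hour == 17:
        installed.pop("filter", None)  # deinstall(...)

# Device boots at 4:30 PM: the state rule still yields DSCP 5...
print(state_rule(16.5))               # -> 5
# ...but the 4 PM transition has already passed, so nothing was installed.
transition_rule(16.5)
print(installed.get("filter"))        # -> None: the install rule never fired
```

The two formulations only agree for a device that was up and listening at exactly 4 PM, which is why the minutes call them non-equivalent.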
Summary of a long discussion on this entire topic:
- How to characterize different types of conditions for purposes of conflict detection?
  a. Explicitly within the schema
  b. Via recommendations to schema developers
- This needs to be considered within the context of a worked example (QoS) - to be supplied.
- Should we allow IF conditions within the ACTION part of a rule? Needs more discussion on the list.

David Black - Execution Engine
--------------------------------------------
Networks are fundamentally non-deterministic, so policy should be extra careful not to make things worse. A simple example: Engineers get bronze service, John gets gold service, and John is an engineer - which service does he get? Furthermore, when and where does he get it? This might change at 3 AM for no good reason (e.g., some other piece of policy was reloaded). It is important to strive to write consistent, verifiable policies; otherwise, it will be very difficult to debug. The goal is that there should be one, and only one, correct rule execution sequence for non-mandatory rules. However, complications exist in trying to represent dynamic state and non-mandatory rules. With dynamic state, the condition might change during its evaluation (e.g., the answer to "has this meter seen more than 10,000 packets?" depends on when you ask). Removing possible gotchas due to dynamic state requires additional support; one possibility is the invocation-scoped variable binding mechanism ([ed]: this is discussed at the end of David's talk; I strongly recommend reviewing David's talk for this). How many rules do you allow to fire at once? One, because as soon as we get into concurrent rule firing, we must bite off cross-rule action ordering, and this is a mess, so just say no. There is a resulting ordering requirement: the action sets should be serializable (see the database literature for the definition of serializable).
But what happens if you have a large number of PEPs and you are evaluating a packet condition (e.g., billing)? There are some conflicts that we can't detect at the PEP level. Beyond that, we need some way to ensure consistent execution. If one invocation fires multiple rules, then the rules that are fired must be ordered (note: the intended meaning of "ordered" here is serializable action sets). The next issue is how to select a rule to execute. Rule priority might not be enough; you need one global decision criterion that is applied the same way everywhere. So use DN structure in the DIT: the DN structure of policy must not change across multiple representations of policy across all repositories in use. Note that this is probably necessary in any case to cope with the nonstandard n:DN, n:+:DN, and n:-:DN syntax in the schema. A problem exists in the ability to specify cross-policy ordering independent of intra-policy ordering (so authors of policy pieces don't have to grapple with global ordering issues). This can be conceptualized as combining policies. To do this, policy group priority is proposed (again - it was noted that this was originally proposed and killed by the DMTF). [ed: Subsequent to the meeting, it has been observed on the DMTF list that policy groups are too general a structure to solve this problem -- for group policy to work, one needs to have a unique hierarchy, and policy groups are more general than that.] Problem: how do we know when to end? One answer is when there are no more rules to fire. But the problem with this approach is that it fires conflicting rules, and if we're not careful, the lowest-priority rule will win, because it is last to fire. Yuck! Therefore, we should add a standard action, DONE WITH GROUP, to prevent firing conflicting rules. We should relate this more to how a device is built (e.g., relate the download of actions to fire to what the silicon can do).
Note that DONE WITH GROUP explicitly names the group that is DONE. How does the DONE semantic deal with multiple conflicts across multiple groups (e.g., conflicts in QoS and conflicts in security)? The short answer is that it is not intended to solve that problem; DONE is aimed at local conflict detection problems like the "when does John get Gold Service?" problem. What about ordering combined with exclusive execution priority (e.g., if you have two rules that conflict, give them comparable rule priorities but exclusive use priorities, so that only one is chosen)? This is part of the global conflict detection level. And it isn't even that simple: you might have conflicts that arise because the two groups interact. What happens if an action in a rule fails? There are infinite levels of complexity hiding in this; you probably need to analyze all of the possible failures and handle them in a script. There is a level of complexity of error cases and recovery actions that is better dealt with by rolling all the actions and error logic into a script, rather than cluttering up the schema with complicated error-case annotations to actions. After a long discussion, it was decided that this entire subject needed more discussion on the list. Specifically:
- Conflict detection
- Policy group priority may be needed
- Introduction of a "Done" action

Roles - Andrea Westerinen, Silvano Gai
-------------------------------------------------------
What is a role? At the device level, it is an attribute of an interface. An interface can have many roles (e.g., "edge" and "frame relay"). Roles are defined by the device (e.g., pre-configured by the vendor) and/or by the administrator. A role provides a convenient way to administer interfaces, as well as to associate important data for downloading from the directory for device configuration. How is a role modeled? We use a collection.
The name of the collection is the role label, and the interface is placed into the named collection using MIB or other data. The collection object aggregates interfaces, and their policy rules and groups. Named collections are a more generic concept than roles: they provide the notion of explicit membership, as well as other semantics. They are a class, and are therefore referenced using DNs. Policy jurisdiction and scope are provided by the PolicyRuleContainmentAuxClass and the PolicyGroupContainmentAuxClass. These contain DN references to PolicyRules and PolicyGroups, respectively; the PolicyRules and PolicyGroups apply to the object that includes the aux class. What about the case where you assign Gold service to one queue but then move to another queue? This is a different qualitative behavior, so we need to tie this back to the administrator and come up with a way that this can be signaled to all other affected elements, put in either by the PDP or via an instrumentation provider (e.g., as in CIM). You can in theory have combined roles, but this is the topic of future design discussions. Future work: how to do a join, and how to persist it. When we combine roles, we must decide that the associated semantics are all or nothing; if they aren't, then we get unpredictable and probably incorrect behavior. Future work will come after our current version of the core schema (we need to get experience first). Any representation of role combinations, for the time being, will be done manually.

Agenda (second session):
------------------------------------
1. Introduction and agenda bashing   John Strassner
2. Good News - Convergence!          Bob Moore
3. The Policy Information Base       Keith McCloghrie
4. Architecture comments             David Blight
5. Architecture comments             Evan McGinnis
6. Security considerations           Russ Mundy
7. Wrap-up and next steps            John and Ed

Good News - Convergence!
- Bob Moore
--------------------------------------------------------
The two major architecture proposals, one from Cisco-Intel-IPHighway-HP-Nortel and one from IBM-Microsoft-Nortel, have been merged, and a new draft will be written from these drafts as well as other input from the list and the last interim meeting. The editor for the draft will be Glenn Waters. The new architecture looks as follows:

          Policy prescriptions
                 |
                 V
   +----------------------------+
   |           Policy           |
   |       Management Tool      |
   +-------------+--------------+
                 +
                 + ++++++++++
                 +          +
   +-------------+------+   +
   |  Policy Repository |   +
   | (Directory Server, |   +   <-- Policy
   |  Database, etc. )  |   +       rules
   +-------------+------+   +
                 +          +
   +-------------+------+   +
   |   Policy Decision  |   +
   |     Point (PDP)    |   +
   +-------------+------+   +
                 |          +
                 |  <--- Policy Protocol for policy mechanisms
                 |          +
   +-------------+----------+--+  <+++ Repository Access Protocol
   |     Policy Enforcement    |       (e.g., LDAP) - REQUIRED
   |        Point (PEP)        |
   +---------------------------+

The PIB - Keith McCloghrie
--------------------------------------
Provisioning needs an extensible name space (e.g., like SNMP's OIDs). We think that this might need to accommodate a large set of objects, and it definitely must be carried over a protocol, such as COPS. However, it should be pointed out that the PIB is independent of COPS. The PIB is a policy-based abstraction, joining the capabilities described in the MIB with the higher-level business rules defined in (for example) a QoS Schema; the PIB joins these two different, yet related, sets of information. The PIB is a special type of MIB, and can in fact be algorithmically transformed into a MIB. However, not all of the SMI is needed for the PIB. Keith offered the following picture:

        SLA
         |
         V
   Core Schema (realizes SLA)
         |
         V
   QoS Schema
         |
         V
    QoS PIB
         ^
         |
        MIB
         ^
         |
        PEP

In the above diagram, the arrows do NOT represent the direction of data flow, but rather the direction in which PIB/schema definitions are derived.
For example, COPS will download entire rows at a time. We also assume that when policy is in operation, other functions, such as other SNMP agents, will be disabled. Therefore we don't need part of the SMI (e.g., scalars and RowStatus). This is a Good Thing - it's closer to a schema, in that we deal just with objects. Recommendation: before we go forward, we should add the QoS version of a PIB into our working group. But this requires re-chartering, so after Keith is finished, we will get a sense of the room as to whether this is acceptable to the people present. There was concern over whether this is directly tied to usage of COPS; the answer is that it is independent of the COPS protocol. There was a question as to when we can advance the core schema. Must it be held up for just the QoS schema, for the QoS schema and the PIB (if we decide to work on it), or for the QoS schema and a schema from another domain (e.g., DHCP or IPSec)? Ed: we have to have a concrete deliverable delivered before we expand our charter. John: the core schema is in theory independent of the PIB, but the core schema should not be advanced until we have at least two different examples that can use it (e.g., QoS and IPSec); otherwise, we won't know if the core schema is sufficiently general to serve people's needs. Note that I am not recommending that all three I-Ds be advanced together. I am recommending that the QoS and some other draft be revised at least once, so that we have a good feeling that the core schema meets their needs. We'll take both of these questions (the PIB and core schema advancement) to the list. Back to the PIB... Shai Herzog and David Black signed up to clarify (expand and describe) the multi-layer model in Keith's picture. Note that the PIB provides the ability to late-bind conditions and actions.
Example:

  SLA:  provide gold, silver, and bronze QoS
  Core: provides policy definition
  QoS:  IF user IN Bldg1Group THEN give GoldService
        => IF IPSrcAddr == 198.10.20.1 AND Role == 'Edge' THEN remark with DSCP=5
  PIB:  IF device has role 'Edge' THEN map DSCP into queue/threshold
  MIB:  tells how to provision the interface(s) defined by the Role 'Edge'

The utility of the PIB for a QoS (e.g., DiffServ) schema is that much data, like specific device capabilities, PDP synchronization, detailed knowledge of how DSCPs map to queues, and how interfaces are provisioned through roles, can be much more efficiently represented in the PIB. This avoids cluttering the QoS schema and makes better use of the directory. To do this, the PIB contains several mapping tables:
- DSCP mapping to queues (per interface-type and role-combination)
- DSCP mapping to IP precedence and 802.1p
- 802.1p mapping to DSCP

The structure of the PIB is simple. Each AccessControlEntry (ACE) is a packet condition. ACEs are ordered into an AccessControlList (ACL). A set of ACLs is applied to a role-combination. This yields a rudimentary NOT, as an ACE match can skip to the next ACL. Action(s) are associated with an ACL. Packet conditions represented in the PIB include IP source and destination address and mask, DSCP range, IP protocol, and TCP/UDP source and destination port range. The PIB defines a single ACL action: the DSCP value for marking a packet. Possible future additions include microflow policers, aggregate policers, shapers, and other DiffServ mechanisms.

David Blight - Architecture Comments
----------------------------------------------------
(Slides: www.ee.umanitoba.ca/~blight/talks/pbn-arch.pdf)
We have to have multiple consoles in a management system, and multiple heterogeneous devices that we have to match. One reason we have multiple consoles is that we have different information coming into the system (policies, network topology information, network state information, network alarms, etc.). David doesn't like the PDP (the only need for it is to accommodate legacy devices).
He prefers the term Policy Interpreter. David is interested in devices at multiple layers; that is, he wants to use different levels of abstraction (network, AS, ISP, etc.), so you are in reality talking to another network management system, not to a device, and his devices are "virtual" devices. It is impossible to catch all conflicts, or even to avoid having conflicts; however, not all policy systems will produce conflicts. Example: a PBMS at the IP layer, and another PBMS at the ATM layer. Different abstractions provide different types of information to different systems or consoles. His idea of a policy-based management system is something that includes four main functions: a policy interpreter, a means to absorb network information, a means to generate network configuration information, and a means to produce network-specific views. This needs some protocol support. The chairs noted that this WG doesn't do protocols, but that it can generate requirements for other WGs that do work on protocols. David doesn't like LDAP ;-) This is because LDAP entries are not ideal for holding network information and policies. However, the market likes LDAP, so the real problem is in getting it to work. One way could be to provide an LDAP-compatible directory for each network management system; another is to have a global LDAP directory. Is there a bootstrap issue? This is mentioned in the draft: you could use a service location protocol (e.g., SLP) to locate management systems as appropriate, and then pass the appropriate type of information (e.g., CIM data) to the (discovered) management system.

Evan McGinnis - Architecture Comments
----------------------------------------------------
Evan wasn't happy with proposal A, which relied completely on the directory, or with proposal B, which seemed to hide the directory and depend completely on the message-passing service. Thus, Evan presented his compromise.
The main point is to separate a policy enforcement point (PEP) into a part that does enforcement (the PEP) and a part that performs the actual functions requested (the policy execution point, or PXP). Put another way, the PEP is where the policies are programmed, and the PXP is the logic that actually handles the packets. Do we need to standardize the interface between the PEP and the PXP? No. Then why is it shown? Because enforcement and execution are two different things. Evan's architecture allows for a couple of extra capabilities, such as the device pulling policies directly from the directory. After some discussion, the following conclusions were made:
- The possible deconstruction of a PEP into a PEP and a PXP needs more discussion on the list
- The generic models presented in Evan's presentation need more discussion, for the purposes of standardization: discussion on the list

Security Considerations - Russ Mundy
-----------------------------------------------------
(Russ is our security advisor.) Russ is here to help the WG determine what its security requirements are. In a nutshell, real-world use of the specifications produced by this WG requires SOME security; the difficulty is in determining the RIGHT security. Who determines if the specs are right with respect to security requirements? The WG, validated by the IESG. But the proof is in the people that use the specifications for implementations; these are the ones that really count, and we need to establish a feedback loop to include their comments. Thus, we need to address security-related concerns of the architecture to meet users' requirements. What do people need from security as they use this architecture? They need to consider the following things. Within all of the boxes of the architecture, you will need some amount of various types of security capabilities, including security at each individual box as well as overall security end-to-end.
They include integrity, authentication, authorization, access control, confidentiality, and perhaps non-repudiation. Why do you need to do this? For example, thinking about integrity: if the entry of the policy data is done in a protected environment using known people and known locations, you might be OK. But what if it isn't? People usually aren't confined to secure rooms, and how do you know if the person that entered the data was really who he or she claimed to be? How do you know that you are in fact talking to a valid policy repository? And that the data entered has not been tampered with? Russ recommends that the following types of security be investigated. Note, however, that each "box" and "line", as well as the "end to end" of the architecture, MAY need different amounts of security:

- Integrity: Intended policy information will be stored, moved (transported), and transformed accurately (within boxes, on lines, end-to-end).
- Authentication: Access to policy information is only permitted when those accessing it provide sufficient proof of identity (within boxes, on lines, ?end-to-end?).
- Authorization Determination: Any part of the specified system (e.g., repository, PDP, PEP, protocols) will permit them, and only them, to enter or change their policy information (within boxes, ?end-to-end?).
- Access Control: A Policy Authority needs to be able to control who sees and/or uses things related to their policy information, perhaps controlled down to individual objects and/or rules; e.g., an ISP may let Customer A "see" some (but not all) of its service rules, while Customer B cannot "see" any of Customer A's service rules (within boxes, on lines, ?end-to-end?).
- Confidentiality: A Policy Authority's private information must be protected, e.g., private keys, customer rules, some SLA information, and private business relations (within boxes, on lines, end-to-end).
With respect to confidentiality, if you are using private keys anywhere in the system, you certainly need confidentiality. Another example is business arrangements between companies.

- (Perhaps) Non-Repudiation: Non-repudiation is a possibility, and really requires a detailed examination by people in this working group.

What are Russ's concerns? This is a hard problem. For example, what do you really mean when you say "policy"? If you agree on this definition, then you can start defining requirements. But you also need to characterize both the users that will use this system and their business and application needs.

Current concerns:
- Policy is a complex problem
- Defining policy is a real hard problem
- Specifying a policy system is a REAL HARD PROBLEM
- Conclusion: the WG has a complex, real, REAL HARD PROBLEM
- Eventual users:
  - Aren't well defined
  - Haven't thought much about their security requirements
  - Will probably have a _wide range_ of security expectations
- The current set of I-Ds only supports (at most) ONE Policy Authority
- Too much emphasis on the LDAP repository instead of the user and the architecture

There are also concerns about data that spans multiple boxes, which protocols such as IPSec won't catch; it's really an information flow issue. One final concern is that there is too much focus on schema. We need to start focusing on these issues. Security should be specifically addressed in the architecture document. (Action: volunteers are needed for incorporating these considerations into the design, not as an afterthought.) Russ is here to help.

Wrapup
-----------
Take the PIB question to the list. We don't need to make an RFC1812-style PDP requirements document, but we do need to specify this; perhaps put it into the architecture document. Russ recommends that security should be part of the architecture document. We need to address security considerations for the core schema before it can be progressed, and we need a plan to complete that work. We need to address the evaluation model of conditions and actions.
Second, the ordering of actions is contentious. What about termination conditions? We agreed on the converged architecture picture. With regard to the QoS Schema: is this the right place to do the QoS schema? DiffServ didn't want to do it, and RAP didn't want to do it. (Note from chairs: Ed and I met with Fred and Brian Carpenter, and agreed that the Policy Framework WG should indeed do the QoS Schema.) IPSec Policy BOF: chair coordination will be initiated. QoS working groups (RAP, RSVP, DiffServ, IntServ): chair coordination will be continued/initiated, as required.