UMA telecon 2013-02-21
Date and Time
- Focus meeting on Thursday, Feb 21, at 9am PT (time chart)
- Skype: +99051000000481
- US: +1-805-309-2350 (other international dial-in lines available) | Room Code: 178-2540
Agenda
- Action item review
- UMA for healthcare report (targeted event is on this same day)
- IDESG meeting report
- RSA opportunities to get together
- Review NSTIC "envision it" scenarios for UMA matches/implications
- Identify key elements of "UMA for enterprise API authorization" implications and benefits ("Gluu" scenario)
- Capture AI for threat model conclusions from last meeting
- AOB
Minutes
Action item review
Action items reviewed; some of them are still in progress.
UMA for healthcare report (targeted event is on this same day)
Adrian, Sal, and Josh Mandel got together and did a bit of work. Josh presented to the Blue Button+ group last week. Sal is working on the UMA parts of the presentation. Also, Adrian met yesterday with the White House CTO in his Patient Privacy Rights role. There's high interest in demonstrating BB+ with HIEs, which are in high gear right now. PPR is promoting patient-mediated exchange. One of the HIE projects is likely to target BB+, which is already OAuth2-friendly. The management of consent is a problem waiting to be solved. They're going to be hunting for software to support these use cases; this is why Nick has joined us today. (See the Implementations wiki page.)
IDESG meeting report
No one on the call was in attendance.
RSA opportunities to get together
Eve and Thomas will be around. Come to the Kantara Tuesday night event! Also, there's a Kantara breakfast on Tuesday morning.
Review NSTIC "envision it" scenarios for UMA matches/implications
Ken Klingenstein oversees the Scalable Privacy project, which has received an NSTIC grant. Part of this discussion involves Idemix and U-Prove. There's interest in understanding how UMA could help with the NSTIC scenarios.
Antonio (p. 11)
Antonio, age thirteen, wants to enter an online chat room that is specifically for adolescents, between the ages of twelve and seventeen. His parents give him permission to get a digital credential from his school. His school also acts as an attribute provider: it validates that he is between the ages of twelve and seventeen without actually revealing his name, birth date or any other information about him. The credential employs privacy-enhancing technology to validate Antonio’s age without informing the school that he is using the credential. Antonio can speak anonymously but with confidence that the other participants are between the ages of twelve and seventeen. (p. 11)
Age is a piece of PII that the individual would like to control. This suggests that UMA could provide a means of exercising that control.
The school is being used as a trustworthy third-party attribute provider. UMA can provide brokering of access to a resource server holding this information.
Providing information about being "old enough" is a higher-level goal beyond just providing Antonio's age. Eve's take: UMA doesn't preclude this but doesn't solve it directly in any particular way. Thomas notes that the AM could, at the app (not protocol) level, serve as an aggregator or a judge of third-party attributes coming from resource servers connected to it. Look at the example of kids traveling around England and France: an engine would be needed to calculate the kid's "old enough" quotient according to French law vs. English law. (Hey, will we see a resurgence of AI languages of the past to solve this?)
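As a thought experiment only, here is a minimal sketch (Python) of the kind of "old enough" engine such an app-level judge might run. The jurisdiction thresholds, function names, and dates are illustrative assumptions, not real legal rules and not anything defined by UMA; the point is that only the yes/no answer leaves the engine, never the birth date.

```python
from datetime import date
from typing import Optional

# Assumed per-jurisdiction age thresholds for the travelling-kids example;
# real legal rules (French law vs. English law) would be far richer than one integer.
AGE_THRESHOLDS = {"FR": 15, "UK": 13}

def age_in_years(birth_date: date, on_date: date) -> int:
    """Whole years between birth_date and on_date."""
    years = on_date.year - birth_date.year
    if (on_date.month, on_date.day) < (birth_date.month, birth_date.day):
        years -= 1
    return years

def old_enough(birth_date: date, jurisdiction: str, on_date: Optional[date] = None) -> bool:
    """Return only the yes/no answer; the birth date itself is never disclosed."""
    on_date = on_date or date.today()
    return age_in_years(birth_date, on_date) >= AGE_THRESHOLDS[jurisdiction]

# Example: "old enough under (assumed) French law?" for a kid who is 12 on 2013-02-21.
print(old_enough(date(2000, 3, 1), "FR", on_date=date(2013, 2, 21)))  # False
```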
Not directly relevant but indirectly so: the old UMA "Custodian" scenario. (This would be where Antonio is 12, not 13, and thus, under COPPA, can't consent for himself.)
- Resource owner: Antonio
- Authorization server: school (also IdP) – useful to co-locate these components
- Resource server: school (again) – serving user attributes, which are trustworthy due to the source
- Client: online chat room (presumed web app)
- Requesting party: Antonio (again) – thus, this sharing scenario is "person-to-self" (see the e-Transcript case study for an example of another person-to-self case)
See also the online personal loan case study for how person-to-self sharing can work in practice. Essentially, Alice sets policies ahead of time that mention herself (claims only she can prove). The wireframes in this case study show how this can work.
If the AS and RS were not co-located, there would be elements of AS-RS introduction and communication that would need to be covered. See the personal cloud subscription case study for these elements (but note that it describes person-to-person sharing).
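For concreteness, here is a minimal, non-normative sketch (Python, using the requests library) of how a person-to-self flow like Antonio's might look with the AS and RS co-located at the school. The endpoint paths, parameter names, the pushed claim, and the token placeholders are assumptions made for illustration only, not actual UMA wire details.

```python
# Illustrative, non-normative sketch of the person-to-self flow discussed above.
# Endpoint paths, JSON shapes, and claim names are assumptions for this sketch;
# tokens are placeholder strings, not real PAT/AAT/RPT values.
import requests

AS_BASE = "https://school.example.com/uma"          # assumed AS hosted by the school
RS_BASE = "https://attributes.school.example.com"   # assumed RS serving user attributes

# 1. The chat room (client) tries to read the age attribute without authorization;
#    the RS registers a requested permission at the AS and relays the resulting ticket.
ticket = requests.post(
    f"{AS_BASE}/permission",
    json={"resource": "age-claim", "scopes": ["read"]},
    headers={"Authorization": "Bearer RS-PAT-PLACEHOLDER"},
).json()["ticket"]

# 2. The client asks the AS for authorization, supplying a claim that only Antonio
#    himself can prove (person-to-self: requesting party == resource owner).
rpt = requests.post(
    f"{AS_BASE}/authorization",
    json={"ticket": ticket, "claims": {"subject_is_resource_owner": True}},
    headers={"Authorization": "Bearer CLIENT-AAT-PLACEHOLDER"},
).json()["rpt"]

# 3. The client retries the attribute request at the RS with the issued token and
#    gets back only the "old enough" answer, not the birth date.
answer = requests.get(
    f"{RS_BASE}/antonio/age-claim",
    headers={"Authorization": f"Bearer {rpt}"},
).json()
print(answer)  # e.g. {"between_12_and_17": true}
```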
Identify key elements of "UMA for enterprise API authorization" implications and benefits ("Gluu" scenario)
- Resource owner: enterprise
- Resource owner agent: policy administrator working for the enterprise
- Authorization server: enterprise – (roughly) PDP / (formal) PIP / AS; assumed able to make authorization decisions with no run-time involvement by the policy administrator, vs. the human resource owner case, where the AS application can optionally (this isn't baked into UMA) offer policies that say "SMS me for approval when so-and-so approaches", Interaction Service-like.
- Resource server: enterprise app/API/(roughly) PEP
- Client: some client app (need good example)
- Requesting party: enterprise again (2-legged) or some developer (3-legged)? – e.g., researchers from third-party institutions might run client apps that make API calls?
Note that enterprises are likelier to have a formal policy store, so that the UMA AS could start to become a more formal client of a PIP interface. The PIP could be XACML-based or anything else.
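A hypothetical sketch of that idea, with the UMA AS acting roughly as the PDP and treating the enterprise policy store as a pluggable PIP. The PolicyInfoPoint interface, class names, and the toy authorization rule are assumptions for illustration, not an existing API; a real deployment might instead call out to a XACML engine.

```python
# Hypothetical sketch only: an enterprise UMA AS that consults a formal PIP.
# All names and the toy rule below are assumptions, not spec-defined interfaces.
from dataclasses import dataclass
from typing import Protocol

@dataclass
class AccessRequest:
    requesting_party: str   # e.g. a third-party researcher's client identifier
    resource: str           # e.g. an enterprise API resource
    scope: str              # e.g. "read"

class PolicyInfoPoint(Protocol):
    """Whatever policy/attribute store the enterprise already runs (XACML-based or otherwise)."""
    def attributes_for(self, subject: str) -> dict: ...

class EnterpriseAS:
    """UMA AS acting (roughly) as the PDP; policy data comes from the PIP,
    with no run-time involvement by the policy administrator."""

    def __init__(self, pip: PolicyInfoPoint):
        self.pip = pip

    def authorize(self, req: AccessRequest) -> bool:
        attrs = self.pip.attributes_for(req.requesting_party)
        # Toy rule standing in for real enterprise policy:
        return req.scope == "read" and "partner-researcher" in attrs.get("roles", [])

class InMemoryPIP:
    """Stand-in PIP for the sketch; a real one would query the policy store."""
    def attributes_for(self, subject: str) -> dict:
        return {"roles": ["partner-researcher"]} if subject == "research-app-42" else {}

authz = EnterpriseAS(InMemoryPIP())
print(authz.authorize(AccessRequest("research-app-42", "crm-api/customers", "read")))  # True
```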
Capture AI for threat model conclusions from last meeting
AI: Thomas: Write up threat model text for the core spec based on the discussion in the 2013-01-31 meeting. Due sometime after RSA!
Attendees
- Eve
- Domenico
- Andrew
- George
- Adrian
- Keith
- Nick
- Thomas
- Sal
Regrets:
- Maciej
Next Meetings
- All-hands meeting on Thursday, Feb 28, at 9am PT (time chart) – Thomas regrets