Trust Frameworks as Analog to Digital Converters

Issue/Topic: Trust Frameworks as Analog to Digital Converters

Session: Tuesday 1B

Conference: IIW-11, November 2-4, Mountain View

Convener: Scott David

Notes-taker(s): Jamie Clark

Tags:

trust_framework, taxonomy, contracts, risk_allocation, UI

Discussion notes:



"Facilitating Personal Data Transactions in a Secured Manner on a Global Scale": part of a presentation for the WEF (Davos) prep session on the "Rethinking Personal Data" workshop, New York, September 2010; should be posted shortly to the OIX website.

What's the international law of identity?

There isn't any.

Can we do things with law and/or rules and/or tech to weave together the disparate systems that interact?

What should identity systems do? Meet "system participant" (user) needs, such as:
 * data subjects need identity integrity
 * relying parties need assurance
 * identity providers need risk reduction

These high-level needs share some basic lower-level functional requirements: security, reliability, UI, etc.

What can tech and law do about this?
 * technology tools guide data movement & protect data at rest
 * legal rules create duties to incent behavior

-- By far, most of the data breaches I've seen (S. David) stemmed from human error, not tech failure. So the human rules and incentives matter.

A "Trust Framework" is a possible documentation style (a "term sheet"?) for the agreed risk and reliance arrangements between system participants.

There is some "low-hanging fruit" of law and practice guiding these duties: control
 * In the US: NSTIC, Levels of Assurance; in some states, data breach laws.
 * Privacy laws like HIPAA, Gramm-Leach-Bliley, FICA, etc.
 * Fair Info Practice Principles (originally US DHEW, 1973) - levels of

The ABA is drafting a report on Federated Identity which addresses a taxonomy of issues and actors; OIX is doing a "risks wiki"; some of this is out for public review now, with posted work product expected early 2011(?).

One difficulty is operationalizing assurance, which is mostly processed by end-users as emotional states like "trust" and "reliability." Quantification is needed to clear the semantic fog here.

The idea here is to address some recurring liability issues, but not all: an 80/20 approach, not boiling the ocean. It may be industry groups and self-regulatory efforts that give rise to the best evolving solutions.

The first step is a candidate common analytical framework, to get to "apples-to-apples" comparisons on some of the risks, practices, and concepts.

Inspirational vision: UI simplification, with risks and control issues displayed simply, like red-light/yellow-light/green-light displays.
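A minimal sketch of that traffic-light idea, assuming a hypothetical normalized risk score between 0 and 1 and illustrative thresholds (the session did not specify any scoring scheme):

```python
# Hypothetical sketch: map a normalized risk score (0.0 = lowest risk,
# 1.0 = highest risk) to a simple red/yellow/green indicator.
# The 0.3 and 0.7 thresholds are illustrative assumptions, not from
# any standard or from the session itself.

def traffic_light(risk_score: float) -> str:
    """Return 'green', 'yellow', or 'red' for a 0..1 risk score."""
    if not 0.0 <= risk_score <= 1.0:
        raise ValueError("risk_score must be between 0.0 and 1.0")
    if risk_score < 0.3:
        return "green"
    if risk_score < 0.7:
        return "yellow"
    return "red"

print(traffic_light(0.1))  # green
print(traffic_light(0.5))  # yellow
print(traffic_light(0.9))  # red
```

The hard part, of course, is not the display logic but agreeing on how to quantify risk in the first place, which is the "apples-to-apples" framework problem above.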

Audience: frameworks generally get developed in a context of silos, i.e. non-interoperable specialized cases. Is there a "metalanguage" for crosswalks among the privacy practices of those siloed players? Or for 15% of them, anyway, for scalability's sake.

There is a PPT deck associated with this session: "nov 2 Rethinking Personal Data Workshop.ppt"