The topic of how to use Assurance Levels pertains to different protection requirements, such as entity authentication assurance or privacy. To build consistent trust frameworks we need a clear model of assurance metrics. This draft document outlines some thoughts on such a model.

Levels of Assurance (of identity, authentication, privacy, user control, session protection, service level, etc.) shall be simple to use and at the same time provide comprehensive information on the safeguards that the assuring actor has implemented to satisfy the relying actor. These two requirements are obviously in opposition. Simple use is afforded by a single scale (like 1-4), whereas a comprehensive policy looks more like the SAML Authentication Context with its 50 elements, and even that does not cover all areas.

What are the basic requirements?

Assurance metrics provide the relying actor with the assurance that its protection requirements are fulfilled at a mutually understood minimum quality level. The assurance has to be communicated in a structured form with some granularity. The key question is the granularity: do we have to provide a fat list the size of the ISO 27002 document, or a single number that covers everything from information security to privacy? The simplest signal to an end user would be a binary one, displaying the existence or absence of some assurance. Studies regarding end users' perception of SSL usage in browsers are not encouraging and definitely point in the direction of making the UI as simple and consistent as possible.

I suggest taking the field of consumer reports as a reference. When Stiftung Warentest (the main tester in Europe) compares products or services, they do so in roughly 4 steps:

  1. Description of the subject field in general, with the key expectations and state-of-the-art solutions, plus lessons learned from former tests. Rationale for why the subject field was limited to a certain range of products.
  2. Product-by-product textual review highlighting special features that would not be easily conveyed in a formally structured table.
  3. Structured comparison as a large table containing the relevant properties.
  4. A summary rating according to some predefined weighting scheme. It can also be used on product packaging to show the overall rating.

Assuming that a test compares digital cameras, one could study the comparison in detail to come to an informed decision, applying different weights to usability, durability, picture quality, zoom range, etc. Or one could use the score “very good” in the shop when purchasing a camera ad hoc, without unbiased consulting at hand.
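
To make the weighting idea concrete, here is a minimal sketch of how a set of detailed per-criterion scores could be collapsed into a single summary label. The criteria, weights and grade boundaries are illustrative assumptions, not values taken from any actual test.

    # Hypothetical sketch: aggregating detailed per-criterion scores into one
    # summary rating, analogous to a consumer-report weighting scheme.
    # Criteria, weights and grade boundaries are illustrative assumptions.
    WEIGHTS = {
        "usability": 0.25,
        "durability": 0.20,
        "picture_quality": 0.40,
        "zoom_range": 0.15,
    }

    GRADES = [  # (minimum weighted score, label), checked from best to worst
        (4.5, "very good"),
        (3.5, "good"),
        (2.5, "satisfactory"),
        (1.5, "sufficient"),
        (0.0, "poor"),
    ]

    def summary_rating(scores):
        """Collapse detailed scores (0-5 per criterion) into a single label."""
        total = sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)
        for minimum, label in GRADES:
            if total >= minimum:
                return label
        return "poor"

    # The detailed reader inspects the individual scores; the ad-hoc buyer
    # only needs the aggregated label.
    print(summary_rating({"usability": 4.0, "durability": 5.0,
                          "picture_quality": 4.5, "zoom_range": 3.5}))  # -> "good"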

I derive from consumer reports that both approaches are meaningful and that two audiences should be addressed. The enterprise risk manager would most probably want to see a detailed set of assurances to protect valuable assets, whereas the typical SOHO user would like to get away with a 3-second attention span on the security and privacy topic and would be served better with a simple scale.

To achieve interoperability most controls are already aggregated, mixing apples and oranges. Take the famous encryption strength: looking at ECRYPT’s yearly report on key sizes, one learns quickly that a “128-bit equivalent” is composed of symmetric and asymmetric key lengths, hash functions, padding, protection duration and more. The same is true for many other controls. It will be the challenge of each trust framework to find the adequate level of granularity for each control, balancing control with flexibility and interoperability.
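
The point about aggregation can be illustrated with a small sketch. The figures below are rough equivalences in the style of the ECRYPT/NIST key-size reports (exact numbers differ per report and year); the check simply shows that a single “n-bit equivalent” claim only holds if every bundled parameter reaches that level.

    # Illustrative sketch: the parameter bundle hidden behind a single
    # "n-bit equivalent" security level. Figures are approximations in the
    # style of the ECRYPT/NIST key-size reports, not quotes from them.
    EQUIVALENCE = {
        80:  {"symmetric": 80,  "rsa_dh_modulus": 1024, "ecc": 160, "hash": 160},
        112: {"symmetric": 112, "rsa_dh_modulus": 2048, "ecc": 224, "hash": 224},
        128: {"symmetric": 128, "rsa_dh_modulus": 3072, "ecc": 256, "hash": 256},
    }

    def meets_level(level, deployed):
        """True only if every deployed parameter reaches the claimed level;
        one weak link (e.g. a short RSA modulus) drags the whole claim down."""
        required = EQUIVALENCE[level]
        return all(deployed.get(name, 0) >= needed for name, needed in required.items())

    # Claims "128-bit equivalent" but still uses a 2048-bit RSA key:
    print(meets_level(128, {"symmetric": 128, "rsa_dh_modulus": 2048,
                            "ecc": 256, "hash": 256}))  # -> False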

Further aspects:

When controls are implemented to fulfill a protection requirement, the controls can be measured along 3 generic vectors (see the sketch after this list):

  1. Relative virtue compared to the state of the art (e.g. optional data minimization is weaker than mandatory data minimization)
  2. Strength of enforcement (contractual, by law, technical, with or without audit, ...)
  3. Capability maturity of the asserting actor (ad hoc, controlled, improving processes)
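
As a minimal sketch, a single control assessed along these three vectors could be recorded as follows; the field names and scales are assumptions for illustration, not a defined standard.

    # Hypothetical sketch: one implemented control assessed along the three
    # generic vectors above. Field names and scales are assumptions.
    from dataclasses import dataclass

    @dataclass
    class ControlAssessment:
        control: str          # e.g. "data minimization"
        relative_virtue: int  # 1..4: distance from the state of the art
        enforcement: str      # "contractual" | "by law" | "technical"
        audited: bool         # enforcement with or without audit
        maturity: str         # "ad hoc" | "controlled" | "improving"

    example = ControlAssessment(
        control="data minimization",
        relative_virtue=2,        # optional only, hence weaker than mandatory
        enforcement="contractual",
        audited=False,
        maturity="controlled",
    )
    print(example)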

Existing models