European Workshop on Trust & Identity

In cooperation with GÉANT

December 3rd

Session 34 <Legal Crash Course> (14:15/Room K6)

Convener: Patrick van Eecke

Tag: eIDAS, e-Signatures


I want to discuss a new kind of framework that has been set up for public services.

In Europe you can't do anything unless it has been put into law, whereas in the US it's usually the other way around. The legislation of the European Commission is written in English, and lawyers write for lawyers. E.g. the EU directive on e-signatures (1999/93/EC) - why was it changed? Reason: legal ambiguities.

That directive introduced rules on the legal effect of electronic signatures:

  • Handwritten vs. electronic signatures,
  • Scanned handwritings,
  • Authorisation

You can use all of that but you have to convince a judge that it's as good as the handwritten version.

Qualified electronic signature: you have to follow certain requirements

  • If the electronic signature meets those requirements, then every court in Europe has to give it the same recognition
  • DigiNotar example (for generating electronic signatures, and it got hacked, all certificates had to be revoked)
  • In a world of trust services we have electronic signatures, but also many others

Regulation: 910/2014 (eIDAS regulation)

  • The European Commission first wanted not to focus on electronic signatures alone, but to also do time stamps, electronic messages
  • Now we have to add another part to that piece of legislation: eIDs (plastic cards) - some governments resist issuing them, but Germany, Spain etc. already have them
  • Used to identify yourself physically in public, and also to access public services online
  • EU: great initiatives, but they don't work together. As a single market we want to make sure that someone going to Spain on holiday can deal with the Spanish authorities - but a Belgian identity card cannot be used for Spanish services (yet). Problem: the EU wants an open market
  • eID principle of mutual recognition of electronic identification
  • EU member states are not obliged to accept ID cards from other member states. Austria has decided: that's good for us, every other member state should accept our ID cards. Problem: different levels of trust/security in eID between different countries.
  • 2 opportunities for NGOs: 1 - they can also make use of the system voluntarily / 2 - governments issue eIDs themselves (public authority issue)
  • UK: not going to do that, not in culture/tradition. Governments can point to private companies who are issuing documents, those documents can be used for identification for government services as well
  • Timeline: 17.09.2014 entry into force of the regulation / 18.09.2015 voluntary recognition of eIDs / 01.07.2016 date of application

eID out of scope

  • Member states are not obliged to have an ID scheme or to notify their eID schemes
  • Notified eIDs are not necessarily ID cards
  • No "EU database" of any kind
  • No "EU eID"
  • No coverage of "soft IDs" (e.g. Facebook); only "official" eIDs

Discussion: is it possible to use it in Denmark/Sweden?

We have the regulation; although it is quite detailed, these are just the basic principles, which are currently being implemented via secondary legislation.

eID: 4 implementing acts already published (a more detailed level of how to implement the rules of the regulation) - e.g. governments should use the English language to cooperate

Advice: Do not stick to the regulation alone; read the implementing acts as well!


Trust services - definition:

  • Any electronic service normally provided for remuneration
  • consisting in the creation, verification, validation, handling and preservation of electronic signatures, seals, time stamps, electronic delivery services, website authentication, certificates (incl. certificates for electronic signature and for electronic seals)

Why is the electronic identity provider not at the same time a trust service provider? The identification chapter and the trust services chapter of the regulation are totally separate - why?

  • Because there are two units in the European Commission. They put both chapters together; I would have preferred 2 different acts, because they don't have anything to do with each other.

I also regret that one trusted third party is not covered: trusted archival services. We now create electronic documents etc. and none of us knows how to keep them - we need market operators that archive these documents and that can be trusted, even after 10 or 20 years. Apparently the European Commission decided that the market is not ready yet.

Q: Is it a closed list?

A: Trust service providers: closed list; the European Commission can review every year whether a new one can be added to the list.

The certificate service provider needs to keep the certificate for 30 years.

Difference between directive and regulation:

  • Directive: obligations for actors in the field were limited to those who claimed to be qualified service providers
  • Regulation: also introduces a lower level of requirements for (normal) service providers

Trust services: strong liability

  • Trust service provider: you'll be held liable if something is wrong, but the other party needs to prove that you did something wrong
  • Liability increases when (...)

Security requirements

  • Breach notification duty: notification of the supervisor (installed in every country) within 24 hours

Qualified trust services - philosophy

  • normal or qualified seal
  • normal or qualified time stamp

Legal effects:

  • eSeal/eSignature - companies can now use eSeals on documents - you can use an eSeal and then have to decide whether you're using a normal or qualified one
  • time stamp
  • link to standards: establish reference numbers of standards - European commission will look at them, publish them in official journal of EU
  • eDocuments
  • eDelivery

Implementing acts:

  • Commission Implementing Regulation (EU) 2015/806, 22 May 2015
  • Commission Implementing Decision (EU) 2015/1505
  • Commission Implementing Decision (EU) 2015/1506

Q: Would it be better for people to take action before implementing?

A: Stakeholders: we can't wait; if we wait we won't have sufficient time to take all the necessary measures. Now organizations say: let's prepare now, then we'll have a competitive advantage when the implementing acts come out.

Website authentication:

  • only talks about qualified certificates: those for website authentication shall meet the requirements laid down in Annex IV
  • no legal effects

Using a qualified website certificate doesn't bring any additional legal effects! The Commission just wanted to add it to increase the level of security, hoping the market would pick it up.

On the 27th of November a new version came out - this one is probably the final version.

Session 33 <User-Centric within enterprise> (14:15/Room K3)

Convener: Raimund Wilhemer

Abstract: What is the business case for user-centricity in an enterprise?

Tags: User-centric IDM


The reason I want to specify it: it is becoming a “modern word”. If you look at the enterprise, there are some restrictions. In Austrian and German law there are restrictions on e-mailing without permission etc. The mailbox is on a back-up, and there is no user-centric implementation of it. The same goes for cloud services outside of your company.


What is your definition of user-centric in the enterprise?

Aud 1: for me it doesn’t make any sense.

Aud 2: are we talking about even though you are employee you are a person that has some right towards the employer? User centric identity is one line in the whole list.

Example: smart card, the owner is the enterprise, it has no pin but it might have a solution to get into it

What is user centric?

Aud 1: Your citizen identity (private), not connected to any enterprise.

C: User centric is possible within enterprise if there is an external one.

Aud 1: It is illegal to read the mail, but they have access to it. What is illegal is to use that access.

Aud 2: If you have a folder “private” on your computer, the employer is not allowed to read it.

C: Example of a colleague who left the company and didn’t give a handover.

Aud 3: Still corporate identity but you need a law for it. Using company mail for private use.

C: PKI - you do a certification of a public key. If it is in the possession of an entrepreneur, is he allowed to use it?

Aud 2: There are many limitations on the control. The enterprise is limited by law. The enterprise can destroy the identity when the person is leaving the company. For example: people bringing their user-centric “things” into the enterprise. So using your user-centric identity within the enterprise is possible, but not the other way around (in this context). A smartphone is usually connected to your civil identity (through iCloud etc.). It is your identity that you are bringing to the enterprise. If an enterprise creates the identity, then it is owned by the enterprise, but it is not a user-centric/civil identity.

Aud 2: Nowadays we may have multiple identities.

Aud 4: I do only have one identity, I use indeed different attributes but I do think that there is only one identity.

Is there some definition of user centric?

Aud 2: There are many definitions. I wrote a paper about it and distilled my own definition. See also: Kim Cameron (Microsoft)


C: So the user centric identity within the enterprise is not possible.

It is either a user-centric or an enterprise identity.

The employee might bring his user centric identity to the enterprise.

Session 32 <Rebooting the Web of Trust> (14:15/Room K2)

Convener: Markus Sabadello

Abstract: We discussed a recent event called "Rebooting the Web of Trust", which explored modern technologies (crypto, blockchain, self-sovereign identity). The ambitious goal of the event was to come up with better alternatives to traditional PGP, TLS, name registration, and other Internet services. One of the key projects is to create a blockchain-based registry for permanent identifiers that anyone can use without intermediaries. The community will publish a set of white papers and hold additional events in 2016.

Tags: Trust, Crypto, Architecture



Blockchain-based registry for identifiers → public keys (DPKI)

Instead of renting/buying a domain name: a new model of handling identifiers

Talk about an event in San Francisco, a couple of weeks ago


  • PGP 25th anniversary. A lot of people are not using it, or cryptography in general
  • X.509 model - problems: trust hierarchy in certificates.
  • SSL - problematic X.509 CA model
  • Naming: email addresses, you never really own a name. You can only rent a domain name, not buy it.

New layer, new architecture that can fix these problems

Some of the people who attended the event in SF: Christopher Allen, Jon Callas (one of the creators of PGP), Bitcoin-involved people, Juan Benet. Working on advanced, cutting-edge crypto protocols.

Idea: come up with ideas as individuals. Own our own identity.

Technologies that are being discussed (SAML, trust frameworks + federations) - you never own something, you're only ever part of a federation (there's authorisation manager, etc.)

In PGP: you create your own private key without an SP

You get started by yourself; you don't have to pay for an account. PGP, SSL etc. try to do it better

Event: all participants submitted papers about what they're interested in: folder of these papers.

Some are pretty advanced: signatures, mark signatures, distributed file systems, semantic web technologies, trust models etc.

Might be interesting to create a new kind of way to do what we currently do with PGP

Security can be combined

User-centric identity is quite common but: self-sovereign identity - new expression people come up with. You don't need anyone else to get started. You can participate in a system without signing up.

  • Johan: how can they communicate with you? What about the key?

Ongoing process. There’ll be an outcome. One of the documents (DPKI - decentralised public key infrastructure): method for registering your key with an identifier in a block-chain

  • Rik: how to ensure there aren't collisions?
  • Johan: even though you got a public key, (...)
  • Aud 2: combination is the trick.

What exactly is it that you put into a block-chain?

One approach: first come, first served. Public key → then it's your identifier. Someone else can't come after you. You can always write it into a block-chain, even though another one has already done it.

You've got an identifier; you don't have to change it manually
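The first-come-first-served idea can be sketched in a few lines. This is an illustrative toy only: a plain dict stands in for the block-chain, and the `NameRegistry` class and its methods are my own invention, not anything from the DPKI paper.

```python
import hashlib


class NameRegistry:
    """Toy first-come-first-served registry (illustrative only; a real
    DPKI design would anchor entries in a blockchain, not a dict)."""

    def __init__(self):
        self._entries = {}  # identifier -> public key fingerprint

    def register(self, identifier: str, public_key: bytes) -> bool:
        # First writer wins; later registrations of the same name fail.
        if identifier in self._entries:
            return False
        self._entries[identifier] = hashlib.sha256(public_key).hexdigest()
        return True

    def lookup(self, identifier: str):
        return self._entries.get(identifier)


registry = NameRegistry()
assert registry.register("markus", b"markus-public-key")
assert not registry.register("markus", b"attacker-public-key")  # too late
```

The point of the sketch is only the "someone else can't come after you" rule: the first write binds the name to a key fingerprint, and any later write for the same name is rejected.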

  • Johan: if I create a public key, how do they know I'm attached to the public key?

I can tell you my identifier is 'Markus', or a Twitter user name.

Is anyone familiar with Zooko's Triangle?

Having names like Twitter user names in a way that is not controlled by a single authority - that wasn't possible before block-chains.

Not saying that you can’t have all of these properties, it’s just not very likely:

Desirable attributes for identifiers (usernames, domain names, IPs, ...):

  • Human readable
  • globally unique
  • decentralised

Pseudonyms are just local.

Maybe we don't want global identifiers. Maybe I just need local identifiers for my friends. You got a name, so I know it’s you. You can link them.

For example:

(Addresses the audience) you're Johan and you're Rik.

Human readable name: Rik who is known by Johan. Mechanism.

  • Aud: what if Rik doesn't want him to know him? (ha, ha)
  • Rik: limitations + scalability problem: solution?

There are articles on that, e.g. on how secure block chains are.

Extract from paper: "it can be vulnerable if you look at the number of nodes that are mining. Whatever the smallest number is, that is the vulnerability of the block chain" - if you can compromise any of these, you can compromise the block chain. Recommendation in the paper: use multiple block chains, which are supposedly decentralised - you register your identifier etc.

But err...what was the question?

  • Rik: There's the public block chain. Do you have other distributed proofs of trust? Different communities operating different proofs
  • Public block chain can scale.

Objective of the event in San Francisco: to create permanent identities? How to eliminate identities? How do you take yourself out of circulation if you're dead?

  • Aud 3: what if somebody deletes the block chain? -- They’ll have to delete a whole lot.
  • Johan: Name coin. But THIS is more generic. You can put things in multiple block chains.

Registration doesn’t expire. What happens when you die? You can encode these rules in the block chain thing. When you create such a registry, then you can just agree on these rules and say that it's in the consensus.

There’s a project that experiments with that, it's called 'blockstore', created by a company that is called “onename” on the Bitcoin block chain. Putting things on the block chain: approach that you store most of your data outside the block chain. This project is trying to create the higher level component (higher semantics etc.) via Bitcoin. You can register a name but you have to renew it every couple of years.

  • Rik: do you still own it? Or do you have to pay?

You have to pay the Bitcoin transaction fee; other than that, no fee.

'Registration is always done directly by the principal.' Registration services that work on behalf of principals are prohibited -> you use your own server/machine, e.g. with JavaScript. Use Bitcoin in your browser and then put it into the blockchain. Cannot technically be prevented.

  • Aud 3: are there reliable Javascript implementations?

Testing tool for trying/repairing your keys (registry playground for BIP32, BIP39, ...)

Idea: creating some kind of object that you put on the block chain. That’s where you have your public key. You can generate it yourself, then register it.

Demo BIP32: interesting ideas from the Bitcoin community. Bitcoin improvement proposal.

39: creating a key pair from a phrase (a number of words). Not a new idea, but you can create a random sequence of words and then create your key pair from it. Either you download your private key, or you memorize the phrase, or you print your QR code - to make it easier not to lose your private key.
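The phrase-to-key step is specified in BIP-39 as PBKDF2-HMAC-SHA512 over the mnemonic (2048 rounds, salt "mnemonic" plus an optional passphrase). A minimal sketch of just that seed derivation; the word-list and checksum handling of full BIP-39 is omitted:

```python
import hashlib
import unicodedata


def bip39_seed(mnemonic: str, passphrase: str = "") -> bytes:
    """Derive the 64-byte seed from a mnemonic phrase as BIP-39 specifies:
    PBKDF2-HMAC-SHA512, 2048 rounds, salt = "mnemonic" + passphrase."""
    mnemonic_n = unicodedata.normalize("NFKD", mnemonic)
    salt = unicodedata.normalize("NFKD", "mnemonic" + passphrase)
    return hashlib.pbkdf2_hmac("sha512", mnemonic_n.encode(), salt.encode(), 2048)


seed = bip39_seed("legal winner thank year wave sausage worth useful legal winner thank yellow")
assert len(seed) == 64  # the seed then feeds key-pair (e.g. BIP-32) derivation
```

The derived seed is deterministic, so anyone holding the phrase can regenerate the same key material, which is exactly the "memorize or print it" property described above.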

32: about hierarchical deterministic keys. Start with a master key pair and derive other keys from it (child and grandchild keys). You can keep generating new key pairs without registering new stuff on the block chain.
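The derive-children-without-new-registrations idea can be sketched roughly as follows. This is only a simplified flavour of BIP-32 (real BIP-32 additionally performs elliptic-curve arithmetic on the key halves), and the function and variable names are illustrative:

```python
import hmac
import hashlib


def derive_child(parent_key: bytes, parent_chain: bytes, index: int):
    """Simplified flavour of BIP-32 derivation: HMAC-SHA512 over the parent
    material is split into a child key and a child chain code. (Real BIP-32
    additionally does elliptic-curve math on the key halves.)"""
    data = parent_key + index.to_bytes(4, "big")
    digest = hmac.new(parent_chain, data, hashlib.sha512).digest()
    return digest[:32], digest[32:]  # child key, child chain code


# Dummy master material; in practice this would come from the BIP-39 seed.
master_key, master_chain = b"\x01" * 32, b"\x02" * 32
child = derive_child(master_key, master_chain, 0)
grandchild = derive_child(child[0], child[1], 0)
assert child != grandchild  # each level yields fresh key material
```

Because derivation is deterministic, the wallet only needs the master pair; every child and grandchild key can be regenerated on demand instead of being registered separately.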

  • Johan: can you use the key for a one-time-usage? Give my key to you for a limited time
  • Aud: it's a time constraint, not use constraint

You also say what data can be used.

Example: I send you 0.5 Bitcoins, in my wallet: not a lot of keys. You just have to create one key, can create child key pairs too. From this perspective, it's a different key that is used.

Every friend I have: I can just use a derived child key.

HD key - but a bit off-topic.

Concept of thin clients:

Full node: in a block chain this means you run a full server, you're validating all the transactions, you need to be online, you need to have storage etc. - not easy on a smartphone.

If you want to register to a block chain on a smartphone, you can't run a full stack of the block chain. You need a thin client (so you can register things and your registration is valid).

  • Good idea but got a lot of issues, like moving money.

Same challenge as with a Bitcoin wallet: you're not running a full node, not running the full protocol.

  • Johan: why can’t it be built into the wallet? That’s the place it would fit into.

It’s similar but it’s not about Bitcoin but registering and identifying with a public key.

In the article: what if they lose their phone, backups etc.

Shamir secret sharing: sharing it with people you trust (my 3 best friends are given parts of my private key). They will have to return them to me if I lose mine.

Instead of splitting up my key and distributing the parts, I can have my friends create a new one for me (instead of getting back my old one).
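Shamir's scheme as mentioned above hides the secret as the constant term of a random polynomial over a prime field; any k of the n shares reconstruct it, and fewer reveal nothing. A minimal sketch (for real keys, use a vetted library rather than this toy):

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime, large enough for a 16-byte secret


def split(secret: int, k: int, n: int):
    """Hide the secret as the constant term of a random degree-(k-1)
    polynomial; each share is one point on that polynomial."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]

    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME

    return [(x, f(x)) for x in range(1, n + 1)]


def recover(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
    return total


shares = split(123456789, k=2, n=3)  # 3 friends, any 2 suffice
assert recover(shares[:2]) == 123456789
assert recover(shares[1:]) == 123456789
```

With k=2 and n=3, losing one friend (or one share) is survivable, which is the "3 best friends" scenario described in the notes.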

  • Aud: I hope your friends don't die or get arrested :-)
  • What if they decide they're not your friend anymore? Hopefully you still got other ones.
  • Rik: combining centralised key chains with the blockchains - that way you'd have the comfort of knowing it's professionally administered.

It doesn't have to be friends; it could be a more official thing.

  • Rik: was it a compelling event? What are the next steps?

White papers are going to be published in December.

Something about the articles:

  • Smart signatures. Within a key/signature, you encode these rules. It's about signatures, verification mechanisms.
  • 1 non-technical as well.
  • 1 'Identity 2020' project: digital identities for the most vulnerable and excluded members of society, e.g. refugees and homeless people. If your government throws away your passport and you only have a smartphone, how can you verify the things that you have done and the person you are? To prove where you come from and that you deserve refuge. -- a bit shady and not very clear to me; sounded interesting though. Self-sovereign identity for those who have nothing.
  • 'Detecting key misuse' - article
  • 'Rebranding web of trust' - protocols etc.

Next year: follow-up event.

June/July 2016: demo

25th birthday of PGP

Session 31 <SAML2 Testing Tools (continuation)> (14:15/Room K1)

Conveners: Roland Hedberg, Rainer Hörbe

Abstract: Continuing from yesterday's session, the topic covers the testing tools used to check for errors in federations and to point out exactly where an error is. Discussion on how the tools could be improved and what they would mostly be used for, based on the experience of individual identity managers.

The proposal on the table is to create a database.

"I am not sure it will be more than one tool. A documentation mapping tool."

Putting this into GitHub allows you to add that. The documents that are on GitHub must follow a structure.
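As a rough illustration of "documents must follow a structure", one could validate each test-case document against a required set of fields before accepting it into the repo. The field names below are hypothetical assumptions for the sketch, not the project's actual schema:

```python
# Hypothetical required fields for a test-case document in the GitHub
# repo; illustrative assumptions only, not the project's real schema.
REQUIRED_FIELDS = {"id", "title", "profile", "description", "expected_result"}


def validate_doc(doc: dict):
    """Return a sorted list of the required fields the document is missing."""
    return sorted(REQUIRED_FIELDS - doc.keys())


doc = {"id": "saml2-web-sso-01", "title": "AuthnRequest is signed",
       "profile": "SAML2 Web SSO", "description": "...",
       "expected_result": "Response accepted"}
assert validate_doc(doc) == []          # complete document passes
assert validate_doc({"id": "x"}) == ["description", "expected_result", "profile", "title"]
```

A check like this could run in CI on every pull request, so a JavaScript front end pulling the documents via the GitHub API could rely on every document having the same shape.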

We are planning to do some JavaScript coding to make the keys that will be used.

Do you think it's versatile to have it in GitHub?

I was thinking of doing some kind of GitHub database, with a JavaScript framework pulling data via the GitHub API. You are losing the capabilities of Git repos; it is not much more than a web application.

I think starting with the document in GitHub is good, and if it becomes too complicated, it can still be moved to something else.

It's not that big so it's impractical to make/do?

It won’t get too big.

The purpose of the session:

To be more specific how are you going to use the test tool? What would you like to achieve with the test tool?

If I am choosing a product I would like to run tests against?

For the purpose of this session, what I would really like is to have you contribute to this, and to have some kind of mailing list to help us improve it.


Jim: I would like to run the tests against the product that I am evaluating.

Nick: Customer testing the capability of the service, e.g. an ADFS implementation. When it is passing these 6 tasks it's okay, but until then it would be put on hold. I would pick some test cases that are absolutely crucial.

At the federation level I'd like to use this for a customer or another party testing the capability of their peer.

Jim: Internet2 has a reputation for providing quality products, and that's what the members are expecting.

Peter: I think it's better to have interfaces where the handling can be done, because there are options which are very capable; on the other hand, everyone who does technical work can make some workflow system. You don't want to make too many of them - just one if possible.

Nick: A simple way to do that is to have configuration time parameters available because a lot of those workflow systems can accept email.

Rainer: An e-mail based system would be the only option. It was never my inner picture to have a lot of people doing the testing; we always saw one person executing the test. When you want to change the configuration to make another round, it would be nice if it was automatic. We talked about this tool where everything is automatic if you got it right. That's a really good point where some lightweight workflow might be handy, because it would easily update my metadata to the federation.

Jim: Seems to me a workflow system is different than tests and workflow tests exist.

Rainer: Yes I would say kick off the next workflow step whatever it is.

Peter: If we have it in a productive federation, it would be really interesting to use it productively when you want to make tests - but before something like that can be done, it has to be included in the productive federations.

Roland: In a production environment it's very crucial to do these tests continuously. You can do all the tests you want; they will eventually break something because of changing configurations.

Peter: We need different environments, one for entering the federation and one who is already in the federation.

I was thinking of something like the logging service: in the legacy protocol we issue ticket numbers to log in where the ticket is issued, and the same mechanism can be applied. A test account for one-time usage with a short timeout, so you could pass all of your tests, including attribute release.

Nick: That brings up a really interesting point: people are starting to employ stronger authentication systems. A lot of that functional stuff has to be tested before it goes into production.

Roland: I also like the idea that the user might find something that is important, that the user can run the test, and that the IDP would send a report back.

Roland: It will also avoid arguments between the IDP and SP, because we would know exactly who is wrong and not leave them space to argue with each other. The worst thing is when someone sees the logo and there is no relying party to accept it. We have that experience where the service is provided when somewhere made the SP into the federation. We have this discussion between the SP and the IDP, yes.

Peter: There is also a lot of work here included to find out where is the problem itself, and if there is a tool that would be amazing.

Nick: One thing that's interesting about the OpenID Foundation is that it requires people to pass their tests. There is no SAML foundation where you need to pass a test; there is no similar thing for SAML.

Rainer: There is Kantara, and the Liberty Alliance had vendors with quite basic tests, and even they had problems endorsing these tests.

Basically the idea is that he is working on making tests, and there is an interest in doing those tests, but there must be a different business model, because this vendor rubber-stamping isn't working anymore. So we strongly said: we need to do these tests open source, and they must be freely available.

For the use cases where somebody needs a certification like the US government, then someone will need a tender and you can get a certification.

Nick: Shal really wanted us to invent this inside Internet2, and I said Roland is doing this, so it is certifiable to say this vendor has a commitment to interoperability. Hey, that means you should go to this guy.

Rainer: I think if we don’t cooperate in this field we won’t achieve anything. If it is not complete, it is useless.

Peter: I think the IDP testing is a very difficult topic.

Nick: Ad-hoc tests like this are fantastic. There is a specific pattern in the password or the user ID which every library SP can rule out as having any kind of access, such as echo assertions and things like that.

One example is where you would want to test access control with the libraries, and you have a proxy that is supposed to grant you access to a journal.

Rainer: It would certainly be better. One of the tests that we made sends the attributes as XML beyond the SP to the application, and the application can block specific user IDs from doing anything. It would be good if it were a bit more general and not so specific.

Session 30 <Why I Hate PGP (and better alternatives)> (13:30/Room K6)

Convener: Aestetix

Abstract: In a post-Snowden society, protecting your private company and personal information is more important than ever. But rather than blindly jumping into encryption, we'll take a look at how (and why) tools like PGP/gpg were created, their purpose, and what their purpose is NOT. We'll also address some of the issues that come up with the so-called Web of Trust.

Tags: Crypto, Trust


Self-presentation: why encryption is an issue / anti-surveillance policies in the US / iddsc / Snowden as a catalyst / crypto parties & the crypto wars of the 90s

History of cryptography

  • Modern cryptography & classic cryptography -- symmetric cipher (one key to encrypt the message; problem: how does the key get to the receiver?)
  • 1976: the public-key paper; algorithm: one key to encrypt, one key to decrypt (private & public key) - the issue with this: I have a private key, the other one should send me something encrypted, so I go to the key server, get the key, and can get the message / problem: too many in the middle.
  • Example: Micah Lee: he didn't trust the key stores for this exact reason - we don't know where the key comes from - so instead he sent an e-mail to the receiver to confirm that the key doesn't belong to a (lawyer?)
  • Crypto party, an example: you want to generate a key - what I said earlier about the real-name policy of Google, Facebook etc., why do I have a problem with this? / GPG page and key-signing guidelines?
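The one-key-to-encrypt, one-key-to-decrypt idea from the 1976 public-key breakthrough can be illustrated with textbook RSA on toy numbers. Purely a teaching sketch: never use parameters this small, or unpadded RSA at all, in practice.

```python
# Toy textbook RSA, only to show the asymmetric-key idea.
p, q = 61, 53
n = p * q                      # public modulus
phi = (p - 1) * (q - 1)
e = 17                         # public exponent (part of the public key)
d = pow(e, -1, phi)            # private exponent (kept secret)

message = 42
ciphertext = pow(message, e, n)       # anyone can encrypt with (e, n)
assert pow(ciphertext, d, n) == 42    # only the private-key holder decrypts
```

This is exactly what removes the symmetric-cipher key-distribution problem mentioned above: the encryption key (e, n) can be published, while only d must stay secret.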

What is the problem?

  • They could be a fake
  • You're forced to trust in the government - it creates an illusion that the trust that is issued by the government would be more valid than anything else;
  • This idea that there's a key I want to show is trustworthy, so I sign it and assign levels from 1 to 3 or 4 / what does it mean? - Absolutely nothing.
  • What do trust levels mean? (PGP trust levels)
  • What are you verifying? - On a governmental document?
  • I couldn't find any issues on governmental (websites?)

Definition: what does it mean to trust a key? What does trust mean?

  • Direct trust - individual
  • Hierarchical trust -
  • Cumulative trust - different ways to verify or someone you already know/who already works for you and you are pretty sure it's them.

RFC - (looking up RFC 4880 "OpenPGP Message Format") - signature types

Loose definition, probably left open by standard writers intentionally

Search results on the MIT tool for a key ("0xd255...")

  • You get a list of all the keys that have trusted this key
  • This creates information (?)
  • Public key store means that it is public, so anyone can use it

I created a trust tool:

Example: "pgpring -S -k keystore" output

- Possible to have multiple identities with the sub field

OpenPGP Message Format principle -- I made it easier and converted it to a text file; it matches up all the elements, whether it's a public key or something else. It is defragmented for the user.

What email providers have "secure" users?

  • Gmail - 334,333
  • Hotmail

What news organisations have "secure" users?

  • Wall Street – 18
  • New York Times – 159
  • Fox News - 3

What "intel" agencies have "secure" users?

  • - 54
  • - 39
  • .mil 7,908
  • - 28
  • - 0

How do universities use PGP?

Frequencies: Seem to be rather trial than actual use.

Who has signed the most keys? If you are a new user and use a key by default, it stores the private key and compromises your security.

Participant: So they have a copy of a private key?

It's perline party, targeted, binary / I understand why you are upset with them, it's a struggle, they have a noble mission to make it easier.

I agree, and it's not only me: having your private keys stored anywhere else is compromising your security. E.g. with PGP encryption there's a principle of mathematics - a known weakness is when you have 2 public keys that share a prime factor.

An interesting talk about key factoring was referenced in the talk (title?)
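The shared-prime weakness mentioned above is easy to demonstrate: if two RSA moduli happen to share a prime factor (as mass key-scanning studies have found in the wild), a single gcd computation factors both. Toy numbers for illustration:

```python
import math

# Two "different" RSA moduli that accidentally share the prime 1009
# (tiny illustrative numbers; real moduli are hundreds of digits).
shared_p = 1009
n1 = shared_p * 2003
n2 = shared_p * 3001

g = math.gcd(n1, n2)
assert g == shared_p            # the shared prime falls out immediately
assert n1 // g == 2003          # ...and both moduli are now fully factored
assert n2 // g == 3001
```

No factoring algorithm is needed at all: gcd is fast even on very large numbers, which is why two keys sharing a prime is a catastrophic failure rather than a theoretical one.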

Participant: What I've never understood is having a store of keys

Answer: the trust store, the key store, is completely useless. My tool is not online right now.

   (Explanation of the key)

It also means you can do a neighbourhood kind of thing with the metadata, and find interesting connections with that.

Participant: I disagree, those are 2 different kinds of trust paradigms. One is public, you can change it. Trusting keys is establishing some initial relationship.

Answer: PGP issue: if you show up and have a trusted key -- the data is still there; the internet never forgets.

Participant: But that is impossible to solve.

Answer: PGP is a fantastic tool for encrypting, but bad for privacy and anonymity.

Participant: Based on names, it is completely unreliable.

Participant: What's frustrating is that the government requires this from us for getting grants from them, but many researchers are from other countries, and many other countries have different requirements for names. One thing that makes trust hard on the internet lies in us being human beings: we're organic stuff, we meet and see each other, and you can't do that online.

Answer: I don't agree; when we're chatting, we are establishing and have established relationships. Example: how an Anonymous member changed in the chat room and how others in the chat room realized his change in behaviour.

Participant: And in the trust-PGP-context it doesn't mean to trust a person, it means trusting a key!

Participant: If I enter "Edward Snowden has this key" (...)

Participant: What do you think about your knowledge in public key store, people actually communicating with each other there?

Answer: In the key store you can

1. Connect to each other, sign the key randomly

2. The time stamp for when a key was signed is a difficult issue (state now vs. state 10 years ago)

And PGP was created in the early 90s..

Participant: the data we get to another zone is very small

Answer: The issue is not so much signing keys, but posting them publicly.

Participant: I think that one of the biggest trust contributions PGP made was that, for the first time, reliable crypto reached the mass market.

Session 29 <DP Code of Conduct: Q&A with Art 29> (13:30/Room K5)

Convener: Marcus Hild

Abstract: Geant Code of Conduct Q&A, with the questions being about how pseudonymity helps or does not help institutions to release data to services or enable access to services. Consensus in the group seems to have been that while pseudonymity improves data processing practices, it will often not be sufficient as a legal ground for processing or transferring data.

Tags: Privacy


Valter: Geant is going to take it to the Article 29 Working Party, which is the body in Europe that has the possibility to encourage and endorse codes of conduct. The endorsement has no legal value, but that something has been endorsed means that the local data protection officers will recognize its value. We will submit the second version, which will be more detailed.

The time scale is somewhere before the summer of next year, because we also agree that we want to make the changes that take the guidelines and make them prescriptive, and with that we will change the text.

The process is still the same, to go from a university organisation to being used in different countries.

Niels: Will the new and the old version be versioned aka recognizable?

Valter: I don’t know, this was a decision that we took 15 minutes ago.

Niels: Is it needed?

Valter: Since it’s going to be more pages, it needs to be clear. The most important thing is that the principles are going to remain the same as in version 1. Version 2 is just going to go into a lot more detail about how to get to those principles.

Peter: It’s a given that there will be no requirements.

Niels: Data ownership. The current code of conduct says nothing about it; it’s only about passing data from the IDP to the SP. The biggest difference is the data ownership.

Marcus: There is a very widespread misunderstanding of data ownership: it exists, but only for the data subject. Anyone processing the data is never an owner; he needs to process the data securely, but he is not the owner. And if he is transmitting data from one institution to another, that means he gives that data to a new controller; he needs to make sure he is giving it to a trustworthy one, but the link is still to the original user.

Niels: Aren’t you talking about the attributes, I am talking about the data created by that user, the research papers.

David: But is that actually outside of the scope? The university has ownership of some texts, as it funded them.

Niels: This is the vehicle through which the SP expresses a number of things to the institutional IDP, and it is the whole package that needs to be evaluated by the IDP; for them only a bit of the package is important.

David: They are the resource providers that claim to take ownership of the data, so I think you also need a technical element to express that; you have to assert that next to the CoCo.

Marcus: That’s copyright basically.

Peter: You will see that it’s in the scope of this; the main thing is access control. If you say it’s access control to all those systems, the data transferred is part of the other stuff; it’s not part of the transmission.

David: There are systems that do user-managed attributes, where the user sends the attributes and nobody can tell him not to. The IDP doesn’t usually release the attributes because the data is owned by the person. Does the IDP now have the right to release the data if the owner instructs him to release it?

Marcus: Everything stays in the control of the data subject. There is no ownership, and there is also no right to get the data from a third party. The law will give you a copy of the data the subject has. You can only force him for other reasons; that is not data protection but, e.g., the agreement you have with him: if you want him to release the data and he doesn’t, and he is bound by a contract, he can get a penalty for it. That might be a breach of contract, but not of the law; maybe the contract does not regulate that he has to do it. DP is always about this: to let the user not give out any files, to keep them for himself.

Niels: That is perceived as one of the scenarios: to prevent the data being sent across the border and to have the government collect data; and if that doesn’t work or help, you can just send everything.

Marcus: There are different tools for it.

Matthew: We need something that is unchanging, as we have these requirements, and to have the IDP release anything, because we need that. If someone moves from one institution to another, we need to track that. It’s easier if they are in Irns.

Peter: The member states have different interpretations; some take the extremist view on PII: it is personal data if someone can make the connection to the person, even if I don’t know what I am releasing. For example, in Austria the law says it counts if a person can legally make a connection to someone. So there is no consistency here; there is the danger that even if you only release something that’s opaque to the recipient, you will be made liable because you still supply a persistent identifier that will help him identify the subject.

Niels: We are back to my question about ownership. There is a service called ORCID; what it does is, it is a LinkedIn for researchers. You log in using your credentials and you type in your whole life; instantly, as you do that, you have a lot of identifiers.

Peter: One of the legal grounds to release data is if it is published already; secrecy can’t be claimed on it. We step in and say: hey, the IP address is unique; we tell the recipients, don’t use the email address as an identifier, and we give them an opaque pseudonym. If we only gave them the pseudonym it would suffice, and if we only sent them the public information it would be okay, but we want both, and that spoils the whole thing, as we give them the pseudonym.

Marcus: You need to keep the whole picture. That might be an argument that the pseudonymization is not a good helping tool. In most cases it brings you some improvement and sometimes it doesn’t. If you have the full IP and the address you can also use the name as an identifier.

Niels: If you only send a pseudonymous identifier as information, that could put the IDP in trouble who would stop trusting you completely.

Peter: We think we are making an improvement, and on the technical level it is obvious: if we don’t give them the email but something pseudonymous, we are giving them less. It is an improvement, but you have to rely on other grounds to make it legal.
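The kind of opaque, per-service pseudonym discussed here can be sketched as a keyed hash over the user identifier and the service's identifier. This is only an illustrative sketch (the function name, parameters, and the idea of keying with an IDP-held secret are assumptions, not a federation specification): each SP sees a stable identifier, but different SPs cannot correlate users, and the email never leaves the IDP.

```python
# Sketch: deriving an opaque, pairwise pseudonymous identifier at the IDP.
# All names and parameters here are illustrative assumptions.
import hashlib
import hmac

def pairwise_pseudonym(user_id: str, sp_entity_id: str, idp_secret: bytes) -> str:
    """Derive an identifier that is stable for one (user, SP) pair,
    opaque to the SP, and different for every SP."""
    msg = f"{user_id}|{sp_entity_id}".encode()
    return hmac.new(idp_secret, msg, hashlib.sha256).hexdigest()
```

Because the HMAC key stays at the IDP, the SP cannot reverse the pseudonym, and two SPs comparing notes see unrelated strings for the same user.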

Niels: We have many of these, actually, but in our community we want to establish collaboration, and that requires people to log in to other places, not with their entire bio attached but with something at least, so that they can recognize each other. You have, for example, CERN, and people from the UK log in to help.

Peter: I think the answer to the general question is that you get bonus points in other parts (…)

The Data Controller sends the data to the SP, the SP sends this back asking whose it is, and the IDP asks for the personal information back. The pseudonymization doesn’t hold legally. It’s a bonus point, as you said, but it does not free you from the general obligations.

Marcus: In Austria people are soft on pseudonymized data, but in most European countries that is not the case. For example Amazon: they require personal information, like your address, in order to deliver the package.

Peter: This is a good technology for new systems, but for old ones it might be pretty bad. What is the likelihood of attribute release in Europe?

Niels: I think it turned into zero, as it won’t happen unless a contract is signed, and that is never going to scale, as there are thousands and thousands of facilities.

Session 28 <Privacy and Business> (13:30/Room K3)

Convener: Raimund Wilhelmer

Abstract: Discussion about business cases for privacy

Tags: Privacy, Business Cases


We would like to have your ideas/input for this project

  1. Touching the privacy project and bringing it to the business level
  2. Defining privacy

Looking for a job and not finding it: there is a need for support in defining their own needs (both candidates and companies looking for employees)

In a job seeking centre you have to define yourself, in the higher education area it is different.

It is a project about 2 projects, which came together in a discussion; profit for both from a business perspective.

There might be some critical issues:

  • Something that the user does not identify with, and it's a misleading direction

I (Raimund Wilhelmer) represent a company from Germany in the discussion

Frank (psychologist) is trained to find people jobs that will suit them for a long time - the right jobs

In a team there is supposed to be a gap that should be defined.

  • Personal skills, characteristics that could fill it
  • If you are the right person for the job, is the job right for you?
  • The better the evaluation of the data is, the better the outcome
  • The relevant information is the job descriptions on one part and … on the other part
  • It’s all about statistics
  • Capabilities in the beginning are the reason for the outcome (getting the job, losing it etc.)
  • Predicting variables try to describe predicting factors
  • Trying to automatize a profession
  • Pre-screening procedures
  • Generated footprints (that we leave online): in 2 minutes of looking at a Facebook profile it is possible to get an approximate picture of the personality
  • You screen out people based on profiles

This brings rise to all kind of biases.

Decisions based on superficial information.

A PhD physician looking for a job - after applying, all the answers were: no need - no job

Then he broke it down to what he was doing before; the outcome: he got invited because of his experience in statistics. Working with job seeking centres (example of one for people with no higher education)

The interface therefore is very different; it could be a device or a job seeking agent: they describe the profession they have, CV etc. and upload it into the system

The idea there is, providing such a system.

Centres for the lower educated have a problem - a lot of them need an IT lesson, but often it’s useless because they do not need it - big amounts of money invested that could be used in a better way

Future bosses have to talk to them, communication is needed.

The job seekers (low educated) are looking for a ‘profession’ - what could I do? The idea is to match these 2 systems: sensitive data with profile data, and there is a matching in the classification. There is a need for them (the candidates) even though they do not realize it.

Both sides could have needs of privacy.

Aud1: protection from third parties

  • People who subscribed for the job
  • The company who wrote the job description, intended to find a group of people

The only reason why it came out that Google is building a Google car is because they were looking for people that can build cars. (Apple have found a third party that would find the people for them to do the same)

Legal issue: You provide sensitive information and a third party is using this private data. Matching jobs and job seekers is essential, but the restrictions are problematic. It depends on attributes you cannot really influence.

CV is a tool to present yourself.

What kind of meta-information we are giving to others is something the job seeker does not know. The problem is that, after analysis of the information, the person has given out more than he/she probably wanted to share. Matching job and seeker without applying.

What kind of regulation would apply?

Aud2: People identities are being stolen all the time

Aud3: That is the point where you have to be able to assure both parties.

3 perspectives:

  1. Company looks for a candidate
  2. The candidate looking for a job
  3. Matching it automatically
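Perspective 3 above (automatic matching) can be sketched as a simple skill-overlap score. This is purely illustrative: the project's actual classification model is not described in the session, and the function name and the Jaccard-index choice are assumptions.

```python
# Illustrative sketch of automatic candidate/job matching: score by
# overlap of skill sets (Jaccard index). Real systems use richer
# psychological and statistical models; this only shows the shape.
def match_score(candidate_skills: set[str], job_requirements: set[str]) -> float:
    """Return a 0..1 score: 1.0 means identical skill sets."""
    if not candidate_skills or not job_requirements:
        return 0.0
    overlap = candidate_skills & job_requirements
    union = candidate_skills | job_requirements
    return len(overlap) / len(union)
```

For example, a candidate with {statistics, python} scores 1/3 against a job asking for {statistics, ml}, which is exactly the kind of coarse signal that, used alone, produces the biased screening-out the discussion warns about.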

: If you create new information from the “public” information, it is prohibited.

A: It is not reliable. The big problem is that in the whole area of psychological research there is a model, but it’s always a certain ‘guess’. There is no proper evaluation in the process. Of 200 people who apply for a job, 90% get screened out on a basis that might be wrong, and the company will choose somebody from the remaining 10%, even though there might have been a person who would have been better for this position but was screened out at the beginning. This is what we would like to avoid.


Open for follow-up ideas. We are looking for people who work in this area and would like to cooperate.

Session 27 < Decentralized authoritative business service discovery> (13:30/Room K2)

Convener: Henrik Biering

Abstract: Decentralized discovery and resolution has been a key enabler for the cost effective technical infrastructure of the internet. For business services, however, mutual discovery has been relying on a small number of third party platforms.
Can the official national company registries serve as the basis for a decentralized authoritative registration of business services and reputation of companies?

Tags: Registries


Old situation:

Traditional local commerce: There is a number of people who know each other; each craftsman has a reputation (e.g., for the cheapest or best work), negotiation is possible.

Industrial age:

  • Company - marketing services


  • Sellers need intermediaries
  • Price gets high

The Internet enables global, direct online commerce and connection. Buyers and sellers have systems to discover, evaluate, negotiate, and transact with each other.

The problem is discovering the consumers. They have privacy regulations and laws that protect them. Very expensive. If we turn it around: companies, with their services, price lists etc., want to be discovered. --> How to do this in a systematic way?

Data driven business discovery

Domain names resolve to IP addresses: why not use exactly the same protocols?

The EU root (EBR?) has a consistent registry: each country has one or more business registers, e.g. Sweden (Bolagsverket), Denmark, the UK.

Discovery is a function where you enter a number and find a company.

Idea: domain name --> point at name server.

Six countries in Europe already have an open data policy --> in these countries you can download the whole register, without missing any numbers, and download the service records.
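The DNS analogy above can be sketched as a two-step lookup: an EU root delegates to a national register, which resolves the company number to a service record. Everything here is made up for illustration (hostnames, company numbers, record shape); it only shows the delegation pattern, not any real registry API.

```python
# Toy sketch of DNS-style business discovery: an EU "root" delegates
# to per-country registries, which map company numbers to service
# records. All data and names here are illustrative assumptions.
EU_ROOT = {"DK": "registry.dk.example", "SE": "registry.se.example"}

DK_REGISTRY = {
    "12345678": {"name": "Example ApS",
                 "services": ["https://example.dk/pricelist"]},
}

def resolve(country: str, company_no: str):
    """Return the company's service record, or None if unknown."""
    registry_host = EU_ROOT.get(country)      # step 1: find the national registry
    if registry_host is None:
        return None
    # step 2: query that registry (here, a local dict stands in for the network)
    if country == "DK":
        return DK_REGISTRY.get(company_no)
    return None
```

The point of the pattern is the same as DNS: no single party holds the whole database, so no one can build a monopoly on discovery.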

Certificate from another organisation: your website is good for purposes XY.

Advantage: Totally distributed. Nobody can have a monopoly like Google has at the moment.

Implication: Company can make a cross-reference - can put a reference back

Audience1 (from the Netherlands):

Promoting the digital single market. One of the steps you have to take: finding companies. The idea of wanting to do more in this regard is good. Lots of people want to use tools they are familiar with; that is why they use Google.

An empty layer. Many organisations are identified by a number. Solutions for how you can get access to certain attributes. Thinking about a request/response thing. Example: a smart energy company. Has 24/7 access to? They need access to that information once per year. One can generate a specific key to grant access once, for a certain period of time, to fetch the data. The same principle can apply to attributes. You don't want to copy information but have access to it. The company can see who is able to have access. You need that level of security.

Convener: This could be one application. We want them all. Amazon, LinkedIn, Facebook etc. can't fetch it. Smaller companies could fetch it.

Audience1 (NL): What about creating A-servers to align all the traffic that doesn't store info?

Convener: Price comparison services. The company publishes a list.

Audience1 (NL): A lot of these websites don't store info. They just have links to the info they store. Just point to that.

Convener: But does this really matter?

Audience1 (NL): Yes

Convener: This would be generic. If you're for example a taxi driver, you give away some info like the price etc. Anyone in the world can find you. You're only online for people who want to find you. A win-win situation for taxi driver and consumer.

Two-sided market - split it up: more customers -> more competition. The service rates go down.

Data business authority

When banks were invited -> they had other companies that got bank-robbed. Problem: people who want to make apps based on this. It is extremely difficult if you want to do it for a whole business -> it becomes complicated.

Audience1 (NL): In the Netherlands, there's something called 'Kamer van Koophandel': everyone has to register there. They already provide the servers to connect to and browse their registry.

Convener: In Belgium, Ireland, Romania and 3 other countries, it's already possible.

Audience1 (NL): web server, you don’t have to connect with anything

Convener: The idea is that you get to search. If you want to put criteria into the search --> public database. Say I want to find companies with 10 employees who are dealing with shoes. The business code might not be accurate enough. Public information + info about 3rd-party searchers.

Audience1 (NL): What about creating an additional server? Access through web servers. It would be nice to create servers for users and add ratings from Google etc.

Convener: Business server providers will do the task of providing (??)

A 2016 version for Denmark. Many small (start-up) companies are really eager to do SEO, reverse-linking, etc. How can you do demos directly and set things up? It would be helpful if people started using this registry.

Audience1 (NL): In the NL you already have these servers. There's also an online community that is used in 38 countries. They can add coupons, specific attributes etc. to get better search engine results. It's an Australian-based company called 'odd frog' (?).

Audience2: The principle itself is great but there's already something similar available. In the EU there's a register called "LIFE" (?): want to have same kind of services.

BRIS (B R I S): Government people can have easier access. You can enter the ID in one place.

Audience1 (NL): For discovery, you need some type of registry. A lot of these services probably already exist. I don't know whether they are well-thought through, do they implement social media etc.?

To make a unique selling point for customers: you need to give them a reason to visit the pages. Good: if your company has a good rating on Google. A price comparison service should go further than just comparing prices. Who gets special offers? E.g., a restaurant: exclusively people who live near it should get offers.

Reputations for different things. This is the level I want to get at. E.g. a sushi restaurant wants to reach a special customer with special traits.

Audience1 (NL): Concerning insurances, there's something called 'independor' (?) with which you can compare insurances. Possibility: they have their own connections with the insurance companies. If you could create a hub.

Convener: If you're a vendor, you need to be available on eBay as well as Amazon and pay for each of these services. The reason you use Amazon, eBay etc. is because you don't have private services.

'I have a user who wants to access your website'. Why do we do it stupidly on the business layer? Why not on the technical layer? The security has to be high, of course. Banks have to trust them. This also means that you no longer need a 3rd-party trust (very indirect) to document who you are. The second advantage is that there is no middle-man in the security chain.

Audience1 (NL): Let's assume a scenario. I'm a customer, I want new brown leather shoes. How do I do it? - I visit two or three web shops. How would your proposal make it any different?

Privacy. You go to a shoe shop. You need to agree to the shop's policies. Only when buying do you register. If you're not buying, you don't need to register.

100 000 places sell shoes. Only 10 000 comply with your personal privacy policies. Are you going to trade off your privacy for 10% cheaper shoes? Lots of criteria: what has to be known? E.g., the location has to be in the record. What else is in the search engine?

Convener: Let's assume I'm visually impaired and there are websites certified by the organisation of the visually impaired. There'd be a better chance discovering companies that are relevant for me. Companies are spending too much money on Google, it's too much of a 'beauty contest' for companies.

There are lots of attributes in the structure, but maybe only 5 apply to my standards. Is this service trustworthy for me, e.g. in health care: calculating trust? Does it make sense to store it directly at the website? If you address the site with a service, you first check their trust policy --> not interesting for me.

Session 26 <MAPPING identity, Internet governance and lawful privacy> (13:30/Room K1)

Convener: Thomas Warwaris

Abstract: Private and commercial Internet users are increasingly aware of their communication data being an immaterial public good. Recent developments show this also leads to rising distrust. One of the key issues in regaining this trust could be a change in identity management, so that internet governance regains control and its ability to create rules and safeguards. Can we outline such a system?

Tags: Privacy, Internet Governance


2 steps:

  1. What I see as a possible perspective of future policies.
  2. Discussion on the relation to identity management

Angela Merkel:
"Personal data is the gold of the future". A sentence that makes sense, looking at the hopes in that market.

Combining that with mining law: personal data is not owned by the person and shouldn't be controlled by the person; it is a resource for the public. A pretty normal situation in mining law.

Side note: Special mining contracts: we already have something similar: "security domains" like healthcare and public transport.

Another aspect: Anonymity:

Commissioner Oettinger: "whoever is transporting data has to take responsibility for it"

This will lead into more pressure on ISPs to be able to de-anonymize access.

One issue is the misuse of "anonymous services", but it is nearly impossible to get rid of these kinds of services without a global contract. The only possible move forward is to go after the other end.

What will happen is that we will see pressure for de-anonymisation. What I've seen now is that identity providing could be one of the tools of the trade for regulating access to services. The landscape now is based on IDPs slapped onto the side of existing entities which were already managing that data and were turned into IDPs. But being an IDP does not seem to be a sustainable business model.

Question to the participants:

How could IDPs fit into that scenario, and how could they provide Internet users with pseudonymity? Anonymity? Should we begin thinking differently about how an IDP should look?

Tom: Consider some possible future scenario. We have an IDP. By default it provides a subject identifier. Should the University of Chicago continue to supply it? Applications that we offer, people from all over are able to access, but on-boarding students bring a different identity to the services. The practice we've built throughout the years comes back.

There cannot really be an answer; it depends on what you can actually do. The banking sector is starting to become interested in IDPs. The IDPs know so much. The banks want to collect info about potential customers. One possibility would be to have a different IDP which serves anonymously, like a proxy IDP which is independent from the bank. Data protection rules should be implemented.

Identity value, preventional management. It's possible to do only one of those now. To offer IDP servers as a service? Which model would be true? Realistic - more important larger commercial services moving into those market services.

Do the identity provider and the Service Provider both see which services are accessed?

National research institutions build an identity federation on top. What kind of Chinese wall do they have inside?

  • The network provider has the identity and the identity provider as well.
  • Proxy models that the IDP won't know which servers are used
  • The security incidents.

Internet access. Not bank access, nor government, just simple commenting on a webpage. Can people choose their ISP? Fewer restrictions.

The banks are no longer the favourite trusted IDP. Why did it not evolve? It doesn't have that much to do with trust. If you want to go to the IRS ... a couple of years ago they started working on a profile ... added to the documents. Every time you are authenticated they need to charge you. The banks locked us out of doing that.

In Finland the banks wanted to have strong verification, but the government opposed it. In the law it says that you use strong authentication to create another? A new government method. Internet providers are on top; they have 70% of the world’s population.

Session 25 <PbD (Privacy by Design)> (11:30/Room K6)

Convener: Berit Skjernaa

Abstract: How do we as community facilitate the adoption of PbD for SMEs?

Tags: Privacy by Design


How can we as a community facilitate the adoption of principles in small medium enterprises?

We are making a survey for an agency on the use of privacy by design - how can they facilitate the adoption of PbD? How can we as a society do that?

There's a gap between the university sector and the private sector. So we do consultancy, and we see that there is willingness to protect private data, but also a lack of knowledge of how to do that. E.g., passwords are often implemented wrong.

For security, the way forward is to turn the willingness into the capability to actually protect the data.

In small organisations it is easier to find responsible persons; it is easier for a small organisation to say what it would like to do (e.g. Daimler Benz - what is the ethical data policy there? It is much harder to say because it is a bigger organisation).

You can’t have a single person responsible all the time. We don’t host our website ourselves, we outsourced it, so we give away control in that matter.

Aud 1: The main problem in companies is the lack of willingness to protect data. Data minimization on the Internet doesn't work because we want technical support in our lives in so many ways, so data minimization is a good idea, but it doesn't work in our modern world. My approach is therefore that we need more anonymization.

Berit: How can they (companies) use data to get value?

We collect data because it became cheaper and cheaper, and companies can make money from it. So why do companies want to preserve privacy?

  1. users require some privacy standards
  2. law requires privacy standards

: We want to do the right thing all around, so users trust us because we tend to do the right thing.

Aud B: A lot of electronic payment is accompanied by background checks, so they collect a lot of data to make sure that it is really me who is using the card - and it works very well. It is not only a governmental problem if you want to protect against hackers. For sure, companies don't want to be known to misuse the trust of their customers.

Berit: There are technologies to protect privacy, but what can we do to make them more available?

Aud C: Are there incentives for companies to do something about privacy?

Aud D: The regular way is to treat data properly because otherwise you can be sued; there are also cultural and ethical requirements.

The culture aspect: to build a culture of health and safety, also a culture of security awareness - right, there are technologies available to protect data (encryption), and part of the culture aspect is: people are more likely to ask for security possibilities.

Also: e.g. Avanote is user-friendly, but we don't know what it does with the data (encryption? cloud?) - so is it a good thing to use? If I'm using a packaged service, it can be hard to encrypt stuff.

Aud E: Also, how do we design a system that uses privacy by design? How do people in companies know how to use this new technology, when even the engineers don't know how to implement it in their systems? Is that true?

Aud D: There are different systems, also process designs, some need PbD.

Walter: It is risky to touch running systems and change them. So let us go 2 steps back - incentives are a good point, but it's not only about them. By default, not even intentionally, we collect data; if we do nothing, the data gets stored. But is it allowed to store the data? Deleting is also a conscious decision. And there is also a responsibility issue - who decides to delete what? - and an awareness problem.

Aud F: The problem is: the end user cannot know how good the service is - you don't know which service protects your privacy. It would be interesting for end users - and possibly help - to establish some label/standard: this service complies with security standards (it could be easier to see which technologies qualify).

Berit: e.g. the e-trade-deal in Denmark - could that be a way forward?

Walter: The EuroPriSe seal, a certification that you comply with data protection law (but it's not the same thing). Car industry: seals - a car can only be sold if it has the certification. On an international level it will be difficult, but in other areas we already have it.

Trust marks are very useful and are part of the process of educating users. It comes to the point where the user has to make a choice (e.g. supermarket food). A trust mark would work the same way the traffic light system works: once you know what a traffic light is (you have to be able to understand the information), you can get to the next level of information (e.g. knowing that 13% of something is high, you can decide to buy something with only 5%) -- Informing the user makes informed choice more likely! Example: in the food sector, labelling organic food worked well.

Walter: It has to be easy to understand. We should try to find something similar for the security sector as well, something that can be understood easily.

Another factor with labels: the first thing easy labelling does is raise the question whether something is privacy-friendly or not, so starting to label can bring consumers to think about those issues in the first place.

Aud D: Another example of this is the fair trade logo. It started to appear before people knew what it was about, but once it appeared consumers started to look it up, and it changed their view on products. But: follow-up information is very important in the security sector!

Aud: When it comes to privacy you would have to change a lot of things in the system: e.g. it is mostly not profitable to increase security - not always, but in a lot of cases.

Aud G: Yes, profit is one driver, but so are the costs of investing in security. Investment is a difficult decision for small companies, but the bigger companies are the important ones (banks, insurance companies, web 2.0 companies, governments) - the large players are important.

Berit: I am not sure I agree with that. E.g., a company in Denmark is collecting data from mobile phones to do investigations of traffic - which routes do people take? Based on this data the traffic routes are made more efficient. It's a small company and it collects a lot of data; they have a lot of sensitive data, and this company protects it very well.

Aud G: Okay, yes, when a company is connected to the internet / collecting data from the internet, of course they have to meet special requirements to protect this data.

Walter: To conclude: in answering the question we have to distinguish data-driven businesses (data is their asset) from 'normal companies' who just process their customers’ data to run the service. So regulation, legislation and enforcement are very important (for data-driven businesses especially). The law is already there but can be easily ignored.

Berit: Even small companies often use the data for ads etc., doesn't that account for them as well?

Walter: Theoretically yes, but when I open the internet in the first seconds already something can happen (with my data)

Aud: So better prepared data protection could be a driver for web industry.

Aud: I think what people want is: they don't want to become experts, but they want some system that lets them understand/decide what a safe technology that protects their privacy is.

Aud: For businesses privacy costs something, we don't know how much, so it also needs higher profits to make it interesting for them - people will need to pay something for privacy (in the future).

But are you willing to see one more commercial before entering your email account?

Aud: it also costs money to keep data safe.

Aud: But now already some services emerged, and people might be prepared to pay a small amount for privacy services.

Session 24 <Strategies for mapping trust frameworks + Incentives for Harmonisation> (11:30/Room K3)

Conveners: Brook Schofield, Joni Brennan

Tags: Federation Policy, Trust Frameworks


Brook, David G, Scott, Joni, PGP Guy, Peter, Lalla, Nick, Ruth, Frank, Christos, STFS guy, Anders, Daniella, Roland, xxx, OCLC guy, Patrick, Hannah, Maarten, Richard, Alicja.



Brook: What incentives (positive or negative) are available to encourage the IDP/SP admins and their machinery to ensure harmonisation?

Joni: How do we move from "country first" to "interfederation first" harmonisation practices?

Main issues discussed

eIDAS only focuses on inter-country collaboration and does not concern itself with the activities within a country. There are policy decisions where possible, and in the technical space there are gateways between the countries.

Is it possible that the only incentive is €$£?

"We" don't have authority over most of these decisions. So how do we make sure our agenda is covered?

1. Sneaky partnerships are a great way of getting our agenda presented by those partners that have a seat at the table for higher-level discussions.

2. Utilising the services of these partners as a sign of faith in their participation.

Scott works on the research side - not necessarily specific to a single country - reputation is an important mark, and "groups" are willing to shop around for a federation that meets the reputation level that they want. There might not be a need to shop around in future - but if there is, it is an option on the table.

Commercial - Money

R&E - Reputation

Govt - Cyber Security

Joni's notes:

Incentivize federations (eduGAIN) to comply with best practices

Mapping of national / market trust frameworks

One preliminary report finding was that eIDAS and US ICAM were roughly 90% identical.

NSTIC (good not so good)

Yubikey is a good example of a "sneaky for good" collaboration that works to create gravity

Motivators vary based on some context >>>

Private Sector > Money

Academia > Reputation

Governments > Security and GDP

Perhaps >> Killer App >> Access to resources >>> the cool factor

Example --- retirement portfolio, benefits etc.

Evangelists for the communications of benefits and risks

Austria – discount for students to hardware

Students forced their universities to join that program to get the discount, which forced interoperability

ORCID >> SAML single-sign-on for universities

Incentivize killer-app developers to use standard protocols (SAML etc.)


Identity portability across clouds is critical, but there is no interoperability right now

Standardized adoption tools should be more readily available

Giant cloud IDPs will drive the standardization

  • Development of unique frameworks (they want to build their own tools)
  • A world where we could map frameworks onto each other
  • What could make them valuable for the admin?
  • Why is the federation needed for scaling this system?
  • How can we incentivise this collaboration first?

The aim is to convince national programs that it is something they should do.

aud1: what IDs is?

Providing a technical mechanism; eIDAS doesn't care about what happens within the country - this is the problem. A mix of policies is applied at the level where it is possible.


In private-sector business, global connectivity is a must


Trust frameworks seem to be monolithic.

Taking a trust framework and breaking apart the strong credentialing from the strong authentication to make it composable and achievable (this pattern applies to other parts of a trust framework as well)

Separating out the jurisdictionally-required parts from the globally-applicable parts so cross-jurisdictional implementation is possible. Making the jurisdictionally-required parts abstract enough so that they can be mapped to similar requirements in other jurisdictions, and documenting those mappings.

Aud 2: the necessity of doing the business is going to drive alive.

The European requirements are much stricter than the US ones.

Aud 4: question about the documents about the mapping

Aud 2: they are on our website - Peter Alterman, Ph.D, Chief Operating Officer, SAFE-BioPharma Association.

There is a line toward where the money is.

Joni: so the incentive is money

Aud 3: we get people working around with stuff - organic solutions are starting to emerge

Aud 4: is it better to have a common framework or individual ones?

Joni: I worry about the fact that we don't have authority over these things. Sneaky partnerships are important! We want to get the right people involved (like John Bradley?) and come back to a larger community to review.

Aud3: Example of sneaky collaboration: talking with Yubico to get stuff like PIV implemented in Yubikeys, other things that we need. Diversity is important, there need to be other vendors we can work with besides Yubico on things like that.

Scott: Reputation goes a long way, risk analysis on reputation. We are afraid of security attacks on our reputation. I think that service providers (especially large providers) have the ability to shop around the federations. If one fed doesn’t give us what we need, we will go to another.

Sum up:

  • Incentivisation comes down to context:
  • Private sectors – money
  • Academia – reputation
  • Government - GDP+money
  • Killer app is a driver
  • Another driver in the US: TIAA/CREF - in higher education. That would be a driver for schools to implement federated strong authentication.

The ‘coolness factor’

“Kind of competition: which country is cooler?”

Brook: an onboarding accomplishment (used at every level of education in Taiwan)

Peter: the digital device doesn’t exist because there are multiple devices, they cannot cooperate with the banking sites and that is a problem

  • Proposition of having thousands of mini killer apps
  • There are killer apps with massive audience
  • Joni: Gaining knowledge from different countries and showing its incentives
  • the problem of cloud - lack of portability (identity against cloud)

We want to see the value in putting in the effort - it's money.

Aud: what about the availability of standard tools?

Many of the snowflakes (their administrators) come down to having an argument for why they are keeping the luxury snowflake instead of the standard one. You need to create a market - nobody is actually taking this step.

Aud3: not having identity in place is a deal breaker - again organic evolution against ‘good things’. You could get a situation where a lot of IDPs don’t have a way to create other snowflakes.


There is no one simple solution.

The issue is that the killer app always works slightly different in each country.

Session 23 <Hub+Spoke Federations> (11:30/Room K2)

Conveners: Niels van Dijk, Arnout Terpstra, Mads Petersen

Abstract: Some topics and challenges in Federated Identity Management are specifically interesting to Hub+Spoke Federations. Last year, a successful separate H+S side meeting was organised that can be repeated again this year, albeit in another form (since there is no room for another side meeting). Instead, come to this session to discuss both technical and non-technical stuff concerning H+S Federations.

Tags: Hub-and-Spoke Federations, Federation Policy


Niels van Dijk: SURFnet (NL): stuff that we're working on

Proposal: questions, discussion

Bullet points:

   1 hub&spoke <-> mesh

   2 hubs + education?

   3 identifying institution

Aud: hub&spoke federations: should keep the acronym. 'hub&smoke, full mesh' (haha)

Aud: beginning to get a sense of what h&s federations are not

Need to do some contract. Process for each federation.

Lucas: if you’re a h&s fed - what about publishers that want to offer services to universities and need attributes? Publishers don’t have contracts with all the organisations. How do h&s federations deal with these features?

Niels: not a feature but an attribute.

Mads: slides: WAYF hybrid architecture, thinking of changing the architecture of WAYF.

'This is a full mesh'...

Scoping element: I’m not going to talk to the hub but the BA? Not a good idea --> you have to have a proxy. -> The 'hub scoping'

It is going the other direction. A lot of the federations are making it easier for other federations to join. They’re making SP proxies. This is how we started with WAYF. We have service providers and identity providers.

Aud: registered at WAYF. When the user has to confirm and has to choose.... they have to choose WAYF to get to Copenhagen

WAYF+eduGAIN today

Why do we only do it on the IDP side, why not on SP side as well?

Service provider hub <> ID providers

Making a tick box in our registration: expose metadata for this SP in the metadata feed. Only one connection to WAYF. This is visible to the rest of the world.

Metadata-wise: a mesh; otherwise: a hub

Should ask: 'do you want to join as a hub or superordinate entity?'

Flows into current situation. They’re starting to build SP proxies. In order to be able to make a consent, you need to do it with the proxy enterprise.

Niels: your SP proxy - you must be including all the existing info into eduGAIN, right? You can’t know beforehand which IDPs have access to your proxy. Do you allow services to be SPs on eduGAIN only? Connecting outside.

Mads: I don’t know.

Arnout: ORCID is only doing eduGAIN, for good reasons. Connectivity. There’s nothing against that from our perspective. Do you have that scenario? Do you support it?

Mads: TNC: they were registered locally, and were in eduGAIN. We are cross-federating them. If we have approved it, there’s no difference whether it comes from ourselves or not. Included in eduGAIN, not in WAYF.

Aud: we only have separate SPs there, publishing directly to eduGAIN, to metadata streams. 1 local version, 1 eduGAIN set. They can choose.

Aud: do you have a policy for service providers? Do you have direct contact with service provider? For pre-registration, now that you're publishing?

Niels: it could be there is different stuff in the contract than what eduGAIN is providing. We could have additional requirements that they don’t have to fulfil. We do write how to implement that. We do prefer the SURFconext route if we can. Contracts are almost on par.

Aud: SPs or IDPs:

In WAYF: we have to approve by hand, given a sponsor from one of the IDPs

Has nothing to do with fed architecture, it’s a technical thing

There’s a technical component to this. We as a hub are enforcers of that policy. it would have been the institutions who do that.

Mads: NL and Denmark: basically same. But: we do it also the same way as the Swiss

Niels: can point to the actual (financial) contracts proposed to the servers.

Mads: identifying IDP? We do it for all the IDPs.

Lukas: can they consume any attributes? In your case and in WAYFs case this is true.

Mesh federations: what they accept and not accept as identifiers.

Would it be false to say now there is a consensus on...?

Niels: that would be my recommendation

Lucas: Denmark etc. are doing it

Niels: communication in research (--> flipchart). In this case, IDP is the same as a SP federation. Hub&spoke: different entities (Google, Microsoft, Elsevier, ...)

Middle part (Clarin, Elixir, Géant...) between IDP and SP controlled by the same entity. They’re not going to add services in the middle part though. Need to negotiate on that. What is the purpose of the attributes that are being provided between IDP and the middle part?

Evil, wrong, should not be allowed: moving IDP data to Google. Contracts would be brittle.

Aud: is this about outsourcing companies? Does EGI have to have the hardware for this?

Aud: in the US, there are contracts with Google. Research projects also have outsourcing contracts with Google.

Niels: you’re correct. The example doesn’t necessarily need to be Google. The contract needs to be signed specifically at a certain point.

Aud: In Italy there are two proxies and you can’t see them. SP-proxy: IDP thinks it is talking to services etc. but if EGI has contract: you lack visibility of that. Potential danger. Is that what you're trying to say?

Niels: yes.

Aud: EGI could be violating this contract, uploading data on the website.

David: if you break any of these, you'll get into trouble. This is feasible. Never trust in the agreements that you have. Contracts: they’re not allowed to re-use the information without the user's consent.

Niels: data in the back, e.g. Amazon, must be given in the contract.

David: obligation of SP.

Niels: our contract says that.

Laura: SP. what happens with the discovery side?

Niels: how our service is being discovered? Here, multiple learning systems already exist. Legal perspective: stand up to the SP to live up to the rules. eduGAIN - trivial. Metadata: you only need one entity. SP does something like make a domain name. Google does the same thing.

From a policy perspective, it's up to the SP to make up the rules. In the US: interesting.

Session 22 <Pre-open trust taxonomy > (11:30/Room K1)

Convener: Rainer Hörbe

Abstract1: OTTO is a Kantara WG wanting to devise a metadata infrastructure like SAML to establish federations with OAuth2/OIDC. The distribution mechanism will be based on blockchains. The architecture foresees a 2-layer approach with a generic layer supporting various business processes for establishing trust. Finally, the concept should be generic enough to support other metadata schemes like PKI and SAML metadata out of the same block chains.

Abstract2: The capability to run an identity federation. The first step was to map the SAML.

"It looks like we don't have to do it, because we cannot implement in the metadata."

"Then we realised we can do something better in the future."

Block chains enable two trust paradigms:

TOFU – trust on first use. Once you put something in the block chain it is under public scrutiny. Consistent distribution is part of the technology.

The initiative came from Mike Schwartz of Gluu to have the capability to run an identity federation being driven by metadata as we have it today with SAML. The group started out mapping the SAML entity.

The first major point in the concept is that we don’t want to have the huge aggregates based on centralized databases. We then realised we can do something in another architecture. Get rid of the central database, it’s like the CA, if you hack it you go bust. Distribution is solved by using a block chain. In it you have two different trust paradigms. One is trust on first use. The concept is you put something in the block chain that is available and the source is then doing an ongoing monitoring that it was not revoked by a later entry. Once you put something in the block chain there is a very high confidence in being unchanged. The other trust paradigm is the proof of being the authoritative source that can be done in a traditional way. With signatures, and the things we have currently, so we are basically getting the best of both approaches. On top of the block chain, one can build an SQL database, so if you look up an entity by URL and see this is my name space, you can have the backlink in the database to verify the block chain to verify the database and to check if it’s of good integrity. The database provides flexible lookups.
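The TOFU monitoring idea described above can be sketched in a few lines. This is a hypothetical illustration, not OTTO's actual design: entries are hash-linked so an earlier statement cannot be silently altered, and a monitoring source takes the latest entry about an entity as authoritative (e.g. a later revocation wins).

```python
import hashlib
import json

def entry_hash(prev_hash: str, payload: dict) -> str:
    """Hash-link an entry to its predecessor, as in a block chain."""
    blob = json.dumps(payload, sort_keys=True).encode()
    return hashlib.sha256(prev_hash.encode() + blob).hexdigest()

def append(chain: list, payload: dict) -> None:
    """Append an entry, linking it to the current chain head."""
    prev = chain[-1]["hash"] if chain else "genesis"
    chain.append({"hash": entry_hash(prev, payload), "payload": payload})

def verify_chain(chain: list) -> bool:
    """Recompute every link; tampering with any earlier entry breaks it."""
    prev = "genesis"
    for entry in chain:
        if entry["hash"] != entry_hash(prev, entry["payload"]):
            return False
        prev = entry["hash"]
    return True

def current_status(chain: list, entity_id: str):
    """TOFU-style monitoring: the latest entry about an entity wins."""
    status = None
    for entry in chain:
        if entry["payload"].get("entity") == entity_id:
            status = entry["payload"]
    return status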

The second major point in the concept is that we want to extend the scope beyond technical trust. I am more familiar with SAML than OAuth2, so I would like to take the SAML example. In it you have a lot of assertions, but they are not complete. You don’t have a proper assertion in the EntityDescriptor of the identity of an Entity Owner, for example the University of Vienna, which is a legal identity based on the federal law of the Republic of Austria. SAML metadata is a very technical concept, and leaving out the business level generates many of these scaling issues with metadata. So we will look at what we can put into the metadata to support the whole business process to trust an entity.

And the third concept: we are not just doing OAuth2 but abstracting metadata, also doing PKI, SAML and others, because the underlying fundamental statements have something in common, so it's fairly expressive to say that this legal entity is a member here, this one is affiliated, this one has a key here. (We have not yet adapted the charter.)

There are different things. The SAML EntityDescriptor is composed of entity statements which are even from different authoritative sources. In many cases, who owns a key pair is just a statement of the owner - nobody is going to verify it; if you can’t decode a message, nobody is even going to check. But who is asserting the R&S entity category? Or certifying the assurance level?

The idea is to have two layers in this trust economy. One is an elementary level with statements like "this name is a part of a name space" and "this name is linked to a key / a key holder"; if you decompose into just these elementary expressions, I think the certificate can be expressed.

The underlying triple store should be in the block chain, or in URLs to some external stores; on top of that is a more loosely defined service that will have an interface to legacy clients.
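As a toy illustration of that two-layer idea, the elementary layer could hold statements as (subject, predicate, object) triples, from which a service recomposes a higher-level descriptor for legacy clients. The predicates and values below are invented for the example, not the working group's schema.

```python
# Hypothetical elementary statements: "this name is part of a name space",
# "this name is linked to a key", "this key is held by a legal entity".
statements = [
    ("idp.univie.ac.at", "partOfNamespace", "ac.at"),
    ("idp.univie.ac.at", "boundToKey", "key:4f2a"),
    ("key:4f2a", "heldBy", "University of Vienna"),
    ("University of Vienna", "legalEntityUnder", "Austrian federal law"),
]

def describe(subject: str, triples) -> dict:
    """Recompose a higher-level descriptor (the legacy-client view)
    from the elementary statements about one subject."""
    return {pred: obj for subj, pred, obj in triples if subj == subject}
```

Each triple can come from a different authoritative source, which is the point: ownership claims, affiliations and key bindings stay separate statements instead of one monolithic metadata blob.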


In terms of getting broader input into the abstract layer that’s something that you could share widely. It would be quite valuable to understand. To represent it in that particular way could really help.


You have the GS1 which is mostly in the trade sector, unless you get more types of organizations together in one group, it will not work.

Johan and Floris: referring to the UETP model, the notes can be found on day 1, K7, third session.

Rainer: It might be more complex because we are aiming for notary functions for all related entities - authorities, parties making assertions or claims about somebody. There is an interesting project in Austria’s government for legal identifiers, one of the most advanced implementations in Europe in support of the EU Services Directive. Almost everything that is some kind of legal or business entity is included. They are pulling together the different sources and registries that are based on law, including sole traders, government agencies, and hospitals.

The idea is to link the business processes that we currently have, which are completely broken, and provide some infrastructure where you can make plugins to say: okay, this government registry or this signature scheme can be used to provide trust and assertions which are good enough for a relying party to build technology on SAML data. The business processes will be based on the more general level.

The observation was that the previous models like PKI and SAML metadata had this old approach of data-centric modelling - that was the architecture - and now it’s more state of the art to do business modelling first before you start writing a specification. The working group is aware that we need the data model and the business process model, and if we can solve the business ones on a data level it would be much easier to have a lighter API and agents that generate the specific metadata parts.

Tom: Why wouldn’t the business processes be a part of the ontology? You should put them together.

Rainer: I fully agree. It’s important not just to have a structural model but behavioural planning and the semantic model.

The fundamental (generic) level and technology specific layer. Not every element will be technology agnostic, so the generic level might include technology-specific data.

Johan: I think there is a must for an implementation level in between, and it's really important how to choose which things to use from the fundamental level - a level where the choices have to be made.

Tom: You also risk not being able to achieve what you want to. In order that the connection is more stable.

What are the gaps and what is hard to do?

Rainer: The resource gap is there. We need more people because that’s really way too big for a handful of people. We need more stakeholders. If we are really going to integrate as a pattern, we really need more resources. It’s also a complete threat to current business models. It’s designed to eradicate X.509 as a business model.

Tom: On the one hand there is the idea of having metadata aggregates produced as big huge files; on the other hand we still need those kinds of notaries, or federation operators, who put some authority into some of the assertions.

Johan: There is a big community in the Bitcoin world, and one could make the combination to use different authorities. Certificate authorities want to implement a new protocol to make it possible to have certificate authorities on a block chain - the guy who was promoting that was Mike Hearn.

Rainer: Another point is privacy or confidentiality. A supply chain might not want to show which companies are part of the supply chain; by linking URLs to some access control schemes you can handle that, but you obviously have the part on the block chain, which is limited.

The other aspect is to have a payment model. For example, for a company registry number there are companies selling the information; you can send the URL somewhere and sell access through the infrastructure.

To be informed or contribute you can join the Kantara OTTO working group, which is currently on a weekly call schedule.

Tom: What is the goal of the working group?

Rainer: Creating a federation metadata system for OAuth2 similar to SAML.

Action Items:

Join the mailing list of Kantara OTTO-WG:

There is material on the Wiki:

Session 21 <Privacy by design in federated identity management> (10:45/Room K6)

Conveners: Rainer Hörbe, Walter Hötzendorfer

Abstract: FIM, while solving important problems of remote entity authentication, introduces new privacy risks, like new possibilities of linking private data sets and new opportunities for user profiling. We will discuss privacy by design requirements, transpose them into specific architectural requirements and evaluate a number of FIM models that have been proposed to mitigate these risks.

Tags: Privacy by Design


The paper presented is

Rainer Hörbe/Walter Hötzendorfer: Privacy by Design in Federated Identity Management

DOI 10.1109/SPW.2015.24

Please do not hesitate to contact us.


What are the risks in federated identity management?

Easy to resolve

1. Linkability by introducing common identifiers - two service parties should not be able to know that they are dealing with the same user. The worst thing to do in terms of linkability is to introduce common identifiers.

2. Impersonation by Identity/Credential Providers or because of weaknesses in the SSO mechanism - here a central instance can observe the behaviour of the user. So an identity provider can see which relying parties the user is interacting with. We should find ways to overcome that.
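A common mitigation for risk 1 is for the identity provider to derive a pairwise pseudonym per relying party instead of releasing one common identifier (the approach taken by SAML pairwise identifiers). A minimal sketch, with placeholder names and a placeholder secret:

```python
import hashlib
import hmac

def pairwise_id(internal_uid: str, sp_entity_id: str, idp_secret: bytes) -> str:
    """Derive a per-SP pseudonym: stable for one (user, SP) pair,
    but unlinkable across different SPs without the IdP's secret."""
    msg = f"{internal_uid}|{sp_entity_id}".encode()
    return hmac.new(idp_secret, msg, hashlib.sha256).hexdigest()
```

The same user gets the same identifier at one SP on every login, while two SPs receive values they cannot correlate - which is exactly the linkability property being discussed.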

Non-FIM Privacy Risks:


  • Device fingerprinting
  • IP address


What are the incentives? I would come to the conclusion that we should think about things like privacy in the systems we are building.

I would like to show you what we found out about what privacy and privacy by design mean, particularly in the field of identity management.

From general provision reduced to requirements for identity systems.

Privacy risks related to FIM: linkability and observability are two general problems. Linkability means that basically two SPs should not be able to know that they are dealing with the same user. The worst thing to do is to use joint identifiers.

Motivation and Scope

-          FIM Projects featuring cross-sector federation (e.g. smart cities, citizen eIDs, B2B across supply chains)

-          How to handle the increased privacy risk

Privacy by design

-          The principle of including privacy in the system development life cycle from the beginning

-          What does this mean in practice?

-          Difficulty: Bridging the gap between abstract principles and tangible requirements for a specific system

-          There is no default methodology but rather the need to act as a privacy engineer during the sys design and implementation process

Privacy by Design: The code is Law principle

-          Preclude that the system can be used in a privacy-infringing way

  1. by architecture
  2. by the Software design
  3. by other technical means (To 50% this means data minimization)

-         Particularly important in data protection and privacy because illegitimate use of data usually happens behind closed doors

Approach to Elicit Requirements

PP - Privacy Principles

PDR - Privacy by Design requirements

BR - Business requirements

AR - Architectural requirements

Lex(legal source)->PP-> PDR-> AR-> FIM models-> BR

Privacy Principles --> Privacy by Design rules

Next: PbD rules --> Architectural requirements

Privacy by design rules--> Architectural requirements

Existing Implementation --> Architectural requirements

Business Requirements --> Architectural requirements

We limited our scope to the WebSSO use case

Scope is limited to a single sign on use case and we could look at several things but the main thing is linkability.

The main difficulty is to bridge the gap in between legal principles and tangible requirements.

Involvement in system design and implementation is very crucial.

There is no direct match between identifiers and the Data Protection Directive, and there is no methodology.

A very important part of it is the "code is law" principle, after the 1999 book "Code". In data protection law we mostly don’t learn about the illegitimate use of data, and the data subject is never aware of some misuse of her data; that’s why it’s very important to preclude that the system can be used in a privacy-infringing way in the first place. It’s important to preclude the misuse with the architecture and the design. Misuse in data protection is very easy and will mostly be unknown to the data subject.

This means data minimization, and now this was made more concrete.

We started at the top and the bottom at the same time. It was focused on the European law and at the bottom we have different models which solve different problems.

We joined both processes to come up with 8 architectural requirements.


Did you ever have the situation where you had the cool idea in your architecture but were completely disconnected by the lawyer?


I had that problem but it was mostly fighting with the laws for the Crypto, and I still haven’t figured it out. We are bypassing it and getting strong crypto.


It’s an interesting thought. One of the factors that the regulators enforce is to make sure that the servers themselves don’t have USB ports. They could otherwise be compromised and made to do harmful things, with viruses inserted via USB ports. The servers don’t have USB ports, and the area that has to be defended is reduced.

It’s in the fingerprinting end of the spectrum but who is the certification authority, I don’t understand that.


Device fingerprinting means that if you open a website with your browser, the operator can recognize you. The great example is the Electronic Frontier Foundation's test, which told me that my browser is completely different from the rest.


There is for example an API in browsers to monitor your battery status, which could be used for fingerprinting. The W3C privacy interest group (PING) has a very elaborate document on that.
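Conceptually, fingerprinting works by combining many weakly identifying observable attributes (user agent, time zone, battery level, ...) into one strong identifier. A toy sketch, with invented attribute names:

```python
import hashlib

def fingerprint(attrs: dict) -> str:
    """Combine observable browser attributes into one identifying hash.
    Each attribute alone is weak; together they single out a browser."""
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]
```

Any single attribute changing (say, the reported battery level) yields a different value, which is why real trackers combine many stable attributes and tolerate partial matches.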

Comparing PbD Models:

What did we actually do?

There is a list of 5 steps and it somehow fits our approach, but we call it differently, because the main point of privacy by design is to include privacy from the beginning, and then you can see how privacy is impacted by this.


There are also the business requirements. A business case is a federation for loyalty systems needing a central clearing service. Therefore they need some linkability.


I think it’s interesting, to examine how to prevent them or to demonstrate how to always be successful


No, of course not. We can’t demonstrate that so that would be a correct approach. You have to do the privacy assessment in the end.

Just briefly, the steps you saw in the picture: at the top there is a table of 8 common privacy principles which can be derived from legislation.

  • From that we worked out what that means for the domain of identity management and transposed or deduced, from the top table, the 5 principles in the bottom table, which are specific to the identity management domain.
  • The bottom table was then the starting point for what we did next. The provider still needs to talk about the particular user.
  • We came up with 8 requirements that can be implemented in identity management systems.


Models: (Rainer)

  • 1. Organizational model: we need technical controls
  • 2. Attribute-based credentials: there are some issues, but this is the actual technology that we used
  • 3. Canadian model, Late Binding/Federated Credentials: this is not privacy preserving at all but compliant - you won't be sued if you don’t release the attributes
  • 4. Constrained Logging Proxy: WAYF and SURFnet operate a similar thing; you throw away or hide away the logs, so you have no problems with the proxy (unless there is a man in the middle owning the proxy)
  • 7. Blind Proxy – the SP can’t be identified by the IDP

There are a number of other models, but those are the most important ones

  • All of them are focused on identifiers, but most use cases have identifying attributes.

Question for the group:

Which model was used in your group?

The pairwise email address thing is an interesting thing, it was a part of the 500+ chain.

We were basically talking about targeted IDs that are still uniquely identifying. Having an identifier that’s not an email address, and not having facilities where people end up putting email addresses in a file or a folder.

Rainer: We don’t do targeted identifiers, because we are releasing the attributes anyway. We don’t do privacy here because we don’t have privacy there, and so on.

Nick: There is a conflicting use case that we mentioned, there are people who specifically don’t want privacy so when we do privacy by design they have to be satisfied as well.

When they are acting as researchers, they want to be able to act pseudonymously.

We are doing clinical trials involving human beings. When someone interacts with our system they are doing it on the record, because they are working with human subjects. How do we make sure to minimize the data that we collect? I see potential for many conflicts; that’s where our scientists are coming from, and we need to authenticate them strongly against these requirements that they are operating under themselves.

Tom: A range of use cases must be made available. If you build it by design you can never take that back, so the design must be considered upfront.

Because there never is one that meets all purposes, we had discussions along these lines, about where to start. By default all these principles were observed. You can start out with the (floor), but what the design must do is to allow the range of these use cases. If you build it by design you cannot take it back.

Walter: There is a solution to that: the system that we build must provide space for changes.

Even though the user used a pseudonym, we have to find a way to figure out who the user was.

Rainer: Most of the solutions talking about privacy by design are based on pseudonymous identities.

We have pseudonymous identifiers in Austrian DP law, called indirectly identified data, but the typical reaction from lawyers is: that is still personal data! Therefore there is no incentive from the legal side to reduce the risk by using pseudonyms. So we as engineers come up with solutions that are not being taken up by authorities.

Tom: Something that is unique to academia is provenance: when you publish or when you find something, understanding who wrote it so that you can trust it. In academic work that's very important. There is a big link there, not just for compliance reasons; that's why they want to be identified. It is all very critical for science building on itself over a period of time; provenance is crucial.

Walter: We have so many requirements, and it would be so much easier to do that. I don't see it as a problem to tell people what the requirements are.

Rainer: If you think about how to improve privacy by design: are there any business requirements for that?

Nick: There are things that we can't do in the context of identity federation that are very difficult to do without relationships, so I can't access an IMR someplace else via the federation context. It pairs up with the proxy methods you are mentioning. It would be difficult to run them because of fiscal constraints.

Richard: It seems to me that if you have that restriction, you can have a pointer that directs you to a server that can supply you with the contract. We have it all in place; we just don't use it.

Session 20 <User consent> (10:45/Room K2)

Conveners: Mads Petersen, David Simonsen


  • super-short service purpose template max. 200 characters - could we make that a standard?
  • user consent service



Should there be two templates: one for hub & spoke, one for full-mesh federations?

How to deal with clusters of services (e.g. LIGO has several related to the same purpose)?

Brief discussion about user consent receipts.

outcome: dedicated meeting on user consent

visibility of several user consent initiatives: Sweden, the Netherlands, Denmark and USA

Convener: manages WAYF federation in Denmark

Two proposals of discussion:

  1. User consent template (global?)
  2. User consent service (in the cloud?)

Introduction to user flow + current user consent dialogue:

  • What is a user consent?
  • Proposal for a template

On the PowerPoint presented:

  • Web-page: web based scenario
  • Connected federated institutions
  • The IDP, typically a university, hands over personal attributes about the user (first/last name, etc.) to the central federation hub
  • User consent dialogue: the user is told about the purpose of the service plus the attribute types & values, and is asked to approve. The user can ask the fed. hub to keep the information at any time

Zoom-in of the website interface: "You are about to log into ..... "

WAYF (here: dictionary)(association of Danish industries etc.) says: "the purpose is to provide a dictionary by your educational institution"

Maximum of 200 characters

Should you show the attribute values and the types or would it be confusing?

This way, users get insight into the attributes. Technical identifiers should be shown as well.

'Consent template' slide


"The purpose of <SERVICE NAME> is <PURPOSE>."

  • capital letters: dynamic
  • lower case letters: fixed
  • ("the purpose of" is deliberately fronted)

Sometimes the user consent is half a page long, but we want users to get a better grasp of what they're reading.
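The slide's template could be enforced mechanically. A minimal sketch, assuming the 200-character cap proposed in this session; the function and the example service name are illustrative, not WAYF's implementation:

```python
# Fixed text is lower case on the slide; the capitalised parts are dynamic.
TEMPLATE = "The purpose of {service_name} is {purpose}."
MAX_LEN = 200  # proposed cap from the session

def consent_text(service_name: str, purpose: str) -> str:
    """Fill the consent template and enforce the length cap."""
    text = TEMPLATE.format(service_name=service_name, purpose=purpose)
    if len(text) > MAX_LEN:
        raise ValueError(f"Consent text exceeds {MAX_LEN} characters")
    return text
```

A cap enforced at template-filling time is what forces the negotiation with service providers to "cook down" the purpose statement, instead of letting 45-page texts through.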

Question to audience: How do you like this proposal?

Aud1: what could stop us from defining this on purpose?

Convener: nothing.

Aud2: government endorsement: when speaking with government agencies, they don't understand what we're talking about, and talking to the right people is hard. We don't have the same kind of connection with the government in our country [Italy].

Convener: At government level, we have contact with people in eIDAS. They also have the desire to connect and agree on a standard at a European level. In Denmark we don't have an endorsement.

Aud4: proposal: pick people who are willing to do this and spread the idea

Aud3: we now have government endorsement, that's the problem

Aud5: privacy work. They're preparing a demo to make sure everyone's aware. PB3

Convener: There'll be an enormous amount of uptake on this. Even though it might be too late now, we should do it.

Aud6: the text suggested in Shibboleth... the last part says: 'every time you will enter this service'.

Convener: 45 pages are too much. 200 characters is good. Can we agree on that?

Aud7: Once you accept the remembered consent, the hash value lasts for 3 years unless you change the attributes/the service?

Aud8: Do you have a 100% beautiful answer to this? Do you get good service descriptions and how?

Convener: We negotiate them with service providers on the phone or by email. We discuss it and we cook it down a lot. It's a challenge, but it's a way to fight the 45-page-long consents.

Aud9: isn't the purpose different in some cases? From org to org, there are other purposes.

Aud10: if you're doing a proxy, as in the case of DARIAH, that should influence the number of characters

Aggregation point - in some cases you don’t have one.

There could be different consents

Kantara: user-consent receipts

Convener 2: Mads

More technical issue: consent as a service

Deployment for WAYF

Had trouble finding software that integrates this into the hub. Solution: making consent a service:

  • Normally you would send the final consent from the IDP to the service
  • Instead, send it to the consent service and set it up there
  • We thought of this because we had requirements regarding the consent page
  • Send them to the consent service. Sometimes it could be an IDP, sometimes a user-selected service.

It would be simple to integrate this into every software on the planet: they use some kind of template when they're sending it to the SP - instead, send it to the consent service; once the consent is saved, pass it on to the real SP.
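The routing described above can be sketched as a small in-memory simulation. All names here (`ConsentService`, `forward_to_sp`, the `approve` callback standing in for the consent dialogue) are hypothetical, not WAYF's actual software:

```python
def forward_to_sp(sp, attributes):
    # Stand-in for passing the response on to the real SP.
    return {"sp": sp, "attributes": attributes}

class ConsentService:
    """Sits between the IdP and the SP; releases only consented attributes."""

    def __init__(self):
        # (user, sp) -> set of attribute names the user consented to
        self.store = {}

    def handle(self, user, sp, attributes, approve):
        key = (user, sp)
        if key not in self.store:
            # First visit: show the consent dialogue (modelled as a callback).
            if not approve(sp, attributes):
                return None  # no consent -> no response ever reaches the SP
            self.store[key] = set(attributes)
        # Release only what was consented to ("we only allow what the user consumes").
        released = {k: v for k, v in attributes.items() if k in self.store[key]}
        return forward_to_sp(sp, released)
```

Returning `None` on refusal matches the scenario raised later in the session: if the user does not accept, the SP simply never gets a response.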


Could the institution itself run it?

How is the consent actually going to enforce the attribute policy? Let's assume the user can pick and choose from a bundle that is not fixed.


We only allow what the user consumes.

Decision points we had to make: either you place this consent service in between the data flows (the IDP doesn't see more than is consented; the IDP acts as a protocol transformer)

What happens if the user says no?

  • Depends on the consent service. Basically, the consent service is the proxy. It may have access to metadata.

Scenario: where you’ll never get a response?

  • If user does not accept -> no response.

If not encrypted, the consent service won’t fit.

Convener: you can’t show the actual data to the user

Aud2: discussion of necessity: it won't be necessary to the service if you put it there optionally; this solves a problem in WAYF, where we have these optional attributes

Aud3: why separate it out of WAYF?

Convener: we don’t want to have it integrated into everything. We had it in the beginning but didn’t have any way to disconnect it again

They don’t care about the style. They care about what’s on top of the page.

->Private keys for specific domains.

Aud4: what if we had to do this in a discovery kind of fashion?

Convener: you could say this is 'cheating'. You send it to another destination.

Aud5: IDP administrates

Should be sent. The service really needs other stuff. I won’t allow it to add extra attributes.

User wants service.

Aud6: you don’t want to do yes or no but fine-grained stuff. It’s there but not a useful service as a concept

Aud7: question of responsibility - release policy. We take 100% responsibility; we don’t leave any chance to the users to…

Aud8: outside of country/contract - storing data. You could claim it’s the SP side but they’re thinking quite differently.

Definitely not the same entities.

Such a scenario would be interesting. Another: one consent screen is used to show both entity 1 and entity 2

Aud9: X has Danish users for accessing… has to do the accounting. He will do the mapping. Whether it will work or not, is his thing

  • Have a problem with releasing attributes.
  • Per attribute per IDP bases
  • Tested it in our own service.
  • Users were completely confused about what was going on -> suspicion, because they were asked for so much consent (although only normal stuff was asked for)

Estonian data protection agency: consent is invalid in this case because it has to be freely given, and here you're not freely giving the consent.

Aud11: experiment: users come back and say: why can't I just consent for all the providers? In a day, they use up to 10 services; they don't want to consent each time again.