Discussion:
Liking Linkability
Henry Story
2012-10-06 13:49:12 UTC
Permalink
Notions of unlinkability of identities have recently been deployed
in ways that are, I would like to argue, often much too simplistic,
and in fact harmful to wider issues of privacy on the web.

I would like to show this in two stages:
1. Show that linkability of identity is essential to electronic privacy
on the web.
2. Take an example of an argument by Harry Halpin relating to
linkability, and by pulling it apart show how careful one has
to be before taking such arguments at face value.

Because privacy is the context in which the linkability or non-linkability
of identities is important, I would like to start with a simple working
definition of what constitutes privacy, using the following minimal
criterion [0] that I think everyone can agree on:

"A communication between two people is private if the only people
who are party to the conversation are the two people in question.
One can easily generalise to groups: a conversation between groups
of people is private (to the group) if the only people who can
participate/read the information are members of that group"

Note that this does not deal with issues of people who were privy to
the conversation later leaking information voluntarily. We cannot
technically legislate good behaviour, though we can make it possible
for people to express context. [1]


1. On the importance of linkability of identities to privacy
============================================================

A. Issues of Centralisation
---------------------------

We can make this point with the following thought experiment, which I put
to Ben Laurie recently [0].

First imagine that we are all on one big social network, where
all of our home pages are at the same URL. Nobody could link
to our profile page in any meaningful way: the bigger the network,
the more different people that one URL could refer to. People
who were part of the network could log in, and once logged in
communicate with others through its unlinkable channels.

But this would not necessarily give users of the network privacy,
simply because the network owner would be party to the conversation
between any two people or any group of people. Conversations whose
participants do not wish the network owner to be party to them
cannot work within that framework.

At the level of our planet it is clear that there will always be a
huge number of agents that cannot for legal or other reasons allow one
global network owner to be party to all their conversations. We are
therefore socio-logically forced into the social web.

B. Linkability and the Social Web
---------------------------------

Secondly, imagine that we now all have Freedom Boxes [4], where
each of us has full control over the box, its software, and the
data on it. (We take this extreme individualist case to emphasise
the contrast, not because we deny that many intermediate cases
are useful.) Now we want to create a
distributed social network - the social web - where each of us can
publish information and through access control rules limit who can
access each resource. We would like to limit access to groups such
as:

- friends
- friends of friends
- family
- business colleagues
- ...

Limiting access means that we need to determine, when a resource
is accessed, who is accessing it. For this we need a global identifier,
so that we can check, with the information available to us, whether the
referent of that identifier is indeed a member of one of those
groups. We can't use a local identifier, for that would require
that the person we were dealing with had an account on our private
box - which will be extremely unlikely. We therefore need a way
to identify - pseudonymously if need be - agents in a global space.

Take the following example. Imagine you come to the WebID TPAC
meeting [6] and I take a picture of everyone present. I would like
to first restrict access to the picture to only those members who
were present. Clearly if I only used local identifiers, I would have
to get each one of you to first create an account on my machine. But
how would I then know that the accounts created on the FBox correspond
to the people who were at the meeting? It is much easier if we
create a group of the participants and publish it at a URL like this:

http://www.w3.org/2005/Incubator/webid/team.n3

Then I could drag and drop this group on the access control panel
of my FBox admin console to restrict access to only those members.
This shows how linkability lets me restrict access, and so increase
privacy, by making it possible to link identities in a distributed
web. The team.n3 resource above could furthermore itself be
protected by access control.
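
To make that concrete, here is a minimal sketch (in Python, using rdflib)
of what such a guard could do: dereference the published group document and
grant access only if the requester's WebID is listed as a member. I am
assuming the group lists its members with foaf:member - the real team.n3
may use a different vocabulary - and the function name is just illustrative:

    # Minimal sketch of a group-based guard. Assumes the group document
    # lists its members with foaf:member (the real team.n3 may differ).
    from rdflib import Graph, Namespace, URIRef

    FOAF = Namespace("http://xmlns.com/foaf/0.1/")
    GROUP_DOC = "http://www.w3.org/2005/Incubator/webid/team.n3"

    def is_member(webid, group_doc=GROUP_DOC):
        """True if webid appears as a member of the published group."""
        g = Graph()
        g.parse(group_doc, format="n3")          # dereference the group URL
        return URIRef(webid) in set(g.objects(predicate=FOAF.member))

The access rule then reduces to: allow the GET if is_member(webid) holds
for the WebID that authenticated over TLS.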


2. Example of how Unlinkability can be used to spread FUD
=========================================================


So here I would like to show how fears about linkability can
lead intelligent people like Harry Halpin to make seemingly
plausible arguments. Here is an example [2] of Harry arguing against
the W3C WebID CG's http://webid.info/spec/

[[
Please look up "unlinkability" (which is why I kept referencing the
aforementioned IETF doc [sic [3] below it is a draft] which I saw
referenced earlier but whose main point seemed missed). Then explain
how WebID provides unlinkability.

Looking at the spec - to me, WebID doesn't as it still requires
publishing your public key at a URI and then having the relying party go
to your identity provider (i.e. your personal homepage in most cases,
i.e. what it is that hosts your key) in order to verify your cert, which
must provide that URI in the SAN in the cert. Thus, WebID does not
provide unlinkability. There's some waving of hands about guards and
access control, but that would not mediate the above point, as the HTTP
GET to the URI for the key is enough to provide the "link".

In comparison, BrowserID provides better privacy in terms of
unlinkability by having the browser in between the identity provider and
the relying party, so the relying party doesn't have to ping the
identity provider for identity-related transactions. That definitely
helps provide unlinkability in terms of the identity provider not
needing to knowing every time the user goes to a relying party.
]]

If I can rephrase, the point seems to be the following: WebID verification
requires that the site you are authenticating to (the Relying Party) verify
your identity by dereferencing (let me add: anonymously) your profile
page, which might publicly contain no more than your public key - the yellow
box in the picture here:

http://www.w3.org/2005/Incubator/webid/spec/#the-webid-protocol
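
For readers who do not want to open the spec, here is roughly what that
yellow box does, as a sketch in Python with rdflib. It is not the normative
algorithm, just its shape: dereference the WebID URI taken from the
certificate's SAN (the RP can do this anonymously) and check that the
profile publishes the same RSA key as the certificate. The cert: namespace
is the one the spec's examples use for keys; the helper name and the
simplified datatype handling are my own assumptions:

    # Sketch of the Relying Party's verification step: does the profile
    # at the WebID URI publish the key found in the client certificate?
    from rdflib import Graph, Namespace, URIRef

    CERT = Namespace("http://www.w3.org/ns/auth/cert#")

    def verify_webid(webid_uri, cert_modulus, cert_exponent):
        """True if the profile at webid_uri publishes the certificate's key."""
        g = Graph()
        g.parse(webid_uri)                   # anonymous GET of the profile
        for key in g.objects(URIRef(webid_uri), CERT.key):
            mod = g.value(key, CERT.modulus)  # assuming hexBinary, as in the spec's examples
            exp = g.value(key, CERT.exponent)
            if mod is None or exp is None:
                continue
            if int(str(mod), 16) == cert_modulus and int(exp) == cert_exponent:
                return True
        return False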

The leakage of information then would not be towards the Relying Party - the
site you are logging into - because that site is the one you just wilfully
sent a proof of your identity to. The leakage of information is (drum roll)
towards your profile page server! That server might discover ( through IP address
sniffing presumably ) which sites you might be visiting.

One reasonable answer to this problem would be for the Relying Party to fetch
this information via Tor, which would remove the IP address sniffing problem.
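
As a sketch of that suggestion: the Relying Party can route the profile
fetch through a local Tor SOCKS proxy, so that the profile server only ever
sees a Tor exit node, never the RP's address. Port 9050 is the usual Tor
default and the requests[socks] dependency is an assumption on my part;
none of this is mandated by the WebID spec:

    # Fetch the profile document over Tor so the profile server cannot
    # learn the Relying Party's IP address (and so cannot guess where
    # its user is logging in). Assumes a local Tor daemon on port 9050
    # and `pip install requests[socks]`.
    import requests

    TOR_PROXIES = {
        "http":  "socks5h://127.0.0.1:9050",  # socks5h: DNS also resolved via Tor
        "https": "socks5h://127.0.0.1:9050",
    }

    def fetch_profile(webid_uri):
        resp = requests.get(webid_uri, proxies=TOR_PROXIES,
                            headers={"Accept": "text/turtle"}, timeout=30)
        resp.raise_for_status()
        return resp.text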

But let us develop the picture of who we are (potentially) losing
information to. There are a number of profile server scenarios:

A. Profile on My Freedom Box [4]

The FreedomBox is a personal machine that I control, running
free software that I can inspect. Here the only person who has
access to the Freedom Box is me. So if I discover that I logged
in somewhere, that should come as no surprise to me. I might even
be interested in this as a way of keeping track of where I logged
in - and perhaps also of whether anything had been
logging in somewhere AS me. (Sadly it looks like it might be
difficult to get much good information there as things stand
currently with WebID.)

B. Profile on My Company/University Profile Server

As a member of a company, I am part of a larger agency, namely the
company or university that backs my identity as a member of that
institution. A profile on a university web site can mean a lot more
than a profile on some social network, because it is in part backed
by that institution. Of course, as members of an institution we are
part of a larger agenthood, and so it is not clear that, in that
context, the institution and I are all that different. This is also
why it is often legally required that one not use one's company
identity for private business.

C. A Social Network ( Google+, Facebook, ... )

It is a bit odd that people who are part of these networks, and who
are "liking" pretty much everything on the web in a way that is clearly
visible to the network - and encouraged by those networks to be visible -
would have an issue with those sites perhaps knowing (if the
RP does not use Tor or a proxy) where they are logging in. It is certainly
not the way OAuth, OpenID or the other protocols now in extremely
wide use have been designed or are used by those sites.

If we look then at BrowserID [7], now Mozilla Persona, the only real difference
from WebID (apart from it not being decentralised until crypto in the
browser really works) is that the certificate is renewed at short intervals
- once a day - and that relying parties verify its signature. Nor, of course,
can the relying party get many interesting attributes this way; if it could,
then the whole unlinkability argument would collapse immediately.


3. Conclusion
=============

Talking about privacy is like talking about security. It is a breeding ground
for paranoia, which tends to make it difficult to notice important
solutions to the problems we actually have. Linkability and unlinkability as defined in
draft-hansen-privacy-terminology-03 [3] come with complicated definitions,
and are, I suppose, meant to be applied carefully. But the choice of "unlinkable"
as a word tends to create rhetorical shortcuts that are apt to hide the
real problems of privacy. By trying too hard to make things unlinkable we move
inevitably towards a centralised world where all data is in big brother's hands.

I want to argue that we should all *Like* Linkability. We should
do so aware that we can protect ourselves with access control (and Tor),
and realise that our linkable profiles need not reveal anything more
than anyone knew beforehand.

To create a Social Web we need a linkable (and likeable) social web.
We may need other technologies for running Wikileaks-type setups, but
they clearly cannot be the basis for an architecture of privacy - even
if they are an important element in the political landscape.

Henry

[0] this is from a discussion with Ben Laurie
http://lists.w3.org/Archives/Public/public-webid/2012Oct/att-0022/privacy-def-1.pdf
[1] Oshani's Usage Restriction paper
http://dig.csail.mit.edu/2011/Papers/IEEE-Policy-httpa/paper.pdf
[2] http://lists.w3.org/Archives/Public/public-identity/2012Oct/0036.html
[3] https://tools.ietf.org/html/draft-hansen-privacy-terminology-03
[4] http://www.youtube.com/watch?v=SzW25QTVWsE
[6] http://www.w3.org/2012/10/TPAC/
[7] A Comparison between BrowserId and WebId
http://security.stackexchange.com/questions/5406/what-are-the-main-advantages-and-disadvantages-of-webid-compared-to-browserid


Social Web Architect
http://bblfish.net/
Henry Story
2012-10-08 22:05:18 UTC
Permalink
On 8 Oct 2012, at 20:27, "Klaas Wierenga (kwiereng)" <kwiereng-***@public.gmane.org> wrote:

> Hi Henry,
>
> I think your definition of what constitutes a private conversation is a bit limited, especially in an electronic day and age. I consider the simple fact that we are having a conversation, without knowing what we talk about, a privacy sensitive thing. Do you want your wife to know that you are talking to your mistress, or your employer that you have a job interview?
> And do you believe that the location where you are does not constitute a privacy sensitive attribute?

Ok I think my definition still works: If someone knows that you are communicating with someone then they know something about the conversation. In my definition that does constitute a privacy violation at least for that bit of information.

Though I think you exaggerate what they know. Your wife won't know that you are talking to your mistress, just that you are talking to another server (If it is a freedom box, they could narrow it down to an individual). Information about it being a mistress cannot be found just by seeing information move over the wire. Neither does an employer know you have a job interview just because you are communicating with some server x from a different company. But he could be worried.

So if I apply this to WebID ( http://webid.info/ ) - which is I think why you bring it up - WebID is currently based on TLS, which does make it possible to track connections between servers. But remember that the perfect is the enemy of the good. How come? Well, put things in context: by failing to create simple distributed systems which protect the privacy of content pretty well and which work with currently deployed technologies (e.g. browsers and servers), we have allowed large social networks to grow to sizes unimaginable in any previous surveillance society. So even a non-optimal system like TLS can still bring huge benefits over the current status quo. If only in educating people in how to build such reasonably safe distributed systems.

But having put that in context, the issue of tracking what servers are communicating remains. There are technologies designed to make that opaque, such as Tor. I still need to prove that one can have .onion WebIDs, and that one can also connect with browsers using TLS behind Tor - but it should not be difficult to do. Once one can show this then it should be possible to develop protocols that make this a lot more efficient. Would that convince you?


Social Web Architect
http://bblfish.net/
Henry Story
2012-10-09 13:19:21 UTC
Permalink
On 9 Oct 2012, at 14:29, "Klaas Wierenga (kwiereng)" <***@cisco.com> wrote:

> Hi Henry,
>
> (adding saag, had not realised that it was a resend)
>
> On Oct 9, 2012, at 12:05 AM, Henry Story <***@bblfish.net> wrote:
>
>>
>> On 8 Oct 2012, at 20:27, "Klaas Wierenga (kwiereng)" <***@cisco.com> wrote:
>>
>>> Hi Henry,
>>>
>>> I think your definition of what constitutes a private conversation is a bit limited, especially in an electronic day and age. I consider the simple fact that we are having a conversation, without knowing what we talk about, a privacy sensitive thing. Do you want your wife to know that you are talking to your mistress, or your employer that you have a job interview?
>>> And do you believe that the location where you are does not constitute a privacy sensitive attribute?
>>
>> Ok I think my definition still works: If someone knows that you are communicating with someone then they know something about the conversation. In my definition that does constitute a privacy violation at least for that bit of information.
>
> ehm, I think that you need quite a bit of fantasy to read that in your definition ;-) So if you mean also "or are aware of the communication" you should perhaps include that, but, as you point out below, that does complicate things big time.

It was meant to be a working definition.
For a much more detailed treatment of privacy you would of course go and read

"Privacy in Context: Technology, Policy, and the Integrity of Social Life" by
Helen Nissenbaum
http://www.sup.org/book.cgi?id=8862

The philoweb and public-privacy groups should (perhaps with saag) work
on building up a reading list of philosophical, technical and legal books
on the subject, perhaps with short summaries that we technicians can read -
a great exit route also for those technicians who get bored with coding; it
can happen to the best!

Helen brings in the notion of context as a very important element in the
understanding of what privacy is. Privacy there does not mean secrecy; it
means something more akin to respecting the context in which information
was given initially - e.g. banking details such as someone's home address
should not be divulged just because the same information can be obtained
from other sources, but should remain protected because they were given as
part of a context. (This was a supreme court ruling, according to Nissenbaum.)

So I think my working definition is good enough and should easily be
extendable to cover other cases like the one you mention. It does
not cover context, for example, but work by Oshani on usage restriction shows
one way one can go:
http://dig.csail.mit.edu/2011/Papers/IEEE-Policy-httpa/paper.pdf

Also remember that I presented it as a minimal criterion we could agree on for
privacy, not a full definition.

>
>> Though I think you exaggerate what they know. Your wife won't know that you are talking to your mistress, just that you are talking to another server (If it is a freedom box, they could narrow it down to an individual). Information about it being a mistress cannot be found just by seeing information move over the wire. Neither does an employer know you have a job interview just because you are communicating with some server x from a different company. But he could be worried.
>
> I think you are now digressing from the general case, whilst your definition was meant to be very generic (I believe?). I am not talking about implementations, but about the general principle. The fact that there is an xmpp session between ***@cisco.com and ***@apple.com may indicate to my manager that I am looking for another job.

Yes, though if the session were peer-to-peer, and you were communicating in a
way that only the connection from one server to another could be deduced, then
the information about the precise address would be hidden. I am not sure about
xmpp in this respect. But with WebID, once the TLS connection is made the HTTP
layer is encrypted, and so it should be impossible to see whether you are doing a GET,
PUT, POST or DELETE, or even which resource you are acting on.

Still, other things are visible...

> My manager might also be worried if he sees me entering the Google premises, but that is much less likely (even though I have helped applicants get out of the building through the emergency exit because a colleague had arrived in the reception area in the past ;-) The reason I brought these examples up is that I believe something has changed with the ubiquity of online databases and online communication. When I didn't want to be overheard in the past I would go for a walk with someone and we could talk with reasonable assurance. Now I have to trust that say Skype is not listening in to my conversation and that Twitter will not hand my tweets to DHS. So the simple fact that I use an encrypted channel is not sufficient.

Of course. The important thing is that those not party to the conversation
should not be able to gather information about it. One can be more
or less strict about the limits here - in some cases it will matter that even
who is communicating with whom be hidden; in other cases it may not
be that important.

>
>>
>> So if I apply this to WebID ( http://webid.info/ ) - which is I think why you bring it up - WebID is currently based on TLS, which does make it possible to track connections between servers. But remember that the perfect is the enemy of the good. How come? Well, put things in context: by failing to create simple distributed systems which protect the privacy of content pretty well and which work with currently deployed technologies (e.g. browsers and servers), we have allowed large social networks to grow to sizes unimaginable in any previous surveillance society. So even a non-optimal system like TLS can still bring huge benefits over the current status quo. If only in educating people in how to build such reasonably safe distributed systems.
>
> I was not referring to WebID in particular. I applaud your effort, and do realise that perfect will not happen. However I think that your definition of privacy should either be scoped tightly to particular use cases or is too broad a brush. I tend to think that one single definition of privacy is not very useful, and rather like to think about different forms of privacy, location privacy, encrypted channels, plausible deniability etc.

Yes, there are a lot of subcases. The point I was trying to make, if we can get
back to the "Liking Linkability" argument of the thread (and not get lost
in counting the number of angels on a privacy pinhead), is that in order to
create systems where you can be as flexible as possible about whom you want
to share your resources with - i.e. without placing yourself in a situation where
someone else is listening in - you need to allow for linkability of identities
and resources, as that is the only way to create a distributed social web.

As such, worries about people being able to see that I am communicating with someone
in another company are laudable, but put in perspective against the really big
issues of loss of privacy they are largely irrelevant for most use cases.

But as I say, those use cases can be addressed with technologies such as Tor...

>
>>
>> But having put that in context, the issue of tracking what servers are communicating remains. There are technologies designed to make that opaque, such as Tor. I still need to prove that one can have .onion WebIDs, and that one can also connect with browsers using TLS behind Tor - but it should not be difficult to do. Once one can show this then it should be possible to develop protocols that make this a lot more efficient. Would that convince you?
>
> Ehm, what actually concerns me more is not the fact that *it is possible* to design
> proper protocols as much as that I would like to provide guidance to protocol developers
> to *prevent improper protocols*. Does that make sense?

Yes, but don't make linkability an a priori bad thing, since it is the most important
building block for creating distributed co-operative structures, and so for privacy.
That is the point of this thread.

You may not be doing that, by the way, but if you look at Harry Halpin's arguments you'll
see a good example of how the terminology of unlinkability proposed in

http://tools.ietf.org/html/draft-iab-privacy-terminology-01

can be misused. But to be fair, it does say at the end of the document:

[[
Achieving anonymity, unlinkability, and undetectability may enable
extreme data minimization. Unfortunately, this would also prevent a
certain class of useful two-way communication scenarios. Therefore,
for many applications, a certain amount of linkability and
detectability is usually accepted while attempting to retain
unlinkability between the data subject and his or her transactions.
This is achieved through the use of appropriate kinds of pseudonymous
identifiers. These identifiers are then often used to refer to
established state or are used for access control purposes
]]

Still in my conversations I have found that many people in security spaces
just don't seem to be able to put the issues in context, and can get sidetracked
into not wanting any linkability at all. Not sure how to fix that.



Social Web Architect
http://bblfish.net/
Ben Laurie
2012-10-18 15:34:05 UTC
Permalink
On 9 October 2012 14:19, Henry Story <***@bblfish.net> wrote:
> Still in my conversations I have found that many people in security spaces
> just don't seem to be able to put the issues in context, and can get sidetracked
> into not wanting any linkability at all. Not sure how to fix that.

You persist in missing the point, which is why you can't fix it. The
point is that we want unlinkability to be possible. Protocols that do
not permit it, or make it difficult, are problematic. I have certainly
never said that you should always be unlinked; that would be stupid
(in fact, I once wrote a paper about how unpleasant it would be).

As I once wrote, anonymity should be the substrate. Once you have
that, you can then build on it to be linked when you choose to be, and
not linked when you choose not to be. If it is not the substrate, then
you do not have this choice.
Kingsley Idehen
2012-10-18 15:41:28 UTC
Permalink
On 10/18/12 11:34 AM, Ben Laurie wrote:
> As I once wrote, anonymity should be the substrate. Once you have
> that, you can then build on it to be linked when you choose to be, and
> not linked when you choose not to be. If it is not the substrate, then
> you do not have this choice.

Do you have an example of what you describe? By that question I mean:
implicit anonymity as a functional substrate of some realm that we
experience today?

--

Regards,

Kingsley Idehen
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca handle: @kidehen
Google+ Profile: https://plus.google.com/112399767740508618350/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Ben Laurie
2012-10-18 16:06:10 UTC
Permalink
On 18 October 2012 16:41, Kingsley Idehen <kidehen-HpHEqLDO2a7UEDaH6ef/***@public.gmane.org> wrote:
> Do you have an example of what you describe? By that question I mean: implicit
> anonymity as a functional substrate of some realm that we experience today?

That's what selective disclosure systems like U-Prove and the PRIME
project are all about.
Kingsley Idehen
2012-10-18 16:52:19 UTC
Permalink
On 10/18/12 12:06 PM, Ben Laurie wrote:
> That's what selective disclosure systems like U-Prove and the PRIME
> project are all about.
>
Ben,

How is the following incongruent with the fundamental points we've been
trying to make about the combined effects of URIs, Linked Data, and
Logic en route to controlling privacy at Web-scale?

Excerpt from Microsoft page [1]:

A U-Prove token is a new type of credential similar to a PKI certificate
that can encode attributes of any type, but with two important differences:

1) The issuance and presentation of a token is unlinkable due to the
special type of public key and signature encoded in the token; the
cryptographic “wrapping” of the attributes contain no correlation
handles. This prevents unwanted tracking of users when they use their
U-Prove tokens, even by colluding insiders.

2) Users can minimally disclose information about what attributes are
encoded in a token in response to dynamic verifier policies. As an
example, a user may choose to only disclose a subset of the encoded
attributes, prove that her undisclosed name does not appear on a
blacklist, or prove that she is of age without disclosing her actual
birthdate.


Why are you assuming that a hyperlink-based pointer (a de-referencable
URI) placed in the SAN of a minimalist X.509 certificate (i.e., one that
has no personally identifiable information) can't deliver the above and
more?

Please note, WebID is one piece of the picture. Linked Data, Entity
Relationship Semantics and Logic are other critical parts. That's why
there isn't a golden ontology for resource access policies: the resource
publisher can construct a plethora of resource access policies en route
to leveraging the power of machine-discernible entity relationship
semantics and first-order logic.

In the most basic, super-paranoid scenario, if I want to constrain access
to a resource to the nebulous entity "You", I would share a PKCS#12 document
with that entity. I would also have an access policy in place based on
the data in said document. I would also call "You" by phone to give you
the password of that PKCS#12 document. Once that's all sorted, you can
open the document, get your crypto data installed in your local keystore
and then visit the resource I've published :-)
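
For concreteness, here is roughly what that hand-off could look like with
the Python cryptography package - a sketch, not the only way to do it; the
WebID URI, file name and password are placeholders:

    # Mint a key pair and a minimal self-signed certificate whose SAN
    # holds a WebID URI, then wrap both in a password-protected PKCS#12
    # file to be shared; the password travels out of band (by phone).
    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.hazmat.primitives.serialization import pkcs12

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"You")])
    cert = (
        x509.CertificateBuilder()
        .subject_name(name).issuer_name(name)          # self-signed
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.datetime.utcnow())
        .not_valid_after(datetime.datetime.utcnow()
                         + datetime.timedelta(days=365))
        .add_extension(x509.SubjectAlternativeName(
            [x509.UniformResourceIdentifier(u"https://you.example/profile#me")]),
            critical=False)
        .sign(key, hashes.SHA256())
    )
    p12 = pkcs12.serialize_key_and_certificates(
        name=b"you", key=key, cert=cert, cas=None,
        encryption_algorithm=serialization.BestAvailableEncryption(b"told-by-phone"))
    with open("you.p12", "wb") as f:                   # share this file
        f.write(p12)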

Links:

1. http://research.microsoft.com/en-us/projects/u-prove/
2. http://en.wikipedia.org/wiki/Zero-knowledge_proof -- I don't see
anything about that being incompatible with what the combined use of
names based on de-referencable URIs, Linked Data, Entity Relationship
Semantics, Reasoning, and existing PKI delivers.

--

Regards,

Kingsley Idehen
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca handle: @kidehen
Google+ Profile: https://plus.google.com/112399767740508618350/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
David Chadwick
2012-10-18 16:56:27 UTC
Permalink
and if the user puts his/her email address attribute in the U-Prove token???

David

Kingsley Idehen
2012-10-18 17:33:28 UTC
Permalink
On 10/18/12 12:56 PM, David Chadwick wrote:
> and if the user puts his/her email address attribute in the U-Prove
> token???

Then they've broken un-linkability since a mailto: scheme URI is the
ultimate unit of privacy compromise on today's Internet and Web, bearing
in mind the state of the underground personal information networks.
Every social network uses your mailto: scheme URI as a key component.
Even if they don't share this data with 3rd parties, other pieces of the
puzzle come together quite easily due to the fundamental semantics
associated with mailto: scheme URIs, i.e., you only need to have them in
an inverseFunctionalProperty relationship for the rest of the profile
coalescence to follow.
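
Here is a small sketch of that coalescence (Python with rdflib; the two
profile snippets and the address are invented): two otherwise unrelated
profiles collapse into one the moment they share a foaf:mbox value,
precisely because foaf:mbox is declared an owl:InverseFunctionalProperty.

# Sketch: two profiles from different sites "smushed" on a shared mailto: URI.
# foaf:mbox is an owl:InverseFunctionalProperty, so a shared value implies that
# the two subjects denote the same person. All data below is invented.
from collections import defaultdict
from rdflib import Graph, Namespace

FOAF = Namespace("http://xmlns.com/foaf/0.1/")

profiles = """
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

<https://social-a.example/u/alice#me> foaf:name "Alice A." ;
    foaf:mbox <mailto:alice@example.org> .

<https://forum-b.example/member/4711#id> foaf:nick "al1ce" ;
    foaf:mbox <mailto:alice@example.org> .
"""

g = Graph()
g.parse(data=profiles, format="turtle")

# Group subjects by their mailto: value; any group with more than one subject
# is a set of identifiers that coalesce into a single profile.
by_mbox = defaultdict(set)
for person, mbox in g.subject_objects(FOAF.mbox):
    by_mbox[mbox].add(person)

for mbox, people in by_mbox.items():
    if len(people) > 1:
        print(str(mbox) + " links: " + ", ".join(sorted(str(p) for p in people)))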

The world I envisage starts with the ability to generate (with ease)
X.509 certificates bearing WebIDs in their SAN slots. We will have many
such certificates for a variety of purposes. An email address or any
other overtly identifiable data isn't a mandatory component of an X.509
certificate :-)

If I want to send something that's only readable by You, I would encrypt
that email via S/MIME. When I make an access policy or resource ACL, I
tend not to require email addresses; see, for instance, [1].
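
As a concrete example, a resource ACL keyed on a WebID rather than an
email address might look like the following sketch (Python with rdflib,
using the W3C Basic Access Control vocabulary; all URIs are placeholders
of my own):

# Sketch: an access rule granting Read access on one resource to one WebID.
# Written with the W3C Basic Access Control (acl:) vocabulary; placeholders only.
from rdflib import Graph, Namespace, URIRef
from rdflib.namespace import RDF

ACL = Namespace("http://www.w3.org/ns/auth/acl#")

rule = URIRef("https://example.org/docs/report.acl#friend-read")

g = Graph()
g.bind("acl", ACL)
g.add((rule, RDF.type, ACL.Authorization))
g.add((rule, ACL.accessTo, URIRef("https://example.org/docs/report")))
g.add((rule, ACL.agent, URIRef("https://example.org/people/you/card#me")))
g.add((rule, ACL.mode, ACL.Read))

print(g.serialize(format="turtle"))

The agent is named by a de-referencable URI, not by a mailto: address, so
nothing in the policy itself leaks an email.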

Links:

1. http://bit.ly/Rbnayv -- some posts about the use of social entity
relationship semantics to constrain access to my personal data space on
the Web.

Kingsley
>
> David
>
> On 18/10/2012 17:52, Kingsley Idehen wrote:
>> On 10/18/12 12:06 PM, Ben Laurie wrote:
>>> On 18 October 2012 16:41, Kingsley Idehen <kidehen-HpHEqLDO2a7UEDaH6ef/***@public.gmane.org>
>>> wrote:
>>>> On 10/18/12 11:34 AM, Ben Laurie wrote:
>>>>> On 9 October 2012 14:19, Henry Story <henry.story-***@public.gmane.org> wrote:
>>>>>> Still in my conversations I have found that many people in security
>>>>>> spaces
>>>>>> just don't seem to be able to put the issues in context, and can
>>>>>> get
>>>>>> sidetracked
>>>>>> into not wanting any linkability at all. Not sure how to fix that.
>>>>> You persist in missing the point, which is why you can't fix it. The
>>>>> point is that we want unlinkability to be possible. Protocols that do
>>>>> not permit it or make it difficult are problematic. I have certainly
>>>>> never said that you should always be unlinked, that would be stupid
>>>>> (in fact, I once wrote a paper about how unpleasant it would be).
>>>>>
>>>>> As I once wrote, anonymity should be the substrate. Once you have
>>>>> that, you can the build on it to be linked when you choose to be, and
>>>>> not linked when you choose not to be. If it is not the substrate,
>>>>> then
>>>>> you do not have this choice.
>>>>>
>>>>>
>>>>>
>>>>>
>>>> Do you have example of what you describe? By that question I mean:
>>>> implicit
>>>> anonymity as a functional substrate of some realm that we experience
>>>> today?
>>> That's what selective disclosure systems like U-Prove and the PRIME
>>> project are all about.
>>>
>>>
>>>
>> Ben,
>>
>> How is the following incongruent with the fundamental points we've been
>> trying to make about the combined effects of URIs, Linked Data, and
>> Logic en route to controlling privacy at Web-scale?
>>
>> Excerpt from Microsoft page [1]:
>>
>> A U-Prove token is a new type of credential similar to a PKI certificate
>> that can encode attributes of any type, but with two important
>> differences:
>>
>> 1) The issuance and presentation of a token is unlinkable due to the
>> special type of public key and signature encoded in the token; the
>> cryptographic “wrapping” of the attributes contain no correlation
>> handles. This prevents unwanted tracking of users when they use their
>> U-Prove tokens, even by colluding insiders.
>>
>> 2) Users can minimally disclose information about what attributes are
>> encoded in a token in response to dynamic verifier policies. As an
>> example, a user may choose to only disclose a subset of the encoded
>> attributes, prove that her undisclosed name does not appear on a
>> blacklist, or prove that she is of age without disclosing her actual
>> birthdate.
>>
>>
>> Why are you assuming that a hyperlink based pointer (de-referencable
>> URI) placed in the SAN of minimalist X.509 certificate (i.e., one that
>> has now personally identifiable information) can't deliver the above and
>> more?
>>
>> Please note, WebID is a piece of the picture. Linked Data, Entity
>> Relationship Semantics and Logic are other critical parts. That's why
>> there isn't a golden ontology for resource access policies, the resource
>> publisher can construct a plethora of resource access policies en route
>> to leveraging the power of machine discernible entity relationship
>> semantics and first-order logic.
>>
>> In a most basic super paranoid scenario, if I want to constrain access
>> to a resource to nebulous entity "You" I would share a PKCS#12 document
>> with that entity. I would also have an access policy in place based on
>> the data in said document. I would also call "You" by phone to give you
>> the password of that PKCS#12 document. Once that's all sorted, you can
>> open the document, get your crytpo data installed in your local keystore
>> and then visit the resource I've published :-)
>>
>> Links:
>>
>> 1. http://research.microsoft.com/en-us/projects/u-prove/
>> 2. http://en.wikipedia.org/wiki/Zero-knowledge_proof -- I don't see
>> anything about that being incompatible with what the combined use of
>> de-referencable URIs based names, Linked Data, Entity Relationship
>> Semantics, Reasoning, and existing PKI deliver.
>>
>
>


--

Regards,

Kingsley Idehen
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca handle: @kidehen
Google+ Profile: https://plus.google.com/112399767740508618350/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
David Chadwick
2012-10-18 18:17:47 UTC
Permalink
On 18/10/2012 18:33, Kingsley Idehen wrote:
> On 10/18/12 12:56 PM, David Chadwick wrote:
>> and if the user puts his/her email address attribute in the U-Prove
>> token???
>
> Then they've broken un-linkability since a mailto: scheme URI is the
> ultimate unit of privacy compromise on today's Internet and Web,

Yes, I know. My main point was that using U-Prove or Idemix means employing
a very sophisticated privacy-protecting encryption scheme that can
easily and trivially be undone by everyday users who provide their email
address attributes inside the tokens. So I suspect the applicability of
these tokens will be quite limited.

regards

David

bearing
> in mind the state of the underground personal information networks.
> Every social network uses your mailto: scheme URI as a key component.
> Even if they don't share this data with 3rd parties, other pieces of the
> puzzle come together quite easily due to the fundamental semantics
> associated with mailto: scheme URIs i.e., you only need to have them in
> an inverseFunctionalProperty relationship for entropy to drive the rest
> of the profile coalescence.
>
> The world I envisage starts with the ability to generate (with ease)
> X.509 certificates bearing WebIDs in their SAN slots. We will have many
> such certificates for a variety of purposes. An email address or any
> other overtly identifiable data isn't a mandatory component an X.509
> certificate :-)
>
> If I want to send something that's only readable by You, I would encrypt
> that email via S/MIME. When I make an access policy or resource ACL I
> tend not to require email addresses, for instance [1].
>
> Links:
>
> 1. http://bit.ly/Rbnayv -- some posts about the use of social entity
> relationship semantics to constrain access to my personal data space on
> the Web.
>
> Kingsley
>>
>> David
>>
>> On 18/10/2012 17:52, Kingsley Idehen wrote:
>>> On 10/18/12 12:06 PM, Ben Laurie wrote:
>>>> On 18 October 2012 16:41, Kingsley Idehen <kidehen-HpHEqLDO2a7UEDaH6ef/***@public.gmane.org>
>>>> wrote:
>>>>> On 10/18/12 11:34 AM, Ben Laurie wrote:
>>>>>> On 9 October 2012 14:19, Henry Story <henry.story-***@public.gmane.org> wrote:
>>>>>>> Still in my conversations I have found that many people in security
>>>>>>> spaces
>>>>>>> just don't seem to be able to put the issues in context, and can
>>>>>>> get
>>>>>>> sidetracked
>>>>>>> into not wanting any linkability at all. Not sure how to fix that.
>>>>>> You persist in missing the point, which is why you can't fix it. The
>>>>>> point is that we want unlinkability to be possible. Protocols that do
>>>>>> not permit it or make it difficult are problematic. I have certainly
>>>>>> never said that you should always be unlinked, that would be stupid
>>>>>> (in fact, I once wrote a paper about how unpleasant it would be).
>>>>>>
>>>>>> As I once wrote, anonymity should be the substrate. Once you have
>>>>>> that, you can the build on it to be linked when you choose to be, and
>>>>>> not linked when you choose not to be. If it is not the substrate,
>>>>>> then
>>>>>> you do not have this choice.
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>> Do you have example of what you describe? By that question I mean:
>>>>> implicit
>>>>> anonymity as a functional substrate of some realm that we experience
>>>>> today?
>>>> That's what selective disclosure systems like U-Prove and the PRIME
>>>> project are all about.
>>>>
>>>>
>>>>
>>> Ben,
>>>
>>> How is the following incongruent with the fundamental points we've been
>>> trying to make about the combined effects of URIs, Linked Data, and
>>> Logic en route to controlling privacy at Web-scale?
>>>
>>> Excerpt from Microsoft page [1]:
>>>
>>> A U-Prove token is a new type of credential similar to a PKI certificate
>>> that can encode attributes of any type, but with two important
>>> differences:
>>>
>>> 1) The issuance and presentation of a token is unlinkable due to the
>>> special type of public key and signature encoded in the token; the
>>> cryptographic “wrapping” of the attributes contain no correlation
>>> handles. This prevents unwanted tracking of users when they use their
>>> U-Prove tokens, even by colluding insiders.
>>>
>>> 2) Users can minimally disclose information about what attributes are
>>> encoded in a token in response to dynamic verifier policies. As an
>>> example, a user may choose to only disclose a subset of the encoded
>>> attributes, prove that her undisclosed name does not appear on a
>>> blacklist, or prove that she is of age without disclosing her actual
>>> birthdate.
>>>
>>>
>>> Why are you assuming that a hyperlink based pointer (de-referencable
>>> URI) placed in the SAN of minimalist X.509 certificate (i.e., one that
>>> has now personally identifiable information) can't deliver the above and
>>> more?
>>>
>>> Please note, WebID is a piece of the picture. Linked Data, Entity
>>> Relationship Semantics and Logic are other critical parts. That's why
>>> there isn't a golden ontology for resource access policies, the resource
>>> publisher can construct a plethora of resource access policies en route
>>> to leveraging the power of machine discernible entity relationship
>>> semantics and first-order logic.
>>>
>>> In a most basic super paranoid scenario, if I want to constrain access
>>> to a resource to nebulous entity "You" I would share a PKCS#12 document
>>> with that entity. I would also have an access policy in place based on
>>> the data in said document. I would also call "You" by phone to give you
>>> the password of that PKCS#12 document. Once that's all sorted, you can
>>> open the document, get your crytpo data installed in your local keystore
>>> and then visit the resource I've published :-)
>>>
>>> Links:
>>>
>>> 1. http://research.microsoft.com/en-us/projects/u-prove/
>>> 2. http://en.wikipedia.org/wiki/Zero-knowledge_proof -- I don't see
>>> anything about that being incompatible with what the combined use of
>>> de-referencable URIs based names, Linked Data, Entity Relationship
>>> Semantics, Reasoning, and existing PKI deliver.
>>>
>>
>>
>
>
Kingsley Idehen
2012-10-18 18:27:33 UTC
Permalink
On 10/18/12 2:17 PM, David Chadwick wrote:
>
>
> On 18/10/2012 18:33, Kingsley Idehen wrote:
>> On 10/18/12 12:56 PM, David Chadwick wrote:
>>> and if the user puts his/her email address attribute in the U-Prove
>>> token???
>>
>> Then they've broken un-linkability since a mailto: scheme URI is the
>> ultimate unit of privacy compromise on today's Internet and Web,
>
> yes I know. My main point was that using U-Prove or Idemix is
> employing a very sophisticated privacy protecting encryption scheme
> that can easily and trivially be undone by everyday users who provide
> their email address attributes inside the tokens. So I suspect the
> applicability of these tokens will be quite limited
>
> regards
>
> David

David,

Correct!

My apologies for misunderstanding the position you were taking :-)

Kingsley
>
> bearing
>> in mind the state of the underground personal information networks.
>> Every social network uses your mailto: scheme URI as a key component.
>> Even if they don't share this data with 3rd parties, other pieces of the
>> puzzle come together quite easily due to the fundamental semantics
>> associated with mailto: scheme URIs i.e., you only need to have them in
>> an inverseFunctionalProperty relationship for entropy to drive the rest
>> of the profile coalescence.
>>
>> The world I envisage starts with the ability to generate (with ease)
>> X.509 certificates bearing WebIDs in their SAN slots. We will have many
>> such certificates for a variety of purposes. An email address or any
>> other overtly identifiable data isn't a mandatory component an X.509
>> certificate :-)
>>
>> If I want to send something that's only readable by You, I would encrypt
>> that email via S/MIME. When I make an access policy or resource ACL I
>> tend not to require email addresses, for instance [1].
>>
>> Links:
>>
>> 1. http://bit.ly/Rbnayv -- some posts about the use of social entity
>> relationship semantics to constrain access to my personal data space on
>> the Web.
>>
>> Kingsley
>>>
>>> David
>>>
>>> On 18/10/2012 17:52, Kingsley Idehen wrote:
>>>> On 10/18/12 12:06 PM, Ben Laurie wrote:
>>>>> On 18 October 2012 16:41, Kingsley Idehen <kidehen-HpHEqLDO2a7UEDaH6ef/***@public.gmane.org>
>>>>> wrote:
>>>>>> On 10/18/12 11:34 AM, Ben Laurie wrote:
>>>>>>> On 9 October 2012 14:19, Henry Story <henry.story-***@public.gmane.org>
>>>>>>> wrote:
>>>>>>>> Still in my conversations I have found that many people in
>>>>>>>> security
>>>>>>>> spaces
>>>>>>>> just don't seem to be able to put the issues in context, and can
>>>>>>>> get
>>>>>>>> sidetracked
>>>>>>>> into not wanting any linkability at all. Not sure how to fix that.
>>>>>>> You persist in missing the point, which is why you can't fix it.
>>>>>>> The
>>>>>>> point is that we want unlinkability to be possible. Protocols
>>>>>>> that do
>>>>>>> not permit it or make it difficult are problematic. I have
>>>>>>> certainly
>>>>>>> never said that you should always be unlinked, that would be stupid
>>>>>>> (in fact, I once wrote a paper about how unpleasant it would be).
>>>>>>>
>>>>>>> As I once wrote, anonymity should be the substrate. Once you have
>>>>>>> that, you can the build on it to be linked when you choose to
>>>>>>> be, and
>>>>>>> not linked when you choose not to be. If it is not the substrate,
>>>>>>> then
>>>>>>> you do not have this choice.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>> Do you have example of what you describe? By that question I mean:
>>>>>> implicit
>>>>>> anonymity as a functional substrate of some realm that we experience
>>>>>> today?
>>>>> That's what selective disclosure systems like U-Prove and the PRIME
>>>>> project are all about.
>>>>>
>>>>>
>>>>>
>>>> Ben,
>>>>
>>>> How is the following incongruent with the fundamental points we've
>>>> been
>>>> trying to make about the combined effects of URIs, Linked Data, and
>>>> Logic en route to controlling privacy at Web-scale?
>>>>
>>>> Excerpt from Microsoft page [1]:
>>>>
>>>> A U-Prove token is a new type of credential similar to a PKI
>>>> certificate
>>>> that can encode attributes of any type, but with two important
>>>> differences:
>>>>
>>>> 1) The issuance and presentation of a token is unlinkable due to the
>>>> special type of public key and signature encoded in the token; the
>>>> cryptographic “wrapping” of the attributes contain no correlation
>>>> handles. This prevents unwanted tracking of users when they use their
>>>> U-Prove tokens, even by colluding insiders.
>>>>
>>>> 2) Users can minimally disclose information about what attributes are
>>>> encoded in a token in response to dynamic verifier policies. As an
>>>> example, a user may choose to only disclose a subset of the encoded
>>>> attributes, prove that her undisclosed name does not appear on a
>>>> blacklist, or prove that she is of age without disclosing her actual
>>>> birthdate.
>>>>
>>>>
>>>> Why are you assuming that a hyperlink based pointer (de-referencable
>>>> URI) placed in the SAN of minimalist X.509 certificate (i.e., one that
>>>> has now personally identifiable information) can't deliver the
>>>> above and
>>>> more?
>>>>
>>>> Please note, WebID is a piece of the picture. Linked Data, Entity
>>>> Relationship Semantics and Logic are other critical parts. That's why
>>>> there isn't a golden ontology for resource access policies, the
>>>> resource
>>>> publisher can construct a plethora of resource access policies en
>>>> route
>>>> to leveraging the power of machine discernible entity relationship
>>>> semantics and first-order logic.
>>>>
>>>> In a most basic super paranoid scenario, if I want to constrain access
>>>> to a resource to nebulous entity "You" I would share a PKCS#12
>>>> document
>>>> with that entity. I would also have an access policy in place based on
>>>> the data in said document. I would also call "You" by phone to give
>>>> you
>>>> the password of that PKCS#12 document. Once that's all sorted, you can
>>>> open the document, get your crytpo data installed in your local
>>>> keystore
>>>> and then visit the resource I've published :-)
>>>>
>>>> Links:
>>>>
>>>> 1. http://research.microsoft.com/en-us/projects/u-prove/
>>>> 2. http://en.wikipedia.org/wiki/Zero-knowledge_proof -- I don't see
>>>> anything about that being incompatible with what the combined use of
>>>> de-referencable URIs based names, Linked Data, Entity Relationship
>>>> Semantics, Reasoning, and existing PKI deliver.
>>>>
>>>
>>>
>>
>>
>
>
>


--

Regards,

Kingsley Idehen
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca handle: @kidehen
Google+ Profile: https://plus.google.com/112399767740508618350/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Ben Laurie
2012-10-19 11:21:47 UTC
Permalink
On 18 October 2012 17:52, Kingsley Idehen <***@openlinksw.com> wrote:
> On 10/18/12 12:06 PM, Ben Laurie wrote:
>>
>> On 18 October 2012 16:41, Kingsley Idehen <***@openlinksw.com> wrote:
>>>
>>> On 10/18/12 11:34 AM, Ben Laurie wrote:
>>>>
>>>> On 9 October 2012 14:19, Henry Story <***@bblfish.net> wrote:
>>>>>
>>>>> Still in my conversations I have found that many people in security
>>>>> spaces
>>>>> just don't seem to be able to put the issues in context, and can get
>>>>> sidetracked
>>>>> into not wanting any linkability at all. Not sure how to fix that.
>>>>
>>>> You persist in missing the point, which is why you can't fix it. The
>>>> point is that we want unlinkability to be possible. Protocols that do
>>>> not permit it or make it difficult are problematic. I have certainly
>>>> never said that you should always be unlinked, that would be stupid
>>>> (in fact, I once wrote a paper about how unpleasant it would be).
>>>>
>>>> As I once wrote, anonymity should be the substrate. Once you have
>>>> that, you can the build on it to be linked when you choose to be, and
>>>> not linked when you choose not to be. If it is not the substrate, then
>>>> you do not have this choice.
>>>>
>>>>
>>>>
>>>>
>>> Do you have example of what you describe? By that question I mean:
>>> implicit
>>> anonymity as a functional substrate of some realm that we experience
>>> today?
>>
>> That's what selective disclosure systems like U-Prove and the PRIME
>> project are all about.
>>
>>
>>
> Ben,
>
> How is the following incongruent with the fundamental points we've been
> trying to make about the combined effects of URIs, Linked Data, and Logic en
> route to controlling privacy at Web-scale?
>
> Excerpt from Microsoft page [1]:
>
> A U-Prove token is a new type of credential similar to a PKI certificate
> that can encode attributes of any type, but with two important differences:
>
> 1) The issuance and presentation of a token is unlinkable due to the special
> type of public key and signature encoded in the token; the cryptographic
> “wrapping” of the attributes contain no correlation handles. This prevents
> unwanted tracking of users when they use their U-Prove tokens, even by
> colluding insiders.
>
> 2) Users can minimally disclose information about what attributes are
> encoded in a token in response to dynamic verifier policies. As an example,
> a user may choose to only disclose a subset of the encoded attributes, prove
> that her undisclosed name does not appear on a blacklist, or prove that she
> is of age without disclosing her actual birthdate.
>
>
> Why are you assuming that a hyperlink based pointer (de-referencable URI)
> placed in the SAN of minimalist X.509 certificate (i.e., one that has now
> personally identifiable information) can't deliver the above and more?

Because it contains "correlation handles" to use the terminology of the quote.

> Please note, WebID is a piece of the picture. Linked Data, Entity
> Relationship Semantics and Logic are other critical parts. That's why there
> isn't a golden ontology for resource access policies, the resource publisher
> can construct a plethora of resource access policies en route to leveraging
> the power of machine discernible entity relationship semantics and
> first-order logic.
>
> In a most basic super paranoid scenario, if I want to constrain access to a
> resource to nebulous entity "You" I would share a PKCS#12 document with that
> entity. I would also have an access policy in place based on the data in
> said document. I would also call "You" by phone to give you the password of
> that PKCS#12 document. Once that's all sorted, you can open the document,
> get your crytpo data installed in your local keystore and then visit the
> resource I've published :-)
>
> Links:
>
> 1. http://research.microsoft.com/en-us/projects/u-prove/
> 2. http://en.wikipedia.org/wiki/Zero-knowledge_proof -- I don't see anything
> about that being incompatible with what the combined use of de-referencable
> URIs based names, Linked Data, Entity Relationship Semantics, Reasoning, and
> existing PKI deliver.
>
>
> --
>
> Regards,
>
> Kingsley Idehen
> Founder & CEO
> OpenLink Software
> Company Web: http://www.openlinksw.com
> Personal Weblog: http://www.openlinksw.com/blog/~kidehen
> Twitter/Identi.ca handle: @kidehen
> Google+ Profile: https://plus.google.com/112399767740508618350/about
> LinkedIn Profile: http://www.linkedin.com/in/kidehen
>
>
>
>
>
Anders Rundgren
2012-10-18 19:30:54 UTC
Permalink
On 2012-10-18 18:06, Ben Laurie wrote:
>>
>> Do you have example of what you describe? By that question I mean: implicit
>> anonymity as a functional substrate of some realm that we experience today?
>
> That's what selective disclosure systems like U-Prove and the PRIME
> project are all about.
>

Which will never be of any practical use, because without a reference
back you cannot really get anything useful done. The search-service
monopoly your employer (Google) runs is clearly among the largest threats
to privacy there is, so I don't understand what you are blabbing about.

Is this about theory versus practice? :-)

Anders
Henry Story
2012-10-19 13:13:42 UTC
Permalink
On 19 Oct 2012, at 14:43, Klaas Wierenga <***@cisco.com> wrote:

> Hi,
>
> (as a side note: shouldn't this be on the privacy list rather than the saag list?)

It kind of covers both security and privacy; they are closely related. Also,
WebID is a protocol that uses IETF's TLS so closely that I like to have the IETF
people in the loop. We're kind of between two institutions here. (I am used to
that: my mother is Austrian, my father British; I lived in the US for a long time
and was brought up in France :-).

>
> On Oct 18, 2012, at 9:30 PM, Anders Rundgren <***@telia.com> wrote:
>
>> On 2012-10-18 18:06, Ben Laurie wrote:
>>>>
>>>> Do you have example of what you describe? By that question I mean: implicit
>>>> anonymity as a functional substrate of some realm that we experience today?
>>>
>>> That's what selective disclosure systems like U-Prove and the PRIME
>>> project are all about.
>>>
>>
>> Which will never be of any practical use because without a reference
>> back you cannot really get anything useful done. The search service
>> monopoly your employer (Google) runs is clearly among the largest threats
>> to privacy there is so I don't understand what you are blabbing about.
>>
>> Is this about theory versus practice :-)
>
> Let's refrain from ad hominem attacks in a technical discussion….

agree.

But I think the fear expressed by that attack is justified, and is really part
of what this thread is about. By focusing on the unlinkability of identifiers, one
in fact creates the space for large mega providers to emerge, with a Panopticon-like
oversight over huge numbers of users. While I do wish to applaud those
services for the bold vision they have displayed in making us conscious of
the advantages to be gained by working together on such a scale, I wish to enlarge
that vision to a much larger space, allowing the same to be done by players that
do not wish to, or legally cannot, accept those providers as intermediaries.
>
> I don't think anyone has argued that linkability is a bad thing per se, what I believe is the crux is whether the links exists -by default- (like locators for a person that can be looked up by 3d parties in DNS) rather than -by choice-. It is the difference between being listed in the phone directory versus giving someone your phone number. I think the likes of Tor are not sufficient here, if the norm is that you are linkable than someone that is using Tor is by definition suspicious…

It is helpful to bring Tor into the discussion because it helps show what types of technology
can fix that type of problem.

> David Chadwick rightfully remarks that there is a balance that you need to strike based on a risk analysis, for me the question is how much of that risk analysis you want to leave to the protocol designer versus the end-user.

In risk analysis you need to also consider the other side of the question: what do you do if you don't have linkability? The answer is that you have to go to a central provider.

> As an end-user I like to have sufficient control over my privacy without having to understand how to do Tor.

If Tor, or something similar, became widespread, you'd have no trouble using it, just as most people using Apple's products have no trouble using Unix (it used to be argued that Unix was impossibly difficult to use).

Anyway, we have a continuum:

1. You use a mega provider with one login to it
a. the mega provider can read all your mail, and everything you are communicating with other people
b. the telcos can tell you are using the mega provider - but not what you are communicating about (assuming you use TLS)

2. You use WebID over TLS + Access Controlled Read Write Web
a. you can communicate only with the people/organisation you want to
( no need for a mega provider, though they are not excluded )
b. the telcos can see where your traffic is going more precisely - but they can't read your messages

3. You use WebID + ACLed RWW + Tor
a. you can communicate with the people/orgs you want to (and only them)
b. the telcos can't see where your traffic is going

That is the continuum. So currently we are at 1. WebID adds the choice of 2 and 3, to increase
the options for privacy.

>
> Klaas
>

Social Web Architect
http://bblfish.net/
Anders Rundgren
2012-10-19 13:19:10 UTC
Permalink
On 2012-10-19 14:43, Klaas Wierenga wrote:
> Hi,
>
> (as a side note: shouldn't this be on the privacy list rather than the saag list?)
>
> On Oct 18, 2012, at 9:30 PM, Anders Rundgren <anders.rundgren-***@public.gmane.org> wrote:
>
>> On 2012-10-18 18:06, Ben Laurie wrote:
>>>> Do you have example of what you describe? By that question I mean: implicit
>>>> anonymity as a functional substrate of some realm that we experience today?
>>> That's what selective disclosure systems like U-Prove and the PRIME
>>> project are all about.
>>>
>> Which will never be of any practical use because without a reference
>> back you cannot really get anything useful done. The search service
>> monopoly your employer (Google) runs is clearly among the largest threats
>> to privacy there is so I don't understand what you are blabbing about.
>>
>> Is this about theory versus practice :-)

> Let's refrain from ad hominem attacks in a technical discussion….


Pardon, I get a little bit bored by hearing folks from Google preach about privacy when they are sitting on one of the largest piles of personal information there is.

And U-Prove surely hasn't been a success. I expect it to fail like all the other Microsoft ID-related initiatives, from Passport to InformationCards and onward.

>
> I don't think anyone has argued that linkability is a bad thing per se, what I believe is the crux is whether the links exists -by default- (like locators for a person that can be looked up by 3d parties in DNS) rather than -by choice-. It is the difference between being listed in the phone directory versus giving someone your phone number. I think the likes of Tor are not sufficient here, if the norm is that you are linkable than someone that is using Tor is by definition suspicious…
> David Chadwick rightfully remarks that there is a balance that you need to strike based on a risk analysis, for me the question is how much of that risk analysis you want to leave to the protocol designer versus the end-user. As an end-user I like to have sufficient control over my privacy without having to understand how to do Tor.

I think that the unlinkability should be put in a wider privacy context:
- We know that cell-phone providers know not only who we speak to, but also our surfing habits, and our location.
- We also know that 0.5Bn individuals have a Facebook account.
- We also know that the healthcare community/industry is building HUGE journal systems making WikiLeaks-like attacks both possible and potentially useful.

So I honestly do not think that a globally unique (highly linkable) e-mail address is something anybody except very paranoid people should worry about.
BTW, I use Google as an IdP for several other sites, and I like it.

Identity theft seems to be a MUCH worse problem.

Well, IF there had been anonymous digital money, that would have been great! But it didn't work, for a lot of reasons, including unlinkability, which opens the gates to money laundering.

Anders


>
> Klaas
>
>
Kingsley Idehen
2012-10-19 14:56:37 UTC
Permalink
On 10/19/12 8:43 AM, Klaas Wierenga wrote:
> Let's refrain from ad hominem attacks in a technical discussion….
>
> I don't think anyone has argued that linkability is a bad thing per se, what I believe is the crux is whether the links exists -by default- (like locators for a person that can be looked up by 3d parties in DNS) rather than -by choice-.

You look up machine names via DNS. Neither you nor I are machines.

> It is the difference between being listed in the phone directory versus giving someone your phone number.

I am not my phone or combination of phone and phone number.

As per my earlier post, the Web of Linked Documents (yet another network)
has resolvable names for documents which, combined with a user agent and
machine name, give you a composite key. This composite key still doesn't
denote you.

Then we have the Web of Linked Data, where a name for the entity "you" is
added to the composite. This URI that denotes the entity isn't as linkable
as Ben presumes. It isn't an entropy-favorable email address (a.k.a. a
mailto: scheme URI); it can embody all of the dexterity required to
handle the intersection of context fluidity and nebulous identity.

> I think the likes of Tor are not sufficient here, if the norm is that you are linkable than someone that is using Tor is by definition suspicious…

Depends on the context, and herein lies the problem. Identity is
nebulous and context is fluid. Thus, you have to leverage entity
relationship graphs, their relationship semantics, and logic.

> David Chadwick rightfully remarks that there is a balance that you need to strike based on a risk analysis, for me the question is how much of that risk analysis you want to leave to the protocol designer versus the end-user. As an end-user I like to have sufficient control over my privacy without having to understand how to do Tor.

Correct re. Tor. I don't see Tor as the answer per se, but I understand
why Henry presents it in response to Ben's arguments.

What we all need is a solution that's capable of handling the
challenging intersection of context fluidity and nebulous identity, at
Web-scale. This is really what you end up with when you combine the
following items that are naturally integrated into the architecture of the
Web (a rough sketch of how they fit together follows the list):

1. URIs
2. WebID -- a cryptographically verifiable, personal, de-referencable URI
3. WebID protocol -- the verification/authentication mechanism
4. Linked Data -- entity relationship graph based structured data
representation that leverages de-referencable URIs
5. Entity Relationship Semantics -- that leverages first-order logic as
the basis for a conceptual schema
6. Data Access Policies or Rules -- based on Logic.
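
Here is that sketch of how items 2-4 come together at authentication time
(Python; it assumes an RSA key published in the profile with the cert:
ontology's modulus and exponent terms, as in the WebID-TLS examples, and
it skips TLS handling, caching and error cases):

# Sketch of the WebID verification step: take the client certificate already
# presented over TLS, read the WebID from its SAN, de-reference the profile
# document, and check that the profile lists the same RSA public key.
# Assumes cert:modulus (hexBinary) and cert:exponent (integer); no caching,
# no error handling.
from typing import Optional

from cryptography import x509
from rdflib import Graph, Namespace, URIRef

CERT = Namespace("http://www.w3.org/ns/auth/cert#")

def verify_webid(cert_pem: bytes) -> Optional[str]:
    cert = x509.load_pem_x509_certificate(cert_pem)
    san = cert.extensions.get_extension_for_class(x509.SubjectAlternativeName)
    webids = san.value.get_values_for_type(x509.UniformResourceIdentifier)

    pub = cert.public_key().public_numbers()       # RSA modulus n, exponent e

    for webid in webids:
        g = Graph()
        g.parse(webid.split("#")[0])               # fetch the profile document
        for key in g.objects(URIRef(webid), CERT.key):
            mod = g.value(key, CERT.modulus)
            exp = g.value(key, CERT.exponent)
            if mod is not None and exp is not None \
               and int(str(mod), 16) == pub.n and int(exp) == pub.e:
                return webid                       # claimed WebID checks out
    return None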

>
> Klaas


--

Regards,

Kingsley Idehen
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca handle: @kidehen
Google+ Profile: https://plus.google.com/112399767740508618350/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Josh Howlett
2012-10-18 16:08:34 UTC
Permalink
>As I once wrote, anonymity should be the substrate. Once you have
>that, you can the build on it to be linked when you choose to be, and
>not linked when you choose not to be. If it is not the substrate, then
>you do not have this choice.

+1 -- unlinked must be the default, with the option to link. Anything else
is untenable.

Josh.



Janet is a trading name of The JNT Association, a company limited
by guarantee which is registered in England under No. 2881024
and whose Registered Office is at Lumen House, Library Avenue,
Harwell Oxford, Didcot, Oxfordshire. OX11 0SG
Sam Hartman
2012-10-18 18:41:47 UTC
Permalink
>>>>> "Josh" == Josh Howlett <Josh.Howlett-***@public.gmane.org> writes:

>> As I once wrote, anonymity should be the substrate. Once you have
>> that, you can the build on it to be linked when you choose to be,
>> and not linked when you choose not to be. If it is not the
>> substrate, then you do not have this choice.

Josh> +1 -- unlinked must be the default, with the option to
Josh> link. Anything else is untenable.

Josh> Josh.



If you're looking for real unlinkability, that implies no
fingerprinting.

Unfortunately, that rules out a lot of things we generally think of as
good design practices.
It tends to rule out future extensibility, configuration options that can
be remotely observed, and implementation flexibility that can be
remotely observed.

Unfortunately, I think that's too high a price to pay for
unlinkability.
So I've come to the conclusion that anonymity will depend on protocols
like TOR that are specifically designed for it.


If you're talking about some weak form of anonymity/unlinkability that
does not involve forbidding fingerprinting, I'd like to better
understand what you mean by unlinkability and what the expected
advantages of this system are. Then we can evaluate whether it achieves
them.
Mouse
2012-10-18 19:04:25 UTC
Permalink
> [...]
> Unfortunately, I think that's too high of a price to pay for
> unlinkability.
> So I've come to the conclusion that anonymity will depend on
> protocols like TOR specifically designed for it.

Is it my imagination, or is this stuff confusing anonymity with
pseudonymity? I feel reasonably sure I've missed some of the thread,
but what I have seen does seem to be confusing the two.

This whole thing about linking, for example, seems to be based on
linking identities of some sort, implying that the systems in question
*have* identities, in which case they are (at best) pseudonymous, not
anonymous.

/~\ The ASCII Mouse
\ / Ribbon Campaign
X Against HTML ***@rodents-montreal.org
/ \ Email! 7D C8 61 52 5D E7 2D 39 4E F1 31 3E E8 B3 27 4B
Henry Story
2012-10-18 19:20:09 UTC
Permalink
On 18 Oct 2012, at 21:04, Mouse <***@Rodents-Montreal.ORG> wrote:

>> [...]
>> Unfortunately, I think that's too high of a price to pay for
>> unlinkability.
>> So I've come to the conclusion that anonymity will depend on
>> protocols like TOR specifically designed for it.
>
> Is it my imagination, or is this stuff confusing anonymity with
> pseudonymity? I feel reasonably sure I've missed some of the thread,
> but what I have seem does seem to be confusing the two.
>
> This whole thing about linking, for example, seems to be based on
> linking identities of some sort, implying that the systems in question
> *have* identities, in which case they are (at best) pseudonymous, not
> anonymous.

With WebID ( http://webid.info/ ) you have a pseudonymous global identifier
that is tied to a document on the Web that need only reveal your public key.
That WebID can then link to further information that is access controlled,
so that only your friends would be able to see it.

The first diagram in the spec shows this well

http://webid.info/spec/#publishing-the-webid-profile-document
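
For concreteness, a minimal profile document of that kind, built and
serialized here with rdflib, might look like this sketch (the WebID,
modulus and exponent values are placeholders; a real profile carries the
key actually used in your certificate):

# Sketch: the smallest useful WebID profile document - the WebID plus the
# public key it is tied to, and nothing else. All values are placeholders.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

CERT = Namespace("http://www.w3.org/ns/auth/cert#")

me = URIRef("https://example.org/card#me")         # placeholder WebID
key = URIRef("https://example.org/card#key")

g = Graph()
g.bind("cert", CERT)
g.add((me, CERT.key, key))
g.add((key, RDF.type, CERT.RSAPublicKey))
g.add((key, CERT.exponent, Literal(65537, datatype=XSD.integer)))
g.add((key, CERT.modulus,
       Literal("00cb24ed85d6f8a1", datatype=XSD.hexBinary)))  # truncated placeholder

print(g.serialize(format="turtle"))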

If you put WebID behind Tor and only have .onion WebIDs - something that
should be possible to do - then nobody would know WHERE the box hosting your
profile is, so they would not be able to just find your home location
from your IP address. But you would still be able to link up in an access
controlled manner with your friends ( who may or may not be serving their pages
behind Tor ).

You would then be unlinkable in the sense of
http://tools.ietf.org/html/draft-iab-privacy-considerations-03

[[
Within a particular set of information, the
inability of an observer or attacker to distinguish whether two
items of interest are related or not (with a high enough degree of
probability to be useful to the observer or attacker).
]]

from any person that was not able to access the resources. But you would
be linkable by your friends. I think you want both: linkability by those
authorized, unlinkability for those unauthorized. Hence linkability is not
just a negative.

Henry


>
> /~\ The ASCII Mouse
> \ / Ribbon Campaign
> X Against HTML ***@rodents-montreal.org
> / \ Email! 7D C8 61 52 5D E7 2D 39 4E F1 31 3E E8 B3 27 4B
> _______________________________________________
> saag mailing list
> ***@ietf.org
> https://www.ietf.org/mailman/listinfo/saag

Social Web Architect
http://bblfish.net/
Ben Laurie
2012-10-18 19:29:37 UTC
Permalink
On Thu, Oct 18, 2012 at 8:20 PM, Henry Story <***@bblfish.net> wrote:
>
> On 18 Oct 2012, at 21:04, Mouse <***@Rodents-Montreal.ORG> wrote:
>
>>> [...]
>>> Unfortunately, I think that's too high of a price to pay for
>>> unlinkability.
>>> So I've come to the conclusion that anonymity will depend on
>>> protocols like TOR specifically designed for it.
>>
>> Is it my imagination, or is this stuff confusing anonymity with
>> pseudonymity? I feel reasonably sure I've missed some of the thread,
>> but what I have seem does seem to be confusing the two.
>>
>> This whole thing about linking, for example, seems to be based on
>> linking identities of some sort, implying that the systems in question
>> *have* identities, in which case they are (at best) pseudonymous, not
>> anonymous.
>
> With WebID ( http://webid.info/ ) you have a pseudonymous global identifier,
> that is tied to a document on the Web that need only reveal your public key.
> That WebID can then link to further information that is access controlled,
> so that only your friends would be able to see it.
>
> The first diagram in the spec shows this well
>
> http://webid.info/spec/#publishing-the-webid-profile-document
>
> If you put WebID behind TOR and only have .onion WebIDs - something that
> should be possible to do - then nobody would know WHERE the box hosting your
> profile is, so they would not be able to just find your home location
> from your ip-address. But you would still be able to link up in an access
> controlled manner to your friends ( who may or may not be serving their pages
> behind Tor ).
>
> You would then be unlinkable in the sense of
> http://tools.ietf.org/html/draft-iab-privacy-considerations-03
>
> [[
> Within a particular set of information, the
> inability of an observer or attacker to distinguish whether two
> items of interest are related or not (with a high enough degree of
> probability to be useful to the observer or attacker).
> ]]
>
> from any person that was not able to access the resources. But you would
> be linkable by your friends. I think you want both. Linkability by those
> authorized, unlinkability for those unauthorized. Hence linkability is not
> just a negative.

I really feel like I am beating a dead horse at this point, but
perhaps you'll eventually admit it. Your public key links you. Access
control on the rest of the information is irrelevant. Indeed, access
control on the public key is irrelevant, since you must reveal it when
you use the client cert. Incidentally, to observers as well as the
server you connect to.
Henry Story
2012-10-19 12:01:04 UTC
Permalink
On 18 Oct 2012, at 21:29, Ben Laurie <***@links.org> wrote:

> On Thu, Oct 18, 2012 at 8:20 PM, Henry Story <***@bblfish.net> wrote:
>>
>> On 18 Oct 2012, at 21:04, Mouse <***@Rodents-Montreal.ORG> wrote:
>>
>>>> [...]
>>>> Unfortunately, I think that's too high of a price to pay for
>>>> unlinkability.
>>>> So I've come to the conclusion that anonymity will depend on
>>>> protocols like TOR specifically designed for it.
>>>
>>> Is it my imagination, or is this stuff confusing anonymity with
>>> pseudonymity? I feel reasonably sure I've missed some of the thread,
>>> but what I have seem does seem to be confusing the two.
>>>
>>> This whole thing about linking, for example, seems to be based on
>>> linking identities of some sort, implying that the systems in question
>>> *have* identities, in which case they are (at best) pseudonymous, not
>>> anonymous.
>>
>> With WebID ( http://webid.info/ ) you have a pseudonymous global identifier,
>> that is tied to a document on the Web that need only reveal your public key.
>> That WebID can then link to further information that is access controlled,
>> so that only your friends would be able to see it.
>>
>> The first diagram in the spec shows this well
>>
>> http://webid.info/spec/#publishing-the-webid-profile-document
>>
>> If you put WebID behind TOR and only have .onion WebIDs - something that
>> should be possible to do - then nobody would know WHERE the box hosting your
>> profile is, so they would not be able to just find your home location
>> from your ip-address. But you would still be able to link up in an access
>> controlled manner to your friends ( who may or may not be serving their pages
>> behind Tor ).
>>
>> You would then be unlinkable in the sense of
>> http://tools.ietf.org/html/draft-iab-privacy-considerations-03
>>
>> [[
>> Within a particular set of information, the
>> inability of an observer or attacker to distinguish whether two
>> items of interest are related or not (with a high enough degree of
>> probability to be useful to the observer or attacker).
>> ]]
>>
>> from any person that was not able to access the resources. But you would
>> be linkable by your friends. I think you want both. Linkability by those
>> authorized, unlinkability for those unauthorized. Hence linkability is not
>> just a negative.
>
> I really feel like I am beating a dead horse at this point, but
> perhaps you'll eventually admit it. Your public key links you.

The question is to whom? What is the scenario you are imagining, and who is
the attacker there?

> Access
> control on the rest of the information is irrelevant. Indeed, access
> control on the public key is irrelevant, since you must reveal it when
> you use the client cert.

You are imagining that the server I am connecting to, and that I have
decided to identify myself to, is the one that is attacking me? Right?
Because otherwise I cannot understand your issue.

But then I still do not understand your issue, since I deliberately
did connect to that site in an identifiable manner with a global id.
I could have created a locally valid ID only, had I wanted to not
connect with a globally valid one.

So your issue boils down to this: if I connect to a web site deliberately
with a global identifier, then I am globally identified by that web site.
Which is what I wanted.

So perhaps it is up to you to answer: why should I not want that?

> Incidentally, to observers as well as the
> server you connect to.

Not when you use re-negotiation, I think.
And certainly not if you use Tor, right?


Social Web Architect
http://bblfish.net/
Ben Laurie
2012-10-19 13:31:08 UTC
Permalink
On 19 October 2012 13:01, Henry Story <***@bblfish.net> wrote:
>
> On 18 Oct 2012, at 21:29, Ben Laurie <***@links.org> wrote:
>
>> On Thu, Oct 18, 2012 at 8:20 PM, Henry Story <***@bblfish.net> wrote:
>>>
>>> On 18 Oct 2012, at 21:04, Mouse <***@Rodents-Montreal.ORG> wrote:
>>>
>>>>> [...]
>>>>> Unfortunately, I think that's too high of a price to pay for
>>>>> unlinkability.
>>>>> So I've come to the conclusion that anonymity will depend on
>>>>> protocols like TOR specifically designed for it.
>>>>
>>>> Is it my imagination, or is this stuff confusing anonymity with
>>>> pseudonymity? I feel reasonably sure I've missed some of the thread,
>>>> but what I have seem does seem to be confusing the two.
>>>>
>>>> This whole thing about linking, for example, seems to be based on
>>>> linking identities of some sort, implying that the systems in question
>>>> *have* identities, in which case they are (at best) pseudonymous, not
>>>> anonymous.
>>>
>>> With WebID ( http://webid.info/ ) you have a pseudonymous global identifier,
>>> that is tied to a document on the Web that need only reveal your public key.
>>> That WebID can then link to further information that is access controlled,
>>> so that only your friends would be able to see it.
>>>
>>> The first diagram in the spec shows this well
>>>
>>> http://webid.info/spec/#publishing-the-webid-profile-document
>>>
>>> If you put WebID behind TOR and only have .onion WebIDs - something that
>>> should be possible to do - then nobody would know WHERE the box hosting your
>>> profile is, so they would not be able to just find your home location
>>> from your ip-address. But you would still be able to link up in an access
>>> controlled manner to your friends ( who may or may not be serving their pages
>>> behind Tor ).
>>>
>>> You would then be unlinkable in the sense of
>>> http://tools.ietf.org/html/draft-iab-privacy-considerations-03
>>>
>>> [[
>>> Within a particular set of information, the
>>> inability of an observer or attacker to distinguish whether two
>>> items of interest are related or not (with a high enough degree of
>>> probability to be useful to the observer or attacker).
>>> ]]
>>>
>>> from any person that was not able to access the resources. But you would
>>> be linkable by your friends. I think you want both. Linkability by those
>>> authorized, unlinkability for those unauthorized. Hence linkability is not
>>> just a negative.
>>
>> I really feel like I am beating a dead horse at this point, but
>> perhaps you'll eventually admit it. Your public key links you.
>
> The question is to whom? What is the scenario you are imagining, and who is
> the attacker there?
>
>> Access
>> control on the rest of the information is irrelevant. Indeed, access
>> control on the public key is irrelevant, since you must reveal it when
>> you use the client cert.
>
> You are imagining that the server I am connecting to, and that I have
> decided to identify myself to, is the one that is attacking me? Right?
> Because otherwise I cannot understand your issue.
>
> But then I still do not understand your issue, since I deliberately
> did connect to that site in an identifiable manner with a global id.
> I could have created a locally valid ID only, had I wanted to not
> connect with a globally valid one.
>
> So your issue boils down to this: if I connect to a web site deliberately
> with a global identifier, then I am globally identified by that web site.
> Which is what I wanted.
>
> So perhaps it is up to you to answer: why should I not want that?

I am not saying you should not want that, I am saying that ACLs on the
resources do not achieve unlinkability.

>> Incidentally, to observers as well as the
>> server you connect to.
>
> Not when you re-negotiation I think.

That's true, but is not specified in WebID, right? Also, because of
the renegotiation attack, this is currently insecure in many cases.

> And certainly not if you use Tor, right?

Tor has no impact on the visibility of the communication at the server end.

>
>
> Social Web Architect
> http://bblfish.net/
>
>
> _______________________________________________
> saag mailing list
> ***@ietf.org
> https://www.ietf.org/mailman/listinfo/saag
>
Henry Story
2012-10-19 13:46:03 UTC
Permalink
On 19 Oct 2012, at 15:31, Ben Laurie <***@google.com> wrote:

> On 19 October 2012 13:01, Henry Story <***@bblfish.net> wrote:
>>
>> On 18 Oct 2012, at 21:29, Ben Laurie <***@links.org> wrote:
>>
>>> On Thu, Oct 18, 2012 at 8:20 PM, Henry Story <***@bblfish.net> wrote:
>>>>
>>>> On 18 Oct 2012, at 21:04, Mouse <***@Rodents-Montreal.ORG> wrote:
>>>>
>>>>>> [...]
>>>>>> Unfortunately, I think that's too high of a price to pay for
>>>>>> unlinkability.
>>>>>> So I've come to the conclusion that anonymity will depend on
>>>>>> protocols like TOR specifically designed for it.
>>>>>
>>>>> Is it my imagination, or is this stuff confusing anonymity with
>>>>> pseudonymity? I feel reasonably sure I've missed some of the thread,
>>>>> but what I have seem does seem to be confusing the two.
>>>>>
>>>>> This whole thing about linking, for example, seems to be based on
>>>>> linking identities of some sort, implying that the systems in question
>>>>> *have* identities, in which case they are (at best) pseudonymous, not
>>>>> anonymous.
>>>>
>>>> With WebID ( http://webid.info/ ) you have a pseudonymous global identifier,
>>>> that is tied to a document on the Web that need only reveal your public key.
>>>> That WebID can then link to further information that is access controlled,
>>>> so that only your friends would be able to see it.
>>>>
>>>> The first diagram in the spec shows this well
>>>>
>>>> http://webid.info/spec/#publishing-the-webid-profile-document
>>>>
>>>> If you put WebID behind TOR and only have .onion WebIDs - something that
>>>> should be possible to do - then nobody would know WHERE the box hosting your
>>>> profile is, so they would not be able to just find your home location
>>>> from your ip-address. But you would still be able to link up in an access
>>>> controlled manner to your friends ( who may or may not be serving their pages
>>>> behind Tor ).
>>>>
>>>> You would then be unlinkable in the sense of
>>>> http://tools.ietf.org/html/draft-iab-privacy-considerations-03
>>>>
>>>> [[
>>>> Within a particular set of information, the
>>>> inability of an observer or attacker to distinguish whether two
>>>> items of interest are related or not (with a high enough degree of
>>>> probability to be useful to the observer or attacker).
>>>> ]]
>>>>
>>>> from any person that was not able to access the resources. But you would
>>>> be linkable by your friends. I think you want both. Linkability by those
>>>> authorized, unlinkability for those unauthorized. Hence linkability is not
>>>> just a negative.
>>>
>>> I really feel like I am beating a dead horse at this point, but
>>> perhaps you'll eventually admit it. Your public key links you.
>>
>> The question is to whom? What is the scenario you are imagining, and who is
>> the attacker there?
>>
>>> Access
>>> control on the rest of the information is irrelevant. Indeed, access
>>> control on the public key is irrelevant, since you must reveal it when
>>> you use the client cert.
>>
>> You are imagining that the server I am connecting to, and that I have
>> decided to identify myself to, is the one that is attacking me? Right?
>> Because otherwise I cannot understand your issue.
>>
>> But then I still do not understand your issue, since I deliberately
>> did connect to that site in an identifiable manner with a global id.
>> I could have created a locally valid ID only, had I wanted to not
>> connect with a globally valid one.
>>
>> So your issue boils down to this: if I connect to a web site deliberately
>> with a global identifier, then I am globally identified by that web site.
>> Which is what I wanted.
>>
>> So perhaps it is up to you to answer: why should I not want that?
>
> I am not saying you should not want that, I am saying that ACLs on the
> resources do not achieve unlinkability.

Can you expand on what the dangers are?

>
>>> Incidentally, to observers as well as the
>>> server you connect to.
>>
>> Not when you re-negotiation I think.
>
> That's true, but is not specified in WebID, right? Also, because of
> the renegotiation attack, this is currently insecure in many cases.

WebID over TLS does rely on TLS. Security is not a goal one reaches,
it is a way of travelling. So I do expect every security protocol to
have issues. These ones are being fixed, and the more people build on
them, the higher the priority of fixing them will become.

>
>> And certainly not if you use Tor, right?
>
> Tor has no impact on the visibility of the communication at the server end.

You really need to expand on what the danger is. Because again
I think you are thinking of the site I am connecting to as the attacker.
But I may be wrong.

>
>>
>>
>> Social Web Architect
>> http://bblfish.net/
>>
>>
>> _______________________________________________
>> saag mailing list
>> ***@ietf.org
>> https://www.ietf.org/mailman/listinfo/saag
>>

Social Web Architect
http://bblfish.net/
Ben Laurie
2012-10-19 13:52:25 UTC
Permalink
On 19 October 2012 14:46, Henry Story <***@bblfish.net> wrote:
>
> On 19 Oct 2012, at 15:31, Ben Laurie <***@google.com> wrote:
>
>> On 19 October 2012 13:01, Henry Story <***@bblfish.net> wrote:
>>>
>>> On 18 Oct 2012, at 21:29, Ben Laurie <***@links.org> wrote:
>>>
>>>> On Thu, Oct 18, 2012 at 8:20 PM, Henry Story <***@bblfish.net> wrote:
>>>>>
>>>>> On 18 Oct 2012, at 21:04, Mouse <***@Rodents-Montreal.ORG> wrote:
>>>>>
>>>>>>> [...]
>>>>>>> Unfortunately, I think that's too high of a price to pay for
>>>>>>> unlinkability.
>>>>>>> So I've come to the conclusion that anonymity will depend on
>>>>>>> protocols like TOR specifically designed for it.
>>>>>>
>>>>>> Is it my imagination, or is this stuff confusing anonymity with
>>>>>> pseudonymity? I feel reasonably sure I've missed some of the thread,
>>>>>> but what I have seem does seem to be confusing the two.
>>>>>>
>>>>>> This whole thing about linking, for example, seems to be based on
>>>>>> linking identities of some sort, implying that the systems in question
>>>>>> *have* identities, in which case they are (at best) pseudonymous, not
>>>>>> anonymous.
>>>>>
>>>>> With WebID ( http://webid.info/ ) you have a pseudonymous global identifier,
>>>>> that is tied to a document on the Web that need only reveal your public key.
>>>>> That WebID can then link to further information that is access controlled,
>>>>> so that only your friends would be able to see it.
>>>>>
>>>>> The first diagram in the spec shows this well
>>>>>
>>>>> http://webid.info/spec/#publishing-the-webid-profile-document
>>>>>
>>>>> If you put WebID behind TOR and only have .onion WebIDs - something that
>>>>> should be possible to do - then nobody would know WHERE the box hosting your
>>>>> profile is, so they would not be able to just find your home location
>>>>> from your ip-address. But you would still be able to link up in an access
>>>>> controlled manner to your friends ( who may or may not be serving their pages
>>>>> behind Tor ).
>>>>>
>>>>> You would then be unlinkable in the sense of
>>>>> http://tools.ietf.org/html/draft-iab-privacy-considerations-03
>>>>>
>>>>> [[
>>>>> Within a particular set of information, the
>>>>> inability of an observer or attacker to distinguish whether two
>>>>> items of interest are related or not (with a high enough degree of
>>>>> probability to be useful to the observer or attacker).
>>>>> ]]
>>>>>
>>>>> from any person that was not able to access the resources. But you would
>>>>> be linkable by your friends. I think you want both. Linkability by those
>>>>> authorized, unlinkability for those unauthorized. Hence linkability is not
>>>>> just a negative.
>>>>
>>>> I really feel like I am beating a dead horse at this point, but
>>>> perhaps you'll eventually admit it. Your public key links you.
>>>
>>> The question is to whom? What is the scenario you are imagining, and who is
>>> the attacker there?
>>>
>>>> Access
>>>> control on the rest of the information is irrelevant. Indeed, access
>>>> control on the public key is irrelevant, since you must reveal it when
>>>> you use the client cert.
>>>
>>> You are imagining that the server I am connecting to, and that I have
>>> decided to identify myself to, is the one that is attacking me? Right?
>>> Because otherwise I cannot understand your issue.
>>>
>>> But then I still do not understand your issue, since I deliberately
>>> did connect to that site in an identifiable manner with a global id.
>>> I could have created a locally valid ID only, had I wanted to not
>>> connect with a globally valid one.
>>>
>>> So your issue boils down to this: if I connect to a web site deliberately
>>> with a global identifier, then I am globally identified by that web site.
>>> Which is what I wanted.
>>>
>>> So perhaps it is up to you to answer: why should I not want that?
>>
>> I am not saying you should not want that, I am saying that ACLs on the
>> resources do not achieve unlinkability.
>
> Can you expand on what the dangers are?
>
>>
>>>> Incidentally, to observers as well as the
>>>> server you connect to.
>>>
>>> Not when you re-negotiation I think.
>>
>> That's true, but is not specified in WebID, right? Also, because of
>> the renegotiation attack, this is currently insecure in many cases.
>
> WebID on TLS does rely on TLS. Security is not a goal one can reach,
> it is a way of travelling. So I do expect every security protocol to
> have issues. These ones are being fixed, and if more people build on
> them, the priority of the need to fix them will grow faster.
>
>>
>>> And certainly not if you use Tor, right?
>>
>> Tor has no impact on the visibility of the communication at the server end.
>
> You really need to expand on what the danger is. Because again
> I think you are thinking of the site I am connecting to as the attacker.
> But I may be wrong.

I'm getting quite tired of this: the point is, you cannot achieve
unlinkability with WebID except by using different WebIDs. You made
the claim that ACLs on resources achieve unlinkability. This is
incorrect.

So yes, the scenario is there are two sites that I connect to using
WebID and I want each of them to not be able to link my connections to
the other. To do this, I need two WebIDs, one for each site. ACLs do
not assist.
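
To make that scenario concrete, here is a toy sketch (Python; the sites,
WebID URIs and logs are invented for illustration, not taken from any real
deployment) of why reusing one WebID lets the two sites correlate visits,
while per-site WebIDs leave nothing to join on:

# Each site only sees the identifier presented to it; linking the two
# accounts is then a simple set intersection over those identifiers.
site_a_webids = {"https://alice.example/card#me"}   # WebIDs seen by site A
site_b_webids = {"https://alice.example/card#me"}   # WebIDs seen by site B

print(site_a_webids & site_b_webids)   # non-empty: the shared WebID links the visits

# With one WebID per site the intersection is empty, so neither site can
# tell, from the identifier alone, that the two accounts belong to one person.
print({"https://alice.example/ids/a#me"} & {"https://alice.example/ids/b#me"})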

>
>>
>>>
>>>
>>> Social Web Architect
>>> http://bblfish.net/
>>>
>>>
>>> _______________________________________________
>>> saag mailing list
>>> ***@ietf.org
>>> https://www.ietf.org/mailman/listinfo/saag
>>>
>
> Social Web Architect
> http://bblfish.net/
>
Henry Story
2012-10-19 14:16:13 UTC
Permalink
On 19 Oct 2012, at 15:52, Ben Laurie <***@google.com> wrote:

> On 19 October 2012 14:46, Henry Story <***@bblfish.net> wrote:
>>
>> On 19 Oct 2012, at 15:31, Ben Laurie <***@google.com> wrote:
>>
>>> On 19 October 2012 13:01, Henry Story <***@bblfish.net> wrote:
>>>>
>>>> On 18 Oct 2012, at 21:29, Ben Laurie <***@links.org> wrote:
>>>>
>>>>> On Thu, Oct 18, 2012 at 8:20 PM, Henry Story <***@bblfish.net> wrote:
>>>>>>
>>>>>> On 18 Oct 2012, at 21:04, Mouse <***@Rodents-Montreal.ORG> wrote:
>>>>>>
>>>>>>>> [...]
>>>>>>>> Unfortunately, I think that's too high of a price to pay for
>>>>>>>> unlinkability.
>>>>>>>> So I've come to the conclusion that anonymity will depend on
>>>>>>>> protocols like TOR specifically designed for it.
>>>>>>>
>>>>>>> Is it my imagination, or is this stuff confusing anonymity with
>>>>>>> pseudonymity? I feel reasonably sure I've missed some of the thread,
>>>>>>> but what I have seem does seem to be confusing the two.
>>>>>>>
>>>>>>> This whole thing about linking, for example, seems to be based on
>>>>>>> linking identities of some sort, implying that the systems in question
>>>>>>> *have* identities, in which case they are (at best) pseudonymous, not
>>>>>>> anonymous.
>>>>>>
>>>>>> With WebID ( http://webid.info/ ) you have a pseudonymous global identifier,
>>>>>> that is tied to a document on the Web that need only reveal your public key.
>>>>>> That WebID can then link to further information that is access controlled,
>>>>>> so that only your friends would be able to see it.
>>>>>>
>>>>>> The first diagram in the spec shows this well
>>>>>>
>>>>>> http://webid.info/spec/#publishing-the-webid-profile-document
>>>>>>
>>>>>> If you put WebID behind TOR and only have .onion WebIDs - something that
>>>>>> should be possible to do - then nobody would know WHERE the box hosting your
>>>>>> profile is, so they would not be able to just find your home location
>>>>>> from your ip-address. But you would still be able to link up in an access
>>>>>> controlled manner to your friends ( who may or may not be serving their pages
>>>>>> behind Tor ).
>>>>>>
>>>>>> You would then be unlinkable in the sense of
>>>>>> http://tools.ietf.org/html/draft-iab-privacy-considerations-03
>>>>>>
>>>>>> [[
>>>>>> Within a particular set of information, the
>>>>>> inability of an observer or attacker to distinguish whether two
>>>>>> items of interest are related or not (with a high enough degree of
>>>>>> probability to be useful to the observer or attacker).
>>>>>> ]]
>>>>>>
>>>>>> from any person that was not able to access the resources. But you would
>>>>>> be linkable by your friends. I think you want both. Linkability by those
>>>>>> authorized, unlinkability for those unauthorized. Hence linkability is not
>>>>>> just a negative.
>>>>>
>>>>> I really feel like I am beating a dead horse at this point, but
>>>>> perhaps you'll eventually admit it. Your public key links you.
>>>>
>>>> The question is to whom? What is the scenario you are imagining, and who is
>>>> the attacker there?
>>>>
>>>>> Access
>>>>> control on the rest of the information is irrelevant. Indeed, access
>>>>> control on the public key is irrelevant, since you must reveal it when
>>>>> you use the client cert.
>>>>
>>>> You are imagining that the server I am connecting to, and that I have
>>>> decided to identify myself to, is the one that is attacking me? Right?
>>>> Because otherwise I cannot understand your issue.
>>>>
>>>> But then I still do not understand your issue, since I deliberately
>>>> did connect to that site in an identifiable manner with a global id.
>>>> I could have created a locally valid ID only, had I wanted to not
>>>> connect with a globally valid one.
>>>>
>>>> So your issue boils down to this: if I connect to a web site deliberately
>>>> with a global identifier, then I am globally identified by that web site.
>>>> Which is what I wanted.
>>>>
>>>> So perhaps it is up to you to answer: why should I not want that?
>>>
>>> I am not saying you should not want that, I am saying that ACLs on the
>>> resources do not achieve unlinkability.
>>
>> Can you expand on what the dangers are?
>>
>>>
>>>>> Incidentally, to observers as well as the
>>>>> server you connect to.
>>>>
>>>> Not when you re-negotiation I think.
>>>
>>> That's true, but is not specified in WebID, right? Also, because of
>>> the renegotiation attack, this is currently insecure in many cases.
>>
>> WebID on TLS does rely on TLS. Security is not a goal one can reach,
>> it is a way of travelling. So I do expect every security protocol to
>> have issues. These ones are being fixed, and if more people build on
>> them, the priority of the need to fix them will grow faster.
>>
>>>
>>>> And certainly not if you use Tor, right?
>>>
>>> Tor has no impact on the visibility of the communication at the server end.
>>
>> You really need to expand on what the danger is. Because again
>> I think you are thinking of the site I am connecting to as the attacker.
>> But I may be wrong.
>
> I'm getting quite tired of this: the point is, you cannot achieve
> unlinkability with WebID except by using a different WebIDs. You made
> the claim that ACLs on resources achieve unlinkability. This is
> incorrect.

The definition of unlinkability higher up is relative to the notion of
an attacker. That is why you cannot just make a statement that WebID
cannot achieve unlinkability without specifying who the attackers are.
Such a sentence is incomplete.

>
> So yes, the scenario is there are two sites that I connect to using
> WebID and I want each of them to not be able to link my connections to
> the other. To do this, I need two WebIDs, one for each site. ACLs do
> not assist.

Thanks for filling in the picture.

So the difference between us is that you are considering situations
where you wish to identify yourself to a web site which you think of as an
attacker. There WebID is not the right technology, and indeed very
few technologies may be right: it may be impossible for this to work
for a large public at all, since most such attacking sites would
ask a user for his e-mail address, or for a few pieces of information that
together are linkable, and so link him/her.

In the situations we are considering with WebID, the sites we are
connecting to are not thought of as the enemy. Now I agree we should
add a privacy/security section to the spec ( ISSUE-68 [1] ) that
makes this limitation clear.
In some situations this is problematic. But for creating a distributed
Social Web, which is what I am interested in doing, I think this is essential.
We want to be able to allow friends of friends to work together, create
ad hoc working groups of people with distributed identities, etc. These
are the types of things people do on social networks; we just want to do
them on a global scale with no centre of control.

Does that make sense? Would adding that to a privacy section be satisfactory
to you?

Henry

[1] http://www.w3.org/2005/Incubator/webid/track/issues/68

Social Web Architect
http://bblfish.net/
Kingsley Idehen
2012-10-19 17:42:34 UTC
Permalink
On 10/19/12 9:52 AM, Ben Laurie wrote:
>> You really need to expand on what the danger is. Because again
>> >I think you are thinking of the site I am connecting to as the attacker.
>> >But I may be wrong.
> I'm getting quite tired of this: the point is, you cannot achieve
> unlinkability with WebID except by using a different WebIDs. You made
> the claim that ACLs on resources achieve unlinkability. This is
> incorrect.

What is an ACL (Access Control List) to you?

Does "Data Access Policy" work any better so that we stop being
distracted by something with different means to the participants in this
debate.

Can a data access policy deliver unlinkability ?
>
> So yes, the scenario is there are two sites that I connect to using
> WebID and I want each of them to not be able to link my connections to
> the other.

This is an absolute non-issue re. the combination of WebID, the WebID
authentication protocol, and logic-based data access policies. You're
basically saying: I (as in the nebulous "You") have the personas 'Spiderman'
and 'Peter Parker' and I want those personas to remain distinct, all of
this holding true within the contextual fluidity of the Internet and
World Wide Web.

> To do this, I need two WebIDs, one for each site. ACLs do
> not assist.

It's a problem solved via the combination of WebIDs (cryptographically
verifiable identifiers), the WebID authentication protocol, and logic-based
data access policies. If this were actually the deal breaker for WebID
(verifiable identifiers and authentication protocol) based data access
policies (or ACLs), why would Henry and I invest so much time trying to
get you to move beyond this fundamental misconception?
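
For readers following along, here is a minimal sketch of the verification
step the WebID authentication protocol relies on, assuming the rdflib
library and the cert ontology terms (cert:key, cert:modulus, cert:exponent)
used in the WebID-TLS drafts; the function name and arguments are
illustrative, not taken from any particular implementation:

from rdflib import Graph, URIRef, Namespace

CERT = Namespace("http://www.w3.org/ns/auth/cert#")

def profile_claims_key(webid, modulus, exponent):
    # Dereference the WebID and parse the profile document it denotes.
    g = Graph()
    g.parse(webid)
    # Accept the client certificate only if some key attached to the WebID
    # via cert:key has the same modulus and exponent as the presented key.
    for key in g.objects(URIRef(webid), CERT.key):
        mod = g.value(key, CERT.modulus)
        exp = g.value(key, CERT.exponent)
        if mod is not None and exp is not None \
           and int(str(mod), 16) == modulus and int(exp) == exponent:
            return True
    return False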
>


--

Regards,

Kingsley Idehen
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca handle: @kidehen
Google+ Profile: https://plus.google.com/112399767740508618350/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Sam Hartman
2012-10-19 18:21:56 UTC
Permalink
>>>>> "Kingsley" == Kingsley Idehen <kidehen-HpHEqLDO2a7UEDaH6ef/***@public.gmane.org> writes:

Kingsley> Does "Data Access Policy" work any better so that we stop
Kingsley> being distracted by something with different means to the
Kingsley> participants in this debate.

Kingsley> Can a data access policy deliver unlinkability ?


Absolutely not. I think you're talking past each other, but the data
access policy on the accessed resource cannot deliver unlinkability in
the sense that I, and I think Ben, are using the term. The data access
policy on a centrally stored credential may be part of delivering
unlinkability with regard to certain parties in some security schemes.

If you believe that data access policies are part of unlinkability, then
I'd suggest we start by checking whether we're talking about the same
definition of unlinkability.
Kingsley Idehen
2012-10-19 18:40:07 UTC
Permalink
On 10/19/12 2:21 PM, Sam Hartman wrote:
>>>>>> "Kingsley" == Kingsley Idehen <kidehen-HpHEqLDO2a7UEDaH6ef/***@public.gmane.org> writes:
> Kingsley> Does "Data Access Policy" work any better so that we stop
> Kingsley> being distracted by something with different means to the
> Kingsley> participants in this debate.
>
> Kingsley> Can a data access policy deliver unlinkability ?
>
>
> Absolutely not. I think you're talking past each other, but the data
> access policy on the accessed resource cannot deliver unlinkability in
> the sense that I and I think Ben are using.

Okay.

> The data access policy on a
> centrally stored credential may be part of delivering unlinkability with
> regard to certain parties in some security schemes.

I am lost. Why do credentials have to be centrally stored?

I don't believe in centralization of anything when dealing with privacy
via verifiable identity. I don't think we are talking past one another;
I think we have differing viewpoints with regard to network topology
and its implications for verifiable identity in a social context.


>
> If you believe that data access policies are part of unlinkability, then
> I'd suggest starting to see if we're talking about the same definition
> of unlinkability.

I am sure you could spell this out with some additional clarity if my
position outlined above remains unclear.

I see privacy as self-calibration of one's vulnerability, in any realm.

How do you know that I sent this mail? And don't tell me it's down to the
mail signature below :-)
>
>
>


--

Regards,

Kingsley Idehen
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca handle: @kidehen
Google+ Profile: https://plus.google.com/112399767740508618350/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Nathan
2012-10-21 10:22:07 UTC
Permalink
Ben Laurie wrote:
> I'm getting quite tired of this: the point is, you cannot achieve
> unlinkability with WebID except by using a different WebIDs. You made
> the claim that ACLs on resources achieve unlinkability. This is
> incorrect.

You're 100% correct here, Ben, and I'm unsure why it's so hard to convey!?

If you use the same identifier for more than one request, subsequent
requests can be associated with the first request. An identifier here is
any stable, identifying piece of information - key parts and URIs alike.

If the issue is only unlinkability across sites, then you just have a
keypair+URI per site. Or better, a keypair only, associated with an
identifier for the agent behind the interface.

You're correct that ACLs won't cut it.
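
A minimal sketch of what a keypair+URI per site could look like on the
user's side; everything here is hypothetical (the per-origin store, the
profile host me.example and the helper name are invented for illustration):

from cryptography.hazmat.primitives.asymmetric import rsa
import secrets

identities = {}   # origin -> (private key, WebID URI), kept by the user agent

def identity_for(origin):
    # Mint a fresh keypair and a fresh profile URI the first time an origin
    # is visited; reuse them only for that origin afterwards.
    if origin not in identities:
        key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        webid = "https://me.example/ids/%s#me" % secrets.token_urlsafe(8)
        identities[origin] = (key, webid)
    return identities[origin]

key_a, webid_a = identity_for("https://site-a.example")
key_b, webid_b = identity_for("https://site-b.example")
assert webid_a != webid_b   # no stable identifier is shared across origins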
Kingsley Idehen
2012-10-21 16:49:13 UTC
Permalink
On 10/21/12 6:22 AM, Nathan wrote:
> Ben Laurie wrote:
>> I'm getting quite tired of this: the point is, you cannot achieve
>> unlinkability with WebID except by using a different WebIDs. You made
>> the claim that ACLs on resources achieve unlinkability. This is
>> incorrect.
>
> You're 100% correct here Ben, and I'm unsure why it's so hard to convey!?
>
> If you use the same identifier for more than one request, subsequent
> requests can be associated with the first request. An identifier here
> is any identifying, stable, information - key parts and URIs.
>
> If the issue is only unlinkability across sites, then you just have a
> keypair+uri per site. Or better, key-pair only, and that's associated
> with an identifier for the agent behind the interface.
>
> You're correct that ACLs won't cut it.
>
>
>
>
>
Nathan,

What is the subject of unlinkability?

I am sure you know that Henry and I are fundamentally referring to
nebulous real-world entities such as "You" and "I". A composite key of
machine name, user agent name, and document referrer is not the same as said
nebulous entity, and is even further removed in today's world of multiple
form factor devices that interact with the Internet and Web.

There is no precise mechanism for electronically nailing down the nebulous
entities "You" and "I". We aren't of the Internet or Web, so you can only
apprehend us in person. At best you can speculate that we are the
subjects of tokens comprised of composite keys.

Unlinkability is subject to context fluidity and temporality once you
add nebulous cognitive entities (not of the Web or Internet) to the
equation. I believe you know this anyway :-)

--

Regards,

Kingsley Idehen
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca handle: @kidehen
Google+ Profile: https://plus.google.com/112399767740508618350/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Nathan
2012-10-21 19:52:08 UTC
Permalink
Kingsley Idehen wrote:
> On 10/21/12 6:22 AM, Nathan wrote:
>> Ben Laurie wrote:
>>> I'm getting quite tired of this: the point is, you cannot achieve
>>> unlinkability with WebID except by using a different WebIDs. You made
>>> the claim that ACLs on resources achieve unlinkability. This is
>>> incorrect.
>>
>> You're 100% correct here Ben, and I'm unsure why it's so hard to convey!?
>>
>> If you use the same identifier for more than one request, subsequent
>> requests can be associated with the first request. An identifier here
>> is any identifying, stable, information - key parts and URIs.
>>
>> If the issue is only unlinkability across sites, then you just have a
>> keypair+uri per site. Or better, key-pair only, and that's associated
>> with an identifier for the agent behind the interface.
>>
>> You're correct that ACLs won't cut it.
>>
>>
>>
>>
>>
> Nathan,
>
> What is the subject of unlinkability ?
>
> I am sure you know that Henry and I are fundamentally referring to
> nebulous real-world entities such as "You" and "I". A composite key of:
> machine name, user agent name, and a document referrer links != said
> neboulus entity. Even further away in today world of multiple form
> factor devices that interact with the Internet and Web.
>
> There is no precise mechanism for electronically nailing down nebulous
> entity "You" and "I". We aren't of the Internet or Web, so you can
> apprehend us in person. At best you can speculate that we are the
> subjects of tokens comprised of composite keys.
>
> Unlinkability is subject to context fluidity and temporality once you
> add neboulus congnitive entites (not of the Web or Internet) to the
> equation. I believe you know this anyway :-)

We cannot say that a URI refers to "you" or "I" in one breath, and say
it doesn't (or may not) in another.

There is a use case which provides a technical requirement here, one
which is simply to not use identifiable information between requests to
different origin servers, and sometimes, more granularly, not to use the
same identifiable information between requests to the same server.

WebID, just like any auth protocol, can still be used; it just means using
it on a one-time basis, or only for a particular origin.

Personally I feel there are still questions here with WebID, as
currently people use usernames/emails and passwords almost everywhere,
and they can pick different usernames/emails/passwords on every
site/origin. Suppose WebID were to gain 100% adoption overnight: we'd
suddenly be in a position where everybody usually used the same
identifier (rather than usernames and email addresses) and the same key
(rather than multiple passwords). Because we've never been in a world
like that, we don't know the consequences yet.

Thus, when security and identity experts suggest that we need to handle
unlinkability, or consider that we may often need per-origin WebIDs (or
even have that as the default mode), we may be wise to say "okay",
go away and find our options, then report them back for consideration
and review.

It by no means limits WebID, rather it just makes it applicable to a
broader range of use cases.

Best as always,

Nathan
Kingsley Idehen
2012-10-22 02:16:23 UTC
Permalink
On 10/21/12 3:52 PM, Nathan wrote:
> Kingsley Idehen wrote:
>> On 10/21/12 6:22 AM, Nathan wrote:
>>> Ben Laurie wrote:
>>>> I'm getting quite tired of this: the point is, you cannot achieve
>>>> unlinkability with WebID except by using a different WebIDs. You made
>>>> the claim that ACLs on resources achieve unlinkability. This is
>>>> incorrect.
>>>
>>> You're 100% correct here Ben, and I'm unsure why it's so hard to
>>> convey!?
>>>
>>> If you use the same identifier for more than one request, subsequent
>>> requests can be associated with the first request. An identifier
>>> here is any identifying, stable, information - key parts and URIs.
>>>
>>> If the issue is only unlinkability across sites, then you just have
>>> a keypair+uri per site. Or better, key-pair only, and that's
>>> associated with an identifier for the agent behind the interface.
>>>
>>> You're correct that ACLs won't cut it.
>>>
>>>
>>>
>>>
>>>
>> Nathan,
>>
>> What is the subject of unlinkability ?
>>
>> I am sure you know that Henry and I are fundamentally referring to
>> nebulous real-world entities such as "You" and "I". A composite key
>> of: machine name, user agent name, and a document referrer links !=
>> said neboulus entity. Even further away in today world of multiple
>> form factor devices that interact with the Internet and Web.
>>
>> There is no precise mechanism for electronically nailing down
>> nebulous entity "You" and "I". We aren't of the Internet or Web, so
>> you can apprehend us in person. At best you can speculate that we are
>> the subjects of tokens comprised of composite keys.
>>
>> Unlinkability is subject to context fluidity and temporality once you
>> add neboulus congnitive entites (not of the Web or Internet) to the
>> equation. I believe you know this anyway :-)
>
> We cannot say that a URI refers to "you" or "I" in one breathe, and
> say it doesn't (or may not) in another.

You raise a good point. Now let me clarify: I don't believe (unless in
utter error) that I've ever claimed that a URI definitively refers to
"You", "Me", or "I". Of course, I cannot claim never to have made
careless utterances such as "Your Personal URI", for instance.

A URI that serves as a WebID has always been a denotation mechanism for
a composite key comprised of:

1. private key
2. public key
3. URI that resolves to a profile document that describes a subject via
an entity relationship graph.

The subject of an X.509 certificate is a nebulous entity. This entity is
associated with attribute and value pairs that comprise the profile
graph imprinted in said certificate. The semantics of an X.509
certificate don't change the nature of the certificate's subject.
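
As an illustration of that composite, here is a sketch (assuming the
cryptography and rdflib libraries; the profile location is invented, and
the cert ontology terms are those used in the WebID-TLS drafts) that
produces the three parts together, with the resulting graph published at
the profile URI:

from cryptography.hazmat.primitives.asymmetric import rsa
from rdflib import Graph, URIRef, BNode, Literal, Namespace
from rdflib.namespace import RDF, XSD

CERT = Namespace("http://www.w3.org/ns/auth/cert#")

webid = URIRef("https://me.example/profile#me")       # 3. the profile URI
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # 1. private key
pub = key.public_key().public_numbers()               # 2. public key (n, e)

g = Graph()
k = BNode()
g.add((webid, CERT.key, k))
g.add((k, RDF.type, CERT.RSAPublicKey))
g.add((k, CERT.modulus, Literal(format(pub.n, "x"), datatype=XSD.hexBinary)))
g.add((k, CERT.exponent, Literal(pub.e)))
print(g.serialize(format="turtle"))   # publish this at https://me.example/profile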

>
> There is a use case which provides a technical requirement here, one
> which is simply to not use identifiable information between requests
> to different origin servers, and sometimes more granular, not using
> the same identifiable information between requests to the same server.
>
> WebID, just like any auth protocol can be used, it just means using it
> on a one time basis, or only for a particular origin.

WebID is a part of the picture, not the picture in its entirety. I've
pretty much tried to encourage others to be careful about conveying the
misconception that WebID (solely) resolves the issues at hand. It is
just a critical piece of the puzzle, that's it.

You don't need to have a single WebID. Such a thing fails the most
mundane alter ego test re. 'Clark Kent' and 'Superman' or 'Peter
Parker' and 'Spiderman'.

Privacy is about the aforementioned personas not being compromised, under
any circumstances. The fact that DC world entities 'Clark Kent' and
'Superman' used the same Web browser shouldn't compromise the alter ego
relationship between these personas.

Unlinkability is about the alter ego paradox.

>
> Personally I feel there are still questions here with WebID, as
> currently people use usernames/emails and passwords almost everywhere,
> and they can pick different usernames/emails/passwords on every
> site/origin. Suppose WebID was to gain 100% adoption overnight, we'd
> suddenly be in a position where everybody usually used the same
> identifier (rather than usernames and email addresses) and the same
> key (rather than multiple passwords) - because we've never been in a
> world like that, we don't know the consequences yet.

See my comments above. Such a system is dead on arrival re. privacy.
There have to be multiple WebIDs and the exploitation of logic when
dealing with data access policies, and all of this has to occur within
specific interaction contexts. For instance, if I want only you to see a
document, I could knock up the required security tokens and send them to
you via a PKCS#12 file. You open the file, then go GET the document in
question. Being super paranoid, I would more than likely speak to you
via phone about the username and password combo for opening up the
PKCS#12 file.
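
A sketch of that hand-off, assuming the Python cryptography library; the
key and certificate objects are taken as already created (for instance as
in the earlier sketches), and the names are illustrative:

from cryptography.hazmat.primitives.serialization import BestAvailableEncryption
from cryptography.hazmat.primitives.serialization.pkcs12 import (
    serialize_key_and_certificates,
)

def bundle_for_recipient(key, cert, password):
    # Package the freshly minted key and certificate as a password-protected
    # PKCS#12 file; the password travels out of band (e.g. over the phone).
    return serialize_key_and_certificates(
        name=b"guest access",
        key=key,
        cert=cert,
        cas=None,
        encryption_algorithm=BestAvailableEncryption(password),
    )

# open("guest.p12", "wb").write(bundle_for_recipient(key, cert, b"told-by-phone"))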
>
> Thus, when security and identity experts suggest that we need to
> handle unlinkability, or consider that we may often need per origin
> WebIDs (or even have that as the default mode), then we may be wise to
> say "okay", go away and find our options, then report them back for
> consideration and review.
>
> It by no means limits WebID, rather it just makes it applicable to a
> broader range of use cases.

We need others (note: "expert" is an utterly subjective term to me) interested in
these matters to be constructive rather than dismissive. I chime in most
of the time because I see Henry going to immense pains to explain
matters only to be summarily dismissed in ways that I find
cognitively dissonant.

A basic RDBMS product doesn't depend on single attribute/field primary
keys, so why would such thinking even apply to the complex matter of
privacy? When I use the term composite, I am pretty much referring to the
same concept as is well understood in the RDBMS world. You can have a
'super key' comprised of elements that are themselves unique
identifiers.

I don't believe in a single WebID, and neither does Henry. We just believe
that Web-scale verifiable identity is a critical part of the required
infrastructure. We also believe that a dereferenceable URI (e.g., an
HTTP URI) is a very powerful vehicle for this endeavor, even more so
when combined with structured data and first-order logic.

I only know of one way to deal with context fluidity at the software
level, and that's via logic integrated into data, which produces
self-describing data objects.
>
> Best as always,
>
> Nathan
>
>
>
>


--

Regards,

Kingsley Idehen
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca handle: @kidehen
Google+ Profile: https://plus.google.com/112399767740508618350/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Nathan
2012-10-22 09:42:42 UTC
Permalink
Kingsley Idehen wrote:
> On 10/21/12 3:52 PM, Nathan wrote:
>> Kingsley Idehen wrote:
>>> On 10/21/12 6:22 AM, Nathan wrote:
>>>> Ben Laurie wrote:
>>>>> I'm getting quite tired of this: the point is, you cannot achieve
>>>>> unlinkability with WebID except by using a different WebIDs. You made
>>>>> the claim that ACLs on resources achieve unlinkability. This is
>>>>> incorrect.
>>>>
>>>> You're 100% correct here Ben, and I'm unsure why it's so hard to
>>>> convey!?
>>>>
>>>> If you use the same identifier for more than one request, subsequent
>>>> requests can be associated with the first request. An identifier
>>>> here is any identifying, stable, information - key parts and URIs.
>>>>
>>>> If the issue is only unlinkability across sites, then you just have
>>>> a keypair+uri per site. Or better, key-pair only, and that's
>>>> associated with an identifier for the agent behind the interface.
>>>>
>>>> You're correct that ACLs won't cut it.
>>>>
>>>>
>>>>
>>>>
>>>>
>>> Nathan,
>>>
>>> What is the subject of unlinkability ?
>>>
>>> I am sure you know that Henry and I are fundamentally referring to
>>> nebulous real-world entities such as "You" and "I". A composite key
>>> of: machine name, user agent name, and a document referrer links !=
>>> said neboulus entity. Even further away in today world of multiple
>>> form factor devices that interact with the Internet and Web.
>>>
>>> There is no precise mechanism for electronically nailing down
>>> nebulous entity "You" and "I". We aren't of the Internet or Web, so
>>> you can apprehend us in person. At best you can speculate that we are
>>> the subjects of tokens comprised of composite keys.
>>>
>>> Unlinkability is subject to context fluidity and temporality once you
>>> add neboulus congnitive entites (not of the Web or Internet) to the
>>> equation. I believe you know this anyway :-)
>>
>> We cannot say that a URI refers to "you" or "I" in one breathe, and
>> say it doesn't (or may not) in another.
>
> You raise a good point, Now let me clarify, I don't believe (unless in
> utter error) that I've ever claimed that a URI definitively refers to
> "You", "Me", or "I". Of course, I cannot claim to have not made the
> careless utterances such as "Your Personal URI" , for instance.
>
> A URI that serves as a WebID has always been a denotation mechanism for
> a composite key comprised of:
>
> 1. private key
> 2. public key
> 3. URI that resolves to a profile document that describes a subject via
> an entity relationship graph.
>
> The subject of an X.509 certificate is a nebulous entity. This entity is
> associated with attribute and value pairs that comprise the profile
> graph imprinted in said certificate. The semantics of an X.509
> certificate don't change the nature of the certificates subject.
>
>>
>> There is a use case which provides a technical requirement here, one
>> which is simply to not use identifiable information between requests
>> to different origin servers, and sometimes more granular, not using
>> the same identifiable information between requests to the same server.
>>
>> WebID, just like any auth protocol can be used, it just means using it
>> on a one time basis, or only for a particular origin.
>
> WebID is a part of the picture, not the picture in its entirety. I've
> pretty much tried to encourage others to be careful about conveying the
> misconception that WebID (solely) resolves the issues at hand. It is
> just a critical piece of the puzzle, that's it.
>
> You don't need to have a single WebID. Such a thing fails the most
> mundane alter ego test re. 'Clarke Kent' and 'Superman' or 'Peter
> Parker' and 'Spiderman'.
>
> Privacy is about the aforementioned personas not being comprised, under
> any circumstances. The fact that DC world entities 'Clark Kent' and
> 'Superman' used the same Web browser shouldn't comprise the alter ego
> relationship between these personas.
>
> Unlinkability is about the alter ego paradox.
>
>>
>> Personally I feel there are still questions here with WebID, as
>> currently people use usernames/emails and passwords almost everywhere,
>> and they can pick different usernames/emails/passwords on every
>> site/origin. Suppose WebID was to gain 100% adoption overnight, we'd
>> suddenly be in a position where everybody usually used the same
>> identifier (rather than usernames and email addresses) and the same
>> key (rather than multiple passwords) - because we've never been in a
>> world like that, we don't know the consequences yet.
>
> See my comments above. Such a system is dead on arrival re. privacy.
> There have to be multiple WebIDs and the exploitation of logic when
> dealing with data access policies, and all of this has to occur within
> specific interaction contexts. For instance, if I want only you to see a
> document, I could knock up the require security tokens and send them to
> you via a PKCS#12 file. You open the file then go GET the document in
> question. Being super paranoid, I would more than likely speak to you
> via phone about the username and password combo for opening up the
> PKCS#12 file.
>>
>> Thus, when security and identity experts suggest that we need to
>> handle unlinkability, or consider that we may often need per origin
>> WebIDs (or even have that as the default mode), then we may be wise to
>> say "okay", go away and find our options, then report them back for
>> consideration and review.
>>
>> It by no means limits WebID, rather it just makes it applicable to a
>> broader range of use cases.
>
> We need others (note: expert is utterly subjective to me) interested in
> these matters to be constructive rather than dismissive. I chime in most
> of the time because I see Henry going to immense pains to explain
> matters only to be summarily dismissed in manners that I find
> cognitively dissonant.
>
> A basic RDBMS product doesn't depend on single attribute/field primary
> keys, why would such thinking even apply to the complex matter of
> privacy. When I use the term composite, I am pretty much referring the
> the same concept well understood in the RDBMS world. You can have a
> 'super key' comprised of elements that are of themselves unique
> identifiers.
>
> I don't believe in a single WebID neither does Henry. We just believe
> that Web-scale verifiable identity is a critical part of the required
> infrastructure. We also believe that a de-referencable URI (e.g., an
> HTTP URI) is a very powerful vehicle for this endeavor, even more so
> when combined with structured data and first-order logic.
>
> I only know of one way to deal with context fluidity at the software
> level, and that's via logic integrated into data which produces self
> describing data objects .

I agree on all counts and feel/think the same, so I think I'll need to
go and re-read this thread and see where the confusion is.

Best, Nathan
Ben Laurie
2012-10-22 09:54:35 UTC
Permalink
On 22 October 2012 10:42, Nathan <nathan-***@public.gmane.org> wrote:
> Kingsley Idehen wrote:
>>
>> On 10/21/12 3:52 PM, Nathan wrote:
>>>
>>> Kingsley Idehen wrote:
>>>>
>>>> On 10/21/12 6:22 AM, Nathan wrote:
>>>>>
>>>>> Ben Laurie wrote:
>>>>>>
>>>>>> I'm getting quite tired of this: the point is, you cannot achieve
>>>>>> unlinkability with WebID except by using a different WebIDs. You made
>>>>>> the claim that ACLs on resources achieve unlinkability. This is
>>>>>> incorrect.
>>>>>
>>>>>
>>>>> You're 100% correct here Ben, and I'm unsure why it's so hard to
>>>>> convey!?
>>>>>
>>>>> If you use the same identifier for more than one request, subsequent
>>>>> requests can be associated with the first request. An identifier here is any
>>>>> identifying, stable, information - key parts and URIs.
>>>>>
>>>>> If the issue is only unlinkability across sites, then you just have a
>>>>> keypair+uri per site. Or better, key-pair only, and that's associated with
>>>>> an identifier for the agent behind the interface.
>>>>>
>>>>> You're correct that ACLs won't cut it.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>> Nathan,
>>>>
>>>> What is the subject of unlinkability ?
>>>>
>>>> I am sure you know that Henry and I are fundamentally referring to
>>>> nebulous real-world entities such as "You" and "I". A composite key of:
>>>> machine name, user agent name, and a document referrer links != said
>>>> neboulus entity. Even further away in today world of multiple form factor
>>>> devices that interact with the Internet and Web.
>>>>
>>>> There is no precise mechanism for electronically nailing down nebulous
>>>> entity "You" and "I". We aren't of the Internet or Web, so you can apprehend
>>>> us in person. At best you can speculate that we are the subjects of tokens
>>>> comprised of composite keys.
>>>>
>>>> Unlinkability is subject to context fluidity and temporality once you
>>>> add neboulus congnitive entites (not of the Web or Internet) to the
>>>> equation. I believe you know this anyway :-)
>>>
>>>
>>> We cannot say that a URI refers to "you" or "I" in one breathe, and say
>>> it doesn't (or may not) in another.
>>
>>
>> You raise a good point, Now let me clarify, I don't believe (unless in
>> utter error) that I've ever claimed that a URI definitively refers to "You",
>> "Me", or "I". Of course, I cannot claim to have not made the careless
>> utterances such as "Your Personal URI" , for instance.
>>
>> A URI that serves as a WebID has always been a denotation mechanism for a
>> composite key comprised of:
>>
>> 1. private key
>> 2. public key
>> 3. URI that resolves to a profile document that describes a subject via an
>> entity relationship graph.
>>
>> The subject of an X.509 certificate is a nebulous entity. This entity is
>> associated with attribute and value pairs that comprise the profile graph
>> imprinted in said certificate. The semantics of an X.509 certificate don't
>> change the nature of the certificates subject.
>>
>>>
>>> There is a use case which provides a technical requirement here, one
>>> which is simply to not use identifiable information between requests to
>>> different origin servers, and sometimes more granular, not using the same
>>> identifiable information between requests to the same server.
>>>
>>> WebID, just like any auth protocol can be used, it just means using it on
>>> a one time basis, or only for a particular origin.
>>
>>
>> WebID is a part of the picture, not the picture in its entirety. I've
>> pretty much tried to encourage others to be careful about conveying the
>> misconception that WebID (solely) resolves the issues at hand. It is just a
>> critical piece of the puzzle, that's it.
>>
>> You don't need to have a single WebID. Such a thing fails the most mundane
>> alter ego test re. 'Clarke Kent' and 'Superman' or 'Peter Parker' and
>> 'Spiderman'.
>>
>> Privacy is about the aforementioned personas not being comprised, under
>> any circumstances. The fact that DC world entities 'Clark Kent' and
>> 'Superman' used the same Web browser shouldn't comprise the alter ego
>> relationship between these personas.
>>
>> Unlinkability is about the alter ego paradox.
>>
>>>
>>> Personally I feel there are still questions here with WebID, as currently
>>> people use usernames/emails and passwords almost everywhere, and they can
>>> pick different usernames/emails/passwords on every site/origin. Suppose
>>> WebID was to gain 100% adoption overnight, we'd suddenly be in a position
>>> where everybody usually used the same identifier (rather than usernames and
>>> email addresses) and the same key (rather than multiple passwords) - because
>>> we've never been in a world like that, we don't know the consequences yet.
>>
>>
>> See my comments above. Such a system is dead on arrival re. privacy. There
>> have to be multiple WebIDs and the exploitation of logic when dealing with
>> data access policies, and all of this has to occur within specific
>> interaction contexts. For instance, if I want only you to see a document, I
>> could knock up the require security tokens and send them to you via a
>> PKCS#12 file. You open the file then go GET the document in question. Being
>> super paranoid, I would more than likely speak to you via phone about the
>> username and password combo for opening up the PKCS#12 file.
>>>
>>>
>>> Thus, when security and identity experts suggest that we need to handle
>>> unlinkability, or consider that we may often need per origin WebIDs (or even
>>> have that as the default mode), then we may be wise to say "okay", go away
>>> and find our options, then report them back for consideration and review.
>>>
>>> It by no means limits WebID, rather it just makes it applicable to a
>>> broader range of use cases.
>>
>>
>> We need others (note: expert is utterly subjective to me) interested in
>> these matters to be constructive rather than dismissive. I chime in most of
>> the time because I see Henry going to immense pains to explain matters only
>> to be summarily dismissed in manners that I find cognitively dissonant.
>>
>> A basic RDBMS product doesn't depend on single attribute/field primary
>> keys, why would such thinking even apply to the complex matter of privacy.
>> When I use the term composite, I am pretty much referring the the same
>> concept well understood in the RDBMS world. You can have a 'super key'
>> comprised of elements that are of themselves unique identifiers.
>>
>> I don't believe in a single WebID neither does Henry. We just believe that
>> Web-scale verifiable identity is a critical part of the required
>> infrastructure. We also believe that a de-referencable URI (e.g., an HTTP
>> URI) is a very powerful vehicle for this endeavor, even more so when
>> combined with structured data and first-order logic.
>>
>> I only know of one way to deal with context fluidity at the software
>> level, and that's via logic integrated into data which produces self
>> describing data objects .
>
>
> I agree on all counts and feel/think the same,

So do I, more or less (except the last sentence, which I don't think I
really understand, and which, if I do, seems too sweeping), which surprises
me.

> so I think I'll need to go
> and re-read this thread and see where the confusion is.

Possibly something to do with the fact that of all of Kingsley's posts
so far this is the only one I haven't found either incomprehensible or
wrong.

Where we came in was me pointing out that if you disconnect your
identities by using multiple WebIDs, then you have a UI problem, and
since then the aim seems to have been to persuade us that multiple
WebIDs are not needed.

>
> Best, Nathan
Nathan
2012-10-22 10:01:22 UTC
Permalink
Ben Laurie wrote:
> Where we came in was me pointing out that if you disconnect your
> identities by using multiple WebIDs, then you have a UI problem, and
> since then the aim seems to have been to persuade us that multiple
> WebIDs are not needed.

Yup, it appears to boil down to a UI problem, and more specifically, a
browser UI problem.

Multiple WebIDs are often needed, and the WebID protocol doesn't preclude
that in any way, shape or form.

On a positive note, it's great to talk through these things and make
sure that people's concerns are voiced and noted :)

Best,

Nathan
Henry Story
2012-10-22 10:33:06 UTC
Permalink
[cutting down on the mailing lists]

On 22 Oct 2012, at 11:54, Ben Laurie <***@google.com> wrote:

> Where we came in was me pointing out that if you disconnect your
> identities by using multiple WebIDs, then you have a UI problem, and
> since then the aim seems to have been to persuade us that multiple
> WebIDs are not needed.

There is a happy medium on UI experience. There are two separate issues
here, one of which I have proposed a fix for and the other of which is a
browser UI issue.

A. Number of WebIDs
-------------------

1. WebID per web site:

You don't want to have one WebID per site you go to, since the point
of WebID is to allow you to authenticate across sites using the same
ID ( in the case of TLS, a URL embedded in an X509 Certificate's SAN
field ).

2. One and only one WebID for the whole internet per person

WebID does not force any such restrictions (neither would OpenId
or BrowserId for that matter ).

3. As many WebIDs for the whole web as the user feels are worth investing in

The first sentence of the spec says so ( http://webid.info/spec/ )

[[
The WebID protocol enables secure, efficient and maximally user friendly
authentication on the Web. It enables people to authenticate onto any site
by simply clicking on one of the certificates proposed to them by their
browser. These certificates can be created by any Web Site for their users
in one click. The identifier, known as the WebID, is a URI whose sense can
be found in the associated Profile Page, a type of web page that any Social
Network user is familiar with.
]]

( so we are looking for help improving the wording)

Finally, (3) above does not mean that the user can only use WebID. He can still use
all the existing technologies for authenticating to web sites where he
wishes to have identities that are not linkable across sites - e.g. cookies
with username and password, if needed, ...

UI Experience
-------------

There are two elements to the UI experience

1. Certificate selection

If the server requesting the certificate from the user makes a CertificateRequest
leaving the certificate_authorities field blank ( or null, I am not sure what the
correct wording is ), as explained by the spec currently
http://www.w3.org/2005/Incubator/webid/spec/#requesting-the-client-certificate

then users with multiple certificates - some of which may not be WebID enabled -
will be presented with a selection box containing certificates
that are not in fact ones the server will accept - leading to confusion and a
bad UI. I just proposed on the WebID mailing list that WebID certificate chains
be signed (at some point) by CN=WebID,O=∅ to solve this issue (a sketch of that
marker idea follows after point 2 below).
http://lists.w3.org/Archives/Public/public-webid/2012Oct/0188.html

2. Transparency of Identity

It is not clear currently, when you go to a web site, whether you are authenticated
or not, or with which identities. Even Google Chrome's Profile feature does not
show this. This is something I really hope they will fix, taking inspiration from
Aza Raskin's work
http://www.azarask.in/blog/post/identity-in-the-browser-firefox/
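
Returning to point 1, here is a sketch of the proposed issuer marker; it is
purely illustrative (assuming the Python cryptography library, with an
invented subject and WebID URI), not the wording of the proposal itself. The
end-entity certificate is signed under a throwaway key whose only job is to
carry the DN CN=WebID,O=∅, so a server can name that DN in its
CertificateRequest and browsers will offer only matching certificates:

import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

def make_marked_webid_cert(webid_uri):
    # The marker key carries no trust; only its DN matters for filtering.
    marker_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    user_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    issuer = x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, "WebID"),
        x509.NameAttribute(NameOID.ORGANIZATION_NAME, "∅"),
    ])
    subject = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "me")])
    now = datetime.datetime.utcnow()
    cert = (
        x509.CertificateBuilder()
        .subject_name(subject)
        .issuer_name(issuer)                        # the marker DN
        .public_key(user_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=365))
        .add_extension(
            x509.SubjectAlternativeName(
                [x509.UniformResourceIdentifier(webid_uri)]),
            critical=False,
        )
        .sign(marker_key, hashes.SHA256())
    )
    return user_key, cert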


I hope this helps,

Henry

Social Web Architect
http://bblfish.net/
Kingsley Idehen
2012-10-22 10:59:50 UTC
Permalink
On 10/22/12 5:54 AM, Ben Laurie wrote:
> Where we came in was me pointing out that if you disconnect your
> identities by using multiple WebIDs, then you have a UI problem, and
> since then the aim seems to have been to persuade us that multiple
> WebIDs are not needed.
Multiple WebIDs (or any other cryptographically verifiable identifier)
are a must.

The issue of UI is inherently subjective. It can't be used to
objectively validate or invalidate Web-scale verifiable identifier
systems such as WebID or any other mechanism aimed at achieving the
same goals.

Anyway, Henry, I, and a few others from the WebID IG (hopefully) are
going to knock up some demonstrations to show how this perceived UI/UX
inconvenience can be addressed.


--

Regards,

Kingsley Idehen
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca handle: @kidehen
Google+ Profile: https://plus.google.com/112399767740508618350/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Ben Laurie
2012-10-22 11:26:41 UTC
Permalink
On 22 October 2012 11:59, Kingsley Idehen <kidehen-HpHEqLDO2a7UEDaH6ef/***@public.gmane.org> wrote:
> On 10/22/12 5:54 AM, Ben Laurie wrote:
>>
>> Where we came in was me pointing out that if you disconnect your
>> identities by using multiple WebIDs, then you have a UI problem, and
>> since then the aim seems to have been to persuade us that multiple
>> WebIDs are not needed.
>
> Multiple WebIDs (or any other cryptographically verifiable identifier) are a
> must.
>
> The issue of UI is inherently subjective. It can't be used to objectively
> validate or invalidate Web-scale verifiable identifier systems such as
> WebID or any other mechanism aimed at achieving the same goals.

Ultimately what matters is: do users use it correctly? This can be tested :-)

Note that it is necessary to test the cases where the website is evil,
too - something that's often conveniently missed out of user testing.
For example, it's pretty obvious that OpenID fails horribly in this
case, so it tends not to get tested.

>
> Anyway, Henry, I, and a few others from the WebID IG (hopefully) are going
> to knock up some demonstrations to show how this perceived UI/UX
> inconvenience can be addressed.

Cool.

>
>
>
> --
>
> Regards,
>
> Kingsley Idehen
> Founder & CEO
> OpenLink Software
> Company Web: http://www.openlinksw.com
> Personal Weblog: http://www.openlinksw.com/blog/~kidehen
> Twitter/Identi.ca handle: @kidehen
> Google+ Profile: https://plus.google.com/112399767740508618350/about
> LinkedIn Profile: http://www.linkedin.com/in/kidehen
>
>
>
>
>
Kingsley Idehen
2012-10-22 12:03:26 UTC
Permalink
On 10/22/12 7:26 AM, Ben Laurie wrote:
> On 22 October 2012 11:59, Kingsley Idehen <kidehen-HpHEqLDO2a7UEDaH6ef/***@public.gmane.org> wrote:
>> On 10/22/12 5:54 AM, Ben Laurie wrote:
>>> Where we came in was me pointing out that if you disconnect your
>>> identities by using multiple WebIDs, then you have a UI problem, and
>>> since then the aim seems to have been to persuade us that multiple
>>> WebIDs are not needed.
>> Multiple WebIDs (or any other cryptographically verifiable identifier) are a
>> must.
>>
>> The issue of UI is inherently subjective. It can't be used to objectively
>> validate or invalidate Web-scale verifiable identifier systems such as
>> WebID or any other mechanism aimed at achieving the same goals.
> Ultimately what matters is: do users use it correctly? This can be tested :-)
>
> Note that it is necessary to test the cases where the website is evil,
> too - something that's often conveniently missed out of user testing.
> For example, its pretty obvious that OpenID fails horribly in this
> case, so it tends not to get tested.

Okay.
>
>> Anyway, Henry, I, and a few others from the WebID IG (hopefully) are going
>> to knock up some demonstrations to show how this perceived UI/UX
>> inconvenience can be addressed.
> Cool.

Okay, the ball is now in our court to present a few implementations that
address the UI/UX concerns.

Quite relieved to have finally reached this point :-)



--

Regards,

Kingsley Idehen
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca handle: @kidehen
Google+ Profile: https://plus.google.com/112399767740508618350/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Harry Halpin
2012-10-22 12:32:24 UTC
Permalink
On 10/22/2012 02:03 PM, Kingsley Idehen wrote:
> On 10/22/12 7:26 AM, Ben Laurie wrote:
>> On 22 October 2012 11:59, Kingsley Idehen <kidehen-HpHEqLDO2a7UEDaH6ef/***@public.gmane.org>
>> wrote:
>>> On 10/22/12 5:54 AM, Ben Laurie wrote:
>>>> Where we came in was me pointing out that if you disconnect your
>>>> identities by using multiple WebIDs, then you have a UI problem, and
>>>> since then the aim seems to have been to persuade us that multiple
>>>> WebIDs are not needed.
>>> Multiple WebIDs (or any other cryptographically verifiable
>>> identifier) are a
>>> must.
>>>
>>> The issue of UI is inherently subjective. It can't be used to
>>> objectively
>>> validate or invalidate Web-scale verifiable identifier systems such as
>>> WebID or any other mechanism aimed at achieving the same goals.
>> Ultimately what matters is: do users use it correctly? This can be
>> tested :-)
>>
>> Note that it is necessary to test the cases where the website is evil,
>> too - something that's often conveniently missed out of user testing.
>> For example, its pretty obvious that OpenID fails horribly in this
>> case, so it tends not to get tested.
>
> Okay.
>>
>>> Anyway, Henry, I, and a few others from the WebID IG (hopefully)
>>> are going
>>> to knock up some demonstrations to show how this perceived UI/UX
>>> inconvenience can be addressed.
>> Cool.
>
> Okay, ball is in our court to now present a few implementations that
> address the UI/UX concerns.
>
> Quite relieved to have finally reached this point :-)

No, it's not a UI/UX concern, although the UI experience of both identity
on the Web and with WebID in particular is quite terrible, I agree.

My earlier concern was an information flow concern that causes the issue
with linkability, which WebID shares to a large extent with other
server-side information flows. As stated earlier, as long as you trust
the browser, BrowserID does ameliorate this. There is also this rather
odd conflation of "linkability" of URIs in hypertext and URI-enabled
Semantic Web data with linkability as a privacy concern.

I do think many people agree stronger cryptographic credentials for
authentication are a good thing, and BrowserID is based on this and
OpenID Connect has (albeit not often used) options in this space. I
would again, please suggest that the WebID community take on board
comments in a polite manner and not cc mailing lists.
>
>
>
Melvin Carvalho
2012-10-22 12:46:58 UTC
Permalink
On 22 October 2012 14:32, Harry Halpin <hhalpin-***@public.gmane.org> wrote:

> On 10/22/2012 02:03 PM, Kingsley Idehen wrote:
>
>> On 10/22/12 7:26 AM, Ben Laurie wrote:
>>
>>> On 22 October 2012 11:59, Kingsley Idehen <kidehen-HpHEqLDO2a7UEDaH6ef/***@public.gmane.org>
>>> wrote:
>>>
>>>> On 10/22/12 5:54 AM, Ben Laurie wrote:
>>>>
>>>>> Where we came in was me pointing out that if you disconnect your
>>>>> identities by using multiple WebIDs, then you have a UI problem, and
>>>>> since then the aim seems to have been to persuade us that multiple
>>>>> WebIDs are not needed.
>>>>>
>>>> Multiple WebIDs (or any other cryptographically verifiable identifier)
>>>> are a
>>>> must.
>>>>
>>>> The issue of UI is inherently subjective. It can't be used to
>>>> objectively
>>>> validate or invalidate Web-scale verifiable identifier systems such as
>>>> WebID or any other mechanism aimed at achieving the same goals.
>>>>
>>> Ultimately what matters is: do users use it correctly? This can be
>>> tested :-)
>>>
>>> Note that it is necessary to test the cases where the website is evil,
>>> too - something that's often conveniently missed out of user testing.
>>> For example, its pretty obvious that OpenID fails horribly in this
>>> case, so it tends not to get tested.
>>>
>>
>> Okay.
>>
>>>
>>> Anyway, Henry, I, and a few others from the WebID IG (hopefully) are
>>>> going
>>>> to knock up some demonstrations to show how this perceived UI/UX
>>>> inconvenience can be addressed.
>>>>
>>> Cool.
>>>
>>
>> Okay, ball is in our court to now present a few implementations that
>> address the UI/UX concerns.
>>
>> Quite relieved to have finally reached this point :-)
>>
>
> No, its not a UI/UX concern, although the UI experience of both identity
> on the Web and with WebID in particular is quite terrible, I agree.
>

Harry, what exactly do you mean by "on the web"?

The reference point I take for this phrase is from the "Axioms of Web
Architecture" :

http://www.w3.org/DesignIssues/Axioms.html#uri

'An information object is "on the web" if it has a URI.'

If I have understood your previous posts correctly, you perhaps have a
different definition or are referring to something specific. Sorry if I'm a
bit confused; it's not that clear what you mean by the phrase.


> My earlier concern was an information flow concern that causes the issue
> with linkability, which WebID shares to a large extent with other
> server-side information-flow. As stated earlier, as long as you trust the
> browser, BrowserID does ameliorate this. There is also this rather odd
> conflation of "linkability" of URIs with hypertext and URI-enabled Semantic
> Web data" and linkability as a privacy concern.
>
> I do think many people agree stronger cryptographic credentials for
> authentication are a good thing, and BrowserID is based on this and OpenID
> Connect has (albeit not often used) options in this space. I would again,
> please suggest that the WebID community take on board comments in a polite
> manner and not cc mailing lists.
>

Feedback is valuable and appreciated. Certainly the comments made are
taken on board.

With standards such as identity there's always an overlap between different
efforts. I can't speak for others in the community, but I personally agree
that care should be taken to post the right topics to the right list.
Harry Halpin
2012-10-22 12:50:14 UTC
Permalink
[to strip off mailing lists except WebID]


On 10/22/2012 02:32 PM, Harry Halpin wrote:
> On 10/22/2012 02:03 PM, Kingsley Idehen wrote:
>> On 10/22/12 7:26 AM, Ben Laurie wrote:
>>> On 22 October 2012 11:59, Kingsley Idehen <kidehen-HpHEqLDO2a7UEDaH6ef/***@public.gmane.org>
>>> wrote:
>>>> On 10/22/12 5:54 AM, Ben Laurie wrote:
>>>>> Where we came in was me pointing out that if you disconnect your
>>>>> identities by using multiple WebIDs, then you have a UI problem, and
>>>>> since then the aim seems to have been to persuade us that multiple
>>>>> WebIDs are not needed.

Also, from a linkability/privacy perspective, with the "multiple WebID"
solution you would still, by virtue of the URI in the SAN, reveal that you
control (or have delegated control to, i.e. it may just mint WebIDs for
anyone) that domain. It seems that in this case anonymous
credentials/ZKPs would make more sense, since they reveal no URI or key
information, although deployment of that work in browsers is still, I
think, quite far away.

Again, I think a good approach for the WebID CG is to say "Here are the
use-cases it's good at (you control a URI, you like FOAF and the SemWeb,
you want a public profile), and here are the use-cases it's not good at or
specialized for (linkability, UI, etc.)" rather than attempt to paint
WebID as a silver bullet across as many mailing lists as possible.
Realistically, most standards and techniques have trade-offs. Whether or
not industry or users agree with your particular trade-offs determines
the success of the standard, in my experience.

Good luck! Again, there are some ideas in WebID that I personally think
are good (stronger authentication), some that seem unlikely to be adopted
by industry (such as FOAF), some open problems (multiple devices), and
some ideas I don't personally agree with (WebID's approach to linkability
and URIs) but am happy to see other people use if they have different
use-cases.

And if you want changes in the browser, I suggest you discuss them politely
with browser vendors, in a way that takes their concerns on board, as well
as with people who contribute code to open-source browsers, or contribute
such changes yourselves.



>>>> Multiple WebIDs (or any other cryptographically verifiable
>>>> identifier) are a
>>>> must.
>>>>
>>>> The issue of UI is inherently subjective. It can't be used to
>>>> objectively
>>>> validate or invalidate Web-scale verifiable identifier systems such as
>>>> WebID or any other mechanism aimed at achieving the same goals.
>>> Ultimately what matters is: do users use it correctly? This can be
>>> tested :-)
>>>
>>> Note that it is necessary to test the cases where the website is evil,
>>> too - something that's often conveniently missed out of user testing.
>>> For example, its pretty obvious that OpenID fails horribly in this
>>> case, so it tends not to get tested.
>>
>> Okay.
>>>
>>>> Anyway, Henry, I, and a few others from the WebID IG (hopefully)
>>>> are going
>>>> to knock up some demonstrations to show how this perceived UI/UX
>>>> inconvenience can be addressed.
>>> Cool.
>>
>> Okay, ball is in our court to now present a few implementations that
>> address the UI/UX concerns.
>>
>> Quite relieved to have finally reached this point :-)
>
> No, its not a UI/UX concern, although the UI experience of both
> identity on the Web and with WebID in particular is quite terrible, I
> agree.
>
> My earlier concern was an information flow concern that causes the
> issue with linkability, which WebID shares to a large extent with
> other server-side information-flow. As stated earlier, as long as you
> trust the browser, BrowserID does ameliorate this. There is also this
> rather odd conflation of "linkability" of URIs with hypertext and
> URI-enabled Semantic Web data" and linkability as a privacy concern.
>
> I do think many people agree stronger cryptographic credentials for
> authentication are a good thing, and BrowserID is based on this and
> OpenID Connect has (albeit not often used) options in this space. I
> would again, please suggest that the WebID community take on board
> comments in a polite manner and not cc mailing lists.
>>
>>
>>
>
Kingsley Idehen
2012-10-22 13:46:01 UTC
Permalink
On 10/22/12 8:50 AM, Harry Halpin wrote:
>
> Again, I think a good approach towards WebID CG is to say "Here is
> what use-cases its good at (you control a URI, you like FOAF and the
> SemWeb, you want a public profile), here's what use-cases its not good
> at or specialized at (linkability, UI, etc.)" rather than attempt to
> paint WebID as a silver bullet across as many mailing lists as
> possible. Realistically, most standards and techniques have
> trade-offs. Whether or not industry or users agree with your
> particular trade-offs determines the success of the standard in my
> experience.

I've encouraged anyone who will listen not to paint WebID as a silver
bullet. It's a critical piece of the picture, but not the entire picture.
>
> Good luck! Again, there's some good ideas in WebID, there's some ideas
> that I personally think are good (stronger authentication) but
> unlikely to be adopted by industry (such as FOAF), open problems
> (multiple devices) and there's some ideas I don't personally agree
> with (approach of WebID to linkability and URIs) but happy to see
> other people use if they if they have different use-cases.

I think we are reaching a critical beachhead with Ben. I am also
confident the WebID IG and RWW community groups will be encouraged by this
beachhead. Ultimately, I am even confident that we are going to solve
this problem.

As I said, all we need to do is get along and process feedback.
Live demonstrations (where possible) should always be used to
substantiate claims. We do that, and, as I said, we are going to solve
this problem :-)


--

Regards,

Kingsley Idehen
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca handle: @kidehen
Google+ Profile: https://plus.google.com/112399767740508618350/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Henry Story
2012-10-22 14:04:17 UTC
Permalink
On 22 Oct 2012, at 14:32, Harry Halpin <hhalpin-***@public.gmane.org> wrote:

> On 10/22/2012 02:03 PM, Kingsley Idehen wrote:
>> On 10/22/12 7:26 AM, Ben Laurie wrote:
>>> On 22 October 2012 11:59, Kingsley Idehen <kidehen-HpHEqLDO2a7UEDaH6ef/***@public.gmane.org> wrote:
>>>> On 10/22/12 5:54 AM, Ben Laurie wrote:
>>>>> Where we came in was me pointing out that if you disconnect your
>>>>> identities by using multiple WebIDs, then you have a UI problem, and
>>>>> since then the aim seems to have been to persuade us that multiple
>>>>> WebIDs are not needed.
>>>> Multiple WebIDs (or any other cryptographically verifiable identifier) are a
>>>> must.
>>>>
>>>> The issue of UI is inherently subjective. It can't be used to objectively
>>>> validate or invalidate Web-scale verifiable identifier systems such as
>>>> WebID or any other mechanism aimed at achieving the same goals.
>>> Ultimately what matters is: do users use it correctly? This can be tested :-)
>>>
>>> Note that it is necessary to test the cases where the website is evil,
>>> too - something that's often conveniently missed out of user testing.
>>> For example, its pretty obvious that OpenID fails horribly in this
>>> case, so it tends not to get tested.
>>
>> Okay.
>>>
>>>> Anyway, Henry, I, and a few others from the WebID IG (hopefully) are going
>>>> to knock up some demonstrations to show how this perceived UI/UX
>>>> inconvenience can be addressed.
>>> Cool.
>>
>> Okay, ball is in our court to now present a few implementations that address the UI/UX concerns.
>>
>> Quite relieved to have finally reached this point :-)
>
> No, its not a UI/UX concern, although the UI experience of both identity on the Web and with WebID in particular is quite terrible, I agree.

It completely depends on the browsers:
http://www.w3.org/wiki/Foaf%2Bssl/Clients/CertSelection
If you are unhappy with it on Linux, just file a bug report with your browser, or even better hack up a good UI. It's easy: just make it simpler.

>
> My earlier concern was an information flow concern that causes the issue with linkability, which WebID shares to a large extent with other server-side information-flow.

Including BrowserID, which has two tokens that can be used to identify the user across sites:

- an e-mail address ( useful for spamming )
- a public key, which can be used to authenticate across sites


> As stated earlier, as long as you trust the browser, BrowserID does ameliorate this.

No, it does not improve linkability at all. Certainly not if you think the site you are
authenticating to is the one you should be worried about, because a public key by itself
is enough for linkability in the strict (paranoid) sense. That is, if you consider the site
you are logging into as the attacker, then giving two sites a public key for which you have
proven you control the private key is enough for them to know that
the same agent visited both sites. That is because the cert:key relation is inverse functional.

So in simple logical terms: if you go to site1.org and identify with a public key pk,
and they create a local identifier for you <http://site1.org/u123>, and then you go to site s2.net and identify with the same public key pk and they give you an identifier <http://s2.net/lsdfs>
(these need not be public), and then they exchange their information, then each of the sites would have the following relations ( written in http://www.w3.org/TR/Turtle )

@prefix cert: <http://www.w3.org/ns/auth/cert#> .

<http://site1.org/u123> cert:key pk .
<http://s2.net/lsdfs> cert:key pk .

because cert:key is defined as an InverseFunctionalProperty
( as you can see by going http://www.w3.org/ns/auth/cert#key )

Then it follows from simple owl reasoning that

<http://site1.org/u123> == <http://s2.net/lsdfs> .

One cannot get much simpler logical reasoning than this, Harry.
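
To make that inference concrete, here is a minimal sketch (my illustration,
not part of the original exchange) of the join the two sites can perform once
they share their records; the site names, local identifiers and the key value
"PK" are placeholders:

# Two sites each store a local identifier together with the public key
# presented at login. Sharing one inverse-functional value (the key) is
# enough to merge the records, i.e. to conclude owl:sameAs.
site1 = {"http://site1.org/u123": "PK"}   # local id -> cert:key value
site2 = {"http://s2.net/lsdfs": "PK"}

def join_on_key(*stores):
    by_key = {}
    for store in stores:
        for local_id, key in store.items():
            by_key.setdefault(key, set()).add(local_id)
    # every key seen at more than one site links the corresponding identities
    return {k: ids for k, ids in by_key.items() if len(ids) > 1}

print(join_on_key(site1, site2))
# -> {'PK': {'http://site1.org/u123', 'http://s2.net/lsdfs'}}  (set order may vary)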


> There is also this rather odd conflation of "linkability" of URIs with hypertext and URI-enabled Semantic Web data" and linkability as a privacy concern.

I am not conflating these.

My point from the beginning is that Linkability is both a good thing and a bad thing.

As a defender of BrowserID you cannot consistently attack WebID over linkability concerns and yet find that BrowserID does not have the same problem. So I hate to break this truth to you: we have to fight this battle together.

And the battle is simple: the linkability issue is only an issue if you think the site you
are authenticating to is the enemy. If you believe that you are in a relationship with a site that
is under a legal and moral duty to respect the communication you are having with it,
then you will find that the linkability of information with that site and across sites is exactly what you want, in order to reduce the privacy issues that arise out of centralised systems.

>
> I do think many people agree stronger cryptographic credentials for authentication are a good thing, and BrowserID is based on this and OpenID Connect has (albeit not often used) options in this space. I would again, please suggest that the WebID community take on board comments in a polite manner and not cc mailing lists.

All my communications have been polite, and I don't know why you single out the WebID community.
As for taking comments on board: why, the previous e-mail you responded to was a demonstration that we are: CN=WebID,O=∅



>>
>>
>>
>

Social Web Architect
http://bblfish.net/
Harry Halpin
2012-10-22 15:14:10 UTC
Permalink
On 10/22/2012 04:04 PM, Henry Story wrote:
> On 22 Oct 2012, at 14:32, Harry Halpin <hhalpin-***@public.gmane.org> wrote:
>
>> On 10/22/2012 02:03 PM, Kingsley Idehen wrote:
>>> On 10/22/12 7:26 AM, Ben Laurie wrote:
>>>> On 22 October 2012 11:59, Kingsley Idehen <kidehen-HpHEqLDO2a7UEDaH6ef/***@public.gmane.org> wrote:
>>>>> On 10/22/12 5:54 AM, Ben Laurie wrote:
>>>>>> Where we came in was me pointing out that if you disconnect your
>>>>>> identities by using multiple WebIDs, then you have a UI problem, and
>>>>>> since then the aim seems to have been to persuade us that multiple
>>>>>> WebIDs are not needed.
>>>>> Multiple WebIDs (or any other cryptographically verifiable identifier) are a
>>>>> must.
>>>>>
>>>>> The issue of UI is inherently subjective. It can't be used to objectively
>>>>> validate or invalidate Web-scale verifiable identifier systems such as
>>>>> WebID or any other mechanism aimed at achieving the same goals.
>>>> Ultimately what matters is: do users use it correctly? This can be tested :-)
>>>>
>>>> Note that it is necessary to test the cases where the website is evil,
>>>> too - something that's often conveniently missed out of user testing.
>>>> For example, its pretty obvious that OpenID fails horribly in this
>>>> case, so it tends not to get tested.
>>> Okay.
>>>>> Anyway, Henry, I, and a few others from the WebID IG (hopefully) are going
>>>>> to knock up some demonstrations to show how this perceived UI/UX
>>>>> inconvenience can be addressed.
>>>> Cool.
>>> Okay, ball is in our court to now present a few implementations that address the UI/UX concerns.
>>>
>>> Quite relieved to have finally reached this point :-)
>> No, its not a UI/UX concern, although the UI experience of both identity on the Web and with WebID in particular is quite terrible, I agree.
> It completely depends on the browsers:
> http://www.w3.org/wiki/Foaf%2Bssl/Clients/CertSelection
> If you are on Linux just file a bug request to your browser if you are unhappy, or even better hack up a good UI. It's easy: just make it simpler.
>
>> My earlier concern was an information flow concern that causes the issue with linkability, which WebID shares to a large extent with other server-side information-flow.
> Including BrowserId. Which has 2 tokens that can be used to identify the user across sites:
>
> - an e-mail address ( useful for spamming )
> - a public key, which can be used to authenticate across sites
>
>
>> As stated earlier, as long as you trust the browser, BrowserID does ameliorate this.
> No it does not improve linkability at all. Certainly not if you think the site you are authenticating to is the one you should be worried about, because just using a public key
> by itself is enough for Linkability in the strict (paranoid) sense. That is if you
> consider the site you are logging into to as the attacker, then by giving two sites
> a public key where you have proven you control the private key is enough for them to know that
> the same agent visited both sites. That is because the cert:key relation is inverse functional.
>
> So in simple logical terms if you go to site1.org and identify with a public key pk,
> and they create a local identifier for you <http://site1.org/u123>, and then you go site s2.net and identify with the same public key pk and they give you an identifier <http://s2.net/lsdfs>
> (these need not be public) and then they exchange their information, then each of the sites would have the following relations ( written in http://www.w3.org/TR/Turtle )
>
> @prefix cert: <http://www.w3.org/ns/auth/cert#>
>
> <http://site1.org/u123> cert:key pk .
> <http://s2.net/lsdfs> cert:key pk .
>
> because cert:key is defined as an InverseFunctionalProperty
> ( as you can see by going http://www.w3.org/ns/auth/cert#key )
>
> Then it follows from simple owl reasoning that
>
> <http://site1.org/u123> == <http://s2.net/lsdfs> .
>
> One cannot get much simpler logical reasoning that this, Harry.
>
>
>> There is also this rather odd conflation of "linkability" of URIs with hypertext and URI-enabled Semantic Web data" and linkability as a privacy concern.
> I am not conflating these.
To quote the IETF document I seem to have unsuccessfully suggested you
read a while back, the linkability of two or more Items Of Interest
(e.g., subjects, messages, actions, ...) from an attacker's perspective
means that within a particular set of information, the attacker can
distinguish whether these IOIs are related or not (with a high enough
degree of probability to be useful) [1]. If you "like linkability",
that's great, but probably many use-cases aren't built around liking
linkability.

This has very little to do with the hypertext linking of web-pages via URIs. I
think you want to use the term "trust across different sites" rather
than linkability, although I see how WebID wants to conflate that with
trust, which no other identity solution does. A link does not
necessarily mean trust, especially if links aren't bi-directional.

As explained earlier, Mozilla Persona/BrowserID uses digital signatures:
an IDP signs claims but transfers each claim to the RP via the
browser (hence the notion of "different information flow"), and thus the
RP and IDP do not directly communicate, reducing the linkability of the
data easily gathered by the IDP (not the RP).
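
For readers unfamiliar with that flow, here is a minimal sketch (an
illustration only, not the actual BrowserID/Persona protocol or wire format;
key type, claim fields and addresses are placeholders) of an IDP-signed claim
being relayed by the client so that the IDP never learns which RP was visited:

import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# 1. The IDP signs a claim about the user's e-mail and hands it to the browser.
idp_key = Ed25519PrivateKey.generate()
claim = json.dumps({"email": "user@example.org",          # placeholder address
                    "audience": "https://rp.example"}).encode()
assertion = idp_key.sign(claim)

# 2. The browser forwards (claim, assertion) to the RP; the IDP is not
#    contacted per login, so it does not see which RP the user visited.
# 3. The RP verifies the assertion against the IDP's public key, obtained
#    out of band (e.g. published at a well-known location).
idp_public = idp_key.public_key()
idp_public.verify(assertion, claim)   # raises InvalidSignature if tampered with
print("claim verified without direct RP-IDP contact")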

I know WebID folks believe IDP = my homepage, but for most people the IDP
would likely not be a homepage but a major identity provider, to which
data minimization principles should apply, including over ownership of an
individual's social network data and a history of their interactions
with every RP. I am not defending BrowserID per se: Persona assumes you
trust the browser, which some people don't. Also, email verification,
while common, is not great from a security perspective, e.g. STARTTLS not
giving error messages when it degrades.

Perhaps a more productive question would be why someone would use WebID
rather than OpenID Connect with digital signatures?

Although, I have run out of time for this for the time being.

>
> My point from the beginning is that Linkability is both a good thing and a bad thing.
>
> As a defender of BrowserId you cannot consistently attack WebID for linkability concerns and find BrowserId not to have that same problem. So I hate to reveal this truth to you: but we have to fight this battle together.
>
> And the battle is simple: the linkability issue is only an issue if you think the site you
> are authenticating to is the enemy. If you believe that you are in relation with a site that
> is under a legal and moral duty to be respectful of the communication you are having with it,
> then you will find that the linkability of information with that site and across sites is exactly what you want in order to reduce privacy issues that arise out of centralised systems.
>
>> I do think many people agree stronger cryptographic credentials for authentication are a good thing, and BrowserID is based on this and OpenID Connect has (albeit not often used) options in this space. I would again, please suggest that the WebID community take on board comments in a polite manner and not cc mailing lists.
> All my communications have been polite, and I don't know why you select out the WebID community.
> As for taking on board comments, why, just the previous e-mail you responded to was a demonstration that we are: CN=WebID,O=∅
>
>
>
>>>
>>>
> Social Web Architect
> http://bblfish.net/
>
Melvin Carvalho
2012-10-22 15:25:21 UTC
Permalink
On 22 October 2012 17:14, Harry Halpin <hhalpin-***@public.gmane.org> wrote:

> On 10/22/2012 04:04 PM, Henry Story wrote:
>
>> On 22 Oct 2012, at 14:32, Harry Halpin <hhalpin-***@public.gmane.org> wrote:
>>
>> On 10/22/2012 02:03 PM, Kingsley Idehen wrote:
>>>
>>>> On 10/22/12 7:26 AM, Ben Laurie wrote:
>>>>
>>>>> On 22 October 2012 11:59, Kingsley Idehen <kidehen-HpHEqLDO2a7UEDaH6ef/***@public.gmane.org>
>>>>> wrote:
>>>>>
>>>>>> On 10/22/12 5:54 AM, Ben Laurie wrote:
>>>>>>
>>>>>>> Where we came in was me pointing out that if you disconnect your
>>>>>>> identities by using multiple WebIDs, then you have a UI problem, and
>>>>>>> since then the aim seems to have been to persuade us that multiple
>>>>>>> WebIDs are not needed.
>>>>>>>
>>>>>> Multiple WebIDs (or any other cryptographically verifiable
>>>>>> identifier) are a
>>>>>> must.
>>>>>>
>>>>>> The issue of UI is inherently subjective. It can't be used to
>>>>>> objectively
>>>>>> validate or invalidate Web-scale verifiable identifier systems such as
>>>>>> WebID or any other mechanism aimed at achieving the same goals.
>>>>>>
>>>>> Ultimately what matters is: do users use it correctly? This can be
>>>>> tested :-)
>>>>>
>>>>> Note that it is necessary to test the cases where the website is evil,
>>>>> too - something that's often conveniently missed out of user testing.
>>>>> For example, its pretty obvious that OpenID fails horribly in this
>>>>> case, so it tends not to get tested.
>>>>>
>>>> Okay.
>>>>
>>>>> Anyway, Henry, I, and a few others from the WebID IG (hopefully) are
>>>>>> going
>>>>>> to knock up some demonstrations to show how this perceived UI/UX
>>>>>> inconvenience can be addressed.
>>>>>>
>>>>> Cool.
>>>>>
>>>> Okay, ball is in our court to now present a few implementations that
>>>> address the UI/UX concerns.
>>>>
>>>> Quite relieved to have finally reached this point :-)
>>>>
>>> No, its not a UI/UX concern, although the UI experience of both identity
>>> on the Web and with WebID in particular is quite terrible, I agree.
>>>
>> It completely depends on the browsers:
>> http://www.w3.org/wiki/Foaf%**2Bssl/Clients/CertSelection<http://www.w3.org/wiki/Foaf%2Bssl/Clients/CertSelection>
>> If you are on Linux just file a bug request to your browser if you are
>> unhappy, or even better hack up a good UI. It's easy: just make it simpler.
>>
>> My earlier concern was an information flow concern that causes the issue
>>> with linkability, which WebID shares to a large extent with other
>>> server-side information-flow.
>>>
>> Including BrowserId. Which has 2 tokens that can be used to identify the
>> user across sites:
>>
>> - an e-mail address ( useful for spamming )
>> - a public key, which can be used to authenticate across sites
>>
>>
>> As stated earlier, as long as you trust the browser, BrowserID does
>>> ameliorate this.
>>>
>> No it does not improve linkability at all. Certainly not if you think the
>> site you are authenticating to is the one you should be worried about,
>> because just using a public key
>> by itself is enough for Linkability in the strict (paranoid) sense. That
>> is if you
>> consider the site you are logging into to as the attacker, then by giving
>> two sites
>> a public key where you have proven you control the private key is enough
>> for them to know that
>> the same agent visited both sites. That is because the cert:key relation
>> is inverse functional.
>>
>> So in simple logical terms if you go to site1.org and identify with a
>> public key pk,
>> and they create a local identifier for you <http://site1.org/u123>, and
>> then you go site s2.net and identify with the same public key pk and
>> they give you an identifier <http://s2.net/lsdfs>
>> (these need not be public) and then they exchange their information, then
>> each of the sites would have the following relations ( written in
>> http://www.w3.org/TR/Turtle )
>>
>> @prefix cert: <http://www.w3.org/ns/auth/**cert#<http://www.w3.org/ns/auth/cert#>
>> >
>>
>> <http://site1.org/u123> cert:key pk .
>> <http://s2.net/lsdfs> cert:key pk .
>>
>> because cert:key is defined as an InverseFunctionalProperty
>> ( as you can see by going http://www.w3.org/ns/auth/**cert#key<http://www.w3.org/ns/auth/cert#key>)
>>
>> Then it follows from simple owl reasoning that
>>
>> <http://site1.org/u123> == <http://s2.net/lsdfs> .
>>
>> One cannot get much simpler logical reasoning that this, Harry.
>>
>>
>> There is also this rather odd conflation of "linkability" of URIs with
>>> hypertext and URI-enabled Semantic Web data" and linkability as a privacy
>>> concern.
>>>
>> I am not conflating these.
>>
> To quote the IETF document I seem to have unsuccessfully suggested you
> read a while back, the linkability of two or more Items Of Interest (e.g.,
> subjects, messages, actions, ...) from an attacker's perspective means
> that within a particular set of information, the attacker can distinguish
> whether these IOIs are related or not (with a high enough degree of
> probability to be useful) [1]. If you "like linkability", that's great, but
> probably many use-cases aren't built around liking linkability.
>

Harry, this document has been discussed in detail in the WebID group.
Thank you for bringing it to our attention.

I can't help but reflect at this point that the only reason this
conversation has been possible at all is the "linkability" property
of the e-mail protocol. :)


> This has very little with hypertext linking of web-pages via URIs. I
> think you want to use the term "trust across different sites" rather than
> linkability, although I see how WebID wants to conflate that with trust,
> which no other identity solution does. A link does not necessarily mean
> trust, especially if links aren't bi-directional.
>
> As explained earlier, Mozilla Personae/BrowserID uses digital signatures
> where an IDP signs claims but transfers that claim to the RP via the
> browser (thus the notion of "different information flow") and thus the RP
> and IDP do not directly communicate, reducing the linkability of the data
> easily gathered by the IDP (not the RP).
>
> I know WebID folks believe IDP = my homepage, but for most people IDP
> would likely not be a homepage, but a major identity provider for which
> data minimization principles should apply, including ownership of the
> social network data of an individual and a history of their interactions
> with every RP. I am not defending BrowerID per se: Personae assumes you
> trust the browser, which some people don't. Also, email verification, while
> common, is not great from a security perspective, i.e. STARTLS not giving
> error messages when it degrades.
>
> Perhaps a more productive question would be why would someone use WebID
> rather than OpenID Connect with digital signatures?
>
> Although, I have ran out of time for this for the time being.
>
>
>
>> My point from the beginning is that Linkability is both a good thing and
>> a bad thing.
>>
>> As a defender of BrowserId you cannot consistently attack WebID for
>> linkability concerns and find BrowserId not to have that same problem. So I
>> hate to reveal this truth to you: but we have to fight this battle together.
>>
>> And the battle is simple: the linkability issue is only an issue if you
>> think the site you
>> are authenticating to is the enemy. If you believe that you are in
>> relation with a site that
>> is under a legal and moral duty to be respectful of the communication you
>> are having with it,
>> then you will find that the linkability of information with that site and
>> across sites is exactly what you want in order to reduce privacy issues
>> that arise out of centralised systems.
>>
>> I do think many people agree stronger cryptographic credentials for
>>> authentication are a good thing, and BrowserID is based on this and OpenID
>>> Connect has (albeit not often used) options in this space. I would again,
>>> please suggest that the WebID community take on board comments in a polite
>>> manner and not cc mailing lists.
>>>
>> All my communications have been polite, and I don't know why you select
>> out the WebID community.
>> As for taking on board comments, why, just the previous e-mail you
>> responded to was a demonstration that we are: CN=WebID,O=∅
>>
>>
>>
>>
>>>>
>>>> Social Web Architect
>> http://bblfish.net/
>>
>>
>
>
Dan Brickley
2012-10-22 17:31:28 UTC
Permalink
On 22 Oct 2012 08:14, "Harry Halpin" <hhalpin-***@public.gmane.org> wrote:
>
> On 10/22/2012 04:04 PM, Henry

> I know WebID folks believe IDP = my homepage, but for most people IDP
> would likely not be a homepage, but a major identity provider for which
> data minimization principles should apply, including ownership of the
> social network data of an individual and a history of their interactions
> with every RP.

Is there some reason it is implausible that homepages could be backed by
major identity providers? Specifically, user-controlled DNS
names, but resolving to more sophisticated hosting than old-style
homepages....

Dan
Henry Story
2012-10-22 19:36:14 UTC
Permalink
On 22 Oct 2012, at 17:14, Harry Halpin <***@w3.org> wrote:

> On 10/22/2012 04:04 PM, Henry Story wrote:
>> On 22 Oct 2012, at 14:32, Harry Halpin <***@w3.org> wrote:
>>
>>> On 10/22/2012 02:03 PM, Kingsley Idehen wrote:
>>>> On 10/22/12 7:26 AM, Ben Laurie wrote:
>>>>> On 22 October 2012 11:59, Kingsley Idehen <***@openlinksw.com> wrote:
>>>>>> On 10/22/12 5:54 AM, Ben Laurie wrote:
>>>>>>> Where we came in was me pointing out that if you disconnect your
>>>>>>> identities by using multiple WebIDs, then you have a UI problem, and
>>>>>>> since then the aim seems to have been to persuade us that multiple
>>>>>>> WebIDs are not needed.
>>>>>> Multiple WebIDs (or any other cryptographically verifiable identifier) are a
>>>>>> must.
>>>>>>
>>>>>> The issue of UI is inherently subjective. It can't be used to objectively
>>>>>> validate or invalidate Web-scale verifiable identifier systems such as
>>>>>> WebID or any other mechanism aimed at achieving the same goals.
>>>>> Ultimately what matters is: do users use it correctly? This can be tested :-)
>>>>>
>>>>> Note that it is necessary to test the cases where the website is evil,
>>>>> too - something that's often conveniently missed out of user testing.
>>>>> For example, its pretty obvious that OpenID fails horribly in this
>>>>> case, so it tends not to get tested.
>>>> Okay.
>>>>>> Anyway, Henry, I, and a few others from the WebID IG (hopefully) are going
>>>>>> to knock up some demonstrations to show how this perceived UI/UX
>>>>>> inconvenience can be addressed.
>>>>> Cool.
>>>> Okay, ball is in our court to now present a few implementations that address the UI/UX concerns.
>>>>
>>>> Quite relieved to have finally reached this point :-)
>>> No, its not a UI/UX concern, although the UI experience of both identity on the Web and with WebID in particular is quite terrible, I agree.
>> It completely depends on the browsers:
>> http://www.w3.org/wiki/Foaf%2Bssl/Clients/CertSelection
>> If you are on Linux just file a bug request to your browser if you are unhappy, or even better hack up a good UI. It's easy: just make it simpler.
>>
>>> My earlier concern was an information flow concern that causes the issue with linkability, which WebID shares to a large extent with other server-side information-flow.
>> Including BrowserId. Which has 2 tokens that can be used to identify the user across sites:
>>
>> - an e-mail address ( useful for spamming )
>> - a public key, which can be used to authenticate across sites
>>
>>
>>> As stated earlier, as long as you trust the browser, BrowserID does ameliorate this.
>> No it does not improve linkability at all. Certainly not if you think the site you are authenticating to is the one you should be worried about, because just using a public key
>> by itself is enough for Linkability in the strict (paranoid) sense. That is if you
>> consider the site you are logging into to as the attacker, then by giving two sites
>> a public key where you have proven you control the private key is enough for them to know that
>> the same agent visited both sites. That is because the cert:key relation is inverse functional.
>>
>> So in simple logical terms if you go to site1.org and identify with a public key pk,
>> and they create a local identifier for you <http://site1.org/u123>, and then you go site s2.net and identify with the same public key pk and they give you an identifier <http://s2.net/lsdfs>
>> (these need not be public) and then they exchange their information, then each of the sites would have the following relations ( written in http://www.w3.org/TR/Turtle )
>>
>> @prefix cert: <http://www.w3.org/ns/auth/cert#>
>>
>> <http://site1.org/u123> cert:key pk .
>> <http://s2.net/lsdfs> cert:key pk .
>>
>> because cert:key is defined as an InverseFunctionalProperty
>> ( as you can see by going http://www.w3.org/ns/auth/cert#key )
>>
>> Then it follows from simple owl reasoning that
>>
>> <http://site1.org/u123> == <http://s2.net/lsdfs> .
>>
>> One cannot get much simpler logical reasoning that this, Harry.
>>
>>
>>> There is also this rather odd conflation of "linkability" of URIs with hypertext and URI-enabled Semantic Web data" and linkability as a privacy concern.
>> I am not conflating these.
> To quote the IETF document I seem to have unsuccessfully suggested you read a while back, the linkability of two or more Items Of Interest (e.g., subjects, messages, actions, ...) from an attacker's perspective means that within a particular set of information, the attacker can distinguish whether these IOIs are related or not (with a high enough degree of probability to be useful) [1]. If you "like linkability", that's great, but probably many use-cases aren't built around liking linkability.

The use of e-mail addresses as the primary identifier in BrowserID is defended precisely because web sites want to be able to communicate back with the user. It is a core part of the BrowserID marketing spiel. So linkability is core to BrowserID in that respect, and it is a core use case.

But the problem here is that one cannot speak of linkability full stop. One has to bring some further elements into consideration.

The definition from draft-hansen-privacy-terminology-03 that you quote suggests that linkability is relative to an agent, call him 'A'. It is imagined that A has attackers, and so it is at least logically possible that A has friends too.

Communicating with friends is about building links, indeed this is what building communities is about. So building a social web is about building links in a distributed decentralised manner. We want to both increase linkages between people and increase their autonomy.

From this it follows that A, when communicating, has to consider two groups of people:
1- friends: those people with whom A wishes to increase linkages
2- enemies: those to whom A wishes to avoid linkages leaking - the attacker as per draft-hansen-privacy-terminology-03

This is a rough distinction, but I think it makes clear that you cannot just talk about linkability being good or bad without taking into consideration what the communication is about - with whom someone is communicating - and what the social network of the person is about - who his friends and enemies are.

> This has very little with hypertext linking of web-pages via URIs.

Well, that is why my argument above was framed in terms of public keys, not URIs. But I could also have made it in terms of e-mail addresses for BrowserID. Here, let me do it for you.

Imagine you go to two web sites with BrowserID: site1.org and s2.net . Each captures your e-mail address. They then exchange information ( and they trust each other too - that is an important point btw. ) Each site then has the following graph in its store

@prefix foaf: <http://xmlns.com/foaf/0.1/> .

<http://site1.org/u123> foaf:mbox <mailto:***@ed.ac.uk> .
<http://s2.net/lsdfs> foaf:mbox <mailto:***@ed.ac.uk>.

From which they can deduce, since foaf:mbox is an owl:InverseFunctionalProperty
( just look up http://xmlns.com/foaf/0.1/mbox ), that

<http://site1.org/u123> owl:sameAs <http://s2.net/lsdfs> .

There you go: linkability via e-mail addresses.
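
The same join can be run mechanically; here is a minimal sketch (my own
illustration, using a placeholder mailbox rather than any real address) that
loads the two graphs with rdflib and finds the subjects sharing an mbox:

from rdflib import Graph

data = """
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
<http://site1.org/u123> foaf:mbox <mailto:user@example.org> .
<http://s2.net/lsdfs>   foaf:mbox <mailto:user@example.org> .
"""

g = Graph()
g.parse(data=data, format="turtle")

# foaf:mbox is declared an owl:InverseFunctionalProperty, so any two
# subjects sharing an mbox denote the same agent (owl:sameAs).
q = """
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?a ?b WHERE {
  ?a foaf:mbox ?m .
  ?b foaf:mbox ?m .
  FILTER (str(?a) < str(?b))
}
"""
for a, b in g.query(q):
    print(a, "owl:sameAs", b)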

> I think you want to use the term "trust across different sites" rather than linkability, although I see how WebID wants to conflate that with trust, which no other identity solution does. A link does not necessarily mean trust, especially if links aren't bi-directional.

There are many different types of links; some indicate trust, some don't. One can also have the equivalent of bidirectional links: A has a document where he points to B as a friend, and B returns the favour by placing a link from his document to A.

>
> As explained earlier, Mozilla Personae/BrowserID uses digital signatures where an IDP signs claims but transfers that claim to the RP via the browser (thus the notion of "different information flow") and thus the RP and IDP do not directly communicate, reducing the linkability of the data easily gathered by the IDP (not the RP).

As I proved above, BrowserID, by using public keys and furthermore e-mail identifiers, gives a linkable identity to the sites people log into. So you do not escape the paranoid view on which linkability is a problem, and you gain nothing major or interesting in return.

> I know WebID folks believe IDP = my homepage, but for most people IDP would likely not be a homepage, but a major identity provider for which data minimization principles should apply, including ownership of the social network data of an individual and a history of their interactions with every RP.

The point of this e-mail was to show that the type of linkability WebID provides is there
in order to make it possible for people to choose not to inhabit such a future.

> I am not defending BrowerID per se: Personae assumes you trust the browser, which some people don't. Also, email verification, while common, is not great from a security perspective, i.e. STARTLS not giving error messages when it degrades.

BrowserID will be an interesting tool in the future, when cryptography in the browser is available. Until then it has a strong centralisation focus. WebID delivers now what BrowserID
promises to deliver later.

Anyway, you need to be more careful when talking about linkability, since to speak of linkability as good or bad without taking into account the context of who is talking to whom, who is in the role of the enemy, etc., makes no sense.

>
> Perhaps a more productive question would be why would someone use WebID rather than OpenID Connect with digital signatures?

That can be a discussion for another day, perhaps.
>
> Although, I have ran out of time for this for the time being.
>
>>
>> My point from the beginning is that Linkability is both a good thing and a bad thing.
>>
>> As a defender of BrowserId you cannot consistently attack WebID for linkability concerns and find BrowserId not to have that same problem. So I hate to reveal this truth to you: but we have to fight this battle together.
>>
>> And the battle is simple: the linkability issue is only an issue if you think the site you
>> are authenticating to is the enemy. If you believe that you are in relation with a site that
>> is under a legal and moral duty to be respectful of the communication you are having with it,
>> then you will find that the linkability of information with that site and across sites is exactly what you want in order to reduce privacy issues that arise out of centralised systems.
>>
>>> I do think many people agree stronger cryptographic credentials for authentication are a good thing, and BrowserID is based on this and OpenID Connect has (albeit not often used) options in this space. I would again, please suggest that the WebID community take on board comments in a polite manner and not cc mailing lists.
>> All my communications have been polite, and I don't know why you select out the WebID community.
>> As for taking on board comments, why, just the previous e-mail you responded to was a demonstration that we are: CN=WebID,O=∅
>>
>>
>>
>>>>
>>>>
>> Social Web Architect
>> http://bblfish.net/
>>
>

Social Web Architect
http://bblfish.net/
Ben Laurie
2012-10-23 08:44:06 UTC
Permalink
On Mon, Oct 22, 2012 at 8:36 PM, Henry Story <***@bblfish.net> wrote:
>
> On 22 Oct 2012, at 17:14, Harry Halpin <***@w3.org> wrote:
>
>> On 10/22/2012 04:04 PM, Henry Story wrote:
>>> On 22 Oct 2012, at 14:32, Harry Halpin <***@w3.org> wrote:
>>>
>>>> On 10/22/2012 02:03 PM, Kingsley Idehen wrote:
>>>>> On 10/22/12 7:26 AM, Ben Laurie wrote:
>>>>>> On 22 October 2012 11:59, Kingsley Idehen <***@openlinksw.com> wrote:
>>>>>>> On 10/22/12 5:54 AM, Ben Laurie wrote:
>>>>>>>> Where we came in was me pointing out that if you disconnect your
>>>>>>>> identities by using multiple WebIDs, then you have a UI problem, and
>>>>>>>> since then the aim seems to have been to persuade us that multiple
>>>>>>>> WebIDs are not needed.
>>>>>>> Multiple WebIDs (or any other cryptographically verifiable identifier) are a
>>>>>>> must.
>>>>>>>
>>>>>>> The issue of UI is inherently subjective. It can't be used to objectively
>>>>>>> validate or invalidate Web-scale verifiable identifier systems such as
>>>>>>> WebID or any other mechanism aimed at achieving the same goals.
>>>>>> Ultimately what matters is: do users use it correctly? This can be tested :-)
>>>>>>
>>>>>> Note that it is necessary to test the cases where the website is evil,
>>>>>> too - something that's often conveniently missed out of user testing.
>>>>>> For example, its pretty obvious that OpenID fails horribly in this
>>>>>> case, so it tends not to get tested.
>>>>> Okay.
>>>>>>> Anyway, Henry, I, and a few others from the WebID IG (hopefully) are going
>>>>>>> to knock up some demonstrations to show how this perceived UI/UX
>>>>>>> inconvenience can be addressed.
>>>>>> Cool.
>>>>> Okay, ball is in our court to now present a few implementations that address the UI/UX concerns.
>>>>>
>>>>> Quite relieved to have finally reached this point :-)
>>>> No, its not a UI/UX concern, although the UI experience of both identity on the Web and with WebID in particular is quite terrible, I agree.
>>> It completely depends on the browsers:
>>> http://www.w3.org/wiki/Foaf%2Bssl/Clients/CertSelection
>>> If you are on Linux just file a bug request to your browser if you are unhappy, or even better hack up a good UI. It's easy: just make it simpler.
>>>
>>>> My earlier concern was an information flow concern that causes the issue with linkability, which WebID shares to a large extent with other server-side information-flow.
>>> Including BrowserId. Which has 2 tokens that can be used to identify the user across sites:
>>>
>>> - an e-mail address ( useful for spamming )
>>> - a public key, which can be used to authenticate across sites
>>>
>>>
>>>> As stated earlier, as long as you trust the browser, BrowserID does ameliorate this.
>>> No it does not improve linkability at all. Certainly not if you think the site you are authenticating to is the one you should be worried about, because just using a public key
>>> by itself is enough for Linkability in the strict (paranoid) sense. That is if you
>>> consider the site you are logging into to as the attacker, then by giving two sites
>>> a public key where you have proven you control the private key is enough for them to know that
>>> the same agent visited both sites. That is because the cert:key relation is inverse functional.
>>>
>>> So in simple logical terms if you go to site1.org and identify with a public key pk,
>>> and they create a local identifier for you <http://site1.org/u123>, and then you go site s2.net and identify with the same public key pk and they give you an identifier <http://s2.net/lsdfs>
>>> (these need not be public) and then they exchange their information, then each of the sites would have the following relations ( written in http://www.w3.org/TR/Turtle )
>>>
>>> @prefix cert: <http://www.w3.org/ns/auth/cert#>
>>>
>>> <http://site1.org/u123> cert:key pk .
>>> <http://s2.net/lsdfs> cert:key pk .
>>>
>>> because cert:key is defined as an InverseFunctionalProperty
>>> ( as you can see by going http://www.w3.org/ns/auth/cert#key )
>>>
>>> Then it follows from simple owl reasoning that
>>>
>>> <http://site1.org/u123> == <http://s2.net/lsdfs> .
>>>
>>> One cannot get much simpler logical reasoning that this, Harry.
>>>
>>>
>>>> There is also this rather odd conflation of "linkability" of URIs with hypertext and URI-enabled Semantic Web data" and linkability as a privacy concern.
>>> I am not conflating these.
>> To quote the IETF document I seem to have unsuccessfully suggested you read a while back, the linkability of two or more Items Of Interest (e.g., subjects, messages, actions, ...) from an attacker's perspective means that within a particular set of information, the attacker can distinguish whether these IOIs are related or not (with a high enough degree of probability to be useful) [1]. If you "like linkability", that's great, but probably many use-cases aren't built around liking linkability.
>
> The use of e-mail addresses as the primary identifier of BrowserId is defended for exactly the reason that web sites want to be able to communicate back with the user. It is a core part of the BrowserId marketing spiel. So linkability is core to BrowserID in that respect, and it is a core use case.
>
> But the problem here is that one cannot speak of linkability full stop. One has to bring some further elements into consideration.
>
> The definition from the draft-hansen-privacy-terminology-03 that you quote suggests that linkability is relative to an agent, call him 'A'. It is imagined that A has attackers, and so at least it is logically possible that A have friends too.
>
> Communicating with friends is about building links, indeed this is what building communities is about. So building a social web is about building links in a distributed decentralised manner. We want to both increase linkages between people and increase their autonomy.
>
> From this it follows that A when communicating has to consider two groups of people
> 1- friends: those people with whom A wishes to increase linkages with
> 2- enemies: those with whom A wishes to avoid linkages leaking to - the attacker as per draft-hansen-privacy-terminology-03
>
> This is a bit rough of a distinction but I think it makes clear that you cannot just talk about linkability being good or bad without taking into considerations what the communication is about - with whom someone is communicating - and what the social network of the person is about - who his friends and enemies are.
>
>> This has very little with hypertext linking of web-pages via URIs.
>
> Well that is why above my argument was based in terms of public keys not URIs. But I could also have made it in terms of e-mail address for BrowserID. Here let me do it for you.
>
> Imagine you go to two web sites with BrowserID: site1.org and s2.net . Each captures your e-mail address. They then exchange information ( and they trust each other too - that is an important point btw. ) Each site then has the following graph in store
>
> @prefix foaf: <http://xmlns.com/foaf/0.1/> .
>
> <http://site1.org/u123> foaf:mbox <mailto:***@ed.ac.uk> .
> <http://s2.net/lsdfs> foaf:mbox <mailto:***@ed.ac.uk>.
>
> From which they can deduce because since foaf:mbox is an owl:InverseFunctionalProperty
> ( just look up http://xmlns.com/foaf/0.1/mbox ) that
>
> <http://site1.org/u123> owl:sameAs <http://s2.net/lsdfs> .
>
> There you go: linkability via e-mail addresses.
>
>> I think you want to use the term "trust across different sites" rather than linkability, although I see how WebID wants to conflate that with trust, which no other identity solution does. A link does not necessarily mean trust, especially if links aren't bi-directional.
>
> There are many different types of links, some indicate trust, some don't. One can also have the equivalent of bidirectional links. A has a document where he points to B as a friend, and B returns the favour by placing a link from his document to A.
>
>>
>> As explained earlier, Mozilla Personae/BrowserID uses digital signatures where an IDP signs claims but transfers that claim to the RP via the browser (thus the notion of "different information flow") and thus the RP and IDP do not directly communicate, reducing the linkability of the data easily gathered by the IDP (not the RP).
>
> As I prooved above, BrowserID by using Public Keys and by using e-mail identifiers furthermore, is giving a linkable identity to sites people use to log into. So you don't get away from the paranoid view of linkability being a problem, without getting any major interesting gain.
>
>> I know WebID folks believe IDP = my homepage, but for most people IDP would likely not be a homepage, but a major identity provider for which data minimization principles should apply, including ownership of the social network data of an individual and a history of their interactions with every RP.
>
> The point of this e-mail was to show that the type of linkability WebID provides is here in
> order to make it possible for people to choose not to inhabit such a future.
>
>> I am not defending BrowerID per se: Personae assumes you trust the browser, which some people don't. Also, email verification, while common, is not great from a security perspective, i.e. STARTLS not giving error messages when it degrades.
>
> BrowserId will be an interesting tool in the future, when cryptography in the browser is available. Until then it has a strong centralisation focus. WebID delivers what BrowserId
> promises to do, but now.
>
> Anyway, you need to be more careful about talk of linkability, since to talk of linkability as good or bad without taking the context of who is talking to whome, who is in the role of the enemy etc, makes no sense.
>
>>
>> Perhaps a more productive question would be why would someone use WebID rather than OpenID Connect with digital signatures?
>
> that can be a discussion for another day perhaps.

Not disagreeing with any of the above, but observing that:

a) There's no particular reason you could not have an email per site
as well as a key per site.

b) Linkability is not, as you say, inherently bad. The problem occurs
when you have (effectively) no choice about linkability.
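
For what it's worth, per-site identifiers as in (a) are cheap to derive; here
is a minimal sketch (my own illustration of the idea, not a concrete proposal;
the secret and origins are placeholders) that derives an unlinkable alias per
origin from a single master secret, which could equally seed a per-site
address or key pair:

import hmac, hashlib

master_secret = b"keep-this-secret-and-offline"   # placeholder

def per_site_alias(origin: str) -> str:
    """Derive a stable, per-origin alias; two sites cannot join on it."""
    return hmac.new(master_secret, origin.encode(), hashlib.sha256).hexdigest()[:16]

print(per_site_alias("https://site1.org"))   # differs from ...
print(per_site_alias("https://s2.net"))      # ... this one, so no trivial join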

>>
>> Although, I have ran out of time for this for the time being.
>>
>>>
>>> My point from the beginning is that Linkability is both a good thing and a bad thing.
>>>
>>> As a defender of BrowserId you cannot consistently attack WebID for linkability concerns and find BrowserId not to have that same problem. So I hate to reveal this truth to you: but we have to fight this battle together.
>>>
>>> And the battle is simple: the linkability issue is only an issue if you think the site you
>>> are authenticating to is the enemy. If you believe that you are in relation with a site that
>>> is under a legal and moral duty to be respectful of the communication you are having with it,
>>> then you will find that the linkability of information with that site and across sites is exactly what you want in order to reduce privacy issues that arise out of centralised systems.
>>>
>>>> I do think many people agree stronger cryptographic credentials for authentication are a good thing, and BrowserID is based on this and OpenID Connect has (albeit not often used) options in this space. I would again, please suggest that the WebID community take on board comments in a polite manner and not cc mailing lists.
>>> All my communications have been polite, and I don't know why you select out the WebID community.
>>> As for taking on board comments, why, just the previous e-mail you responded to was a demonstration that we are: CN=WebID,O=∅
>>>
>>>
>>>
>>>>>
>>>>>
>>> Social Web Architect
>>> http://bblfish.net/
>>>
>>
>
> Social Web Architect
> http://bblfish.net/
>
>
> _______________________________________________
> saag mailing list
> ***@ietf.org
> https://www.ietf.org/mailman/listinfo/saag
>
Henry Story
2012-10-23 09:45:11 UTC
Permalink
On 23 Oct 2012, at 10:44, Ben Laurie <***@links.org> wrote:

> On Mon, Oct 22, 2012 at 8:36 PM, Henry Story <***@bblfish.net> wrote:
>>
>> On 22 Oct 2012, at 17:14, Harry Halpin <***@w3.org> wrote:
>>
>>> On 10/22/2012 04:04 PM, Henry Story wrote:
>>>> On 22 Oct 2012, at 14:32, Harry Halpin <***@w3.org> wrote:
>>>>
>>>>> On 10/22/2012 02:03 PM, Kingsley Idehen wrote:
>>>>>> On 10/22/12 7:26 AM, Ben Laurie wrote:
>>>>>>> On 22 October 2012 11:59, Kingsley Idehen <***@openlinksw.com> wrote:
>>>>>>>> On 10/22/12 5:54 AM, Ben Laurie wrote:
>>>>>>>>> Where we came in was me pointing out that if you disconnect your
>>>>>>>>> identities by using multiple WebIDs, then you have a UI problem, and
>>>>>>>>> since then the aim seems to have been to persuade us that multiple
>>>>>>>>> WebIDs are not needed.
>>>>>>>> Multiple WebIDs (or any other cryptographically verifiable identifier) are a
>>>>>>>> must.
>>>>>>>>
>>>>>>>> The issue of UI is inherently subjective. It can't be used to objectively
>>>>>>>> validate or invalidate Web-scale verifiable identifier systems such as
>>>>>>>> WebID or any other mechanism aimed at achieving the same goals.
>>>>>>> Ultimately what matters is: do users use it correctly? This can be tested :-)
>>>>>>>
>>>>>>> Note that it is necessary to test the cases where the website is evil,
>>>>>>> too - something that's often conveniently missed out of user testing.
>>>>>>> For example, its pretty obvious that OpenID fails horribly in this
>>>>>>> case, so it tends not to get tested.
>>>>>> Okay.
>>>>>>>> Anyway, Henry, I, and a few others from the WebID IG (hopefully) are going
>>>>>>>> to knock up some demonstrations to show how this perceived UI/UX
>>>>>>>> inconvenience can be addressed.
>>>>>>> Cool.
>>>>>> Okay, ball is in our court to now present a few implementations that address the UI/UX concerns.
>>>>>>
>>>>>> Quite relieved to have finally reached this point :-)
>>>>> No, its not a UI/UX concern, although the UI experience of both identity on the Web and with WebID in particular is quite terrible, I agree.
>>>> It completely depends on the browsers:
>>>> http://www.w3.org/wiki/Foaf%2Bssl/Clients/CertSelection
>>>> If you are on Linux just file a bug request to your browser if you are unhappy, or even better hack up a good UI. It's easy: just make it simpler.
>>>>
>>>>> My earlier concern was an information flow concern that causes the issue with linkability, which WebID shares to a large extent with other server-side information-flow.
>>>> Including BrowserId. Which has 2 tokens that can be used to identify the user across sites:
>>>>
>>>> - an e-mail address ( useful for spamming )
>>>> - a public key, which can be used to authenticate across sites
>>>>
>>>>
>>>>> As stated earlier, as long as you trust the browser, BrowserID does ameliorate this.
>>>> No it does not improve linkability at all. Certainly not if you think the site you are authenticating to is the one you should be worried about, because just using a public key
>>>> by itself is enough for Linkability in the strict (paranoid) sense. That is if you
>>>> consider the site you are logging into to as the attacker, then by giving two sites
>>>> a public key where you have proven you control the private key is enough for them to know that
>>>> the same agent visited both sites. That is because the cert:key relation is inverse functional.
>>>>
>>>> So in simple logical terms if you go to site1.org and identify with a public key pk,
>>>> and they create a local identifier for you <http://site1.org/u123>, and then you go site s2.net and identify with the same public key pk and they give you an identifier <http://s2.net/lsdfs>
>>>> (these need not be public) and then they exchange their information, then each of the sites would have the following relations ( written in http://www.w3.org/TR/Turtle )
>>>>
>>>> @prefix cert: <http://www.w3.org/ns/auth/cert#>
>>>>
>>>> <http://site1.org/u123> cert:key pk .
>>>> <http://s2.net/lsdfs> cert:key pk .
>>>>
>>>> because cert:key is defined as an InverseFunctionalProperty
>>>> ( as you can see by going http://www.w3.org/ns/auth/cert#key )
>>>>
>>>> Then it follows from simple owl reasoning that
>>>>
>>>> <http://site1.org/u123> == <http://s2.net/lsdfs> .
>>>>
>>>> One cannot get much simpler logical reasoning that this, Harry.
>>>>
>>>>
>>>>> There is also this rather odd conflation of "linkability" of URIs with hypertext and URI-enabled Semantic Web data" and linkability as a privacy concern.
>>>> I am not conflating these.
>>> To quote the IETF document I seem to have unsuccessfully suggested you read a while back, the linkability of two or more Items Of Interest (e.g., subjects, messages, actions, ...) from an attacker's perspective means that within a particular set of information, the attacker can distinguish whether these IOIs are related or not (with a high enough degree of probability to be useful) [1]. If you "like linkability", that's great, but probably many use-cases aren't built around liking linkability.
>>
>> The use of e-mail addresses as the primary identifier of BrowserId is defended for exactly the reason that web sites want to be able to communicate back with the user. It is a core part of the BrowserId marketing spiel. So linkability is core to BrowserID in that respect, and it is a core use case.
>>
>> But the problem here is that one cannot speak of linkability full stop. One has to bring some further elements into consideration.
>>
>> The definition from the draft-hansen-privacy-terminology-03 that you quote suggests that linkability is relative to an agent, call him 'A'. It is imagined that A has attackers, and so at least it is logically possible that A have friends too.
>>
>> Communicating with friends is about building links, indeed this is what building communities is about. So building a social web is about building links in a distributed decentralised manner. We want to both increase linkages between people and increase their autonomy.
>>
>> From this it follows that A when communicating has to consider two groups of people
>> 1- friends: those people with whom A wishes to increase linkages with
>> 2- enemies: those with whom A wishes to avoid linkages leaking to - the attacker as per draft-hansen-privacy-terminology-03
>>
>> This is a bit rough of a distinction but I think it makes clear that you cannot just talk about linkability being good or bad without taking into considerations what the communication is about - with whom someone is communicating - and what the social network of the person is about - who his friends and enemies are.
>>
>>> This has very little with hypertext linking of web-pages via URIs.
>>
>> Well that is why above my argument was based in terms of public keys not URIs. But I could also have made it in terms of e-mail address for BrowserID. Here let me do it for you.
>>
>> Imagine you go to two web sites with BrowserID: site1.org and s2.net . Each captures your e-mail address. They then exchange information ( and they trust each other too - that is an important point btw. ) Each site then has the following graph in store
>>
>> @prefix foaf: <http://xmlns.com/foaf/0.1/> .
>>
>> <http://site1.org/u123> foaf:mbox <mailto:***@ed.ac.uk> .
>> <http://s2.net/lsdfs> foaf:mbox <mailto:***@ed.ac.uk>.
>>
>> From which they can deduce because since foaf:mbox is an owl:InverseFunctionalProperty
>> ( just look up http://xmlns.com/foaf/0.1/mbox ) that
>>
>> <http://site1.org/u123> owl:sameAs <http://s2.net/lsdfs> .
>>
>> There you go: linkability via e-mail addresses.
>>
>>> I think you want to use the term "trust across different sites" rather than linkability, although I see how WebID wants to conflate that with trust, which no other identity solution does. A link does not necessarily mean trust, especially if links aren't bi-directional.
>>
>> There are many different types of links, some indicate trust, some don't. One can also have the equivalent of bidirectional links. A has a document where he points to B as a friend, and B returns the favour by placing a link from his document to A.
>>
>>>
>>> As explained earlier, Mozilla Personae/BrowserID uses digital signatures where an IDP signs claims but transfers that claim to the RP via the browser (thus the notion of "different information flow") and thus the RP and IDP do not directly communicate, reducing the linkability of the data easily gathered by the IDP (not the RP).
>>
>> As I prooved above, BrowserID by using Public Keys and by using e-mail identifiers furthermore, is giving a linkable identity to sites people use to log into. So you don't get away from the paranoid view of linkability being a problem, without getting any major interesting gain.
>>
>>> I know WebID folks believe IDP = my homepage, but for most people IDP would likely not be a homepage, but a major identity provider for which data minimization principles should apply, including ownership of the social network data of an individual and a history of their interactions with every RP.
>>
>> The point of this e-mail was to show that the type of linkability WebID provides is here in
>> order to make it possible for people to choose not to inhabit such a future.
>>
>>> I am not defending BrowerID per se: Personae assumes you trust the browser, which some people don't. Also, email verification, while common, is not great from a security perspective, i.e. STARTLS not giving error messages when it degrades.
>>
>> BrowserId will be an interesting tool in the future, when cryptography in the browser is available. Until then it has a strong centralisation focus. WebID delivers what BrowserId
>> promises to do, but now.
>>
>> Anyway, you need to be more careful about talk of linkability, since to talk of linkability as good or bad without taking the context of who is talking to whome, who is in the role of the enemy etc, makes no sense.
>>
>>>
>>> Perhaps a more productive question would be why would someone use WebID rather than OpenID Connect with digital signatures?
>>
>> that can be a discussion for another day perhaps.
>
> Not disagreeing with any of the above, but observing that:
>
> a) There's no particular reason you could not have an email per site
> as well as a key per site.
>
> b) Linkability is not, as you say, inherently bad. The problem occurs
> when you have (effectively) no choice about linkability.

Yes. We're agreeing here :-)

Just to expand on our agreement:

a) You can have an e-mail per site, indeed: nothing disallows that. But of course that is not the core use case for BrowserId.

Btw, you can have a profile page per web site too. Any web site that uses a cookie for a user could create a web page for that connection where it publishes what it knows about that user. So for a user's connection identified by cookie ck123, site1.org could create an access-controlled web page where it publishes what it knows about ck123. This page could be named anything, but let's say it chooses
https://site1.org/cki/ck123
It could publish there:

<#u> foaf:mbox <mailto:***@ed.ac.uk> .

So that site could be either well intentioned or not. It could be sharing that information with co-conspirators, hand it over to the FBI whenever asked (as per US law), or take a stand the way Nicholas Merrill did when he fought an FBI order nearly all the way to the Supreme Court (see his Chaos Communication Congress talk "The Importance of Resisting Excessive Government Surveillance" http://www.youtube.com/watch?v=sDkHPNbCC1M ).
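To make the linking step concrete, here is a minimal sketch in Python with rdflib (the two local identifiers and the mailbox are purely illustrative): two sites pool the graphs they hold and, because foaf:mbox is declared an owl:InverseFunctionalProperty, conclude that their two local identifiers denote one and the same agent.

from collections import defaultdict
from rdflib import Graph, Namespace

FOAF = Namespace("http://xmlns.com/foaf/0.1/")

site1_data = """
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
<http://site1.org/u123> foaf:mbox <mailto:user@example.org> .
"""
site2_data = """
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
<http://s2.net/lsdfs> foaf:mbox <mailto:user@example.org> .
"""

# Pool the two sites' graphs, as if they had exchanged their data.
pooled = Graph()
pooled.parse(data=site1_data, format="turtle")
pooled.parse(data=site2_data, format="turtle")

# Poor man's IFP reasoning: foaf:mbox is inverse functional,
# so one mailbox shared by two subjects means one agent.
agents_by_mbox = defaultdict(set)
for agent, mbox in pooled.subject_objects(FOAF.mbox):
    agents_by_mbox[mbox].add(agent)

for mbox, agents in agents_by_mbox.items():
    if len(agents) > 1:
        print(mbox, "links:", ", ".join(sorted(agents)))

Printing the shared mailbox together with both local identifiers is all that the owl:sameAs deduction quoted above amounts to.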

So above I have Harry use BrowserId to log into that site. But most users who log in to a site, even using other technologies, will pretty soon give out enough information to be linkable, since only very few pieces of information are needed to reach a statistically high enough correlation to establish linkable relations. As a result one would be making one's life extremely difficult for very little actual security gain. The difficulty of making connections across sites is then a good reason for 500 million people to go use one central service provider that only requires one login.

We have something akin to air in a balloon: you press in one place and the air just moves elsewhere. So you create a completely unlinkable security architecture, and all that happens is people move to a centralised system.

b) Yes. We don't want people to have no choice about linkability. My view is that to decrease sur-veillance we need to help people move to technologies of sous-veillance, by distributing information back to the nodes a lot more. That requires us to have better linkability technologies... oddly enough. :-)

>
>>>
>>> Although, I have ran out of time for this for the time being.
>>>
>>>>
>>>> My point from the beginning is that Linkability is both a good thing and a bad thing.
>>>>
>>>> As a defender of BrowserId you cannot consistently attack WebID for linkability concerns and find BrowserId not to have that same problem. So I hate to reveal this truth to you: but we have to fight this battle together.
>>>>
>>>> And the battle is simple: the linkability issue is only an issue if you think the site you
>>>> are authenticating to is the enemy. If you believe that you are in relation with a site that
>>>> is under a legal and moral duty to be respectful of the communication you are having with it,
>>>> then you will find that the linkability of information with that site and across sites is exactly what you want in order to reduce privacy issues that arise out of centralised systems.
>>>>
>>>>> I do think many people agree stronger cryptographic credentials for authentication are a good thing, and BrowserID is based on this and OpenID Connect has (albeit not often used) options in this space. I would again, please suggest that the WebID community take on board comments in a polite manner and not cc mailing lists.
>>>> All my communications have been polite, and I don't know why you select out the WebID community.
>>>> As for taking on board comments, why, just the previous e-mail you responded to was a demonstration that we are: CN=WebID,O=∅
>>>>
>>>>
>>>>
>>>>>>
>>>>>>
>>>> Social Web Architect
>>>> http://bblfish.net/
>>>>
>>>
>>
>> Social Web Architect
>> http://bblfish.net/
>>
>>
>> _______________________________________________
>> saag mailing list
>> ***@ietf.org
>> https://www.ietf.org/mailman/listinfo/saag
>>

Social Web Architect
http://bblfish.net/
Nathan
2012-10-23 09:56:21 UTC
Permalink
Ben Laurie wrote:
> b) Linkability is not, as you say, inherently bad. The problem occurs
> when you have (effectively) no choice about linkability.

.. and when people convey or imply that there is no choice about linkability, when there really is scope to be as unlinkable as one likes within WebID.

Quite convinced now that the confusion is just differing objectives and
opinions, and nothing technical.

You can have one or more WebIDs to cover any combination of one or more
requests to one or more resources. Be as linkable or unlinkable as you like.

On the other hand, WebID the idea (rather than the technical protocol) was created within a context where linkability is desired; indeed its creation was to enable and promote increased linkability - so applying it to situations where unlinkability is desired goes against the grain, or clashes with individuals' general mental model of it.

In its simplest form, WebID is just a way to establish an identifier for an agent, layered on to the usual client cert auth. This allows:
- WebID to be used anywhere HTTP+TLS can be used
- Crucially, identifiers to be used that refer to resources anywhere on the web, which can be dereferenced in order to find out more about the agent identified - without relying on fixed API features, multiple protocols and layers, out-of-band knowledge, or the limited functionality of non-dereferenceable identifiers. (A rough sketch of the verification step this enables follows below.)
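As an illustration of that flow (a sketch, not a reference implementation: it assumes an RSA client certificate, a profile document served as Turtle, and the rdflib and pyca/cryptography libraries), the verification step boils down to reading the SubjectAlternativeName URI out of the client certificate, dereferencing it, and checking that the profile lists the certificate's public key.

from rdflib import Graph, Namespace, URIRef
from cryptography import x509
from cryptography.x509.oid import ExtensionOID

CERT = Namespace("http://www.w3.org/ns/auth/cert#")

def claimed_webid_and_key(cert_der):
    """Return the SAN URI (the claimed WebID) plus the RSA public numbers."""
    cert = x509.load_der_x509_certificate(cert_der)  # recent pyca/cryptography
    san = cert.extensions.get_extension_for_oid(
        ExtensionOID.SUBJECT_ALTERNATIVE_NAME).value
    webid = san.get_values_for_type(x509.UniformResourceIdentifier)[0]
    pub = cert.public_key().public_numbers()
    return webid, pub.n, pub.e

def verify_webid(cert_der):
    """Dereference the WebID and check that the profile lists this public key."""
    webid, modulus, exponent = claimed_webid_and_key(cert_der)
    profile = Graph().parse(webid)  # fetch and parse the profile document
    for key in profile.objects(URIRef(webid), CERT.key):
        mod = profile.value(key, CERT.modulus)
        exp = profile.value(key, CERT.exponent)
        if mod is not None and exp is not None \
                and int(mod, 16) == modulus and int(exp) == exponent:
            return webid  # authenticated as the agent the WebID denotes
    return None

Everything after that - ACLs, groups, what the authenticated agent may do - is ordinary access control applied to the returned identifier.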

So if wikileaks want to generate a cert with an identifier only they can view, and which is completely unlinkable, for one-time use, they can. If a bank wants to issue a series of certs to a client which carry some stable identifier for that client, it can. If facebook want to issue certs whose identifiers deref to a machine/human readable version of the user's profile, and allow people to use their facebook id on any site, they can. If a single person wants to handle their own identity and profile, they can. If services like AWS want to issue keys to machine agents, they can. And critically, they'd all be technically interoperable, with any limits on which identifiers and keys can be used where, and on which information is visible, added on by ACLs and usage restrictions.

It's quite simple really: client cert auth over TLS is well established, and HTTP(S) URIs allow dereferencing to anything on the web, with the possibility of any features you find anywhere on the web.

It seems far more logical and simpler than creating a plethora of custom protocols which rely on layer upon layer of technologies in order to try to make non-dereferenceable identifiers dereferenceable, or to try to provide more information about an identified agent via a suite of API extensions that need to be implemented by all adopters, or to come up with something new which has most of the same negative sides and requires web-scale adoption in order to work everywhere WebID already can.

Best,

Nathan
Henry Story
2012-10-23 10:14:49 UTC
Permalink
On 23 Oct 2012, at 11:56, Nathan <***@webr3.org> wrote:

> Ben Laurie wrote:
>> b) Linkability it not, as you say, inherently bad. The problem occurs
>> when you have (effectively) no choice about linkability.
>
> .. and when people convey or infer that there is no choice about linkability, when there really is scope to be as unlinkable as one likes within WebID.
>
> Quite convinced now that the confusion is just differing objectives and opinions, and nothing technical.
>
> You can have one or more WebIDs to cover any combination of one or more requests to one or more resources. Be as linkable or unlinkable as you like.
>
> On the other hand, WebID the idea (rather than the technical protocol) has been created within a context where linkability is desired, indeed it's creation was to enable and promote increased linkability - so applying it to situations where unlinkability is desired goes against the grain, or clashes with individual's general mental model of it.
>
> In it's simplest form, WebID is just a way to establish an identifier for an agent layered on to the usual client cert auth. This allows:
> - WebID to be used anywhere HTTP+TLS can be used
> - Crucially, identifiers to be used that refer to resources anywhere on the web which can be interacted with in order to find out more about the agent identified. Without relying on fixed API features, multiple protocols and layers, out of band knowledge, or limited functionality by using non dereferencable identifiers.
>
> So if wikileaks want to generate a cert with an identifier only they can view and which is completely unlinkable, for a one time use, they can.

Yes, but then they had better use some other technology such as TLS Origin-Bound Certificates: http://tools.ietf.org/agenda/81/slides/tls-1.pdf
I am not even sure that is a good idea. Strong usernames and passwords may still be a lot better there, together with a strong suggestion that the user remember them by heart.

> If a bank wants to issue a series of certs to a client which has some stable identifier in them for the client, they can. If facebook want to issue certs which have identifiers which deref to a machine/human readable version of the users profile, and allow people to use their facebook id on any site, they can. If a single person wants to handle their own identity and profile, they can. If services like AWS want to issue keys to machine agents, they can. And critically, they'd all be interoperable from a technical view, with limits to which identifiers and keys and as to which information is visible and what can be used where added on by ACL and usage restrictions.
>
> It's quite simple really, client cert auth over TLS is well established, and HTTP(s) URIs allow dereferencing to anything on the web, with the possibility of any features you find anywhere on the web.
>
> Seems far more logical and simpler than creating a plethora of custom protocols which rely on layer upon layer of techs and protocols in order to try and make non dereferencable identifiers dereferencable, or to try and provide more information about an identified agent via a suite of API extensions that need implemented by all adopters, or to come up with something new which has most of the same negative sides, and requires web scale adoption in order to work everywhere WebID already can.
>
> Best,
>
> Nathan

+1 :-)

Social Web Architect
http://bblfish.net/
Ben Laurie
2012-10-23 10:50:14 UTC
Permalink
On 23 October 2012 10:56, Nathan <***@webr3.org> wrote:
> Ben Laurie wrote:
>>
>> b) Linkability it not, as you say, inherently bad. The problem occurs
>> when you have (effectively) no choice about linkability.
>
>
> .. and when people convey or infer that there is no choice about
> linkability, when there really is scope to be as unlinkable as one likes
> within WebID.

I have never disputed that - my point is that if I am as unlinkable as I like, I then have a fairly horrific problem managing a large number of certificates and remembering which one I used where.
Andrei Sambra
2012-10-23 11:46:41 UTC
Permalink
On 10/23/2012 12:50 PM, Ben Laurie wrote:
> On 23 October 2012 10:56, Nathan <nathan-***@public.gmane.org> wrote:
>> Ben Laurie wrote:
>>>
>>> b) Linkability it not, as you say, inherently bad. The problem occurs
>>> when you have (effectively) no choice about linkability.
>>
>>
>> .. and when people convey or infer that there is no choice about
>> linkability, when there really is scope to be as unlinkable as one likes
>> within WebID.
>
> I have never disputed that - my point is that if I am as unlinkable as
> I like I then have a fairly horrific problem managing a large number
> of certificates and remembering which one I used where.
>

Wouldn't you say you have the same problem now with most, if not all, authentication protocols? I still think it's easier to manage 100s of certificates than 100s of user/pass combinations.

If it is a UI issue, then it can be made more intuitive. From what you
say above, the WebID protocol itself is not the problem.

Andrei

P.S. I've been trying to follow this conversation and so far it's been a pain in the @$$. W3C should have a way to separate threads based on relevance to one's interests; otherwise it becomes very hard to be productive when you have to read through so many emails daily.
Ben Laurie
2012-10-23 12:03:58 UTC
Permalink
On 23 October 2012 12:46, Andrei Sambra <***@fcns.eu> wrote:
> On 10/23/2012 12:50 PM, Ben Laurie wrote:
>>
>> On 23 October 2012 10:56, Nathan <***@webr3.org> wrote:
>>>
>>> Ben Laurie wrote:
>>>>
>>>>
>>>> b) Linkability it not, as you say, inherently bad. The problem occurs
>>>> when you have (effectively) no choice about linkability.
>>>
>>>
>>>
>>> .. and when people convey or infer that there is no choice about
>>> linkability, when there really is scope to be as unlinkable as one likes
>>> within WebID.
>>
>>
>> I have never disputed that - my point is that if I am as unlinkable as
>> I like I then have a fairly horrific problem managing a large number
>> of certificates and remembering which one I used where.
>>
>
> Wouldn't you say you have the same problem now with most, if not all
> authentication protocols?

Yes.

> I still think it's easier to manage 100s of
> certificates compared to managing 100s of user/pass combinations.
>
> If it is a UI issue, then it can be made more intuitive. From what you say
> above, the WebID protocol itself is not the problem.

Well. There are certainly protocols that reduce this particular problem, in particular those that use selective disclosure or zero-knowledge techniques to solve the linkability issue without requiring a plethora of keys.

>
> Andrei
>
> P.S. I've been trying to follow this conversation and so far it's been a
> pain in the @$$. W3C should have a way to separate threads based on
> relevance to one's interests, otherwise it becomes very hard to be
> productive when you have to read though so many emails daily.
Henry Story
2012-10-23 11:52:49 UTC
Permalink
On 23 Oct 2012, at 12:50, Ben Laurie <***@google.com> wrote:

> On 23 October 2012 10:56, Nathan <***@webr3.org> wrote:
>> Ben Laurie wrote:
>>>
>>> b) Linkability it not, as you say, inherently bad. The problem occurs
>>> when you have (effectively) no choice about linkability.
>>
>>
>> .. and when people convey or infer that there is no choice about
>> linkability, when there really is scope to be as unlinkable as one likes
>> within WebID.
>
> I have never disputed that - my point is that if I am as unlinkable as
> I like I then have a fairly horrific problem managing a large number
> of certificates and remembering which one I used where.


Yes, so browsers should in my view remember what selection you make when you go to a web site, and resend the same certificate the next time you go there. Mind you - they should also show you that they have done this and allow you to change your previous choice, even, if needed, back to anonymous. We argued this in a different thread on transparency of identity in the browser - and there I pointed to work by Aza Raskin as a good example of what I meant:
http://www.azarask.in/blog/post/identity-in-the-browser-firefox/

This then leaves the issue of how to do this across browsers, and I think there are a number of synchronisation "protocols" that could be developed there. In my view the only protocol needed here is HTTP plus an ontology for bookmarks, cookies, personas, etc... You give your browser your trusted home site where you can POST, PUT, and GET all of these ids; a rough sketch of that idea follows below. A good protocol for this would be the Atom protocol or, better, the Linked Data Platform protocol currently in development:
http://dvcs.w3.org/hg/ldpwg/raw-file/a3be44430b37/ldp.html
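As a minimal sketch of that idea (the home-server URL, the resource path and the tiny vocabulary are all invented for illustration; this is not an existing protocol), the browser could simply PUT a small Turtle record of which identity it used at a site, and another browser could GET it back later:

import urllib.request

record = """
@prefix ex: <http://example.org/browser-sync#> .
<#choice> ex:site <https://site1.org/> ;
          ex:usedIdentity <https://bob.example/profile#me> .
"""

req = urllib.request.Request(
    "https://home.example/identities/site1.org",   # trusted home site
    data=record.encode("utf-8"),
    method="PUT",
    headers={"Content-Type": "text/turtle"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)  # 201/204 if the home server accepted the record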

You probably don't even need to save the certificates for each site; you just need to know whether you authenticated there using a global id, a local certificate, or a password, and you could re-generate the identifiers. Admittedly, you have a more difficult time with certificates bound to one site. And even saving cookies is difficult, because they may encode device type and screen size...

So that's a lot of work to get done right. I don't have anything against it being done. It could even be helpful for WebID... But my priority is building a RESTful distributed social web, and I am not employed by browser vendors to work on such a protocol.... (I'll use it when it's deployed.)

In short these issues seem to be orthogonal, and can be developed in parallel.


Henry

Social Web Architect
http://bblfish.net/
Robin Wilton
2012-10-23 09:58:22 UTC
Permalink
Robin Wilton
Technical Outreach Director - Identity and Privacy
Internet Society

email: ***@isoc.org
Phone: +44 705 005 2931
Twitter: @futureidentity




On 23 Oct 2012, at 09:44, Ben Laurie wrote:

<snip>

>
> Not disagreeing with any of the above, but observing that:
>
> a) There's no particular reason you could not have an email per site
> as well as a key per site.
>
> b) Linkability it not, as you say, inherently bad. The problem occurs
> when you have (effectively) no choice about linkability.
>


But it's very hard to use either of those mechanisms (separation through emails or keys) without giving some third party the ability to achieve total linkability. (In other words, both options remove effective choice).

Yrs.,
Robin
Ben Laurie
2012-10-24 09:30:12 UTC
Permalink
On 23 October 2012 10:58, Robin Wilton <***@isoc.org> wrote:
>
> Robin Wilton
> Technical Outreach Director - Identity and Privacy
> Internet Society
>
> email: ***@isoc.org
> Phone: +44 705 005 2931
> Twitter: @futureidentity
>
>
>
>
> On 23 Oct 2012, at 09:44, Ben Laurie wrote:
>
> <snip>
>
>
> Not disagreeing with any of the above, but observing that:
>
> a) There's no particular reason you could not have an email per site
> as well as a key per site.
>
> b) Linkability it not, as you say, inherently bad. The problem occurs
> when you have (effectively) no choice about linkability.
>
>
>
> But it's very hard to use either of those mechanisms (separation through
> emails or keys) without giving some third party the ability to achieve total
> linkability. (In other words, both options remove effective choice).

I agree that emails are a problem, but I'm not at all sure why keys are. In the case of appropriate selective disclosure mechanisms, even if there were a third party involved, they would not be able to link uses of the keys. Also, if you insist on using linkable keys, then per-site keys do not involve third parties.

On email, this is a soluble problem, but not without using a
completely different delivery mechanism.

>
> Yrs.,
> Robin
>
> _______________________________________________
> saag mailing list
> ***@ietf.org
> https://www.ietf.org/mailman/listinfo/saag
>
Robin Wilton
2012-10-24 11:26:58 UTC
Permalink
Robin Wilton
Technical Outreach Director - Identity and Privacy
Internet Society

email: wilton-***@public.gmane.org
Phone: +44 705 005 2931
Twitter: @futureidentity




On 24 Oct 2012, at 10:30, Ben Laurie wrote:

> On 23 October 2012 10:58, Robin Wilton <wilton-***@public.gmane.org> wrote:
>>
>>
>> On 23 Oct 2012, at 09:44, Ben Laurie wrote:
>>
>> <snip>
>>
>>
>> Not disagreeing with any of the above, but observing that:
>>
>> a) There's no particular reason you could not have an email per site
>> as well as a key per site.
>>
>> b) Linkability it not, as you say, inherently bad. The problem occurs
>> when you have (effectively) no choice about linkability.
>>
>>
>>
>> But it's very hard to use either of those mechanisms (separation through
>> emails or keys) without giving some third party the ability to achieve total
>> linkability. (In other words, both options remove effective choice).
>
> I agree that emails are a problem, but not at all sure why keys are?
> In the case of appropriate selective disclosure mechanisms, even if
> there were a third party involved, they would not be able to link uses
> of the keys. Also, if you insist on using linkable keys, then per-site
> keys do not involve third parties.
>

It may just be that I'm not getting a clear mental picture of your architecture. But here was my thinking:

- If you use symmetric keys, you get a system which can't scale unless you opt for Schneier's idea of a key server… but then the key server becomes a point of potential panopticality.

- If you use PKI, *and* you want your communicating parties to be able to validate the certs they're relying on, then you have to design a CRL- or OCSP-like mechanism into the architecture, and again you end up with a component which is potentially panoptical. (Plus, you have to address the 20-year-old problem of how to make PKI usable by human beings, when recent history suggests that PKI only takes off where human beings are kept well away from it).

R



> On email, this is a soluble problem, but not without using a
> completely different delivery mechanism.



>
>>
>> Yrs.,
>> Robin
>>
>> _______________________________________________
>> saag mailing list
>> saag-***@public.gmane.org
>> https://www.ietf.org/mailman/listinfo/saag
>>
Ben Laurie
2012-10-24 11:32:06 UTC
Permalink
On 24 October 2012 12:26, Robin Wilton <***@isoc.org> wrote:
>
>
>
>
>
>
> Robin Wilton
> Technical Outreach Director - Identity and Privacy
> Internet Society
>
> email: ***@isoc.org
> Phone: +44 705 005 2931
> Twitter: @futureidentity
>
>
>
>
> On 24 Oct 2012, at 10:30, Ben Laurie wrote:
>
> On 23 October 2012 10:58, Robin Wilton <***@isoc.org> wrote:
>
>
>
> On 23 Oct 2012, at 09:44, Ben Laurie wrote:
>
>
> <snip>
>
>
>
> Not disagreeing with any of the above, but observing that:
>
>
> a) There's no particular reason you could not have an email per site
>
> as well as a key per site.
>
>
> b) Linkability it not, as you say, inherently bad. The problem occurs
>
> when you have (effectively) no choice about linkability.
>
>
>
>
> But it's very hard to use either of those mechanisms (separation through
>
> emails or keys) without giving some third party the ability to achieve total
>
> linkability. (In other words, both options remove effective choice).
>
>
> I agree that emails are a problem, but not at all sure why keys are?
> In the case of appropriate selective disclosure mechanisms, even if
> there were a third party involved, they would not be able to link uses
> of the keys. Also, if you insist on using linkable keys, then per-site
> keys do not involve third parties.
>
>
> It may just be that I'm not getting a clear mental picture of your
> architecture. But here was my thinking:
>
> - If you use symmetric keys, you get a system which can't scale unless you
> opt for Schneier's idea of a key server… but then the key server becomes a
> point of potential panopticality.

Symmetric keys obviously don't work.

> - If you use PKI, *and* you want your communicating parties to be able to
> validate the certs they're relying on, then you have to design a CRL- or
> OCSP-like mechanism into the architecture, and again you end up with a
> component which is potentially panoptical. (Plus, you have to address the
> 20-year-old problem of how to make PKI usable by human beings, when recent
> history suggests that PKI only takes off where human beings are kept well
> away from it).

Per-site keys don't really need the I in PKI, just the PK. Revocation
need not be centralised - I am not saying it is trivial, but it is
akin to the problem of forgotten or compromised passwords.

Also, it is possible to blacklist using selective disclosure - i.e.
detect whether a key has been revoked without revealing the key.
Nathan
2012-10-24 12:03:57 UTC
Permalink
Robin Wilton wrote:
>
>
>
>
>
> Robin Wilton
> Technical Outreach Director - Identity and Privacy
> Internet Society
>
> email: wilton-***@public.gmane.org
> Phone: +44 705 005 2931
> Twitter: @futureidentity
>
>
>
>
> On 24 Oct 2012, at 10:30, Ben Laurie wrote:
>
>> On 23 October 2012 10:58, Robin Wilton <wilton-***@public.gmane.org> wrote:
>>>
>>> On 23 Oct 2012, at 09:44, Ben Laurie wrote:
>>>
>>> <snip>
>>>
>>>
>>> Not disagreeing with any of the above, but observing that:
>>>
>>> a) There's no particular reason you could not have an email per site
>>> as well as a key per site.
>>>
>>> b) Linkability it not, as you say, inherently bad. The problem occurs
>>> when you have (effectively) no choice about linkability.
>>>
>>>
>>>
>>> But it's very hard to use either of those mechanisms (separation through
>>> emails or keys) without giving some third party the ability to achieve total
>>> linkability. (In other words, both options remove effective choice).
>> I agree that emails are a problem, but not at all sure why keys are?
>> In the case of appropriate selective disclosure mechanisms, even if
>> there were a third party involved, they would not be able to link uses
>> of the keys. Also, if you insist on using linkable keys, then per-site
>> keys do not involve third parties.
>>
>
> It may just be that I'm not getting a clear mental picture of your architecture. But here was my thinking:
>
> - If you use symmetric keys, you get a system which can't scale unless you opt for Schneier's idea of a key server… but then the key server becomes a point of potential panopticality.
>
> - If you use PKI, *and* you want your communicating parties to be able to validate the certs they're relying on, then you have to design a CRL- or OCSP-like mechanism into the architecture, and again you end up with a component which is potentially panoptical. (Plus, you have to address the 20-year-old problem of how to make PKI usable by human beings, when recent history suggests that PKI only takes off where human beings are kept well away from it).

CRL is pretty much built in to WebID: if you remove a public key from the document pointed to by your uri-identifier, then it's no longer valid for use in WebID - auth can't happen, rendering the cert useless for WebID. (A small sketch of that revocation step follows below.)
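A hedged sketch of what that amounts to in practice (the profile path and the WebID are illustrative; rdflib is used only for the editing): delete the cert:key triples for the key from the profile document and republish it, after which the verification step fails for any certificate carrying that key.

from rdflib import Graph, Namespace, URIRef

CERT = Namespace("http://www.w3.org/ns/auth/cert#")
me = URIRef("https://example.org/profile#me")  # illustrative WebID

profile = Graph().parse("profile.ttl", format="turtle")
for key in list(profile.objects(me, CERT.key)):
    # A real revocation would match only the compromised modulus;
    # here every key is dropped for brevity.
    profile.remove((me, CERT.key, key))   # unlink the key from the agent
    profile.remove((key, None, None))     # drop its modulus and exponent
profile.serialize(destination="profile.ttl", format="turtle")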
Kingsley Idehen
2012-10-24 14:03:09 UTC
Permalink
On 10/24/12 8:03 AM, Nathan wrote:
> Robin Wilton wrote:
>>
>>
>>
>>
>>
>> Robin Wilton
>> Technical Outreach Director - Identity and Privacy
>> Internet Society
>>
>> email: wilton-***@public.gmane.org
>> Phone: +44 705 005 2931
>> Twitter: @futureidentity
>>
>>
>>
>>
>> On 24 Oct 2012, at 10:30, Ben Laurie wrote:
>>
>>> On 23 October 2012 10:58, Robin Wilton <wilton-***@public.gmane.org> wrote:
>>>>
>>>> On 23 Oct 2012, at 09:44, Ben Laurie wrote:
>>>>
>>>> <snip>
>>>>
>>>>
>>>> Not disagreeing with any of the above, but observing that:
>>>>
>>>> a) There's no particular reason you could not have an email per site
>>>> as well as a key per site.
>>>>
>>>> b) Linkability it not, as you say, inherently bad. The problem occurs
>>>> when you have (effectively) no choice about linkability.
>>>>
>>>>
>>>>
>>>> But it's very hard to use either of those mechanisms (separation
>>>> through
>>>> emails or keys) without giving some third party the ability to
>>>> achieve total
>>>> linkability. (In other words, both options remove effective choice).
>>> I agree that emails are a problem, but not at all sure why keys are?
>>> In the case of appropriate selective disclosure mechanisms, even if
>>> there were a third party involved, they would not be able to link uses
>>> of the keys. Also, if you insist on using linkable keys, then per-site
>>> keys do not involve third parties.
>>>
>>
>> It may just be that I'm not getting a clear mental picture of your
>> architecture. But here was my thinking:
>> - If you use symmetric keys, you get a system which can't scale
>> unless you opt for Schneier's idea of a key server… but then the key
>> server becomes a point of potential panopticality.
>>
>> - If you use PKI, *and* you want your communicating parties to be
>> able to validate the certs they're relying on, then you have to
>> design a CRL- or OCSP-like mechanism into the architecture, and again
>> you end up with a component which is potentially panoptical. (Plus,
>> you have to address the 20-year-old problem of how to make PKI usable
>> by human beings, when recent history suggests that PKI only takes off
>> where human beings are kept well away from it).
>
> CRL is pretty much built in to WebID, if you remove a public key from
> the document pointed to by your uri-identifier, then it's no longer
> valid for use in WebID - auth can't happen, rendering the cert useless
> for WebID.
>
>
>

For the sake of clarity, Nathan is speaking about the WebID authentication
protocol.

A WebID on its own refers to a resolvable (de-referenceable) identifier.
The WebID protocol verifies the aforementioned identifier (an entity
denotation mechanism) via a combination of cryptography and logic
oriented around entity-relationship semantics.

--

Regards,

Kingsley Idehen
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca handle: @kidehen
Google+ Profile: https://plus.google.com/112399767740508618350/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Melvin Carvalho
2012-10-24 14:24:26 UTC
Permalink
On 24 October 2012 16:03, Kingsley Idehen <kidehen-HpHEqLDO2a7UEDaH6ef/***@public.gmane.org> wrote:

> On 10/24/12 8:03 AM, Nathan wrote:
>
>> Robin Wilton wrote:
>>
>>>
>>>
>>>
>>>
>>>
>>> Robin Wilton
>>> Technical Outreach Director - Identity and Privacy
>>> Internet Society
>>>
>>> email: wilton-***@public.gmane.org
>>> Phone: +44 705 005 2931
>>> Twitter: @futureidentity
>>>
>>>
>>>
>>>
>>> On 24 Oct 2012, at 10:30, Ben Laurie wrote:
>>>
>>> On 23 October 2012 10:58, Robin Wilton <wilton-***@public.gmane.org> wrote:
>>>>
>>>>>
>>>>> On 23 Oct 2012, at 09:44, Ben Laurie wrote:
>>>>>
>>>>> <snip>
>>>>>
>>>>>
>>>>> Not disagreeing with any of the above, but observing that:
>>>>>
>>>>> a) There's no particular reason you could not have an email per site
>>>>> as well as a key per site.
>>>>>
>>>>> b) Linkability it not, as you say, inherently bad. The problem occurs
>>>>> when you have (effectively) no choice about linkability.
>>>>>
>>>>>
>>>>>
>>>>> But it's very hard to use either of those mechanisms (separation
>>>>> through
>>>>> emails or keys) without giving some third party the ability to achieve
>>>>> total
>>>>> linkability. (In other words, both options remove effective choice).
>>>>>
>>>> I agree that emails are a problem, but not at all sure why keys are?
>>>> In the case of appropriate selective disclosure mechanisms, even if
>>>> there were a third party involved, they would not be able to link uses
>>>> of the keys. Also, if you insist on using linkable keys, then per-site
>>>> keys do not involve third parties.
>>>>
>>>>
>>> It may just be that I'm not getting a clear mental picture of your
>>> architecture. But here was my thinking:
>>> - If you use symmetric keys, you get a system which can't scale unless
>>> you opt for Schneier's idea of a key server… but then the key server
>>> becomes a point of potential panopticality.
>>>
>>> - If you use PKI, *and* you want your communicating parties to be able
>>> to validate the certs they're relying on, then you have to design a CRL- or
>>> OCSP-like mechanism into the architecture, and again you end up with a
>>> component which is potentially panoptical. (Plus, you have to address the
>>> 20-year-old problem of how to make PKI usable by human beings, when recent
>>> history suggests that PKI only takes off where human beings are kept well
>>> away from it).
>>>
>>
>> CRL is pretty much built in to WebID, if you remove a public key from the
>> document pointed to by your uri-identifier, then it's no longer valid for
>> use in WebID - auth can't happen, rendering the cert useless for WebID.
>>
>>
>>
>>
> For sake of clarity, Nathan is speaking about the WebID authentication
> protocol.
>
> A WebID on its own refers to an resolvable (de-referencable) identifier.
> The WebID protocol verifies the aforementioned identifier (entity
> denotation mechanism) via a combination of cryptography and entity
> relationship semantics oriented logic.


Kingsley, thanks for pointing this out.

I think some of the confusion arises from the fact that a WebID is
sometimes not that clearly defined, and people focus on the protocol.

In particular, a WebID is a URI that references an Agent (human or machine).

Similarly, email addresses will become WebIDs via the webfinger spec (when
that's complete).

It can be argued that OAuth/OpenID identifiers are also WebIDs, but with a
different auth protocol.

Mozilla Persona, although certainly useful, would possibly not fit into the
same category, as it uses a proprietary identification system.

The whole idea is that WebID brings things together at an architectural
level. "The WebID Protocol", certs, and X.509 are really implementation
details.


>
>
> --
>
> Regards,
>
> Kingsley Idehen
> Founder & CEO
> OpenLink Software
> Company Web: http://www.openlinksw.com
> Personal Weblog: http://www.openlinksw.com/**blog/~kidehen<http://www.openlinksw.com/blog/~kidehen>
> Twitter/Identi.ca handle: @kidehen
> Google+ Profile: https://plus.google.com/**112399767740508618350/about<https://plus.google.com/112399767740508618350/about>
> LinkedIn Profile: http://www.linkedin.com/in/**kidehen<http://www.linkedin.com/in/kidehen>
>
>
>
>
>
>
Henry Story
2012-10-24 15:00:21 UTC
Permalink
On 24 Oct 2012, at 16:24, Melvin Carvalho <***@gmail.com> wrote:

>
>
> On 24 October 2012 16:03, Kingsley Idehen <***@openlinksw.com> wrote:
> On 10/24/12 8:03 AM, Nathan wrote:
> Robin Wilton wrote:
>
>
>
>
> On 24 Oct 2012, at 10:30, Ben Laurie wrote:
>
> On 23 October 2012 10:58, Robin Wilton <***@isoc.org> wrote:
>
> On 23 Oct 2012, at 09:44, Ben Laurie wrote:
>
> <snip>
>
>
> Not disagreeing with any of the above, but observing that:
>
> a) There's no particular reason you could not have an email per site
> as well as a key per site.
>
> b) Linkability it not, as you say, inherently bad. The problem occurs
> when you have (effectively) no choice about linkability.
>
>
>
> But it's very hard to use either of those mechanisms (separation through
> emails or keys) without giving some third party the ability to achieve total
> linkability. (In other words, both options remove effective choice).
> I agree that emails are a problem, but not at all sure why keys are?
> In the case of appropriate selective disclosure mechanisms, even if
> there were a third party involved, they would not be able to link uses
> of the keys. Also, if you insist on using linkable keys, then per-site
> keys do not involve third parties.
>
>
> It may just be that I'm not getting a clear mental picture of your architecture. But here was my thinking:
> - If you use symmetric keys, you get a system which can't scale unless you opt for Schneier's idea of a key server… but then the key server becomes a point of potential panopticality.
>
> - If you use PKI, *and* you want your communicating parties to be able to validate the certs they're relying on, then you have to design a CRL- or OCSP-like mechanism into the architecture, and again you end up with a component which is potentially panoptical. (Plus, you have to address the 20-year-old problem of how to make PKI usable by human beings, when recent history suggests that PKI only takes off where human beings are kept well away from it).
>
> CRL is pretty much built in to WebID, if you remove a public key from the document pointed to by your uri-identifier, then it's no longer valid for use in WebID - auth can't happen, rendering the cert useless for WebID.

+1 Nathan.

( btw. It is always good to point people to the spec too http://webid.info/spec is the short url for it. )

>
>
>
>
> For sake of clarity, Nathan is speaking about the WebID authentication protocol.
>
> A WebID on its own refers to an resolvable (de-referencable) identifier. The WebID protocol verifies the aforementioned identifier (entity denotation mechanism) via a combination of cryptography and entity relationship semantics oriented logic.
>
> Kingsley, thanks for pointing this out.
>
> I think some of the confusion arises from the fact that a webid is sometimes not that clearly defined, and people focus on the protocol.

The spec has a definition that seems pretty reasonable (though I think we should remove the reference to "intentions"):

http://www.w3.org/2005/Incubator/webid/spec/#terminology

WebID
A URI that refers to an Agent - Person, Robot, Group or other thing that can have Intentions. The WebID should be a URI which when dereferenced returns a representation whose description uniquely identifies the Agent who is the controller of a public key. In our example the WebID refers to Bob. A WebID is usually a URL with a #tag, as the meaning of such a URL is defined in the document referred to by the WebID URL without the #tag.


>
> In particular a WebID is a URI that references an Agent (human or machine)
>
> Similarly, email will become WebIDs using the webfinger spec (when that's complete)
>
> It can be argued that OAuth/OpenID identifiers are also WebID but with a different auth protocol.
>
> Mozilla persona, although certainly useful, would possibly not fit into the same category, as they use a proprietary identification system.
>
> The whole idea is that WebID brings things together at an architectural level. "The WebID Protocol", certs, X.509 are implementation details really.

I would not say just implementation details. From the point of view of the philosophical theory of reference (e.g. Gareth Evans's book "The Varieties of Reference") they may be, but in everyday usage how these things are implemented is quite important, and so are the distinctions.

According to the WebID spec definition:
- OpenID is close [1], though it uses a URL for a web page rather than for an agent (not a big deal); more importantly, it doesn't make use of the URL to get the attributes, which is what WebID does. OpenID profiles certainly don't publish the public key.
- webfinger does indeed give a method to dereference a mailto: uri, which could be used for a WebID protocol.
- I don't think OAuth works with URIs at all.
- Mozilla Persona could use WebIDs [2], and it would improve their protocol so dramatically that it is evident they will at some point.

Henry

[1] https://blogs.oracle.com/bblfish/entry/what_does_foaf_ssl_give
[2] http://security.stackexchange.com/questions/5406/what-are-the-main-advantages-and-disadvantages-of-webid-compared-to-browserid

>
>
>
> --
>
> Regards,
>
> Kingsley Idehen
> Founder & CEO
> OpenLink Software
> Company Web: http://www.openlinksw.com
> Personal Weblog: http://www.openlinksw.com/blog/~kidehen
> Twitter/Identi.ca handle: @kidehen
> Google+ Profile: https://plus.google.com/112399767740508618350/about
> LinkedIn Profile: http://www.linkedin.com/in/kidehen
>
>
>
>
>
>

Social Web Architect
http://bblfish.net/
Melvin Carvalho
2012-10-24 15:09:17 UTC
Permalink
On 24 October 2012 17:00, Henry Story <henry.story-***@public.gmane.org> wrote:

>
> On 24 Oct 2012, at 16:24, Melvin Carvalho <melvincarvalho-***@public.gmane.org>
> wrote:
>
>
>
> On 24 October 2012 16:03, Kingsley Idehen <kidehen-HpHEqLDO2a7UEDaH6ef/***@public.gmane.org> wrote:
>
>> On 10/24/12 8:03 AM, Nathan wrote:
>>
>>> Robin Wilton wrote:
>>>
>>>>
>>>>
>>>>
>>>>
>>>> On 24 Oct 2012, at 10:30, Ben Laurie wrote:
>>>>
>>>> On 23 October 2012 10:58, Robin Wilton <wilton-***@public.gmane.org> wrote:
>>>>>
>>>>>>
>>>>>> On 23 Oct 2012, at 09:44, Ben Laurie wrote:
>>>>>>
>>>>>> <snip>
>>>>>>
>>>>>>
>>>>>> Not disagreeing with any of the above, but observing that:
>>>>>>
>>>>>> a) There's no particular reason you could not have an email per site
>>>>>> as well as a key per site.
>>>>>>
>>>>>> b) Linkability it not, as you say, inherently bad. The problem occurs
>>>>>> when you have (effectively) no choice about linkability.
>>>>>>
>>>>>>
>>>>>>
>>>>>> But it's very hard to use either of those mechanisms (separation
>>>>>> through
>>>>>> emails or keys) without giving some third party the ability to
>>>>>> achieve total
>>>>>> linkability. (In other words, both options remove effective choice).
>>>>>>
>>>>> I agree that emails are a problem, but not at all sure why keys are?
>>>>> In the case of appropriate selective disclosure mechanisms, even if
>>>>> there were a third party involved, they would not be able to link uses
>>>>> of the keys. Also, if you insist on using linkable keys, then per-site
>>>>> keys do not involve third parties.
>>>>>
>>>>>
>>>> It may just be that I'm not getting a clear mental picture of your
>>>> architecture. But here was my thinking:
>>>> - If you use symmetric keys, you get a system which can't scale unless
>>>> you opt for Schneier's idea of a key server… but then the key server
>>>> becomes a point of potential panopticality.
>>>>
>>>> - If you use PKI, *and* you want your communicating parties to be able
>>>> to validate the certs they're relying on, then you have to design a CRL- or
>>>> OCSP-like mechanism into the architecture, and again you end up with a
>>>> component which is potentially panoptical. (Plus, you have to address the
>>>> 20-year-old problem of how to make PKI usable by human beings, when recent
>>>> history suggests that PKI only takes off where human beings are kept well
>>>> away from it).
>>>>
>>>
>>> CRL is pretty much built in to WebID, if you remove a public key from
>>> the document pointed to by your uri-identifier, then it's no longer valid
>>> for use in WebID - auth can't happen, rendering the cert useless for WebID.
>>>
>>
> +1 Nathan.
>
> ( btw. It is always good to point people to the spec too
> http://webid.info/spec is the short url for it. )
>
>
>>>
>>>
>>>
>> For sake of clarity, Nathan is speaking about the WebID authentication
>> protocol.
>>
>> A WebID on its own refers to an resolvable (de-referencable) identifier.
>> The WebID protocol verifies the aforementioned identifier (entity
>> denotation mechanism) via a combination of cryptography and entity
>> relationship semantics oriented logic.
>
>
> Kingsley, thanks for pointing this out.
>
> I think some of the confusion arises from the fact that a webid is
> sometimes not that clearly defined, and people focus on the protocol.
>
>
> The spec has a definition that seems pretty reasonable, ( though I think
> we should remove the reference to "intentions" )
>
> http://www.w3.org/2005/Incubator/webid/spec/#terminology
>
> WebIDA URI that refers to an Agent - Person, Robot, Group or other thing
> that can have Intentions. The WebID should be a URI which when dereferenced
> returns a representation whose description uniquely identifies the Agent
> who is the controller of a public key. In our example the WebID refers to
> Bob<https://dvcs.w3.org/hg/WebID/raw-file/tip/spec/index-respec.html#dfn-bob>.
> A WebID is usually a URL with a #tag, as the meaning of such a URL is
> defined in the document refered to by the WebID URL without the #tag .
>
>
>
> In particular a WebID is a URI that references an Agent (human or machine)
>
> Similarly, email will become WebIDs using the webfinger spec (when that's
> complete)
>
> It can be argued that OAuth/OpenID identifiers are also WebID but with a
> different auth protocol.
>
> Mozilla persona, although certainly useful, would possibly not fit into
> the same category, as they use a proprietary identification system.
>
> The whole idea is that WebID brings things together at an architectural
> level. "The WebID Protocol", certs, X.509 are implementation details
> really.
>
>
> I would not say just implementation details. From the philosophical theory
> of reference
> ( eg: Gareth Evans's book: The variety of Reference ) they may be, but in
> everyday usage
> how these things are implemented is quite important, and so are the
> distinctions.
>
> According to the WebID spec definition:
> - OpenID is close [1] though they have a URL for a web page rather than
> an agent (not a big deal), but more importantly they don't make use of the
> URL to get the attributes, which is what WebID does. They certainly don't
> publish the public key in the OpenId profile.
> - webfinger does indeed give a method to dereference a mailto: uri,
> which could be used for a WebID protocol.
>

the current draft of webfinger allows dereferencing a mailto: URI ... in
fact it is anyURI


> - I don't think OAuth is working with URIs at all
> - Mozilla Persona could use WebIDs [2] and it would improve their
> protocol so dramatically, it is evident that they will at some point.
>
> Henry
>
> [1] https://blogs.oracle.com/bblfish/entry/what_does_foaf_ssl_give
> [2]
> http://security.stackexchange.com/questions/5406/what-are-the-main-advantages-and-disadvantages-of-webid-compared-to-browserid
>
>
>
>>
>>
>> --
>>
>> Regards,
>>
>> Kingsley Idehen
>> Founder & CEO
>> OpenLink Software
>> Company Web: http://www.openlinksw.com
>> Personal Weblog: http://www.openlinksw.com/**blog/~kidehen<http://www.openlinksw.com/blog/~kidehen>
>> Twitter/Identi.ca handle: @kidehen
>> Google+ Profile: https://plus.google.com/**112399767740508618350/about<https://plus.google.com/112399767740508618350/about>
>> LinkedIn Profile: http://www.linkedin.com/in/**kidehen<http://www.linkedin.com/in/kidehen>
>>
>>
>>
>>
>>
>>
>
> Social Web Architect
> http://bblfish.net/
>
>
Henry Story
2012-10-24 15:20:37 UTC
Permalink
On 24 Oct 2012, at 17:09, Melvin Carvalho <melvincarvalho-***@public.gmane.org> wrote:

>
> - webfinger does indeed give a method to dereference a mailto: uri, which could be used for a WebID protocol.
>
> the current draft of webfinger allows dereferencing a mailto: URI ... in fact it is anyURI

WebFinger is a dereferencing protocol, but not an authentication protocol, and as such it could possibly be used with WebID over TLS.

Henry

Social Web Architect
http://bblfish.net/
Kingsley Idehen
2012-10-24 17:51:07 UTC
Permalink
On 10/24/12 11:20 AM, Henry Story wrote:
>
> On 24 Oct 2012, at 17:09, Melvin Carvalho <melvincarvalho-***@public.gmane.org
> <mailto:melvincarvalho-***@public.gmane.org>> wrote:
>
>>
>> - webfinger does indeed give a method to dereference a mailto:
>> uri, which could be used for a WebID protocol.
>>
>>
>> the current draft of webfinger allows dereferencing a mailto: URI ...
>> in fact it is anyURI
>
> WebFinger is a dereferencing protocol, but not an authentication
> protocol, and as such it could possibly be used with WebID over TLS.
>
> Henry
>
> Social Web Architect
> http://bblfish.net/
>
Henry,

We already use it with acct: or mailto: scheme URIs that serve as WebIDs
in our implementation of the WebID Authentication Protocol (WAP).
Basically, these URIs resolve to entity-attribute-value graphs expressed
in XRD or JRD format, which we then transform into RDF graphs. The
aforementioned transformation adds the entity-relationship semantics
required for WAP conformance.

This simply requires implementers to take responsibility for the
following (a rough sketch follows the list):

1. Webfinger protocol incorporation for URI resolution
2. RDF graph generation from XRD or JRD resources.
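As a rough sketch of that pathway (this is not OpenLink's actual implementation; the JRD handling and the rel-to-predicate mapping are simplifying assumptions), step 1 is a GET on the host's /.well-known/webfinger endpoint and step 2 turns the JRD "links" entries into triples about the acct: subject:

import json
import urllib.parse
import urllib.request
from rdflib import Graph, URIRef

def webfinger_to_graph(acct_uri):
    host = acct_uri.split("@", 1)[1]
    url = ("https://" + host + "/.well-known/webfinger?resource="
           + urllib.parse.quote(acct_uri, safe=""))
    with urllib.request.urlopen(url) as resp:
        jrd = json.load(resp)                    # the JRD descriptor

    g = Graph()
    subject = URIRef(jrd.get("subject", acct_uri))
    for link in jrd.get("links", []):
        # Naive mapping: each rel value becomes a predicate URI.
        # Registered (non-URI) rel names would need a prefix of their own.
        g.add((subject, URIRef(link["rel"]), URIRef(link["href"])))
    return g

# g = webfinger_to_graph("acct:alice@example.org")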

--

Regards,

Kingsley Idehen
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca handle: @kidehen
Google+ Profile: https://plus.google.com/112399767740508618350/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Henry Story
2012-10-24 18:16:42 UTC
Permalink
On 24 Oct 2012, at 19:51, Kingsley Idehen <kidehen-HpHEqLDO2a7UEDaH6ef/***@public.gmane.org> wrote:

> On 10/24/12 11:20 AM, Henry Story wrote:
>>
>> On 24 Oct 2012, at 17:09, Melvin Carvalho <melvincarvalho-***@public.gmane.org> wrote:
>>
>>>
>>> - webfinger does indeed give a method to dereference a mailto: uri, which could be used for a WebID protocol.
>>>
>>> the current draft of webfinger allows dereferencing a mailto: URI ... in fact it is anyURI
>>
>> WebFinger is a dereferencing protocol, but not an authentication protocol, and as such it could possibly be used with WebID over TLS.
>>
>> Henry
>>
>> Social Web Architect
>> http://bblfish.net/
>>
> Henry,
>
> We already use it with acct: or mailto: scheme URIs that serve as WebIDs in our implementation of the WebID Authentication Protocol (WAP). Basically, these URIs resolve to entity-attribute-value graphs expressed in XRD or JRD format which we then transform into RDF graphs. The aforementioned transformation adds the requisite entity relationship semantics required for WAP conformance.
>
> This simply requires implementers to take responsibility for the following:
>
> 1. Webfinger protocol incorporation for URI resolution
> 2. RDF graph generation from XRD or JRD resources .

yes, that is something we can work on adding to the spec. But that is work by itself, and we need
more people to implement it. We are just about getting to the point where people are implementing
the current spec correctly, and the spec itself still needs work.

I think this is something we can discuss a road map for at TPAC.

>
> --
>
> Regards,
>
> Kingsley Idehen
> Founder & CEO
> OpenLink Software
> Company Web: http://www.openlinksw.com
> Personal Weblog: http://www.openlinksw.com/blog/~kidehen
> Twitter/Identi.ca handle: @kidehen
> Google+ Profile: https://plus.google.com/112399767740508618350/about
> LinkedIn Profile: http://www.linkedin.com/in/kidehen
>
>
>
>

Social Web Architect
http://bblfish.net/
Kingsley Idehen
2012-10-24 18:55:29 UTC
Permalink
On 10/24/12 2:16 PM, Henry Story wrote:
>
> On 24 Oct 2012, at 19:51, Kingsley Idehen <kidehen-HpHEqLDO2a7UEDaH6ef/***@public.gmane.org
> <mailto:kidehen-HpHEqLDO2a7UEDaH6ef/***@public.gmane.org>> wrote:
>
>> On 10/24/12 11:20 AM, Henry Story wrote:
>>>
>>> On 24 Oct 2012, at 17:09, Melvin Carvalho <melvincarvalho-***@public.gmane.org
>>> <mailto:melvincarvalho-***@public.gmane.org>> wrote:
>>>
>>>>
>>>> - webfinger does indeed give a method to dereference a mailto:
>>>> uri, which could be used for a WebID protocol.
>>>>
>>>>
>>>> the current draft of webfinger allows dereferencing a mailto: URI
>>>> ... in fact it is anyURI
>>>
>>> WebFinger is a dereferencing protocol, but not an authentication
>>> protocol, and as such it could possibly be used with WebID over TLS.
>>>
>>> Henry
>>>
>>> Social Web Architect
>>> http://bblfish.net/
>>>
>> Henry,
>>
>> We already use it with acct: or mailto: scheme URIs that serve as
>> WebIDs in our implementation of the WebID Authentication Protocol
>> (WAP). Basically, these URIs resolve to entity-attribute-value graphs
>> expressed in XRD or JRD format which we then transform into RDF
>> graphs. The aforementioned transformation adds the requisite entity
>> relationship semantics required for WAP conformance.
>>
>> This simply requires implementers to take responsibility for the
>> following:
>>
>> 1. Webfinger protocol incorporation for URI resolution
>> 2. RDF graph generation from XRD or JRD resources .
>
> yes, that is something we can work on adding to the spec. But that is
> work by itself, and we need
> more people to implement it.

Yes, but all we need to do is acknowledge the pathway that exists for
those who are conversant with Webfinger and JRD or XRD descriptor
resources.

> We are just about getting to the point where people are implementing
> the current spec correctly, and the spec itself still needs work.

This isn't about the folks working on the spec right now; it's about
letting others understand that we are open to a variety of contributor
and collaborator profiles.

>
> I think this is somehting we can discuss a road map for at TPAC.

Being officially open and engaging should be intrinsic to this endeavor :-)

Kingsley
>
>>
>> --
>>
>> Regards,
>>
>> Kingsley Idehen
>> Founder & CEO
>> OpenLink Software
>> Company Web: http://www.openlinksw.com
>> Personal Weblog: http://www.openlinksw.com/blog/~kidehen
>> Twitter/Identi.ca handle: @kidehen
>> Google+ Profile: https://plus.google.com/112399767740508618350/about
>> LinkedIn Profile: http://www.linkedin.com/in/kidehen
>>
>>
>>
>>
>
> Social Web Architect
> http://bblfish.net/
>


--

Regards,

Kingsley Idehen
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca handle: @kidehen
Google+ Profile: https://plus.google.com/112399767740508618350/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Kingsley Idehen
2012-10-19 17:30:55 UTC
Permalink
On 10/19/12 9:31 AM, Ben Laurie wrote:
>> >So perhaps it is up to you to answer: why should I not want that?
> I am not saying you should not want that, I am saying that ACLs on the
> resources do not achieve unlinkability.
>

You keep saying this, but I simply don't agree. I gave you the example
of a PKCS#12 file sent to you and a phone call during which its access
password is exchanged. How do you, the recipient of that data, even
understand the basis of the access policy attached to the protected
resource it unlocks? You don't know the nature of my data access policy.
It doesn't say: grant access to the subject of this certificate. Yet you
seem to assume that it can only test that claim when you repeat the
claim above.

You don't know the logic behind my assessment of your nebulous identity.
You aren't in my head. The beauty of logic is that it allows me to express
a good chunk of what's in my head via notation.

A machine is linkable via DNS. A document is linkable via an HTTP URL. I
am not linkable, because I (like you and every other human) am endowed
with cognitive powers and the ability to exploit temporality. We are
really difficult to pin down, even more so with the explosion of
networking devices, software, etc. that are loosely associated with us.

I can't stop you using the words, but I can assure you that your claims
are refutable via logic.

What I would really like you to do is point us to a working example of
something that meets your goals. Then we have something to compare.
Bottom line: somebody will learn something useful and everyone will
ultimately be better off.

Links:

1.
http://www.guardian.co.uk/commentisfree/belief/2009/jul/27/heidegger-being-time-philosophy

2. http://twitpic.com/1g03vo/full -- you can't really pin down the
entity depicted in that image, contrary to what you might think due to
Web perception illusion.

--

Regards,

Kingsley Idehen
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca handle: @kidehen
Google+ Profile: https://plus.google.com/112399767740508618350/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen


Henry Story
2012-10-19 17:47:23 UTC
Permalink
On 19 Oct 2012, at 19:30, Kingsley Idehen <***@openlinksw.com> wrote:

> On 10/19/12 9:31 AM, Ben Laurie wrote:
>>> >So perhaps it is up to you to answer: why should I not want that?
>> I am not saying you should not want that, I am saying that ACLs on the
>> resources do not achieve unlinkability.
>>
>
> You keep on saying this but I simply don't agree. I gave you an example of a PKCS#12 file sent to you and a phone call during which its access password is exchanged. How do you the recipient of that data even understand the basis of the data access policy associated with the protected resource to which it will provide access? You don't know the nature of my data access policy. It doesn't say: grant access to the subject of this certificate. But seem to assume that it can only test that claim when you repeat the claim above.

Kingsley: Ben is saying that you don't achieve unlinkability because the situations he is thinking of are those such as Wikileaks, where you need to consider the site you are connecting to as the enemy. Even if all that site knew were your public key, it could report that it has proof that someone knowing the private key corresponding to public key PK has connected.

Ben makes this clear in the e-mail here:
http://lists.w3.org/Archives/Public/public-privacy/2012OctDec/0079.html

The answer is simple:
that is just a use case that WebID is not meant for. For such use cases any linkability is problematic. But we are trying to build a social web, so clearly linkability at some level is necessary for what we want to do.

Before you argue with someone about a "linkability" problem, just ask them who they consider to be the enemy. If you considered your future self to be the enemy, you'd have even more trouble coming up with a good solution. :-)
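
To make the worry concrete, here is a minimal sketch (nothing WebID-specific;
the certificate file names and port are placeholders) of how any TLS endpoint
that merely requests a client certificate ends up holding a stable fingerprint
for the key that connected (and, with the TLS versions deployed today, an
on-path observer sees the same certificate):

    # Sketch: a server that asks for a TLS client certificate receives the
    # certificate, and therefore the public key, during the handshake, so it
    # can log a stable fingerprint of whoever connected.
    # The certificate/key file names and the port are placeholders.
    import hashlib
    import socket
    import ssl

    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.load_cert_chain(certfile="server.pem", keyfile="server.key")
    context.verify_mode = ssl.CERT_OPTIONAL   # request, but do not require, a client cert
    context.load_verify_locations(cafile="trusted.pem")

    with socket.create_server(("0.0.0.0", 8443)) as server:
        conn, addr = server.accept()
        with context.wrap_socket(conn, server_side=True) as tls:
            der_cert = tls.getpeercert(binary_form=True)  # raw client certificate, if one was sent
            if der_cert:
                fingerprint = hashlib.sha256(der_cert).hexdigest()
                # This value is what could be reported: "the holder of this key connected".
                print(addr, fingerprint)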


>
> You don't know the logic behind my assessment of your nebulous identity. You aren't in my head. The beauty of logic is that it allows me express a good chunk of what's in my head via notation.
>
> A machine is linkable via DNS. A document is linkable via an HTTP URL, I am not linkable because I (like you and every other human) is endowed with cognitive powers and the ability to exploit temporality. We are really difficult to pin down, even more so with the explosion of networking devices, software etc.. that are loosely associated with us.
>
> I can't stop you using the words, but I can assure you that you claims are refutable via logic.
>
> What I would really like you to do is point us to an working example of something that meets your goals. Then we have something to compare. Bottom line, somebody will learn something useful and everyone will be ultimately be better off etc..
>
> Links:
>
> 1. http://www.guardian.co.uk/commentisfree/belief/2009/jul/27/heidegger-being-time-philosophy
> 2. http://twitpic.com/1g03vo/full -- you can't really pin down the entity depicted in that image, contrary to what you might think due to Web perception illusion.
>
> --
>
> Regards,
>
> Kingsley Idehen
> Founder & CEO
> OpenLink Software
> Company Web: http://www.openlinksw.com
> Personal Weblog: http://www.openlinksw.com/blog/~kidehen
> Twitter/Identi.ca handle: @kidehen
> Google+ Profile: https://plus.google.com/112399767740508618350/about
> LinkedIn Profile: http://www.linkedin.com/in/kidehen
>
>
>
>
>
>
>

Social Web Architect
http://bblfish.net/
Anders Rundgren
2012-10-21 07:28:50 UTC
Permalink
On 2012-10-18 21:29, Ben Laurie wrote:
> On Thu, Oct 18, 2012 at 8:20 PM, Henry Story <henry.story-***@public.gmane.org> wrote:

>> from any person that was not able to access the resources. But you would
>> be linkable by your friends. I think you want both. Linkability by those
>> authorized, unlinkability for those unauthorized. Hence linkability is not
>> just a negative.
>
> I really feel like I am beating a dead horse at this point, but
> perhaps you'll eventually admit it. Your public key links you. Access
> control on the rest of the information is irrelevant. Indeed, access
> control on the public key is irrelevant, since you must reveal it when
> you use the client cert. Incidentally, to observers as well as the
> server you connect to.
>

That's undeniable.

I'm still curious about the use-cases for non-linkable authentication.
The Austrian government spent a lot of money and time on creating sector-
specific IDs but I doubt they actually work in practice. Without any
kind of "call-back" info, what kind of service can you actually get?

There's probably more utility in systems vouching for non-personal attributes
like "Employee of Acme", "I'm over 18", etc. Yes, InformationCards was a
good idea! It was just poorly thought out, since it didn't exploit the
platform that already existed in the wild: consumer PKI.

Anders
Mo McRoberts
2012-10-21 08:24:08 UTC
Permalink
On 18 Oct 2012, at 20:29, Ben Laurie <ben-cX+m+/***@public.gmane.org> wrote:

> I really feel like I am beating a dead horse at this point, but
> perhaps you'll eventually admit it. Your public key links you. Access
> control on the rest of the information is irrelevant. Indeed, access
> control on the public key is irrelevant, since you must reveal it when
> you use the client cert. Incidentally, to observers as well as the
> server you connect to.


Right, but that's the nature of a persistent identifier which is (surely) a prerequisite for auth — assuming one doesn't wish to remain anonymous and have some auth, you could hypothetically avoid the cross-domain linkability issue by having a key-per-site, which could be semi-automated on the client side.

What I can't see is how you can maintain persistence on the server side without something which ultimately boils down to (or otherwise allows the storage of) a persistent identifier.
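
A rough sketch of what the client side of that key-per-site idea could look
like (illustrative only; a real client would persist the keys and wrap each
one in a certificate for TLS client authentication):

    # Sketch of "key-per-site": the client keeps (or lazily creates) a separate
    # keypair for each origin it authenticates to, so the key presented to
    # site A never matches the key presented to site B.
    from cryptography.hazmat.primitives.asymmetric import rsa

    _keys_by_origin = {}

    def key_for_origin(origin):
        """Return the keypair reserved for this origin, creating one on first use."""
        if origin not in _keys_by_origin:
            _keys_by_origin[origin] = rsa.generate_private_key(
                public_exponent=65537, key_size=2048
            )
        return _keys_by_origin[origin]

    # key_for_origin("https://site-a.example") and key_for_origin("https://site-b.example")
    # return unrelated keys, so the two sites cannot correlate visits by public key alone.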

M.

--
Mo McRoberts - Technical Lead - The Space
0141 422 6036 (Internal: 01-26036) - PGP key CEBCF03E,
Zone 1.08, BBC Scotland, Pacific Quay, Glasgow, G51 1DA
Project Office: Room 7083, BBC Television Centre, London W12 7RJ



-----------------------------
http://www.bbc.co.uk
This e-mail (and any attachments) is confidential and
may contain personal views which are not the views of the BBC unless specifically stated.
If you have received it in
error, please delete it from your system.
Do not use, copy or disclose the
information in any way nor act in reliance on it and notify the sender
immediately.
Please note that the BBC monitors e-mails
sent or received.
Further communication will signify your consent to
this.
-----------------------------
Ben Laurie
2012-10-21 10:18:42 UTC
Permalink
On Sun, Oct 21, 2012 at 9:24 AM, Mo McRoberts <***@bbc.co.uk> wrote:
>
> On 18 Oct 2012, at 20:29, Ben Laurie <***@links.org> wrote:
>
>> I really feel like I am beating a dead horse at this point, but
>> perhaps you'll eventually admit it. Your public key links you. Access
>> control on the rest of the information is irrelevant. Indeed, access
>> control on the public key is irrelevant, since you must reveal it when
>> you use the client cert. Incidentally, to observers as well as the
>> server you connect to.
>
>
> Right, but that's the nature of a persistent identifier which is (surely) a prerequisite for auth — assuming one doesn't wish to remain anonymous and have some auth, you could hypothetically avoid the cross-domain linkability issue by having a key-per-site, which could be semi-automated on the client side.
>
> What I can't see is how you can maintain persistence on the server side without something which ultimately boils down to (or otherwise allows the storage of) a persistent identifier.

Obviously. I'm talking about linkability across sites.

>
> M.
>
> --
> Mo McRoberts - Technical Lead - The Space
> 0141 422 6036 (Internal: 01-26036) - PGP key CEBCF03E,
> Zone 1.08, BBC Scotland, Pacific Quay, Glasgow, G51 1DA
> Project Office: Room 7083, BBC Television Centre, London W12 7RJ
>
>
>
> -----------------------------
> http://www.bbc.co.uk
> This e-mail (and any attachments) is confidential and
> may contain personal views which are not the views of the BBC unless specifically stated.
> If you have received it in
> error, please delete it from your system.
> Do not use, copy or disclose the
> information in any way nor act in reliance on it and notify the sender
> immediately.
> Please note that the BBC monitors e-mails
> sent or received.
> Further communication will signify your consent to
> this.

Oh really? Further communication will signify your agreement to send me £10,000.
Kingsley Idehen
2012-10-21 16:32:13 UTC
Permalink
On 10/18/12 3:29 PM, Ben Laurie wrote:
>> from any person that was not able to access the resources. But you would
>> >be linkable by your friends. I think you want both. Linkability by those
>> >authorized, unlinkability for those unauthorized. Hence linkability is not
>> >just a negative.
> I really feel like I am beating a dead horse at this point, but
> perhaps you'll eventually admit it. Your public key links you. Access
> control on the rest of the information is irrelevant. Indeed, access
> control on the public key is irrelevant, since you must reveal it when
> you use the client cert. Incidentally, to observers as well as the
> server you connect to.
>
>
>
>
A public key links to a private key.

It could also link to a machine, via the resolvable machine names that
DNS provides on the Internet.

It could also link to a composite of a machine, user agent, and referring
document, via the resolvable document names that HTTP provides on the
Web of Documents.

It doesn't provide the high-precision link that you speculate about
(repeatedly) re. a Web of Linked Data, since the referent of a Linked
Data URI is potentially nebulous, e.g., the entities "You" and "I".

I know you don't want to concede this reality, but stop making it sound
like those who oppose your view are simply being obstinate. You are the
one being utterly obstinate here. I encourage you to make your point with
clear examples so that others can juxtapose your views and ours.

Back to you.


--

Regards,

Kingsley Idehen
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca handle: @kidehen
Google+ Profile: https://plus.google.com/112399767740508618350/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Ben Laurie
2012-10-21 16:49:54 UTC
Permalink
On Sun, Oct 21, 2012 at 5:32 PM, Kingsley Idehen <***@openlinksw.com> wrote:
> On 10/18/12 3:29 PM, Ben Laurie wrote:
>>>
>>> from any person that was not able to access the resources. But you would
>>> >be linkable by your friends. I think you want both. Linkability by those
>>> >authorized, unlinkability for those unauthorized. Hence linkability is
>>> > not
>>> >just a negative.
>>
>> I really feel like I am beating a dead horse at this point, but
>> perhaps you'll eventually admit it. Your public key links you. Access
>> control on the rest of the information is irrelevant. Indeed, access
>> control on the public key is irrelevant, since you must reveal it when
>> you use the client cert. Incidentally, to observers as well as the
>> server you connect to.
>>
>>
>>
>>
> A public key links to a private key.
>
> It could also link to a machine -- due to resolvable machine names on the
> Internet due to DNS .
>
> It could also link to composite of a machine, user agent, and referrer
> document -- due to resolvable document names on the Web of Documents due to
> HTTP.
>
> It doesn't provide the high precision link that you speculate about
> (repeatedly) re. a Web of Linked Data -- since the referent of a Linked Data
> URI is potentially nebulous e.g., entities "You" and "I" .

Ah, I agree that the key does not inherently link back to a particular
person. What it links is the various interactions that occur under the
identity represented by that key. As we know from various anonymity
disasters (the AOL search terms and Netflix incidents being the best
known), it is not hard, in practice, to go back from those
interactions to the person behind them.

To be clear: linkability does _not_ refer to the ability to link
events to people (or machines). It refers to the ability to link
events to each other. The reason linkability is a privacy problem is
that it turns out that in practice you do not need very many linked
events to figure out who was behind them.

I am sorry if that has not been clear from the start.

> I know you don't want to concede this reality, but stop making it sound like
> those that oppose your view are simply being obstinate. You are the one
> being utterly obstinate here. I encourage you to make you point with clear
> examples so that others can juxtapose your views and ours.
>
> Back to you.
>
>
>
> --
>
> Regards,
>
> Kingsley Idehen
> Founder & CEO
> OpenLink Software
> Company Web: http://www.openlinksw.com
> Personal Weblog: http://www.openlinksw.com/blog/~kidehen
> Twitter/Identi.ca handle: @kidehen
> Google+ Profile: https://plus.google.com/112399767740508618350/about
> LinkedIn Profile: http://www.linkedin.com/in/kidehen
>
>
>
>
>
Kingsley Idehen
2012-10-21 17:00:13 UTC
Permalink
On 10/21/12 12:49 PM, Ben Laurie wrote:
> On Sun, Oct 21, 2012 at 5:32 PM, Kingsley Idehen <kidehen-***@public.gmane.orgm> wrote:
>> On 10/18/12 3:29 PM, Ben Laurie wrote:
>>>> from any person that was not able to access the resources. But you would
>>>>> be linkable by your friends. I think you want both. Linkability by those
>>>>> authorized, unlinkability for those unauthorized. Hence linkability is
>>>>> not
>>>>> just a negative.
>>> I really feel like I am beating a dead horse at this point, but
>>> perhaps you'll eventually admit it. Your public key links you. Access
>>> control on the rest of the information is irrelevant. Indeed, access
>>> control on the public key is irrelevant, since you must reveal it when
>>> you use the client cert. Incidentally, to observers as well as the
>>> server you connect to.
>>>
>>>
>>>
>>>
>> A public key links to a private key.
>>
>> It could also link to a machine -- due to resolvable machine names on the
>> Internet due to DNS .
>>
>> It could also link to composite of a machine, user agent, and referrer
>> document -- due to resolvable document names on the Web of Documents due to
>> HTTP.
>>
>> It doesn't provide the high precision link that you speculate about
>> (repeatedly) re. a Web of Linked Data -- since the referent of a Linked Data
>> URI is potentially nebulous e.g., entities "You" and "I" .
> Ah, I agree that the key does not inherently link back to a particular
> person. What it links is the various interactions that occur under the
> identity represented by that key. As we know from various anonymity
> disasters (the AOL search terms and Netflix incidents being the best
> known), it is not hard, in practice, to go back from those
> interactions to the person behind them.

No, not the "Person" (a nebulous non electronic media entity). You have
links back to a user agent (software) associated with a network address.
>
> To be clear: linkability does _not_ refer to the ability to link
> events to people (or machines). It refers to the ability to link
> events to each other.

It doesn't matter: so you link two events; what does that ultimately prove
beyond the use of some machinery on a network?

> The reason linkability is a privacy problem is
> that it turns out that in practice you do not need very many linked
> events to figure out who was behind them.

That's ultimately a function of logic. Today, email addresses make the
process of identity reconciliation very easy, thanks to Web 2.0 patterns
whereby your email address (or, if you are really unlucky, your address
book) serves as the "super key". This will change if folks can mint new
personal identifiers (which aren't email addresses) with alacrity.
That's what this whole issue of WebID, Linked Data, entity relationship
semantics, and logic is all about: create higher burdens of proof that
address:

1. context fluidity
2. nebulous nature of cognitive entities not of the Web or Internet --
"You", "I", and "Others".

We are speaking about a foundation for what comes next -- via existing
Web architecture -- as opposed to what's broken right now :-)

>
> I am sorry if that has not been clear from the start.
>
>> I know you don't want to concede this reality, but stop making it sound like
>> those that oppose your view are simply being obstinate. You are the one
>> being utterly obstinate here. I encourage you to make you point with clear
>> examples so that others can juxtapose your views and ours.
>>
>> Back to you.
>>
>>


--

Regards,

Kingsley Idehen
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca handle: @kidehen
Google+ Profile: https://plus.google.com/112399767740508618350/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Sam Hartman
2012-10-21 17:55:25 UTC
Permalink
I think if I hear the phrase "context fluidity" or "nebulous entity" one
more time I'm going to give up in disgust.
Those phrases don't have enough meaning to have any place in a security
argument.

You seem to believe that it is necessary to prove an event is related to
a person in order to have a privacy problem.
If there are 20 seditious (in the context of some government)
messages posted and the government is able to link those events down to
3 machines and conclude that only 10 people had access to those machines
at the same time, you have a privacy problem.
If the government decides that executing 10 people is an acceptable
cost those 10 people are just as dead even if 9 of them had nothing to
do with it.

Sitting there going "you never proved it was me, only my machine," isn't
going to help you as the fluids of your context are leaking out of an
ever more nebulous entity.
The fact is that by linking events, people can gain information about
real-world entities that might have had something to do with an event.
To the extent they gain that information, there is a loss of privacy.

Not all losses of privacy are bad.
Not all linkability is bad.
I give up privacy and create linkability every time I log into a site,
so that I can store preferences, manage entries I've posted in the past,
etc.
Of course for the most part I'm not risking my fluid context with what I
do online. I'd probably decide preferences weren't worth it if that was
the potential price.

But seriously, can we either move this discussion off IETF lists, or use
enough precision and stop hiding behind vague terminology, so that we can
have a computer security discussion?

Thanks for your consideration,

--Sam
Kingsley Idehen
2012-10-21 18:26:32 UTC
Permalink
On 10/21/12 1:55 PM, Sam Hartman wrote:
> I think if I hear the phrase context fluidity or nebulous enttity one
> more time I'm going to give up in disgust.
> Those phrases don't have enough meaning to have any place in a security
> argument.

Context matters.

The subject of a security token matters.

If they don't mean anything to you, then clearly we are talking past one
another.

>
> You seem to believe that it is necessary to prove an event is related to
> a person in order to have a privacy problem.

Sorry, but it isn't as simple as that. But you don't believe in context
or the nebulous nature of identity, so what else can I say?

Somehow, you believe privacy is a simple matter. It isn't so simple, far
from it.

In one context I might want you to know what I "Like" on Facebook; in
another I might not. I need to be the controller of this reality (fluid
context). That's my reality offline, and it can be my reality online too.

> If there are 20 seditious (in the context of some government)
> messages posted and the government is able to link those events down to
> 3 machines and conclude that only 10 people had access to those machines
> at the same time, you have a privacy problem.

Yes, but I don't think you can prove who the 10 people were at that
specific time.

Again, you have temporality, context, and cognitive beings in the mix.

Did "I" send this email? Or was it sent by some entity associated with
the mailto: scheme URI: <mailto:kidehen-HpHEqLDO2a7UEDaH6ef/***@public.gmane.org> ? Who am I ? Who
are You?
Of Whom do you speak?

> If the government decides that executing 10 people is an acceptable
> cost those 10 people are just as dead even if 9 of them had nothing to
> do with it.

Well, I don't know that to be the norm in the real world. As it happens,
I've lived under dictatorships for a significant chunk of my life, and
even under those circumstances it isn't so easy to pull off what you just
outlined as some kind of example.

>
> Sitting there going "you never proved it was me, only my machine," isn't
> going to help you as the fluids of your context are leaking out of an
> ever more nebulous entity.
> The fact is that by linking events, people can gain information about
> real-world entities that might have had something to do with an event.
> To the extent they gain that information, there is a loss of privacy.

Privacy is lost when you aren't the one calibrating your vulnerability.
This applies to online and offline media. That's the fundamental point
re. privacy. It is all about "You", not "Them". Thus, we need point-to-point
communications where payloads reach their destinations without anyone
snooping or acting as a "big brother" intermediary. "You" have to be able
to control that.

Simple example: "I" should be able to place a document in your in-box
knowing its only accessible to "You". Likewise, you should be able to
ensure that only "I" can place a document in an in-box you've setup for:

1. me
2. a group to which I belong
3. an expression that logically concludes I am an accepted depositor.
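
As a rough sketch, cases 1 and 2 can be written down in the W3C ACL
vocabulary and loaded by the in-box's guard; the WebIDs, group document,
and in-box URL below are made-up placeholders, and case 3 would need
something more expressive than plain ACL terms:

    # Rough sketch: load in-box access rules expressed with the W3C ACL vocabulary.
    # All URLs are placeholders; case 3 (an arbitrary logical expression) is not
    # expressible with these terms alone.
    from rdflib import Graph

    acl_rules = """
    @prefix acl: <http://www.w3.org/ns/auth/acl#> .

    # 1. only "me" (a specific WebID) may drop documents into the in-box
    <#rule-me> a acl:Authorization ;
        acl:accessTo <https://your.example/inbox/> ;
        acl:agent <https://my.example/profile#me> ;
        acl:mode acl:Append .

    # 2. any member of a group document the server can dereference and check
    <#rule-group> a acl:Authorization ;
        acl:accessTo <https://your.example/inbox/> ;
        acl:agentClass <https://your.example/groups/depositors#group> ;
        acl:mode acl:Append .
    """

    g = Graph()
    g.parse(data=acl_rules, format="turtle",
            publicID="https://your.example/inbox/.acl")
    print(len(g), "ACL triples loaded")  # the guard consults this graph on each deposit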

>
> Not all losses of privacy are bad.

I never implied anything to the contrary. The only bad loss is losing the
ability to calibrate your vulnerability, online or offline.

> Not all linkability is bad.

Never said or ever implied that either.

> I give up privacy and create linkability every time I log into a site,
> so that I can store preferences, manage entries I've posted in the past,
> etc.

You are calibrating your vulnerability when you decide to make data
public, in any form.

> Of course for the most part I'm not risking my fluid context with what I
> do online.

No, you are aware of the context in play. You know it's fluid, but you
don't care, since the bottom line is that you know it's out in a medium
that doesn't have an eraser.

> I'd probably decide preferences weren't worth it if that was
> the potential price.
>
> But seriously, can we either move this discussion off IETF lists or use
> enough precision and stop hiding behind vague terminology that we can
> have a computer security discussion?

I am not in the business of vague terminology. I have live examples that
back up whatever opinions I hold. They are just a link away, or a
Google search away.


>
> Thanks for your consideration,
>
> --Sam
>
>
>


--

Regards,

Kingsley Idehen
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca handle: @kidehen
Google+ Profile: https://plus.google.com/112399767740508618350/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Dick Hardt
2012-10-21 21:17:41 UTC
Permalink
On Oct 21, 2012, at 9:32 AM, Kingsley Idehen <kidehen-HpHEqLDO2a7UEDaH6ef/***@public.gmane.org> wrote:

> On 10/18/12 3:29 PM, Ben Laurie wrote:
>>
>> I really feel like I am beating a dead horse at this point, but
>> perhaps you'll eventually admit it. Your public key links you. Access
>> control on the rest of the information is irrelevant. Indeed, access
>> control on the public key is irrelevant, since you must reveal it when
>> you use the client cert. Incidentally, to observers as well as the
>> server you connect to.
>>
> A public key links to a private key.

A public key or private key *is* an identifier. If there is a 1:1 mapping of public/private key pair to a user, and if the key pair is used at more than one place, then those places know it is the same user, and the activities at each of those places are linked.
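
A toy sketch of that linking step (the key bytes and log entries are invented;
the point is only that identical key material yields identical fingerprints at
both places):

    # Toy sketch: two sites independently log events against a fingerprint of
    # the public key presented to them; identical keys yield identical
    # fingerprints, so the logs can be joined.
    import hashlib

    def fingerprint(public_key_der: bytes) -> str:
        # A stable identifier derived from the key material alone.
        return hashlib.sha256(public_key_der).hexdigest()

    # Placeholder for the DER-encoded public key one user presents everywhere.
    alice_key_der = b"...the same key material presented to both sites..."

    site_a_log = {fingerprint(alice_key_der): ["posted a comment"]}
    site_b_log = {fingerprint(alice_key_der): ["placed an order"]}

    # Anyone holding both logs can join them on the fingerprint: the events are
    # now linked to each other, even though neither site knows who the person is.
    for fp in set(site_a_log) & set(site_b_log):
        print(fp[:16], site_a_log[fp] + site_b_log[fp])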

> You are the one being utterly obstinate here.

Not true … and I don't think that was a productive comment.

> I encourage you to make you point with clear examples so that others can juxtapose your views and ours.

Perhaps my explanation above makes the point clear to you.

-- Dick
Henry Story
2012-10-21 22:13:56 UTC
Permalink
It would be nice if we could remove the ad-hominem attacks here. These
issues can be worked out clearly and calmly by careful reasoning and
attending to some existing definitions.

Below I show how I agree with Dick Hardt and Ben Laurie that public
keys are identifiers. But the point of this thread, entitled
"Liking Linkability", is that this is not the problem for privacy that
it is thought to be. Indeed my point is that linkability is very important
for increasing privacy....

On 21 Oct 2012, at 23:17, Dick Hardt <dick.hardt-***@public.gmane.org> wrote:

>
> On Oct 21, 2012, at 9:32 AM, Kingsley Idehen <kidehen-HpHEqLDO2a7UEDaH6ef/***@public.gmane.org> wrote:
>
>> On 10/18/12 3:29 PM, Ben Laurie wrote:
>>>
>>> I really feel like I am beating a dead horse at this point, but
>>> perhaps you'll eventually admit it. Your public key links you. Access
>>> control on the rest of the information is irrelevant. Indeed, access
>>> control on the public key is irrelevant, since you must reveal it when
>>> you use the client cert. Incidentally, to observers as well as the
>>> server you connect to.
>>>
>> A public key links to a private key.
>
> A public key or private key *is* an identifier. If there is a 1:1 mapping of public/private key pair to a user, and if the key pair is used at more than one place, then those places know it is the same user and the activities at each of those places is linked.

Note Dick, that I (Henry Story) agree with you and Ben Laurie here: A public key is
an identifier. If you use the same public key to identify yourself at various sites
then those sites can link you. This may be exactly what you intend to do, though, and so
it is not a priori a bad thing, which is why the title of this post is "Liking Linkability".

In this thread my argument has consisted in making two points:

1. that showing someone an identifier - be it a public key or another string with an
inverse functional relation to an agent - may not be a linkability problem
(because you may not consider the agent receiving the information to be the enemy)

2. that linkability is important for privacy

1. linkability
--------------

If we look at the definition given of linkability in

https://tools.ietf.org/html/draft-hansen-privacy-terminology-03

it says:

[[
Definition: Unlinkability of two or more Items Of Interest (e.g.,
subjects, messages, actions, ...) from an attacker's perspective
means that within a particular set of information, the attacker
cannot distinguish whether these IOIs are related or not (with a
high enough degree of probability to be useful).

]]

It is defining unlinkability in terms of "two or more items of interest
from an attacker's perspective".

So my point is simply: who is the attacker? If you consider the site you are
authenticating to with OpenID, BrowserID, or WebID to be the attacker,
then you should not use any of those technologies. If on the
other hand you consider that those sites are *not* the attacker - because, say,
you only give them your identity when you are sure that you want to do so -
then the negative linkability claim cannot be made according to the above
definition.

Or at the very least it is a very different problem at that point: if you
exclude the site you are authenticating to as the enemy, then identifying yourself
with your public key is not a linkability problem according to the above definition.
It would be if some other agent listening in on the conversation could surmise
your public key. They would then be able to know that you talked to site B. (If they
also knew the content of the conversation then they would know even more, and your
privacy problem would indeed be greater)

2. linkability's importance to privacy
--------------------------------------

I then argued that one cannot make a simple claim that linkability is a bad thing.
In fact there are good reasons to believe that certain types of linkability
are very important to create distributed social networks - which I call the social web.
A Social Web would clearly be a big improvement for privacy over how things are
being done currently. I don't want to repeat this whole thread here since that was
the argument I made in the initial post in this thread which is archived here:

http://lists.w3.org/Archives/Public/public-privacy/2012OctDec/0003.html


>
>> You are the one being utterly obstinate here.
>
> Not true … and I don't think that was a productive comment.

I don't think that comment is fruitful either. This case can be
argued well without ad-hominem attacks.

>
>> I encourage you to make you point with clear examples so that others can juxtapose your views and ours.
>
> Perhaps my explanation above makes the point clear to you.
>
> -- Dick

Social Web Architect
http://bblfish.net/
Kingsley Idehen
2012-10-22 02:24:32 UTC
Permalink
On 10/21/12 5:17 PM, Dick Hardt wrote:
> On Oct 21, 2012, at 9:32 AM, Kingsley Idehen <kidehen-HpHEqLDO2a7UEDaH6ef/***@public.gmane.org> wrote:
>
>> On 10/18/12 3:29 PM, Ben Laurie wrote:
>>> I really feel like I am beating a dead horse at this point, but
>>> perhaps you'll eventually admit it. Your public key links you. Access
>>> control on the rest of the information is irrelevant. Indeed, access
>>> control on the public key is irrelevant, since you must reveal it when
>>> you use the client cert. Incidentally, to observers as well as the
>>> server you connect to.
>>>
>> A public key links to a private key.
> A public key or private key *is* an identifier.

And together they make a composite key, an identifier.

> If there is a 1:1 mapping of public/private key pair to a user, and if the key pair is used at more than one place, then those places know it is the same user and the activities at each of those places is linked.

Yes, but I am not in any way asserting that the "user" is a known
entity, as per your assumptions. The subject of an X.509 certificate is
who, whom, or what?

At best you can say there is an entity that is the subject of the graph
imprinted in the X.509 certificate.
>
>> You are the one being utterly obstinate here.
> Not true … and I don't think that was a productive comment.
>
>> I encourage you to make you point with clear examples so that others can juxtapose your views and ours.
> Perhaps my explanation above makes the point clear to you.

Yes, but only to the point that it clarifies that we have strongly differing
views about "user". In many homes today a single device is used by
many nebulous entities. How do you pin down the activities of a specific
entity associated with some composite of public key, private key, URI
in SAN, etc.? It isn't so easy.

Ultimately, the fact that we think in terms of "sites" and flawed
fingerprints remains part of the problem in this conversation.

Personally, I think we will be more constructive working with actual examples.
So far, Ben hasn't produced a single example for which I haven't
provided a clear response re. the use of structured data and logic to
surmount those problems.

Also note that when Henry mentioned Tor, he received the usual response. All
of a sudden Tor, by implication, meant the subject was of a dubious nature,
even though the baseline was supposedly about no fingerprints
whatsoever, even at the packet-routing level.

>
> -- Dick
>
>


--

Regards,

Kingsley Idehen
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca handle: @kidehen
Google+ Profile: https://plus.google.com/112399767740508618350/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Melvin Carvalho
2012-10-18 16:23:41 UTC
Permalink
On 18 October 2012 17:34, Ben Laurie <benl-hpIqsD4AKlfQT0dZR+***@public.gmane.org> wrote:

> On 9 October 2012 14:19, Henry Story <henry.story-***@public.gmane.org> wrote:
> > Still in my conversations I have found that many people in security
> spaces
> > just don't seem to be able to put the issues in context, and can get
> sidetracked
> > into not wanting any linkability at all. Not sure how to fix that.
>
> You persist in missing the point, which is why you can't fix it. The
> point is that we want unlinkability to be possible. Protocols that do
> not permit it or make it difficult are problematic. I have certainly
> never said that you should always be unlinked, that would be stupid
> (in fact, I once wrote a paper about how unpleasant it would be).
>
> As I once wrote, anonymity should be the substrate. Once you have
> that, you can the build on it to be linked when you choose to be, and
> not linked when you choose not to be. If it is not the substrate, then
> you do not have this choice.
>
>
What are the criteria for anonymity to be considered an acceptable
substrate?

1. For example, if I don't send my certificate, no one can ever link me. Is
that good enough?

2. I suggested a shared anonymous identity (either an individual or group),
e.g. at http://webid.info/#anon . Would that solve the problem?

3. Are we looking for more crypto-style proofs, such as Chaumian blinding,
anonymous veto, OpenPGP-style subkeys, or one-time shared secrets?

I understand what you are suggesting, but on what criteria would a
suggested solution be measured?
David Chadwick
2012-10-18 19:18:41 UTC
Permalink
Hi Ben

I disagree. It depends upon your risk assessment. Your stand is like
saying TLS should be the substrate, not http. There are two alternative
viewpoints. You can either start with the lowest security/privacy and
add to it, or make the highest security/privacy the default and then
take from it. So you should not necessarily mandate that U-Prove/Idemix
are the default tokens, but rather only require them if your risk
assessment says privacy protection is essential

regards

David

On 18/10/2012 16:34, Ben Laurie wrote:
> On 9 October 2012 14:19, Henry Story <henry.story-***@public.gmane.org> wrote:
>> Still in my conversations I have found that many people in security spaces
>> just don't seem to be able to put the issues in context, and can get sidetracked
>> into not wanting any linkability at all. Not sure how to fix that.
>
> You persist in missing the point, which is why you can't fix it. The
> point is that we want unlinkability to be possible. Protocols that do
> not permit it or make it difficult are problematic. I have certainly
> never said that you should always be unlinked, that would be stupid
> (in fact, I once wrote a paper about how unpleasant it would be).
>
> As I once wrote, anonymity should be the substrate. Once you have
> that, you can the build on it to be linked when you choose to be, and
> not linked when you choose not to be. If it is not the substrate, then
> you do not have this choice.
>
Ben Laurie
2012-10-22 16:58:33 UTC
Permalink
On Thu, Oct 18, 2012 at 8:18 PM, David Chadwick <***@kent.ac.uk> wrote:
> Hi Ben
>
> I disagree. It depends upon your risk assessment. Your stand is like saying
> TLS should be the substrate, not http.

Not at all. You can add security to an insecure connection. You cannot
add anonymity to an identified session. My stand is, in fact, like
saying that TCP should be the substrate, not TLS.

> There are two alternative viewpoints.
> You can either start with the lowest security/privacy and add to it, or make
> the highest security/privacy the default and then take from it. So you
> should not necessarily mandate that U-Prove/Idemix are the default tokens,
> but rather only require them if your risk assessment says privacy protection
> is essential
>
> regards
>
> David
>
>
> On 18/10/2012 16:34, Ben Laurie wrote:
>>
>> On 9 October 2012 14:19, Henry Story <***@bblfish.net> wrote:
>>>
>>> Still in my conversations I have found that many people in security
>>> spaces
>>> just don't seem to be able to put the issues in context, and can get
>>> sidetracked
>>> into not wanting any linkability at all. Not sure how to fix that.
>>
>>
>> You persist in missing the point, which is why you can't fix it. The
>> point is that we want unlinkability to be possible. Protocols that do
>> not permit it or make it difficult are problematic. I have certainly
>> never said that you should always be unlinked, that would be stupid
>> (in fact, I once wrote a paper about how unpleasant it would be).
>>
>> As I once wrote, anonymity should be the substrate. Once you have
>> that, you can the build on it to be linked when you choose to be, and
>> not linked when you choose not to be. If it is not the substrate, then
>> you do not have this choice.
>>
> _______________________________________________
> saag mailing list
> ***@ietf.org
> https://www.ietf.org/mailman/listinfo/saag
David Chadwick
2012-10-22 17:41:40 UTC
Permalink
On 22/10/2012 17:58, Ben Laurie wrote:
> On Thu, Oct 18, 2012 at 8:18 PM, David Chadwick <d.w.chadwick-***@public.gmane.org> wrote:
>> Hi Ben
>>
>> I disagree. It depends upon your risk assessment. Your stand is like saying
>> TLS should be the substrate, not http.
>
> Not at all. You can add security to an insecure connection. You cannot
> add anonymity to an identified session.

Once you have a session you have linkability.
So if you want unlinkability there can be no concept of a session, which,
by its very nature, links a series of messages together. So when you
want anonymity you switch from your existing session to using Tor or
some other privacy-protecting mechanism.

regards

David

My stand is, in fact, like
> saying that TCP should be the substrate, not TLS.
>
>> There are two alternative viewpoints.
>> You can either start with the lowest security/privacy and add to it, or make
>> the highest security/privacy the default and then take from it. So you
>> should not necessarily mandate that U-Prove/Idemix are the default tokens,
>> but rather only require them if your risk assessment says privacy protection
>> is essential
>>
>> regards
>>
>> David
>>
>>
>> On 18/10/2012 16:34, Ben Laurie wrote:
>>>
>>> On 9 October 2012 14:19, Henry Story <henry.story-***@public.gmane.org> wrote:
>>>>
>>>> Still in my conversations I have found that many people in security
>>>> spaces
>>>> just don't seem to be able to put the issues in context, and can get
>>>> sidetracked
>>>> into not wanting any linkability at all. Not sure how to fix that.
>>>
>>>
>>> You persist in missing the point, which is why you can't fix it. The
>>> point is that we want unlinkability to be possible. Protocols that do
>>> not permit it or make it difficult are problematic. I have certainly
>>> never said that you should always be unlinked, that would be stupid
>>> (in fact, I once wrote a paper about how unpleasant it would be).
>>>
>>> As I once wrote, anonymity should be the substrate. Once you have
>>> that, you can the build on it to be linked when you choose to be, and
>>> not linked when you choose not to be. If it is not the substrate, then
>>> you do not have this choice.
>>>
>> _______________________________________________
>> saag mailing list
>> saag-***@public.gmane.org
>> https://www.ietf.org/mailman/listinfo/saag
>
Klaas Wierenga (kwiereng)
2012-10-09 12:29:46 UTC
Permalink
Hi Henry,

(adding saag, had not realised that it was a resend)

On Oct 9, 2012, at 12:05 AM, Henry Story <***@bblfish.net> wrote:

>
> On 8 Oct 2012, at 20:27, "Klaas Wierenga (kwiereng)" <***@cisco.com> wrote:
>
>> Hi Henry,
>>
>> I think your definition of what constitutes a private conversation is a bit limited, especially in an electronic day and age. I consider the simple fact that we are having a conversation, without knowing what we talk about, a privacy sensitive thing. Do you want your wife to know that you are talking to your mistress, or your employer that you have a job interview?
>> And do you believe that the location where you are does not constitute a privacy sensitive attribute?
>
> Ok I think my definition still works: If someone knows that you are communicating with someone then they know something about the conversation. In my definition that does constitute a privacy violation at least for that bit of information.

Ehm, I think that you need quite a bit of imagination to read that into your definition ;-) So if you also mean "or are aware of the communication", you should perhaps include that, but, as you point out below, that does complicate things big time.

> Though I think you exaggerate what they know. Your wife won't know that you are talking to your mistress, just that you are talking to another server (If it is a freedom box, they could narrow it down to an individual). Information about it being a mistress cannot be found just by seeing information move over the wire. Neither does an employer know you have a job interview just because you are communicating with some server x from a different company. But he could be worried.

I think you are now digressing from the general case, whilst your definition was meant to be very generic (I believe?). I am not talking about implementations, but about the general principle. The fact that there is an xmpp session between ***@cisco.com and ***@apple.com may indicate to my manager that I am looking for another job. My manager might also be worried if he sees me entering the Google premises, but that is much less likely (even though in the past I have helped applicants get out of the building through the emergency exit because a colleague had arrived in the reception area ;-) The reason I brought these examples up is that I believe something has changed with the ubiquity of online databases and online communication. When I didn't want to be overheard in the past, I would go for a walk with someone and we could talk with reasonable assurance. Now I have to trust that, say, Skype is not listening in on my conversation and that Twitter will not hand my tweets to DHS. So the simple fact that I use an encrypted channel is not sufficient.

>
> So if I apply this to WebID ( http://webid.info/ ) - which is I think why you bring it up - WebID is currently based on TLS, which does make it possible to track connections between servers. But remember that the perfect is the enemy of the good. How come? Well, put things in context: by failing to create simple distributed systems which protect privacy of content pretty well, that works with current deployed technologies (e.g. browsers, and servers), we have allowed large social networks to grow to sizes unimaginable in any previous surveillance society. So even a non optimal system like TLS can still bring huge benefits over the current status quo. If only in educating people in how to build such reasonably safe distributed systems.

I was not referring to WebID in particular. I applaud your effort, and do realise that perfect will not happen. However I think that your definition of privacy should either be scoped tightly to particular use cases, or it is too broad a brush. I tend to think that a single definition of privacy is not very useful, and rather like to think about different forms of privacy: location privacy, encrypted channels, plausible deniability, etc.

>
> But having put that in context, the issue of tracking what servers are communicating remains. There are technologies designed to make that opaque, such as Tor. I still need to prove that one can have .onion WebIDs, and that one can also connect with browsers using TLS behind Tor - but it should not be difficult to do. Once one can show this then it should be possible to develop protocols that make this a lot more efficient. Would that convince you?

Ehm, what actually concerns me is not so much the fact that *it is possible* to design proper protocols as that I would like to provide guidance to protocol developers to *prevent improper protocols*. Does that make sense?

Klaas

>
>>
>> Klaas
>>
>> Sent from my iPad
>>
>> On 8 okt. 2012, at 19:01, "Henry Story" <***@bblfish.net> wrote:
>>
>>>
>>> Notions of unlinkability of identities have recently been deployed
>>> in ways that I would like to argue, are often much too simplistic,
>>> and in fact harmful to wider issues of privacy on the web.
>>>
>>> I would like to show this in two stages:
>>> 1. That linkability of identity is essential to electronic privacy
>>> on the web
>>> 2. Show an example of an argument by Harry Halpin relating to
>>> linkability, and by pulling it apart show how careful one has
>>> to be with taking such arguments at face value
>>>
>>> Because privacy is the context in which the linkability or non linkability
>>> of identities is important, I would like to start with a simple working
>>> definition of what constitutes privacy with the following minimal
>>> criterion [0] that I think everyone can agree on:
>>>
>>> "A communication between two people is private if the only people
>>> who are party to the conversation are the two people in question.
>>> One can easily generalise to groups: a conversation between groups
>>> of people is private (to the group) if the only people who can
>>> participate/read the information are members of that group"
>>>
>>> Note that this does not deal with issues of people who were privy to
>>> the conversation later leaking information voluntarily. We cannot
>>> technically legislate good behaviour, though we can make it possible
>>> for people to express context. [1]
>>>
>>>
>>> 1. On the importance of linkability of identities to privacy
>>> ============================================================
>>>
>>> A. Issues of Centralisation
Melvin Carvalho
2012-10-09 15:10:52 UTC
Permalink
On 6 October 2012 15:49, Henry Story <henry.story-***@public.gmane.org> wrote:
>
> Notions of unlinkability of identities have recently been deployed
> in ways that I would like to argue, are often much too simplistic,
> and in fact harmful to wider issues of privacy on the web.
>
It seems to me that there are three phases of the web:

1. Unlinkability -- this was essentially web 1.0 and provided anonymity

2. Pseudo-anonymity -- this was essentially web 2.0 and provided user
logins, but also led to walled gardens and data silos

3. Linkability -- perhaps this is the great unsolved problem of web 3.0,
and will provide data portability
>
> I would like to show this in two stages:
> 1. That linkability of identity is essential to electronic privacy
> on the web
> 2. Show an example of an argument by Harry Halpin relating to
> linkability, and by pulling it apart show how careful one has
> to be with taking such arguments at face value
>
> Because privacy is the context in which the linkability or non linkability
> of identities is important, I would like to start with a simple working
> definition of what constitutes privacy with the following minimal
> criterion [0] that I think everyone can agree on:
>
> "A communication between two people is private if the only people
> who are party to the conversation are the two people in question.
> One can easily generalise to groups: a conversation between groups
> of people is private (to the group) if the only people who can
> participate/read the information are members of that group"
>
> Note that this does not deal with issues of people who were privy to
> the conversation later leaking information voluntarily. We cannot
> technically legislate good behaviour, though we can make it possible
> for people to express context. [1]
>
>
> 1. On the importance of linkability of identities to privacy
> ============================================================
>
> A. Issues of Centralisation
> ---------------------------
>
> We can put this with the following thought experiment which I put
> to Ben Laurie recently [0].
>
> First imagine that we all are on one big social network, where
> all of our home pages are at the same URL. Nobody could link
> to our profile page in any meaningful way. The bigger the network
> the more different people that one URL could refer to. People
> that were part of the network could log in, and once logged in
> communicate with others in their unlinkable channels.
>
> But this would not necessarily give users of the network privacy:
> simply because the network owner would be party to the conversation
> between any two people or any group of people. Conversations
> that do not wish the network owner to be party to the conversation
> cannot work within that framework.
>
> At the level of our planet it is clear that there will always be a
> huge number of agents that cannot for legal or other reasons allow one
> global network owner to be party to all their conversations. We are
> therefore socio-logically forced into the social web.
>
> B. Linkability and the Social Web
> ---------------------------------
>
> Secondly imagine that we now all have Freedom Boxes [4], where
> each of us has full control over the box, its software, and the
> data on it. (We take this extreme individualistic case to emphasise
> the contrast, not because we don't acknowledge the importance of
> many intermediate cases as useful) Now we want to create a
> distributed social network - the social web - where each of us can
> publish information and through access control rules limit who can
> access each resource. We would like to limit access to groups such
> as:
>
> - friends
> - friends of friends
> - family
> - business colleagues
> - ...
>
> Limit access means, that we need to determine when accessing a
> resource who is accessing it. For this we need a global identifier
> so that can check with the information available to us, if the
> referent of that identifier is indeed a member of one of those
> groups. We can't have a local identifier, for that would require
> that the person we were dealing with had an account on our private
> box - which will be extremely unlikely. We therefore need a way
> to identify - pseudonymously if need be - agents in a global space.
>
> Take the following example. Imagine you come to the WebID TPAC
> meeting [6] and I take a picture of everyone present. I would like
> to first restrict access to the picture to only those who
> were present. Clearly, if I only used local identifiers, I would have
> to get each one of you to first create an account on my machine. But
> how would I then know that the accounts created on the FBox correspond
> to the people who were at the meeting? It is much easier if we could
> create a group of the meeting's participants and publish it like this:
>
> http://www.w3.org/2005/Incubator/webid/team.n3
>
> Then I could drag and drop this group onto the access control panel
> of my FBox admin console to restrict access to only those members.
> This shows how, through linkability, I can restrict access and
> increase privacy: by making it possible to link identities in a
> distributed web. Furthermore, the above team.n3 resource could
> itself be protected by access control.
>
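To make the guard's check concrete, here is a minimal sketch in Python - assuming the group document lists its members with foaf:member and that an RDF library such as rdflib is available; the function names are illustrative only, not part of any spec:

    from rdflib import Graph, Namespace, URIRef

    FOAF = Namespace("http://xmlns.com/foaf/0.1/")
    GROUP = "http://www.w3.org/2005/Incubator/webid/team.n3"

    def is_member(webid, group_uri=GROUP):
        # Dereference the published group document and parse it as N3.
        g = Graph()
        g.parse(group_uri, format="n3")
        # True if anything in the document lists this WebID as a foaf:member.
        return (None, FOAF.member, URIRef(webid)) in g

    def may_read_photo(webid):
        # The guard's rule: only members of the published group get access.
        return is_member(webid)

The only thing the guard needs from the requester is a dereferenceable global identifier; the group document itself can in turn sit behind its own access control rules.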
>
> 2. Example of how Unlinkability can be used to spread FUD
> =========================================================
>
>
> So here I would like to show how fears about linkability can
> lead intelligent people like Harry Halpin to make some seemingly
> plausible arguments. Here is an example [2] of Harry arguing against
> the W3C WebID CG's http://webid.info/spec/
>
> [[
> Please look up "unlinkability" (which is why I kept referencing the
> aforementioned IETF doc [sic [3] below it is a draft] which I saw
> referenced earlier but whose main point seemed missed). Then explain
> how WebID provides unlinkability.
>
> Looking at the spec - to me, WebID doesn't as it still requires
> publishing your public key at a URI and then having the relying party go
> to your identity provider (i.e. your personal homepage in most cases,
> i.e. what it is that hosts your key) in order to verify your cert, which
> must provide that URI in the SAN in the cert. Thus, WebID does not
> provide unlinkability. There's some waving of hands about guards and
> access control, but that would not mediate the above point, as the HTTP
> GET to the URI for the key is enough to provide the "link".
>
> In comparison, BrowserID provides better privacy in terms of
> unlinkability by having the browser in between the identity provider and
> the relying party, so the relying party doesn't have to ping the
> identity provider for identity-related transactions. That definitely
> helps provide unlinkability in terms of the identity provider not
> needing to knowing every time the user goes to a relying party.
> ]]
>
> If I can rephrase, the point seems to be the following: a WebID
> verification requires that the site you are authenticating to (the
> Relying Party) verify your identity by dereferencing (let me add:
> anonymously) your profile page, which might publicly contain no more
> than your public key. The yellow box in the picture here:
>
> http://www.w3.org/2005/Incubator/webid/spec/#the-webid-protocol
>
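For concreteness, a minimal sketch of what the Relying Party does in that yellow box - assuming the profile publishes the key with the cert ontology's cert:modulus (hexadecimal) and cert:exponent, and using the Python cryptography and rdflib packages; error handling is omitted and the names are illustrative:

    from cryptography import x509
    from rdflib import Graph, Namespace, URIRef

    CERT = Namespace("http://www.w3.org/ns/auth/cert#")

    def webid_from_cert(pem_bytes):
        # Read the WebID URI from the certificate's Subject Alternative Name.
        cert = x509.load_pem_x509_certificate(pem_bytes)
        san = cert.extensions.get_extension_for_class(x509.SubjectAlternativeName)
        return san.value.get_values_for_type(x509.UniformResourceIdentifier)[0]

    def verify(pem_bytes):
        cert = x509.load_pem_x509_certificate(pem_bytes)
        webid = webid_from_cert(pem_bytes)
        pub = cert.public_key().public_numbers()   # RSA modulus n, exponent e

        g = Graph()
        g.parse(webid)                             # the dereference Harry objects to
        for key in g.objects(URIRef(webid), CERT.key):
            mod = g.value(key, CERT.modulus)
            exp = g.value(key, CERT.exponent)
            if mod is not None and exp is not None \
               and int(str(mod), 16) == pub.n and int(str(exp)) == pub.e:
                return webid                       # published key matches the cert
        return None                                # no matching key: not verified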
> The leakage of information then would not be towards the Relying
> Party - the site you are logging into - because that site is the one
> you just wilfully sent a proof of your identity to. The leakage of
> information is (drum roll) towards your profile page server! That
> server might discover (presumably through IP address sniffing) which
> sites you might be visiting.
>
> One reasonable answer to this problem would be for the Relying Party
> to fetch this information via Tor, which would remove the IP address
> sniffing problem.
>
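A minimal sketch of such a Tor-routed fetch, assuming a local Tor daemon listening on its usual SOCKS port 9050 and the requests library installed with SOCKS support (requests[socks]):

    import requests

    TOR_PROXY = {
        "http":  "socks5h://127.0.0.1:9050",   # socks5h: resolve DNS through Tor too
        "https": "socks5h://127.0.0.1:9050",
    }

    def fetch_profile(webid):
        # Fetch the profile over Tor so the profile server never sees
        # the Relying Party's own IP address.
        resp = requests.get(webid, proxies=TOR_PROXY,
                            headers={"Accept": "text/turtle, application/rdf+xml"})
        resp.raise_for_status()
        return resp.text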
> But let us develop the picture of who we are (potentially) losing
> information to. There are a number of profile server scenarios:
>
> A. Profile on My Freedom Box [4]
>
> The FreedomBox is a personal machine that I control, running
> free software that I can inspect. Here the only person who has
> access to the FreedomBox is me. So if I discover that I logged
> in somewhere, that should come as no surprise to me. I might even
> be interested in this information as a way of keeping track of
> where I logged in - and perhaps also of whether anything had been
> logging in somewhere AS me. (Sadly it looks like it might be
> difficult to get much good information there as things stand
> currently with WebID.)
>
> B. Profile on My Company/University Profile Server
>
> As a member of a company, I am part of a larger agency, namely the
> company or university which is backing my identity as a member of
> that institution. A profile on a university web site can mean a lot
> more than a profile on some social network, because it is in part
> backed by that institution. Of course, as a member of that
> institution, we are part of a larger agenthood, and so it is not
> clear that, in that context, the institution and I are all that
> different. This is also why it is often legally required that one
> not use one's company identity for private business.
>
> C. A Social Network ( Google+, Facebook, ... )
>
> It is a bit odd that people who are part of these networks, and who
> are "liking" pretty much everything on the web in a way that is
> clearly visible to - and encouraged to be visible by - those
> networks, would have an issue with those sites knowing, perhaps (if
> the RP does not use Tor or a proxy), where they are logging in. This
> is certainly not how OAuth, OpenID, or the other protocols now in
> extremely wide use have been developed and are used by those sites.
>
> If we then look at BrowserID [7], now Mozilla Persona, the only real
> difference from WebID (apart from it not being decentralised until
> crypto in the browser really works) is that the certificate is
> short-lived - renewed once a day - and that relying parties verify
> its signature. Nor, of course, can the relying party get many
> interesting attributes this way; and if it did, the whole
> unlinkability argument would collapse immediately.
>
>
> 3. Conclusion
> =============
>
> Talking about privacy is like talking about security. It is a
> breeding ground for paranoia, which tends to make it difficult to
> notice important solutions to the problems we actually have.
> Linkability and unlinkability as defined in
> draft-hansen-privacy-terminology-03 [3] come with complicated
> definitions, and are, I suppose, meant to be applied carefully. But
> the choice of "unlinkable" as a word tends to create rhetorical
> shortcuts that are apt to hide the real problems of privacy. By
> trying too hard to make things unlinkable we move inevitably towards
> a centralised world where all data is in big brother's hands.
>
> I want to argue that we should all *Like* Linkability. We should
> do so aware that we can protect ourselves with access control (and
> Tor), and realise that our linkable profiles need not reveal anything
> more than anyone knew beforehand.
>
> To create a Social Web we need a Linkable (and likeable) social web.
> We may need other technologies for running Wikileaks-type set-ups,
> but they clearly cannot be the basis for an architecture of privacy -
> even if they are an important element in the political landscape.
>
> Henry
>
> [0] this is from a discussion with Ben Laurie
>
> http://lists.w3.org/Archives/Public/public-webid/2012Oct/att-0022/privacy-def-1.pdf
> [1] Oshani's Usage Restriction paper
> http://dig.csail.mit.edu/2011/Papers/IEEE-Policy-httpa/paper.pdf
> [2] http://lists.w3.org/Archives/Public/public-identity/2012Oct/0036.html
> [3] https://tools.ietf.org/html/draft-hansen-privacy-terminology-03
> [4] http://www.youtube.com/watch?v=SzW25QTVWsE
> [6] http://www.w3.org/2012/10/TPAC/
> [7] A Comparison between BrowserId and WebId
>
> http://security.stackexchange.com/questions/5406/what-are-the-main-advantages-and-disadvantages-of-webid-compared-to-browserid
>
>
> Social Web Architect
> http://bblfish.net/
>
>