Discussion:
Creating a Global User-level CA/Trust Infrastructure for Secure Messaging
Anders Rundgren
2008-11-22 10:12:36 UTC
Permalink
The following is related to the S/MIME discussions.

One of the many [unsolvable] problems with S/MIME is the establishment of a globally working user-level PKI infrastructure.

Although not perfect, I think it is fair to say that a globally working domain-name-level PKI infrastructure actually already exists.

If we (security experts) want to create anything that could match closed networks such as Skype, with 100M+ users enjoying full end-to-end security, I think we need to be a bit pragmatic and not hope that users will be extremely interested in certificates, or that the UN will provide us with a universal root certificate.

The following proposal breaks RFC 3280 validation rules, which is bad, but the idea is to use this scheme only in a special-purpose messaging protocol, not "infecting" NSS in any way. Here it goes (please don't throw up, this is completely serious)...

Each domain (host) has a "pseudo-CA" that uses a commercial-grade SSL certificate as a CA certificate. Certificates created by such a CA should have a specific DN format (in order to be valid), where the host name of course must be a core component (you can only certify things in your own domain).
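The issuance step could be sketched roughly as below, using Python's `cryptography` package. Everything here is an illustrative assumption (the host name, the DN convention of embedding the host, the stand-in key pairs); the post does not define an exact format, and a leaf SSL certificate signing another certificate is precisely the RFC 3280 violation acknowledged above.

```python
# Sketch of the proposed "pseudo-CA": a domain's SSL key signs end-user
# certificates whose DN embeds the host name. The DN convention and all
# names are illustrative assumptions, not part of any standard; the
# scheme deliberately ignores BasicConstraints, as the post notes.
from datetime import datetime, timedelta

from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

HOST = "example.com"  # the domain the pseudo-CA is authoritative for

# Stand-in for the domain's commercial SSL certificate key pair.
ssl_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
ssl_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, HOST)])

# The end user's own key pair (would normally be generated client-side).
user_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Assumed DN convention: the host name is a mandatory component, so the
# pseudo-CA can only certify identities inside its own domain.
user_name = x509.Name([
    x509.NameAttribute(NameOID.COMMON_NAME, "alice@" + HOST),
    x509.NameAttribute(NameOID.DOMAIN_COMPONENT, HOST),
])

now = datetime.utcnow()
user_cert = (
    x509.CertificateBuilder()
    .subject_name(user_name)
    .issuer_name(ssl_name)              # issuer is the SSL (server) cert's name
    .public_key(user_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + timedelta(days=365))
    .sign(ssl_key, hashes.SHA256())     # signed by the SSL key, not a real CA key
)

print(user_cert.subject.rfc4514_string())
```

A validator for this scheme would then check that the subject name sits inside the issuing host's domain, rather than running standard chain validation.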

Based on such a trust infrastructure, an online secure messaging system should be able to achieve Skype-level scalability while still being fully distributed. I haven't really gotten down to the nitty-gritty with the messaging itself, because a system like this obviously requires a bunch of other hot shots as well :-)

Enrolment issues? Skype does this without the user having to know what a certificate is.

Applications include all kinds of interactive communication, with mobile phones as a really interesting target, unless it gets outlawed.

Anders Rundgren
Nelson B Bolyard
2008-11-22 11:11:57 UTC
Permalink
Anders Rundgren wrote, On 2008-11-22 02:12:
> The following is related to the S/MIME discussions.

Anders, here are your choices:
You may either have
a) encryption using authenticated keys or
b) encryption using unauthenticated keys.

Certificates are used for authenticated encryption. If you don't want
authenticated encryption, you don't use certificates. It's that simple.

The idea of producing phony certificates, so called "self signed"
certificates, or certs from "pseudo CAs" is an attempt to force
unauthenticated keys into a system whose ENTIRE and SOLE purpose is to
authenticate keys, and avoid the use of unauthenticated keys.

If you want to design an application protocol that uses purely
unauthenticated keys (and it appears that you do), then design it to not
use certificates at all, but to simply use unauthenticated keys. Yes,
there are application protocols out there that do that. They're vulnerable
to Man in the middle attacks, but that's the price you pay for choosing to
use unauthenticated keys. SSL supports the use of unauthenticated "bare"
keys, without any certs. If that's what you want, go for it.

But first consider the capabilities of authenticated keys carefully.
There's absolutely NOTHING that says that in every application protocol,
key authentication must be tied to DNS names. The decision to bind keys
to DNS names is a decision made by the designers of the https application
protocol, and it fits their model, where contact is initiated based on
a DNS name. But for other applications, where contact is NOT initiated
based on a DNS name, binding keys to DNS names is meaningless. In
other applications, which identify end points with identities in other
spaces besides DNS names, authentication of keys requires binding them
to whatever identities are used by that application.

That's why authenticated email encryption protocols don't bind to
DNS names, they bind to mailbox names (email addresses).

There's a certain world-wide instant messaging service that offers file
transfer capabilities. It has the ability to offer authenticated and
encrypted IM and authenticated and encrypted file transfer. It uses
S/MIME for the IM and SSL for the file transfer. Every one of its clients
offers both. It doesn't require that certs have DNS names in them, because
users identify each other through names that are not DNS names and are not
mailbox addresses.

The certificates that the users use to authenticate each other for the
S/MIME based IM service are the same certificates used for the SSL file
transfer. The reasoning is that when you've determined the certificate
of the party to whom you're IMing, and you want to do file transfer,
you know that you want to transfer your files to the same party to
whom you've been IMing. So, the very same cert used to identify the
party for IM is the cert used to identify that party's SSL server.

When the SSL client connects to the peer's SSL server for file transfer,
it checks to see that the cert is the same cert used in the IM. It does
not check for a DNS name or an email address. It checks that the cert is
the very same cert. This is SSL peer-to-peer.
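The "very same cert" check described above is essentially certificate pinning by exact match. A minimal stdlib sketch, with toy byte strings standing in for real DER-encoded certificates (all names here are illustrative):

```python
# Minimal sketch of the "very same cert" check: instead of matching a DNS
# name or email address, the client pins the exact certificate seen during
# the IM session and requires the SSL peer to present identical DER bytes.
import hashlib
import hmac

def cert_fingerprint(der_bytes: bytes) -> str:
    """SHA-256 fingerprint of a certificate's DER encoding."""
    return hashlib.sha256(der_bytes).hexdigest()

def is_same_cert(pinned_der: bytes, presented_der: bytes) -> bool:
    """True only if the SSL peer presented the exact cert pinned from the IM."""
    # compare_digest gives a constant-time comparison.
    return hmac.compare_digest(
        cert_fingerprint(pinned_der), cert_fingerprint(presented_der)
    )

# Toy stand-ins for DER-encoded certificates:
im_cert = b"\x30\x82...alice"    # cert learned during the S/MIME IM exchange
ssl_cert = b"\x30\x82...alice"   # cert presented by the peer's SSL server

print(is_same_cert(im_cert, ssl_cert))   # True: same party, allow transfer
print(is_same_cert(im_cert, b"other"))   # False: reject the file transfer
```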

In this system, there are servers that act as relays between the IM
endpoints, and also between the SSL endpoints (if necessary or desired).
Those servers don't decrypt anything (they can't). They just pass the
traffic through from one end to the other. All encryption is end to
end. If one peer is able to receive incoming connections (many are not)
then one client can connect directly to the other over the internet,
bypassing the IM servers for file transfer, but if the receiving party
cannot receive incoming connections, it connects to the IM servers,
and the file transfer takes place through the IM server. It's still
end to end encryption. The IM servers don't decrypt anything because
a) they don't have the keys, and b) it would be way too slow anyway.

The clients all require certs from real CAs. Self-signed certs are not
an option, but there are no predefined requirements on what cert names
must contain. The UI clearly presents the peer's cert name to the user
and the user must decide if that is a name with whom he wants to IM (or file
transfer) or not.

This system has been around for years. The clients all use NSS for all
the S/MIME IM messages and for all the SSL file transfers. It works with
any cert issued by any of the CAs in Mozilla's list. I use it every day.
Chances are good that you do too. It's the world's largest IM network.
If you have your own cert, such as (say) an email cert, and would like
to try secure IM over AOL's instant messenger network, write to me off
list.

Please don't waste any more time talking about "pseudo CAs". That's
pointless. If your application wants unauthenticated keys, then use
them and don't bother with certs. My long standing objection to
self signed certs is not an objection to unauthenticated encryption
(although I have no use for unauthenticated encryption), but rather is
an objection to trying to force the system, whose sole purpose is to
prevent/avoid unauthenticated keys, into being used as a way to distribute
unauthenticated keys.

Did you know there's even an RFC proposing the use of opportunistic
encryption over http (not https)? It can use either authenticated or
unauthenticated keys. No publicly offered clients or servers in the world
(known to me) implement it, and it has some real problems with proxies
because proxies must encrypt and decrypt at every stage (hello MITM) but
hey, that's no objection to people who want unauthenticated encryption.
Anders Rundgren
2008-11-22 14:52:41 UTC
Permalink
Nelson,
Thank you for your elaborate answer.

Naturally there is no problem to solve if everybody is connected to one of a handful of IM providers. The purpose of my proposal was rather to investigate the possibility that each organization or ISP runs its own secure messaging server, in about the same way they run mail servers today, and without any particular bias towards consumers, citizens, or employees. To achieve that you need a scalable trust infrastructure; otherwise you end up in the situation we have with secure email, which has been reduced to a community-scale security scheme.

Since there are already hundreds of thousands of SSL certificates out there, each one could serve as a poor man's CA for the domain it is *authoritative* for.

How these CAs enroll users would follow existing practices, which are completely up to the policy of the domain owner. AOL would presumably use email round-trips, while your employer would likely base enrolment on Active Directory.

That is, I'm definitely talking about authenticated keys, since a "pseudo-CA" is indeed minting end-user certificates from its associated SSL certificate. Yes, BasicConstraints and KeyUsage are ignored :-)

DNS certainly does not represent the only naming system there is, but as a foundation for a globally interoperable messaging system, it is hard to come up with anything better.

Regarding message formats, I haven't gone to that level yet. If there are existing RFCs that would work out of the box, that would be fantastic! I do have a feeling, though, that some "tweaks" in interpretation would be necessary, since the described scheme is incompatible with existing PKI validation schemes.

Anders





Eddy Nigg
2008-11-22 12:03:08 UTC
Permalink
On 11/22/2008 12:12 PM, Anders Rundgren:
> Enrolment issues? Skype does this without the user having to know what a
> certificate is.

LOL! And nobody knows what those keys are, nor whether they're authentic and who
else can listen and decrypt. Who controls what, exactly? Does the user
have control over his key(s)? I don't... do you?

Skype's encryption is security theater at best! :-) Did Skype ever
disclose what it does, what their infrastructure looks like, what kind
of encryption is in force and, most importantly, why the user doesn't
control his supposedly own private key(s)?

I suggest treating Skype's encryption, or any similar scheme, as plain text.


--
Regards

Signer: Eddy Nigg, StartCom Ltd.
Jabber: ***@startcom.org
Blog: https://blog.startcom.org
Ian G
2008-11-22 15:39:18 UTC
Permalink
Anders Rundgren wrote:
> The following is related to the S/MIME discussions.
...


> If we (security experts) want to create anything that could match closed
> networks such as Skype, having 100M+ users enjoying full
> end-2-end-security, I think we need to be a bit pragmatic and not hoping
> that users should be extremely interested in certificates, or that the
> UN should provide us with a universal root certificate.


I see this as an interesting question. There are pros and cons. First
con: why would we want to do that? Just use Skype. Or, Nelson talked
about AIM having some form of crypto. Jabber also has something.

In contrast to that, one of the things that Mozilla Messaging should be
looking at is exactly that. The comparisons between email and chat are
strong if not perfect, and while old things like email aren't likely to
die any time soon (telegrams just got shut down last year!), all new
interesting work is being done in the peer-to-peer domain.

So an obvious thing is to add chat to Tbird. How to do this? An
interesting question. However, this is a business-level requirement,
not a user-level or tech-level comment.

...
> Each domain (host) have a "pseudo-CA" using a commercial-grade SSL
> certificate as a CA certificate. Certificates created by such a CA
> should have a specific DN format (in order to be valid), where the
> host-name of course must be a core component (you can only certify
> things in your own domain).


The problem I see here is that you are (all) starting from a tech point of view.
Bottom up. What's the point of that? Granted, it will work in theory,
but the market has shown that successful things start from a top-down,
market focus; those win out in the end.

So, I would suggest defining what the chat system is that users of, say,
Tbird would want. (Or Firefox.) Then, once you've figured that out,
start meeting those requirements.

Alternatively, if you do want to take a tool -- "I have a cert" -- and
wish to thrust it down people's throats, then you are reducing your
chances to essentially lottery proportions. You have to be right about
things you aren't looking at and don't know exist, and users have no
difficulty in ditching tools that are too cumbersome.

> Based on such a trust infrastructure, an on-line-based secure messaging
> system should be able to achieve Skype-level scalability while still
> being fully distributed. I haven't really gotten down to the
> nitty-gritty with the messaging itself, because a system like
> this obviously requires a bunch of other hot-shots as well :-)


So from this, I gather you want: scalability + distribution. Do you
want no center(s) at all?


> Enrolment issues? Skype does this without the user having to know what
> a certificate is.


I sense an easy enrolment process. OK, I agree with that.

> Applications include all kinds of interactive communication with mobile
> phones as a really interesting target unless it gets outlawed.


Mobile phones -> strange messaging formats like SMS. Avoid Internet
assumptions like TCP/IP; make it strictly messaging. Well, OK, any chat
system should have done that anyway. But this is getting too deep.


Do you want file-sharing? Do you want video? These are both common
with modern day chat, and they strain the architecture depending on what
choices you made. Do you want integration into other things? E.g., if
you ended up piggybacking on some p2p networks, you might end up with
file-sharing and backup possibilities.

Do you want to have:

no originated authentication (leave it to the users)
an upgrade path to third party auth (aka CAs)
third party auth from the start?

It depends on your user base I would guess. If we are talking ordinary
mom & pop, they are happy with whatever works immediately, so the first.
If we are talking corporates, sometimes they want authentication from
a third party, and sometimes they want it from a first party (themselves).

just some thoughts!

iang
Anders Rundgren
2008-11-22 16:33:06 UTC
Permalink
Ian,
I hope you don't mind but I limit my response to a single core topic.

<<snip>>

>So from this, I gather you want: scalability + distribution.

Absolutely.

>Do you want no center(s) at all?

I want each organization/domain entity that can afford an SSL certificate to
become a virtual CA and run their own secure messaging center. Based on
the SSL certificate they can use whatever issuance policies they feel comfortable
with, as long as they keep inside their "PKI sandbox", which is (by the not
yet defined application) constrained regarding subject naming schemes.

This is, BTW, how I believe secure e-mail should have been from the beginning:
secured at the domain level. Although that doesn't technically stop people from
sending out viruses, spam, or similar, it at least makes it much less attractive, because
the domain owner would terminate you if it got too many complaints. Currently
ISPs typically do not even authenticate SMTP requests, since there is no point:
you can "reuse" whatever domain you want and most of the time the mails
get through.

<<snip>>

anders
Ian G
2008-11-22 16:54:23 UTC
Permalink
Anders Rundgren wrote:
> Ian,
> I hope you don't mind but I limit my response to a single core topic.

:)

>> So from this, I gather you want: scalability + distribution.
>
> Absolutely.
>
>> Do you want no center(s) at all?
>
> I want each organization/domain entity that can afford an SSL certificate to
> become a virtual CA and run their own secure messaging center. Based on
> the SSL certificate they can use whatever issuance policies they feel comfortable
> with, as long as they keep inside their "PKI sandbox", which is (by the not
> yet defined application) constrained regarding subject naming schemes.


OK, so if we intersect that with my interests (how to add chat to Tbird)
then the idea might be to write:

a Tbird plugin CA with some limited functionality:
receive requests with keys over email
issue cert over key from a superior cert, if in domain
distro the cert (over email?) to the identities

a Tbird plugin chat client that:
creates a key
sends and receives the request/cert
sends out and receives chat messages.

Hmmm... Needs work :) I wonder if we wouldn't just be better off doing
something like writing a chat client that creates and uses its keys, but
leaves them "unauthenticated"? Trying to get all that theoretical
authentication going seems beyond the effort most people will expend in
order to just chat.


> This is, BTW, how I believe secure e-mail should have been from the beginning:
> secured at the domain level. Although that doesn't technically stop people from
> sending out viruses, spam, or similar, it at least makes it much less attractive, because
> the domain owner would terminate you if it got too many complaints. Currently
> ISPs typically do not even authenticate SMTP requests, since there is no point:
> you can "reuse" whatever domain you want and most of the time the mails
> get through.


How would the domain owner terminate you? The problem with spam is that
even if only a few still get through, it works. It would seem that this
idea would rest on every other mail server in the world behaving nicely,
which isn't reality in the mail world.

For my vision of how secure e-mail could work: Tbird creates a key,
self-signs it, turns on digital signing + key distribution always, and starts
sending encrypted email as soon as it has your key.

Of course, this is unauthenticated. So an additional optional extra for
the concerned user is to click a button, select a CA, and go off and
turn the self-signed cert into a CA-signed cert. If so desired.
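The "create a key and self-sign it" step can be sketched with Python's `cryptography` package. The email address and validity period are illustrative placeholders; binding the key to a mailbox name via an rfc822Name SAN follows the email-cert convention mentioned earlier in the thread.

```python
# Sketch of the self-signing step: issuer and subject are the same name,
# and the cert is signed with its own key. All names are placeholders.
from datetime import datetime, timedelta

from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.EMAIL_ADDRESS, "user@example.com")])

now = datetime.utcnow()
cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)                 # self-signed: issuer == subject
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + timedelta(days=365))
    .add_extension(                    # bind the key to the mailbox name
        x509.SubjectAlternativeName([x509.RFC822Name("user@example.com")]),
        critical=False,
    )
    .sign(key, hashes.SHA256())
)

print(cert.issuer == cert.subject)  # True for a self-signed cert
```

The later "upgrade" Ian describes would replace this cert with one over the same key pair and name, signed by a CA instead of by itself.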

Just my thoughts.

iang
Anders Rundgren
2008-11-22 17:29:03 UTC
Permalink
Ian,

For me at least secure messaging means authenticated messaging as well.
Here is the current Firefox solution to certificate distribution.
http://demo.webpki.org/mozkeygen

I don't know what Eddy and Jabber intend to do, but it must be something similar.

Anders

Ian G
2008-11-23 00:22:57 UTC
Permalink
Anders Rundgren wrote:
> Ian,
>
> For me at least secure messaging means authenticated messaging as well.


Sure, your choice. For me, security is an overall economic equation.
Sometimes this suggests security as unauthenticated, encrypted
messaging, sometimes not :)


> Here is the current Firefox solution to certificate distribution.
> http://demo.webpki.org/mozkeygen

OK, that's nice! How does it authenticate from the browser to the CA? I guess
since the JavaScript is downloaded, it can include any cert it needs to talk
to the server end? Is there a protocol from the JavaScript to the CA?

Or is there no need for comms-auth, in that the JavaScript can check that
the signature over the new cert is valid, as expected?

I notice the JavaScript doesn't insert the root key into the Authorities
list. Is that a choice, omission, bug, or anti-bug?

iang
Eddy Nigg
2008-11-22 18:31:58 UTC
Permalink
On 11/22/2008 07:29 PM, Anders Rundgren:
> Ian,
>
> For me at least secure messaging means authenticated messaging as well.
> Here is the current Firefox solution to certificate distribution.
> http://demo.webpki.org/mozkeygen

This serves only for authentication. Hopefully you aren't including
email signing and encryption in those certificates.

>
> I don't know what Eddy and Jabber intends to do but it must be something similar.
>

XMPP certificates provided by the XMPP foundation CA will always be
verified in some form (validating the address space under the control of
the user). That would be rather an XMPP message ping as opposed to an
email ping.

--
Regards

Signer: Eddy Nigg, StartCom Ltd.
Jabber: ***@startcom.org
Blog: https://blog.startcom.org
Nelson B Bolyard
2008-11-22 21:30:52 UTC
Permalink
Anders Rundgren wrote, On 2008-11-22 08:33:

> I want each organization/domain entity that can afford an SSL certificate
> to become a virtual CA and run their own secure messaging center.

Why SSL certs? Why not email certs?

Is it because you think that a secured IM service would be based on SSL?

CMS is a MUCH better choice for secure IM than SSL, for many reasons.

Here's just one of them: CMS is amenable to store-and-forward
communications; SSL is not. If the peer to whom you're trying to send
the encrypted IM is not online at the time you send it, it's trivial to
turn it into an S/MIME email and mail it to him, so he gets it when he
goes online.
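The store-and-forward property can be illustrated with the PKCS7 (CMS) builder in Python's `cryptography` package. This sketch shows CMS *signing* only; an enveloped (encrypted) message would be produced analogously by full CMS toolkits. The throwaway self-signed cert and the message body are stand-ins, not part of Nelson's description.

```python
# Why CMS suits store-and-forward: the result is a standalone S/MIME blob
# that can sit in a mail spool indefinitely and be processed whenever the
# peer comes back online, with no live session required (unlike SSL).
from datetime import datetime, timedelta

from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.primitives.serialization import Encoding, pkcs7

# A throwaway self-signed cert standing in for the sender's cert.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "sender")])
now = datetime.utcnow()
cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + timedelta(days=1))
    .sign(key, hashes.SHA256())
)

message = b"offline IM: delivered whenever the peer comes back online"

# Wrap the message as a self-contained CMS/S-MIME object.
smime_blob = (
    pkcs7.PKCS7SignatureBuilder()
    .set_data(message)
    .add_signer(cert, key, hashes.SHA256())
    .sign(Encoding.SMIME, [])
)

print(len(smime_blob) > 0)
```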

> Based on the SSL certificate they can use whatever issuance policies they
> feel comfortable with as long as they keep inside of their "PKI sandbox"
> which is (by the not yet defined application), constrained regarding
> subject naming-schemes.
>
> This is BTW, how I believe secure e-mail should have been from the
> beginning; secured at the domain-level. Although that doesn't
> technically stop people from sending out viruses, spam, or similar, it at
> least makes it much less attractive because the domain owner would
> terminate you if it get too many complaints.

What about all the inherent risks of having an ISP be a CA?
An ISP is in a uniquely good position to be an MITM, especially if
they issue the certs used to authenticate keys for their subscribers.

The IM service I mentioned before allows users to use certs from any CA.
Each user's client decides which certs are acceptable, not the service.
That facilitates communication between people world wide. Out of the box,
those clients trust all of the CAs known to NSS. They don't supply any
UI with which to manage the set of trusted CA certs, but their cert DBs
are ordinary NSS cert DBs and anyone who knows how to edit a cert DB
with certutil or even Firefox can ...

BTW, I'm not trying to promote a particular service. I just happen to
think they did a really good job, and the way they secured their IMs
and file transfer seems exemplary. I'd encourage any other service to
emulate that aspect of their service.
Anders Rundgren
2008-11-23 17:15:28 UTC
Permalink
Nelson B Bolyard wrote.
>> I want each organization/domain entity that can afford an SSL certificate
>> to become a virtual CA and run their own secure messaging center.

>Why SSL certs? why not email certs?

Could it be the fact that the SSL PKI exists?

Email certs are a nice idea that requires that organizations buy into something
like VeriSign's OnSite concept or into completely bizarre stuff like the US
FBCA ( http://www.cio.gov/fpkipa ). Only governments have proved to be
interested in becoming a part of a PKI trust network. The concepts they
work with are appallingly stupid. NASA, for instance, uses an Aerospace PKI
for its suppliers, ignoring the fact that 90% of all invoices are from suppliers
that are not in aerospace (catering, transport, office supplies, etc.).
More "fun": http://www.imc.org/ietf-pkix/mail-archive/msg05024.html

That is, if success is irrelevant you have many choices. If OTOH success
is a core component, the number of options is pretty limited.

The choice is yours!

>The IM service I mentioned before allows users to use certs from any CA.
>Each user's client decide which certs are acceptable, not the service.

Oops! *My* target is users who do not know what a certificate is!

Then the rest becomes rather unimportant since it is about comparing
apples and oranges and we already know that strawberries are better :-)

I believe Eddy's Jabber stuff is rather close to what I propose, since
it indeed gives the service an issuing capability, unless I have misread
the docs.

Anders
Nelson B Bolyard
2008-11-23 19:33:35 UTC
Permalink
Anders Rundgren wrote, On 2008-11-23 09:15:
> Nelson B Bolyard wrote.
>>> I want each organization/domain entity that can afford an SSL certificate
>>> to become a virtual CA and run their own secure messaging center.
>
>> Why SSL certs? Why not email certs?
>
> Could it be the fact that the SSL PKI exists?

So does email PKI. I use it every day.

> Email certs is a nice idea that requires that organizations buy into something
> like VeriSign's OnSite concept or into completely bizarre stuff like the US
> FBCA

Uh, no. Nearly all of the CAs in Mozilla's root list offer email certs.
You can get one from StartCom for free.

>> The IM service I mentioned before allows users to use certs from any CA.
>> Each user's client decides which certs are acceptable, not the service.
>
> Oops! *My* target is users who do not know what a certificate is!

That's fine, since the client trusts all of Mozilla's trusted roots by
default, so the user doesn't need to take any action to trust a reasonable
set of CAs. The point is that the user CAN if he so chooses.

Cert issuance could be done as part of registration for the service.

You just don't want the CA to be controlled by the ISP or you're begging
for MITM. Numerous large ISPs are now making no secret about their MITM
intentions. Google for phorm or nebuad.
Nelson B Bolyard
2008-11-22 21:01:44 UTC
Permalink
Ian G wrote, On 2008-11-22 07:39:

> So an obvious thing is to add chat to Tbird. How to do this?

Are you aware of chatzilla? It's been around for a long time.
Protocols and architecture are defined in RFCs 2810-2813. Chatzilla
interoperates with many other chat clients that follow those RFCs.

Mozilla runs an Internet Relay Chat server for use by chatzilla users.
It's widely and heavily used by mozilla developers and other community
members. I think you'd have a difficult time convincing mozilla they need a
SECOND chat client/service.
Ian G
2008-11-24 15:58:40 UTC
Permalink
Nelson B Bolyard wrote:
> Ian G wrote, On 2008-11-22 07:39:
>
>> So an obvious thing is to add chat to Tbird. How to do this?
>
> Are you aware of chatzilla? It's been around for a long time.
> Protocols and architecture are defined in RFCs 2810-2813. Chatzilla
> interoperates with many other chat clients that follow those RFCs.


No, I wasn't aware. I'm guessing chatzilla is an IRC client only? So
it needs a central server, and it is only encrypted client-server? (To
be frank, I don't use IRC that much because it doesn't appeal to a wide
base of the people I communicate with. I'm guessing that is because the
old client-server model is less scalable, but that's a long debate.)


> Mozilla runs an Internet Relay Chat server for use by chatzilla users.
> It's widely and heavily used by mozilla developers and other community
> members. I think you'd have a difficult time convincing mozilla they need a
> SECOND chat client/service.


I'm not sure about those statements. I thought Mozilla's objectives were
to support end-users with what they want, not developers? Are you
suggesting that the Mozilla developers think users want IRC? I've never
seen that; this would be new to me. IRC seems to be a developer-heavy
community.

Also, your proposal seems to suggest that you place "standards chat"
above "secure chat". It doesn't take much elegance of argumentation to
show that these are not really the same thing, nor can they easily
align. E.g., standards bodies haven't updated their security model in a
decade or more, but the attackers have (updated their respective
models). (The users, too.)

This dilemma is most easily shown in the form of Col. Boyd's OODA loop

http://en.wikipedia.org/wiki/OODA_Loop



iang
Michael Ströder
2008-11-25 20:52:56 UTC
Permalink
Anders Rundgren wrote:
> I want each organization/domain entity that can afford an SSL certificate to
> become a virtual CA and run their own secure messaging center. Based on
> the SSL certificate they can use whatever issuance policies they feel comfortable
> with as long as they keep inside of their "PKI sandbox", which is constrained
> (by the not-yet-defined application) regarding subject naming schemes.
>
> This is BTW, how I believe secure e-mail should have been from the beginning;
> secured at the domain-level.

Anders, that's not the real problem with S/MIME or PGP.
Encrypting/signing is simply not a business requirement.

One of my customers has a special CA for issuing S/MIME certs to its own
internal end users. The end users are always surprised how easily they can
get an S/MIME cert, within a minute. But the external partners are not
obliged to encrypt e-mail and they are not willing to do the necessary
work on their side. I already tried this 10 years ago with a PKI which
would have issued certs to external partners. They were not willing to
do their part even when it was made fairly simple.

=> Encrypting/signing must be made a business requirement in contracts.
That's the whole point. And there's no technical solution for it.

Ciao, Michael.
Anders Rundgren
2008-11-26 08:27:46 UTC
Permalink
Michael,

I think we are looking for different things.

I'm looking for a system that offers authenticated and confidential
messaging, which would among other things include mobile phone voice messaging.
If such a system would require users to trust certificates and stuff, it will fail.

Currently our only alternative is the trusted provider concept. I'm interested
in making the trusted provider something other than Vodafone; it could
be your employer or Google, and for the really paranoid, a server you run
yourself.

It seems that Eddy's Jabber system is an even lighter alternative because
it doesn't seem to require end-users "trusting" anything other than their provider.

Anders

Ian G
2008-11-26 15:30:07 UTC
Permalink
Anders Rundgren wrote:

> I'm looking for a system that offers authenticated and confidential
> messaging, which would among other things include mobile phone voice messaging.
> If such a system would require users to trust certificates and stuff, it will fail.
>
> Our current only alternative is the trusted provider concept.

Well, I don't see that. PGP and Skype both offer authenticated +
confidential messages, without the "certificate" side of things. They
do it conceptually by tightly binding the keys to the user, and having
each user authenticate their handles directly to each other.

(Ignoring the implementation details ... obviously the different actual
networks work better or worse.)

An old military secure-phone trick was to read off the numbers displayed
on the phone. This uses a different channel -- voice -- to
authenticate the numbers, which then authenticate the comsec.

A problem with this sort of work is that people take the security model
from a textbook and then try to implement it without complaint. This
doesn't really work, as the target audience generally has a different
model of security, sometimes mildly different, sometimes wildly
different. This syndrome is sometimes encapsulated in WYTM? -- What's
your threat model?

iang
Michael Ströder
2008-11-26 17:09:20 UTC
Permalink
Ian G wrote:
> PGP and Skype both offer authenticated +
> confidential messages, without the "certificate" side of things. They
> do it conceptually by tightly binding the keys to the user, and having
> each user authenticate their handles directly to each other.

Well, there has to be a persistent secret in the game - likely the
user's password, which is used as a shared secret. Kerberos works
that way. The caveat is that it needs on-line network access to a
central infrastructure. X.509 PKI does not require this.
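The shared-secret model can be sketched as a challenge-response over a
password-derived key. This is a hypothetical illustration of the principle,
not the actual Kerberos exchange (which uses tickets and a proper
string-to-key function):

```python
import hashlib
import hmac

def derive_key(password: str) -> bytes:
    # Client and server both derive the shared key from the password.
    # A plain hash stands in for Kerberos's string-to-key function.
    return hashlib.sha256(password.encode()).digest()

def respond(key: bytes, challenge: bytes) -> bytes:
    # Prove knowledge of the shared secret without transmitting it.
    return hmac.new(key, challenge, hashlib.sha256).digest()

# The server issues a fresh challenge; the client answers with an
# HMAC over it, which the server can verify from its own copy.
challenge = b"fresh-server-nonce"
client_answer = respond(derive_key("secret"), challenge)
server_expect = respond(derive_key("secret"), challenge)
assert hmac.compare_digest(client_answer, server_expect)
```

Note how the verification requires the server to hold the same secret,
which is exactly the on-line central infrastructure mentioned above.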

Ciao, Michael.
Eddy Nigg
2008-11-26 19:40:18 UTC
Permalink
On 11/26/2008 05:30 PM, Ian G:
> Well, I don't see that. PGP and Skype both offer authenticated +
> confidential messages, without the "certificate" side of things.

LOL, and how exactly? Or better, how can I validate that? Especially in
the case of Skype, we don't even know where those keys reside, whether they
change when using a different client installation, how they are
distributed, which encryption is implemented, and how the keys are
exchanged. At best it's security by obscurity.

--
Regards

Signer: Eddy Nigg, StartCom Ltd.
Jabber: ***@startcom.org
Blog: https://blog.startcom.org
Ian G
2008-11-27 11:22:05 UTC
Permalink
Eddy Nigg wrote:
> On 11/26/2008 05:30 PM, Ian G:
>> Well, I don't see that. PGP and Skype both offer authenticated +
>> confidential messages, without the "certificate" side of things.
>
> LOL, and how exactly? Or better, how can I validate that? Especially in
> the case of Skype, we don't even know where those keys reside, whether they
> change when using a different client installation, how they are
> distributed, which encryption is implemented, and how the keys are
> exchanged. At best it's security by obscurity.


I guess I forgot to mention "ignoring implementation details..." because
we are talking about models not implementations.

Specifically, in the case of Skype, handles are bound tightly to keys,
and users transfer handles between each other.

How do we know whether the keys are managed properly? Good question!
Well, it's a closed architecture & codebase, but it has been audited, so
it bears comparison to any CA which operates a closed/audited procedure.
We rely on the audit, and we trust the business won't do anything
drastically against the interests of the users.

Back to the model: it can be done, all you have to do is replicate
Skype in open source, if that's your fancy. (Whether this answers
Anders' requirements cannot be answered, because we really don't have
more than a glimmering of them.)

iang
Eddy Nigg
2008-11-27 12:10:50 UTC
Permalink
On 11/27/2008 01:22 PM, Ian G:
>
> How do we know whether the keys are managed properly? Good question!
> Well, it's a closed architecture & codebase, but it has been audited, so
> it bears comparison to any CA which operates a closed/audited procedure.

Bullshit! That's about the same as CAs keeping copies of the users'
private keys... such nonsense!


--
Regards

Signer: Eddy Nigg, StartCom Ltd.
Jabber: ***@startcom.org
Blog: https://blog.startcom.org
Ian G
2008-11-29 11:23:19 UTC
Permalink
Eddy Nigg wrote:
> On 11/27/2008 01:22 PM, Ian G:
>>
>> How do we know whether the keys are managed properly? Good question!
>> Well, it's a closed architecture & codebase, but it has been audited, so
>> it bears comparison to any CA which operates a closed/audited procedure.
>
> Bullshit! That's about the same as CAs keeping copies of the users
> private keys...such a nonsense!


Which they are indeed permitted to do, as long as they state that in
their procedures, and their auditor agrees that they have met the criteria.

Eddy, other than your need to be colourful, what was the point you were
trying to make?

iang
Eddy Nigg
2008-11-29 12:37:52 UTC
Permalink
On 11/29/2008 01:23 PM, Ian G:
> Eddy Nigg wrote:
>> On 11/27/2008 01:22 PM, Ian G:
>>>
>>> How do we know whether the keys are managed properly? Good question!
>>> Well, it's a closed architecture & codebase, but it has been audited, so
>>> it bears comparison to any CA which operates a closed/audited procedure.
>>
>> Bullshit! That's about the same as CAs keeping copies of the users
>> private keys...such a nonsense!
>
>
> Which they are indeed permitted to do, as long as they state that in
> their procedures, and their auditor agrees that they have met criteria.
>
> Eddy, other than your need to be colourful, what was the point you were
> trying to make?
>

Well, CAs MUSTN'T have the private keys of end-user certificates, except in
the case of a properly implemented key-escrow service and with the consent
of the user. But if you really have to ask this question, I'm afraid that
our understandings of this and other subjects are probably too far
apart for us to have any fruitful discussion.


--
Regards

Signer: Eddy Nigg, StartCom Ltd.
Jabber: ***@startcom.org
Blog: https://blog.startcom.org
Eddy Nigg
2008-11-29 12:57:42 UTC
Permalink
On 11/29/2008 02:37 PM, Eddy Nigg:
>> Which they are indeed permitted to do, as long as they state that in
>> their procedures, and their auditor agrees that they have met criteria.
>>
>> Eddy, other than your need to be colourful, what was the point you were
>> trying to make?
>>
>
> Well, CAs MUSTN'T have the private keys of end-user certificates, except in
> the case of a properly implemented key-escrow service and with the consent
> of the user. But if you really have to ask this question, I'm afraid that
> our understandings of this and other subjects are probably too far
> apart for us to have any fruitful discussion.
>

Perhaps I may add that I'm not aware of any WebTrust, ETSI or similar
audit they (Skype) have performed. Can you point me to it? Also, where is
their (CA) policy?

I understand your interest in making CAs superfluous, however the CAs
perform various services that only a third party is supposed to perform
(separation of the different aspects is what makes up good security):

- software (cryptography and usability)
- issuing and validating instance
- user (control over his private keys)

In the case of Skype they are the software vendor and control the software,
the issuing instance and also the user (because they control what
apparently seem to be the private keys of users?). This is very similar to
dictatorships and similar regimes where no separation exists...


--
Regards

Signer: Eddy Nigg, StartCom Ltd.
Jabber: ***@startcom.org
Blog: https://blog.startcom.org
Ian G
2008-12-02 18:04:46 UTC
Permalink
Eddy Nigg wrote:
> On 11/29/2008 02:37 PM, Eddy Nigg:
>>> Which they are indeed permitted to do, as long as they state that in
>>> their procedures, and their auditor agrees that they have met criteria.
>>>
>>> Eddy, other than your need to be colourful, what was the point you were
>>> trying to make?
>>>
>>
>> Well, CAs MUSTN'T have private keys of end user certificates, except in
>> case of a properly implemented key escrow service and with the consent
>> of the user. But if you really have to ask this question I'm afraid that
>> the understandings about this and other subjects are probably too far
>> apart between us in order to have any fruitful discussion.
>>
>
> Perhaps I may add, that I'm not aware of any WebTrust, ETSI or similar
> audit they (Skype) performed. Can you point me to it? Also where is
> their (CA) policy?

Well, they are not a CA, or at least they don't see themselves as a CA,
and they did not conduct a CA style of audit. Hence, I said: "it bears
comparison to any CA which operates a closed/audited procedure" rather
than saying it is the same thing.

I spent some time looking for the audit, but did not find it - I
certainly understand your interest in finding out!

Here's what I recall: They requested an audit of their architecture and
protocols by a third party. The auditor was an experienced software guy
from Britain. I cannot recall his name. I discussed the audit with
him, and he said he was initially skeptical, but afterwards was
impressed. It was done under NDA, he had access to the entire
protocols, and did not report any "secret bits" or "worrying signs."


> I understand your interest in making CAs superfluous,


My interest is in delivering some security to users. To the extent that
CAs can help that, then I'm interested. Making something superfluous
for the sake of it is not on the list.


> however the CAs
> perform various services only a third part is supposed to perform
> (separation of different aspects which makes up good security):
>
> - software (cryptography and usability)
> - issuing and validating instance
> - user (control over his private keys)


No, not at all. What you have described above is based on one
particular security model, that commonly known as PKI. In that model,
CAs form a service as written. It's not the only one.


> In case of Skype they are the software vendor and control the software,
> the issuing instance and also the user

Right, they do everything. One advantage for today: in the case of
Skype we (the user) only have to pay for one organisation. In the case
of CAs, we have to pay for four organisations. Imagine how much more
code Skype gets written... How unfair of them :)


> (because they control what
> apparently seems to be private keys of users?).


Well, sure, but you are applying PKI assumptions to something that
clearly isn't PKI. Why do that?


> This is very similar to
> dictatorship and similar regimes where no separation exists...


Ah, now we see why you take an assumption from one world to another :)

In the case of Skype, they just use the tools relatively wisely to solve
the problems they need to solve. Their particular design eliminates
many of the things that PKI does, but that is simply because their
design meets the security needs and addresses the threat model for their
given application and audience.

If there is anything "dictatorial" it is the claim that there is only
one true security model; instead, it is all architecture, and all the
time we are learning how to do things better.

(When was the last time your security model was updated?)

iang
Eddy Nigg
2008-12-03 12:16:38 UTC
Permalink
On 12/02/2008 08:04 PM, Ian G:
> Eddy Nigg wrote:
>> In case of Skype they are the software vendor and control the
>> software, the issuing instance and also the user
>
> Right, they do everything. One advantage for today: in the case of Skype
> we (the user) only have to pay for one organisation. In the case of CAs,
> we have to pay for four organisations.

Well, I'm not sure where the payment comes in; I don't pay personally,
neither for software, nor for certificates, and certainly not for my own
private keys. Now where does the "pay" come in?

But besides that, PKI is implemented in this way because it makes
sense, not because it doesn't. Each party has its responsibilities.

> In the case of Skype, they just use the tools relatively wisely to solve
> the problems they need to solve. Their particular design eliminates many
> of the things that PKI does, but that is simply because their design
> meets the security needs and addresses the threat model for their given
> application and audience.

Meets the needs of whom? Just because the average user doesn't
understand it (neither when using Skype nor when using Firefox or
Thunderbird) doesn't mean that it meets the security needs. It doesn't
for me (for confidentiality), and the security theater could simply be
omitted. Same effect.

If I could use my own client certs that would be a different
story....well, yes, it's called PKI...

>
> If there is anything "dictatorial" it is the claim that there is only
> one true security model;

Why do you think so many are using PKI? Because it's dictated or because
it solves a problem? I didn't invent it, but it serves the purpose
extremely well, hence I'm using it. Nobody forced me to, it's my own
conclusion.

> (When was the last time your security model was updated?)
>

There are always some smaller moves here and there, however at large no
updating is needed because it works. Or shall I say, the full potential
hasn't been reached yet and PKI will be deployed just about everywhere?

--
Regards

Signer: Eddy Nigg, StartCom Ltd.
Jabber: ***@startcom.org
Blog: https://blog.startcom.org
Ian G
2008-12-04 12:49:39 UTC
Permalink
Eddy Nigg wrote:
> On 12/02/2008 08:04 PM, Ian G:
>> Eddy Nigg wrote:
>>> In case of Skype they are the software vendor and control the
>>> software, the issuing instance and also the user
>>
>> Right, they do everything. One advantage for today: in the case of Skype
>> we (the user) only have to pay for one organisation. In the case of CAs,
>> we have to pay for four organisations.
>
> Well, not sure where the payment comes in, but I don't pay personally
> for either software, not for certificates and certainly not for my own
> private keys. Now where does the "pay" come in?


To clarify, from economics: cost (perhaps a better word than pay) is
generated by all activities, and needs to show benefits, or else the
free market will eventually bypass it. So, the costs of the four
organisations still exist, and the fact that you cannot identify a
payment to them doesn't mean that you don't pay, by one means or
another. (The particular branch of economics is called "transaction
costs".)


>> In the case of Skype, they just use the tools relatively wisely to solve
>> the problems they need to solve. Their particular design eliminates many
>> of the things that PKI does, but that is simply because their design
>> meets the security needs and addresses the threat model for their given
>> application and audience.
>
> Meets the needs of whom?


Applications and security theory: security doesn't sell. The way to
apply security, as this school has it, is to build it into another
product that generates real benefits to the market.

Skype provided VoIP to the masses. And it was secure. And then it
added chat. And it was secure.

That meets the needs of the users.

The observation here perhaps is that the security wonks are so far away
from the apps field that they cannot easily work out what's what. The
more you know about modular multiplication, the less about users.
(Known issue, specialisation is a trap.)


>> If there is anything "dictatorial" it is the claim that there is only
>> one true security model;
>
> Why do you think so many are using PKI? Because it's dictated or because
> it solves a problem? I didn't invent it, but it serves the purpose
> extremely well, hence I'm using it. Nobody forced me to, it's my own
> conclusion.

Sure, but you are biased, as am I and everyone on this list. We are all
engaged in the business in one way or another. We all have an incentive
to "eat our own dogfood" and we all have trouble lifting our heads above
the crowd and seeing which way it's really going.

As to why PKI is used, and is in place, that is a controversial subject.
Suffice it to say, it is there, in place, so the task is to improve its
delivery of security to users. Because it is in place, not because it
is good.


>> (When was the last time your security model was updated?)
>>
>
> There are always some smaller moves here and there, however at large no
> updating is needed because it works. Or shall I say, the full potential
> hasn't been reached yet and PKI will be deployed just about everywhere?


What did the dolphins say? So long, and thanks for all the phish :)

The PKI world pretty much failed to respond to the authentication
failure of phishing. I don't particularly want to rub anyone's face in
it, because I know people here work long and hard on the bugs and code.

But we were there. We all watched, and what did we get? From the PKI
world, nothing more than some green. Any response to phishing -- the
authentication failure of secure browsing -- came from plugins, banks,
regulators, anti-phishing forums, police, practically everyone *but* the
PKI world. Until the PKI world stands up and says, yeah, we blew that
one, now listen, here's what you have to do ... nobody will pay much
attention.

E.g., update the security model. Think back to the revocation
discussion: that was a request to update the security model. Short
story, we couldn't. Mozilla cannot update the PKI security model.
Period, end of story. The conclusion was that it was to be referred to
a committee that we all know in our hearts cannot change it. Hence, the
only revocation for roots possible is via business paths. Literally, a
hack, added over the top.

https://financialcryptography.com/mt/archives/001107.html

The one-organisation model of Skype has an advantage in security. Not
only in cost, but also, *it can update its security model*. The 4-org
model of PKI cannot update its security model (and it costs more).
Against such a combination, I would suggest that the only advantage that
PKI has is if it were so right that it worked. But phishing and other
threats suggest that this is not so.

Ergo, low deployment. The market does not lie about this. You can
preach to the choir all you like in this forum, but out there in the
security departments of companies, in user-land, in crypto-land, in
social-network-land, and every other land, PKI doesn't have many friends.



iang
Nelson Bolyard
2008-12-05 07:17:02 UTC
Permalink
Ian,

Previously in this thread, you wrote:

> For me, the purpose of this debate is finding out what users can expect from
> Mozilla by way of security.

The answers to that quest probably include these properties:
- open, openly specified, not secret
- inner workings subjected to public scrutiny
- security claims independently verifiable
- interoperability with products from other sources is desired, not avoided
- interoperability with products from other sources is based on standards
  compliance - not proprietary specifications controlled solely by Mozilla

Now, in contrast to that, I have been led to believe that Skype's:
- protocols, security designs and parameters are proprietary, secret, have
not been openly published, and thus not subjected to public scrutiny
- components are all proprietary. Their clients only interoperate with their
servers and their other clients. It's a closed system, as far as I know.
- security claims are not independently verifiable by those who have no
economic interest in keeping unfavorable findings secret

I suspect that part of the reason you look so favorably on Skype is
precisely that its security claims have NOT been subjected to public
scrutiny. I think you tend to give them the benefit of a (very large) doubt.
In the absence of published faults in their technology, in your debates
it seems you tend to treat that technology as flawless, which gives them an
advantage that no openly specified system can ever have.

I believe you will not get Mozilla or its community members interested in
developing a solution that requires that
- all clients and all servers come from Mozilla,
- protocol specifications, source code, and other technologies be kept secret
- security claims must be taken on faith.

Consequently, I think there's little to be gained by continuing to hold
Skype up as a shining example in this list/group. So, please don't keep
flogging us with praise for Skype or other systems that are antithetical
to the values of the open-source community.

Thanks.

/Nelson (speaking only for myself, as always)
Anders Rundgren
2008-12-05 08:09:26 UTC
Permalink
Nelson wrote:
>> For me, the purpose of this debate is finding out what users can expect from
>> Mozilla by way of security.

>The answers to that quest probably include these properties:
>- open, openly specified, not secret,
>- inner workings subjected to public scrutiny.
>- security claims independently verifiable
>- interoperability with products from other sources is desired, not avoided
>- interoperability with products from other sources is based on standards
>compliance - not proprietary specifications controlled solely by Mozilla

Which we all appreciate.

>Now, in contrast to that, I have been led to believe that Skype's:
>- protocols, security designs and parameters are proprietary, secret, have
>not been openly published, and thus not subjected to public scrutiny
>- components are all proprietary. Their clients only interoperate with their
>servers and their other clients. It's a closed system, as far as I know.
>- security claims are not independently verifiable by those who have no
>economic interest in keeping unfavorable findings secret

>I suspect that part of the reason you look so favorably on Skype is
>precisely that its security claims have NOT been subjected to public
>scrutiny. I think you tend to give them the benefit of a (very large) doubt.
>In the absence of published faults in their technology, in your debates
>it seems you tend to treat that technology as flawless, which gives them an
>advantage that no openly specified system can ever have.

>I believe you will not get Mozilla or its community members interested in
>developing a solution that requires that
>- all clients and all servers come from Mozilla,
>- protocol specifications, source code, and other technologies be kept secret
>- security claims must be taken on faith.

>Consequently, I think there's little to be gained by continuing to hold
>Skype up as a shining example in this list/group. So, please don't keep
>flogging us with praise for Skype or other systems that are antithetical
>to the values of the open-source community.

Since I originally brought up Skype as an example, I can unfortunately only
reiterate that the open, standards-based, and non-proprietary world has
schemes that offer "perfect security" on paper, while their opposites have
fully deployed "flawed security" on a truly massive scale.

My guess is that the majority of the market will hook into the latter
because they really have no alternative.

IF Mozilla and other groups actually want to "fix" this, they have to
come up with something that can be deployed without users becoming
security experts. Based on a decade of S/MIME failures, I believe
the word "pragmatism" is severely lacking and therefore we get nowhere.
One of the ways you could create a generally useful solution (see subject
line...) would IMO be to use DNS as a key repository, as featured in DKIM.
But since this is not "perfect" we will rather continue with horrible
stuff like: http://news.cnet.com/8301-17939_109-10110382-2.html
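The DKIM-style approach amounts to publishing keys in DNS. A minimal sketch
of parsing the tag=value TXT record format DKIM uses; the actual lookup at
<selector>._domainkey.<domain> would need a DNS client, which is omitted here:

```python
def parse_dkim_record(txt: str) -> dict:
    """Parse a DKIM-style tag=value TXT record into a dict.

    DKIM publishes the verification key in DNS as a TXT record,
    e.g. "v=DKIM1; k=rsa; p=<base64-encoded public key>".
    """
    tags = {}
    for field in txt.split(";"):
        field = field.strip()
        if not field:
            continue
        name, _, value = field.partition("=")
        tags[name.strip()] = value.strip()
    return tags

record = "v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GN"
tags = parse_dkim_record(record)
assert tags["v"] == "DKIM1" and tags["k"] == "rsa"
```

The appeal is that every domain owner already controls a DNS zone, so no
extra CA relationship is needed to publish a key for the domain's users.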

With respect to Skype, there is unfortunately another thing that makes
prospects for open secure messaging look pretty bleak, and that is the
ability to connect to paid services like Skype-out.

This is BTW not too different from PayPal, which I guess works so well
because it owns the entire customer base and doesn't have to mess
with other competing/collaborating partners.

Anders
user of flawed security solutions, developer of new concepts
Ian G
2008-12-05 09:10:37 UTC
Permalink
Anders Rundgren wrote:
> This is BTW not too different from PayPal, which I guess works so well
> because it owns the entire customer base and doesn't have to mess
> with other competing/collaborating partners.


Ahhh... Paypal :) Now there is a poignant example.

Paypal is awful. Its security is woeful. It's a mess. Its business
concept is a lie unto its own vision. It's practically the #1 phishing
victim. There's even a book about it ...

Yet, it won the market [1].

How do we deal with a world where something as bad, engineering-wise, as
Paypal is the dominant product?

Well, with a lot of pragmatism, and a lot of skepticism about the
excessive number of one true religions.


> Anders
> user of flawed security solutions, developer of new concepts


:)

iang



[1] For Nelson and others, this was my real business, not crypto, I was
in probably the #2 opposing camp, and that camp didn't make it because
of its own stupidity. But its security was much better than Paypal,
around 10 times better by one objective measure.
Anders Rundgren
2008-12-05 11:03:56 UTC
Permalink
And it goes on and on
http://news.cnet.com/8301-17939_109-10110382-2.html
while security communities are talking about perfect solutions for
a minority of security-conscious users...

This is almost like a discussion about "theory" versus "practice".

As a researcher in this field, I'd hoped that the gap would diminish over
time, but it seems it is actually widening!

--Anders

Eddy Nigg
2008-12-05 13:01:02 UTC
Permalink
On 12/05/2008 01:03 PM, Anders Rundgren:
> And it goes on and on
> http://news.cnet.com/8301-17939_109-10110382-2.html
> while security communities are talking about perfect solutions for
> a minority of security-conscious users...
>
> This is almost like a discussion about "theory" versus "practice".
>
> As a researcher in this field, I'd hoped that the gap would diminish over
> time but it seems that is actually widening!
>

Can you elaborate? I'm not getting the "security" aspect in relation to
what we were discussing here and Facebook Connect. Just for your
knowledge, major companies have thrown their weight behind OpenID,
including Google, Yahoo, IBM, Microsoft and others. Facebook is on its
own...

--
Regards

Signer: Eddy Nigg, StartCom Ltd.
Jabber: ***@startcom.org
Blog: https://blog.startcom.org
Ian G
2008-12-05 09:01:59 UTC
Permalink
Nelson Bolyard wrote:
> Ian,
>
> Previously in this thread, you wrote:
>
>> For me, the purpose of this debate is finding out what users can expect from
>> Mozilla by way of security.


Thank you for taking the time to lay out your views!


> The answers to that quest probably include these properties:
> - open, openly specified, not secret,
> - inner workings subjected to public scrutiny.
> - security claims independently verifiable
> - interoperability with products from other sources is desired, not avoided
> - interoperability with products from other sources is based on standards
> compliance - not proprietary specifications controlled solely by Mozilla


Yes, a laudable goal. (Leaving aside Mozilla for now.)

> Now, in contrast to that, I have been led to believe that Skype's:
> - protocols, security designs and parameters are proprietary, secret, have
> not been openly published, and thus not subjected to public scrutiny
> - components are all proprietary. Their clients only interoperate with their
> servers and their other clients. It's a closed system, as far as I know.

I think these two claims are completely correct!

> - security claims are not independently verifiable by those who have no
> economic interest in keeping unfavorable findings secret

In essence, your claim is approximately sustainable, notwithstanding a
single audit, and I suggest some additional stuff below.

> I suspect that part of the reason you look so favorably on Skype is
> precisely that its security claims have NOT been subjected to public
> scrutiny.

Not at all, that is not my mind speaking :)

Actually I find it really irritating, but I have different motives from
you. I would like the chance to criticise the design, especially as it
is a new design (relatively speaking) and has incorporated a lot of the
new learning in it. In the crypto world, we often talk about how we
should break others' designs before we design our own, and I follow that
principle.

That's why I have that silly SSL page: to criticise is to hone. But,
you have no idea how boring it is to criticise the older designs; when
one comes across the same mistakes over and over again, one has to keep
reminding oneself how "we know sooooo much more these days..."


> I think you tend to give them the benefit of a (very large) doubt.


Well, that is relative. Consider: those who have taken the above
laudable goal and decided this is the beginning and end of everything
... have obviously found that Skype doesn't meet this goal. Therefore,
not having met your primary, first, up-front goal, everything else is
nonsense.

It's not open, therefore it cannot be secure. Right?

In contrast, I tend to look at the user interest. And specifically, I
tend to look at the security delivered to the user. Although I approve
of the open source goal, I have over time come to view it as
not unchallengeable.

So, you -- and most here -- look at Skype and decide because it is
closed, it cannot be secure. I look at Skype and say, well, how much
security does it deliver? Objectively? Does it confirm or deny the
open source hypothesis?

As it turns out, Skype delivers more security than practically every
other example out there. Yes, I can and will argue that, even though
you will doubt it. :)

Which leads us to a conundrum: if this is true, then open source may
not be the best (or only) way to deliver security.

Which, if you are religious about open source, will be very troubling.
And, even if you are more like me, a fan of open source, it is still
rather irritating.


> In the absence of published faults in their technology, in your debates
> it seems you tend to treat that technology as flawless, which gives them an
> advantage that no openly specified system can ever have.


Well, nobody here has asked me about their flaws, so that's another
assumption which I'm happy to address.

Here are their *security* flaws, as far as my view goes: obviously, not
open source. So we don't know who is listening. So we have to conduct
some wider research. Here's what I have found: it is possible to fork
off a subnet and intercept using a borked client, this was demonstrated
by the EADS guys over a year ago. If you pay them lots of money,
they'll sell you an intercept kit (probably, at least, that's their
business). China has a borked client. There is another minor published
weakness, which I forget now. The super-servers are a cause for
concern. The company is now owned by a US company so we can assume that
the NSA has achieved quiet satisfaction, if not anyone else. There is a
view that the intel agencies of other countries in UKUSA can now breach
Skype, and there is a view that this breach is now leaking to
non-UKUSA-G20 countries, and from there to police, in countries where
police have an ability or desire to listen in, as para-intel agencies.
However, this is all secret, and being interpolated from claims and
counter-claims. The evidence is not sufficient to get a prosecution,
yet. In contrast, the open analysis world has failed to breach the
protocols. I recall 2 substantial attempts to analyse it, and the
results were not promising. Plus I mentioned that audit. Plus, at
times, many powerful people have complained, so this would suggest there
are no screamers in the protocol, no easy-to-find weaknesses.

Now, in any serious threat & security modelling, we can use those
results and work out whether the tool -- any tool -- is good enough for
any particular task or group. I'm not going to do that, because it
is not interesting to this community to *use* that tool, it is only
interesting to *learn* from the tool. Others who are interested have done so.


> I believe you will not get Mozilla or its community members interested in
> developing a solution that requires that
> - all clients and all servers come from Mozilla,
> - protocol specifications, source code, and other technologies be kept secret
> - security claims must be taken on faith.


Agreed. (I'm somewhat aghast that anyone would suggest that Mozilla
should ever do such a thing. How on earth ... have you read Mozilla's
mission?)

But that's not what it is about. I'm only interested in whether Mozilla
is seriously focussed in delivering security to its users. Right now,
that's an open question.

It is entirely clear that Mozilla is delivering open source and is
passionate about that. Unquestionable. This is a good thing, mostly.

But it is not clear that Mozilla has a focus on security.

Just one example: in this email, you assume that open source &
associated principles are equivalent to security, and the two are
inseparable:


> Consequently, I think there's little to be gained by continuing to hold
> Skype up as a shining example in this list/group. So, please don't keep
> flogging us with praise for Skype or other systems that are antithetical
> to the values of the open-source community.


The challenge, or question here, is whether you (speaking broadly) will
ever recognise the two apart.

Or to put it another way, is Mozilla religious about open source? To
the extent that Mozilla will crowd out any learning from the non-open
source world?

(Nobody is asking Mozilla to write proprietary code or protocols. I am
simply talking here about the ability and desire to learn ...)




iang
Eddy Nigg
2008-12-05 12:48:51 UTC
Permalink
On 12/05/2008 09:17 AM, Nelson Bolyard:
> Ian,
>
> Now, in contrast to that, I have been led to believe that Skype's:
> - protocols, security designs and parameters are proprietary, secret, have
> not been openly published, and thus not subjected to public scrutiny
> - components are all proprietary. Their clients only interoperate with their
> servers and their other clients. It's a closed system, as far as I know.
> - security claims are not independently verifiable by those who have no
> economic interest in keeping unfavorable findings secret
>

Nelson, you know what truly amazes me? That people like Ian actually
promote a closed, proprietary source and proprietary standards,
unaudited and secretive model of a commercial vendor whose product locks
in its users and whose security model is highly questionable. All this
in order to bash PKI, CAs and digital certificates. I wonder if this has
something to do with a certain CA not being included in NSS?


--
Regards

Signer: Eddy Nigg, StartCom Ltd.
Jabber: ***@startcom.org
Blog: https://blog.startcom.org
Anders Rundgren
2008-12-05 13:20:00 UTC
Permalink
Eddy Nigg wrote:
>Nelson wrote:
>> Now, in contrast to that, I have been led to believe that Skype's:
>> - protocols, security designs and parameters are proprietary, secret, have
>> not been openly published, and thus not subjected to public scrutiny
>> - components are all proprietary. Their clients only interoperate with their
>> servers and their other clients. It's a closed system, as far as I know.
>> - security claims are not independently verifiable by those who have no
>> economic interest in keeping unfavorable findings secret

>Nelson, you know what truly amazes me? That people like Ian actually
>promote a closed, proprietary source and proprietary standards,
>unaudited and secretive model of a commercial vendor whose product locks
>in its users and whose security model is highly questionable. All this
>in order to bash PKI, CAs and digital certificates. I wonder if this has
>something to do with a certain CA not being included in NSS?

I doubt that Ian promotes the things you claim he does.

I believe that he as well as I see a problem with the alternatives
since they are way off in terms of users.

That there should mainly be, as you claim, a "UI problem" is an opinion
that has some support in the literature ("Why Johnny Can't Encrypt"),
but I feel that it is much deeper than that; security should probably,
as in the case of Skype, be transparent, not needing any UI at all.
I start Skype and that's about it.

We can probably not get much further on this thread, except to note that
we violently disagree on, for example, the importance of S/MIME.

I will continue with my mobile phone stuff because the "container"
issue isn't solved either.

Anders
Eddy Nigg
2008-12-05 13:30:02 UTC
Permalink
On 12/05/2008 03:20 PM, Anders Rundgren:
>
> I doubt that Ian promotes the things you claim he does.
>

The tone and arguments highly suggest exactly that.

> That there should mainly be, as you claim, a "UI problem" is an opinion
> that has some support in the literature ("Why Johnny Can't Encrypt"),
> but I feel that it is much deeper than that; security should probably,
> as in the case of Skype, be transparent, not needing any UI at all.
> I start Skype and that's about it.

I start FF, TB, Psi, Adobe (PDF), OpenOffice or whatever and that's
about it.

Well, I had to create my cert(s) once, that's correct. But from the
minute I stored them in my smart card I'm using 'em everywhere instantly.

>
> We can probably not get much further on this thread except that we
> violently disagree on for example the importance of S/MIME.
>

I wouldn't limit it to S/MIME, since client certs can be used just about
anywhere for almost any purpose (authentication, signing and
en/de-cryption). However, I appreciate your contribution nevertheless, and
saw (previously) other work you did and are trying to promote. Hence
I tend to agree on our disagreement for now and leave it as is.


--
Regards

Signer: Eddy Nigg, StartCom Ltd.
Jabber: ***@startcom.org
Blog: https://blog.startcom.org
Eddy Nigg
2008-12-05 11:56:33 UTC
Permalink
On 12/04/2008 02:49 PM, Ian G:
>

Telephony was provided to the masses and it's inherently insecure.

> Skype provided VoIP to the masses. And it was secure.

You keep claiming it and I tell you that it's not. Of course we can
continue forever here. But it doesn't come close to the same security
requirements as they are applied on the web or mail for example.

> And then it added
> chat. And it was secure.

Chat was always there, but... enter Jabber/XMPP. It's secure because:

- XMPP is an open standard
- you can use open source server and client software
- uses PKI
- allows you to control your keys
- nobody owns it
- is a decentralized network

>
> That meets the needs of the users.

Yes, THAT meets the needs of the users. Also those of the enterprise. Skype
doesn't.

>
> The PKI world pretty much failed to respond to the authentication
> failure of phishing.

The StartCom CAs' control panels are not subject to phishing (i.e. they
are phishing-resistant). It's done with PKI.

> Until the PKI world stands up and says, yeah, we blew that
> one, now listen, here's what you have to do ... nobody will pay much
> attention.

Actually it's not PKI, but the software vendors which have to stand up.
Mozilla did! And if it weren't for all the cry-babies, phishable
self-signed certs would be a thing of the past.

PKI didn't fail, the UI failed!

>
> E.g., update the security model.

Yes, I think this is what's happening anyway. Browser vendors recognized
the failures of the last decade and are acting! Interestingly it's
exactly your crowd which has a problem with it :-)


> Ergo, low deployment. The market does not lie about this.

Hear hear?! Netcraft recognizes an ever-increasing number of secured
sites every month, soon to be one million. PKI implementations and
deployments are on the rise as never before.

> You can preach
> to the choir all you like in this forum, but out there in the security
> departments of companies, in user-land, in crypo-land, in
> social-network-land, and every other land, PKI doesn't have many friends.
>

Uhahhhaaa....LOL :-)

You make me laugh! Seriously. Apparently we aren't living on the same
planet...

But for what I care, let me predict that PKI hasn't reached its tipping
point of no return yet, but is very close to happening! Once it does,
it will be part of our daily computer-network life as screen-savers are...


--
Regards

Signer: Eddy Nigg, StartCom Ltd.
Jabber: ***@startcom.org
Blog: https://blog.startcom.org
Ian G
2008-12-02 18:16:58 UTC
Permalink
Eddy Nigg wrote:
> On 11/29/2008 01:23 PM, Ian G:
>> Eddy Nigg wrote:
>>> On 11/27/2008 01:22 PM, Ian G:
>>>>
>>>> How do we know whether the keys are managed properly? Good question!
>>>> Well, it's a closed architecture & codebase, but it has been
>>>> audited, so
>>>> it bears comparison to any CA which operates a closed/audited
>>>> procedure.
>>>
>>> Bullshit! That's about the same as CAs keeping copies of the users
>>> private keys...such a nonsense!
>>
>>
>> Which they are indeed permitted to do, as long as they state that in
>> their procedures, and their auditor agrees that they have met criteria.
>>
>> Eddy, other than your need to be colourful, what was the point you were
>> trying to make?
>>
>
> Well, CAs MUSTN'T have private keys of end user certificates, except in
> case of a properly implemented key escrow service and with the consent
> of the user.


Right, CAs won't have the private keys, unless they do. I imagine a
corporate CA can do what it likes, and doesn't need the consent of the
user. I also imagine that an ISP CA can do something similar, because
it gets an implied consent from somewhere or other. And if my CA says
"we got your private keys", then you have the choice of another CA.

I'm not saying I "approve" of these things, just that they do exist, and
they are expected to exist. Chokhani et al. has sections on them; there
are some businesses around that like to do mass population of desktops.
To the extent that they document these things in a CPS, pass an audit,
then those CAs are cool, in today's world.

Also, there is a silliness aspect to this. If the CAs are trusted not
to issue false certs for users, why can't they be trusted to look after
their private keys?


> But if you really have to ask this question I'm afraid that
> the understandings about this and other subjects are probably too far
> apart between us in order to have any fruitful discussion.


If you don't like that, places to change it would be Chokhani et al (RFC
3647) or the Mozilla policy, I guess.



iang
Eddy Nigg
2008-12-03 12:22:19 UTC
Permalink
On 12/02/2008 08:16 PM, Ian G:
> Right, CAs won't have the private keys, unless they do. I imagine a
> corporate CA can do what it likes, and doesn't need the consent of the
> user.

Sure, but they aren't in my list of CA roots.

> And if my CA says "we
> got your private keys", then you have the choice of another CA.

It's considered a very bad practice, I think. Are there any CAs in
Mozilla NSS which have the users' private keys?

> Also, there is a silliness aspect to this. If the CAs are trusted not to
> issue false certs for users, why can't they be trusted to look after
> their private keys?

Perhaps because some countries have certain laws...

> If you don't like that, places to change it would be Chokhani et al (RFC
> 3647) or the Mozilla policy, I guess.

The Mozilla CA policy is my domain...indeed are there CAs which perform
"key escrow" without the consent of the user (or without the user having
explicitly asked beforehand)?


--
Regards

Signer: Eddy Nigg, StartCom Ltd.
Jabber: ***@startcom.org
Blog: https://blog.startcom.org
Rob Stradling
2008-12-05 09:38:37 UTC
Permalink
On Wednesday 03 December 2008 12:22:19 Eddy Nigg wrote:
> On 12/02/2008 08:16 PM, Ian G:
> > Right, CAs won't have the private keys, unless they do. I imagine a
> > corporate CA can do what it likes, and doesn't need the consent of the
> > user.
>
> Sure, but they aren't in my list of CA roots.
>
> > And if my CA says "we
> > got your private keys", then you have the choice of another CA.
>
> It's considered a very bad practice I think.

Eddy, could you expand on this point?

I don't think WebTrust prohibits CAs from generating/retaining private keys
for users.

> Are there any CAs in Mozilla NSS which have the users private keys?

Have a look at:
http://www.globalsign.com/support/csr/autocsr.html

> > Also, there is a silliness aspect to this. If the CAs are trusted not to
> > issue false certs for users, why can't they be trusted to look after
> > their private keys?
>
> Perhaps because some countries have certain laws...
>
> > If you don't like that, places to change it would be Chokhani et al (RFC
> > 3647) or the Mozilla policy, I guess.
>
> The Mozilla CA policy is my domain...indeed are there CAs which perform
> "key escrow" without the consent of the user (or without the user having
> explicitly asked beforehand)?

--
Rob Stradling
Senior Research & Development Scientist
Comodo - Creating Trust Online
Office Tel: +44.(0)1274.730505
Fax Europe: +44.(0)1274.730909
www.comodo.com

Comodo CA Limited, Registered in England No. 04058690
Registered Office:
3rd Floor, 26 Office Village, Exchange Quay,
Trafford Road, Salford, Manchester M5 3EQ

Eddy Nigg
2008-12-05 10:56:52 UTC
Permalink
On 12/05/2008 11:38 AM, Rob Stradling:
>> It's considered a very bad practice I think.
>
> Eddy, could you expand on this point?
>
> I don't think WebTrust prohibits CAs from generating/retaining private keys
> for users.

Retaining the private keys of users requires a key escrow service,
reasonable protection by the CA (at least) and the consent of the user.
This is what I know concerning the WebTrust audit.

Personally I view it as a risk for the user AND for the CA. Or would you
be willing to take the responsibility over user generated private keys
without the consent of the user? Or at all?

>
>> Are there any CAs in Mozilla NSS which have the users private keys?
>
> Have a look at:
> http://www.globalsign.com/support/csr/autocsr.html

Errr...there is a difference between creating it for and on behalf of
the user and retaining the keys. Just for your knowledge, StartCom does
provide different utilities for the creation of private keys, CSRs,
decryption of private keys and so forth. However, StartCom doesn't retain
any of the private keys, and the user doesn't have to use our wizards
(they're there for convenience), but can instead submit his/her signing
request at any time.

In this respect, Globalsign might implement it exactly in the same way.
We might however ask them or read their CPS instead.


--
Regards

Signer: Eddy Nigg, StartCom Ltd.
Jabber: ***@startcom.org
Blog: https://blog.startcom.org
Michael Ströder
2008-11-26 09:11:39 UTC
Permalink
Anders Rundgren wrote:
> I think we are looking for different things.
>
> I'm looking for a system that offers authenticated and confidential
> messaging which would among things include mobile phone voice messaging.

But it's the very same problem.

> If such system would require users to trust certificates and stuff, it will fail.

Of course you can use untrusted keys, but then you have the MITM issue
(as others already mentioned).

> Our current only alternative is the trusted provider concept. I'm interested
> in making the trusted provider something else than Vodafone; which could
> be your employer or Google, and for the really paranoid a server you run
> yourself.

As I wrote, even if you have your domain-wide PKI, the other end also
has to do the homework. If there's no real business requirement to do so,
they will not.

Ciao, Michael.


> ----- Original Message -----
> From: "Michael Ströder" <***@stroeder.com>
> Newsgroups: mozilla.dev.tech.crypto
> To: <dev-tech-***@lists.mozilla.org>
> Sent: Tuesday, November 25, 2008 21:52
> Subject: Re: Creating a Global User-level CA/Trust Infrastructure for Secure Messaging
>
>
> Anders Rundgren wrote:
>> I want each organization/domain entity that can afford an SSL certificate to
>> become a virtual CA and run their own secure messaging center. Based on
>> the SSL certificate they can use whatever issuance policies they feel comfortable
>> with as long as they keep inside of their "PKI sandbox" which is (by the not
>> yet defined application), constrained regarding subject naming-schemes.
>>
>> This is BTW, how I believe secure e-mail should have been from the beginning;
>> secured at the domain-level.
>
> Anders, that's not the real problem with S/MIME or PGP.
> Encrypting/signing is simply not a business requirement.
>
> One of my customers has a special CA for issuing S/MIME certs to its own
> internal end users. The end users are always surprised how easy they can
> get a S/MIME cert within a minute. But the external partners are not
> obliged to encrypt e-mail and they are not willing to do the necessary
> work on their side. I already tried this 10 years ago with a PKI which
> would have issued certs to external partners. They were not willing to
> do their part even if made fairly simple.
>
> => Encrypting/signing must be made a business requirement in contracts.
> That's the whole point. And there's no technical solution for it.
>
> Ciao, Michael.
> _______________________________________________
> dev-tech-crypto mailing list
> dev-tech-***@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-tech-crypto
>
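[Editorial aside: Anders' "PKI sandbox" constraint quoted above, where a pseudo-CA may only certify things in its own domain, boils down to a naming check at issuance time. A hypothetical sketch follows; the exact-match rule is an assumption, since the proposal leaves the precise DN format undefined.]

```python
def may_certify(pseudo_ca_host: str, subject_email: str) -> bool:
    # A pseudo-CA running at a given host may only certify identities
    # whose domain part matches that host. Whether subdomains should
    # also be allowed is an open design choice; exact match is assumed.
    domain = subject_email.rpartition("@")[2].lower()
    return domain == pseudo_ca_host.lower()

print(may_certify("example.com", "alice@example.com"))   # -> True
print(may_certify("example.com", "mallory@evil.org"))    # -> False
```

A relying party would apply the same check when validating, rejecting any certificate whose subject falls outside the issuing host's sandbox.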
Eddy Nigg
2008-11-26 11:17:17 UTC
Permalink
On 11/26/2008 10:27 AM, Anders Rundgren:
> I'm looking for a system that offers authenticated and confidential
> messaging which would among things include mobile phone voice messaging.

You also might want to look into http://openid.net/
I expect OpenID to be deployed as a form of authentication almost anywhere
in the future, including on your mobiles.

> If such system would require users to trust certificates and stuff, it will fail.

It's possible: https://www.startssl.com/?app=14

This is an OpenID provider making use of certificate authentication. But
there are less secure alternatives of course which don't require
certificates.

>
> It seems that Eddy's Jabber system is an even lighter alternative because
> it doesn't seem to require end-users "trusting" anything other than their provider.
>

But please note that there are still digital certificates involved in
the common sense as we know it (including the address space control
validation).


--
Regards

Signer: Eddy Nigg, StartCom Ltd.
Jabber: ***@startcom.org
Blog: https://blog.startcom.org
Ian G
2008-11-26 14:47:44 UTC
Permalink
Michael Ströder wrote:

> Anders, that's not the real problem with S/MIME or PGP.
> Encrypting/signing is simply not a business requirement.
...
> => Encrypting/signing must be made a business requirement in contracts.
> That's the whole point. And there's no technical solution for it.

That's as close to a perfect dilemma as I've come across! It's not a
business requirement, so we must make it a business requirement ...

What then creates the upstream requirement? If it doesn't come from
business, where does it come from?

iang
Anders Rundgren
2008-11-26 17:02:14 UTC
Permalink
Ian G wrote:

>> => Encrypting/signing must be made a business requirement in contracts.
>> That's the whole point. And there's no technical solution for it.

>That's as close to a perfect dilemma as I've come across! It's not a
>business requirement, so we must make it a business requirement ...

Another alternative is to
1. abandon non-scalable trust infrastructures such as the one required by S/MIME
2. abandon schemes that use explicit encryption keys like S/MIME
3. introduce secure mobile key-storage
4. put the latter in cell phones

I'm currently working with 3 and 4.
http://keycenter.webpki.org/javadoc/keystore/phone/keystore/crypto/VirtualSE.html

http://webpki.org/papers/keygen2/keygen-all-protocol-steps.html

The schemes we have today, where the majority of users do not have a mobile
key-store, make large-scale use of two-factor authentication impossible.

Anders
Michael Ströder
2008-11-26 17:18:04 UTC
Permalink
Anders Rundgren wrote:
> Ian G wrote:
>
>>> => Encrypting/signing must be made a business requirement in contracts.
>>> That's the whole point. And there's no technical solution for it.
>
>> That's as close to a perfect dilemma as I've come across! It's not a
>> business requirement, so we must make it a business requirement ...
>
> Another alternative is to

Anders, still you fail to see the real problems since you propose
technical solutions for non-technical issues.

But let's see:

> 1. abandon non-scalable trust infrastructures such as the one required by S/MIME

Why "non-scalable"? Can you be more verbose?

> 2. abandon schemes that use explicit encryption keys like S/MIME

Are you aware of the requirements for separate encryption keys? Some
companies have the legal requirements for key escrow in litigation
cases. That's the main reason why encryption and signature keys are
separated.

> 3. introduce secure mobile key-storage

Ah, yeah. Did you ever think of a growing key history and such?

> 4. put the latter in cell phones

Even cell phones can break. And I don't consider them to be trustworthy
key stores
1. with all the control the cell phone provider has over them,
2. all the gadgets installed with security issues,
3. with the limited data storage size on today's SIM cards.

And the main point: You fail to explain how trust is to be established.

Ciao, Michael.
Anders Rundgren
2008-11-27 09:04:05 UTC
Permalink
Michael,
It seems that you don't believe much in technical solutions as enablers. As a technologist, I find that a bit hard to cope with :-)

Let me take a practical example. In the EU most on-line banks use two-factor authentication. The majority of these use OTP (One Time Password) solutions that are definitely not without cost, as well as being susceptible to phishing. In addition, OTP is not terribly convenient for users, but that is (of course) something the banks care a little bit less about. So why don't they use PKI instead?
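[Editorial aside: the bank OTP solutions referred to here are typically HMAC-based one-time passwords in the style of RFC 4226 (HOTP). A minimal sketch, using the RFC's own test key rather than anything bank-specific; note that nothing in the code binds it to the genuine verifier, which is exactly why a phisher can simply relay it.]

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    msg = struct.pack(">Q", counter)          # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 Appendix D test vector: ASCII secret "12345678901234567890"
print(hotp(b"12345678901234567890", 0))  # -> 755224
```

The value is valid for any party who asks for it, genuine bank or phishing site alike, which is the weakness being alluded to.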

Some people say it is because PKI is difficult and introduces legal and liability hurdles. IMNSHO this is total BS since a bank-local PKI isn't designed to work outside of the bank's domain. PKI in such a setup is just another kind of password.

So what, then, is the real problem?
1. The European Smart Card industry who do not want to become suppliers of commodities. Of course the latter is a REQUIREMENT for general deployment
2. Governments who believe that ID-cards and eID are natural combos in spite of the fact that USB and USB memory sticks are everywhere, while the traditional smart card interface is not.
3. Governments claiming that the use-case for physical IDs and eIDs are essentially the same
4. Governments that do not understand that their eID concept does not address more than a tiny fraction of their citizens' needs for authentication on the Internet
5. Governments investing in stuff like CEN 15480 and ISO/IEC 24727
6. Governments pushing bizarre Bridge CA concepts

PKI for consumers will become bigger than OTP once PKI is housed in mobile phones, although initially OTP will be used in mobile phones rather than in special-purpose devices. To achieve that we need a whole bunch of enablement technologies. Most of the PKIX enrollment stuff will be obsolete 5-10 years from now because it doesn't meet the requirements imposed by the "Open Key Container" paradigm which I and A LOT OF OTHER PEOPLE actually work with. No, the SIM is not the target because it is a closed key-container with limited capacity. The Open Key Container is a part of the CPU. It is already shipping in huge quantities; it is "just" not properly enabled.

The problems with mobile phone security are exaggerated and are also in no way cast in concrete. If the requirement is "perfect" security, we have to accept that nothing will happen. If we OTOH accept the notion that security is rather a "journey", we may indeed make some progress. Google's Android as well as Symbian 9.3 are not comparable to Windows, which indeed has a broken security model.

I don't expect a reply to this because it will anyway take some five years or so to figure out whether the above is correct or not.

Anders

Michael Ströder
2008-11-27 10:54:31 UTC
Permalink
Anders Rundgren wrote:
>
> It seems that you don't believe much in technical solutions as
> enablers.

In fact I do. But still there are non-technical issues to be solved for
which no technical solution exists. And I think that steadily inventing
new standards is not a solution for establishing a technology (here:
cryptography in general).

> Let me take a practical example. In the EU most on-line banks use
> two-factor authentication. The majority of these use OTP (One Time
> Password) solutions that are definitely not without cost as well as
> susceptible to phishing. In addition OTP is not terribly convenient for
> users but that is (of course) something the banks care a little bit less
> about. So why don't they use PKI instead?

There are several reasons for that. One was that if you want to use
smartcards as a key store for better security, you have to install software
and hardware on the user's system. Most times the smartcard "middleware"
was quite buggy; sometimes it was simply unmaintained crap. Also the
card software was not available for all the client systems out there
(not everybody uses Windows). That's why e.g. HBCI never hit the mass
market.

Currently it is getting a little bit better with some crypto tokens.

But crypto tokens are not suitable for S/MIME encryption keys because of
the growing key history needed. So one has to distinguish PKI-enabled
applications.

> Some people say it is because PKI is difficult and introduces legal and
> liability hurdles. IMNSHO this is total BS since a bank-local PKI isn't
> designed to work outside of the bank's domain.

I agree here.

> PKI in such a setup is just another kind of password.

Hmm, here I disagree, since a password, even when used as in Kerberos,
leaves the user's system (directly or as a shared secret), whereas a
private key used for signing something during authentication never
leaves the key store of the client's system.

> So what is then real problem?
> 1. The European Smart Card industry who do not want to become suppliers
> of commodities.

???
Each time I talked to smartcard vendors they were keen on selling their
stuff. The more the better.

> 2. Governments who believe that ID-cards and eID are natural combos in
> spite of the fact that USB and USB memory sticks are everywhere, while
> the traditional smart card interface is not.
> 3. Governments claiming that the use-case for physical IDs and eIDs are
> essentially the same
> 4. Governments that do not understand that their eID concept does not
> address more than a tiny fraction of their citizens' needs for
> authentication on the Internet
> 5. Governments investing in stuff like CEN 15480 and ISO/IEC 24727

Do you think banks care about governments at all? They don't!
I saw some banking PKIs fail because they believed: we're big enough and we
invent our own stuff, which rules out everything else. They mainly
suffered from internal politics and the DOT.COM blurb.

> 6. Governments pushing bizarre Bridge CA concepts

BTW: The Bridge CA in Germany was not invented by the government. IIRC
the founders were a bank and a big telco company.

> PKI for consumers will become bigger than OTP when PKI is housed in
> mobile phones although initially OTP will be used in mobile phones
> rather than by special-purpose devices.

I doubt that.

> To achieve that we need a whole bunch of enablement technologies.
> Most of the PKIX enrollment stuff will be obsolete in 5-10 years from
> now

I'd never trust a system where the mobile phone vendor initializes a key
to avoid an enrollment process. If you really plan to establish such a
system be assured that I will fight against this.

> The problems with mobile phone security issues are exaggerated and are
> also in no way cast in concrete.

On which planet are you living?

> If the requirement is "perfect" security,

There's no 100% security. We all know that. But e.g. given the Bluetooth
attacks I'm concerned about drive-by copying of private keys. And given the
strange customizing of mobile phones by the telco companies, my trust is
even lower.

> we have to accept that nothing will happen.

Frankly, I prefer having to deal with OTP when doing online banking over
using my mobile phone with some obscure key container initialized by a
vendor on it.

> Google's Android as well as Symbian 9.3 are not comparable to
> Windows which indeed has a broken security model.

But many security reviewers know a lot about Windows (and Linux and Mac
OS X), whereas comparatively little is publicly known about Android. So
you can't tell at this time.

> I don't expect a reply on this because it will anyway take some five
> years or so to figure out if the above is correct or not.

Well, maybe the problem is that I'm not as visionary as you are. ;-)

Ciao, Michael.
Anders Rundgren
2008-11-27 12:15:54 UTC
Permalink
Michael Ströder wrote:
Let me comment on a few things. We do not disagree on everything, but we look at it from different angles.

>But crypto tokens are not suitable for S/MIME encryption keys because of
>the growing key history needed. So one has to distinguish PKI-enabled
>applications.

Authentication over the web is the killer PKI app and therefore I'm less worried about S/MIME. LDAP (your primary work space?) is a core IT technology, S/MIME is not.

>> PKI in such a setup is just another kind of password.

>Hmm, here I disagree since a password, even when used like in Kerberos,
>leaves the user's system (directly or as shared secret) whereas a
>private key used for signing something during authentication never
>leaves the key store of the client's system.

I did not really mean it on a technical level, but as a domain-restricted use-case.

>> So what is then real problem?
>> 1. The European Smart Card industry who do not want to become suppliers
>> of commodities.

>???
>Each time I talked to smartcard vendors they were keen on selling their
>stuff. The more the better.

You mean there is a standard blank smartcard that you can buy from multiple vendors that works right out of the box in most computer systems? Using what kind of standard personalization software?

>> To achieve that we need a whole bunch of enablement technologies.
>> Most of the PKIX enrollment stuff will be obsolete in 5-10 years from
>> now

>I'd never trust a system where the mobile phone vendor initializes a key
>to avoid an enrollment process. If you really plan to establish such a
>system be assured that I will fight against this.

The idea is rather that the phone vendor provides an Open Key Container which is initialized with a certified device key that is used for key attestations:
http://tinyurl.com/6rg7ap

Some other people working in the same space:
http://research.nokia.com/files/NRCTR2008007.pdf

Anders
Michael Ströder
2008-11-27 14:02:08 UTC
Permalink
Anders Rundgren wrote:
>
> >> So what is then real problem?
> >> 1. The European Smart Card industry who do not want to become suppliers
> >> of commodities.
>
> >???
> >Each time I talked to smartcard vendors they were keen on selling their
> >stuff. The more the better.
>
> You mean there is a standard blank smartcard that you can buy from
> multiple vendors that works right-out-of-the-box in most computer
> systems? Using what kind of standard personalization software?

Different vendors have different smartcards but you can use them from
different applications through PKCS#11 and CAPI/CSP. The software
quality differs.

You claimed that banks do not use PKI with smartcards for authc because
there's nothing available. I don't think so. The banks do not want to
get involved with supporting software/hardware installed at the user's
PC. You should look at the HBCI history.

> >> To achieve that we need a whole bunch of enablement technologies.
> >> Most of the PKIX enrollment stuff will be obsolete in 5-10 years from
> >> now
>
> >I'd never trust a system where the mobile phone vendor initializes a key
> >to avoid an enrollment process. If you really plan to establish such a
> >system be assured that I will fight against this.
>
> The idea is rather that the phone vendor provides an Open Key Container
> which is initialized with a certified device key that is used for key
> attestations:

And how is the device key certified to establish trust?

> http://tinyurl.com/6rg7ap

Pretty vague.

This all does not solve the basic problem which is: People are too lazy
to use this technology to mitigate risks if they are not forced to use
it (by law or security policy).

Ciao, Michael.
Nelson B Bolyard
2008-11-29 06:40:20 UTC
Permalink
Michael Ströder wrote, On 2008-11-27 06:02:
> Anders Rundgren wrote:
>> >> So what is then real problem?
>> >> 1. The European Smart Card industry who do not want to become suppliers
>> >> of commodities.
>>
>> >???
>> >Each time I talked to smartcard vendors they were keen on selling their
>> >stuff. The more the better.
>>
>> You mean there is a standard blank smartcard that you can buy from
>> multiple vendors that works right-out-of-the-box in most computer
>> systems? Using what kind of standard personalization software?
>
> Different vendors have different smartcards but you can use them from
> different applications through PKCS#11 and CAPI/CSP. The software
> quality differs.
>
> You claimed that banks do not use PKI with smartcards for authc because
> there's nothing available. I don't think so. The banks do not want to
> get involved with supporting software/hardware installed at the user's
> PC. You should look at the HBCI history.

I recently had lunch with a Swiss banking executive whose bank now
supports two different USB hardware PKI token gizmos for authentication.
As I recall, one is distributed and supported by the Swiss post office.
The bank seems quite happy to support the devices, given that the bank
is not the sole service to use it, and therefore does not bear the sole
support burden.

I have contacts in the former Soviet Union who claim that Russian banks
now routinely require PKI hardware for authentication as a condition of
online banking.

How sad that I live in a nation that is such a technological backwater. :)
Anders Rundgren
2008-11-29 13:21:48 UTC
Permalink
Nelson B Bolyard wrote:

>I have contacts in the former Soviet Union who claim that Russian banks
>now routinely require PKI hardware for authentication as a condition of
>online banking.

>How sad that I live in a nation that is such a technological backwater. :)

It sure is. The US is about the only major IT-nation where the government
doesn't have even the slightest embryo of an architecture for secure messaging
between agencies, not to mention between agencies and the private sector.
So far they have managed to keep this a secret, since nobody has been able
to decipher what the gazillion "CIO documents", littered with government
buzz-words like FISMA, actually mean for an architect.

Fortunately, most EU governments (with the German-speaking regions
as the notable exception...) have begun to build on architectures based on a
paradigm that banks established 3-4 decades before them:
http://webpki.org/papers/web/gateway.pdf

Another strong reason for that is briefly described in this document:
http://webpki.org/papers/web/A.R.AppliedPKI-Lesson-1.pdf
It is fascinating meeting the consultants that the US government uses,
who all claim that this is nonsense; FIPS201/PIV can do it all!
But since there is no blueprint supporting that position, progress
remains firmly stuck at zero.

Anders
Ian G
2008-11-30 01:19:29 UTC
Permalink
Anders Rundgren wrote:
> Nelson B Bolyard wrote:
>
>> I have contacts in the former Soviet Union who claim that Russian banks
>> now routinely require PKI hardware for authentication as a condition of
>> online banking.
>
>> How sad that I live in a nation that is such a technological backwater. :)
>
> It sure is. The US is about the only major IT-nation where the government
> haven't even the slightest embryo to an architecture for secure messaging
> between agencies, not to mention between agencies and the private sector.
> So far they have managed keeping this a secret, since nobody has been able
> to decipher what the gazillion of "CIO-documents" littered with government
> buzz-words like FISSMA actually means for an architect.
>
> Fortunately, most EU governments have (with the German-speaking regions
> as the notable exception...), begun to build on architectures based on a
> paradigm that banks established 3-4 decades before them:
> http://webpki.org/papers/web/gateway.pdf
>
> Another strong reason for that is briefly described in this document:
> http://webpki.org/papers/web/A.R.AppliedPKI-Lesson-1.pdf
> It is fascinating meeting the consultants that the US government use,
> who all claim that this is nonsense; FIPS201/PIV can do it all!
> But since there is no blueprint supporting that position, progress
> remains firmly stuck at zero.

Hmm, Anders, apologies in advance for the RTFM question, but can you
please summarise those two docs, or explain the essential points in more
detail?

iang
Anders Rundgren
2008-12-02 12:45:34 UTC
Permalink
>Hmm, Anders, apologies in advance for the RTFM question, but can you
>please summarise those two docs, or explain the essential points in more
>detail?

That's the problem in a nutshell; there is no "FM"!

The answer I'm looking for (but know is unavailable) is how to apply
client/employee PKI to the scheme on p2 of:
http://webpki.org/papers/web/A.R.AppliedPKI-Lesson-1.pdf
I have even tried to get academia interested. The answer is always:
"we don't do applications".

Another example is NIST's b2b testbed that does not even mention the
word security: http://www.mel.nist.gov/msid/b2btestbed

Anyway, using a bank-like transaction backbone, you can create secure
networks using very simple means, without having to implement PKI on
the desktop. The latter then becomes a separate mission.

Anders


Ian G
2008-11-27 12:00:30 UTC
Permalink
Michael Ströder wrote:
> Anders Rundgren wrote:
>> Ian G wrote:
>>
>>>> => Encrypting/signing must be made a business requirement in contracts.
>>>> That's the whole point. And there's no technical solution for it.
>>
>>> That's as close to a perfect dilemma as I've come across! It's not a
>>> business requirement, so we must make it a business requirement ...
>>
>> Another alternative is to
>
> Anders, still you fail to see the real problems since you propose
> technical solutions for non-technical issues.
>
> But let's see:
>
>> 1. abandon non-scalable trust infrastructures such as the one
>> required by S/MIME
>
> Why "non-scalable"? Can you be more verbose?

I don't know what Anders thinks, but I see these reasons why
S/MIME is non-scalable:

* it has no open + effective key distribution mechanism. (I exclude
the LDAP stuff as that is generally for internal / corporates, and is
not a general solution for the users.) E.g., after changing laptops
recently, I still cannot s/mime to half my counterparties because I
don't have their certs. This happens regularly with everyone I know...
* it needs a few tweaks in UI to align it with safe usage models:
for example, the "signing" icon has to go because it cannot be used
for signing, because signing is needed for key distribution. It also
cannot be used for signing unless reference is made to the conditions of
signing, and no UI vendor has ever wanted to give time & space to a CPS.
C.f., that recent thread with Nelson, where he reads everything before
signing.
* it needs a click-to-launch method of key-creation, sort of like
that which Anders was demoing with Firefox. Or preferably, it should be
launched by default. "There is only one mode, and it is secure." But
that will likely clash with the next point.
* the security architecture is referred to some IETF committee. This
means it is incapable of modifying its security model to deal with
evolving threats. Anything with its security leadership split across
too many components eventually falls into stasis.

That's off the top of my head. There are others on my blog, likely.

>> 2. abandon schemes that use explicit encryption keys like S/MIME
>
> Are you aware of the requirements for separate encryption keys? Some
> companies have the legal requirements for key escrow in litigation
> cases. That's the main reason why encryption and signature keys are
> separated.


What happens when we add complexity to an already broken system?

>> 3. introduce secure mobile key-storage
>
> Ah, yeah. Did you ever think of a growing key history and such?


Is that the counterparty certs, which would then also disappear every
time someone changed cellphone? Yeah, I agree. It needs a better key
distro mechanism, like the key servers of OpenPGP.
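For reference, the OpenPGP keyserver lookup mentioned above is just an HTTP request following the HKP convention used by the public keyserver networks. A small sketch of building such a request (the server name is an assumed example):

```python
from urllib.parse import urlencode

def hkp_lookup_url(server: str, email: str, op: str = "get") -> str:
    """Build an HKP (OpenPGP HTTP Keyserver Protocol) lookup URL.
    op="get" fetches the key itself, op="index" lists matching keys;
    options=mr requests machine-readable output."""
    query = urlencode({"op": op, "options": "mr", "search": email})
    return f"https://{server}/pks/lookup?{query}"
```

Fetching the resulting URL with any HTTP client returns the counterparty's ASCII-armored key, which is exactly the kind of open distribution mechanism S/MIME lacks.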

>> 4. put the latter in cell phones
>
> Even cell phones can break. And I don't consider them to be trustworthy
> key stores
> 1. with all the control the cell phone provider has over them,
> 2. all the gadgets installed with security issues,
> 3. with the limited data storage size on today's SIM cards.


Sounds about as robust as any Internet software on any modern PC that
bombs out once a year or so :)


> And the main point: You fail to explain how trust is to be established.


Well, there is the old trick I described: do a DH key exchange and then
use the voice to authenticate the checksum over the results.
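That "old trick" (much like the short authentication strings later used in voice encryption protocols) can be sketched in a few lines: run an unauthenticated Diffie-Hellman exchange, then have both parties read a short checksum of the result aloud. A minimal sketch with TOY parameters — the prime below is far too small for real use, and the function names are illustrative, not from any library:

```python
import hashlib
import secrets

# TOY Diffie-Hellman parameters: 2**127 - 1 is prime, but far too small
# for real security; a deployment would use a standardized MODP group.
P = 2**127 - 1
G = 3

def dh_keypair():
    """Generate an ephemeral DH private/public pair."""
    priv = secrets.randbelow(P - 3) + 2
    return priv, pow(G, priv, P)

def dh_shared(priv, peer_pub):
    """Derive the shared secret from our private and the peer's public value."""
    return pow(peer_pub, priv, P)

def short_auth_string(pub_a, pub_b, shared, digits=6):
    """A short checksum both parties read aloud over the voice channel.
    A man-in-the-middle ends up with two different shared secrets, so
    the two spoken strings will not match."""
    lo, hi = sorted((pub_a, pub_b))  # order-independent
    h = hashlib.sha256()
    for v in (lo, hi, shared):
        h.update(v.to_bytes((v.bit_length() + 7) // 8 or 1, "big"))
    return format(int.from_bytes(h.digest()[:4], "big") % 10**digits,
                  f"0{digits}d")
```

Both callers compute the string locally and compare by voice; agreement authenticates the exchange with no CA involved at all.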

(Mind you, let's not get too hung up on this, as "trust" is not defined
yet!)

iang
Michael Ströder
2008-11-27 13:51:50 UTC
Permalink
Just to clarify: I also see a lot of practical problems to be solved
when encrypting/signing e-mails. And I have supported real end-users doing
so. But these are not caused by the S/MIME (or PGP) standards themselves.

Ian G wrote:
> * it has no open + effective key distribution mechanism. (I exclude
> the LDAP stuff as that is generally for internal / corporates, and is
> not a general solution for the users.)

Just exchanging signed S/MIME e-mails is quite easy for most users. The
case that e-mail receivers are completely unknown is fairly rare. This
is a non-issue.

> E.g., after changing laptops recently, I still cannot s/mime to half
> my counterparties because I don't have their certs. This happens
> regularly with everyone I know...

???

I've changed my notebook hard disk quite often. I never lost my Seamonkey
cert DB containing the key history of the last 10 years, since it's part
of the Mozilla profile, which I have backups of. When people in companies
get new PCs there's a backup concept to migrate their old data. If not, the
user has more problems than just the e-mail certs of others.
If you create a new profile in your MUA then you have to import the
certs therein. But does that happen very often?

This is a non-issue.

> * it needs a few tweaks in UI to align it with the safe usage models,
> so, for example the "signing" icon has to go because it cannot be used
> for signing, because signing is needed for key distribution. It also
> cannot be used for signing unless reference is made to the conditions of
> signing, and no UI vendor has ever wanted to give time&space to a CPS.

Maybe it's me, but frankly I don't understand what you are saying here.
In particular, I don't see the need for a "UI vendor" to define a CPS (if
Certification Practice Statement is meant here).

No doubt the UI could be better in some S/MIME-enabled MUAs.

> C.f., that recent thread with Nelson, where he reads everything before
> signing.

The thread about form signing? There was a basic question whether it's
feasible at all and I commented on that.

> * it needs a click-to-launch method of key-creation, sort of like that
> which Anders was demoing with Firefox. Or preferably, it should be
> launched by default. "There is only one mode, and it is secure." But
> that will likely clash with the next point.

Are you talking about the PGP model of peer trust? (Each end-user
defining individual trust for each participant's public key).

> * the security architecture is referred to some IETF committee. This
> means it is incapable of modifying its security model to deal with
> evolving threats. Anything with its security leadership split across
> too many components eventually falls into stasis.

I don't understand this.

>>> 2. abandon schemes that use explicit encryption keys like S/MIME
>>
>> Are you aware of the requirements for separate encryption keys? Some
>> companies have the legal requirements for key escrow in litigation
>> cases. That's the main reason why encryption and signature keys are
>> separated.
>
> What happens when we add complexity to an already broken system?

I fail to see why it's broken. So I can't answer. And I fail to see why
the other schemes proposed are less broken. IMHO the opposite is true.

>>> 3. introduce secure mobile key-storage
>>
>> Ah, yeah. Did you ever think of a growing key history and such?
>
> Is that the counterparty certs, which would then also disappear every
> time someone changed cellphone? Yeah, I agree. It needs a better key
> distro mechanism, like the key servers of OpenPGP.

No, I meant the archived private keys for accessing old encrypted e-mails.
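The growth is easy to picture: every time the encryption certificate is renewed, the old private key must be archived, or mail encrypted under it becomes unreadable. A minimal sketch of such a growing key history (the class and its names are illustrative, not from any real key store):

```python
import hashlib

class KeyHistory:
    """Grows monotonically: rekeying appends, nothing is ever deleted,
    because old ciphertext stays bound to the old decryption key."""

    def __init__(self):
        self._keys = {}      # key-id -> archived private key material
        self.current = None  # key-id used for newly received mail

    @staticmethod
    def key_id(public_bytes: bytes) -> str:
        """Stable identifier derived from the public key, as stored
        alongside each encrypted message."""
        return hashlib.sha1(public_bytes).hexdigest()[:16]

    def rekey(self, public_bytes: bytes, private_key) -> str:
        """Archive a new key pair; the old entries remain."""
        kid = self.key_id(public_bytes)
        self._keys[kid] = private_key
        self.current = kid
        return kid

    def key_for_message(self, kid: str):
        """Look up the (possibly ancient) key a message was encrypted to."""
        try:
            return self._keys[kid]
        except KeyError:
            raise LookupError(f"no archived key {kid}: message unreadable")

    def __len__(self):
        return len(self._keys)
```

This is exactly why a small fixed-capacity token (a SIM, say) is a poor home for encryption keys: the history only ever gets longer.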

>>> 4. put the latter in cell phones
>>
>> Even cell phones can break. And I don't consider them to be
>> trustworthy key stores
>> 1. with all the control the cell phone provider has over them,
>> 2. all the gadgets installed with security issues,
>> 3. with the limited data storage size on today's SIM cards.
>
> Sounds about as robust as any Internet software on any modern PC that
> bombs out once a year or so :)

Yes, there are risks with software on a PC. But on a PC I have a fairly
good chance of keeping control over what I'm using. The mobile phones
tend to be customized. Configuration options are very sparse. There is
no reasonable update mechanism keeping me informed about security
updates. (It was a major PITA to update the buggy firmware on my Sony
Ericsson mobile phone. The update software needed a flash player to be
installed to display some fancy graphics. Uumpf!)

>> And the main point: You fail to explain how trust is to be established.
>
> Well, there is the old trick I described: do a DH key exchange and then
> use the voice to authenticate the checksum over the results.

Yupp. But that's kind of an enrollment process which is what Anders
would like to avoid.

> (Mind you, let's not get too hung up on this, as "trust" is not defined
> yet!)

Trust is like beauty. Beauty is in the eye of the beholder. ;-)

Ciao, Michael.
Anders Rundgren
2008-11-27 15:06:11 UTC
Permalink
Michael Ströder wrote:
>Ian G wrote:
>> * it has no open + effective key distribution mechanism. (I exclude
>> the LDAP stuff as that is generally for internal / corporates, and is
>> not a general solution for the users.)

>Just exchanging signed S/MIME e-mails is quite easy for most users. The
>case that e-mail receivers are completely unknown is fairly seldom. This
>is a non-issue.

The e-mail receivers are seldom unknown but their CAs are. Using
Windows Mail, most PKIX-signed messages give me a black screen
telling me there is something wrong with the message, while messages
asking me to download EXE files pass without warnings.

>> E.g., after changing laptops recently, I still cannot s/mime to half
>> my counterparties because I don't have their certs. This happens
>> regularly with everyone I know...

>???

>I've changed my notebook harddisk quite often. I never lost my Seamonkey
>cert DB containing the key history of the last 10 years since it's part
>of the Mozilla profile which I have backups of. When people in companies
>get new PCs there's backup concept to migrate their old data. If not the
>user has more problems than just the e-mail certs of others.
>If you create a new profile in your MUA then you have to import the
>certs therein. But does that happen very often?

Each time you want to use another computer.
Why do you think I claim that mobile crypto is a prerequisite?

>This is a non-issue.

For hackers, yes. For corporations with IT-support, yes. For consumers
OTOH it is a showstopper.

>> * it needs a few tweaks in UI to align it with the safe usage models,
>> so, for example the "signing" icon has to go because it cannot be used
>> for signing, because signing is needed for key distribution. It also
>> cannot be used for signing unless reference is made to the conditions of
>> signing, and no UI vendor has ever wanted to give time&space to a CPS.

>Maybe it's me but frankly I don't understand what you say here.
>Especially I don't see the need for a "UI vendor" to define a CPS (if
>Certificate Practice Statement is meant here).

I believe Ian is referring to the problem which made me start this thread...
That is, the need for end-users to become trust managers.

Anders
Michael Ströder
2008-11-27 18:00:57 UTC
Permalink
Anders Rundgren wrote:
> Michael Ströder wrote:
>> Ian G wrote:
>>> * it has no open + effective key distribution mechanism. (I exclude
>>> the LDAP stuff as that is generally for internal / corporates, and is
>>> not a general solution for the users.)
>
>> Just exchanging signed S/MIME e-mails is quite easy for most users. The
>> case that e-mail receivers are completely unknown is fairly seldom. This
>> is a non-issue.
>
> The e-mail receivers are seldom unknown but their CAs are. Using
> Windows Mail most PKIX signed messages give me a black screen
> telling there is something wrong with this message, while messages
> asking me to download EXE files pass without warnings.

When I'm in a project working for a company which has an S/MIME CA,
importing the CA cert into my S/MIME-enabled MUA is a no-brainer. What's
the issue? I establish trust for a certain purpose: exchanging secured
e-mail with a certain company so nobody can read the documents *they*
want to keep confidential. I'm happy to do that once for a CA cert
instead of having to initiate a secure key exchange with every employee
of the company.
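The programmatic analogue of that per-project CA import is a trust store scoped to one purpose. A sketch using Python's standard ssl module (the file name is a placeholder):

```python
import ssl
from typing import Optional

def partner_context(ca_pem_path: Optional[str] = None) -> ssl.SSLContext:
    """A TLS client context that trusts only the partner company's CA --
    the moral equivalent of importing one CA cert into a MUA for one
    project, rather than trusting that CA for everything."""
    # PROTOCOL_TLS_CLIENT enables hostname checking and certificate
    # verification by default.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    if ca_pem_path:
        ctx.load_verify_locations(cafile=ca_pem_path)  # trust exactly this CA
    else:
        ctx.load_default_certs()  # fall back to the system-wide store
    return ctx
```

Connections made with this context succeed only against servers certified by the one imported CA, which is precisely the "trust for a certain purpose" idea above.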

The sad thing is: the users, in this case my project colleagues,
sometimes do not know how to use the existing S/MIME infrastructure,
although they enrolled during a user registration process and they
already have everything on their desktop. Although I'm not involved
personally with the S/MIME infrastructure, my attitude is to teach
people how to use it. And they feel better when using it because they
know there's a need for e-mail protection. But they were simply never
taught. That's a non-technical problem. And any other
signature/encryption/whatever standard will suffer from this.

>>> E.g., after changing laptops recently, I still cannot s/mime to half
>>> my counterparties because I don't have their certs. This happens
>>> regularly with everyone I know...
>
>> ???
>
>> I've changed my notebook harddisk quite often. I never lost my Seamonkey
>> cert DB containing the key history of the last 10 years since it's part
>> of the Mozilla profile which I have backups of.
>
> Each time you want to use another computer.

Oh, come on! How often do you *really* do this? And how do you move
around the rest of your workspace? There are many more things to
consider when you want real roaming than just your keys and PKCs of others.

> Why do you think I claim that mobile crypto is a prerequisite?

Either your mobile also runs the apps or you have to integrate your
mobile with the PC on which the whatever-you-call-your-standard-enabled
app runs. The latter is the same problem space as using
smartcards/readers or USB tokens as a key store.

> For hackers, yes. For corporations with IT-support, yes. For consumers
> OTOH it is a showstopper.

BTW: Consumers don't switch PCs so often. My friends and relatives who
get a new PC also try to backup and restore their MUA profile data (or
somebody helps them to do it).

>>> * it needs a few tweaks in UI to align it with the safe usage models,
>>> so, for example the "signing" icon has to go because it cannot be used
>>> for signing, because signing is needed for key distribution. It also
>>> cannot be used for signing unless reference is made to the conditions of
>>> signing, and no UI vendor has ever wanted to give time&space to a CPS.
>
>> Maybe it's me but frankly I don't understand what you say here.
>> Especially I don't see the need for a "UI vendor" to define a CPS (if
>> Certificate Practice Statement is meant here).
>
> I believe Ian is referring to the problem which made me starting this thread...
> That is, the need for end-users to become trust managers.

Everybody is a trust manager. All day everybody is making trust
decisions. But there's no ultimate trust.

Ciao, Michael.
Ian G
2008-11-29 11:20:05 UTC
Permalink
Michael Ströder wrote:
> Anders Rundgren wrote:
>> Michael Ströder wrote:
>>> Ian G wrote:
>>>> * it has no open + effective key distribution mechanism. (I exclude
>>>> the LDAP stuff as that is generally for internal / corporates, and is
>>>> not a general solution for the users.)
>>
>>> Just exchanging signed S/MIME e-mails is quite easy for most users. The
>>> case that e-mail receivers are completely unknown is fairly seldom. This
>>> is a non-issue.
>>
>> The e-mail receivers are seldom unknown but their CAs are. Using
>> Windows Mail most PKIX signed messages give me a black screen
>> telling there is something wrong with this message, while messages
>> asking me to download EXE files pass without warnings.
>
> When I'm in a project working for a company which has a S/MIME CA
> importing the CA cert into my S/MIME-enabled MUA is a no-brainer. What's
> the issue? I establish trust for a certain purpose: Exchanging secured
> e-mail with a certain company so nobody can read the documents *they*
> want to keep confidential. I'm happy to do that once for a CA cert
> instead of having to initiate a secure key exchange with every employee
> of the company.


OK. I certainly understand the objective and the use-case. I can offer
a counterpoint: a recent well-thought-out project to do something
similar started out with S/MIME, and concluded that S/MIME should be
optional because it is brittle, and all email should go through
corporate servers, and TLS should be used for the protection.

(In this case, every user was either an experienced security and tech
person, or an extremely experienced security and tech person.)


> The sad thing is: The users, in this case my project colleagues,
> sometimes do not know how to use the existing S/MIME infrastructure
> although they enrolled during a user registration process and they
> already have everything on their desktop. Although I'm not involved
> personally with the S/MIME infrastructure my attitude is to teach the
> people how to use it. And they feel better when using it because they
> know there's a need for e-mail protection. But they were simply not
> teached. That's a non-technical problem.


IMO, the root cause is not training. Nor legal. To blame some other
process is what we call "shifting the burden," a pattern that allows us
to ignore the root causes.

The root cause is that the S/MIME security model is inefficient; it
doesn't deliver benefits in accordance with the costs imposed.

Funnily enough, users are very savvy. They can spot a worthless system
much more easily than engineers. What they can't do is explain why it
is worthless; they simply bypass it. This is why smart products are
always developed in association with lots of user feedback, and paper
designs generally don't succeed.

In this sense, Mozilla is on the right track with trying to put in place
a user security model that doesn't require user intervention. (E.g.,
the UI hides the CA, from the "all CAs are equal" assumption.) However,
this only works if the result is efficient. As Kyle comments, it isn't,
for S/MIME, and the result is that the model experiences low usage rates.


> And any other
> signature/encryption/whatever standard will suffer from this.


If by "standard" you mean "security model," that's simply not true.
Skype delivers the goods and takes only a few minutes of training.
There is practically no training required to get users to use Skype in
its secure mode, because it nicely follows the idea of "there is only
one mode, and it is secure." Although it is not likely that we can move
email to the same model, it is entirely plausible to adopt 90% of the
ease-of-use, without losing any of the CA certificate benefit.

If, on the other hand, you mean it more literally, as a standards-based
security model, then yes, that's true. Correct me if I'm wrong, but I don't
think any standards approach ever came up with a security model that
works for users.


>>>> E.g., after changing laptops recently, I still cannot s/mime to half
>>>> my counterparties because I don't have their certs. This happens
>>>> regularly with everyone I know...
>>
>>> ???

>>> I've changed my notebook harddisk quite often. I never lost my Seamonkey
>>> cert DB containing the key history of the last 10 years since it's part
>>> of the Mozilla profile which I have backups of.


It is a curious thing: I have been using Tbird for many years, and each
time I migrate I've managed to transport only a portion of the stuff
across. I just spent some time looking and couldn't find the magic
command, so I always wonder... I know there is a thing called profiles,
but where does one import & export them?
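(For what it's worth, there is no import/export command: a profile is just a
directory, and copying it wholesale - with the MUA closed - carries the NSS
cert and key DBs along. A sketch; the profile name is made up, and a fake
profile is built in a temp dir so the demo is runnable anywhere:)

```python
# Sketch: "exporting" a profile = copying its whole directory. The
# profile name "abcd1234.default" is illustrative; real ones are listed
# in profiles.ini. A fake profile is created so this runs anywhere.
import os
import shutil
import tempfile

src = os.path.join(tempfile.mkdtemp(), "abcd1234.default")
os.makedirs(src)
for f in ("cert8.db",   # NSS cert DB: others' certs, CA trust settings
          "key3.db",    # NSS key DB: your own private keys
          "prefs.js"):  # account settings, filters, etc.
    open(os.path.join(src, f), "w").close()

dest = os.path.join(tempfile.mkdtemp(), "abcd1234.default")  # e.g. USB disk
shutil.copytree(src, dest)        # the whole profile, not just the DBs
print(sorted(os.listdir(dest)))   # → ['cert8.db', 'key3.db', 'prefs.js']
```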

>> Each time you want to use another computer.
>
> Oh, come on! How often do you *really* do this? And how do you move
> around the rest of your workspace? There are many more things to
> consider when you want real roaming than just your keys and PKCs of others.


Sure. It's a nightmare. I do it around once a year at least -- full
migration. This year, three times. I hate it.

But it is reality. Saying "you don't need to do that" is just ignoring
the problem by arguing some technicality which is totally irrelevant to
the way users have to live their lives.


>> Why do you think I claim that mobile crypto is a prerequisite?
>
> Either your mobile also runs the apps or you have to integrate your
> mobile with the PC on which the whatever-you-call-your-standard-enabled
> app runs. The latter is the same problem space like using
> smartcards/readers or USB tokens as key store.


Right. Guess what: user-oriented applications like webmail, google
tools, skype (cough!) and so forth solve this problem by integrating the
entire database into some form of network store.


>>>> * it needs a few tweaks in UI to align it with the safe usage models,
>>>> so, for example the "signing" icon has to go because it cannot be used
>>>> for signing, because signing is needed for key distribution. It also
>>>> cannot be used for signing unless reference is made to the
>>>> conditions of
>>>> signing, and no UI vendor has ever wanted to give time&space to a CPS.
>>
>>> Maybe it's me but frankly I don't understand what you say here.
>>> Especially I don't see the need for a "UI vendor" to define a CPS (if
>>> Certificate Practice Statement is meant here).


Not quite, what I mean here is that somehow, the user has to figure out
what is happening. The PKI view is that this is done by referring to
the CPS. The secure browsing view is that it is done by the vendor, on
behalf of the user, and the CPS is reviewed by the vendor for that
purpose. (Yes, these two views are at odds, and the vendor has some
questions to answer here...)

One could surmise that this situation/confusion is good enough for
encryption between websites and users; given that there are lots of
other protections in place, etc. Indeed, this is our informal
preference here, in that we prefer to get more CAs in and more
encryption happening, and this addresses the current threat scenario
which breaches secure browsing by exploiting its rarity.

However: one would be hard-pushed to suggest that this situation /
confusion could be acceptable for users to interchange legally binding
signatures, because there is an absence of other protections in place,
or those protections that are in place are uncertain.

Recall Nelson's view that he does not sign anything without reading.
The wider principle here is that one should not enter into an agreement
unless it is understood. Now, applied to S/MIME, if it implied some
form of digital signing over emails, then it should not be used, because
one cannot read the implied contract (CPS, or whatever), and nobody else
is stepping up to say it's ok, sign away, we're watching your back.
Full understanding is not possible, at any of many layers and levels.

In order to satisfy users' needs for clarity, the governance UI should
present a workable human signing view to the user. But, as we have seen
in recent threads, that is fantasy. It's a non-starter.

Ergo, S/MIME client UI implementations should be modified to drop any
sense of signing, by default, and the digsigs should be used for
integrity protection and key distribution.


>> I believe Ian is referring to the problem which made me starting this
>> thread...
>> That is, the need for end-users to become trust managers.


Yes. Or, the absence of end-to-end trust management in the system, if
we are using that language.

> Everybody is a trust manager. All day everybody is making trust
> decisions. But there's no ultimate trust.


No user can make a trust decision without evaluation of the
circumstances. Without info, it is called gambling. They are indeed
good at evaluation, given the limited resources that they can apply at
any time. However, as S/MIME does not provide any "circumstances" that
suggest a reliable framework for agreements, it should drop the
suggestion entirely.

(Users as a mass have already rejected S/MIME as a signing framework, so
this is more about protecting those users who might otherwise be
mistaken or might otherwise be sold a product by their IT supplier.)

iang
Kyle Hamilton
2008-11-29 23:09:02 UTC
Permalink
On Sat, Nov 29, 2008 at 3:20 AM, Ian G <***@iang.org> wrote:
>
>
>
>> The sad thing is: The users, in this case my project colleagues, sometimes do not know how to use the existing S/MIME infrastructure although they enrolled during a user registration process and they already have everything on their desktop. Although I'm not involved personally with the S/MIME infrastructure my attitude is to teach the people how to use it. And they feel better when using it because they know there's a need for e-mail protection. But they were simply not teached. That's a non-technical problem.
>
>
> IMO, the root cause is not training. Nor legal. To blame some other process is what we call "shifting the burden," a pattern that allows us to ignore the root causes.
>
> The root cause is that the S/MIME security model is inefficient; it doesn't deliver benefits in accordance with the costs imposed.
>
> Funnily enough, users are very savvy. They can spot a worthless system much more easily than engineers. What they can't do is explain why it is worthless; they simply bypass it. This is why smart product is always developed in association with lots of user feedback, and paper designs generally don't succeed.
>
> In this sense, Mozilla is on the right track with trying to put in place a user security model that doesn't require user intervention. (E.g., the UI hides the CA, from the "all CAs are equal" assumption.) However, this only works if the result is efficient. As Kyle comments, it isn't, for S/MIME, and the result is that the model experiences low usage rates.


First off: User training is arguably more technical than computer
infrastructure. You can't simply say "they were simply not teached
[sic]" and "that's a non-technical problem", because computers need to
be taught exactly one thing: how to perform a series of complex tasks.
Users need to be taught that (perhaps not to the specific granularity
of the operations that computers need to be taught, but they do need
to know how to do a series of complex things) as well as something,
perhaps more important: WHY to perform a series of complex tasks.
(Why should someone change the oil in their car? Because it helps the
car's engine last longer. Why should someone go through the
additional mess and morass of using S/MIME? To let themselves in for
more user-interface headache and annoyance?)

The root cause is not "training". There are actually two root causes
of the failure of cryptography to make sizeable inroads into everyday
non-commercial life. First is that the UI designers and programmers
have violated the contract and interface to which the users have been
trained. In other words, WE BROKE THE INTERFACE. Second is that the
threat model currently used for commerce and government is NOT
appropriate for non-commercial and non-governmental social
interaction. In other words, for the "general user", WE DIDN'T HAVE A
GOOD REASON TO BREAK THE INTERFACE.

As cryptographers, we can know -- and show, to a certainty far beyond
any other data-transformation discipline -- several aspects of the
metadata of properly-formatted messages. In our zeal to try to
explain what we can know, and how we can know it, and why we can know
it, we've overwhelmed the coders, the UI designers, the UI experts,
the people who are supposed to distill complex operations, notices,
and warnings to individually-understandable pieces.

I like the idea of putting in a user security model that doesn't
require user intervention -- but only to a point. I /don't/ like the
idea of trying to make the system make all of its security decisions
in a vacuum, especially in areas where the user has historically been
the master (for me, that includes anything which can be covered under
the Electronic Communication Privacy Act of 1986). The problem is
this: the system is not intelligent. Only the user is intelligent.
This means that data which would otherwise not be acceptable under the
system's rules might be acceptable under the user's rules.

This is why I've been in favor of unobtrusive pop-ups (rather like
Growl notifications on the Mac). There are only a couple of pieces of
information truly necessary for any security UI... who it's from, who
says it's from the person it's from, who (ultimately) has been deemed
acceptable to provide that kind of information, and whether it's been
modified in transit. i.e., certificate subject, certificate issuer,
issuer's root authority, and hash-match.
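(A toy sketch of exactly that notification, and no more: the four fields
named above plus a hash-match check. The dict-based "certificate" and the
function name are illustrative only, not any real client API.)

```python
# Toy sketch of the minimal security notice described above: subject,
# issuer, root authority, and hash-match - nothing else. The fake
# "certificate" dict and names are illustrative, not a real API.
import hashlib

def security_notice(cert, body, claimed_digest):
    """One-line, Growl-style notice a client could pop up."""
    intact = hashlib.sha256(body).hexdigest() == claimed_digest
    return ("from %(subject)s | certified by %(issuer)s | root %(root)s | "
            % cert) + ("intact" if intact else "MODIFIED IN TRANSIT")

cert = {"subject": "alice@example.com",
        "issuer": "Example CA",
        "root": "Example Root CA"}
body = b"Please wire the funds today."
digest = hashlib.sha256(body).hexdigest()

print(security_notice(cert, body, digest))                  # hash matches
print(security_notice(cert, b"...to me instead.", digest))  # tampered body
```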

I've also been putting energy toward making it possible for
interactions that don't require positive legal identification to use
certificates bearing the identities that others actually interact
with. As I wrote elsewhere, it's entirely possible for two people to
use the same login name or nickname -- but I've not seen any system
where it is possible for two people to use the same login name within
the same authentication/authorization boundary.

What X.509 needs to be viewed as is not "a means of identification".
It needs to be viewed as "a means of authentication which uses the
same identity policy as the issuing realm" -- in other words, it's a
means of agreeing which set of rules is being used to identify each
person. (Nelson mentioned using certificates across AOL Instant
Messenger. This is a perfect example -- normally, when you
communicate over AIM, you're relying on the AIM realm for
identification and authentication of identity, which it does via
screenname/password tuples. When you use certificates, the user can
essentially throw away the AIM identification [since the only reason
to have it at that point is to tell the AIM network where to route the
message], and instead start talking with that person as though they
were inside the realm whose certificate they used to authenticate
with. This also means that when the communication is over, just
because someone's using that AIM screenname doesn't mean that they're
the same person who authenticated via certificate earlier.)

>> And any other signature/encryption/whatever standard will suffer from this.
>
>
> If by "standard" you mean "security model," that's simply not true. Skype delivers the goods and takes only a few minutes of training. There is practically no training required to get users to use Skype in its secure mode, because it nicely follows the idea of "there is only one mode, and it is secure." Although it is not likely that we can move email to the same model, it is entirely plausible to adopt 90% of the ease-of-use, without losing any of the CA certificate benefit.
>
> Of on the other hand you mean more literally, a standards-based security model, then yes, that's true. Correct me if I'm wrong, but I don't think any standards approach ever came up with a security model that works for users.

It's entirely possible to make a secure mode easy to use for the
users. It's not possible to do it with any of the current
security-model standards.*

*n.b. I haven't looked at all of them, but the ones from the IETF and
the ones from the ITU that I've looked at seem designed to require
advanced degrees to figure out what they're trying to say -- and the
current implementations seem to require pushing that complexity to the
user.

>>>>> E.g., after changing laptops recently, I still cannot s/mime to half
>>>>> my counterparties because I don't have their certs. This happens
>>>>> regularly with everyone I know...
>>>
>>>> ???
>
>>>> I've changed my notebook harddisk quite often. I never lost my Seamonkey
>>>> cert DB containing the key history of the last 10 years since it's part
>>>> of the Mozilla profile which I have backups of.
>
>
> It is a curious thing: I have been using Tbird for many years, and each time I've never managed to transport more than a portion of the stuff across. I just spent some time looking and couldn't find the magic command, so I always wonder... I know there is a thing called profiles, but where does one import & export them?
>
>>> Each time you want to use another computer.
>>
>> Oh, come on! How often do you *really* do this? And how do you move around the rest of your workspace? There are many more things to consider when you want real roaming than just your keys and PKCs of others.
>
>
> Sure. It's a nightmare. I do it around once a year at least -- full migration. This year, three times. I hate it.
>
> But it is reality. Saying "you don't need to do that" is just ignoring the problem by arguing some technicality which is totally irrelevant to the way users have to live their lives.

I'm so glad that one of the cognoscenti can manage to transport his
profile around. Yay, it's possible.

I've suffered three hard disk crashes this year. Fortunately, none
have managed to destroy my most valuable data... but honestly. If we
(the users) are supposed to keep our secret keys under our physical
control, what are we supposed to do? Worse, what if we keep the keys
in a TPM on our board, but we have to change the board?

I have an iDisk. I could theoretically upload my key and certificate
databases up there for backup, but with a single PIN being used to
unlock all of the keys in my PKCS#11 store I can't put different
passwords on them, I can't partition the trust that I have been
granted (unless I put them in separate modules, which I haven't yet
been able to accomplish), I can't do my part to implement the policies
that I'm trusted with upholding... my personal email PIN is the same
as my business email PIN is the same as my business contract-signature
PIN. (And if you ask me why I have my business contract-signature key
at home... haven't you ever worked from home?)
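(One partial workaround, sketched below: peel each key out into its own NSS
DB so each gets its own master password, via NSS's certutil and pk12util.
Nicknames and paths are hypothetical; both tools prompt for passwords
interactively, and the function skips itself where the tools or source DB
are absent.)

```python
# Hedged sketch: partition keys into separate NSS DBs, each with its own
# master password (PIN). Nicknames and paths are hypothetical.
import os
import shutil
import subprocess

def split_key(nickname, src_db, dest_db):
    if shutil.which("pk12util") is None or not os.path.isdir(src_db):
        return "skipped"                     # nothing to do on this box
    os.makedirs(dest_db, exist_ok=True)
    p12 = os.path.join(dest_db, "transfer.p12")
    subprocess.run(["certutil", "-N", "-d", dest_db],
                   check=True)               # prompts for a NEW password
    subprocess.run(["pk12util", "-o", p12, "-n", nickname, "-d", src_db],
                   check=True)               # export one key+cert bundle
    subprocess.run(["pk12util", "-i", p12, "-d", dest_db], check=True)
    os.remove(p12)                           # don't leave the bundle around
    return "split"

print(split_key("Business Contract Key",
                os.path.expanduser("~/.thunderbird/abcd1234.default"),
                os.path.expanduser("~/nssdb-contract")))
```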

This last could be described as, "do I really have to trust that my
backup provider isn't going to break into my personal keystores?
Can't I do something to make it less likely that they'd succeed?"

>>> Why do you think I claim that mobile crypto is a prerequisite?
>>
>> Either your mobile also runs the apps or you have to integrate your mobile with the PC on which the whatever-you-call-your-standard-enabled app runs. The latter is the same problem space like using smartcards/readers or USB tokens as key store.
>
>
> Right. Guess what: user-oriented applications like webmail, google tools, skype (cough!) and so forth solve this problem by integrating the entire database into some form of network store.

Webmail and Google tools solve this problem by not having a local
store, period. You interact with them via an http or https
connection. (If you want to use certificates with webmail, though,
you need to use imap or pop3, and run your PKI-enabled app locally.)
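(To make the "run it locally" half concrete, a hedged Python sketch: pull
the newest message over IMAP so a locally running, PKI-enabled layer can do
the S/MIME work. Host, user and password are placeholders, and the actual
S/MIME verification step is out of scope here.)

```python
# Hedged sketch: fetch mail over IMAP for local S/MIME processing.
# Host and credentials below are placeholders only.
import email
import imaplib

def fetch_latest(host, user, password, mailbox="INBOX"):
    """Return the newest message in `mailbox` as an email Message object."""
    conn = imaplib.IMAP4_SSL(host)      # TLS protects only the transport...
    try:
        conn.login(user, password)
        conn.select(mailbox, readonly=True)
        _typ, data = conn.search(None, "ALL")
        last = data[0].split()[-1]      # highest sequence number = newest
        _typ, msgdata = conn.fetch(last, "(RFC822)")
        # ...end-to-end protection still needs local S/MIME handling:
        return email.message_from_bytes(msgdata[0][1])
    finally:
        conn.logout()

# fetch_latest("imap.example.com", "alice", "secret")  # placeholder creds
```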

Since Skype happens to be this big bad confusing pile of steaming
crud, and since Eddy's using it to enable a red herring attack, I'm
going to ignore it. (Hint: Eddy, just because someone else chooses
not to do things that you've done doesn't mean they're automatically
useless or harmful. I have not seen evidence of Skype being misused,
so I will not raise my voice to decry them -- even though you aren't
seeing evidence of Skype having done an audit, and are raising the hue
and cry based on that. Also, there is nothing wrong with Skype
holding private keys, even if they're not an escrow service. All
Skype needs to do is ensure that they're only being used on behalf of
the account that they're assigned to. All the users need to do is
realize that it's no more secure than AIM's screenname/password
authentication, though it's unlikely that others signed into the
system -- even those whose machines are being routed through for NAT
traversal -- can piece together any of the conversation.)

>>>>> * it needs a few tweaks in UI to align it with the safe usage models,
>>>>> so, for example the "signing" icon has to go because it cannot be used
>>>>> for signing, because signing is needed for key distribution. It also
>>>>> cannot be used for signing unless reference is made to the conditions of
>>>>> signing, and no UI vendor has ever wanted to give time&space to a CPS.
>>>
>>>> Maybe it's me but frankly I don't understand what you say here.
>>>> Especially I don't see the need for a "UI vendor" to define a CPS (if
>>>> Certificate Practice Statement is meant here).
>
>
> Not quite, what I mean here is that somehow, the user has to figure out what is happening. The PKI view is that this is done by referring to the CPS. The secure browsing view is that it is done by the vendor, on behalf of the user, and the CPS is reviewed by the vendor for that purpose. (Yes, these two views are at odds, and the vendor has some questions to answer here...)

http://web.mac.com/wolfoftheair/internetpkirethought.txt

Not my most stellar work, and probably extremely substandard by the
views of those assembled. However, it's a CPS (from the POV of the
certifier, not the vendor) which describes a means to tweak the UI to
a point that I consider necessary.

> One could surmise that this situation/confusion is good enough for encryption between websites and users; given that there are lots of other protections in place, etc. Indeed, this is our informal preference here, in that we prefer to get more CAs in and and more encryption happening, and this addresses the current threat scenario which breaches secure browsing by exploiting its rarity.
>
> However: one would be hard-pushed to suggest that this situation / confusion could be acceptable for users to interchange legally binding signatures, because there is an absence of other protections in place, or those protections that are in place are uncertain.
>
> Recall Nelson's view that he does not sign anything without reading. The wider principle here is that one should not enter into an agreement unless it is understood. Now, applied to S/MIME, if it implied some form digital signing over emails, then it should not be used, because one cannot read the implied contract (CPS, or whatever), and nobody else is stepping up to say it's ok, sign away, we're watching your back. Full understanding is not possible, at any of many layers and levels.

This, right here, is why I have a serious problem accepting the
current CA model.

I've also come up with a means of potentially helping with this
situation, but it relies on OS vendors actually stepping up to the
plate. http://aerowolf.livejournal.com/432470.html#cutid1

> In order to satisfy users' needs for clarity, the governance UI should present a workable human signing view to the user. But, as we have seen in recent threads, that is fantasy. It's a non-starter.
>
> Ergo, S/MIME client UI implementations should be modified to drop any sense of signing, by default, and the digsigs should be used for integrity protection and key distribution.

S/MIME client UIs need to stop handling S/MIME differently from
non-S/MIME (except for the addition of a badge to the chrome).

I'm not yet ready to go into the entire set of traffic-analysis
attacks which can be applied against S/MIME. However, they do exist,
and their existence (and lack of mitigating factors) is worrying to
me.

>>> I believe Ian is referring to the problem which made me starting this thread...
>>> That is, the need for end-users to become trust managers.
>
>
> Yes. Or, the absence of end-to-end trust management in the system, if we are using that language.
>

Erm... more importantly, there is no /central/ trust manager. And as
long as the people clamoring to become central trust managers (i.e.,
the root CAs) refuse to accept that I need information that they won't
certify, and that they certify information that is completely and
abjectly useless to me, I cannot accept them as trust managers.

Also: I hereby put forth that Startcom is not "free". It derives
monetary benefit from the personal information that it demands of
anyone before they're ever approved to become users of the system.
See http://www.turbulence.org/Works/swipe/calculator.html for
information.

>> Everybody is a trust manager. All day everybody is making trust decisions. But there's no ultimate trust.
>
> No user can make a trust decision without evaluation of the circumstances. Without info, it is called gambling. They are indeed good at evaluation, given the limited resources that they can apply at any time. However, as S/MIME does not provide any "circumstances" that suggest a reliable framework for agreements, it should drop the suggestion entirely.
>
> (Users as a mass have already rejected S/MIME as a signing framework, so this is more about protecting those users who might otherwise be mistaken or might otherwise be sold a product by their IT supplier.)

Sure there's ultimate trust. The problem is that there are as many
points of ultimate trust as there are people. If governments want to
get into the business of dictating arbitrary ultimate trust points,
that number goes down to 230 or however many countries there currently
are in the world.

If the UN decided, after that, to issue and run its own CA, that would
create one single ultimate point of trust... for legal interactions,
for fiscal interactions. But not for other interactions.

There is no single point of ultimate trust (and thus single point of
failure) for legal interaction or fiscal interaction. And as for
those of us in non-dictatorships, there would be no single point of
ultimate trust for non-legal/non-fiscal (i.e., social) interaction.
In the US, at least, there's the right of free assembly.

Also, getting into the business of telling someone who to trust, or
what information to trust, puts one squarely into the role of
"fiduciary advisor". Insurance agents, bankers, brokers, and lawyers
are basically the kind of people who get into that group -- and there
are laws strictly limiting what those people can do with their
clients' information or with their clients' trust.

I'm rather sick of asking this question: "What can we do to get the
users to use the technologies that have been developed?"

I'd rather ask this question: "What do the users need that can have
partial or total solutions implemented using the technologies that
have been developed?"

-Kyle H
Ian G
2008-11-30 01:56:15 UTC
Permalink
Kyle Hamilton wrote:

> I'd rather ask this question: "What do the users need that can have
> partial or total solutions implemented using the technologies that
> have been developed?"


Right, good question. I have three partial answers:

* if a standards protocol, Mozilla is interested in implementing it

* if it is useful for developers, then that is good

* if it delivers some benefit to users, then it may align with mission.

iang
Eddy Nigg
2008-11-30 00:33:15 UTC
Permalink
On 11/30/2008 01:09 AM, Kyle Hamilton:
>

Kyle, I must say that I found this particular message highly
interesting! Allow me to respond only on some subjects you've touched
which were of particular interest to me...

> This is why I've been in favor of unobtrusive pop-ups (rather like
> Growl notifications on the Mac). There are only a couple of pieces of
> information truly necessary for any security UI... who it's from, who
> says it's from the person it's from, who (ultimately) has been deemed
> acceptable to provide that kind of information, and whether it's been
> modified in transit. i.e., certificate subject, certificate issuer,
> issuer's root authority, and hash-match.

We've been discussing this previously; I just want to point out that for
S/MIME the UI can be much less intrusive, since S/MIME has been much less
misused so far and most users using it generally have better knowledge.
That's the front side of the coin - the flip side being the low adoption
rate, perhaps. BTW, I wonder if there are any reliable studies concerning
that claim anyway.

But the threats for web sites are currently different than for email,
basically because MITMs (and phishing) of web sites are more attractive
right now. Having said that, I believe that many people routinely send
high-value information unsecured via email - sometimes much higher in
value than credit card details or the like...

> ... my personal email PIN is the same
> as my business email PIN is the same as my business contract-signature
> PIN. (And if you ask me why I have my business contract-signature key
> at home... haven't you ever worked from home?)

Why don't you simply use different smart cards instead?

> Since Skype happens to be this big bad confusing pile of steaming
> crud, and since Eddy's using it to enable a red herring attack, I'm
> going to ignore it. (Hint: Eddy, just because someone else chooses
> not to do things that you've done doesn't mean they're automatically
> useless or harmful. I have not seen evidence of Skype being misused,
> so I will not raise my voice to decry them -- even though you aren't
> seeing evidence of Skype having done an audit, and are raising the hue
> and cry based on that. Also, there is nothing wrong with Skype
> holding private keys, even if they're not an escrow service. All
> Skype needs to do is ensure that they're only being used on behalf of
> the account that they're assigned to.

Kyle, I personally also use Skype in addition to Jabber/XMPP and have
nothing against it. However I must take a stand if their security model
is touted as the solution to all evil, because it's not. First of all
Skype is a centralized system compared to decentralized systems like the
web, email, Jabber and others. There is an inherent difference between
those. Second, one must know the facts and evaluate the risks of Skype
having all the control. I don't care if Skype is encrypted or not,
because I don't have enough information nor control about any of those
aspects. Hence I'm treating it basically as an insecure transport - with
some encryption layer put on top.

However, I wouldn't use Skype for the exchange of critical or
confidential messages and files. Nor can this approach be applied to the
web or email or any other decentralized network; otherwise you'd have to
use only ONE email server from now on, handling all your mail and that
of everyone who interacts with you (i.e. everyone who wants to send you
email would have to have an account at the one-and-only mail server you
are using. In that context, a similar scheme might indeed be applied.)

>
> Also: I hereby put forth that Startcom is not "free". It derives
> monetary benefit from the personal information that it demands of
> anyone before they're ever approved to become users of the system.

Kyle, you must be very careful about what you are accusing StartCom
of!!! Let me explain the following:

StartCom provides some certification services for free, as in free beer.
StartCom isn't a "free" system and never will be, because certification
authorities generally have little to do with "free" beyond the waived
fees - quite the opposite. Actually there is almost nothing "free" about
anything related to CAs - I'm speaking as the operator of a CA and from
my own point of view. "Free" is much easier to find outside the CA
framework...

Concerning StartCom's requirement for registration: StartCom has NEVER,
EVER disclosed any details about its subscribers to any third party, and
has NEVER used or misused subscriber information to promote its own
products or those of others. StartCom are such suckers, they have never
sent out even one email encouraging their own subscribers to buy a paid
product or upgrade to a paid product or service (I'm certain that CAs
like Godaddy do that routinely) [*]. StartCom has a company-wide policy
and a CA policy clearly regulating the use of all subscriber
information. We are very well aware of the special responsibility we
took upon ourselves in everything we do at the CA and have NEVER used or
misused our position in any way. The only exceptions are court orders
and presentations in summarized form to potential investors and partners.

Now, since StartCom must enforce adherence to the StartCom Certification
Policies by all subscribers, the subscriber must provide his/her
personal information during registration. To anybody for whom this
requirement presents a problem, I suggest heading over to a different
CA. (Just have your credit card ready.)

> I'd rather ask this question: "What do the users need that can have
> partial or total solutions implemented using the technologies that
> have been developed?"
>

Or: how to educate the masses to use the technologies which have already
been developed and deployed. I've increasingly come to the conclusion
that the problem is educational (or training, as you put it in the first
part of your mail) and lies in the inability of technology people to
speak non-geek.


[*] I'm certain that there are some on this list which can confirm that
statement from personal experience.

--
Regards

Signer: Eddy Nigg, StartCom Ltd.
Jabber: ***@startcom.org
Blog: https://blog.startcom.org
Ian G
2008-11-30 11:47:59 UTC
Permalink
Eddy Nigg wrote:

> (I'm certain that CAs
> like Godaddy do that routinely) [*].

> [*] I'm certain that there are some on this list which can confirm that
> statement from personal experience.


I use Godaddy for some domains. I don't think they have ever sent me an
email except to notify me of a need to renew. However, that email and
their website are so full of stuff that they are rather hard to decipher
and navigate.

iang
Eddy Nigg
2008-11-30 12:21:50 UTC
Permalink
On 11/30/2008 01:47 PM, Ian G:
> Eddy Nigg wrote:
>
>> (I'm certain that CAs like Godaddy do that routinely) [*].
>
>> [*] I'm certain that there are some on this list which can confirm
>> that statement from personal experience.
>
>
> I use Godaddy for some domains. I don't think they have ever sent me an
> email except for the purpose of notifying of a need to renew. However,
> that email and their website is so full of stuff that they are rather
> hard to decipher and navigate.
>

I'm glad to hear that they have a responsible policy as well! The world
seems to be better than I thought :-)


--
Regards

Signer: Eddy Nigg, StartCom Ltd.
Jabber: ***@startcom.org
Blog: https://blog.startcom.org
Eddy Nigg
2008-11-30 12:28:08 UTC
Permalink
On 11/30/2008 01:47 PM, Ian G:
> Eddy Nigg wrote:
>
>> (I'm certain that CAs like Godaddy do that routinely) [*].
>
>> [*] I'm certain that there are some on this list which can confirm
>> that statement from personal experience.
>
>
> I use Godaddy for some domains. I don't think they have ever sent me an
> email except for the purpose of notifying of a need to renew. However,
> that email and their website is so full of stuff that they are rather
> hard to decipher and navigate.
>

Hehe, btw, I meant it the other way around as well: there are some on
this list who can confirm that they never received any mail not relevant
to the service they signed up for - not from us nor from any other party
(hint: we don't sell subscriber information, as Kyle suggested).


--
Regards

Signer: Eddy Nigg, StartCom Ltd.
Jabber: ***@startcom.org
Blog: https://blog.startcom.org
Michael Ströder
2008-11-30 13:38:59 UTC
Permalink
Kyle Hamilton wrote:
> First off: User training is arguably more technical than computer
> infrastructure. You can't simply say "they were simply not teached
> [sic]" and "that's a non-technical problem",

Let me rephrase: the decision whether users are trained is a business
decision, since budget has to be spent on it. That is a non-technical
decision, and the lack of training even though the technical
infrastructure already exists is therefore a non-technical problem which
cannot be solved by yet another technical infrastructure.

> perhaps more important: WHY to perform a series of complex tasks.

Yes, a very valid question.

> (Why should someone change the oil in their car? Because it helps the
> car's engine last longer.

Good example.

> Why should someone go through the
> additional mess and morass of using S/MIME? To let themselves in for
> more user-interface headache and annoyance?)

My point was that the users themselves were already aware of the
problems with unencrypted e-mail. They felt better encrypting their
e-mails because they want to avoid harm to the company which pays their
wages.

> What X.509 needs to be viewed as is not "a means of identification".
> It needs to be viewed as "a means of authentication which uses the
> same identity policy as the issuing realm" -- in other words, it's a
> means of agreeing which set of rules is being used to identify each
> person.

I agree here. As I wrote in another posting, there's no ultimate trust.
And IMO in most deployments people are quite aware of this.

>>> Everybody is a trust manager. All day everybody is making trust decisions. But there's no ultimate trust.
>> No user can make a trust decision without evaluation of the circumstances. Without info, it is called gambling. They are indeed good at evaluation, given the limited resources that they can apply at any time. However, as S/MIME does not provide any "circumstances" that suggest a reliable framework for agreements, it should drop the suggestion entirely.
>>
>> (Users as a mass have already rejected S/MIME as a signing framework, so this is more about protecting those users who might otherwise be mistaken or might otherwise be sold a product by their IT supplier.)
>
> Sure there's ultimate trust.

I disagree. You make trust decisions only in a certain context.

To avoid getting too philosophical, a PKI-related example: you would
trust your employer to issue certs for encrypting corporate
business-related e-mails and even accept that the private keys are
subject to key recovery/escrow within the company's context. You would
probably not want to use these keys for personal communication
exchanging intimate details of your private life.

> The problem is that there are as many
> points of ultimate trust as there are people.

I'd argue that there even are many points of trust per person. ;-)

But the trust model is not the main obstacle.

Ciao, Michael.
Kyle Hamilton
2008-12-03 21:57:33 UTC
Permalink
On Sun, Nov 30, 2008 at 5:38 AM, Michael Ströder <***@stroeder.com> wrote:
>> Sure there's ultimate trust.
>
> I disagree. You are making trust decision only in a certain context.
>
> To avoid getting too philosophical a PKI-related example: You would trust
> your employer to issue certs for encrypting corporate business-related
> e-mails and even accept that the private keys are subject of key
> recovery/escrow within the company's context. You would probably not want to
> use these keys for personal communication exchanging intimate details of
> your private life.
>
>> The problem is that there are as many
>> points of ultimate trust as there are people.
>
> I'd argue that there even are many points of trust per person. ;-)
>
> But the trust model is not the main obstacle.

What I meant by "sure there's ultimate trust": each person has exactly
one place to look to ensure that his or her interests are protected,
and that's the user him- or herself. Each person looks at the context
and the content, and decides whether it's okay or not.

A person can choose not to be employed by a corporation that wants to
use S/MIME and employee certificates; they can choose to be employed
somewhere else. Thus, a person does have the ability to exercise his or
her own judgement about whether to accept culpability -- be it legal
liability, or being called on the carpet for not using the S/MIME tools
provided.

Every place that the point-of-ultimate-trust decides to place trust is
"delegated trust". I call it this because it's entirely possible (in
the event that one changes jobs, for example) that one does not have
to trust the former job's CA.

Now, if you turn it around and look at it from the corporation's
viewpoint (since the corporation is a legal entity): its CA is where
it places ultimate trust (since it's a CA that theoretically falls
under the corporate policy, and the corporation must trust its own
policy), which allows it to prove that it has delegated trust to
someone to whom it has issued a token. This CA doesn't put all of its
trust in any specific individual, it puts its trust in a role defined
by its policy -- essentially, it evolved a body part to help it manage
trust.

The problem with having a single CA (which is probably fine for a
corporation unless it's got many many arms) is that it allows one to
express trust under only one policy. At this point, "signing a
certificate" implies "the sole policy under which a CA can sign a
certificate has been satisfied".

As an example of an alternative:

A CA issues a CA certificate to an end entity, with the provision that
the end entity will only use it to sign certificates identifying
services that the end entity itself wants to create. (This is more for
persons than for webservers, obviously.)

A user decides to run a peer-to-peer networking system that uses
certificates, decides to run a webserver, and decides to open a couple
more network ports. Instead of leaving these ports unencrypted, he uses
his certificate to sign new certificates for the services that the user
-- as a sovereign -- has allowed to delegate trust from his credential.
Each of these certificates includes the name of the machine, the port
it's running on, and the protocol that any client is expected to speak
to it (and -- possibly, though I don't like it, due to notebooks
traversing network boundaries and thus getting new IPs -- the IP that
it's listening on).

These are different from "proxy certificates", in that they don't try
to state that the user has authorized a third party to operate on his
behalf and charge his account. These are much more akin to the "end
entity" certificates which are used to identify websites at this time.
This means that the issuing certificate must have CA:true and a
path-length limit of at least 1 -- which is something no current CA ever
does, even though it would let them point precisely to where any issue
started and revoke that certificate if it were abused. (Really, we
should be encouraging end entities to manage their own certificates like
CA certificates, generating sub-EE CAs for date ranges and using those.
We have the software to do it (or at least the CAs do); now all we need
is to change the overarching paradigm to allow for it.)
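The delegation chain described here can be sketched with the pyca/cryptography library. Everything below -- the names, the one-year lifetime, the way the service name encodes host, port and protocol into the CN -- is illustrative, not any real CA's issuing practice:

```python
# Hedged sketch of the delegation idea above, using pyca/cryptography.
# All names and lifetimes are made up for illustration.
import datetime

from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa


def make_key():
    return rsa.generate_private_key(public_exponent=65537, key_size=2048)


def issue(subject, issuer, issuer_key, public_key, ca, path_length=None):
    """Build and sign a certificate; ca/path_length set basicConstraints."""
    now = datetime.datetime.utcnow()
    return (
        x509.CertificateBuilder()
        .subject_name(subject)
        .issuer_name(issuer)
        .public_key(public_key)
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=365))
        .add_extension(x509.BasicConstraints(ca=ca, path_length=path_length),
                       critical=True)
        .sign(issuer_key, hashes.SHA256())
    )


root_key, user_key, svc_key = make_key(), make_key(), make_key()

root = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Root CA")])
user = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "alice@example.org")])
# The service cert's name encodes machine, port and protocol, as proposed.
svc = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME,
                                    "p2p:alice.example.org:8443")])

# The CA hands the end entity signing power one level deep: CA:true with a
# path length of 0, so Alice can sign end-entity certs but no further CAs.
user_cert = issue(user, root, root_key, user_key.public_key(),
                  ca=True, path_length=0)
# Alice, acting as a sub-EE CA, identifies one of her own services.
svc_cert = issue(svc, user, user_key, svc_key.public_key(), ca=False)
```

Validating such a chain requires exactly the paradigm change argued for above: today's validators do not expect end entities to hold CA:true certificates.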

Not that I expect anyone to do this, I'm just trying to give examples
of alternative worldviews which could lead to wider adoption of
certificates.

-Kyle H
Michael Ströder
2008-11-29 22:38:19 UTC
Permalink
Ian G wrote:
> Michael Ströder wrote:
>> Anders Rundgren wrote:
>>> Michael Ströder wrote:
>
> I can offer a counterpoint: a recent well-thought-out project to do
> something similar started out with S/MIME, and concluded that S/MIME
> should be optional because it is brittle,

The phrase "because it is brittle" does not give any particular reason.
So it's impossible to answer in a meaningful way.

> and all email should go through corporate servers, and TLS should be
> used for the protection.

And how did you ensure that *all* of the relevant communication partners
had StartTLS enabled in their MUAs, that all mail passed through
corporate servers (especially the receiving MX), and how was CA trust
handled? Pretty similar problems...

> (In this case, every user was either an experienced security and tech
> person, or an extremely experienced security and tech person.)

Well, strange...

The inexperienced users I've taught to use the existing S/MIME
infrastructure were capable of doing so.

>> The sad thing is: The users, in this case my project colleagues,
>> sometimes do not know how to use the existing S/MIME infrastructure
>> although they enrolled during a user registration process and they
>> already have everything on their desktop. Although I'm not involved
>> personally with the S/MIME infrastructure my attitude is to teach the
>> people how to use it. And they feel better when using it because they
>> know there's a need for e-mail protection. But they were simply not
>> teached. That's a non-technical problem.
>
> IMO, the root cause is not training.

The root cause is that protecting e-mails is not enforced/endorsed
within companies even if they have a working infrastructure. The lack of
training is the consequence of this.

> Nor legal.

Yes, not a legal issue.

> The root cause is that the S/MIME security model is inefficient; it
> doesn't deliver benefits in accordance with the costs imposed.

I disagree (see above).

> Funnily enough, users are very savvy.

I agree (see above ;-).

> They can spot a worthless system
> much more easily than engineers. What they can't do is explain why it
> is worthless; they simply bypass it. This is why smart product is
> always developed in association with lots of user feedback, and paper
> designs generally don't succeed.

As I wrote the users themselves felt better after protecting e-mails
with encryption. They are savvy. They know that it's bad not to encrypt
e-mails.

In another project my task was to explicitly support external partners
of a company that had an S/MIME infrastructure in getting S/MIME-enabled
themselves. A normal user from within that company had triggered this
support request. I tried to contact the admins of several external
partners to help them, but they just ignored it. => the lack of security
requirements in the contracts was the root cause of the failure. This
has to be improved.

> In this sense, Mozilla is on the right track with trying to put in place
> a user security model that doesn't require user intervention. (E.g.,
> the UI hides the CA, from the "all CAs are equal" assumption.) However,
> this only works if the result is efficient. As Kyle comments, it isn't,
> for S/MIME, and the result is that the model experiences low usage rates.

In any case both the sender *and* the receiver have to be enabled for
e-mail protection => the very same problems will arise.

>> And any other signature/encryption/whatever standard will suffer from
>> this.
>
> If by "standard" you mean "security model," that's simply not true.

Yes, IMO the security model is part of a standard (cryptographic protocol).

> Skype delivers the goods and takes only a few minutes of training.

I don't trust Skype! AFAIK the protocol and security model were never
publicly reviewed. And it only works with access to a central
infrastructure. I'd never accept such a thing.

> Correct me if I'm wrong, but I don't
> think any standards approach ever came up with a security model that
> works for users.

A pretty broad statement...

>>>>> E.g., after changing laptops recently, I still cannot s/mime to half
>>>>> my counterparties because I don't have their certs. This happens
>>>>> regularly with everyone I know...
>>>
>>>> ???
>>>>
>>>> I've changed my notebook harddisk quite often. I never lost my
>>>> Seamonkey
>>>> cert DB containing the key history of the last 10 years since it's part
>>>> of the Mozilla profile which I have backups of.
>
> It is a curious thing: I have been using Tbird for many years, and each
> time I've never managed to transport more than a portion of the stuff
> across. I just spent some time looking and couldn't find the magic
> command, so I always wonder... I know there is a thing called profiles,
> but where does one import & export them?

By simply copying files? That's how I'm doing it.
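For what it's worth, the copy really is that simple. A minimal Python sketch, with stand-in directory names in place of real profile paths (cert8.db, key3.db and secmod.db are the NSS database files of that era):

```python
# Hedged sketch: the NSS certificate/key databases in a Mozilla profile
# migrate by plain file copy. The directory names are stand-ins; a real
# profile lives somewhere like ~/.thunderbird/<id>.default/.
import os
import shutil

OLD_PROFILE = "old-profile"  # stand-in for the old profile directory
NEW_PROFILE = "new-profile"  # stand-in for the new profile directory

NSS_DBS = ("cert8.db",   # certificates, including those of counterparties
           "key3.db",    # your private keys
           "secmod.db")  # PKCS#11 security module configuration

# Fake the databases NSS would normally have created in the old profile,
# just so the sketch runs stand-alone.
os.makedirs(OLD_PROFILE, exist_ok=True)
os.makedirs(NEW_PROFILE, exist_ok=True)
for db in NSS_DBS:
    open(os.path.join(OLD_PROFILE, db), "ab").close()

# The actual "migration": copy the files, preserving timestamps.
for db in NSS_DBS:
    shutil.copy2(os.path.join(OLD_PROFILE, db),
                 os.path.join(NEW_PROFILE, db))
```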

>>> Each time you want to use another computer.
>>
>> Oh, come on! How often do you *really* do this? And how do you move
>> around the rest of your workspace? There are many more things to
>> consider when you want real roaming than just your keys and PKCs of
>> others.
>
> Sure. It's a nightmare. I do it around once a year at least -- full
> migration. This year, three times. I hate it.

Migration is something other than roaming. Migration doesn't happen that
often, even if you personally feel that three times per year is quite
often - and I fully agree it's a pain. But migrating keys is the least
of the work.

In contrast, roaming means you switch workstations every day, or even
multiple times a day, and can use your complete environment without any
additional work. In practice this does not work for power users.

>> Either your mobile also runs the apps or you have to integrate your
>> mobile with the PC on which the
>> whatever-you-call-your-standard-enabled app runs. The latter is the
>> same problem space like using smartcards/readers or USB tokens as key
>> store.
>
> Right. Guess what: user-oriented applications like webmail, google
> tools, skype (cough!) and so forth solve this problem by integrating the
> entire database into some form of network store.

I wouldn't trust anybody to run my personal key store on the Internet.
Period. And most of my friends (non-IT people) don't use webmail; they
use MUAs on their PCs with POP3/SMTP.

>>>>> * it needs a few tweaks in UI to align it with the safe usage
>>>>> models,
>>>>> so, for example the "signing" icon has to go because it cannot be used
>>>>> for signing, because signing is needed for key distribution. It also
>>>>> cannot be used for signing unless reference is made to the
>>>>> conditions of
>>>>> signing, and no UI vendor has ever wanted to give time&space to a CPS.
>>>
>>>> Maybe it's me but frankly I don't understand what you say here.
>>>> Especially I don't see the need for a "UI vendor" to define a CPS (if
>>>> Certificate Practice Statement is meant here).
>
> Not quite, what I mean here is that somehow, the user has to figure out
> what is happening.

Yes.

> The PKI view is that this is done by referring to
> the CPS.

No.

> Recall Nelson's view that he does not sign anything without reading. The
> wider principle here is that one should not enter into an agreement
> unless it is understood.

Yes.

> Now, applied to S/MIME, if it implied some
> form digital signing over emails, then it should not be used, because
> one cannot read the implied contract (CPS, or whatever), and nobody else
> is stepping up to say it's ok, sign away, we're watching your back. Full
> understanding is not possible, at any of many layers and levels.

Whether one would like to use S/MIME to sign something legally binding
is beyond my scope. That would be a business decision evaluating the
risk in a certain context.

> In order to satisfy users' needs for clarity, the governance UI should
> present a workable human signing view to the user. But, as we have seen
> in recent threads, that is fantasy. It's a non-starter.
>
> Ergo, S/MIME client UI implementations should be modified to drop any
> sense of signing, by default, and the digsigs should be used for
> integrity protection and key distribution.

You're wildly mixing stuff here.

>> Everybody is a trust manager. All day everybody is making trust
>> decisions. But there's no ultimate trust.
>
> No user can make a trust decision without evaluation of the
> circumstances.

In fact evaluation of the circumstances is never done completely.

> Without info, it is called gambling.

Yes, that's exactly what we all do each and every day when making trust
decisions:
If I go to a bakery in the morning to buy a brezel for breakfast, I have
only a very vague idea of whether I can really trust them not to do any
harm to my health with the food they are selling. Even though the shop
has a certificate in the sales room proving that the baker holds the
title "Meister" (required in Germany), it's completely impractical to
evaluate all the circumstances of this little deal. So yes, it's simply
gambling with high risks.

Well, my feeling is that I said everything I have to say in this context.

Ciao, Michael.
Ian G
2008-11-30 14:32:21 UTC
Permalink
For me, the purpose of this debate is finding out what users can expect
from Mozilla by way of security. For the purpose of this question, we
see below that users can be divided into corporate users and individuals.



Michael Ströder wrote:
> Ian G wrote:
> Well, strange...

sure, snipping this.

> The root cause is that protecting e-mails is not enforced/endorsed
> within companies even if they have a working infrastructure. The lack of
> training is the consequence of this.

OK, so would you agree that this is not very useful for non-company
people, like your mum and mine?

If so, if we agree on that, we might also say "well, companies can look
after themselves" and/or "Mozilla has no offering suitable for secure
email for ordinary users."

I don't know about you, but I'm here at Mozilla to get a solution for
everyone. Companies come second in my book, because they can pay.
Probably this is just a personal fetish of mine, and I don't mind being
told that Mozilla doesn't agree. But currently its mission seems to
suggest that *all users*, and especially non-corporate users, are the
ones Mozilla targets.


>> They can spot a worthless system much more easily than engineers.
>> What they can't do is explain why it is worthless; they simply bypass
>> it. This is why smart product is always developed in association with
>> lots of user feedback, and paper designs generally don't succeed.
>
> As I wrote the users themselves felt better after protecting e-mails
> with encryption. They are savvy. They know that it's bad not to encrypt
> e-mails.
>
> In another project my task was to explicitly support external partners
> of a company with a S/MIME infrastructure to get S/MIME-enabled. A
> normal user from within that company had triggered this support request.
> I tried to contact the admins of several external partners to help them
> but they just ignored it. => the lack of security requirements in the
> contracts were the root cause for failure. This has to be enhanced.


Right, but that doesn't change the underlying economic model: the use
of S/MIME does not sustain itself. You need to put in fines or
penalties or additional costs in order to make it work. Without that,
it is not "economic". However, you always have to pay those
fines/costs/penalties ... and these need to be balanced against the
benefit of S/MIME. So even with the fees/rules/contracts, S/MIME is not
economic until you have shown the benefit.

This is a classic failing of the security world. Having established the
absolute need for "secure X", we then run around and organise the
business so that there are incentives to ensure "secure X". However,
there is no particular analysis of whether it was worth the cost,
because it is assumed to be "worth any cost".


>>> And any other signature/encryption/whatever standard will suffer from
>>> this.
>>
>> If by "standard" you mean "security model," that's simply not true.
>
> Yes, IMO the security model is part of a standard (cryptographic protocol).


Except that the things found in modern standards are more security
templates or examples than models. They are more akin to recipes: if you
have threats like X, Y, Z, and business like A, B, C, then you can do
1, 2, 3.

Security models are individual to businesses and individuals. They
derive from the threats to the persons involved, which again are
peculiar to each business and individual. WYTM -- "What's Your Threat
Model?" Without walking that path, we are simply "protecting what we
can" rather than "protecting the business."


>> Skype delivers the goods and takes only a few minutes of training.
>
> I don't trust Skype! AFAIK the protocol and security model was never
> publicly reviewed. And it only works with access to a central
> infrastructure. I'd never accept such a thing.


Good. So you -- and a few others here -- don't like Skype. However, you
are far outnumbered by those who do. Luckily, you have a choice of
client software, so this is not a big issue for you.

However, my point in pushing these examples is not to get you to adopt
the product (choice!) but to consider the model (what you can get for
very little user interaction and zero paid cost). This architectural
model offers benefits that could be utilised by Mozilla's users, as
opposed to what corporate customers might pay for. More below on this...


>> Correct me if I'm wrong, but I don't think any standards approach ever
>> came up with a security model that works for users.
>
> A pretty broad statement...


DNSsec, IPSec, S/MIME, PKI in general, WAP, WEP, ...

SSL was invented before it went to standards. SSH: same. OpenPGP
followed the designs of PRZ then PGP Inc. Hush was a private design.

Skype never made it to standards. New payment systems also tend to
avoid standards.

The point is that standards committees *may* be able to standardise an
already successful design; they do not seem to have any record of
creating new designs, nor of fixing old ones.

As I say, correct me if I am wrong! Counterpoints might be: GSM?


>> It is a curious thing: I have been using Tbird for many years, and
>> each time I've never managed to transport more than a portion of the
>> stuff across. I just spent some time looking and couldn't find the
>> magic command, so I always wonder... I know there is a thing called
>> profiles, but where does one import & export them?
>
> By simply copying files? That's how I'm doing it.


Right, this is out of scope for users. (And, sure, I have enough unix
experience to figure it out too, but, these days I treat such a thing as
a bug.)


>>> Either your mobile also runs the apps or you have to integrate your
>>> mobile with the PC on which the
>>> whatever-you-call-your-standard-enabled app runs. The latter is the
>>> same problem space like using smartcards/readers or USB tokens as key
>>> store.
>>
>> Right. Guess what: user-oriented applications like webmail, google
>> tools, skype (cough!) and so forth solve this problem by integrating
>> the entire database into some form of network store.
>
> I wouldn't trust anybody to run my personal key store on the Internet.
> Period.


There is an assumption that private/public keys have to be treated as
key pairs, protected by the person and distributed to the public. This
may be OK for a system that is built that way, but it's only an
assumption.

It's just architecture; private keys are there to be used, not treated
like sacred objects.

Consider that most systems use a username and password. This is *the
standard*, albeit de facto. To move this to a "personal key store on the
net" scenario, we encrypt the private key with the password. Indeed, we
encrypt the entire account store with the password. This way, we can log
in from wherever and get access to the whole lot. The client downloads
the encrypted store, decrypts it with the password, then extracts the
private key.

Hey presto, an application that is better than today's standard.
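A minimal sketch of that scheme in Python, assuming PBKDF2 for the password derivation and Fernet (from pyca/cryptography) for the authenticated encryption; the iteration count and the key blob are illustrative, not a vetted design:

```python
# Hedged sketch of the password-encrypted network key store described
# above: the server only ever sees ciphertext, and the client derives
# the encryption key from the login password.
import base64
import hashlib
import os

from cryptography.fernet import Fernet  # authenticated symmetric encryption


def _derive(password: bytes, salt: bytes) -> Fernet:
    # PBKDF2-HMAC-SHA256 -> 32-byte key, base64-encoded as Fernet expects.
    k = hashlib.pbkdf2_hmac("sha256", password, salt, 200_000)
    return Fernet(base64.urlsafe_b64encode(k))


def upload_blob(password: bytes, key_store: bytes, salt: bytes) -> bytes:
    """What the client sends to the server: the encrypted account store."""
    return _derive(password, salt).encrypt(key_store)


def download_blob(password: bytes, ciphertext: bytes, salt: bytes) -> bytes:
    """Client side: fetch, decrypt with the password, extract the keys."""
    return _derive(password, salt).decrypt(ciphertext)


salt = os.urandom(16)  # stored on the server next to the ciphertext
store = b"-----BEGIN PRIVATE KEY----- (whole account store goes here)"
blob = upload_blob(b"correct horse", store, salt)
assert download_blob(b"correct horse", blob, salt) == store
```

A wrong password fails the ciphertext's integrity check, so the server never needs to see the password or the plaintext at all.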


> And most of my friends (non IT people) don't use webmail. They
> use MUAs on their PCs with POP3/SMTP.

It is odd! There are two big groups here, with no real favourites. The
market isn't giving us any clear signals.


>>>>>> * it needs a few tweaks in UI to align it with the safe usage
>>>>>> models,
>>>>>> so, for example the "signing" icon has to go because it cannot be
>>>>>> used
>>>>>> for signing, because signing is needed for key distribution. It also
>>>>>> cannot be used for signing unless reference is made to the
>>>>>> conditions of
>>>>>> signing, and no UI vendor has ever wanted to give time&space to a
>>>>>> CPS.
>>>>
>>>>> Maybe it's me but frankly I don't understand what you say here.
>>>>> Especially I don't see the need for a "UI vendor" to define a CPS (if
>>>>> Certificate Practice Statement is meant here).
>>
>> Not quite, what I mean here is that somehow, the user has to figure
>> out what is happening.
>
> Yes.
>
>> The PKI view is that this is done by referring to the CPS.
>
> No.


How so? How do you define any use of certs without referring to the CPS
or equivalent set of documents?

>> Recall Nelson's view that he does not sign anything without reading.
>> The wider principle here is that one should not enter into an
>> agreement unless it is understood.
>
> Yes.
>
>> Now, applied to S/MIME, if it implied some form digital signing over
>> emails, then it should not be used, because one cannot read the
>> implied contract (CPS, or whatever), and nobody else is stepping up to
>> say it's ok, sign away, we're watching your back. Full understanding
>> is not possible, at any of many layers and levels.
>
> Whether one would like to use S/MIME to sign something legally binding
> is beyond my scope. That would be a business decision evaluating the
> risk in a certain context.
>
>> In order to satisfy users' needs for clarity, the governance UI should
>> present a workable human signing view to the user. But, as we have
>> seen in recent threads, that is fantasy. It's a non-starter.
>>
>> Ergo, S/MIME client UI implementations should be modified to drop any
>> sense of signing, by default, and the digsigs should be used for
>> integrity protection and key distribution.
>
> You're wildly mixing stuff here.


That's partly the point. If the user can sanely untangle the stuff,
then well and good. I claim she cannot. If you-as-insider can, then
that's a start, but it needs to be put in a framework that is both safe
for users and understandable. That we haven't got.


>>> Everybody is a trust manager. All day everybody is making trust
>>> decisions. But there's no ultimate trust.
>>
>> No user can make a trust decision without evaluation of the
>> circumstances.
>
> In fact evaluation of the circumstances is never done completely.


Of course.


>> Without info, it is called gambling.
>
> Yes, that's exactly what we all do each and every day when making trust
> decisions:
> If I go to a bakery in the morning buying a brezel for breakfast I have
> a very vague idea of whether I can really trust them for not doing any
> harm to my health with the food they are selling. Even though this shop
> has a certificate in the sales room proving that this baker has the
> title "Meister" (needed in Germany) it's completely impractical to
> evaluate all the circumstances of this little deal. So yes, it's simply
> gambling with high risks.


No, not at all. When you go to the bakery you get a brezel. The risk
of you not getting a brezel is very low. The information you get is
reliable, even if you do not "audit" the very making of your brezel.
This is called risk management.

(BTW, what is a brezel?)

When you go to the casino and put some chips down on the 00, you have a
1 in 48 (or somesuch) chance of getting a win. That's called gambling.
It's also highly reliable, you can calculate your odds, but the info
of the future event is lacking.

Huge difference.

> Well, my feeling is that I said everything I have to say in this context.


It's a tough subject :) But I think we have got to the point where your
context is more or less the corporate usage, and my context is more or
less the individual user.

iang
Michael Ströder
2008-11-30 18:25:00 UTC
Permalink
Ian G wrote:
>
> Michael Ströder wrote:
>
>> The root cause is that protecting e-mails is not enforced/endorsed
>> within companies even if they have a working infrastructure. The lack of
>> training is the consequence of this.
>
> OK, so would you agree that this is not very useful for the non-company
> people, like yours and my mum?
>
> If so, if we agree on that, we might also say "well, companies can look
> after themselves;" and/or "Mozilla has no offering suitable for secure
> email for ordinary users."

Let me check that. I'll try to teach my friends to use S/MIME and see
what happens.

> I don't know about you, but I'm here at Mozilla to get a solution to
> everyone.

I appreciate that.

> Companies come second in my book, because they can pay.
> Probably, this is just a personal fetish of mine, and I don't mind being
> told that Mozilla doesn't agree. But currently, its mission seems to
> suggest that *all users* and especially non-corporate users are the ones
> that Mozilla targets.

Agreed in the focus of Mozilla project.

> Right, but that doesn't change the underlying economic model: the use
> of S/MIME does not sustain itself. You need to put in fines or
> penalties or additional costs in order to make it work. Without that,
> it is not "economic". However, you always have to pay those
> fines/costs/penalties ... and these need to be balanced against the
> benefit of S/MIME. So even with the fees/rules/contracts, S/MIME is not
> economic until you have shown the benefit.
>
> This is a classical failing of the security world. Having established
> the absolute need for "secure X" we then run around and organise the
> business such that there are incentives to ensure "secure X". However,
> there is no particular analysis that it was worth the cost, because it
> is assumed to be "worth any cost".

Well, I'd argue that the classic failing of the economic world is that
it always wants proof of the monetary fines/costs/penalties of a risk.
But there are other fines/costs/penalties too, especially for the
non-corporate users.

Additionally, even taking a strictly economic view, the real costs are
never calculated exactly, since the real world is too complex to come to
a precise result.

> Security models are individual to businesses and individuals. They
> derive from threat to the persons, which again are peculiar to
> businesses and individuals. WYTM? Without walking that path, we are
> simply "protecting what we can" rather than "protecting the business."

Yes, like in various other parts of our life.

>>> I know there is a thing called
>>> profiles, but where does one import & export them?
>>
>> By simply copying files? That's how I'm doing it.
>
> Right, this is out of scope for users. (And, sure, I have enough unix
> experience to figure it out too, but, these days I treat such a thing as
> a bug.)

Then this would be room for improvement in Mozilla products.

> Consider that most systems use password and username. This is *the
> standard* albeit defacto. To move this to a "personal key store on the
> net" scenario, we encrypt the private key with the password. Indeed, we
> encrypt the entire account store with the password. This way, we can
> log in from whereever and get access to the whole lot. The client
> downloads the encrypted store, decrypts it with the password, then
> extracts the private key.

I know at least one commercial implementation of that roaming scheme
with which I worked in a customer project.

> Hey presto, an application that is better than today's standard.

As usual there are pros and cons. I'd consider this to be applicable for
the corporate user, not our moms.

And if you plan to implement such a thing for Mozilla be prepared to
work around some patents.

>> And most of my friends (non IT people) don't use webmail. They
>> use MUAs on their PCs with POP3/SMTP.
>
> It is odd! There are two big groups here, with no real favourites. The
> market isn't giving us any clear signals.

Yes, I vaguely remember having read some survey results indicating that
European (especially German) users prefer having a MUA on their own PC,
whereas U.S. users prefer webmail.

>>>>>> Maybe it's me but frankly I don't understand what you say here.
>>>>>> Especially I don't see the need for a "UI vendor" to define a CPS (if
>>>>>> Certificate Practice Statement is meant here).
>>>
>>> Not quite, what I mean here is that somehow, the user has to figure
>>> out what is happening.
>>
>> Yes.
>>
>>> The PKI view is that this is done by referring to the CPS.
>>
>> No.
>
> How so? How do you define any use of certs without referring to the CPS
> or equivalent set of documents?

The distinction is that the user is informed once about what to do with
a certain cert. Yes, that involves the CPS of the CA.

>>> Recall Nelson's view that he does not sign anything without reading.
>>> The wider principle here is that one should not enter into an
>>> agreement unless it is understood.
>>
>> Yes.

Remember that in another posting I expressed my doubts about whether the
content to be signed can be displayed in such a way that the signer
knows what he signs and can archive the signed blob. I'd like to keep
this separate from the SecureMessaging Trust Infrastructure.

>>> In order to satisfy users' needs for clarity, the governance UI
>>> should present a workable human signing view to the user. But, as we
>>> have seen in recent threads, that is fantasy. It's a non-starter.
>>>
>>> Ergo, S/MIME client UI implementations should be modified to drop any
>>> sense of signing, by default, and the digsigs should be used for
>>> integrity protection and key distribution.
>>
>> You're wildly mixing stuff here.
>
> That's partly the point. If the user can sanely untangle the stuff,
> then well and good. I claim she cannot. If you-as-insider can, then
> that's a start, but it needs to be put in a framework that is both safe
> for users and understandable. That we haven't got.

The main use-case of S/MIME is encryption, not legally signing
something. So bashing S/MIME means you have to come up with something
else. Anders and you seem to propose something like networked
key-containers. I dislike that because I often write encrypted e-mail
while traveling by train, and thus off-line.

If you really want to sign e-mails with S/MIME, and looking only at the
UI aspect: IMO an S/MIME e-mail is clearer for the sender than form
signing, since he completely creates the content to be signed and can
archive it. Nowadays, with current implementations. And no web designers
are involved. ;-)

>>>> Everybody is a trust manager. All day everybody is making trust
>>>> decisions. But there's no ultimate trust.
>>>
>>> No user can make a trust decision without evaluation of the
>>> circumstances.
>>
>> In fact evaluation of the circumstances is never done completely.
>
> Of course.
>
>>> Without info, it is called gambling.
>>
>> Yes, that's exactly what we all do each and every day when making
>> trust decisions:
>> If I go to a bakery in the morning buying a brezel for breakfast I
>> have a very vague idea of whether I can really trust them for not
>> doing any harm to my health with the food they are selling. Even
>> though this shop has a certificate in the sales room proving that this
>> baker has the title "Meister" (needed in Germany) it's completely
>> impractical to evaluate all the circumstances of this little deal. So
>> yes, it's simply gambling with high risks.
>
> No, not at all. When you go to the bakery you get a brezel. The risk
> of you not getting a brezel is very low. The information you get is
> reliable, even if you do not "audit" the very making of your brezel.
> This is called risk management.

The risk isn't me not getting a brezel at all; I can even see the
availability of the brezel in the sales room. The risk is getting ill
from eating it, because it might contain salmonellae or similar.
Fortunately the probability of this risk is also very low, and that's
why we gamble each day that there will be no harm. But the potential
damage is very high.

> (BTW, what is a brezel?)

http://images.google.com/images?q=brezel
(Not every baker, even one entitled "Meister", can produce a good
brezel, but it's not a high risk to just try. ;-)

> It's a tough subject :) But I think we have got to the point where your
> context is more or less the corporate usage, and my context is more or
> less the individual user.

Hmm, not really since being a freelancer I have both roles even in my
business life.

Ciao, Michael.
Eddy Nigg
2008-11-30 19:29:47 UTC
Permalink
On 11/30/2008 04:32 PM, Ian G:
> OK, so would you agree that this is not very useful for the non-company
> people, like yours and my mum?

Please note that you are agreeing here with yourself. The lack of
contributions to the thread doesn't mean that there is silent agreement
to what you say.

>
> If so, if we agree on that, we might also say "well, companies can look
> after themselves;" and/or "Mozilla has no offering suitable for secure
> email for ordinary users."

The support of S/MIME in Thunderbird is neither difficult nor is it
insecure. It takes about two clicks to import a certificate and another
two to configure it for a specific account. Once TB is able to use the
same certificate store as Firefox, it's minus two clicks. Once a
certificate is configured for signing, no further interaction is
required. Encrypting is a piece of cake. Getting a certificate happens
at some CAs already during the registration process (cough, cough).

Considering the number of public client certs stored in my TB, it seems
that many of the somewhat more technically oriented audience are A) able
to use it, B) actually using it. And not all of them are geeks either.

>
> I don't know about you, but I'm here at Mozilla to get a solution to
> everyone.

S/MIME is an easy-to-use solution for encrypting mail: sufficiently
secure, providing reasonable protection, and easy to obtain (free client
certificates are available all over - Verisign, Thawte, StartCom, Comodo
and perhaps more).


--
Regards

Signer: Eddy Nigg, StartCom Ltd.
Jabber: ***@startcom.org
Blog: https://blog.startcom.org
Frank Hecker
2008-12-01 04:57:19 UTC
Permalink
Eddy Nigg wrote:
> Getting a certificate happens
> at some CAs already during the registration process (cough, cough).

This is an interesting point, which I think supports at least some of
Ian's arguments. What you've done is to provide a real incentive for
users to get client certificates, certificates that can then be
repurposed for S/MIME email or other uses.

IMO, in general there is little or no a priori reason for a typical
(non-corporate) user to get a client certificate for S/MIME use. Even
though it may take only a small effort to get the cert, getting a
client cert is not necessarily justifiable given the uncertain benefits
of having one, especially if none of your friends and other
correspondents have one. (It's the network effect in reverse.)

But in this case users are willing to go through the minor hassle of
getting a client cert because they're motivated to get those super-duper
free SSL certificates, and they need the client cert to access the
administrative interface. It's a clever way of getting around the problem.


> Considering the amount of public client certs stored in my TB, it seems
> that many of the somewhat more technical orientated audience are A) able
> to use it, B) actually using it. And not all of them are geeks either.

With all due respect, this is merely anecdotal evidence. IMO the only
two metrics of interest for S/MIME email are a) the fraction of email
users who have personal certs usable for S/MIME; and b) the fraction of
all email messages that are sent using S/MIME. I don't happen to know of
any authoritative studies on this.

> S/MIME is an easy to use solution to encrypt mail, sufficiently secure,
> provides reasonable protection and easy to obtain (free client
> certificates are all over - Verisign, Thawte, StartCom, Comodo and
> perhaps more).

To be clear, I don't think that S/MIME email is irreparable. I think it
could benefit from an improved UI in products like Thunderbird and more
attention to making the initial "bootstrapping" process more automatic
and invisible. (For example, when a user gets a certificate, have
Thunderbird automatically offer to send a signed message with the cert
to all people to whom you've sent mail, or all people in your
addressbook, or whatever.) And as noted above, I think a fundamental
problem is providing more incentives for users to get client certs,
particularly outside the context of S/MIME proper. (For example, have
some interesting web service that uses client certs for authentication.)

Frank

--
Frank Hecker
***@mozillafoundation.org
Eddy Nigg
2008-12-01 05:24:15 UTC
Permalink
On 12/01/2008 06:57 AM, Frank Hecker:
> Eddy Nigg wrote:
>> Getting a certificate happens at some CAs already during the
>> registration process (cough, cough).
>
> This is an interesting point, which I think supports at least some of
> Ian's arguments. What you've done is to provide a real incentive for
> users to get client certificates, certificates that can then be
> repurposed for S/MIME email or other uses.

Well, actually we were exploring ways to facilitate user accounts
without compromising on security. Our "old" CA infrastructure had
nothing like an account, exactly for this reason. Basically we couldn't
rely on anything which required user input like user names and passwords
(because of the risk of getting phished, with us being a potential
target), hence we eventually opted for client certificate authentication.

> But in this case users are willing to go through the minor hassle of
> getting a client cert because they're motivated to get those super-duper
> free SSL certificates, and they need the client cert to access the
> administrative interface. It's a clever way of getting around the problem.
>

We could have opted to use only the authentication bits in this
certificate, but we clearly recognized the added benefit of not
requiring the user to jump through the same process again just to get a
client cert, and obviously also of promoting S/MIME further.

> especially if none of your friends and other
> correspondents have one. (It's the network effect in reverse.)
>

So they get one :-) And it usually goes like "Hey, what's that, I want
one too..."

>
>> Considering the amount of public client certs stored in my TB, it
>> seems that many of the somewhat more technical orientated audience are
>> A) able to use it, B) actually using it. And not all of them are geeks
>> either.
>
> With all due respect, this is merely anecdotal evidence.

Yes, certainly, I'm not a typical example, but it was still interesting
to realize that I've got hundreds of other people's public certificates
in my store, from all brands and CAs.

> IMO the only
> two metrics of interest for S/MIME email are a) the fraction of email
> users who have personal certs usable for S/MIME; and b) the fraction of
> all email messages that are send using S/MIME.

However, don't be mistaken: there are many subscribers who come solely
for the client certificates. I don't know the reasons, nor did we ever
make a study, but the number is much higher than some here would have us
believe.

> To be clear, I don't think that S/MIME email is irreparable.

I must ask if it's broken at all? It could be made even more convenient,
but it's certainly not broken to the extent that it needs repair...

> I think it
> could benefit from an improved UI in products like Thunderbird and more
> attention to making the initial "bootstrapping" process more automatic
> and invisible. (For example, when a user gets a certificate, have
> Thunderbird automatically offer to send a signed message with the cert
> to all people to whom you've sent mail, or all people in your
> addressbook, or whatever.)

I think I wouldn't want that, but it's maybe an idea of some benefit to
some users. Not sure...

> And as noted above, I think a fundamental
> problem is providing more incentives for users to get client certs,
> particularly outside the context of S/MIME proper. (For example, have
> some interesting web service that uses client certs for authentication.)
>

I think that client auth would really help solve some huge problems out
there. Incidentally, many OpenID providers have opted exactly for this
type of authentication most likely because of the higher risks involved
with having only one authoritative provider for all site authentications
when using OpenID. I certainly believe that financial institutions
should use it (more) as well.


--
Regards

Signer: Eddy Nigg, StartCom Ltd.
Jabber: ***@startcom.org
Blog: https://blog.startcom.org
Ian G
2008-12-02 17:53:04 UTC
Permalink
Frank Hecker wrote:
> Eddy Nigg wrote:
>> Getting a certificate happens at some CAs already during the
>> registration process (cough, cough).
>
> This is an interesting point, which I think supports at least some of
> Ian's arguments. What you've done is to provide a real incentive for
> users to get client certificates, certificates that can then be
> repurposed for S/MIME email or other uses.
>
> IMO, in general there is little or no a priori reason for a typical
> (non-corporate) user to get a client certificate for S/MIME use. Even
> though it make take only a small effort to get the cert, getting a
> client cert is not necessarily justifiable given the uncertain benefits
> of having one, especially if none of your friends and other
> correspondents have one. (It's the network effect in reverse.)
>
> But in this case users are willing to go through the minor hassle of
> getting a client cert because they're motivated to get those super-duper
> free SSL certificates, and they need the client cert to access the
> administrative interface. It's a clever way of getting around the problem.


Albeit, only to those interested in SSL certs. Conceivably this would
become a lot more fluid if Apache were to release TLS/SNI support, and
to a lesser extent, if Microsoft's IIS did.

>> Considering the amount of public client certs stored in my TB, it
>> seems that many of the somewhat more technical orientated audience are
>> A) able to use it, B) actually using it. And not all of them are geeks
>> either.
>
> With all due respect, this is merely anecdotal evidence. IMO the only
> two metrics of interest for S/MIME email are a) the fraction of email
> users who have personal certs usable for S/MIME; and b) the fraction of
> all email messages that are send using S/MIME. I don't happen to know of
> any authoritative studies on this.

+1 for any authoritative studies. It would be nice if Thunderbird could
do this, but the "ET phone home" part would probably scare people.

>> S/MIME is an easy to use solution to encrypt mail, sufficiently
>> secure, provides reasonable protection and easy to obtain (free client
>> certificates are all over - Verisign, Thawte, StartCom, Comodo and
>> perhaps more).
>
> To be clear, I don't think that S/MIME email is irreparable.

I have written frequently about this on my blog. I don't think it is
irreparable, but I think the development team needs to decide whether
they are supporting users or others. If users, then they will get more
security by generating the key pairs on account creation, fixing the key
distro issue, and helping users to upgrade to better certs later on.
IMO. Users want a bit of security talking to people they know, talking
to others they don't know can come later, as can dealing with third parties.

> I think it
> could benefit from an improved UI in products like Thunderbird and more
> attention to making the initial "bootstrapping" process more automatic
> and invisible. (For example, when a user gets a certificate, have
> Thunderbird automatically offer to send a signed message with the cert
> to all people to whom you've sent mail, or all people in your
> addressbook, or whatever.)

Yes, agreed, basically solve the key distro problem.

> And as noted above, I think a fundamental
> problem is providing more incentives for users to get client certs,
> particularly outside the context of S/MIME proper. (For example, have
> some interesting web service that uses client certs for authentication.)


Over at CAcert they conducted a similar experiment by insisting that the
test for assurers (CATS) be conducted using client certs. It worked
out, or at least this didn't cause the project to fail, and there
weren't any complaints to my knowledge that this held up the process.
However, this is an interested and dedicated audience; it doesn't
necessarily apply to a real user audience, as it is mostly the techie
community, who are challenged by the thought that they know certs.

(Client side certs are a lot more ready for mass-deployment than S/MIME
ones, but still have their foibles. One thing I discovered was that if
you have multiple certs, the KCM is not so well developed in Firefox.
It works if set to "choose-by-self," in which case we don't know which
cert is in use. Or, if set to "ask-me", it asks me practically every
click which to choose, and sometimes twice or thrice per click. If I
had more time I'd chase the bugzilla.)

iang
Graham Leggett
2008-12-03 11:19:56 UTC
Permalink
Ian G wrote:

> Albeit, only to those interested in SSL certs. Conceivably this would
> be made a lot more fluid if Apache were to release TLS/SNI, and to a
> lesser extent, Microsoft's IIE.

My understanding is that SNI is supported in httpd-trunk, soon to become
httpd v2.3.0. The people who created the patch apparently didn't make it
compatible with httpd v2.2, which has blocked its backport.

Regards,
Graham
--
Kaspar Brand
2008-12-03 16:36:08 UTC
Permalink
Graham Leggett wrote:
> My understanding is that SNI is supported in httpd-trunk, soon to become
> httpd v2.3.0. The people who created the patch apparently didn't make it
> compatible with httpd v2.2, and it has blocked its backport.

Not really true, actually... for a fuller version of the story, see e.g.

http://mail-archives.apache.org/mod_mbox/httpd-dev/200806.mbox/%***@velox.ch%3e

or also

http://mail-archives.apache.org/mod_mbox/httpd-dev/200808.mbox/%***@velox.ch%3e

but lack of support by influential httpd committers brought it to a
halt, more or less.

http://sni.velox.ch/httpd-2.2.x-sni.patch is working pretty well for
2.2, though (have a look at https://sni.velox.ch).
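
For readers unfamiliar with what the patch enables: with SNI, several SSL vhosts can share one IP address and be told apart by the hostname the client sends in the TLS handshake. A hypothetical httpd configuration sketch (hostnames and paths are illustrative, not from this thread):

```apache
# Two SSL vhosts on the same address; the SNI name in the ClientHello
# selects which certificate is served.
NameVirtualHost *:443

<VirtualHost *:443>
    ServerName www.example.org
    SSLEngine on
    SSLCertificateFile    /etc/ssl/example.org.crt
    SSLCertificateKeyFile /etc/ssl/example.org.key
</VirtualHost>

<VirtualHost *:443>
    ServerName www.example.net
    SSLEngine on
    SSLCertificateFile    /etc/ssl/example.net.crt
    SSLCertificateKeyFile /etc/ssl/example.net.key
</VirtualHost>
```

Without SNI, the second vhost would silently be served the first vhost's certificate, which is exactly the limitation that forces one IP address per https site.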

Kaspar
Nelson B Bolyard
2008-12-03 17:09:19 UTC
Permalink
Kaspar Brand wrote, On 2008-12-03 08:36 PST:

> http://sni.velox.ch/httpd-2.2.x-sni.patch is working pretty well for
> 2.2, though (have a look at https://sni.velox.ch).

Kaspar, Thank you for building and maintaining that web site.
It is the ONLY web site known to me that implements SNI.
I use it from time to time for testing the client-side SNI in NSS.
I appreciate your leadership in this area, and your contributions to Mozilla.

/Nelson Bolyard
Graham Leggett
2008-12-03 17:22:40 UTC
Permalink
Kaspar Brand wrote:

> Not really true, actually... for a fuller version of the story, see e.g.

The authoritative status of the httpd-2.2 backport is in the STATUS file
in the httpd v2.2 branch, and that currently says this:

Backport version for 2.2.x of updated patch:
  http://people.apache.org/~fuankg/diffs/httpd-2.2.x-sni.diff
  +1: fuankg
  +0: like ssl upgrade of 2.2, perhaps this is a good reason to bring
      httpd-2.4 to completion? vhost changes could be disruptive to
      third party module authors.
  -1: rpluem: jorton found some problems with the trunk version and they
      should be fixed / discussed in trunk before we backport.

If you want to see SNI in httpd v2.2, work to resolve the issues rpluem
is referring to.

Regards,
Graham
--
Kaspar Brand
2008-12-03 18:02:55 UTC
Permalink
Graham Leggett wrote:
> The authoritative status of the httpd-2.2 backport is in the STATUS file
> in the httpd v2.2 branch, and that currently says this:

I'm quite familiar with that file, thanks for the pointer. Perhaps you
should have a look at

http://mail-archives.apache.org/mod_mbox/httpd-dev/200806.mbox/%***@velox.ch%3e

and

http://mail-archives.apache.org/mod_mbox/httpd-dev/200810.mbox/%***@apache.org%3e

before advising me to "work to resolve the issues rpluem
is referring to". (rpluem's -1 was re-added on 4 June -
http://svn.apache.org/viewvc?view=rev&revision=663112 -, but predates
all postings to httpd-dev I referenced in my last posting.)

Kaspar
Graham Leggett
2008-12-03 18:07:56 UTC
Permalink
Kaspar Brand wrote:

> I'm quite familiar with that file, thanks for the pointer. Perhaps you
> should have a look at
>
> http://mail-archives.apache.org/mod_mbox/httpd-dev/200806.mbox/%***@velox.ch%3e
>
> and
>
> http://mail-archives.apache.org/mod_mbox/httpd-dev/200810.mbox/%***@apache.org%3e
>
> before advising me to "work to resolve the issues rpluem
> is referring to". (rpluem's -1 was re-added on 4 June -
> http://svn.apache.org/viewvc?view=rev&revision=663112 -, but predates
> all postings to httpd-dev I referenced in my last posting.)

And you've kept chasing this issue up on the dev list?

Regards,
Graham
--
Kaspar Brand
2008-12-03 18:28:03 UTC
Permalink
> And you've kept chasing this issue up on the dev list?

Graham, I'm getting tired of this conversation. Of course I brought up
SNI repeatedly on httpd-dev - in January, April, June, and August. But
if the feedback on the list is almost zero with each additional attempt,
then I'm losing interest in pursuing this further.

Kaspar
Ian G
2008-12-03 20:51:18 UTC
Permalink
Kaspar Brand wrote:
>> And you've kept chasing this issue up on the dev list?
>
> Graham, I'm getting tired of this conversation. Of course I brought up
> SNI repeatedly on httpd-dev - in January, April, June, and August. But
> if the feedback on the list is almost zero with each additional attempt,
> then I'm losing interest in pursuing this further.
>
> Kaspar



I have to agree with Kaspar here; I have also posted many times on the
dev list over the last 6 months or so, in generally positive and
supportive terms, including some blah-blah. That's where I was told to
go. I went there, I talked.

But I don't recall getting much of a positive response from the team. No
pointers to how to get it done, what the process is, who the key players
are, etc. I'd speculate that the team isn't accustomed to talking to
anyone outside the team, which is to say, they aren't taking any input
or suggestions as to what is important. I guess.

Putting myself in their shoes, I suppose they are thinking that only
those who can code or review the code or find bugs have a say.



iang
Graham Leggett
2008-12-03 21:18:53 UTC
Permalink
Kaspar Brand wrote:

>> And you've kept chasing this issue up on the dev list?
>
> Graham, I'm getting tired of this conversation. Of course I brought up
> SNI repeatedly on httpd-dev - in January, April, June, and August. But
> if the feedback on the list is almost zero with each additional attempt,
> then I'm losing interest in pursuing this further.

The way the process works is that you have to shepherd the patch through
all the way until all the issues are resolved. And if someone raises
an issue, don't assume that time will magically appear in their diary to
fix your patch for you, that is your job.

If you're too tired to do this, then just wait until httpd v2.4 is
released, as the patch is on trunk.

Regards,
Graham
--
Kaspar Brand
2008-12-04 06:06:47 UTC
Permalink
Graham Leggett wrote:
> The way the process works is that you have to shepherd the patch through
> all the way until all the issues are resolved. And if someone raises
> an issue, don't assume that time will magically appear in their diary to
> fix your patch for you, that is your job.

I'm getting tired of the pert tone of your replies, that's the point.
Maybe you're missing the fact that SNI support was first added to httpd
trunk in December 2007
(http://svn.apache.org/viewvc?view=rev&revision=606190), reflecting my
enhanced version of the original SNI patch from the EdelKey project.
Maybe you're ignoring the fact that I also further improved the patch
before "someone raised an issue" (I won't provide URLs, as you seem to
ignore them as well), and that whenever new concerns or issues were
brought up on httpd-dev, I promptly looked into them and came up with
suggested fixes (patches) for trunk. Maybe you're also overlooking the
fact that in June, I did exactly what I was advised to do, but got zero
replies afterwards.

> If you're too tired to do this, then just wait until httpd v2.4 is
> released, as the patch is on trunk.

That reflects the status of the code as of April 2008, and doesn't
include any of the later improvements. But if the key httpd people
aren't willing to invest time in reviewing additional patches, then it
won't make any further progress, obviously.

Kaspar
Graham Leggett
2008-12-04 11:04:45 UTC
Permalink
Kaspar Brand wrote:

>> If you're too tired to do this, then just wait until httpd v2.4 is
>> released, as the patch is on trunk.
>
> That reflects the status of the code as of April 2008, and doesn't
> include any of the later improvements. But if the key httpd people
> aren't willing to invest time in reviewing additional patches, then it
> won't make any further progress, obviously.

I think you're missing the point I am trying to make. The addition of
SNI is a worthy feature to be added to httpd, but you've got half way
through the submission process and have chosen to start complaining that
the door won't open, instead of just banging on the door some more.

I would love to donate some time to help you do this, but right now I
have a queue a mile high of other stuff that needs to be finalised
first. I can well imagine others are in the same boat.

Short, targeted and specific questions are quick and easy to answer, and
will get you the progress you need. General questions that involve a
whole lot of research first to figure out what you are referring to will
be deferred to "later", and you'll end up waiting.

httpd v2.3.0-alpha is to be tagged soon, which means SNI will start
being available in a release very soon, and SNI will start getting some
attention from end users.

Regards,
Graham
--
Ian G
2008-12-04 13:38:49 UTC
Permalink
Graham Leggett wrote:
> I think you're missing the point I am trying to make. The addition of
> SNI is a worthy feature to be added to httpd, ...


I think this is one of the biggest problems. Superficially, it is easy
to think of SNI as a feature enhancement. Instead, it is a security bug
fix to SSL.

The most common failure mode of any security system is that it is not
used. Turned off, left out, assumed away. SSL is no exception: 99% of
all webservers fail this way. The first cause of the failure to use SSL
for security is that https cannot easily be shared across a single IP
number, a crucial, limited resource.

(The second cause is certs :)

The security result was that it encouraged SSL not to be used.
Bypassed. "We don't need it." As this affected more sites than
actually use SSL, there is little doubt that the overall security impact
of this bug is several orders of magnitude greater than that of any
other security bug ever seen in SSL.
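The fix is easy to sketch: with SNI, the client names the host it wants inside the TLS ClientHello, so the server can pick the matching certificate before the handshake completes and many https sites can share one IP number. A minimal server-side sketch in Python (the per-vhost dispatch is illustrative; the hostnames and certificate paths are hypothetical, and mod_ssl's actual implementation differs):

```python
import ssl

# One SSLContext per virtual host, populated at startup from each
# host's certificate and key (paths are hypothetical placeholders).
contexts = {}

def register_vhost(hostname, certfile, keyfile):
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile, keyfile)
    contexts[hostname] = ctx

def sni_callback(ssl_obj, server_name, default_ctx):
    # Invoked mid-handshake with the name from the client's SNI
    # extension; swap in that host's context so the right cert is sent.
    ctx = contexts.get(server_name)
    if ctx is not None:
        ssl_obj.context = ctx
    return None  # None lets the handshake continue

# A listening context would then set:
#   default_ctx.sni_callback = sni_callback
```

A real server would create one context per VirtualHost at startup and install the callback on the listening context; without SNI, the server must commit to a certificate before it knows which site the client wanted.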


> httpd v2.3.0-alpha is to be tagged soon, which means SNI will start
> being available in a release very soon, and SNI will start getting some
> attention from end users.

That would be good!

iang
Eddy Nigg
2008-12-04 11:54:59 UTC
Permalink
On 12/04/2008 01:04 PM, Graham Leggett:
> httpd v2.3.0-alpha is to be tagged soon, which means SNI will start
> being available in a release very soon, and SNI will start getting some
> attention from end users.

Just to reiterate: the missing SNI support has been a pain for a
huge number of web site operators, who need to buy additional IP
addresses for every secured web site.

StartCom Linux yesterday released a patched version of Apache with SNI
support (in the AS-5.0.2 release), immediately available. It's hard
to believe that such an important feature has been missing for so long
for no apparent reason.

--
Regards

Signer: Eddy Nigg, StartCom Ltd.
Jabber: ***@startcom.org
Blog: https://blog.startcom.org
Graham Leggett
2008-12-04 12:27:21 UTC
Permalink
Eddy Nigg wrote:

> Just to reiterate: the missing SNI support has been a pain for a
> huge number of web site operators, who need to buy additional IP
> addresses for every secured web site.
>
> StartCom Linux yesterday released a patched version of Apache with SNI
> support (in the AS-5.0.2 release), immediately available. It's hard
> to believe that such an important feature has been missing for so long
> for no apparent reason.

Of course there is a reason: efforts to backport the patch to v2.2 were
abandoned.

If the feature is truly important, finish the work on the feature.

Regards,
Graham
--
Michael Ströder
2008-12-04 12:48:48 UTC
Permalink
Eddy Nigg wrote:
> On 12/04/2008 01:04 PM, Graham Leggett:
>> httpd v2.3.0-alpha is to be tagged soon, which means SNI will start
>> being available in a release very soon, and SNI will start getting some
>> attention from end users.
>
> Just to reiterate: the missing SNI support has been a pain for a
> huge number of web site operators, who need to buy additional IP
> addresses for every secured web site.
>
> StartCom Linux yesterday released a patched version of Apache with SNI
> support (in the AS-5.0.2 release), immediately available. It's hard
> to believe that such an important feature has been missing for so long
> for no apparent reason.

The Apache project team is pretty unresponsive. I've filed an issue for
what I consider a bug in mod_ssl together with a patch to fix NID/OID of
attribute 'uid' in subject names:

https://issues.apache.org/bugzilla/show_bug.cgi?id=45107

Status is still NEW... and will probably remain so forever.

Ciao, Michael.
Eddy Nigg
2008-12-03 12:01:17 UTC
Permalink
On 12/02/2008 07:53 PM, Ian G:
> (Client side certs are a lot more ready for mass-deployment than S/MIME
> ones, but still have their foibles. One thing I discovered was that if
> you have multiple certs, the KCM is not so well developed in Firefox. It
> works if set to "choose-by-self," in which case we don't know which cert
> is in use. Or, if set to "ask-me", it asks me practically every click
> which to choose, and sometimes twice or thrice per click. If I had more
> time I'd chase the bugzilla.)

The former is now the default setting in Firefox (due to a privacy
issue with automatic selection).

The latter is an issue currently in limbo due to finger-pointing
between the Apache and NSS folks.


--
Regards

Signer: Eddy Nigg, StartCom Ltd.
Jabber: ***@startcom.org
Blog: https://blog.startcom.org
Michael Ströder
2008-12-03 17:51:08 UTC
Permalink
Ian G wrote:
> (Client side certs are a lot more ready for mass-deployment than S/MIME
> ones, but still have their foibles. One thing I discovered was that if
> you have multiple certs, the KCM is not so well developed in Firefox. It
> works if set to "choose-by-self," in which case we don't know which cert
> is in use. Or, if set to "ask-me", it asks me practically every click
> which to choose, and sometimes twice or thrice per click. If I had more
> time I'd chase the bugzilla.)

I think these issues are mainly caused by misconfigured servers.

Ciao, Michael.
Kyle Hamilton
2008-11-27 21:12:29 UTC
Permalink
I wish I could wave my hands and say "it's a non-issue" like you.
Unfortunately, I'm the one who has to try to explain how to use these
things. Unfortunately, I'm the one who has to deal with the tech support
calls. When I can't figure it out (and I've been trying for over a decade),
how the fuck are non-experts supposed to figure it out?

I'm rather annoyed at people who seem to think that the problems our
world has are just going to magically stop being problems. They are
problems, they will continue to be problems, and thus far all attempts
to change the situations from which the problems arise have failed
spectacularly.

Fact: it's not just businesses who have need for S/MIME. (It's not just
private individuals who have need for S/MIME, either, but I'll blast the
businesses later.)
Fact: Home users don't want to pay upkeep for their machines.
Fact: Over half of all computers (meaning, machines that users use with a
keyboard, monitor, mouse, or other I/O devices) don't have backups.
Fact: people lose critical data all the time.
Fact: Even those who do try to maintain backups and such have to deal with
failing hardware.
Fact: Those who do suffer failing hardware quite often have unusable backups
when they are finally restored.

And another Fact: Those who understand are willing to move mountains and
work their way past insurmountable mountain ranges to get something to
work. A corollary to this fact is that those who are willing to do that
have a rather huge emotional investment in making sure that it works for
them -- and thus have huge blind spots: they're willing to ignore those who
say that it won't work for the masses, and they're willing to say "it works
for me, you must be doing it wrong" to those who can't get it to work.

Directories don't solve telephone issues.
Directories don't solve email issues.
Directories don't even solve IM issues.

And that's just to get the most basic, most rudimentary form of contact
going. Much less being able to exchange the keys necessary for secure
conversation. Directories are, after all, just another form of data
binding. Certificates are simply means of showing that the provider of that
data actually bound that data. (Note that I didn't say 'proving'. That's a
semantic can of worms that I'm not willing to open just yet.)

Realistically, if you can send email to a mailbox at a domain that is read
by somebody, then at the least the following must have happened:

1) the domain registrant had to talk to a domain registrar.
2) the domain registrar had to talk to the domain master.
3) the domain master had to add the domain to the DNS.
4) the DNS must have either an MX or an IP that will accept SMTP
connections.
5) the SMTP receiver must accept the message.
6) the SMTP receiver must find the mailbox to put it into.
7) the mail must be put into the mailbox.
8) the recipient must be able to read the mailbox.
9) the mailbox's address must be distributed to whoever wants to send to it.

And all 9 steps must hold for the sender as well, if that person
wants to receive replies. And this is even beyond the problem of getting
connected to the network in a way that you can even run an SMTP server that
can be reached.
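Steps 5 through 8 of the list above can be modelled in a few lines; this toy in-memory sketch (addresses invented for illustration) shows why a missing mailbox breaks the chain at step 6:

```python
# Toy model of steps 5-8 above: an SMTP receiver accepting a message,
# locating the mailbox, storing the mail, and the recipient reading it.
# Addresses and policy are invented for illustration only.
mailboxes = {"alice@example.org": []}

def accept_message(rcpt, message):
    # Steps 5-6: the receiver accepts the message only if it can
    # find a mailbox for the recipient address.
    box = mailboxes.get(rcpt.lower())
    if box is None:
        return False          # "550 no such mailbox"
    box.append(message)       # step 7: deliver into the mailbox
    return True

def read_mailbox(rcpt):
    # Step 8: the recipient reads whatever has been delivered.
    return list(mailboxes.get(rcpt.lower(), []))
```

Every real MTA implements some version of this chain; the point of the list is that all nine links have to hold before key exchange can even begin.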

Each of these problems is surmountable. We've had over 30 years to figure
out how to get network access, over 30 years to figure out how to get DNS
running, over 30 years to figure out how to transfer mail. All of these are
successful because even if it does take understanding, it's only the
cognoscenti -- those who know, and care to know -- who can get it figured out.
That's why we're hired by businesses. The masses only deal with email
clients, and bitch when they don't work.

We need something that's as transparent to the end-user as possible, if
we're going to get cryptographic authentication and transformation working.
This is why Cerulean Studios introduced SecureIM, even if it fails at
anything resembling end-to-end authentication (and for a fairly good reason:
honestly, if someone's using an IM service, it's up to that service to
ensure that identities are limited to those who have the passwords for them,
so it shouldn't matter if there's an extrinsic authority butting into a
private conversation trying to "prove" everyone's identity to a level far
beyond what would be desired or useful for anyone).

The problem that I have with certifying authorities isn't what they do as
far as cryptographically signing things. The problem that I have with them
is that they spread fear, uncertainty, and doubt about any communication
(not simply transaction, but any COMMUNICATION) that isn't authenticated by
something that they issue.

The problem that I have with Mozilla is that it's allowed itself to swallow
that entire concept, hook line and sinker, without doing effective and
appropriate risk assessment with appropriate threat models. Because of
this, it has failed to accept certain things: 1) not everything needs a
vetted CA, and 2) even if it thinks that it does, the public is going to
ignore any demand/requirement that is extrinsic and useless to the purposes
they have in mind. This means that Mozilla is operating blind to what its
public users need, for the sake of stroking a few egos.

-Kyle H

On Thu, Nov 27, 2008 at 5:51 AM, Michael Ströder <***@stroeder.com> wrote:

> Just to clarify: I also see a lot of practical problems to be solved when
> encrypting/signing e-mails. And I supported real end-users doing so. But
> these are not caused by S/MIME (or PGP) standards itself.
>
> Ian G wrote:
>
>> * it has no open + effective key distribution mechanism. (I exclude the
>> LDAP stuff as that is generally for internal / corporates, and is not a
>> general solution for the users.)
>>
>
> Just exchanging signed S/MIME e-mails is quite easy for most users. The
> case that e-mail receivers are completely unknown is fairly rare. This is
> a non-issue.
>
>> E.g., after changing laptops recently, I still cannot s/mime to half
>> my counterparties because I don't have their certs. This happens
>> regularly with everyone I know...
>>
>
> ???
>
> I've changed my notebook harddisk quite often. I never lost my Seamonkey
> cert DB containing the key history of the last 10 years, since it's part of
> the Mozilla profile, which I have backups of. When people in companies get
> new PCs there's a backup concept to migrate their old data. If not, the
> user has more problems than just the e-mail certs of others.
> If you create a new profile in your MUA then you have to import the certs
> therein. But does that happen very often?
>
> This is a non-issue.
>
>> * it needs a few tweaks in UI to align it with the safe usage models, so,
>> for example the "signing" icon has to go because it cannot be used for
>> signing, because signing is needed for key distribution. It also cannot be
>> used for signing unless reference is made to the conditions of signing, and
>> no UI vendor has ever wanted to give time&space to a CPS.
>>
>
> Maybe it's me but frankly I don't understand what you say here. Especially
> I don't see the need for a "UI vendor" to define a CPS (if Certificate
> Practice Statement is meant here).
>
> No doubt the UI could be better in some S/MIME-enabled MUAs.
>
> > C.f., that recent thread with Nelson, where he reads everything before
> > signing.
>
> The thread about form signing? There was a basic question whether it's
> feasible at all and I commented on that.
>
>> * it needs a click-to-launch method of key-creation, sort of like that
>> which Anders was demoing with Firefox. Or preferably, it should be launched
>> by default. "There is only one mode, and it is secure." But that will
>> likely clash with the next point.
>>
>
> Are you talking about the PGP model of peer trust? (Each end-user defining
> individual trust for each participant's public key).
>
>> * the security architecture is referred to some IETF committee. This
>> means it is incapable of modifying its security model to deal with evolving
>> threats. Anything with its security leadership split across too many
>> components eventually falls into stasis.
>>
>
> I don't understand this.
>
> 2. abandon schemes that use explicit encryption keys like S/MIME
>>>>
>>>
>>> Are you aware of the requirements for separate encryption keys? Some
>>> companies have the legal requirements for key escrow in litigation cases.
>>> That's the main reason why encryption and signature keys are separated.
>>>
>>
>> What happens when we add complexity to an already broken system?
>>
>
> I fail to see why it's broken. So I can't answer. And I fail to see why the
> other schemes proposed are less broken. IMHO the opposite is true.
>
> 3. introduce secure mobile secure key-storage
>>>>
>>>
>>> Ah, yeah. Did you ever think of a growing key history and such?
>>>
>>
>> Is that the counterparty certs, which would then also disappear every time
>> someone changed cellphone? Yeah, I agree. It needs a better key distro
>> mechanism, like the key servers of OpenPGP.
>>
>
> No, I meant the archived private keys for accessing old encrypted e-mails.
>
> 4. put the latter in cell phones
>>>>
>>>
>>> Even cell phones can break. And I don't consider them to be trustworthy
>>> key stores
>>> 1. with all the control the cell phone provider has over them,
>>> 2. all the gadgets installed with security issues,
>>> 3. with the limited data storage size on today's SIM cards.
>>>
>>
>> Sounds about as robust as any Internet software on any modern PC that
>> bombs out once a year or so :)
>>
>
> Yes, there are risks with software on a PC. But on a PC I have a fairly
> good chance of keeping more control over what I'm using. The mobile phones
> tend to be customized. Configuration options are very sparse. There is no
> reasonable update mechanism keeping me informed about security updates. (It
> was a major PITA to update the buggy firmware on my Sony Ericsson mobile
> phone. The update software needed a flash player to be installed to display
> some fancy graphics. Uumpf!)
>
> And the main point: You fail to explain how trust is to be established.
>>>
>>
>> Well, there is the old trick I described: do a DH key exchange and then
>> use the voice to authenticate the checksum over the results.
>>
>
> Yupp. But that's kind of an enrollment process which is what Anders would
> like to avoid.
>
> (Mind you, let's not get too hung up on this, as "trust" is not defined
>> yet!)
>>
>
> Trust is like beauty. Beauty is in the eye of the beholder. ;-)
>
>
> Ciao, Michael.
> _______________________________________________
> dev-tech-crypto mailing list
> dev-tech-***@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-tech-crypto
>
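The "DH key exchange, then authenticate the checksum by voice" trick quoted in the exchange above can be sketched as follows. This is a toy illustration only: the group parameters are deliberately simplistic, and a real implementation would use a standardized group (e.g. the RFC 3526 MODP groups) and a proper short-authentication-string scheme.

```python
import hashlib
import secrets

# Toy parameters for illustration only; real use would take a
# standardized DH group (e.g. RFC 3526) and authenticated primitives.
P = 2**127 - 1   # a prime modulus (toy size)
G = 5

def keypair():
    priv = secrets.randbelow(P - 3) + 2
    return priv, pow(G, priv, P)

def shared_secret(priv, other_pub):
    return pow(other_pub, priv, P)

def voice_checksum(secret, digits=6):
    # Reduce a hash of the shared secret to a short digit string that
    # both parties read aloud to each other; a man-in-the-middle who
    # substituted keys cannot make both sides' strings match.
    h = hashlib.sha256(str(secret).encode()).digest()
    return str(int.from_bytes(h, "big") % 10**digits).zfill(digits)
```

Each party computes the checksum over its own copy of the shared secret and reads it aloud; if the two strings match, the exchange was not intercepted, which is the "enrollment" step Michael notes Anders would like to avoid.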
Michael Ströder
2008-11-26 17:06:05 UTC
Permalink
Ian G wrote:
> Michael Ströder wrote:
>
>> Anders, that's not the real problem with S/MIME or PGP.
>> Encrypting/signing is simply not a business requirement.
> ...
>> => Encrypting/signing must be made a business requirement in
>> contracts. That's the whole point. And there's no technical solution
>> for it.
>
> That's as close to a perfect dilemma as I've come across!

Yupp.

> It's not a business requirement, so we must make it a business
> requirement ... What then creates the upstream requirement? If it
> doesn't come from business, where does it come from?

You have to teach people to make these requirements part of the
company's security policy which in turn has to be made integral part of
business contracts with external partners.

Technicians cannot solve this by inventing yet another technology.

But it seems that some security people are very busy with PKI bashing
and convincing others that a new technology will solve all the
non-technical problems. That will obviously fail miserably.

Ciao, Michael.
Ian G
2008-11-26 18:10:20 UTC
Permalink
Michael Ströder wrote:
> Ian G wrote:
>> Michael Ströder wrote:
>>
>>> Anders, that's not the real problem with S/MIME or PGP.
>>> Encrypting/signing is simply not a business requirement.
>> ...
>>> => Encrypting/signing must be made a business requirement in
>>> contracts. That's the whole point. And there's no technical solution
>>> for it.
>>
>> That's as close to a perfect dilemma as I've come across!
>
> Yupp.
>
>> It's not a business requirement, so we must make it a business
>> requirement ... What then creates the upstream requirement? If it
>> doesn't come from business, where does it come from?
>
> You have to teach people to make these requirements part of the
> company's security policy which in turn has to be made integral part of
> business contracts with external partners.


You can't put something in a company's security policy unless it is a
business requirement first.

(Unless we endorse the absolutist view of security, in which we have to
fix security holes because we know how to ... rather than because they
cost the business money. But that's a firing offense ;)


> Technicians cannot solve this by inventing yet another technology.
>
> But it seems that some security people are very busy with PKI bashing
> and convincing others that a new technology will solve all the
> non-technical problems. That will obviously fail miserably.


It's a mystery!

iang
Michael Ströder
2008-11-26 18:45:01 UTC
Permalink
Ian G wrote:
> Michael Ströder wrote:
>> Ian G wrote:
>>> Michael Ströder wrote:
>>>
>>>> Anders, that's not the real problem with S/MIME or PGP.
>>>> Encrypting/signing is simply not a business requirement.
>>> ...
>>>> => Encrypting/signing must be made a business requirement in
>>>> contracts. That's the whole point. And there's no technical solution
>>>> for it.
>>>
>>> That's as close to a perfect dilemma as I've come across!
>>
>> Yupp.
>>
>>> It's not a business requirement, so we must make it a business
>>> requirement ... What then creates the upstream requirement? If it
>>> doesn't come from business, where does it come from?
>>
>> You have to teach people to make these requirements part of the
>> company's security policy which in turn has to be made integral part
>> of business contracts with external partners.
>
> You can't put something in a company's security policy unless it is a
> business requirement first.

Reality is much more complex. Sometimes requirements are in a security
policy but not in business contracts. And sometimes the management asks
for e-mail encryption but does not enforce the use of an existing e-mail
encryption infrastructure afterwards.

Or sometimes the technical infrastructure turns out to be pretty buggy
and everybody avoids using it. Fortunately these interop problems are
almost solved today.

> (Unless we endorse the absolutist view of security, in which, we have to
> fix security holes because we know how to ... rather than whether they
> cost money for the business. But that's a firing offense ;)

Well, it's all about risks and how people weigh them. Some security
people know a little more about some risks and technical
countermeasures, and try to propose them. But it's hard to reach
everybody in the business, especially in big companies. And it's hard
to convince people to spend time/budget to mitigate the risks.

Ciao, Michael.
Frank Hecker
2008-11-25 23:11:56 UTC
Permalink
Nelson B Bolyard wrote:
> Are you aware of chatzilla? It's been around for a long time.
> Protocols and architecture are defined in RFCs 2810-2813. Chatzilla
> interoperates with many other chat clients that follow those RFCs.

For the record, there's also InstantBird <http://instantbird.com/> which
appears to be a multiprotocol IM client using the Mozilla code base.
(I'm guessing, but have not confirmed, that it's a XULRunner app.)

> Mozilla runs an Internet Relay Chat server for use by chatzilla users.
> It's widely and heavily used by mozilla developers and other community
> members. I think you'd have a difficult time convincing mozilla they need a
> SECOND chat client/service.

I agree with Ian here: The focus of Mozilla Messaging and of Thunderbird
should be on end users in general, not Mozilla community members
specifically. And the interest of typical end users would be on
connecting with their friends, who are not in general on IRC but on AIM
and other "consumer" IM networks. Whether it makes sense to include chat
in Thunderbird is an open question, but certainly if it were to be done
then it should be done as a general-purpose IM capability and not just
as an IRC client.

Frank


--
Frank Hecker
***@mozillafoundation.org
Eddy Nigg
2008-11-25 23:30:44 UTC
Permalink
On 11/26/2008 01:11 AM, Frank Hecker:
> I agree with Ian here: The focus of Mozilla Messaging and of Thunderbird
> should be on end users in general, not Mozilla community members
> specifically. And the interest of typical end users would be on
> connecting with their friends, who are not in general on IRC but on AIM
> and other "consumer" IM networks. Whether it makes sense to include chat
> in Thunderbird is an open question, but certainly if it were to be done
> then it should be done as a general-purpose IM capability and not just
> as an IRC client.
>

Well, as a matter of fact, Jabber/XMPP inclusion into Thunderbird has
been a widely requested feature (see
https://bugzilla.mozilla.org/show_bug.cgi?id=385758 ) and is part of the
broader road map of Mozilla Messaging. Unfortunately it will not make it
into TB 3, but it might be in subsequent releases. I'm just afraid that
not much work has been done on it yet, however...


--
Regards

Signer: Eddy Nigg, StartCom Ltd.
Jabber: ***@startcom.org
Blog: https://blog.startcom.org
Eddy Nigg
2008-11-25 23:37:05 UTC
Permalink
On 11/26/2008 01:30 AM, Eddy Nigg:
>
> Well, as a matter of fact, Jabber/XMPP inclusion into Thunderbird has
> been a widely requested feature (see
> https://bugzilla.mozilla.org/show_bug.cgi?id=385758 ) and is part of the
> broader road map of Mozilla Messaging. Unfortunately it will not make it
> into TB 3, but it might be in successive releases. I'm just afraid that
> not much work has been done yet however...
>

Forgot to mention this: https://addons.mozilla.org/en-US/firefox/addon/3633

Same goes for Thunderbird:
https://addons.mozilla.org/en-US/thunderbird/search?q=sameplace&cat=all


--
Regards

Signer: Eddy Nigg, StartCom Ltd.
Jabber: ***@startcom.org
Blog: https://blog.startcom.org
Eddy Nigg
2008-11-22 16:17:45 UTC
Permalink
On 11/22/2008 05:39 PM, Ian G:
> I see this as an interesting question. There are pros and cons. First
> con; why would we want to do that? Just use Skype. Or, Nelson talked
> about AIM having some form of crypto. Also Jabber has something.
>

Jabber doesn't just have "something": the XMPP Foundation runs an
intermediate CA under the auspices of, and on the infrastructure of,
StartCom. Currently it covers only client-to-server and
server-to-server encryption, but they are on the way to implementing
client-to-client encryption as well.

All server certificates are (obviously) domain name control validated,
and the keys are under the control of the server operator, as this is a
decentralized network. Similarly, client certificates will be validated
as being under the control of the XMPP address (which looks like an
email address), with keys under the control of the user. A small but
important difference I thought worth mentioning.


--
Regards

Signer: Eddy Nigg, StartCom Ltd.
Jabber: ***@startcom.org
Blog: https://blog.startcom.org