Discussion:
[cap-talk] Security considerations for cookies
Adam Barth
2010-02-11 23:39:04 UTC
Permalink
People of cap-talk,

As some of you know, I'm working on a specification for how cookies
work in practice. As part of writing the spec, I'd like to add a
section on the security perils of using cookies. I was hoping that
this group could give me some feedback on what I have so far.

Thanks,
Adam


7. Security Considerations

7.1. General Recommendations

The cookie protocol is NOT RECOMMENDED for new applications.

For applications that do use the cookie protocol, servers SHOULD NOT
rely upon cookies for security.

For servers that do use cookies for security, servers SHOULD use a
redundant form of authentication, such as HTTP authentication or TLS
client certificates.

7.2. Ambient Authority

If a server uses cookies to authenticate users, a server might suffer
security vulnerabilities because user agents occasionally issue HTTP
requests on behalf of remote parties (e.g., because of HTTP redirects
or HTML forms). When issuing those requests, the user agent attaches
cookies even if the entity does not know the contents of the cookies,
possibly letting the remote entity exercise authority at an unwary
server. User agents can mitigate this issue to some degree by
providing APIs for suppressing the Cookie header on outgoing
requests.

Although this security concern goes by a number of names (e.g.,
cross-site scripting and cross-site request forgery), the issue stems
from cookies being a form of ambient authority. Cookies encourage
server operators to separate designation (in the form of URLs) from
authorization (in the form of cookies). Disentangling designation
and authorization can cause the server and its clients to become
confused deputies and undertake undesirable actions.
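The confused-deputy scenario above can be sketched concretely. The sketch below is illustrative only (the names issue_token and handle_transfer are not part of any protocol): because the session cookie is attached ambiently, a forged cross-site request carries it automatically, so the server must demand a secret the forger cannot supply.

```python
# Illustrative confused-deputy (CSRF) check. The browser attaches the
# session cookie to every request, so the cookie alone cannot tell the
# server who authored the request. Requiring an unguessable per-session
# token re-entangles designation and authorization.
import hmac
import secrets

SESSION_TOKENS = {}  # session_id -> secret token


def issue_token(session_id):
    """Mint a secret token and hand it only to the legitimate page."""
    token = secrets.token_urlsafe(16)
    SESSION_TOKENS[session_id] = token
    return token


def handle_transfer(session_id, submitted_token):
    """Reject requests that carry the cookie but not the secret."""
    expected = SESSION_TOKENS.get(session_id)
    if expected is None or not hmac.compare_digest(expected, submitted_token):
        return "rejected"  # forged cross-site request
    return "ok"


t = issue_token("sess-1")
assert handle_transfer("sess-1", t) == "ok"          # legitimate request
assert handle_transfer("sess-1", "guess") == "rejected"  # CSRF attempt
```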

Instead of using cookies for authorization, server operators might
wish to consider entangling designation and authorization by treating
URLs as object-capabilities. Instead of storing secrets in cookies,
this approach stores secrets in URLs, requiring the remote entity to
supply the secret itself. Although this approach is not a panacea,
judicious use of these principles can lead to more robust security.
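A minimal sketch of the capability-URL idea, assuming a server-side token table (the names mint_capability_url and resolve are illustrative): the unguessable path segment both designates the resource and authorizes access, so no ambient cookie is consulted.

```python
# Illustrative capability URLs: the URL itself carries the secret, so
# presenting the URL is both designation and authorization.
import secrets

RESOURCES = {}  # token -> resource


def mint_capability_url(base, resource):
    """Create a URL whose unguessable path segment grants access."""
    token = secrets.token_urlsafe(32)  # ~256 bits of entropy
    RESOURCES[token] = resource
    return f"{base}/cap/{token}"


def resolve(url):
    """Look up the resource designated (and authorized) by the URL."""
    token = url.rsplit("/", 1)[-1]
    return RESOURCES.get(token)


url = mint_capability_url("https://example.com", {"doc": 42})
assert resolve(url) == {"doc": 42}
assert resolve("https://example.com/cap/guessed-token") is None
```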

7.3. Clear Text

The information in the Set-Cookie and Cookie headers is transmitted
in the clear.

1. All sensitive information conveyed in these headers is exposed to
an eavesdropper.

2. A malicious intermediary could alter the headers as they travel
in either direction, with unpredictable results.

3. A malicious client could alter the Cookie header before
transmission, with unpredictable results.

Servers SHOULD encrypt and sign their cookies. However, encrypting
and signing cookies does not prevent an attacker from transplanting a
cookie from one user agent to another.

In addition to encrypting and signing the contents of every cookie,
servers that require a higher level of security SHOULD use the cookie
protocol only over a secure channel.
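A minimal sketch of signing (though not encrypting) a cookie value with HMAC-SHA256, assuming a server-side key; the helper names are illustrative. As noted above, this lets the server detect tampering but does nothing to stop transplanting a valid cookie from one user agent to another.

```python
# Illustrative cookie signing. A real deployment would also encrypt the
# payload (encrypt-then-MAC); only integrity protection is shown here.
import base64
import hashlib
import hmac

SECRET_KEY = b"server-side secret, never sent to the client"


def sign_cookie(value: str) -> str:
    """Append an HMAC-SHA256 tag to the cookie value."""
    mac = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).digest()
    return value + "." + base64.urlsafe_b64encode(mac).decode()


def verify_cookie(cookie: str):
    """Return the value if the tag checks out, else None."""
    value, _, tag = cookie.rpartition(".")
    mac = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).digest()
    if hmac.compare_digest(base64.urlsafe_b64encode(mac).decode(), tag):
        return value
    return None  # tampered or forged


c = sign_cookie("session=1234")
assert verify_cookie(c) == "session=1234"
# Altering the value without the key invalidates the tag:
assert verify_cookie("session=9999." + c.rpartition(".")[2]) is None
```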

7.4. Weak Confidentiality

Cookies do not provide isolation by port. If a cookie is readable by
a service running on one port, the cookie is also readable by a
service running on another port of the same server. If a cookie is
writable by a service on one port, the cookie is also writable by a
service running on another port of the same server. For this reason,
servers SHOULD NOT both run mutually distrusting services on
different ports of the same machine and use cookies to store
security-sensitive information.

Cookies do not provide isolation by scheme. Although most commonly
used with the http and https schemes, the cookies for a given host
are also available to other schemes, such as ftp and gopher. This
lack of isolation is most easily seen when a user agent retrieves a
URI with a gopher scheme via HTTP, but the lack of isolation by
scheme is also apparent via non-HTTP APIs that permit access to
cookies, such as HTML's document.cookie API.

7.5. Weak Integrity

Cookies do not provide integrity guarantees for sibling domains (and
their subdomains). For example, consider foo.example.com and
bar.example.com. The foo.example.com server can set a cookie with a
Domain attribute of ".example.com", and the user agent will include
that cookie in HTTP requests to bar.example.com. In the worst case,
bar.example.com will be unable to distinguish this cookie from a
cookie it set itself. The foo.example.com server might be able to
leverage this ability to mount an attack against bar.example.com.
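The domain-matching behavior that enables this attack can be sketched as follows (an illustrative helper, not the cookie spec's full algorithm): a cookie scoped to ".example.com" matches every subdomain, siblings included.

```python
# Illustrative domain-match rule: a cookie with Domain=.example.com set
# by foo.example.com is also sent to bar.example.com.
def domain_match(request_host: str, cookie_domain: str) -> bool:
    """True if the cookie's Domain attribute covers request_host."""
    cookie_domain = cookie_domain.lstrip(".")
    return (request_host == cookie_domain
            or request_host.endswith("." + cookie_domain))


assert domain_match("bar.example.com", ".example.com")  # sibling gets it
assert domain_match("foo.example.com", ".example.com")
assert not domain_match("example.org", ".example.com")
assert not domain_match("evilexample.com", ".example.com")  # no suffix trick
```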

Similarly, an active network attacker can inject cookies into the
Cookie header sent to https://example.com/ by impersonating a
response from http://example.com/ and injecting a Set-Cookie header.
The HTTPS server at example.com will be unable to distinguish these
cookies from cookies that it set itself in an HTTPS response. An
active network attacker might be able to leverage this ability to
mount an attack against example.com even if example.com uses HTTPS
exclusively.

Servers can partially mitigate these attacks by encrypting and
signing their cookies. However, using cryptography does not mitigate
the issue completely because an attacker can replay a cookie he or
she received from the authentic example.com server in the user's
session, with unpredictable results.

7.6. Reliance on DNS

The cookie protocol relies upon the Domain Name System (DNS) for
security. If the DNS is partially or fully compromised, the cookie
protocol might fail to provide the security properties required by
applications.
Toby Murray
2010-02-12 10:01:05 UTC
Permalink
Dear Adam,

Your section on ambient authority is an excellent summary, as no doubt
you intended it to be, of much of the philosophy shared by many on
this list. Thanks for taking the time to distil it out so clearly.

Others on this list with more web savvy can probably give more useful
feedback on specifics so I'll leave it to them.

Cheers

Toby
_______________________________________________
cap-talk mailing list
http://www.eros-os.org/mailman/listinfo/cap-talk
Adam Barth
2010-02-13 05:28:56 UTC
Permalink
Thanks. I cheated and got input from Mark Miller before emailing this list. :)

Adam


Mark Seaborn
2010-02-12 14:13:56 UTC
Permalink
Post by Adam Barth
For servers that do use cookies for security, servers SHOULD use a
redundant form of authentication, such as HTTP authentication or TLS
client certificates.
Don't these both introduce ambient authority as well?

Why use redundant authentication? If I log in with a password to get a
cookie, why should I log in with a password via HTTP authentication as well?

AFAIK, TLS client certs aren't very usable, but then I've never set one up.
Would I be right in thinking that TLS client certs get sent to any server
that requests a cert, as with SSH public keys? This would make them a more
broadly-scoped form of ambient authority than cookies and HTTP auth.
Post by Adam Barth
7.2. Ambient Authority
If a server uses cookies to authenticate users, a server might suffer
security vulnerabilities because user agents occasionally issue HTTP
requests on behalf of remote parties (e.g., because of HTTP redirects
or HTML forms).
Not sure if you're looking for feedback on the wording, but this reads as
somewhat vague. How about: "A server that uses cookies to authenticate
users can suffer from security vulnerabilities because user agents provide
mechanisms that allow one party to issue HTTP requests on behalf of
another. These mechanisms include HTTP redirects and HTML forms."


Post by Adam Barth
When issuing those requests, the user agent attaches
cookies even if the entity does not know the contents of the cookies,
possibly letting the remote entity exercise authority at an unwary
server.
User agents can mitigate this issue to some degree by
providing APIs for suppressing the Cookie header on outgoing
requests.
You're referring to UMP here?
Post by Adam Barth
Although this security concern goes by a number of names (e.g.,
cross-site scripting and cross-site request forgery),
Cross-site scripting is caused by failure to escape strings, not by cookies.
Post by Adam Barth
the issue stems
from cookies being a form of ambient authority. Cookies encourage
server operators to separate designation (in the form of URLs) from
authorization (in the form of cookies). Disentangling designation
and authorization can cause the server and its clients to become
confused deputies and undertake undesirable actions.
Again, I'm not sure how much depth you want to go into, but you could define
(or refer to definitions of) ambient authority and confused deputies.
Post by Adam Barth
Instead of using cookies for authorization, server operators might
wish to consider entangling designation and authorization by treating
URLs as object-capabilities.
I think "object-capability" is reserved for references that are unforgeable
(OS and language caps), not merely unguessable, but I'm not sure of the
exact definitions people have settled on.

Post by Adam Barth
7.3. Clear Text
The information in the Set-Cookie and Cookie headers is transmitted
in the clear.
Not if they're sent over HTTPS.

Post by Adam Barth
Servers SHOULD encrypt and sign their cookies.


It sounds like you're saying that servers should encrypt cookies-at-rest --
those stored in a database. Otherwise I am not sure if you're saying:
* cookies should only be sent over an encrypted channel, or
* cookies should consist of encrypted data.


Post by Adam Barth
7.4. Weak Confidentiality
Cookies do not provide isolation by port. If a cookie is readable by
a service running on one port, the cookie is also readable by a
service running on another port of the same server. If a cookie is
writable by a service on one port, the cookie is also writable by a
service running on another port of the same server.
I didn't know that. I suppose this ties in with your "Beware of
Finer-Grained Origins" paper. Does this mean that there is little point in
considering origins to be <scheme, domain, port> tuples instead of being
synonymous with domain names?

For example, the Web Geolocation API says that the browser should display
the requester's origin when prompting the user, where "origin" is defined by
HTML 5 to include the port number. Firefox 3.5 apparently violates this by
not displaying the port number. I am sure this is of no consequence for
most users, for whom port numbers are not meaningful. This property of
cookies seems like another reason not to care -- at least for sites that use
cookies.

Post by Adam Barth
For this reason,
servers SHOULD NOT both run mutually distrusting services on
different ports of the same machine and use cookies to store
security-sensitive information.
It's OK to do it on the same machine (i.e. IP address) if different domain
names are used, I think.

Mark
Adam Barth
2010-02-13 05:49:42 UTC
Permalink
Post by Mark Seaborn
Post by Adam Barth
  For servers that do use cookies for security, servers SHOULD use a
  redundant form of authentication, such as HTTP authentication or TLS
  client certificates.
Don't these both introduce ambient authority as well?
Yes. Is there something specific you think would be better to
recommend? In general, I wanted to point to things with RFCs in the
"general recommendations" section.
Post by Mark Seaborn
Why use redundant authentication?  If I log in with a password to get a
cookie, why should I log in with a password via HTTP authentication as well?
HTTP authentication has the virtue of better integrity protection than
cookies. For example, there isn't a way (that I know of) for an
active network attacker to force your HTTP auth credentials (at least
over HTTPS), but there is a way to overwrite your cookies.

The way I would imagine this working is that you'd log in via HTTP
auth, which would then set a cookie (e.g., a session cookie).
Post by Mark Seaborn
AFAIK, TLS client certs aren't very usable, but then I've never set one up.
Would I be right in thinking that TLS client certs get sent to any server
that requests a cert, as with SSH public keys?  This would make them a more
broadly-scoped form of ambient authority than cookies and HTTP auth.
That might or might not be how browsers work today, but there's
nothing inherent in the design of TLS client certs that forces this to
be the case. I suspect we'll see some more innovation in client certs
to make them easier to use and to have better privacy properties.
Post by Mark Seaborn
Post by Adam Barth
7.2.  Ambient Authority
  If a server uses cookies to authenticate users, a server might suffer
  security vulnerabilities because user agents occasionally issue HTTP
  requests on behalf of remote parties (e.g., because of HTTP redirects
  or HTML forms).
Not sure if you're looking for feedback on the wording, but this reads as
somewhat vague.  How about: "A server that uses cookies to authenticate
users can suffer from security vulnerabilities because user agents provide
mechanisms that allow one party to issue HTTP requests on behalf of
another.  These mechanisms include HTTP redirects and HTML forms."
I've adopted a variant of the text you propose:

[[
A server that uses cookies to authenticate users can suffer
security vulnerabilities because some user agents let remote parties
issue HTTP requests from the user agent (e.g., via HTTP redirects and
HTML forms).
]]
Post by Mark Seaborn
Post by Adam Barth
  User agents can mitigate this issue to some degree by
  providing APIs for suppressing the Cookie header on outgoing
  requests.
You're referring to UMP here?
Yes, UMP is an example of such an API. The point is more general
though. For example, you might want a content security policy that
blocks all outgoing cookies, such as the "no-outgoing-cookies"
directive I proposed here:

https://wiki.mozilla.org/Security/CSP/Strawman
Post by Mark Seaborn
Post by Adam Barth
  Although this security concern goes by a number of names (e.g.,
  cross-site scripting and cross-site request forgery),
Cross-site scripting is caused by failure to escape strings, not by cookies.
Perhaps in the proximate sense, but (as Mark is fond of pointing out) you can
run untrusted script in your web page as long as that script can't
abuse ambient authority. Some of that authority comes from the
location bar at the top of the window, but much of it comes from
cookies.
Post by Mark Seaborn
Post by Adam Barth
  the issue stems
  from cookies being a form of ambient authority.  Cookies encourage
  server operators to separate designation (in the form of URLs) from
  authorization (in the form of cookies).  Disentangling designation
  and authorization can cause the server and its clients to become
  confused deputies and undertake undesirable actions.
Again, I'm not sure how much depth you want to go into, but you could define
(or refer to definitions of) ambient authority and confused deputies.
I'm not sure how to do non-normative citations in an RFC. I'll ask
around to see what the best way to do this is.
Post by Mark Seaborn
Post by Adam Barth
  Instead of using cookies for authorization, server operators might
  wish to consider entangling designation and authorization by treating
  URLs as object-capabilities.
I think "object-capability" is reserved for references that are unforgeable
(OS and language caps), not merely unguessable, but I'm not sure of the
exact definitions people have settled on.
I'm happy to use whatever word is most accurate here.
Post by Mark Seaborn
Post by Adam Barth
7.3.  Clear Text
  The information in the Set-Cookie and Cookie headers is transmitted
  in the clear.
Not if they're sent over HTTPS.
Fixed.
Post by Mark Seaborn
Post by Adam Barth
  Servers SHOULD encrypt and sign their cookies.
It sounds like you're saying that servers should encrypt cookies-at-rest --
 * cookies should only be sent over an encrypted channel, or
 * cookies should consist of encrypted data.
I've changed this to the following:

[[
Servers SHOULD encrypt and sign their cookies when transmitting
them to the user agent (even when sending the cookies over a secure
channel).
]]
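As a concrete illustration of the signing half of that advice, here is a minimal Python sketch (the SERVER_KEY, make_cookie_value, and read_cookie_value names are invented for illustration; real deployments would also encrypt, e.g. with AES-GCM from a cryptography library, and would manage keys across server restarts):

```python
import base64
import hashlib
import hmac
import secrets

SERVER_KEY = secrets.token_bytes(32)  # hypothetical per-server secret key

def make_cookie_value(data):
    # Append an HMAC-SHA256 tag so tampering is detectable on the way back.
    tag = hmac.new(SERVER_KEY, data, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(data + tag).decode()

def read_cookie_value(value):
    raw = base64.urlsafe_b64decode(value.encode())
    data, tag = raw[:-32], raw[-32:]
    expected = hmac.new(SERVER_KEY, data, hashlib.sha256).digest()
    # Constant-time comparison; reject anything the server did not sign.
    return data if hmac.compare_digest(tag, expected) else None
```

A cookie that comes back unmodified verifies; one with even a single flipped bit is rejected, which is the integrity property the draft text is after.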
Post by Mark Seaborn
Post by Adam Barth
7.4.  Weak Confidentiality
  Cookies do not provide isolation by port.  If a cookie is readable by
  a service running on one port, the cookie is also readable by a
  service running on another port of the same server.  If a cookie is
  writable by a service on one port, the cookie is also writable by a
  service running on another port of the same server.
I didn't know that.  I suppose this ties in with your "Beware of
Finer-Grained Origins" paper.  Does this mean that there is little point in
considering origins to be <scheme, domain, port> tuples instead of being
synonymous with domain names?
Well, the isolation by scheme is super important. Without that, HTTPS
wouldn't provide any protection from active network attackers. Also,
there's more to secure in the browser than just cookies. Cookies have
weaker protections than most browser privileges because they're old.
Post by Mark Seaborn
For example, the Web Geolocation API says that the browser should display
the requester's origin when prompting the user, where "origin" is defined by
HTML 5 to include the port number.  Firefox 3.5 apparently violates this by
not displaying the port number.  I am sure this is of no consequence for
most users, for whom port numbers are not meaningful.  This property of
cookies seems like another reason not to care -- at least for sites that use
cookies.
I wouldn't use cookies as a model when designing new security
features. Port-based isolation isn't hugely valuable, but it's the
model we've got. There's some value in not introducing gratuitous
complexity when designing new features. (Sadly, we're stuck with some
pretty complex old features.)
Post by Mark Seaborn
Post by Adam Barth
  For this reason,
  servers SHOULD NOT both run mutually distrusting services on
  different ports of the same machine and use cookies to store
  security-sensitive information.
It's OK to do it on the same machine (i.e. IP address) if different domain
names are used, I think.
Good point. I've changed "machine" to "host".

Thanks for your detailed comments.

Adam
Mark Miller
2010-02-13 17:15:10 UTC
Permalink
Hi Adam, it's great to see this coming together so well.
Post by Adam Barth
Post by Mark Seaborn
Post by Adam Barth
For servers that do use cookies for security, servers SHOULD use a
redundant form of authentication, such as HTTP authentication or TLS
client certificates.
Don't these both introduce ambient authority as well?
Yes. Is there something specific you think would be better to
recommend? In general, I wanted to point to things with RFCs in the
"general recommendations" section.
Post by Mark Seaborn
Why use redundant authentication? If I log in with a password to get a
cookie, why should I log in with a password via HTTP authentication as
well?
HTTP authentication has the virtue of better integrity protection than
cookies. For example, there isn't a way (that I know of) for an
active network attacker to force your HTTP auth credentials (at least
over HTTPS), but there is a way to overwrite your cookies.
The way I would imagine this working is that you'd login via HTTP
auth, which would then set a cookie (e.g., a session cookie).
Perhaps it could be clearer that these other ambient authority systems help
address weaknesses that cookies have aside from ambient authority, but that
they do not help avoid the ambient authority problems of cookies.
Post by Adam Barth
Post by Mark Seaborn
AFAIK, TLS client certs aren't very usable, but then I've never set one
up.
Post by Mark Seaborn
Would I be right in thinking that TLS client certs get sent to any server
that requests a cert, as with SSH public keys? This would make them a
more
Post by Mark Seaborn
broadly-scoped form of ambient authority than cookies and HTTP auth.
That might or might not be how browsers work today, but there's
nothing inherent in the design of TLS client certs that forces this to
be the case. I suspect we'll see some more innovation in client certs
to make them easier to use and to have better privacy properties.
Post by Mark Seaborn
Post by Adam Barth
7.2. Ambient Authority
If a server uses cookies to authenticate users, a server might suffer
security vulnerabilities because user agents occasionally issue HTTP
requests on behalf of remote parties (e.g., because of HTTP redirects
or HTML forms).
Not sure if you're looking for feedback on the wording, but this reads as
somewhat vague. How about: "A server that uses cookies to authenticate
users can suffer from security vulnerabilities because user agents provide
mechanisms that allow one party to issue HTTP requests on behalf of
another. These mechanisms include HTTP redirects and HTML forms."
[[
A server that uses cookies to authenticate users can suffer
security vulnerabilities because some user agents let remote parties
issue HTTP requests from the user agent (e.g., via HTTP redirects and
HTML forms).
]]
Post by Mark Seaborn
Post by Adam Barth
User agents can mitigate this issue to some degree by
providing APIs for suppressing the Cookie header on outgoing
requests.
You're referring to UMP here?
Yes, UMP is an example of such an API. The point is more general
though. For example, you might want a content security policy that
blocks all outgoing cookies. For example, the "no-outgoing-cookies"
https://wiki.mozilla.org/Security/CSP/Strawman
Post by Mark Seaborn
Post by Adam Barth
Although this security concern goes by a number of names (e.g.,
cross-site scripting and cross-site request forgery),
Cross-site scripting is caused by failure to escape strings, not by
cookies.
Perhaps in the proximate sense, but (as Mark is fond of pointing out) you can
run untrusted script in your web page as long as that script can't
abuse ambient authority. Some of that authority comes from the
location bar at the top of the window, but much of it comes from
cookies.
I also find this connection to cross site scripting confusing. Mentioning
it raises more questions that need to be explained. I would also recommend
dropping it. CSRF is the clear case.
Post by Adam Barth
Post by Mark Seaborn
Post by Adam Barth
the issue stems
from cookies being a form of ambient authority. Cookies encourage
server operators to separate designation (in the form of URLs) from
authorization (in the form of cookies). Disentangling designation
and authorization can cause the server and its clients to become
confused deputies and undertake undesirable actions.
Again, I'm not sure how much depth you want to go into, but you could
define (or refer to definitions of) ambient authority and confused deputies.
I'm not sure how to do non-normative citations in an RFC. I'll ask
around to see what the best way to do this is.
Post by Mark Seaborn
Post by Adam Barth
Instead of using cookies for authorization, server operators might
wish to consider entangling designation and authorization by treating
URLs as object-capabilities.
I think "object-capability" is reserved for references that are unforgeable
(OS and language caps), not merely unguessable, but I'm not sure of the
exact definitions people have settled on.
I'm happy to use whatever word is most accurate here.
I think just "capabilities" here might be best. The term I normally use when
speaking precisely is "cryptographic capabilities". Others on this list have
objected to that on reasonable grounds. "Password capabilities" is
unfortunately inappropriate because of its history. "Sparse capability" is
accurate and has no misleading bindings that I know of. However, it has
become obscure. Note that web-keys by themselves are not cryptographic
capabilities, as I explain at <
http://lists.w3.org/Archives/Public/www-tag/2010Feb/0118.html> and <
http://lists.w3.org/Archives/Public/www-tag/2010Feb/0119.html>.

Altogether, I think the phrase "treating URLs as capabilities" is simply
fine. It makes no strong statement that these URLs are capabilities.
_______________________________________________
cap-talk mailing list
http://www.eros-os.org/mailman/listinfo/cap-talk
--
Text by me above is hereby placed in the public domain

Cheers,
--MarkM
Adam Barth
2010-02-14 18:15:11 UTC
Permalink
Post by Mark Miller
Post by Adam Barth
The way I would imagine this working is that you'd login via HTTP
auth, which would then set a cookie (e.g., a session cookie).
Perhaps it could be clearer that these other ambient authority systems help
address weaknesses that cookies have aside from ambient authority, but that
they do not help avoid the ambient authority problems of cookies.
I've removed the recommendation about using redundant authentication.
It seems to be more confusing than valuable.
Post by Mark Miller
Post by Adam Barth
Perhaps in the proximate sense, but (as Mark is fond of pointing out) you can
run untrusted script in your web page as long as that script can't
abuse ambient authority.  Some of that authority comes from the
location bar at the top of the window, but much of it comes from
cookies.
I also find this connection to cross site scripting confusing. Mentioning it
raises more questions that need to be explained. I would also recommend
dropping it. CSRF is the clear case.
Dropped.
Post by Mark Miller
Post by Adam Barth
I'm happy to use whatever word is most accurate here.
I think just "capabilities" here might be best.
Done.

Adam
Mark Seaborn
2010-02-13 19:47:11 UTC
Permalink
Post by Adam Barth
Post by Mark Seaborn
Post by Adam Barth
For servers that do use cookies for security, servers SHOULD use a
redundant form of authentication, such as HTTP authentication or TLS
client certificates.
Don't these both introduce ambient authority as well?
Yes. Is there something specific you think would be better to
recommend? In general, I wanted to point to things with RFCs in the
"general recommendations" section.
Couldn't you recommend web-keys? Although I don't think there is an RFC for
web-keys.
Post by Adam Barth
Why use redundant authentication? If I log in with a password to get a
cookie, why should I log in with a password via HTTP authentication as
well?
HTTP authentication has the virtue of better integrity protection than
cookies. For example, there isn't a way (that I know of) for an
active network attacker to force your HTTP auth credentials (at least
over HTTPS), but there is a way to overwrite your cookies.
Are you saying there is a way for an active network attacker to overwrite
your cookies, even if you're using HTTPS? Wouldn't this only work if the
client is not using HTTPS exclusively to connect to the server? (Maybe this
is clearer from the rest of your document -- is it online somewhere?)

Are there any consequences of overwriting a cookie other than denial of
service?
Post by Adam Barth
Post by Mark Seaborn
User agents can mitigate this issue to some degree by
providing APIs for suppressing the Cookie header on outgoing
requests.
I thought "to some degree" was a bit vague. Providing an API like UMP only
mitigates the issue if web apps use it. The issue is not so much that user
agents must be able to omit the cookie, but that servers must disregard the
presence or absence of cookies.
Post by Adam Barth
Post by Mark Seaborn
Servers SHOULD encrypt and sign their cookies.
It sounds like you're saying that servers should encrypt cookies-at-rest --
* cookies should only be sent over an encrypted channel, or
* cookies should consist of encrypted data.
[[
Servers SHOULD encrypt and sign their cookies when transmitting
them to the user agent (even when sending the cookies over a secure
channel).
]]
It would be better to say that cookies should consist of encrypted/signed
data, or "servers should use cookies that are encrypted and signed".
Referring to 'encrypting a cookie' is not literally accurate because the
piece of data you're encrypting is not a cookie -- it is never sent in
cookie context. (I hope this is not too pedantic. I did find the original
wording ambiguous.)

BTW, it is not clear why cookies should be encrypted even when sending over
HTTPS. Does this come back to my earlier question about overwriting
cookies?

Presumably if a server uses Swiss numbers for its cookies, there is no need
for the cookies to be encrypted and signed.
Post by Adam Barth
Post by Mark Seaborn
7.4. Weak Confidentiality
Cookies do not provide isolation by port. If a cookie is readable by
a service running on one port, the cookie is also readable by a
service running on another port of the same server. If a cookie is
writable by a service on one port, the cookie is also writable by a
service running on another port of the same server.
I didn't know that. I suppose this ties in with your "Beware of
Finer-Grained Origins" paper. Does this mean that there is little point in
considering origins to be <scheme, domain, port> tuples instead of being
synonymous with domain names?
Well, the isolation by scheme is super important. Without that, HTTPS
wouldn't provide any protection from active network attackers.
Is this because, if you're visiting https://a.com and http://b.com, an
attacker could spoof content from http://b.com to frame spoofed content from
http://a.com? I think I see what you were referring to above.

Cheers,
Mark
Adam Barth
2010-02-14 18:34:50 UTC
Permalink
Yes.  Is there something specific you think would be better to
recommend?  In general, I wanted to point to things with RFCs in the
"general recommendations" section.
Couldn't you recommend web-keys?  Although I don't think there is an RFC for
web-keys.
I think it's better to be slightly general here. We're writing the
document for the long term. Web-keys are one instance of the general
concept, but they might seem archaic to a reader ten years from now if
some other URL capability scheme catches on.
HTTP authentication has the virtue of better integrity protection than
cookies.  For example, there isn't a way (that I know of) for an
active network attacker to force your HTTP auth credentials (at least
over HTTPS), but there is a way to overwrite your cookies.
Are you saying there is a way for an active network attacker to overwrite
your cookies, even if you're using HTTPS?
Yes. An active network attacker can spoof an (unencrypted) HTTP
response from the server and include a Set-Cookie header. Nothing in
the protocol stops that from overwriting cookies set over HTTPS.
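Concretely, the spoofed plaintext response might look something like this (host and cookie names are made up for illustration); nothing in the Set-Cookie mechanism distinguishes it from a cookie set over HTTPS, so it silently replaces the secure session's cookie:

```
HTTP/1.1 302 Found
Location: https://mail.example.com/
Set-Cookie: SID=attacker-session-id; Domain=example.com; Path=/
```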
Wouldn't this only work if the
client is not using HTTPS exclusively to connect to the server?
If the UA issues a single HTTP request, an active network attacker can
spoof an HTTP redirect response and cause the UA to generate an HTTP
request to the server. Now, if the user agent is configured to use
Strict-Transport-Security for that host, there is some hope. :)
(Maybe this is clearer from the rest of your document -- is it online somewhere?)
I'm not sure what the best resource for this fact is. I'm hoping
these security considerations will bring these sorts of issues to
light.
Are there any consequences of overwriting a cookie other than denial of
service?
Oh yes. Some of the worst consequences involve login CSRF-like
attacks. For more background, please see:

http://www.adambarth.com/papers/2008/barth-jackson-mitchell-b.pdf

Essentially, by overwriting the user's cookies with the attacker's
cookies, the attacker causes the user to communicate with the server
as if the user was the attacker. If the user communicates
confidential information to the server (such as composing an email),
that information will likely be stored in the attacker's account. For
example, the attacker can look in the "drafts" or "sent email" folder and
read the user's confidential email.
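The flow above can be modeled with a toy sketch (all names are hypothetical; this only simulates the state change, not real HTTP):

```python
# Toy model of "cookie forcing": accounts on the server, and the browser's
# cookie jar reduced to a single session cookie.
accounts = {
    "victim-session": {"drafts": []},
    "attacker-session": {"drafts": []},
}

def save_draft(session_cookie, text):
    # The server trusts whatever session cookie arrives with the request.
    accounts[session_cookie]["drafts"].append(text)

browser_cookie = "victim-session"      # user logged in normally
browser_cookie = "attacker-session"    # spoofed Set-Cookie over plain HTTP
save_draft(browser_cookie, "confidential email")
# The draft now sits in the attacker's account, where the attacker can read it.
```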

Web-keys have the same login CSRF problems because the attacker can
force the user's browser to navigate to the attacker's web-key. Now,
the user might not notice that they're interacting with the server
under the attacker's authority.
Post by Adam Barth
  User agents can mitigate this issue to some degree by
  providing APIs for suppressing the Cookie header on outgoing
  requests.
I thought "to some degree" was a bit vague.  Providing an API like UMP only
mitigates the issue if web apps use it.  The issue is not so much that user
agents must be able to omit the cookie, but that servers must disregard the
presence or absence of cookies.
That's the exact argument I've been making in the CORS/UMP discussion
in W3C WebApps, but MarkM and Tyler don't appear to agree with that
point of view. Do you have a proposal for what we could say at that
point in the draft that would be less vague?
[[
Servers SHOULD encrypt and sign their cookies when transmitting
them to the user agent (even when sending the cookies over a secure
channel).
]]
It would be better to say that cookies should consist of encrypted/signed
data, or "servers should use cookies that are encrypted and signed".
This text seems to be the same as the above, but in the passive voice.
Referring to 'encrypting a cookie' is not literally accurate because the
piece of data you're encrypting is not a cookie -- it is never sent in
cookie context.  (I hope this is not too pedantic.  I did find the original
wording ambiguous.)
I've changed this text to:

[[
Servers SHOULD encrypt and sign the contents of cookies
]]
BTW, it is not clear why cookies should be encrypted even when sending over
HTTPS.  Does this come back to my earlier question about overwriting
cookies?
For example, the cookies are written to disk, where they might be
visible to others. Also, the cookies could become visible during a
cross-site scripting attack.
Presumably if a server uses Swiss numbers for its cookies, there is no need
for the cookies to be encrypted and signed.
I presume you mean <http://wiki.erights.org/wiki/Swiss_number>?
Indeed, these do not require encryption or signatures. However, it
can't hurt. :)

This text is meant to encourage encryption and signatures at the
framework level, irrespective of the semantics of the cookies. For
example, I believe ASP.NET encrypts and signs all cookies for good
measure.
Well, the isolation by scheme is super important.  Without that, HTTPS
wouldn't provide any protection from active network attackers.
Is this because, if you're visiting https://a.com and http://b.com, an
attacker could spoof content from http://b.com to frame spoofed content from
http://a.com?  I think I see what you were referring to above.
Yes. I'd recommend thinking about the issue slightly more abstractly
(i.e., without referring to frames), but you seem to understand why
isolation is important.

The easier way to think about it is that an active network attacker
gets to inhabit / control every security context that trusts
information received over HTTP. If a document from
https://example.com/foo.html shares a security context with
http://example.com/bar.html, then the active network attacker gets to
inhabit / control foo.html.

Adam
Mark Seaborn
2010-02-15 14:05:19 UTC
Permalink
Post by Adam Barth
Post by Mark Seaborn
Post by Adam Barth
Yes. Is there something specific you think would be better to
recommend? In general, I wanted to point to things with RFCs in the
"general recommendations" section.
Couldn't you recommend web-keys? Although I don't think there is an RFC for
web-keys.
I think it's better to be slightly general here. We're writing the
document for the long term. Web-keys are one instance of the general
concept, but they might seem archaic to a reader ten years from now if
some other URL capability scheme catches on.
Fair enough. You do mention secrets-in-URLs later in the document; maybe
this should go in "general recommendations" too, if HTTP auth and client
certs are mentioned in this section.
Post by Adam Barth
Post by Mark Seaborn
Post by Adam Barth
HTTP authentication has the virtue of better integrity protection than
cookies. For example, there isn't a way (that I know of) for an
active network attacker to force your HTTP auth credentials (at least
over HTTPS), but there is a way to overwrite your cookies.
Are you saying there is a way for an active network attacker to overwrite
your cookies, even if you're using HTTPS?
Yes. An active network attacker can spoof an (unencrypted) HTTP
response from the server and include a Set-Cookie header. Nothing in
the protocol stops that from overwriting cookies set over HTTPS.
Post by Mark Seaborn
Wouldn't this only work if the
client is not using HTTPS exclusively to connect to the server?
If the UA issues a single HTTP request, an active network attacker can
spoof an HTTP redirect response and cause the UA to generate an HTTP
request to the server. Now, if the user agent is configured to use
Strict-Transport-Security for that host, there is some hope. :)
Thanks for explaining. Is there a name for this attack? "Cookie
overwriting" sounds appropriate (and you use this term in your paper), but
Googling for this term doesn't produce many references.
Post by Adam Barth
Post by Mark Seaborn
Are there any consequences of overwriting a cookie other than denial of
service?
Oh yes. Some of the worst consequences involve login CSRF-like attacks. For
more background, please see:
http://www.adambarth.com/papers/2008/barth-jackson-mitchell-b.pdf
Essentially, by overwriting the user's cookies with the attacker's
cookies, the attacker causes the user to communicate with the server
as if the user was the attacker. If the user communicates
confidential information to the server (such as composing an email),
that information will likely be stored in the attacker's account. For
example, the attacker can look in the "drafts" or "sent email" folder and
read the user's confidential email.
OK, so there are two kinds of cookie overwriting attack:
* cookie overwriting by hijacking an unencrypted HTTP connection
* cookie overwriting by "login CSRF"
Is there a succinct term for the first that distinguishes it from the
second?

Can't these attacks be addressed by the usual means of including a suitably
unguessable secret in the URL or POST parameter (which can be checked
against the cookie if you want to protect against URL leaks)?
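That usual means can be sketched as a double-submit style check (helper names are hypothetical; in a real app the token would travel in a cookie and be echoed in each form or URL the server emits):

```python
import hmac
import secrets

def issue_session():
    # Hypothetical: one unguessable secret, stored in the cookie and echoed
    # back in every form the server serves for this session.
    token = secrets.token_urlsafe(32)
    return {"cookie": token, "form_field": token}

def request_is_genuine(cookie_token, posted_token):
    # A cross-site attacker can make the browser *send* the cookie, but
    # cannot read it, so it cannot copy the value into a forged request body.
    return hmac.compare_digest(cookie_token, posted_token)
```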

My initial reaction was that overwriting-by-hijacking is not a CSRF or a
confused deputy attack. Normally in CSRF, Alice (an attacker) makes a
request and Bob's ambient credentials are applied. In the attacks above,
Alice arranges it so that when Bob makes a request, Alice's credentials are
applied.

Or to look at it a different way, the attack is enabled by Bob's failure to
authenticate (or at least fully specify) the object he is talking to. Bob
thinks he's talking to (url, bob_cookie), but he's actually talking to (url,
alice_cookie).

On further reflection, CSRF is different from the classical confused deputy
compiler example:
* In the compiler example, the attacker fully designates an object using a
guessable filename in a global namespace. More fully designating the object
does not solve the problem.
* In CSRF, the attacker specifies an object using a guessable name that is
relative to an account. More fully designating the object (using
unguessable strings) is part of the fix. (I am assuming that an object is
something that is account-specific here, which is not always true.)

Login CSRF consists of two steps:
* The attacker sends the login HTTP request.
* The innocent page uses the resulting overwritten cookies.
Which part is the CSRF? The first step looks like a CSRF, but since it's
not using any credentials in the request, I suppose it is abusing the HTTP
server's ambient authority to overwrite the browser's cookies.
Post by Adam Barth
Web-keys have the same login CSRF problems because the attacker can
force the user's browser to navigate to the attacker's web-key. Now,
the user might not notice that they're interacting with the server
under the attacker's authority.
This attack can only work at a coarser granularity than login CSRF, though,
can't it? The attack can only replace the whole page, whereas login CSRF
can violate the integrity of individual parts of the page during the page's
lifetime.

I would call this attack a kind of spoofing, rather than CSRF. Rather than
one site spoofing another, it can be one account or page spoofing another on
the same site. Tyler's Petname toolbar would not help in this case. Maybe
this can be addressed by petnames that are finer-grained than a site, which
might require sites' co-operation to establish.
Post by Adam Barth
Post by Mark Seaborn
Post by Adam Barth
User agents can mitigate this issue to some degree by
providing APIs for suppressing the Cookie header on outgoing
requests.
My problem with this statement is the agency it implies for user agents (no
pun intended!). There are some security issues an individual user agent can
address on its own, but this is not one of them.
Post by Adam Barth
Post by Mark Seaborn
I thought "to some degree" was a bit vague. Providing an API like UMP only
mitigates the issue if web apps use it. The issue is not so much that user
agents must be able to omit the cookie, but that servers must disregard the
presence or absence of cookies.
That's the exact argument I've been making in the CORS/UMP discussion
in W3C WebApps, but MarkM and Tyler don't appear to agree with that
point of view. Do you have a proposal for what we could say at that
point in the draft that would be less vague?
You mean like in
http://lists.w3.org/Archives/Public/public-webapps/2009OctDec/0155.html?

I'd still say that the best way to ensure that servers disregard the
presence or absence of cookies is to ensure that new APIs don't send
cookies.

How about: "Servers can mitigate this issue by disregarding the presence or
absence of ambiently-provided credentials (including cookies) in requests
and by using other authorising information instead. Standards bodies [or
user agent implementors] can encourage this behaviour by providing user
agent APIs that do not send ambient credentials in requests."
Post by Adam Barth
Post by Mark Seaborn
Post by Adam Barth
[[
Servers SHOULD encrypt and sign their cookies when transmitting
them to the user agent (even when sending the cookies over a secure
channel).
]]
It would be better to say that cookies should consist of encrypted/signed
data, or "servers should use cookies that are encrypted and signed".
This text seems to be the same as the above, but in the passive voice.
Not exactly the same, because the passive voice adds ambiguity. :-)

C = encrypt(D, key)

C is the cookie.
D is not a cookie.
The server encrypts D.
The server does not encrypt C.
D is encrypted (that is, D gets encrypted, but D does not consist of
encrypted data).
C is encrypted (that is, C consists of encrypted data, but C does not get
encrypted).

This is my interpretation, anyhow. :-)
Post by Adam Barth
Post by Mark Seaborn
Presumably if a server uses Swiss numbers for its cookies, there is no need
for the cookies to be encrypted and signed.
I presume you mean <http://wiki.erights.org/wiki/Swiss_number>?
Yes.
Post by Adam Barth
Indeed, these do not require encryption or signatures.
You could change the text to cover this case by saying "Cookies should not
be meaningful to any party other than the server. The server should ensure
that cookies can only be interpreted by the server by either:
* using randomly-generated numbers that are looked up in a table on the
server [Swiss numbers], or
* encrypting and signing potentially-sensitive data to yield cookies."
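The first of those two options, Swiss-number cookies, amounts to keeping all state server-side and handing the client only an unguessable lookup key. A minimal sketch (names are hypothetical; `secrets.token_urlsafe` supplies the unguessable random string):

```python
import secrets

class SessionTable:
    """Server-side state indexed by an unguessable random ID (a Swiss number)."""

    def __init__(self):
        self._sessions = {}

    def new_session(self, state) -> str:
        # The cookie value is meaningless to any party other than the server:
        # it carries no data, only the ability to designate an entry here.
        sid = secrets.token_urlsafe(32)
        self._sessions[sid] = state
        return sid

    def lookup(self, sid):
        # An unknown or forged ID simply fails to designate anything.
        return self._sessions.get(sid)
```

Such cookies need no encryption or signature, as noted above, because there is nothing in them to read or forge.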

Cheers,
Mark
Adam Barth
2010-02-15 15:35:59 UTC
Permalink
I think it's better to be slightly general here.  We're writing the
document for the long term.  Web-keys are one instance of the general
concept, but they might seem archaic to a reader ten years from now if
some other URL capability scheme catches on.
Fair enough.  You do mention secrets-in-URLs later in the document; maybe
this should go in "general recommendations" too, if HTTP auth and client
certs are mentioned in this section.
I've actually removed HTTP auth and client certs from the earlier
section because they seemed to be causing more confusion than good.
If the UA issues a single HTTP request, an active network attacker can
spoof an HTTP redirect response and cause the UA to generate an HTTP
request to the server.  Now, if the user agent is configured to use
Strict-Transport-Security for that host, there is some hope.  :)
Thanks for explaining.  Is there a name for this attack?  "Cookie
overwriting" sounds appropriate (and you use this term in your paper), but
Googling for this term doesn't produce many references.
I don't know of a good name. Informally, we've been referring to it
as "cookie forcing."
 * cookie overwriting by hijacking an unencrypted HTTP connection
 * cookie overwriting by "login CSRF"
Is there a succinct term for the first that distinguishes it from the
second?
Well, there are more, actually. For example, that two sibling domains
can overwrite each other's cookies is another form. Really, these are
just integrity failures in the cookie protocol.
Can't these attacks be addressed by the usual means of including a suitably
unguessable secret in the URL or POST parameter (which can be checked
against the cookie if you want to protect against URL leaks)?
Nope. Recall that we're worried about an attacker who uses these
integrity failures to transplant cookies from his browser to the
user's browser. He can just as easily transplant the "unguessable"
secret he receives in his browser to the user's browser in the URL or
POST parameters.
My initial reaction was that overwriting-by-hijacking is not a CSRF or a
confused deputy attack.  Normally in CSRF, Alice (an attacker) makes a
request and Bob's ambient credentials are applied.  In the attacks above,
Alice arranges it so that when Bob makes a request, Alice's credentials are
applied.
Indeed, it is not a CSRF because there is no cross-site request that
is being forged. The consequences are similar to login CSRF, which is
why I mentioned it.
Or to look at it a different way, the attack is enabled by Bob's failure to
authenticate (or at least fully specify) the object he is talking to.  Bob
thinks he's talking to (url, bob_cookie), but he's actually talking to (url,
alice_cookie).
I don't think that's correct. The attack is caused by the inability of
the server to bind to the user's browser. That's why URL tokens don't
help: they don't bind to the browser any better than cookies in this
threat model.
On further reflection, CSRF is different from the classical confused deputy:
 * In the compiler example, the attacker fully designates an object using a
guessable filename in a global namespace.  More fully designating the object
does not solve the problem.
 * In CSRF, the attacker specifies an object using a guessable name that is
relative to an account.  More fully designating the object (using
unguessable strings) is part of the fix.  (I am assuming that an object is
something that is account-specific here, which is not always true.)
 * The attacker sends the login HTTP request.
 * The innocent page uses the resulting overwritten cookies.
Which part is the CSRF?  The first step looks like a CSRF, but since it's
not using any credentials in the request, I suppose it is abusing the HTTP
server's ambient authority to overwrite the browser's cookies.
The first part is CSRF. Quite literally, the attacker is forging a
cross-site request to the login page. You are correct that it's a
CSRF attack that doesn't involve any credentials.
Web-keys have the same login CSRF problems because the attacker can
force the user's browser to navigate to the attacker's web-key.  Now,
the user might not notice that they're interacting with the server
under the attacker's authority.
This attack can only work at a coarser granularity than login CSRF, though,
can't it?  The attack can only replace the whole page, whereas login CSRF
can violate the integrity of individual parts of the page during the page's
lifetime.
Or maybe it works at a finer grain because I can replace pages in one
tab but leave pages in other tabs unmolested. In any case, the
granularity of the attack doesn't matter. The attack is still
problematic.
I would call this attack a kind of spoofing, rather than CSRF.  Rather than
one site spoofing another, it can be one account or page spoofing another on
the same site.  Tyler's Petname toolbar would not help in this case.  Maybe
this can be addressed by petnames that are finer-grained than a site, which
might require sites' co-operation to establish.
You're welcome to call the attack whatever you like, but it is, quite
literally, a CSRF attack. Namely, the attacker is forging a
cross-site request containing a web-key (his own).

Instead of thinking about CSRF in terms of ambient authority, it's
more helpful to think about them in terms of the integrity failures
that arise from the ability of one web site to generate HTTP requests
to other web sites.
Post by Mark Seaborn
Post by Adam Barth
Post by Adam Barth
  User agents can mitigate this issue to some degree by
  providing APIs for suppressing the Cookie header on outgoing
  requests.
My problem with this statement is the agency it implies for user agents (no
pun intended!).  There are some security issues an individual user agent can
address on its own, but this is not one of them.
I guess we can say, more precisely, that the agency lies with the user
agent implementors (to provide the APIs) and the application
programmers (to use the APIs).
That's the exact argument I've been making in the CORS/UMP discussion
in W3C WebApps, but MarkM and Tyler don't appear to agree with that
point of view.  Do you have a proposal for what we could say at that
point in the draft that would be less vague?
You mean like in
http://lists.w3.org/Archives/Public/public-webapps/2009OctDec/0155.html?
Possibly. There's been a lot of messages on that topic. :)
I'd still say that the best way to ensure that servers disregard the
presence or absence of cookies is to ensure that new APIs don't send
cookies.
Perhaps.
How about:  "Servers can mitigate this issue by disregarding the presence or
absence of ambiently-provided credentials (including cookies) in requests
and by using other authorising information instead.  Standards bodies [or
user agent implementors] can encourage this behaviour by providing user
agent APIs that do not send ambient credentials in requests."
I've just removed the sentence. The more we clean it up, the more it
becomes like the third paragraph in that subsection, which does a
better job of explaining what's going on anyway. Plus, it makes more
sense to present the solution after explaining the problem fully (which
we do in paragraph 2).
Post by Mark Seaborn
Post by Adam Barth
[[
Servers SHOULD encrypt and sign their cookies when transmitting
them to the user agent (even when sending the cookies over a secure
channel).
]]
It would be better to say that cookies should consist of
encrypted/signed
data, or "servers should use cookies that are encrypted and signed".
This text seems to be the same as the above, but in the passive voice.
Not exactly the same, because the passive voice adds ambiguity. :-)
C = encrypt(D, key)
C is the cookie.
D is not a cookie.
The server encrypts D.
The server does not encrypt C.
D is encrypted (that is, D gets encrypted, but D does not consist of
encrypted data).
C is encrypted (that is, C consists of encrypted data, but C does not get
encrypted).
This is my interpretation, anyhow. :-)
This issue should be fixed now that we refer to encrypting the
"content" of the cookie. We can then take D to be the content of the
cookie (note that we take C to be the "value" of the cookie
earlier in the document).
I presume you mean <http://wiki.erights.org/wiki/Swiss_number>?
Yes.
Indeed, these do not require encryption or signatures.
You could change the text to cover this case by saying "Cookies should not
be meaningful to any party other than the server.  The server should ensure
that cookies can only be interpreted by the server by either:
 * using randomly-generated numbers that are looked up in a table on the
server [Swiss numbers], or
 * encrypting and signing potentially-sensitive data to yield cookies."
I'm not sure this distinction is worth making. I'd rather folks just
encrypted and signed all their cookies. It's a SHOULD-level
requirement, so folks don't need to follow it if they understand the
consequences of what they're doing.

Adam
Mark Seaborn
2010-02-17 13:55:39 UTC
Permalink
Post by Mark Seaborn
Post by Mark Seaborn
Post by Adam Barth
Web-keys have the same login CSRF problems because the attacker can
force the user's browser to navigate to the attacker's web-key. Now,
the user might not notice that they're interacting with the server
under the attacker's authority.
This attack can only work at a coarser granularity than login CSRF, though,
can't it? The attack can only replace the whole page, whereas login CSRF
can violate the integrity of individual parts of the page during the page's
lifetime.
Or maybe it works at a finer grain because I can replace pages in one
tab but leave pages in other tabs unmolested. In any case, the
granularity of the attack doesn't matter. The attack is still
problematic.
What's not clear to me is which tabs you're saying the attacker can
navigate. Suppose the browser has two tabs open:
* Tab A: https://webmail.com/users-webmail-webkey
* Tab B: https://attacker.com

The attacker controls tab B and can navigate it to
https://webmail.com/attacker-webmail-webkey, which is a "Compose Mail"
page. When switching between tabs, the user mistakes this page as belonging
to their own webmail account and starts entering a draft e-mail there, which
the attacker can see. This attack seems implausible because the attacker
does not know when the user is likely to want to compose an e-mail so does
not know when to redirect. If "Compose Mail" pages appear unexpectedly the
user would get suspicious. If the attacker redirects to an "Inbox" page,
information that only the user can see will be missing, so won't that be a
giveaway?

Are you saying that the attacker can also cause tab A to navigate to an
attacker-supplied web-key?
Post by Mark Seaborn
Can't these attacks be addressed by the usual means of including a suitably
unguessable secret in the URL or POST parameter (which can be checked
against the cookie if you want to protect against URL leaks)?
Nope. Recall that we're worried about an attacker who uses these
integrity failures to transplant cookies from his browser to the
user's browser. He can just as easily transplant the "unguessable"
secret he receives in his browser to the user's browser in the URL or
POST parameters.
Doesn't that involve navigating the user's browser tab as well?

Without navigating a tab, I thought the potential vulnerability was
something like the following:

Suppose the user has a tab open on "https://webmail.com/compose-mail". An
attacker uses an active network attack to replace the user's browser's
cookie for webmail.com. If the "Compose Mail" page is sending drafts back
to the server by posting them to "https://webmail.com/save-draft", they will
be saved to the attacker's account. But if the page instead posts to "
https://webmail.com/save-draft?secret-id", the server can check that
secret-id corresponds to the cookie, and ignore the request if it doesn't.
How can the attacker replace the secret-id that the page is using, without
navigating the tab?
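The check described above, that the server verifies the secret-id against the cookie, could be sketched as follows. This is a hypothetical illustration, not text from the draft; deriving the URL token from the session cookie with a keyed hash is one way to bind the two:

```python
import hashlib
import hmac

SERVER_KEY = b"hypothetical key known only to webmail.com"

def token_for(session_cookie: bytes) -> str:
    # The secret-id embedded in the page's save-draft URL,
    # bound to the cookie the page was served under.
    return hmac.new(SERVER_KEY, session_cookie, hashlib.sha256).hexdigest()

def save_draft_allowed(request_cookie: bytes, url_token: str) -> bool:
    # If an attacker has forced his own cookie into the browser, the token
    # in the open page (derived from the user's original cookie) no longer
    # matches, and the server ignores the request.
    return hmac.compare_digest(token_for(request_cookie), url_token)
```

As the replies above note, this only helps against cookie overwriting; if the attacker can also navigate the tab, he can replace the token along with the cookie.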

Mark
Ben Laurie
2010-02-23 19:05:29 UTC
Permalink
Post by Adam Barth
Post by Mark Seaborn
Post by Adam Barth
If the UA issues a single HTTP request, an active network attacker can
spoof an HTTP redirect response and cause the UA to generate an HTTP
request to the server. Now, if the user agent is configured to use
Strict-Transport-Security for that host, there is some hope. :)
Thanks for explaining. Is there a name for this attack? "Cookie
overwriting" sounds appropriate (and you use this term in your paper), but
Googling for this term doesn't produce many references.
I don't know of a good name. Informally, we've been referring to it
as "cookie forcing."
This is a form of session fixation attack. Not sure if that helps much, but
at least it nails down what the threat is.
James A. Donald
2010-02-16 00:17:29 UTC
Permalink
Recall that we're worried about an attacker who uses these integrity
failures to transplant cookies from his browser to the user's
browser. He can just as easily transplant the "unguessable" secret
he receives in his browser to the user's browser in the URL or POST
parameters.
I have always thought of this, and perhaps heard of this, as "cookie
forcing".

Sandro Magi
2010-02-23 05:39:20 UTC
Permalink
Post by Mark Seaborn
I would call this attack a kind of spoofing, rather than CSRF. Rather
than one site spoofing another, it can be one account or page spoofing
another on the same site. Tyler's Petname toolbar would not help in
this case. Maybe this can be addressed by petnames that are
finer-grained than a site, which might require sites' co-operation to
establish.
I agree, but gmail already sports a solution: require the user to pick a
custom theme, colour scheme, or a unique icon for his webmail interface,
which is prominently visible on each screen. It should be exceedingly
unlikely that the attacker could know and pick the same one. The user
will immediately see that he is not in his inbox.

One must also guard against the possibility of the JS in the page being
redirected in the background to another account while still displaying
the old icon/colour scheme/theme.

Sandro
Toby Murray
2010-02-23 08:58:26 UTC
Permalink
Post by Sandro Magi
I would call this attack a kind of spoofing, rather than CSRF.  Rather
than one site spoofing another, it can be one account or page spoofing
another on the same site.  Tyler's Petname toolbar would not help in
this case.  Maybe this can be addressed by petnames that are
finer-grained than a site, which might require sites' co-operation to
establish.
I agree, but gmail already sports a solution: require the user to pick a
custom theme, colour scheme, or a unique icon for his webmail interface,
I use the standard default for all. I'm sure the vast majority of
users do likewise. I'm not convinced that this would offer any real
protection, therefore.

Cheers

Toby
Sandro Magi
2010-02-23 19:16:01 UTC
Permalink
On 23/02/2010 3:58 AM, Toby Murray wrote:
Post by Toby Murray
Post by Sandro Magi
I agree, but gmail already sports a solution: require the user to pick a
custom theme, colour scheme, or a unique icon for his webmail interface,
I use the standard default for all. I'm sure the vast majority of
users do likewise. I'm not convinced that this would offer any real
protection, therefore.
Hence why I said the user would be required to pick one, and the order
of selections presented would always be randomized.

Sandro
Toby Murray
2010-02-23 19:37:12 UTC
Permalink
Post by Sandro Magi
On 23/02/2010 3:58 AM, Toby Murray wrote:
Post by Toby Murray
Post by Sandro Magi
I agree, but gmail already sports a solution: require the user to pick a
custom theme, colour scheme, or a unique icon for his webmail interface,
I use the standard default for all. I'm sure the vast majority of
users do likewise. I'm not convinced that this would offer any real
protection, therefore.
Hence why I said the user would be required to pick one, and the order
of selections presented would always be randomized.
Sorry, I thought you were implying that GMail already implements this.

I'm still not convinced that even I would be protected by this system,
however.
Were I presented with the wrong scheme, I might well just assume Google was
buggy.

You're asking users to make a mental judgement ("I might be under attack")
which they are loath to do. Who wants to believe they might be under attack
when it's much easier to believe GMail is buggy?

Just as secure systems need to be designed so that "the most secure way for
this system to be used, is also the easiest and most natural way to use it"
they also need to be designed so that "the mental model for the user to
adopt that yields the interactions with the system that keep the user most
secure, is also the easiest and most natural for them to adopt", with "easy"
there interpreted to include "requires the user to make the least number of
uncomfortable assumptions, or adopt the least number of beliefs that lead to
uncomfortable conclusions".

Only real fear makes people adopt uncomfortable beliefs (cf. terrorism
hype). Without proper fear, people naturally choose beliefs and assumptions
that produce less discomfort (cf. climate change denial).

It follows that any system that requires the user to consciously acknowledge
the fact that they might be under attack, in order for it to be secure, is
suboptimal.

Cheers

Toby
Sandro Magi
2010-02-23 20:10:14 UTC
Permalink
This exact argument applies to the Petname Toolbar, so if you're
suggesting this anti-spoofing scheme would fail, then so would petnames.

If the user is suddenly presented with a theme/icon that is completely
different, the initial shock will be sufficient to make them look twice
and question what's going on. They will then notice that none of their
e-mails are there, or their folders don't look right, etc.

In order to be explicit about what action should be taken, you can also
display a prominent message with a link, "Not your account/Something
doesn't look right? Sign out here."

Sandro
Post by Toby Murray
Sorry, I thought you were implying that GMail already implements this.
I'm still not convinced that even I would be protected by this system
however.
Were I presented with the wrong scheme, I might well just assume Google
was buggy.
You're asking users to make a mental judgement ("I might be under
attack") which they are loath to do. Who wants to believe they might be
under attack when it's much easier to believe GMail is buggy?
Just as secure systems need to be designed so that "the most secure way
for this system to be used, is also the easiest and most natural way to
use it" they also need to be designed so that "the mental model for the
user to adopt that yields the interactions with the system that keep the
user most secure, is also the easiest and most natural for them to
adopt", with "easy" there interpreted to include "requires the user to
make the least number of uncomfortable assumptions, or adopt the least
number of beliefs that lead to uncomfortable conclusions".
Only real fear makes people adopt uncomfortable beliefs (cf. terrorism
hype). Without proper fear, people naturally choose beliefs and
assumptions that produce less discomfort (cf. climate change denial).
It follows that any system that requires the user to consciously
acknowledge the fact that they might be under attack, in order for it to
be secure, is suboptimal.
Cheers
Toby
_______________________________________________
cap-talk mailing list
http://www.eros-os.org/mailman/listinfo/cap-talk
Toby Murray
2010-02-23 20:59:42 UTC
Permalink
Post by Sandro Magi
This exact argument applies to the Petname Toolbar, so if you're
suggesting this anti-spoofing scheme would fail, then so would petnames.
Absolutely. I was thinking of the petname tool as well when I wrote
that. I would like to think that there is a way to do better than
petnames, or a method that doesn't place the same burden on the user
to do something out of the ordinary in order to be secure when they're
being attacked. The system shouldn't require the user to do anything
out of the ordinary in an attack scenario, because that requires the
user to acknowledge the possibility of an attack. Cognitive/emotional
bias means the user cannot be relied upon to make that
acknowledgement.
Post by Sandro Magi
If the user is suddenly presented with a theme/icon that is completely
different, the initial shock will be sufficient to make them look twice
and question what's going on.
I think 'will' is far too strong. I would concede 'maybe' but also
argue that there would be a reasonable proportion of users who
wouldn't question it at all. Websites update their themes all the
time.
Post by Sandro Magi
They will then notice that none of their
e-mails are there, or their folders don't look right, etc.
Indeed for GMail. But I think my more general point about
'personalised theme' based authentication (to allow users to
authenticate services) still holds.
Post by Sandro Magi
In order to be explicit about what action should be taken, you can also
display a prominent message with a link, "Not your account/Something
doesn't look right? Sign out here."
You can do all of these things. But you're just shifting the
responsibility to the user. Better would be to find a way that doesn't
require the user to do anything out of the ordinary (that they haven't
done the other N-1 times they've checked their email) to be secure.

Every time the user has logged in and not been attacked in this way,
they've been trained to think of that "Not your account/...." text as
something that need not concern them. You can't expect them to take
notice of it the one time they're attacked if it has never been
relevant to them in the past. That's asking too much IMO.

Cheers

Toby
Raoul Duke
2010-02-23 21:39:00 UTC
Permalink
On Tue, Feb 23, 2010 at 12:59 PM, Toby Murray
Post by Toby Murray
that. I would like to think that there is a way to do better than
petnames, or a method that doesn't place the same burden on the user
to do something out of the ordinary in order to be secure when they're
being attacked.
hear hear!
Karp, Alan H
2010-02-24 01:11:43 UTC
Permalink
Post by Toby Murray
that. I would like to think that there is a way to do better than
petnames, or a method that doesn't place the same burden on the user
to do something out of the ordinary in order to be secure when they're
being attacked.
PassPet (if Ping ever gets around to finishing it) addresses that problem by being incapable of computing your password if you're at a phishing site.

________________________
Alan Karp
Principal Scientist
Virus Safe Computing Initiative
Hewlett-Packard Laboratories
1501 Page Mill Road
Palo Alto, CA 94304
(650) 857-3967, fax (650) 857-7029
http://www.hpl.hp.com/personal/Alan_Karp
Raoul Duke
2010-02-23 18:36:13 UTC
Permalink
Post by Sandro Magi
It should be exceedingly
unlikely that the attacker could know and pick the same one. The user
will immediately see that he is not in his inbox.
personally, i can't say i buy that claim w/out extensive research
behind it. users are just too crazy and varied! :-)
David Wagner
2010-02-16 07:15:35 UTC
Permalink
Post by Mark Seaborn
Can't these attacks be addressed by the usual means of including a suitably
unguessable secret in the URL or POST parameter (which can be checked
against the cookie if you want to protect against URL leaks)?
No. Parameters cannot prevent overwriting of cookies.
Implication: Whatever protocol you layer on top of protocols had
better be resilient to overwriting of cookies (or at least, to those
kinds of overwriting that can occur, given your threat model).

A related threat is that an arbitrary site may be able to delete
all cookies, even of other sites.
http://kuza55.blogspot.com/2008/02/understanding-cookie-security.html
Implication: Whatever protocol you use that uses cookies ought to
be resilient to malicious deletion of cookies. I'm not sure if this
was mentioned in Adam's document.
Post by Mark Seaborn
On further reflection, CSRF is different from the classical confused deputy:
* In the compiler example, the attacker fully designates an object using a
guessable filename in a global namespace. More fully designating the object
does not solve the problem.
* In CSRF, the attacker specifies an object using a guessable name that is
relative to an account. More fully designating the object (using
unguessable strings) is part of the fix. (I am assuming that an object is
something that is account-specific here, which is not always true.)
You've lost me here. I don't see what distinction you're trying to draw.
I think classical CSRF *is* a confused deputy example, just like the
classical compiler example.

One fix to classical CSRF is to use a designator, knowledge of which
also suffices to prove authorization (note: this is not the same as
"fully designating"). That fix would have fixed the compilers problem,
and it would fix the web CSRF problem. It doesn't mean it is the only
possible fix. Speaking of "the fix" is nonsensical; there may be multiple
fixes, and "the fix" is not uniquely defined.

Perhaps you meant to ask whether Login CSRF is a confused deputy
problem?
Post by Mark Seaborn
* The attacker sends the login HTTP request.
* The innocent page uses the resulting overwritten cookies.
Which part is the CSRF? The first step looks like a CSRF, but since it's
not using any credentials in the request, I suppose it is abusing the HTTP
server's ambient authority to overwrite the browser's cookies.
FYI, you've left some important stuff out of your description of the first
step. The first step is that the attacker tricks the victim's browser
into sending the login HTTP request.

I guess that this probably gets labelled CSRF because it involves an
attacker tricking the victim's browser into visiting some URL. We could
probably discuss whether it ought to be called CSRF, or whether it ought
to be called a spoofing/trusted path problem.

Or, perhaps instead of asking "what's the CSRF?" you meant to ask
"is this a confused deputy problem?" or "where's the ambient authority?"
Those are interesting questions.
Post by Mark Seaborn
Post by Adam Barth
Web-keys have the same login CSRF problems because the attacker can
force the user's browser to navigate to the attacker's web-key. Now,
the user might not notice that they're interacting with the server
under the attacker's authority.
I would call this attack a kind of spoofing, rather than CSRF.
I'm tempted to agree.
Adam Barth
2010-02-16 07:27:50 UTC
Permalink
Post by David Wagner
A related threat is that an arbitrary site may be able to delete
all cookies, even of other sites.
 http://kuza55.blogspot.com/2008/02/understanding-cookie-security.html
Implication: Whatever protocol you use that uses cookies ought to
be resilient to malicious deletion of cookies.  I'm not sure if this
was mentioned in Adam's document.
Good point. I've added this text:

[[
<t>Finally, an attacker might be able to force the user agent to
delete cookies by storing large number of cookies. Once the user agent
reaches its storage limit, the user agent will be forced to evict some
cookies. Servers SHOULD NOT rely upon user agents retaining
cookies.</t>
]]
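The eviction behaviour that enables this attack can be modelled with a toy per-domain cookie jar. This is a sketch only; the limit is illustrative, and real user agents have their own limits and eviction policies:

```python
from collections import OrderedDict

class CookieJar:
    """Toy model of a user agent's per-domain cookie store with eviction."""

    def __init__(self, limit: int = 50):
        self.limit = limit
        self._jar = OrderedDict()

    def set_cookie(self, name: str, value: str) -> None:
        if name in self._jar:
            del self._jar[name]
        elif len(self._jar) >= self.limit:
            # Storage limit reached: the oldest cookie is evicted,
            # regardless of which server originally set it.
            self._jar.popitem(last=False)
        self._jar[name] = value

    def get(self, name):
        return self._jar.get(name)
```

In this model an attacker who sets `limit` junk cookies evicts the victim server's session cookie, which is why the proposed text says servers SHOULD NOT rely upon user agents retaining cookies.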
Post by David Wagner
FYI, you've left some important stuff out of your description of the first
step.  The first step is that the attacker tricks the victim's browser
into sending the login HTTP request.
I'm not sure there is a "trick" involved. HTML contains an API (the
Form element) specifically for generating such requests.
Post by David Wagner
I guess that this probably gets labelled CSRF because it involves an
attacker tricking the victim's browser into visiting some URL.  We could
probably discuss whether it ought to be called CSRF, or whether it ought
to be called a spoofing/trusted path problem.
For reference, here's a definition of CSRF I wrote in 2008:

[[
In a cross-site request forgery (CSRF) attack, the attacker disrupts
the integrity of the user’s session with a web site by injecting
network requests via the user’s browser.
]]

By that definition, login CSRF is indeed a CSRF attack. Of course,
you might not like that definition. :)

Adam
Adam Barth
2010-02-17 01:54:26 UTC
Permalink
Post by Bill Frantz
Post by David Wagner
A related threat is that an arbitrary site may be able to delete
all cookies, even of other sites.
 http://kuza55.blogspot.com/2008/02/understanding-cookie-security.html
Implication: Whatever protocol you use that uses cookies ought to
be resilient to malicious deletion of cookies.  I'm not sure if this
was mentioned in Adam's document.
[[
       <t>Finally, an attacker might be able to force the user agent to
       delete cookies by storing large number of cookies. Once the user agent
       reaches its storage limit, the user agent will be forced to evict some
       cookies. Servers SHOULD NOT rely upon user agents retaining
       cookies.</t>
]]
Even stronger, Safari has a UI for deleting cookies. I use it frequently.
If I weren't so lazy I might have a program to mutate them instead. :-)
Indeed. The non-malicious deletion of cookies is mentioned in the
main text of the spec.

Adam
David Wagner
2010-02-16 07:21:38 UTC
Permalink
Post by Adam Barth
[[
Servers SHOULD encrypt and sign their cookies when transmitting
them to the user agent (even when sending the cookies over a secure
channel).
]]
Hmm. I wonder if this advice can be improved.
Consider two alternate paradigms, which we could recommend:

P1: Servers should encrypt and sign cookies.
P2: Servers should store only a random unguessable ID in the cookie,
and all state should be stored on the server, indexed by that ID.

I think P1 has an additional security risk that is less likely to arise
in P2: the risk of replay attacks. So, to me, P1 seems like it might be
harder to secure: i.e., it seems like if we want to explain to developers
how to use P1 securely, the list of things we have to explain is longer
than if we recommend P2.

(Yes, replay attacks could occur in P2 if developers added new IDs
and entries to the state table instead of mutating the entry associated
with an existing ID, but I conjecture that this kind of mistake is
less likely, because developers are used to updating entries in
hashmaps as time passes.)

This makes me think it may be better to recommend that developers
follow approach P2. What do you think?
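One common mitigation for the replay risk under P1 is to sign a freshness timestamp along with the state, so a stale cookie is rejected even though its signature still verifies. A sketch (names hypothetical; HMAC alone gives integrity, and a real P1 deployment would encrypt the payload as well):

```python
import base64
import hashlib
import hmac
import json
import time

KEY = b"hypothetical server key"

def issue(state: dict) -> str:
    # Sign the state together with an issued-at timestamp.
    payload = json.dumps({"state": state, "iat": time.time()}).encode()
    tag = hmac.new(KEY, payload, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(payload + tag).decode()

def verify(cookie: str, max_age: float = 3600.0) -> dict:
    raw = base64.urlsafe_b64decode(cookie.encode())
    payload, tag = raw[:-32], raw[-32:]
    if not hmac.compare_digest(tag, hmac.new(KEY, payload, hashlib.sha256).digest()):
        raise ValueError("bad signature")
    obj = json.loads(payload)
    if time.time() - obj["iat"] > max_age:
        # A correctly signed but old cookie may be a replay.
        raise ValueError("stale cookie; possible replay")
    return obj["state"]
```

This narrows, but does not eliminate, the replay window, which is part of why the list of things to explain for P1 is longer than for P2.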
Adam Barth
2010-02-16 07:37:32 UTC
Permalink
Post by Adam Barth
[[
Servers SHOULD encrypt and sign their cookies when transmitting
them to the user agent (even when sending the cookies over a secure
channel).
]]
Hmm.  I wonder if this advice can be improved.
P1: Servers should encrypt and sign cookies.
P2: Servers should store only a random unguessable ID in the cookie,
and all state should be stored on the server, indexed by that ID.
I think P1 has an additional security risk that is less likely to arise
in P2: the risk of replay attacks.  So, to me, P1 seems like it might be
harder to secure: i.e., it seems like if we want to explain to developers
how to use P1 securely, the list of things we have to explain is longer
than if we recommend P2.
(Yes, replay attacks could occur in P2 if developers added new IDs
and entries to the state table instead of mutating the entry associated
with an existing ID, but I conjecture that this kind of mistake is
less likely, because developers are used to updating entries in
hashmaps as time passes.)
This makes me think it may be better to recommend that developers
follow approach P2.  What do you think?
I think it depends on what you think the goals of the Security
Considerations section are. My understanding is that it serves two
purposes:

1) It's CYA for the IETF. If something goes wrong, the IETF doesn't
want to be blamed. They want to point to the security considerations
and say "look, we told you that wasn't a good idea."

2) It's a resource for folks who want to use the protocol securely.
By listing all the things we know are wrong with the protocol and
recommending mitigations, folks can use that information to build more
secure systems.

Purpose (1) pushes us to be as ridiculous and draconian as possible.
The more we tighten down the recommendations, the better CYA we get.
However, purpose (2) pushes us to recommend things that folks might
actually want to implement.

Returning to the issue at hand, I don't think that everyone is going
to switch to using cookies exclusively as hash keys. For example,
ASP.NET exports all its server-side state to the client in an
encrypted-and-MACed cookie. They do this to avoid having to keep
per-user state on the server, making ASP.NET services scale better.

I'd rather recommend that all cookies be encrypted and MACed so that
folks build that into frameworks rather than trying to draw fine
distinctions between which things can be stored "in the clear."
Recommending that servers always encrypt-and-MAC (conveniently) serves
both purpose (1) and (2).
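For concreteness, a standard-library sketch of the MAC half of this recommendation (a real deployment would also encrypt the payload with a proper cipher, e.g. AES-GCM from a crypto library, and manage keys carefully; the key below is purely illustrative). Note that a valid tag proves integrity, not freshness, which is exactly the replay hazard discussed elsewhere in this thread.

```python
# Sign-and-verify sketch for cookie values using HMAC-SHA256.
# Encryption is omitted here; this shows only tamper detection.
import hmac, hashlib, base64

KEY = b"server-side secret key"  # illustrative; use real key management

def seal(payload):
    tag = hmac.new(KEY, payload, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(payload + tag).decode()

def unseal(cookie):
    raw = base64.urlsafe_b64decode(cookie.encode())
    payload, tag = raw[:-32], raw[-32:]
    expected = hmac.new(KEY, payload, hashlib.sha256).digest()
    # Constant-time comparison; reject anything the server did not mint.
    return payload if hmac.compare_digest(tag, expected) else None

cookie = seal(b"role=user")
assert unseal(cookie) == b"role=user"

# Flip one payload bit: the MAC no longer verifies.
raw = bytearray(base64.urlsafe_b64decode(cookie))
raw[0] ^= 1
assert unseal(base64.urlsafe_b64encode(bytes(raw)).decode()) is None
```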

Adam
David Wagner
2010-02-16 07:29:12 UTC
Permalink
Post by David Wagner
Post by Mark Seaborn
Can't these attacks be addressed by the usual means of including a suitably
unguessable secret in the URL or POST parameter (which can be checked
against the cookie if you want to protect against URL leaks)?
No. Parameters cannot prevent overwriting of cookies.
Implication: Whatever protocol you layer on top of protocols had
better be resilient to overwriting of cookies (or at least, to those
kinds of overwriting that can occur, given your threat model).
I see that I can't write. What I meant to say:

Implication: Whatever protocol you layer on top of *cookies* had
better be resilient to overwriting of cookies (or at least, to those
kinds of overwriting that can occur, given your threat model).

Sorry.

P.S. I hope I'm interpreting what you mean by "these attacks" accurately:
I'm interpreting it to mean "attacks that overwrite cookies".
David Wagner
2010-02-16 08:35:50 UTC
Permalink
Post by Adam Barth
I think it depends on what you think the goals of the Security
Considerations section are. My understanding is that it serves two purposes:
1) It's CYA for the IETF. If something goes wrong, the IETF doesn't
want to be blamed. They want to point to the security considerations
and say "look, we told you that wasn't a good idea."
2) It's a resource for folks who want to use the protocol securely.
By listing all the things we know are wrong with the protocol and
recommending mitigations, folks can use that information to build more
secure systems.
Purpose (1) pushes us to be as ridiculous and draconian as possible.
The more we tighten down the recommendations, the better CYA we get.
However, purpose (2) pushes us to recommend things that folks might
actually want to implement.
My goals: Help people use the protocol securely. Yes, I'd like to
recommend things that folks might actually want to implement. Listing
things that can go wrong (a blacklist of antipatterns to avoid) is fine as
far as it goes, but that's not always the best way to help developers;
sometimes it might be helpful to also give positive recommendations
(some positive patterns to follow).

Aren't CYA considerations a dubious basis for making these kinds of
technical recommendations? Can't the CYA goal be met in other ways,
without compromising the quality of the technical advice?
Post by Adam Barth
Returning to the issue at hand, I don't think that everyone is going
to switch to using cookies exclusively as hash keys. For example,
ASP.NET exports all its server-side state to the client in an
encrypted-and-MACed cookie. They do this to avoid having to keep
per-user state on the server, making ASP.NET services scale better.
I can't speak to performance or scalability, but when it comes
to security, replay attacks are a non-obvious hazard with ASP.NET's
ViewState mechanism (just as you'd predict for any encrypt-and-MAC
approach to secure management of cookies):
http://seclists.org/bugtraq/2005/May/27
http://scottonwriting.net/sowblog/posts/3747.aspx
Note the bottom line advice from the latter post: it's tempting for
developers to think that since ViewState is encrypted and signed, it's
safe; but that is mistaken; and the author of MS's documentation on
ViewState recommends "don't trust view state" (even if it was signed
and encrypted).

I'm not sure about the full scope within which ViewStates can be replayed,
and explaining this to a programmer sounds non-trivial. If I'm reading
Microsoft's documentation correctly, ViewState is bound to the page
but not to the session, and it's not bound to the username unless the
programmer takes a special step to set ViewStateUserKey. See also:
http://scottonwriting.net/sowblog/posts/3747.aspx
It sounds like by default ViewStates can be replayed from session to
session, or from user to user, or to an instance of the same page but
with different query parameters. But Microsoft's documentation fails
to explain this hazard in an understandable way:
http://msdn.microsoft.com/en-us/library/ms972976.aspx
Post by Adam Barth
I'd rather recommend that all cookies be encrypted and MACed so that
folks build that into frameworks rather than trying to draw fine
distinctions between which things can be stored "in the clear."
I too think it would be good if frameworks supported secure use of this
kind of state. Are you suggesting that if the framework is providing
this support, then encrypt-and-MAC is better than unguessable IDs?
I don't see why. Can you spell out the argument?

A framework could support storing state with unguessable IDs in the same
way it supports session management. Actually, for some common use
cases, there is no need to "wish for" new features from the framework:
in cases where developers want to use session cookies, they can just
store the data in the Session object.

Perhaps the new functionality you are asking for (from frameworks) is
support for state stored in persistent cookies. I'm not sure whether
it will be easier to support persistent state generically in frameworks
using unguessable IDs or easier with encrypt-and-MAC. With the
unguessable-ID approach, there's probably some integration with the
database needed.
With encrypt-and-MAC, there's probably some key management (particularly
for clustering or load balancing).

For non-persistent state, a framework that uses unguessable IDs might
provide a safer API than a framework that uses encrypt-and-MAC.
The natural API for non-persistent state (analogous to that for Session
state) becomes immune to replay attacks if the framework implements it
using unguessable IDs, but I don't immediately see how to implement such
an API securely using plain encrypt-and-MAC (without enabling replay
attacks).

In any case, if you decide to recommend encrypt-and-MAC, then I think
it would be useful to warn about replay attacks.

Another possibility is to mention both approaches and the considerations
for each.

By the way, I had no intention to draw fine distinctions between which
things can be stored in the clear. Instead my intent was to describe
one fairly simple pattern that is reasonably secure.
Adam Barth
2010-02-16 17:21:21 UTC
Permalink
Thanks for your feedback. I've added the following text:

[[
However, encrypting and signing cookie
contents does not prevent an attacker from transplanting a cookie from
one user agent to another or from replying the cookie at a later
time.
]]

and

[[
<section anchor="session-identifiers" title="Session Identifiers">
<t>Instead of storing session information directly in a cookie (where
it might be exposed to or replayed by an attacker), servers commonly
store a nonce (or "session identifier") in a cookie. When the server
receives an HTTP request with a nonce, the server can look up state
information associated with the cookie using the nonce as a key.</t>

<t>Using session identifier cookies limits the damage an attacker can
cause if the attacker learns the contents of a cookie because the
nonce is useful only for interacting with the server (unlike non-nonce
cookie content, which might itself be sensitive). Furthermore, using a
single nonce prevents an attacker from "splicing" together cookie
content from two interactions with the server, which could cause the
server to behave unexpectedly.</t>

<t>Using session identifiers is not without risk. For example, the
server SHOULD take care to avoid "session fixation" vulnerabilities. A
session fixation vulnerability proceeds in three steps. First, the
attacker transplants a session identifier from his or her user agent
to the victim's user agent. Second, the victim uses that session
identifier to interact with the server, possibly imbuing the session
identifier with the user's credentials or confidential information.
Third, the attacker uses the session identifier to interact with the
server directly, possibly obtaining the user's authority or
confidential information.</t>
</section>
]]
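The usual mitigation for the fixation scenario described in that last paragraph is to mint a fresh session identifier at every privilege change (e.g. login), so an identifier planted before login carries no authority afterward. A small illustrative sketch, not any particular framework's API:

```python
# Session fixation mitigation: regenerate the session ID at login.
import secrets

sessions = {}  # session ID -> state

def new_session():
    sid = secrets.token_urlsafe(16)
    sessions[sid] = {"user": None}
    return sid

def login(old_sid, username):
    # Discard the pre-login ID (which the attacker may have planted)
    # and mint a new one; the planted ID now designates nothing.
    state = sessions.pop(old_sid, {"user": None})
    state["user"] = username
    fresh = secrets.token_urlsafe(16)
    sessions[fresh] = state
    return fresh

planted = new_session()        # ID the attacker transplanted
fresh = login(planted, "victim")
assert planted not in sessions           # attacker's ID is dead
assert sessions[fresh]["user"] == "victim"
```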

Let me know if you have additional feedback.

Adam
Adam Barth
2010-02-16 17:22:22 UTC
Permalink
       one user agent to another or from replying the cookie at a later
s/replying/replaying/

Adam
Bill Frantz
2010-02-17 01:40:18 UTC
Permalink
Post by Adam Barth
Post by David Wagner
A related threat is that an arbitrary site may be able to delete
all cookies, even of other sites.
 http://kuza55.blogspot.com/2008/02/understanding-cookie-security.html
Implication: Whatever protocol you use that uses cookies ought to
be resilient to malicious deletion of cookies.  I'm not sure if this
was mentioned in Adam's document.
[[
<t>Finally, an attacker might be able to force the user agent to
delete cookies by storing a large number of cookies. Once the user agent
reaches its storage limit, the user agent will be forced to evict some
cookies. Servers SHOULD NOT rely upon user agents retaining
cookies.</t>
]]
Even stronger, Safari has a UI for deleting cookies. I use it frequently.
If I weren't so lazy I might have a program to mutate them instead. :-)

Cheers - Bill

-----------------------------------------------------------------------
Bill Frantz | gets() remains as a monument | Periwinkle
(408)356-8506 | to C's continuing support of | 16345 Englewood Ave
www.pwpconsult.com | buffer overruns. | Los Gatos, CA 95032
David Wagner
2010-02-17 18:04:09 UTC
Permalink
Post by Mark Seaborn
What's not clear to me is which tabs you're saying the attacker can
* Tab A: https://webmail.com/users-webmail-webkey
* Tab B: https://attacker.com
[...]
Post by Mark Seaborn
Are you saying that the attacker can also cause tab A to navigate to an
attacker-supplied web-key?
Adam can give you an authoritative answer. I think it depends
upon the provenance of Tab A (how it was opened). I highly recommend
this paper (Adam is a co-author):

http://www.adambarth.com/papers/2008/barth-jackson-mitchell.pdf

Also, Google's Browser Security Handbook is often a useful resource
for these kinds of questions, although it doesn't seem to have very
clear coverage of frame/tab navigation issues:

http://code.google.com/p/browsersec/wiki/Main

I have a recollection that script from Tab A can navigate Tab B if
Tab A opened Tab B, but I'm not certain about that. I also have a
vague recollection that Tab A may be able to navigate Tab B if Tab B
was opened with a name that is guessable or known to Tab A. But see
Adam's paper for the definitive answers; I'm just going from memory,
and my memory is probably wrong. It's tricky and at times
counter-intuitive (at least for me). Don't you just love web
security?
Adam Barth
2010-02-17 19:47:44 UTC
Permalink
Post by Mark Seaborn
What's not clear to me is which tabs you're saying the attacker can
* Tab A:  https://webmail.com/users-webmail-webkey
* Tab B:  https://attacker.com
[...]
Post by Mark Seaborn
Are you saying that the attacker can also cause tab A to navigate to an
attacker-supplied web-key?
Adam can give you an authoritative answer.  I think it depends
upon the provenance of Tab A (how it was opened).
The exact details are somewhat complicated (especially in
multi-process browsers). It's easiest to make the assumption that an
attacker can navigate any top-level tab at any time.

Adam
Tyler Close
2010-02-17 18:52:12 UTC
Permalink
Hi Adam,

Thanks for undertaking this work. It'll be great to be able to refer
people to an RFC document that explains the problems with cookies.

So far, I have just one comment on a paragraph in the Security Section:

"""
Although this security concern goes by a number of names (e.g.,
cross-site request forgery), the issue stems from cookies being a form
of ambient authority. Cookies encourage server operators to separate
designation (in the form of URLs) from authorization (in the form of
cookies). Disentangling designation and authorization can cause the
server and its clients to become confused deputies and undertake
undesirable actions.
"""

Although the term "Confused Deputy" seems to have caught on somewhat,
I find that people (even very smart ones) almost universally don't
really understand what it means. So I'm worried the last sentence of
the above paragraph will simply read to most people as: "Disentangling
designation and authorization can cause bad stuff to happen." Which is
true, but not an effective explanation.

For a thread on another mailing list, I recently wrote the following:

"""
When a private resource is identified by a guessable URI an attacker
can navigate an authorized user to it under a pretense of the
attacker's choosing. In this unexpected context, the attacker can
cause the user to interact with the private resource in an undesired
way. By measuring response times, the attacker may also learn
significant confidential information about the private resource. Using
unguessable URIs, instead of guessable ones, prevents these attacks.
"""

I think the essence of the point is that disentangling designation and
authorization enables an attacker to direct how the permissions of
*other* agents are applied and so lets an attacker exercise
permissions that he himself doesn't have. Perhaps the existing
paragraph could be rewritten to:

"""
Although this security concern goes by a number of names (e.g.,
cross-site request forgery, confused deputy), the issue stems from
cookies being a form
of ambient authority. Cookies encourage server operators to separate
designation (in the form of URLs) from authorization (in the form of
cookies). Consequently, an attacker can provide the designation for a
request and the victim user agent provides the authorization. As a
result, the user agent performs actions chosen by the attacker, but
attributed to the user.
"""

Maybe something like the above will hit developer's brain pan in a
more permanent way.

--Tyler
Raoul Duke
2010-02-17 19:05:03 UTC
Permalink
Post by Tyler Close
Although the term "Confused Deputy" seems to have caught on somewhat,
I find that people (even very smart ones) almost universally don't
really understand what it means. So I'm worried the last sentence of
the above paragraph will simply read to most people as: "Disentangling
designation and authorization can cause bad stuff to happen." Which is
true, but not an effective explanation.
$0.02 -- apart from the "even very smart ones" aspect of what Tyler
said, i was such a person who didn't really understand - er, and might
not totally still for all i know - what it means. (i think that is
both because security stuff can easily be confusing, and because the
description i read i think left room for confusion. seems like
something worth re-documenting better somewhere someday.)

sincerely.
Ka-Ping Yee
2010-02-17 19:48:19 UTC
Permalink
Post by Tyler Close
Although the term "Confused Deputy" seems to have caught on somewhat,
I find that people (even very smart ones) almost universally don't
really understand what it means.
"Confused" doesn't really express the malicious nature of the confusion.
I know "confused deputy" already has a lot of mindshare, but I have at
times wondered if "abused deputy" or "hoodwinked deputy" might get that
across better.


-- ?!ng
Adam Barth
2010-02-17 20:09:06 UTC
Permalink
Post by Tyler Close
Maybe something like the above will hit developer's brain pan in a
more permanent way.
Thanks Tyler. I've changed the text to:

[[
<t>Although this security concern goes by a number of names (e.g.,
cross-site request forgery, confused deputy), the issue stems from
cookies being a form of ambient authority. Cookies encourage server
operators to separate designation (in the form of URLs) from
authorization (in the form of cookies). Consequently, the user agent
might supply the authorization for a resource designated by the
attacker, possibly causing the server or its clients to undertake
actions designated by the attacker as though they were authorized by
the user.</t>
]]

This text is meant to be the same as yours, but slightly more
conservative in its claims about what the attacker can actually do and
slightly more precise about which entity undertakes which actions.

Let me know if you have additional feedback.

Adam
Bill Frantz
2010-02-17 21:02:48 UTC
Permalink
Don't you just love web security?
Well not really. (And I realize that David wrote it with his tongue firmly
in his cheek).

In the old days, when I didn't understand how a security system achieved
its goals, I thought the problem was that I didn't understand the system
well enough. As time passed, and I became more of a curmudgeon, I found
that in most cases, the reason I didn't understand how it worked was
because it didn't work. Web security has more fiddly little bits, with new
ones being discovered every day. And these flaws assume "perfect"
implementations. And people wonder why I normally run my browser with
Javascript turned off.

Cheers - Bill

-----------------------------------------------------------------------
Bill Frantz | I like the farmers' market | Periwinkle
(408)356-8506 | because I can get fruits and | 16345 Englewood Ave
www.pwpconsult.com | vegetables without stickers. | Los Gatos, CA 95032