[cap-talk] Challenge problem from David Mazieres
David Wagner
2006-12-12 08:50:04 UTC
MarkM has been looking around for security challenge problems.
I've got one to add to the list. I've been having an email
conversation with David Mazieres about the HiStar system, and he
raised an interesting problem that I think fits the bill.

You've got a laptop. While travelling, you sometimes connect
to the public Internet unprotected ("skinnydipping"). While at
home, you sometimes connect to your work's intranet over a VPN.
The desired security policy is this: any data you got over the
public Internet must be scanned with a virus scanner before it
can be sent to your work's intranet via the VPN.

Question: How do we enforce this policy? Of course, the real
question is how a capability system can be used to enforce this
desired policy, and whether capabilities provide any extra leverage.

In David Mazieres' formulation, he wanted an OS solution that
would work with legacy applications, but we could presumably
relax that a bit. For instance, if you've got in mind a hypothetical
capability-based desktop, a la CapDesk, then we could ask how your
desktop could enforce this policy without making unreasonable changes
to all your applications.

Any takers?

(My first reaction: This problem is orthogonal to the kinds of
problems that capability systems usually try to solve. Consequently,
capability systems might not have any special advantage at enforcing
this kind of policy. That seems ok; capabilities aren't a silver
bullet, and they don't solve every problem in the world. But maybe
others have a different reaction, or can see some clever way in which
capabilities would make this problem easier to solve.)
Mark S. Miller
2006-12-12 10:19:54 UTC
Post by David Wagner
The desired security policy is this: any data you got over the
public Internet must be scanned with a virus scanner before it
can be sent to your work's intranet via the VPN.
Is it adequate for a solution to simply virus-scan all data before sending it
over the VPN? Alternatively, is it adequate to simply virus-scan all data
received from the public Internet?
--
Text by me above is hereby placed in the public domain

Cheers,
--MarkM
David Hopwood
2006-12-12 17:05:43 UTC
Post by David Wagner
MarkM has been looking around for security challenge problems.
I've got one to add to the list. I've been having an email
conversation with David Mazieres about the HiStar system, and he
raised an interesting problem that I think fits the bill.
You've got a laptop. While travelling, you sometimes connect
to the public Internet unprotected ("skinnydipping"). While at
home, you sometimes connect to your work's intranet over a VPN.
The desired security policy is this: any data you got over the
public Internet must be scanned with a virus scanner before it
can be sent to your work's intranet via the VPN.
Question: How do we enforce this policy? Of course, the real
question is how a capability system can be used to enforce this
desired policy, and whether capabilities provide any extra leverage.
In David Mazieres' formulation, he wanted an OS solution that
would work with legacy applications, but we could presumably
relax that a bit. For instance, if you've got in mind a hypothetical
capability-based desktop, a la CapDesk, then we could ask how your
desktop could enforce this policy without making unreasonable changes
to all your applications.
Any takers?
- Confine the browser (and any other application with direct internet
access) that is used for 'skinnydipping'. It should already have been
designed to be confineable, along similar lines to the DarpaBrowser.

- Add a hook to the browser's powerbox so that it is unable to write
anything outside its own private file area unless the data has been
virus-checked. This can be implemented by wrapping any directory
capabilities returned to the browser.
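
To make that concrete, here is a rough Python-style sketch of such a
wrapper (the Directory interface and the scan predicate are names I am
inventing for illustration, not any real powerbox API):

class ScannedDirectory:
    """Forwards to a real directory capability, but vetoes unscanned writes."""

    def __init__(self, directory, scan):
        self._dir = directory   # the underlying directory capability
        self._scan = scan       # predicate: bytes -> bool

    def read(self, name):
        # Reads pass through untouched; the policy only constrains writes.
        return self._dir.read(name)

    def write(self, name, data):
        if not self._scan(data):
            raise PermissionError("virus check failed: " + name)
        self._dir.write(name, data)

    def subdirectory(self, name):
        # Wrap transitively, so the browser cannot escape via a subdirectory.
        return ScannedDirectory(self._dir.subdirectory(name), self._scan)

(Note that the degenerate predicate that always returns False gives the
stricter policy of forbidding all such writes outright.)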
--
David Hopwood <***@blueyonder.co.uk>
David Wagner
2006-12-12 16:56:30 UTC
Post by Mark S. Miller
Post by David Wagner
The desired security policy is this: any data you got over the
public Internet must be scanned with a virus scanner before it
can be sent to your work's intranet via the VPN.
Is it adequate for a solution to simply virus-scan all data before sending it
over the VPN? Alternatively, is it adequate to simply virus-scan all data
received from the public Internet?
Beats me. I guess part of the challenge is filling in the details of
the challenge problem. :-)

Here would be my vote. It might be interesting to try answering it
either way, but I think it's probably a more interesting problem if the
answer to both of your questions is taken to be "No".

For instance, you just know that someone is going to propose the
following variation of the problem, where the desired security policy
is to not allow any data you got over the public Internet to be sent to
your work's intranet via the VPN. If we answer "No" to both of your
questions, then this variation is just a special case of the problem
I posed (the special case where the virus scanner always says "virus
found, transfer forbidden" for any file fed to it). If we answer "Yes"
to your questions, we'll have to think about the variation separately.

P.S. To forestall another possible question, I personally think we should
interpret this to refer only to overt channels of communication, and to
ignore all covert channels. Also, I think we should assume that the
user might have many different networked applications that he uses to
access the Internet/intranet, so we don't want to have to change them all.
Sandro Magi
2006-12-12 17:10:43 UTC
Not the most efficient solution, but serves to highlight the flexibility
of EROS-style systems:

Have a different "network interface" domain for each network you connect
to. You can thus apply custom filters to any data passing through a
particular interface, i.e. virus-checking and firewalls on the open
network, and no filters on the VPN.
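
As a rough Python-style sketch of the idea (every name here is my
invention; the point is only that the interface capability handed to an
application differs per network):

class FilteredInterface:
    """Interface domain for the open network: filters apply."""
    def __init__(self, raw_net, filters):
        self._net = raw_net
        self._filters = filters   # e.g. [firewall_check, virus_check]

    def receive(self):
        data = self._net.receive()
        for check in self._filters:
            if not check(data):
                raise PermissionError("blocked by interface filter")
        return data

class PlainInterface:
    """Interface domain for the VPN: traffic passes through unfiltered."""
    def __init__(self, raw_net):
        self._net = raw_net

    def receive(self):
        return self._net.receive()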

Sandro
Post by David Wagner
MarkM has been looking around for security challenge problems.
I've got one to add to the list. I've been having an email
conversation with David Mazieres about the HiStar system, and he
raised an interesting problem that I think fits the bill.
You've got a laptop. While travelling, you sometimes connect
to the public Internet unprotected ("skinnydipping"). While at
home, you sometimes connect to your work's intranet over a VPN.
The desired security policy is this: any data you got over the
public Internet must be scanned with a virus scanner before it
can be sent to your work's intranet via the VPN.
Question: How do we enforce this policy? Of course, the real
question is how a capability system can be used to enforce this
desired policy, and whether capabilities provide any extra leverage.
In David Mazieres' formulation, he wanted an OS solution that
would work with legacy applications, but we could presumably
relax that a bit. For instance, if you've got in mind a hypothetical
capability-based desktop, a la CapDesk, then we could ask how your
desktop could enforce this policy without making unreasonable changes
to all your applications.
Any takers?
(My first reaction: This problem is orthogonal to the kinds of
problems that capability systems usually try to solve. Consequently,
capability systems might not have any special advantage at enforcing
this kind of policy. That seems ok; capabilities aren't a silver
bullet, and they don't solve every problem in the world. But maybe
others have a different reaction, or can see some clever way in which
capabilities would make this problem easier to solve.)
David Wagner
2006-12-12 17:06:54 UTC
Post by David Hopwood
- Confine the browser (and any other application with direct internet
access) that is used for 'skinnydipping'. It should already have been
designed to be confineable, along similar lines to the DarpaBrowser.
- Add a hook to the browser's powerbox so that it is unable to write
anything outside its own private file area unless the data has been
virus-checked. This can be implemented by wrapping any directory
capabilities returned to the browser.
I like this, as a starting point for discussion.
However, I have two questions:

- Does this mean that the browser instance I use to browse the
public Internet cannot share any settings or preferences with
the browser instance I use when connected to my intranet? In
essence, I have to configure every network application twice,
and manually synchronize them? I can see some ways how one
might avoid that, but they may require modifying every network
application that I might use while connected to the Internet.

- Doesn't this require modifying (the powerbox part of) every
network application that I might use while connected to the
public Internet, to support this sort of virtualization?
David Hopwood
2006-12-12 20:27:14 UTC
Post by David Wagner
Post by David Hopwood
- Confine the browser (and any other application with direct internet
access) that is used for 'skinnydipping'. It should already have been
designed to be confineable, along similar lines to the DarpaBrowser.
- Add a hook to the browser's powerbox so that it is unable to write
anything outside its own private file area unless the data has been
virus-checked. This can be implemented by wrapping any directory
capabilities returned to the browser.
I like this, as a starting point for discussion.
- Does this mean that the browser instance I use to browse the
public Internet cannot share any settings or preferences with
the browser instance I use when connected to my intranet?
No, it can share settings. To make sure that these settings cannot be
used as a channel between instances, we can do something like this:

- there is a settings editor UI which is a separate component from the
rest of the browser. The confined instances never write to the settings
directly; they just have read-only access, plus the ability to make
the settings editor appear when the user asks for it (and it is not
already displayed).

- the settings editor has the ability to display a (labelled) window,
and its interface essentially acts as a function from old to new
settings. It doesn't need, and therefore doesn't have, any network
or filesystem access.
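
A small Python-style sketch of that split (assumed names; only the
editor ever writes the underlying store):

class ReadOnlySettings:
    """The facet a confined browser instance holds: it can only read."""
    def __init__(self, store):
        self._store = store

    def get(self, key, default=None):
        return self._store.get(key, default)

def edit_settings(old_settings, user_edits):
    """The settings editor viewed as a pure function from old settings
    to new settings; it needs no network or filesystem access."""
    new_settings = dict(old_settings)
    new_settings.update(user_edits)
    return new_settings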

With two browser instances, the information flow in this system is:

<pre>

     .----------------> [ user ] <----------------.
     |                      ^                      |
     v                      |                      v
[ Internet ]                v                 [ intranet ]
[ browser ]  <---- [ settings editor ] ---->  [ browser ]
  ^   |                                        |    ^
  |   |                                        |    |
  |   '---------> [ virus checker ]            |    |
  |                      |                     |    |
  |                      v                     |    |
  |                [ filesystem ] <------------'    |
  |                                                 |
  v                                                 v
( Internet ) <-------> [ VPN bridge ] <------> ( intranet )

</pre>

Note that information cannot get from the network cloud to the filesystem
other than:
- via the virus checker;
- via the VPN bridge;
- by the user cutting and pasting (or retyping) between applications.

I think that cutting and pasting is probably out of scope for this problem,
although it could be addressed in something like the EROS Window System
design: <http://www.sagecertification.org/events/sec04/tech/shapiro.html>.

I imagine that a capability-based desktop environment would already have a
framework implementing the above, because it is such a common requirement to
be able to share settings between otherwise confined application instances.

(The problem statement raises the question of why you would not want the files
saved by the intranet browser also to be virus checked -- bearing in mind
that false positive virus indications are quite common, and would need to be
overridable by the user in any case. But hopefully the advantage of being
able to prevent direct communication between application instances, while
still allowing them to share settings, is clear in any case.)

Note that routinely separating the settings editor for each application from
the rest of the app has some other advantages: it would make it easier to
manage (back up, roll back, share between user accounts, limit according to
organisational policy) settings in a uniform way for all applications.
This could well improve usability, compared to the mess of settings handled
differently by each app in current operating systems.
Post by David Wagner
In essence, I have to configure every network application twice,
and manually synchronize them? I can see some ways how one
might avoid that, but they may require modifying every network
application that I might use while connected to the Internet.
- Doesn't this require modifying (the powerbox part of) every
network application that I might use while connected to the
public Internet, to support this sort of virtualization?
That's why I said 'add a hook to the browser's powerbox'. Running some
boolean predicate on file contents before allowing them to be saved is
such an obviously useful facility, then we can reasonably expect it to
have been anticipated. (In fact, it's insane that virus checkers don't
already work that way.) If we have more than one predicate to run, we can
'and' them without requiring them to have knowledge of each other.
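
Composing such predicates is trivial; in Python-style pseudocode:

def all_checks(*predicates):
    """'And' together independent content predicates (bytes -> bool);
    none of them needs any knowledge of the others."""
    def combined(data):
        return all(p(data) for p in predicates)
    return combined

# e.g. save_hook = all_checks(virus_scan, policy_scan)
# (virus_scan and policy_scan are hypothetical predicates)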

In any case, the powerbox that needs to be changed/hooked here is logically
part of the shell, not of the app.
--
David Hopwood <***@blueyonder.co.uk>
David Hopwood
2006-12-12 22:02:05 UTC
Post by David Hopwood
<pre>
     .----------------> [ user ] <----------------.
     |                      ^                      |
     v                      |                      v
[ Internet ]                v                 [ intranet ]
[ browser ]  <---- [ settings editor ] ---->  [ browser ]
  ^   |                                        |    ^
  |   |                                        |    |
  |   '---------> [ virus checker ]            |    |
  |                      |                     |    |
  |                      v                     |    |
  |                [ filesystem ] <------------'    |
  |                                                 |
  v                                                 v
( Internet ) <-------> [ VPN bridge ] <------> ( intranet )
</pre>
Note that information cannot get from the network cloud to the filesystem
- via the virus checker;
- via the VPN bridge;
- by the user cutting and pasting (or retyping) between applications.
I meant, "from the Internet to the filesystem".
--
David Hopwood <***@blueyonder.co.uk>
David Wagner
2006-12-12 17:11:27 UTC
Post by Sandro Magi
Not the most efficient solution, but serves to highlight the flexibility
Have a different "network interface" domain for each network you connect
to. You can thus apply custom filters to any data passing through a
particular interface, i.e. virus-checking and firewalls on the open
network, and no filters on the VPN.
I don't follow. It sounds like we have a layering mismatch. By
"interface", I assume you are talking at the level of the TCP/IP stack
or of the Ethernet interface. But at that level of the system, the code
only sees Layer 2 or Layer 3 packets, whereas the security policy is
about application-layer semantics. For instance, let's say you use your
web browser to read your email by navigating to https://www.hotmail.com
and using Hotmail's webmail interface, then you save an attachment that
someone sent you to your hard disk. The TCP/IP stack or Ethernet driver
has no chance of reconstructing the file contents from the bits flying
over the wire; it is going to see random-looking bits (SSL ciphertexts),
and the bits it sees may have an extremely indirect relationship to the
way those bits are interpreted by the application. Am I misunderstanding
your proposed solution?
Sandro Magi
2006-12-12 17:44:13 UTC
Post by David Wagner
Post by Sandro Magi
Not the most efficient solution, but serves to highlight the flexibility
Have a different "network interface" domain for each network you connect
to. You can thus apply custom filters to any data passing through a
particular interface, i.e. virus-checking and firewalls on the open
network, and no filters on the VPN.
I don't follow. It sounds like we have a layering mismatch. By
"interface", I assume you are talking at the level of the TCP/IP stack
or of the Ethernet interface. But at that level of the system, the code
only sees Layer 2 or Layer 3 packets, whereas the security policy is
about application-layer semantics. For instance, let's say you use your
web browser to read your email by navigating to https://www.hotmail.com
and using Hotmail's webmail interface, then you save an attachment that
someone sent you to your hard disk. The TCP/IP stack or Ethernet driver
has no chance of reconstructing the file contents from the bits flying
over the wire; it is going to see random-looking bits (SSL ciphertexts),
and the bits it sees may have an extremely indirect relationship to the
way those bits are interpreted by the application. Am I misunderstanding
your proposed solution?
Yes and no. I agree it would be difficult to analyze at some layers
(although some ISPs do it anyway), but given a "componentized" system
like EROS, the interposition of a filter can be done at *any* layer:
Ethernet, TCP/IP, post-decryption, file system, http and https protocols
can be implemented in their own components, between which a filter could
analyze requests/responses, etc.

However, I am assuming that we are designing the system from scratch to
enable these patterns (though any sufficiently componentized system
would probably suffice), and not trying to retrofit existing
applications into this usage model.

If you're trying to do this with Firefox, I think you'd have to
interpose a virus scanning filter in front of the file system. That may
not be enough if you keep the application running while you switch
networks though. Interesting challenge.

Sandro
Jonathan Smith
2006-12-12 20:51:46 UTC
The SubOS ideas may be relevant here.

http://citeseer.ist.psu.edu/421583.html

-JMS
Post by Sandro Magi
Post by David Wagner
Post by Sandro Magi
Not the most efficient solution, but serves to highlight the flexibility
Have a different "network interface" domain for each network you connect
to. You can thus apply custom filters to any data passing through a
particular interface, i.e. virus-checking and firewalls on the open
network, and no filters on the VPN.
I don't follow. It sounds like we have a layering mismatch. By
"interface", I assume you are talking at the level of the TCP/IP stack
or of the Ethernet interface. But at that level of the system, the code
only sees Layer 2 or Layer 3 packets, whereas the security policy is
about application-layer semantics. For instance, let's say you use your
web browser to read your email by navigating to https://www.hotmail.com
and using Hotmail's webmail interface, then you save an attachment that
someone sent you to your hard disk. The TCP/IP stack or Ethernet driver
has no chance of reconstructing the file contents from the bits flying
over the wire; it is going to see random-looking bits (SSL ciphertexts),
and the bits it sees may have an extremely indirect relationship to the
way those bits are interpreted by the application. Am I misunderstanding
your proposed solution?
Yes and no. I agree it would be difficult to analyze at some layers
(although some ISPs do it anyway), but given a "componentized" system
like EROS, the interposition of a filter can be done at *any* layer:
Ethernet, TCP/IP, post-decryption, file system, http and https protocols
can be implemented in their own components, between which a filter could
analyze requests/responses, etc.
However, I am assuming that we are designing the system from scratch to
enable these patterns (though any sufficiently componentized system
would probably suffice), and not trying to retrofit existing
applications into this usage model.
If you're trying to do this with Firefox, I think you'd have to
interpose a virus scanning filter in front of the file system. That may
not be enough if you keep the application running while you switch
networks though. Interesting challenge.
Sandro
Jonathan M. Smith, Pompa Professor of EAS,
Professor of CIS, University of Pennsylvania,
T: 215.898.9509, E: ***@cis.upenn.edu
David Wagner
2006-12-12 19:10:06 UTC
Post by Sandro Magi
[...]the [system] only sees Layer 2 or Layer 3 packets, whereas the
security policy is about application-layer semantics. For instance,
let's say you use your web browser to read your email by navigating to
https://www.hotmail.com and using Hotmail's webmail interface, then you
save an attachment that someone sent you to your hard disk. The TCP/IP
stack or Ethernet driver has no chance of reconstructing the file contents
from the bits flying over the wire; it is going to see random-looking bits
(SSL ciphertexts), and the bits it sees may have an extremely indirect
relationship to the way those bits are interpreted by the application.
Yes and no. I agree it would be difficult to analyze at some layers
(although some ISPs do it anyway), but given a "componentized" system
like EROS, the interposition of a filter can be done at *any* layer:
Ethernet, TCP/IP, post-decryption, file system, http and https protocols
can be implemented in their own components, between which a filter could
analyze requests/responses, etc.
Well, I have to say that I don't buy any of this.

First, it's funny that you mention that ISPs analyze application layer
content, because my understanding is they do it poorly for the most
part, and they only do it in a very limited fashion. Maybe better
examples are stateful packet inspection firewalls. Their marketers will
say that they are enforcing application-layer filtering policies at the
packet layer; yet any techie who has looked closely at their technology
knows that this is basically a fairy tale told to those who don't know
any better. It's really easy for an attacker to confuse the packet-layer
filters, because it's practically impossible to re-implement all of the
application semantics accurately at the OS layer. I think experience
with stateful packet inspection firewalls refutes the idea that you can
reliably enforce application-layer content policies at the packet layer.

Second, a componentized OS doesn't help, because we're talking about
application-layer policies. No matter how modular your OS is, you can't
enforce application-layer policies via OS-layer mechanisms without some
help from applications, because the OS won't understand the application
semantics.

Third, I encourage you to re-read my example above. I guarantee you that
in any sane system, SSL will be implemented at the application layer, as
part of the web browser, so the OS won't know the keys needed to decrypt.
Also, the HTTP processing will be done at the application layer, so it's
not reasonable to expect the OS to be able to reliably reconstruct the
HTTP stream. And, the HTML processing will be done at the application
layer, and there's really no chance of the OS being able to independently
reconstruct exactly how the web browser is going to parse and process
that HTML. You just can't do this at the OS layer without re-implementing
all of your applications inside the OS, and no one would recommend that.

Fourth, let me mention a few more applications just to give you some
idea of how hard this is. Suppose I run my IMAP or POP client to read
some email. Are you going to have IMAP and POP built into the OS, too?
Suppose I run Kazaa (or my favorite P2P client) to download some music
files and save them onto my hard drive. Are OS implementors going to
reverse engineer and re-implement the Kazaa network protocol in the OS?
I don't think so!
Post by Sandro Magi
However, I am assuming that we are designing the system from scratch to
enable these patterns (though any sufficiently componentized system
would probably suffice), and not trying to retrofit existing
applications into this usage model.
I think retrofitting is an important part of the point of the challenge.
Any answer that begins by saying "modify all of your applications as
follows: ..." sounds problematic, to my thinking. I suggest that a
truly convincing answer might start from a system built from scratch
to capability principles (e.g., some hypothetical CapDesk-like system)
but not necessarily built with this particular problem in mind, and then
would explain how you will extend the system to solve this challenge
without modifying all existing applications.

Keep in mind that there are competing solutions that do not involve
modifying all applications or re-writing the whole system from scratch
with this security policy in mind, so an implicit part of the challenge
is: Can capabilities do as well, or better than, competing approaches?
My guess is that the answer is likely to be "No, this is not the sort
of problem that capabilities were designed to solve" -- but I could be
wrong.
Sandro Magi
2006-12-12 20:07:33 UTC
Post by David Wagner
Well, I have to say that I don't buy any of this.
First, it's funny that you mention that ISPs analyze application layer
content, because my understanding is they do it poorly for the most
part, and they only do it in a very limited fashion.
I never said they were any good, just that they could, and try to, do it.
Post by David Wagner
[...]
Second, a componentized OS doesn't help, because we're talking about
application-layer policies. No matter how modular your OS is, you can't
enforce application-layer policies via OS-layer mechanisms without some
help from applications, because the OS won't understand the application
semantics.
I'm also not talking about the OS enforcing anything, except providing
mechanisms for the applications to help the user enforce his policies.
Post by David Wagner
Third, I encourage you to re-read my example above. I guarantee you that
in any sane system, SSL will be implemented at the application layer, as
part of the web browser, so the OS won't know the keys needed to decrypt.
I never said the OS would be aware of the keys or need to be. Take the
following componentized browser design:

...Rest of browser <-> http <--+-> network stream
        |                      |         |
   File system                 +---SSL---+

In EROS, http, SSL, network stream, could be separate processes, and the
browser process constructor could accept capabilities to third-party SSL
and http components. So then why couldn't the user simply define a new
configuration with a third-party virus scanner that understands http:

...Rest of browser <-> virus scanner <-> http <--+-> network stream
        |                                        |         |
   Virus scanner                                 +---SSL---+
        |
   File system

This is what I mean by "any sufficiently componentized" system. No, you
can't enforce such designs on all applications, but applications are
more likely to reuse existing abstractions than not (like the file
system), so eventually we might get there.
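
In Python-style pseudocode, the splice might look like this (the stage
interface and all names are my invention, not an actual EROS API):

class HttpStage:
    """One protocol stage; it talks only to the stage below it."""
    def __init__(self, below):
        self._below = below

    def fetch(self, url):
        request = b"GET " + url.encode() + b" HTTP/1.0\r\n\r\n"
        return self._below.transmit(request)

class ScannerStage:
    """Offers the same fetch() interface as the stage it wraps, so the
    neighbouring components never know it is there."""
    def __init__(self, below, scan):
        self._below = below
        self._scan = scan

    def fetch(self, url):
        data = self._below.fetch(url)
        if not self._scan(data):
            raise PermissionError("payload failed virus scan")
        return data

# The two configurations pictured above, composed by a constructor:
#   open_config = ScannerStage(HttpStage(ssl_stage), virus_scan)
#   vpn_config  = HttpStage(plain_stream)
# (ssl_stage, plain_stream and virus_scan are hypothetical capabilities)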

Hopefully, that clarifies my suggestion. I didn't think through the
initial suggestion of interposing on the Ethernet interface, but the
network interface could be used to launch the correct browser
configuration (with or without virus scanning).
Post by David Wagner
[...]
Fourth, let me mention a few more applications just to give you some
idea of how hard this is. Suppose I run my IMAP or POP client to read
some email. Are you going to have IMAP and POP built into the OS, too?
Suppose I run Kazaa (or my favorite P2P client) to download some music
files and save them onto my hard drive. Are OS implementors going to
reverse engineer and re-implement the Kazaa network protocol in the OS?
I don't think so!
I agree that it shouldn't be "built-in", and I wasn't suggesting that.
You could use the same pattern as above for each of these protocols, but
it requires a bit work to parse each protocol. Or an interposition on
something more coarse-grained and common to all of these applications:
the file system.
Post by David Wagner
I think retrofitting is an important part of the point of the challenge.
Any answer that begins by saying "modify all of your applications as
follows: ..." sounds problematic, to my thinking.
Sure, because that's saying, "here's how I would specifically design my
solution to circumvent the problem". I was instead saying, "here's how I
think a web browser would work under EROS, and how such
capability-secure designs can solve this and similar problems".
Post by David Wagner
[...]
Keep in mind that there are competing solutions that do not involve
modifying all applications or re-writing the whole system from scratch
with this security policy in mind, so an implicit part of the challenge
is: Can capabilities do as well, or better than, competing approaches?
My guess is that the answer is likely to be "No, this is not the sort
of problem that capabilities were designed to solve" -- but I could be
wrong.
You didn't address my final suggestion for Firefox if you were seeking
backwards compatibility (interpose a scanner between the app and the
file system). The file system is a component in most OS's and
applications today, so it's similar to the above solution, but more
coarse-grained.

Sandro
David Hopwood
2006-12-12 21:45:15 UTC
Post by David Wagner
Post by Sandro Magi
Yes and no. I agree it would be difficult to analyze at some layers
(although some ISPs do it anyway), but given a "componentized" system
like EROS, the interposition of a filter can be done at *any* layer:
Ethernet, TCP/IP, post-decryption, file system, http and https protocols
can be implemented in their own components, between which a filter could
analyze requests/responses, etc.
Well, I have to say that I don't buy any of this.
First, it's funny that you mention that ISPs analyze application layer
content, because my understanding is they do it poorly for the most
part, and they only do it in a very limited fashion. Maybe better
examples are stateful packet inspection firewalls. Their marketers will
say that they are enforcing application-layer filtering policies at the
packet layer; yet any techie who has looked closely at their technology
knows that this is basically a fairy tale told to those who don't know
any better. It's really easy for an attacker to confuse the packet-layer
filters, because it's practically impossible to re-implement all of the
application semantics accurately at the OS layer. I think experience
with stateful packet inspection firewalls refutes the idea that you can
reliably enforce application-layer content policies at the packet layer.
Agreed.
Post by David Wagner
Second, a componentized OS doesn't help, because we're talking about
application-layer policies. No matter how modular your OS is, you can't
enforce application-layer policies via OS-layer mechanisms without some
help from applications, because the OS won't understand the application
semantics.
Third, I encourage you to re-read my example above. I guarantee you that
in any sane system, SSL will be implemented at the application layer, as
part of the web browser, so the OS won't know the keys needed to decrypt.
Also, the HTTP processing will be done at the application layer, so it's
not reasonable to expect the OS to be able to reliably reconstruct the
HTTP stream. And, the HTML processing will be done at the application
layer, and there's really no chance of the OS being able to independently
reconstruct exactly how the web browser is going to parse and process
that HTML. You just can't do this at the OS layer without re-implementing
all of your applications inside the OS, and no one would recommend that.
Fourth, let me mention a few more applications just to give you some
idea of how hard this is. Suppose I run my IMAP or POP client to read
some email. Are you going to have IMAP and POP built into the OS, too?
SSL, HTTP, HTML, IMAP, POP, and SMTP are all implemented in components of
Windows. Whether this is a good idea is a different question; I'm not holding
up Windows as an exemplar of anything other than that this is possible. Most
antivirus packages (e.g. Norton Antivirus and F-Secure at least) also interpose
on IMAP, POP and/or SMTP connections, using application-level gateways/proxies.

In any case, for the Mazieres challenge problem, I think it's fairly clear
that (regardless of how existing antivirus packages do it), the browser's
access to the filesystem is an easier and more reliable point at which to
enforce the virus checking than by trying to interpose on arbitrary network
protocols.
Post by David Wagner
Suppose I run Kazaa (or my favorite P2P client) to download some music
files and save them onto my hard drive. Are OS implementors going to
reverse engineer and re-implement the Kazaa network protocol in the OS?
I don't think so!
Not even Windows implements the Kazaa protocol.
--
David Hopwood <***@blueyonder.co.uk>
Mark S. Miller
2006-12-12 19:23:45 UTC
I will be traveling and otherwise busy from now through the first week in
January. The following message proposes a significant expansion in the scope
of the discussion, which I won't be able to participate in till 1/8 or so.
Post by David Wagner
MarkM has been looking around for security challenge problems.
I've got one to add to the list. I've been having an email
conversation with David Mazieres about the HiStar system, and he
raised an interesting problem that I think fits the bill.
Rather than try to guess the nature of the challenge, please invite David and
the other HiStar and Asbestos folks to cap-talk. They are clearly doing very
related work.

Googling reveals
<http://www.scs.stanford.edu/histar/>
but if you like to read pdfs online, rather than follow the link to their
paper on that page,
<http://www.scs.stanford.edu/~nickolai/papers/osdi2006-histar.pdf>
seems identical, except that all the internal LaTeX references became live pdf
links.

HiStar derives from Asbestos
<http://asbestos.cs.ucla.edu/doku.php>

whose main paper
<http://asbestos.cs.ucla.edu/pubs/asbestos-sosp05.pdf>
says:
In theory, capabilities alone suffice to implement mandatory access
control. For instance, KeyKOS [18] achieved military-grade
security by isolating processes into compartments and interposing
reference monitors to control use of capabilities across compartment
boundaries. EROS [39] later successfully realized the
principles behind KeyKOS on modern hardware. Psychologically,
however, people have not accepted pure capability-based confinement
[29], perhaps from fear that if just one inappropriate capability
escapes, the security of the whole system may be compromised.
As a result, a number of designs have combined capabilities with
authority checks [4], interposition [15], or even labels [16].
[18] Key Logic. The KeyKOS/KeySAFE System Design, March 1989.
SEC009-01. http://www.agorics.com/Library/KeyKos/keysafe/Keysafe.html.
[39] Jonathan S. Shapiro, Jonathan M. Smith, and David J. Farber. EROS:
a fast capability system. In Proc. 17th ACM Symposium on Operating
Systems Principles, pp. 170–185, Kiawah Island, SC, December
1999.
[29] Mark S. Miller, Ka-Ping Yee, and Jonathan Shapiro. Capability
myths demolished. Technical Report SRL2003-02, Johns Hopkins
University Systems Research Laboratory, 2003.
http://www.erights.org/elib/capability/duals/.
[4] Viktors Berstis. Security and protection of data in the IBM System/
38. In Proc. 7th Annual Symposium on Computer Architecture
(ISCA ’80), pp. 245–252, May 1980.
[15] Paul A. Karger. Limiting the damage potential of discretionary
Trojan horses. In Proc. 1987 IEEE Symposium on Security and
Privacy, pp. 32–37, Oakland, CA, April 1987.
[16] Paul A. Karger and Andrew J. Herbert. An augmented capability
architecture to support lattice security and traceability of access.
In Proc. 1984 IEEE Symposium on Security and Privacy, pp. 2–
12, Oakland, CA, April 1984.
Without further text, this would seem to admit that the rest of the paper is a
marketing exercise, since it would seem the only shortfall of the pure cap
solution is a psychologically-based illusion. Unfortunately, this paper never
revisits the issue, so it's not clear what one should actually conclude.

In any case, they go on to show that their label system is efficient, and give
evidence that it's expressive. So perhaps we could also derive the following
additional challenges from this paper:

* Efficiency aside, can we model Asbestos/HiStar labels in a pure cap world
using membranes? (A minimal sketch of the membrane pattern appears below.)

* What sensible security policies, if any, does their label system allow them
to express naturally that are difficult to express with pure caps? This could
include a clarified statement of the virus-scanning challenge.

(I propose that a "sensible" policy must be one that can successfully enforce
constraints on authority, not just permissions, and do so assuming that side
channels can be plugged but outward covert channels cannot. By "enforce", I
mean prevent, not just impose a speed bump or meet a legally mandated but
meaningless demand. We can of course also argue about the definition of
sensible. And I do think we should argue about whether "information flow" per
se is a sensible concern.)

* Of these sensible policies, can we derive reusable cap-based libraries, such
that these policies can be easily expressed in a pure cap system using these
libraries?

* Finally, for the OS-guys, if such reusable abstractions are indeed helpful
for actual well-motivated cases, what functionality, if any, needs to be
migrated into the kernel to make these practically efficient? How might such
mechanisms relate to our own recent concerns for making some membrane patterns
efficient?
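
(For concreteness, here is a minimal Python sketch of the revocable-membrane
pattern mentioned in the first question. It is deliberately incomplete: a
full membrane would also wrap arguments passing inward, not just results
passing outward.)

class Membrane:
    """Transitively wraps an object graph so that every capability that
    has passed through it can be revoked with a single switch."""
    def __init__(self):
        self._revoked = False

    def revoke(self):
        self._revoked = True

    def wrap(self, target):
        # Immutable data passes through; everything else gets a proxy.
        if isinstance(target, (int, float, str, bytes, bool, type(None))):
            return target
        membrane = self

        class Proxy:
            def __getattr__(self, name):
                if membrane._revoked:
                    raise PermissionError("membrane revoked")
                attr = getattr(target, name)
                if callable(attr):
                    def forward(*args, **kwargs):
                        if membrane._revoked:
                            raise PermissionError("membrane revoked")
                        return membrane.wrap(attr(*args, **kwargs))
                    return forward
                return membrane.wrap(attr)

        return Proxy()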
--
Text by me above is hereby placed in the public domain

Cheers,
--MarkM
Stiegler, Marc D
2006-12-12 20:14:29 UTC
So, I feel obligated to answer this challenge in the ubiquitous-CapDesk
world :-)
Such a world is sufficiently different that the terminology in the
challenge is poor. Specifically, there is no "vpn", no hard boundary
between those "inside" and those "outside". Rather, every individual
with a relationship with the company has a POLA level of authority
within the company, delegated by the company.

Let us consider a typical employee, for whom the POLA outcome is
reasonably VPN-like. Let us consider this employee to have been
delegated 3 authorities within the company when he was hired. All these
authorities are present on his laptop: an authority to talk to the mail
server, an authority to talk to the file server, and an authority to
talk to the web server (in the radical version of this future, the
employee is given one authority onto a dynamically variable list of
authorities that, in the typical case, include these three authorities,
i.e., the corporation keeps a powerbox associated with the individual
employee just as the capdesk keeps a powerbox associated with the
individual app).

When these authorities are delegated, they are delegated via membranes.
This is necessary not just to solve the challenge problem, but to solve
diverse policy issues, from logging for accountability to revocation in
the event of employee termination (of course, in a pola world,
"termination" could come in shades of pola, too, but that is another
story). The net effect of these individual membranes around each of the
"base" authorities from which the employee derives other authorities is
that there is a corporate membrane between the employee and the
corporation.

Having established this membrane, we now stand in a position where we
can implement diverse policies, including logging, revocation, and, of
course, including the virus-check-before-accepting policy. So this
policy can be implemented, even though, in a CapDesk world, virus
checking is much much less interesting :-)
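
As a hedged Python-style sketch of one such per-employee wrapper, showing
just the mail and file-server authorities (every name below is invented
for illustration):

class EmployeeGrant:
    """One interposition point carrying the "corporate membrane"
    policies: logging, revocation, and check-before-accepting."""
    def __init__(self, mail_server, file_server, scan, log):
        self._mail = mail_server
        self._files = file_server
        self._scan = scan           # predicate: bytes -> bool
        self._log = log
        self._revoked = False

    def revoke(self):
        # Employee termination: every authority derived from this
        # grant dies with it.
        self._revoked = True

    def _use(self, action):
        if self._revoked:
            raise PermissionError("authority revoked")
        self._log(action)           # logging for accountability

    def send_mail(self, message):
        self._use(("mail", message))
        self._mail.send(message)

    def store_file(self, name, data):
        self._use(("store", name))
        if not self._scan(data):    # virus-check-before-accepting
            raise PermissionError("virus check failed: " + name)
        self._files.write(name, data)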

Having banged so hard earlier on the weakness of imposing an artificial
boundary between "in-model" and "out-of-model" attacks, I feel obligated
to point out the way to circumnavigate this system: send email to other
employees of the same company via some non-corporate mail server. Which
raises a philosophical question: how many employees do you have to
deliver a viral email attachment to before you can say that you
delivered the attachment to "the company"?

--marcs
-----Original Message-----
Sent: Tuesday, December 12, 2006 12:50 AM
Subject: [cap-talk] Challenge problem from David Mazieres
MarkM has been looking around for security challenge problems.
I've got one to add to the list. I've been having an email
conversation with David Mazieres about the HiStar system, and
he raised an interesting problem that I think fits the bill.
You've got a laptop. While travelling, you sometimes connect
to the public Internet unprotected ("skinnydipping"). While
at home, you sometimes connect to your work's intranet over a VPN.
The desired security policy is this: any data you got over
the public Internet must be scanned with a virus scanner
before it can be sent to your work's intranet via the VPN.
Question: How do we enforce this policy? Of course, the real
question is how a capability system can be used to enforce
this desired policy, and whether capabilities provide any
extra leverage.
In David Mazieres' formulation, he wanted an OS solution that
would work with legacy applications, but we could presumably
relax that a bit. For instance, if you've got in mind a
hypothetical capability-based desktop, a la CapDesk, then we
could ask how your desktop could enforce this policy without
making unreasonable changes to all your applications.
Any takers?
(My first reaction: This problem is orthogonal to the kinds
of problems that capability systems usually try to solve.
Consequently, capability systems might not have any special
advantage at enforcing this kind of policy. That seems ok;
capabilities aren't a silver bullet, and they don't solve
every problem in the world. But maybe others have a
different reaction, or can see some clever way in which
capabilities would make this problem easier to solve.)
David Hopwood
2006-12-12 21:58:11 UTC
Post by Stiegler, Marc D
Having banged so hard earlier on the weakness of imposing an artificial
boundary between "in-model" and "out-of-model" attacks, I feel obligated
to point out the way to circumnavigate this system: send email to other
employees of the same company via some non-corporate mail server.
Let's say that's a webmail provider; then the email will be virus-checked
when the employee tries to save it to the filesystem from their web browser
instance that is configured to access the Internet. The challenge problem
is trying to enforce a coherent, in-principle enforceable security policy,
IMHO.

(It would be more robust to enforce the policy, "this filesystem contains
no files that have not been virus-checked", though. And not by periodically
checking all the files, which is a stupid way of failing to enforce that
constraint.)
--
David Hopwood <***@blueyonder.co.uk>
Stiegler, Marc D
2006-12-12 22:29:09 UTC
-----Original Message-----
Sent: Tuesday, December 12, 2006 1:58 PM
To: General discussions concerning capability systems.
Subject: Re: [cap-talk] Challenge problem from David Mazieres
Post by Stiegler, Marc D
Having banged so hard earlier on the weakness of imposing an
artificial boundary between "in-model" and "out-of-model"
attacks, I
Post by Stiegler, Marc D
send email to other employees of the same company via some
non-corporate mail server.
Let's say that's a webmail provider; then the email will be
virus-checked when the employee tries to save it to the
filesystem from their web browser instance that is configured
to access the Internet. The challenge problem is trying to
enforce a coherent, in-principle enforceable security policy, IMHO.
Yes, if we presume that the second employee also has a corporate
membrane between himself and the corporation, even this out of band
transmission will eventually hit the membrane and get checked...assuming
that the employee tries to save the file on a corporate server rather
than just saving it and using it on his laptop. Which still leaves the
philosophical question, how many laptops associated with the company do
you have to deliver viral email to before you can say you've delivered
viral email to the company? Particularly if everyone uses a laptop
because the company itself has become virtualized? :-)
(It would be more robust to enforce the policy, "this
filesystem contains no files that have not been
virus-checked", though. And not by periodically checking all
the files, which is a stupid way of failing to enforce that
constraint.)
Yes, though I consider this to answer a different problem statement. For
this problem statement, the right place to stand is the file server
(CapDesk has a good place to stand for this, in its distributed file
server component). Membranes would have to be wrapped around all the
grants of file server access, not only to people, but also to corporate
services like web servers and ftp servers, for this to work. Which would
be easy, but, as I say, a different answer to a different (perhaps more
sensible) question.

--marcs
