Discussion: Vulnerability Response (was: BGP TCP RST Attacks)
Ben Nagy
2004-05-21 08:05:46 UTC
Hm. Playing catch up, and ran across this.
-----Original Message-----
From: Ahmed, Balal
[...]
Microsoft-targeted exploits usually arrive on the scene 3 - 8
weeks after a vulnerability has been announced. This TCP RST
advisory cannot be looked at in the same light, though, as it
is cross-platform/vendor.
Oh, it's way worse than that. The "full disclosure for $$$" crowd had working
exploits for LSASS within 24 hours (cf. Dave Aitel's CANVAS announcement on
Full-Disclosure). Given how trivial it was to exploit, I have no doubt that
the underground had them around the same time. There was a widely publicised
exploit for the RPC-DCOM stack overflow after 9 days. IIS SSL PCT - 8 days.
Last year, Workstation and Windows Messenger were around the same, but I
can't be bothered doing the research for exact numbers. Remember that these
are only the public exploits.

Let me expand a little on this part first.

There are a bunch of different vulnerability "classes". The simplest - the
"stack-based buffer overflow" - is very well understood, easy to exploit,
and reliable. All of the major worms I can recall off the top of my head
have been of this kind. We tend to see exploits for these really fast.

More common recently are complex vulnerabilities like the heap
corruption bugs in various services, where it is downright HARD either to
trace back the fault or to "seed" the heap to get even vaguely reliable
exploitation. These are often not exploited in public at all, and the more
you know about the exploitation process the less surprising this becomes.
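
If you have never seen one, here is the shape of that simplest class - a
toy, hypothetical example of my own, not any of the bugs mentioned above:

    /* Classic stack-based buffer overflow (toy example) */
    #include <string.h>

    static void parse_request(const char *input)
    {
        char buf[64];          /* fixed-size buffer on the stack */
        strcpy(buf, input);    /* no length check: anything past 64 bytes
                                  tramples the saved return address,
                                  handing the attacker control of EIP */
    }

    int main(int argc, char **argv)
    {
        if (argc > 1)
            parse_request(argv[1]);  /* feed it >64 bytes and watch it die */
        return 0;
    }

The saved return address sits at a fixed offset from buf, which is exactly
why this class is so easy and reliable to exploit - and why the heap bugs
above, with no such fixed layout, are so much harder.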

If you want an oversimplified yardstick: any vulnerability which is a
stack-based overflow yielding remote SYSTEM should be addressed yesterday
- there is less than 24 hours of safe time. That does not mean you
shouldn't still patch _all_ vulnerabilities which are rated by the vendor
as critical.
As stated elsewhere in this thread, the largest threat vector
will be feeds from the Internet. Given that Sasser exploited
a known vulnerability for which a patch was available, no
patch release from any vendor should be dismissed without due
process and risk analysis with buy-in from security officers
and management. It's very easy to dismiss a vulnerability
without assessing the full impact - until it is exploited, by
which time it's too late.
I completely agree here.

One trend I have heard of is security staff second-guessing vulnerability
advisories or vendor severity ratings - "Oh, that's not exploitable". In
general - don't do that. Even low-level memory management and architecture
courses do nothing to prepare you for assessing real-world exploitability.
This is partly because attack is easier than defence (and easier than
understanding the theory) - an attacker doesn't care _why_ EIP is
0x41414141. :)

Another trend is "people" (also known as "they") claiming disingenuously
that a vulnerability is not exploitable in order to taunt the vendor or
whoever issued the advisory into giving more details. These "not
exploitable" arguments can then be picked up by impressionable people, and
taken as fact. Don't fall for it.
-----Original Message-----
From: Josh Welch
Sent: 05 May 2004 16:24
Subject: RE: [fw-wiz] BGP TCP RST Attacks (was: Cisco PIX
vulnerable to TCP RST DoS attacks)
<snip>
I still believe that the #1 impact of this vulnerability,
as seen in
an Internet-wide perspective, is killing BGP sessions in
core routers.[...]
(Josh Welch)
The advisories I have seen have made this same statement.
However, according to another list I read, there are a number
of network operators who feel this is not a real threat. Many
of them hold that it would be excessively challenging to
match up the source-ip:source-port and dest-ip:dest-port and
effectively reset a BGP session without generating a large
volume of traffic, which should be noticed in and of itself.
The advisories are right. Those network operators are wrong. Surprise!
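
For the curious, the arithmetic behind that bald assertion goes roughly
like this (the window size, port count and packet rate are illustrative
guesses of mine, not figures from the advisory):

    /* Back-of-the-envelope cost of a blind TCP RST attack on BGP. A RST
       is accepted anywhere in the receive window, so the guess space is
       2^32 / window per candidate port, and the destination port (179)
       is known. All three parameters are assumptions for illustration. */
    #include <stdio.h>

    int main(void)
    {
        double seq_space = 4294967296.0; /* 2^32 sequence numbers */
        double window    = 16384.0;      /* assumed receive window */
        double ports     = 512.0;        /* assumed candidate source ports */
        double rate      = 25000.0;      /* assumed packets per second */
        double packets   = (seq_space / window) * ports;

        printf("worst-case RSTs: %.0f\n", packets);    /* ~134 million */
        printf("time at %.0f pps: %.0f minutes\n",
               rate, packets / rate / 60.0);           /* ~90 minutes */
        return 0;
    }

Ninety-odd minutes of flooding is noticeable if someone is watching, but
it is nothing like the impossible wall of traffic those operators were
imagining.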

OK, so here's my point. Assessing vulnerability exploitability is not some
kind of "Oh yeah, sez who?" challenge. It is almost certain that the person
telling you a vulnerability is critical has spent a lot more time looking at
it than whoever is second-guessing the advisory. It is _strongly_ advisable,
IMO, to err on the side of caution. I can think of three examples offhand
where "they" cried "not real-world exploitable" and were completely wrong
(ASN.1, BGP TCP RST, OpenSSH) and none, offhand, of the reverse case.

Yes, I am completely aware that patching costs money and that remediation
needs to be prioritised. Maybe vendors and researchers should do something
more formal about also providing ease and reliability of exploitation data
along with vulnerability information, but then everyone would think they
were bragging and ignore it anyway.

I'm going to go ahead and cut this short before it turns into a rant.

Cheers,

ben
Ben Nagy
2004-05-25 12:29:10 UTC
Heya,

Well, I see your point, BUT...

If we're talking real world, my experience is that virtually every company
that is large enough to be complex is wide open to multiple worm infection
vectors. A well-designed worm (for a change) would go through the first
world like curry through a drunk. Firewalls don't really help very much -
every major organisation that gets w0rmed already had one (and YES, I'm sure
they had ports 139, 445 and 1025 closed).

This is not news, and that's part of the reason I work in "vulnerability
management" now.

Did you see this:
http://www.dtc.umn.edu/weis2004/weaver.pdf

I like Weaver and Paxson's stuff, but in this case I think they are being
conservative.

My stance: stack-based remote SYSTEM Windows exploits need to be identified
and patched yesterday, end of story. Anything else is downright negligent.
"Mitigation" (e.g. pseudo air gaps, firewalls, pixies and unicorns) has
failed in 911 systems, utilities, airline reservation systems, the
coastguard, banks... - all of which included "isolated networks".

The trouble is that people are so punch-drunk now with MS patches that
nobody knows "critical" from "critical and urgent". I think that OS vendors
_and_ the research community could do more to address that issue. It will
only get worse - a 0day worm would knock our socks off.

To me, amongst the plethora of products, services and snake oil there are two
evolving solution spaces that solve real problems. Host based vulnerability
mitigation, and anything that allows an organisation to condense and
prioritise information about where they are exposed to known vulnerabilities
in realtime. Firewalls remain a critical part of any infrastructure, of
course, but, to be frank, they just don't work as well anymore.

Actually, I feel so strongly about this I'm going to go ahead and cc the
list on a unicast response. Sorry - and, Ken, please don't interpret this as
a flame or rant against you. I think you must have just touched a nerve. ;)

Cheers,

ben
-----Original Message-----
Sent: Tuesday, May 25, 2004 1:36 PM
To: Ben Nagy
Subject: RE: [fw-wiz] Vulnerability Response (was: BGP TCP
RST Attacks)
Good morning Ben. I would say "Mitigating Factors" provides
the ease-of-exploitation data you refer to. By implementing
or compensating for the mitigating factors it is possible to
decrease the real-world severity in a given environment.
Note: lessen does not mean eliminate. E.g. a firewall blocks
LSASS until an infected laptop comes home and connects to the
LAN. Unless there is a screening process before the laptop is
allowed on the LAN (uncommon), it will likely pass the
infection on to other systems. In this case the mitigating
factor extended the time a company could spend evaluating the
patch before deployment, but did not entirely eliminate the
risk of infection/compromise.
Ken Claussen MCSE (NT4/2K) CCNA CCA
"In Theory it should work as you describe, but the difference
between theory and reality is the truth! For this we all strive"
Marcus J. Ranum
2004-05-26 22:30:10 UTC
Post by Ben Nagy
To me, amongst the plethora of products, services and snake oil there are two
evolving solution spaces that solve real problems. Host based vulnerability
mitigation
The big problem with host based anything is that the management effort
scales with the number of hosts. That's just a losing battle in the long
term, because nobody's host count is shrinking. Basically, the host-side
problem is the same as the system administration problem - and the industry
has made a frightening bodge out of its attempts to "solve" that issue.
Post by Ben Nagy
and anything that allows an organisation to condense and
prioritise information about where they are exposed to known vulnerabilities
in realtime.
Asset management, change control, and security workflow are all
good, yes. Condensing and prioritizing is just part of it. I'm not
at all convinced that it's enough. After all, if you condense and
prioritize the "must fix: disaster" list for many companies you'll get
a list so long that they'll decide to do something else, instead.
Anything else, in fact. :)
Post by Ben Nagy
Firewalls remain a critical part of any infrastructure, of
course, but, to be frank, they just don't work as well anymore.
Firewalls are perfectly good tools that are regularly misused.
It says more about the intellectual state of security than it
does about the technical usefulness of firewalls.

The problem is that firewalls are a tool that was intended to be used
in "default deny" mode, while the technical user community is operating
in a "vulnerability-centric" mode. Rather than focusing on doing a few
things safely, the idea is always to figure out what the current threats
and vulnerabilities are, and whack those. That's a really useless
approach in the long run. I'd guess that a significant number of the
firewalls I've seen are being used to knock down "well known bad things"
instead of "only allow a few good things." I did a talk the other day
in which I outlined the "old-school" secure firewall approach (non-routed
networks, proxy everything, default deny, audit policy violations) and
people in the room were amazed: "None of our users would accept
that kind of solution!" they cried. Therein lies the rub. As long as something
so important as security is the tail trying to wag the dog, it's not going
to go anyplace.

You *think* host-based vulnerability mitigation (what *is* that,
by the way? it sounds like marketing...) is going to work. But
that's just because not enough users have TRIED it enough to
figure out how to politically sandbag it, yet. But don't worry, they
will. Remember, users are supposed to be running host-based
antivirus, too. :P

mjr.
Ben Nagy
2004-05-27 07:56:34 UTC
-----Original Message-----
[...]
Post by Ben Nagy
To me, amongst the plethora of products, services and snake oil there are
two evolving solution spaces that solve real problems. Host based
vulnerability mitigation
The big problem with host based anything is that the
management effort scales with the number of hosts.
Not linearly, though. I am convinced that it can be done - AV vendors
already do it, MS is shipping more and more default security, plus they even
have a (very, very basic) host-based firewall which will be enabled by
default - and I don't hear users screaming that XP is "less compatible" than
Win95. Manageability of host-based agents is basically a solved problem -
let's move on.

[...]
Post by Ben Nagy
and anything that allows an organisation to condense and prioritise
information about where they are exposed to known vulnerabilities in
realtime.
Asset management, change control, and security workflow are
all good, yes. Condensing and prioritizing is just part of
it. I'm not at all convinced that it's enough. After all, if
you condense and prioritize the "must fix: disaster" list for
many companies you'll get a list so long that they'll decide
to do something else, instead.
Anything else, in fact. :)
One of my fundamental premises: no company will get secure without the
corporate will to do so. I agree, and we all know a lot of examples. However,
today even the places that _do_ have the will are frustrated by information
overload, by confusion regarding what various solutions _do_, and by the
nitty-gritty of getting done in practice what we tell them is easy in
theory (like patching, for example).

To me, change control is an _enemy_ when talking about rank and file
machines, not a friend. If you start with secure boxes, strip down the
services and then monitor the critical applications for problems then change
control rocks. If you start with a million desktop PCs, build a standard
image based on what works for all the corporate apps and then run change
control then you end up with a million insecure PCs that nobody has the
authority to fix with any kind of agility.
Post by Ben Nagy
Firewalls remain a critical part of any infrastructure, of course,
but, to be frank, they just don't work as well anymore.
Firewalls are perfectly good tools that are regularly mis-used.
[...]
I did a talk the
other day
in which I outlined the "old-school" secure firewall approach
Old school networks had fewer entry points. My only real point is that true
chokepoint networks are (sadly) a dying breed. I have no doubt that you are
amused by the trend for firewalls to return to application intelligence like
it's a new thing, but not even the mjr perfectly secure firewall will work
if the traffic can get to the hosts another way.
You *think* host-based vulnerability mitigation (what *is*
that, by the way? it sounds like marketing...)
LOL. It means putting stuff on hosts to try to stop zero-day
vulnerabilities, or known ones for which you are not yet patched/fixed. The
marketing term would probably be "prevention" - I use "mitigation" to
underline that it's Just Another Layer and not pixie dust.
is going to
work. But that's just because not enough users have TRIED it
enough to figure out how to politically sandbag it, yet. But
don't worry, they will. Remember, users are supposed to be
running host-based antivirus, too. :P
And AV does a reasonable job, within its defined scope, provided it is used.
It has also reached the point of being a "no-brainer" security investment -
the question is what and how much, instead of whether. That's a good thing.

Unlike marketing (that smarts, by the way ;) all I'm claiming is that those
two EVOLVING solution sets are interesting, and pointed in the right
direction, unlike many which are boring revamps of existing tech or security
appendices that basically do nothing for 90% of the marketplace.

Coffee now.

ben
Marcus J. Ranum
2004-05-27 17:40:23 UTC
Post by Ben Nagy
Post by Marcus J. Ranum
The big problem with host based anything is that the
management effort scales with the number of hosts.
Not linearly, though.
It scales non-linearly if the problem area is well-defined.
When you go above a simple problem (a/v is a "simple"
problem - I'll get to that) then it starts to fall over pretty
quickly.

Consider A/V as a case study. The problem is easy because
there's no need to make a site-specific policy or enforce it.
The problem's black and white:
- Either A/V is installed on a machine or it isn't
- Either the signatures are up to date or they aren't
There's no case where a user is going to need to be able to
run Netsky.V3 on his desktop, or whatever. So administration
scales because there's no real complexity.

Now - if you're gonna make a firewall policy for 10,000 desktops
and 2,000 servers, that's another story! User Bob is gonna want
access to file sharing, Fred needs to reach the mainframe, etc., etc.
You wind up adopting one of two approaches:
- Use the policy that's most convenient to build (e.g.: permissive)
- Use a policy based on minimizing access (e.g.: secure)
The former is relatively easy but worthless. The latter is extremely
hard because it's a Layer 8 problem, but it's extremely valuable.
Post by Ben Nagy
I am convinced that it can be done - AV vendors
already do it, MS is shipping more and more default security plus they even
have a (very very basic) host-based firewall which will be enabled by
default - I don't hear users screaming that XP is "less compatible" than
Win95.
Wrong!

MS is shipping more and more default security EXCEPT WHERE IT
IS INCONVENIENT. There's a host-based firewall that nobody uses - least
of all at an enterprise level. There's file sharing that everyone
enables with no authentication, etc., etc. It doesn't matter if you have
desktops that ship with potentially useful tools if they only remain
at the potential stage. Therein lies the rub.

When someone talks about doing mitigation at the host level,
it needs to be pervasive to succeed. It needs to have centralized
policies to succeed. It needs to enhance administrators' ability
to see and enforce trust boundaries to succeed. There are
technologies out there that are aimed at doing this, and they
work well. Sygate, for example, is probably the best-thought-out
enterprise firewall concept/system. But I won't get enthused
about host-side mitigation until I see more than 1% of companies
using something like that.
Post by Ben Nagy
Manageability of host-based agents is basically a solved problem -
let's move on.
Manageability of host-based agents for trivial problems is a
solved problem. Management of host-based agents for complex
administrative configurations is a HARD problem - not because
the software is hard to build but because of the Layer 8
issues.
Post by Ben Nagy
One of my fundamental premises - no company will get secure without
corporate will to do so.
Absolutely! You are 100% correct.
Post by Ben Nagy
To me, change control is an _enemy_ when talking about rank and file
machines, not a friend. If you start with secure boxes, strip down the
services and then monitor the critical applications for problems then change
control rocks.
That *is* change control.
Post by Ben Nagy
If you start with a million desktop PCs, build a standard
image based on what works for all the corporate apps and then run change
control then you end up with a million insecure PCs that nobody has the
authority to fix with any kind of agility.
That's not change control; "that's centralized management using
a stupid configuration." :)
Post by Ben Nagy
Old school networks had less entry points.
Coincidentally, they were more secure. ;)
Post by Ben Nagy
My only real point is that true
chokepoint networks are (sadly) a dying breed. I have no doubt that you are
amused by the trend for firewalls to return to application intelligence like
it's a new thing, but not even the mjr perfectly secure firewall will work
if the traffic can get to the hosts another way.
It's not installed correctly if you don't cut ALL the wires!!!! :)

mjr.
Ben Nagy
2004-05-28 12:15:33 UTC
:) OK, let's go. Forgive me if you think my snips have distorted your
message, I tried to avoid that.
-----Original Message-----
[...]
Post by Ben Nagy
Post by Marcus J. Ranum
The big problem with host based anything is that the management
effort scales with the number of hosts.
Not linearly, though.
It scales non-linearly if the problem area is well-defined.
[...]
Consider A/V as a case study. [...]
There's no case where a user is going to need to be able to
run Netsky.V3 on his desktop, or whatever. So administration
scales because there's no real complexity.
Now - if you're gonna make a firewall policy for 10,000
desktops [then it gets hard]
I agree.

However, there are a LOT of protocol problems that you can pick up at a host
level which are basically the same thing. No user will ever want to see
/../../../../../ on their webserver, no user will ever want
A[...]AAAAAA\xeb\x1a\x5e\x31\xc0\x88\x46\x07\x8d\x1e\[...] blah blah blah.

A network protocol firewall is just one example of "things that are hard to
do on a granular basis". All the "good" solutions contain much more generic
protections.
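
To make that concrete, here is a toy sketch of the kind of generic,
policy-free check I mean (the patterns and the threshold are invented for
illustration - real products carry far richer sets):

    /* Generic host-level badness check: no legitimate user ever sends a
       traversal sequence or a long run of 0x41 padding, so no per-user
       policy is needed. Toy example only. */
    #include <stdio.h>
    #include <string.h>

    static int looks_hostile(const char *req)
    {
        size_t run = 0;
        if (strstr(req, "/../"))       /* directory traversal */
            return 1;
        for (; *req; req++) {          /* crude padding-sled check */
            run = (*req == 'A') ? run + 1 : 0;
            if (run > 128)
                return 1;
        }
        return 0;
    }

    int main(void)
    {
        const char *req = "GET /../../../etc/passwd HTTP/1.0";
        printf("%s -> %s\n", req, looks_hostile(req) ? "drop" : "pass");
        return 0;
    }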

Just one example - "kernel" protection (kernel32.dll on Windows runs in
userspace. Nice.). I stop LoadLibrary from being called from writeable
memory on Windows. Boom - I stop a huge percentage of casually written
attack payloads[1]. One vendor uses this as one of their _core_ strategies -
the dumb thing is that it _works_.
Post by Ben Nagy
I don't hear users screaming that XP is "less compatible" than Win95.
Wrong!
[...]
It doesn't matter if you have desktops that ship with
potentially useful tools if they only remain at the potential
stage. Therein lies the rub.
That's a very cynical view (although I admit you have cause).

Sticking with Hamlet, I think you're taking arms against a sea of troubles,
while I am suffering the slings and arrows of outrageous Windows.

Windows is a security issue that most companies need to live with. Limited
understanding of security is an issue that all experts need to live with.
Those who don't know better and don't want to learn STILL need to be
protected, if only for the sake of the rest of us.

Windows is getting more secure _by_default_. Fact. I will have that argument
with anybody. However, it is still EXTREMELY susceptible to worms, malware
and targeted attack.

However, there are a bunch of things we can do to make things better for the
overwhelmingly VAST population of organisations that fit the following
profile: "I do not really buy into real security theory. I want to buy a
product that will let me have my cake and eat it too - fragmented or
non-existent policies without catastrophic security failure."

I believe it can be done to a much greater extent than currently, but a
"Firewall, IDS, AV" approach will fail to do so.
When someone talks about doing mitigation at the host level,
it needs to be [good]. Sygate, for example, is
probably the best-thought-out enterprise firewall
concept/system. But I won't get enthused about host-side
mitigation until I see more than 1% of companies using
something like that.
So we agree that the concept is worthwhile, and implementations vary.
Peachy. I am happy now.

[...]
Post by Ben Nagy
If you start with a million desktop PCs, build a standard image based
on what works for all the corporate apps and then run change control
then you end up with a million insecure PCs that nobody has the
authority to fix with any kind of agility.
That's not change control; "that's centralized management
using a stupid configuration." :)
Tomahto, Tomayto. Go word up some CSOs, nobody here but us chickens. :P

Bring on the cat and the horse, I'll fight youse all!

(kidding)

ben

[1] When executing an attack program on the stack of the victim computer, I
will want to _do_ something. To get the address of the make_this_my_box()
function, or whatever I am calling, the "lazy" way is to call LoadLibrary.
Since your standard malware is executing on the stack, we can look at the
calling address and then nix the execution. Easy, right?
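
For the curious, on Win32 the test described in that footnote might look
something like this. A sketch under my own assumptions: VirtualQuery and
_ReturnAddress are real Win32/MSVC facilities, but the wrapper and its
fail-closed policy are invented here - products that do this for real
intercept the call via API hooking or in the kernel, rather than asking
anyone to call a wrapper.

    /* Refuse LoadLibrary when the caller is executing out of writable
       memory (stack or heap) - i.e. almost certainly injected code. */
    #include <windows.h>
    #include <intrin.h>
    #include <stdio.h>

    #pragma intrinsic(_ReturnAddress)

    static int caller_in_writable_memory(const void *ret_addr)
    {
        MEMORY_BASIC_INFORMATION mbi;
        const DWORD writable = PAGE_READWRITE | PAGE_WRITECOPY |
                               PAGE_EXECUTE_READWRITE | PAGE_EXECUTE_WRITECOPY;
        if (!VirtualQuery(ret_addr, &mbi, sizeof mbi))
            return 1;               /* can't classify the caller: fail closed */
        return (mbi.Protect & writable) != 0;
    }

    static HMODULE guarded_LoadLibraryA(const char *name)
    {
        if (caller_in_writable_memory(_ReturnAddress()))
            return NULL;            /* the "nix the execution" step */
        return LoadLibraryA(name);  /* legitimate caller: pass through */
    }

    int main(void)
    {
        /* called from the program image (not writable), so this passes */
        HMODULE h = guarded_LoadLibraryA("user32.dll");
        printf("user32.dll: %s\n", h ? "loaded" : "blocked");
        return 0;
    }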
Dave Piscitello
2004-05-27 13:29:45 UTC
Post by Marcus J. Ranum
there are two evolving solution spaces that solve real problems. Host
based vulnerability mitigation
The big problem with host based anything is that the management effort
scales with the number of hosts.
Agreed. Done in a vacuum, host vulnerability assessment illustrates how
poorly you are configuring and maintaining your hosts. Moreover, if
vulnerability mitigation is addressed per host, based on scanning results,
you have to question whether you ever achieve a uniform security policy. But
don't you think you can manage risk better if you mitigate by central
policy definition and patch management? I've used the CIS security tool
(which includes HFNetChk) and templates one-off, with both the MMC plug-in
and local policy editing. This is hours per computer, and does not scale
even in my home office. But if you use a template and push a configuration
from a central policy server to all clients, it's more efficient, and uniform.
Post by Marcus J. Ranum
and anything that allows an organisation to condense and
prioritise information about where they are exposed to known vulnerabilities
in realtime.
Asset management, change control, and security workflow are all
good, yes. Condensing and prioritizing is just part of it. I'm not
at all convinced that it's enough. After all, if you condense and
prioritize the "must fix: disaster" list for many companies you'll get
a list so long that they'll decide to do something else, instead.
Anything else, in fact. :)
Perhaps initially, but this is a systemic problem, no? Anyone with kids
knows the "clean the room" syndrome, and security operations are like
parents with lots of messy children. Each child does little or nothing for
a long long time, until the only way to clean his or her room is to
literally empty it and restore order and cleanliness. But if the effort to
establish the baseline is followed by more disciplined administration and
housekeeping, the must fix disaster list is shorter, and more suitable to
prioritization.
Post by Marcus J. Ranum
"None of our users would accept that kind of solution!" they cried.
If this attitude is pervasive, then the client wasted your time and spent
their money unwisely.
Post by Marcus J. Ranum
Therein lies the rub.
Hamlet, Act III:
"To die, to sleep; To sleep, perchance to dream - ay, there's the rub;...
Post by Marcus J. Ranum
You *think* host-based vulnerability mitigation (what *is* that,
by the way? it sounds like marketing...) is going to work. But
that's just because not enough users have TRIED it enough to
figure out how to politically sandbag it, yet. But don't worry, they
will. Remember, users are supposed to be running host-based
antivirus, too. :P
Curmudgeon factor is high today, eh Marcus?
Marcus J. Ranum
2004-05-27 17:07:34 UTC
But don't you think you can manage risk better if you mitigate by central policy definition and patch management?
I don't think patch management is the solution for any significant aspect
of the problem. I know that flies in the face of the "common wisdom" of
security these days, but I think eventually time will tell and we'll give up
on patch management as a security technique. The only place patches
make a difference is on services that are Internet-facing or mission-critical.
What you'll find is, if you can define those systems and services, you'll
have good security if the list is small and the machines are well configured.
If the list is large (the "have cake, eat it too, not get fat" philosophy of
Internet security) you'll have cruddy security no matter what you do.

Put differently, I see the "patch it everyplace" approach as an over-extension
of an approach that *did* work OK: policy-centric host hardening. The idea
was that we could harden certain crucial hosts and they'd be "safe enough"
for Internet use. So people went and extended that philosophy to "harden
everything" - i.e.: patch it everyplace. The problem is that hardening hosts
only works when you are working with underlying software that CAN BE
hardened and host operating systems that CAN BE secured. We were
comfortable with building - and were able to build - very strong bastion
hosts back in the early '90s. So people looked and said, "behold! if
we just patch everything, then we won't NEED a proxy firewall! after
all, we'll be as SECURE as a locked down proxy host!" Ummm....
Wrong. Given a few more years this reality will sink in.

The "right way" is still the right way and has been all along. It's:
- minimize your access footprint (reduce zone of risk)
- default deny
- identify the few Internet services you need to the few servers that need
them, and lock those services down
* keep those services patched if you can't lock them down with
better means like chroot, setuid, and file permissions (sketched below)
- audit and look at your logs
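
In case it's not obvious what "lock them down with better means" looks
like, here's a minimal sketch (the path and the "nobody" UID are
illustrative; a real service would confine itself like this before it
ever reads a byte of untrusted input):

    /* Confine a service: chroot to an empty directory, then drop root.
       Must be started as root; order matters (setgid before setuid). */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        if (chroot("/var/empty") != 0 || chdir("/") != 0) {
            perror("chroot");
            return 1;
        }
        if (setgid(65534) != 0 || setuid(65534) != 0) {  /* nobody */
            perror("drop privileges");
            return 1;
        }
        /* from here on, a compromise yields an unprivileged process in
           an empty directory - not a root shell */
        printf("confined, running as uid %d\n", (int)getuid());
        return 0;
    }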

The way this is gonna play itself out is that eventually the "old school"
security folks are going to get really really tired of saying "we told you so"
and switch to saying, "yeah, it hurts, doesn't it? stop whining. you got
what you asked for."
I've used the CIS security tool (includes HFnetchk) and templates one-off with both MMC plug-in and local policy editing. This is hours per computer, and does not scale even in my home office. But if you use a template and push a configuration from a central policy server to all clients, it's more efficient, and uniform.
Centralized policy administration works, but largely because it's
USUALLY used to implement basic common sense like default
deny and a minimized zone of risk.

Don't confuse the symptom with the solution, though!!
Organizations that do centralized policy administration
are not more secure because that's what they do. It's
that organizations that "get" security are more likely
to be doing centralized policy administration. Because,
done cluefully and with discipline, it helps. Clue and
discipline are better off centralized, because your typical
organization does not have enough clue across the
spectrum - so you need to aggregate it in the central
administration. Organizations with high clue in their
IT department will centralize administration because they
know users can't be trusted.

As I write this I am realizing that there are a lot of
dearly held myths of security that are misunderstood
because we assign the effect to the symptom!! How
could we have been so foolish, as a community?? :(
Perhaps initially, but this is a systemic problem, no? Anyone with kids knows the "clean the room" syndrome, and security operations are like parents with lots of messy children. Each child does little or nothing for a long long time, until the only way to clean his or her room is to literally empty it and restore order and cleanliness. But if the effort to establish the baseline is followed by more disciplined administration and housekeeping, the must fix disaster list is shorter, and more suitable to prioritization.
Put differently: after one beats one's head against the wall
long enough, either one's brains turn to jelly, or one's clue
level increases.
Post by Marcus J. Ranum
"None of our users would accept that kind of solution!" they cried.
If this attitude is pervasive, then the client wasted your time and spent their money unwisely.
:)
How pervasive do *YOU* think that attitude is, Dave?

mjr.
Frederick M Avolio
2004-05-27 19:01:07 UTC
Post by Marcus J. Ranum
- minimize your access footprint (reduce zone of risk)
Zone of risk!? No patch management? Stop, you're killing me! That is so '90s!

;-)

f
Devdas Bhagat
2004-05-27 19:34:52 UTC
Post by Dave Piscitello
But don't you think you can manage risk better if you mitigate by central
policy definition and patch management?
I don't think patch management is the solution for any significant aspect
of the problem. I know that flies in the face of the "common wisdom" of
security these days, but I think eventually time will tell and we'll give up
on patch management as a security technique. The only place patches
Patch management is a crucial part of the defense-in-depth concept.
"Make your network like a rock, not a nut" is how I might even phrase it.

Relying only on patch management as a security technique is a bad idea.
(It works for a single box not providing any services to the Internet,
or providing only a very restricted set of services. For anything more
complicated than that, I will not rely purely on any one single solution
for security.)
Post by Dave Piscitello
make a difference is on services that are Internet-facing or mission-critical.
That is where they make an immediate difference. However, your browser
is facing the Internet all the time. The days when the browser was a
simple application that would parse a specific subset of HTML tags are
long gone. Today we deal with JavaScript, ActiveX and other functional
enhancements.
Post by Dave Piscitello
What you'll find is, if you can define those systems and services, you'll
have good security if the list is small and the machines are well configured.
That second point has a problem. A very big problem, which we are
discussing right now :).
Post by Dave Piscitello
If the list is large (the "have cake, eat it too, not get fat" philosophy of
Internet security) you'll have cruddy security no matter what you do
Or a small list of poorly configured machines.
Post by Dave Piscitello
Put differently, I see the "patch it everyplace" approach as an over-extension
of an approach that *did* work OK: policy-centric host hardening. The idea
It still does. You have to do it right. Every host involved needs to be
treated like a bastion host. If that idea is not the basis for your
patching, then you are doing it wrong.
Post by Dave Piscitello
was that we could harden certain crucial hosts and they'd be "safe enough"
for Internet use. So people went and extended that philosophy to "harden
everything" - i.e.: patch it everyplace. The problem is that hardening hosts
only works when you are working with underlying software that CAN BE
hardened and host operating systems that CAN BE secured. We were
This is a crucial point. One that is usually missed. I am sure there are
plenty of people here who can lock down a Windows system.
Post by Dave Piscitello
comfortable with building - and were able to build - very strong bastion
hosts back in the early 90's. So people looked and said, "behold! if
we just patch everything, then we won't NEED a proxy firewall! after
all, we'll be as SECURE as a locked down proxy host!" Ummm....
Wrong. Given a few more years this reality will sink in.
They would be right if their applications were built the same way the
application-level proxies were.
Post by Dave Piscitello
- minimize your access footprint (reduce zone of risk)
- default deny
- identify the few Internet services you need to the few servers that need
them, and lock those services down
* keep those services patched if you can't lock them down with
better means like chroot, setuid, and file permissions
Do both! chroot merely increases the time you have in hand to respond to a
vulnerability. That you are running chrooted is no excuse to stay
unpatched.
Post by Dave Piscitello
- audit and look at your logs
Yet another point missed all too often.
<snip>
Post by Dave Piscitello
administration. Organizations with high clue in their
IT department will centralize administration because they
know users can't be trusted.
A good administration will also talk to their IT department, and let
them know of the business needs of the organisation. Usually the
requirement is not to implement a product^Wsolution but to do a certain
task.
Once these needs are defined, then the IT department can come up with a
list of possible solutions, with implementation requirements, support
requirements and security risks. Given that data, management can
determine which specific solution to implement.

Often enough, it is not that users can't be trusted, but that they lack
sufficient information to make a good judgement on the implications of
their actions. (This is different from the case where the users can't be
trusted, and checks and balances need to be built into the system to
catch such users.)

(Funnily, all the people with clue that I personally know will always
build an Internet-facing server by stripping it down to bare essentials
and then adding necessary services. The people without clue tend to
install everything and then stop services, but not uninstall them. Quite
a few of them follow that same policy for their own desktops as well).

Devdas Bhagat
Ben Nagy
2004-05-28 12:56:34 UTC
[...]
But don't you think you can manage risk better if you
mitigate by central policy definition and patch management?
[this level is mjr]
I don't think patch management is the solution for any
significant aspect of the problem. I know that flies in the
face of the "common wisdom" of security these days, but I
think eventually time will tell and we'll give up on patch
management as a security technique. [...]
If I stood on top of a very large building with a hundred foot stack of
Marshall amps and used the entire building as a pre-amp and subwoofer, I
still could not yell "CRAP!" loud enough.

Take a look at the recent security record of MS RPC endpoints. You can't
turn them off. You can't secure them. Windows will break.

How _ELSE_ do you want to deal with that problem? Let me put it a different
way. However much you lock down machines, your biggest remaining worry will
be software vulnerabilities in the services you _do_ run - the rest is just
a matter of degrees. How do you eliminate vulnerabilities? Patch.
Put differently, I see the "patch it everyplace" approach as [an
over-extension of] policy-centric host hardening.
You can only harden up until the OS will let you. If the core service has an
exploitable bug then only a patch will fix it. Other solutions (like my
famous "marketing" host based vulnerability mitigation ;) might save your
backside for a while, but the real intent of those solutions is to buy you
time, not obviate the need to fix the real problem.

Even assuming that you could have pre-hardened a box (it is true that
hardening _might_ have let you dodge Blaster and Sasser, but wait until the
multiply-vectored worms really start hitting us), most people just won't
do it. In any case, having a huge freaking gaping security hole in a core
service is not something I feel comfortable about, just as running a
thousand Win95 boxes "behind a firewall" sends shivers down my spine.

It may be just me, but it sounds like you are arguing that people's
mainstream desktop OSes should be something that can be easily hardened on a
service-centric basis, understand true user / kernel / virtualisation
separation and yet have full enterprise functionality.

If anybody else advanced this theory I would snort milk through my nose.
With you I will just say that you are five years ahead of your time. I am
100% behind you as an idealist, but, as a professional, I don't see that as
useful right now. :D

Cheers,

ben
Marcus J. Ranum
2004-05-28 16:19:22 UTC
Post by Ben Nagy
Post by Marcus J. Ranum
I don't think patch management is the solution for any
significant aspect of the problem. I know that flies in the
face of the "common wisdom" of security these days, but I
think eventually time will tell and we'll give up on patch
management as a security technique. [...]
If I stood on top of a very large building with a hundred foot stack of
Marshall amps and used the entire building as a pre-amp and subwoofer, I
still could not yell "CRAP!" loud enough.
As I said, I think time will tell. :)

<RANT>

Come on, Ben! Join me in challenging the preconceptions of an
industry that has grown up around "if you can't do something RIGHT
do something STUPID, HARDER!" That's what we're talking about,
here, with all the focus on patch management:
- Rather than run a good O/S: run a bad one and MANAGE it BETTER
- Rather than understand your connectivity: leave it OPEN and FIDDLE WITH
your endpoints CONSTANTLY
- Rather than run good code: run bad code and UPGRADE IT DAILY

Talk about not being able to yell "CRAP" loud enough?? What's
wrong with this picture?!?!
Post by Ben Nagy
Take a look at the recent security record of MS RPC endpoints. You can't
turn them off. You can't secure them. Windows will break.
Yes. So? YOU ARE INSANE IF YOU ARE RELYING ON WINDOWS
FOR INTERNET-FACING CRITICAL SYSTEMS.

Of course at this point everyone chimes in and says, "BUT WE HAVE TO!"
for (whatever) reason(s). Well, then don't complain that it sucks. But
don't expect to be able to make it not suck by dint of sheer
effort. That's not gonna happen, either.

We have seen - CLEARLY - with software and O/S in general - that
they are not reliable enough to provide a solid security platform. The
evidence is manifest; it's been staring us in the face for at least the
last 10 years and it's been covered in big, blinky neon signage for
the last 4 years. Everyone would rather be in denial.

What do you think? If we install JUST ONE MORE PATCH it's gonna
be SECURE? Heck, no. The only way to secure this crap is to hold
it down and hammer a stake through its heart.

</RANT>
Post by Ben Nagy
How _ELSE_ do you want to deal with that problem? Let me put it a different
way. However much you lock down machines, your biggest remaining worry will
be software vulnerabilities in the services you _do_ run - the rest is just
a matter of degrees. How do you eliminiate vulnerabilities? Patch.
Ok... now let me catch my breath and we can talk sense... ;)

You're absolutely right that the software vulnerabilities in services are
what will kill you. That's why the old-school doctrine was:
- collapse access down to services
- collapse services down to a handful of trusted servers for those services
- make the service implementations on those trusted servers as well-configured
as possible
- use mitigation techniques to surround those services (setuid, chroot,
noexec, append-only, tamper-detection, app proxies, default
deny) wherever possible and to the highest extent practical
- deny all else

There are a few people on this list (Hi Fred, Paul!) who have been singing
this song for a very very long time. We don't sing it because it's fun and
we like the sound of our voices - we sing it because it's the only way we
know to come close to reliably producing good results. It doesn't
GUARANTEE good results - but it's close to reliable.

The other approach is:
- we want to make everything accessible to everything else
- we know our software sucks
- we can't secure everything
- so we'll try to automate the process of securing whatever is
particularly unpleasant right now
Post by Ben Nagy
Post by Marcus J. Ranum
Put differently, I see the "patch it everyplace" approach as [an
over-extension of] policy-centric host hardening.
You can only harden up until the OS will let you.
Well, yeah. If you're using the wrong OS you're an idiot. The fact that
there are a lot of idiots out there doesn't make them any less idiotic, either.
Let me see here: "I am gonna build a 'bastion host' on an O/S that doesn't
have chroot, or any notion of file permissions or execution control. But
I like it because it automatically loads device drivers on demand and it
has shared libraries and no CHANCE of producing a statically bound
executable and by the way anyone can overwrite a shared library any time
they get file level access because there are no file permissions enforced."
That sound smart to you? That's like saying "I am going to build the
Eiffel Tower out of toilet paper because I like how soft toilet paper is!"
;)
Post by Ben Nagy
If the core service has an
exploitable bug then only a patch will fix it.
Yes. But.

First off, the big mistake a lot of folks make is that they place the
underlying code of a service where it's exposed to untrusted access.
I can't avoid saying "I told you so!" when I see that "application security"
is now a HOT TOPIC (again) - why? Because it's STUPID to, for example,
expose an Exchange server to the Internet when you can expose a
trusted piece of minimized code that runs in a locked-down environment.
Yes, Postfix has needed security patches. But has it needed as many
as Exchange? Yes, smap has needed a security patch (one, once, but
it was because it was linked against syslog()) but has it needed as many
as Exchange? Either way, the failure modes of both - running chrooted
and setuid nobody - are infinitely preferable to the failure modes of the
other approach.

The idea that code needs to be patched frequently and often is
predicated on the flawed concept that cruddy code is exposed to
untrusted networks. That's just dumb. The fact that lots of people
do it, and lots of people want to do it, doesn't make it one iota
less dumb. The fact that lots of security practitioners aid and abet
the dumbness by preaching "patch! patch! patch!" makes them
willing participants in the dumbness. Fight back. Fight dumbness.
Come over to the light. Turn away from the darkness. Fight the
"accepted wisdom" of defeat. Use The Force, Ben.... ;)
Post by Ben Nagy
Other solutions (like my
famous "marketing" host based vulnerability mitigation ;) might save your
backside for a while, but the real intent of those solutions is to buy you
time, not obviate the need to fix the real problem.
Exactly!! Put another way - the intent of those solutions is to
make it easier for you to survive doing something stupid that
you may not survive anyhow.
Post by Ben Nagy
Even assuming that you could have pre-hardened a box (it is true that
hardening _might_ have let you dodge Blaster and Sasser, but wait until the
multiple vectored worms really start hitting us)
I have never had a worm or virus since I got interested in security.
NEVER. And I use Windows as my primary desktop platform,
so it's not that I'm a UNIX bigot. I have no idea why other people
seem to accept them as a fact of life. Accept dumbness, more like.
Post by Ben Nagy
then most people just won't
do it. In any case, having a huge freaking gaping security hole in a core
service is not something I feel comfortable about, just as running a
thousand Win95 boxes "behind a firewall" sends shivers down my spine.
Depends on the firewall and the services it's letting through. That
kind of thing can be controlled with tight policies, service-centric
segmentation, server-centric subnets, and desktop A/V. This is
not rocket science. It's just not WHAT PEOPLE WANT. They
want to accept dumbness and let those Windows boxes get to
Instant Messenger, Peer-to-peer file sharing, remote control
desktop, and dancing animated pigs that run on their desktops.
They want dumbness.
Post by Ben Nagy
It may be just me, but it sounds like you are arguing that people's
mainstream desktop OSes should be something that can be easily hardened on a
service-centric basis, understand true user / kernel / virtualisation
separation and yet have full enterprise functionality.
No, I think networks need to be segregated into roles, access needs
to be mediated on a service level and minimized. Yes, desktops
that are vulnerable to malcode should have malcode protection
(my desktop AV clobbers about 1 or 2 viruses a week that get
through my spam filters and attachment blockers) and core
services that are mission-critical need to be run on real operating
systems, not glorified program loaders designed to appeal to dummies.
Post by Ben Nagy
If anybody else advanced this theory I would snort milk through my nose.
I'd pay to watch that!!!!!!!
Post by Ben Nagy
With you I will just say that you are five years ahead of your time.
What?? I've been saying EXACTLY THE SAME THING since
1990.

*BUT* Peter Neumann has been saying EXACTLY THE SAME THING
since 1963 or thereabouts. I was 1 year old then.

Dude, I'm not "advanced" I'm "retro" !!!! :)
Post by Ben Nagy
I am
100% behind you as an idealist, but, as a professional, I don't see that as
useful right now. :D
Because you're stuck in the dumbness. When you're up to your
neck in wet horse manure and you've got a shovel in your hand,
it's hard to think about just getting out of the manure and going
someplace else. After all, you've got a perfectly good shovel,
and the next shovel load might be the one that turns the tide...

Keep shovelling,
mjr.
Ben Nagy
2004-06-01 09:06:52 UTC
-----Original Message-----
[...]
[...]
Post by Marcus J. Ranum
I think eventually time will tell
and we'll give up on patch management as a security
technique. [...]
(me)
| "CRAP!"
As I said, I think time will tell. :)
<RANT>
Come on, Ben! Join me in challenging the preconceptions of an
industry that has grown up around "if you can't do something
RIGHT do something STUPID, HARDER!" That's what we're talking about:
- Rather than run a good O/S: run a bad one and MANAGE it BETTER
- Rather than understand your connectivity: leave it OPEN and FIDDLE WITH
your endpoints CONSTANTLY
- Rather than run good code: run bad code and UPGRADE IT DAILY
Talk about not being able to yell "CRAP" loud enough?? What's
wrong with this picture?!?!
I'm horribly torn here. I completely agree with you, but I just don't see
any evidence of change. Essentially what you are claiming, when you say that
"time will tell", is that little green men from the Planet Clue are going to
invade earth with their rectal clue applicators and drag most of the IT
industry in the world off to re-education camps. Until then, I applaud
evangelism, but it won't stop me trying to secure the mess we have.
Post by Marcus J. Ranum
Take a look at the recent security record of MS RPC endpoints. You
can't turn them off. You can't secure them. Windows will break.
Yes. So? YOU ARE INSANE IF YOU ARE RELYING ON WINDOWS FOR
INTERNET-FACING CRITICAL SYSTEMS.
Trouble is that it's not just Internet-facing systems that get owned. This
idea of a crunchy outside and a chewy centre has GOT to change. It's dead.
Didn't work. Bye-bye.

[...]
We have seen - CLEARLY - with software and O/S in general -
that they are not reliable enough to provide a solid security
platform. The evidence is manifest; it's been staring us in
the face for at least the last 10 years and it's been covered
in big, blinky neon signage for the last 4 years. Everyone
would rather be in denial.
What do you think? If we install JUST ONE MORE PATCH it's
gonna be SECURE? Heck, no. The only way to secure this crap
is to hold it down and hammer a stake through its heart.
Ah c'mon.

Given that we can't go back to the abacus, we need to work from where we
are, and it is happening. I see MS doing GOOD WORK in improving the
fundamental security core of their OS. I nearly passed out when I saw
support for NX memory, no anonymous RPC and host firewall enabled by default
in a general purpose service pack. They've come a long way from VMS. :) I
see Linux including easy (enough) to use stack protection in most major
distributions, with DAC being doable In Real Life. I see
MacOS....um...taking massive steps backwards, but hey, they've always
"thought different".

The other option to burning it all and starting again is to "get there from
here". I say it's possible (eventually). Until that happens, we need
auxiliary solutions to prop things up.
</RANT>
Post by Marcus J. Ranum
How _ELSE_ do you want to deal with that problem? Let me put it a
different way. However much you lock down machines, your biggest
remaining worry will be software vulnerabilities in the services you
_do_ run - the rest is just a matter of degrees. How do you
eliminate vulnerabilities? Patch.
Ok... now let me catch my breath and we can talk sense... ;)
You're absolutely right that the software vulnerabilities in
services are what will kill you. That's why the old-school
doctrine was [smart]
I think you're STILL thinking in terms of building hardened entry points.
Yes, more people should do that as well. Now what about the other 99.9% of
machines in the network? Some of the manufacturing places I talk to still
have Windows 95 machines running production robots. Win 95! The only reason
they didn't get knocked over by Sasser is that they didn't _have_ a Local
Security Authority!

[...]
Post by Marcus J. Ranum
You can only harden up until the OS will let you.
Well, yeah. If you're using the wrong OS you're an idiot. The
fact that there are a lot of idiots out there doesn't make
them any less idiotic, either.
This line brings a smile to my face every time I read it.

You're right, of course, but lots of people aren't going to admit it when
you rub their nose in it like that. I'm writing this on a Windows box - and
you just told me that your work box is Windows too. I vote that us "idiots"
deserve security too.
Let me see here: "I am gonna build a 'bastion host' on an O/S
that doesn't have chroot, or any notion of file permissions
or execution control. But I like it because it automatically
loads device drivers on demand and it has shared libraries
and no CHANCE of producing a statically bound executable and
by the way anyone can overwrite a shared library any time
they get file level access because there are no file
permissions enforced."
[...]

What can I say? :) It's so useable!

No, seriously, the argument about what to use when building a hardened
single-service box was conceded a long time ago by all but the masochists.
I'm talking about the _rest_.

[...]
The idea that code needs to be patched frequently and often
is predicated on the flawed concept that cruddy code is
exposed to untrusted network. That's just dumb.
So this is, again, where we differ in opinion. The desktop - also known as
Cruddy Code Central - is what is causing the problem. You "old school"
geniuses have been telling us "newbies" to build super-duper amazing transit
points between networks of different trust levels, which we have been trying
to do. The trouble is that malware still gets in. Poot. Them dang worms is
like roaches, I tell ya. Looks 'ifn that there trusted network weren't quite
so trusted after all...

There comes a point where we have to admit that "the security architecture
operation was a complete success, but the patient died" is of limited value.
One of the funniest things I ever saw was a small copper tail running out of
a door in a military research institute - the building was a Faraday cage,
and so they needed the tail to make the radio work. People DO these things -
it's HUMAN.
Fight back. Fight dumbness.
Come over to the light. Turn away from the darkness. Fight
the "accepted wisdom" of defeat. Use The Force, Ben.... ;)
Ha! "It's fun to use learning for evil!" [1]
Post by Marcus J. Ranum
Other solutions (like my famous "marketing" host based vulnerability
mitigation ;) might save your backside for a while, but the real intent
of those solutions is to buy you time, not obviate the need to fix the
real problem.
Exactly!! Put another way - the intent of those solutions is
to make it easier for you to survive doing something stupid
that you may not survive anyhow.
That's correct. And this is a bad thing, how? Seatbelts. The rail around
Niagara. Etc...

[...]
I have never had a worm or virus since I got interested in security.
NEVER. And I use Windows as my primary desktop platform.
Because you have one machine to take care of, plus you have some idea what
you are doing maybe?

[...]
Yes,
desktops that are vulnerable to malcode should have malcode
protection (my desktop AV clobbers about 1 or 2 viruses a
week that get through my spam filters and attachment
blockers)
!! So we agree! Yay! It's just that AV is not really effective against
network-borne threats because the threat clobbers the network service before
the AV gets a crack at it. AV is OK at stopping stuff that comes in from
Layer 8, but doesn't cover lots of other threats. Other stuff _can_ cover
some of those threats.

[...]
Post by Marcus J. Ranum
With you I will just say that you are five years ahead of your time.
What?? I've been saying EXACTLY THE SAME THING since 1990.
*BUT* Peter Neumann has been saying EXACTLY THE SAME THING
since 1963 or thereabouts. I was 1 year old then.
Dude, I'm not "advanced" I'm "retro" !!!! :)
Computing since the 60s has proved that those two words are effectively
synonyms. ;)
Post by Marcus J. Ranum
I am
100% behind you as an idealist, but, as a professional, I don't see
that as useful right now. :D
Because you're stuck in the dumbness.[...]
Keep shovelling,
mjr.
<shovel, shovel>

ben

[1] http://www.dieselsweeties.com/shirts/ This is not my company, I have no
affiliation, I make no money from shirt sales - I just didn't wanna steal a
possibly-trademarked line. ;)

PS: I am ten ninjas. [1]
Marcus J. Ranum
2004-06-01 14:38:07 UTC
Post by Ben Nagy
Post by Marcus J. Ranum
As I said, I think time will tell. :)
I'm horribly torn here. I completely agree with you, but I just don't see
any evidence of change. Essentially what you are claiming, when you say that
"time will tell", is that little green men from the Planet Clue are going to
invade earth with their rectal clue applicators and drag most of the IT
industry in the world off to re-education camps.
I didn't say that!!! I didn't even *THINK* that!!

What I think is going to happen is that people are going to
keep spending huge amounts of money on approaches that
don't work. Some, a small number, are going to say, "well, Duh!" and
solve the problem. After a while, the folks who are busy
fighting the bug-of-the-week club down in the trenches are
going to say, "hey! look! that guy over there doesn't have this
problem!" and they'll adapt. Or they'll die out or just keep
cheerfully pounding their heads against the wall. But eventually
it will become clear that their approach is loserly.

Remember, loserly behavior is not a function of population
size. Just because lots of people are doing something dumb
doesn't make it any less dumb. It only means that there are
more people doing it.

I *hope* that in 10 years security practitioners will look back
at the days of "the system-wide patching fad" and laugh.

We're a society of fads and "get rich quick" schemes. We'd
rather pay 3X as much for special food that has 1/2 the calories
of normal food - instead of eating 1/2 as much of the normal
food (which actually has real flavor). We'd rather follow a fad
diet that destroys our body with saturated fats than simply
"eat lots. work hard. burn lots of energy." We're still in the
era of get.rich.quick low-carb Internet security - perhaps it
will be the aliens with their clue probes that get us out of it, but
it's more likely we'll either stay there or wise up.
Post by Ben Nagy
Post by Marcus J. Ranum
Post by Ben Nagy
Take a look at the recent security record of MS RPC endpoints. You
can't turn them off. You can't secure them. Windows will break.
Yes. So? YOU ARE INSANE IF YOU ARE RELYING ON WINDOWS FOR
INTERNET-FACING CRITICAL SYSTEMS.
Trouble is that it's not just internet facing systems that get owned. This
idea of crunchy outside chewy centre has GOT to change. It's dead. Didn't
work. Bye-bye.
I'm not advocating a perimeter-only defense!!! I *NEVER* have.
But it's the first and best place to start. If you don't do something
sensible at the perimeter - or you don't have a perimeter at all -
then all your systems are internet-facing. We've seen how well
*THAT* works, too.

Let me try some different logic on you:
- Every year there are more internet-facing systems by
some huge number, as more homes go online
- Many of those systems rely on endpoint mitigation and
patching as their sole security
- Every year, the number of systems compromised keeps
going up

What does that tell you? That the attackers are getting smarter?
No - they're doing the "same old same old". That the attackers
are working harder? Maybe, but it's largely automated. So
if you have largely automated attacks succeeding wildly against
systems that are using low-carb security - well.... What do you
conclude?
Post by Ben Nagy
Post by Marcus J. Ranum
What do you think? If we install JUST ONE MORE PATCH it's
gonna be SECURE? Heck, no. The only way to secure this crap
is to hold it down and hammer a stake through its heart.
Ah c'mon.
I'm serious.
I said as much back in 1997 (Black Hat keynote - you can hear the audio at
http://www.ranum.com/security/computer_security/audio/mjr-blackhat-97.mp3
- it's a cruddy recording and I was a bit hung over when I did
the talk, but the idea remains. There's one major "bug" in the
talk, and here's the patch:
s/"it would be funny if I wasn't kidding"/"it would be funny if I wasn't serious"/)

Are you trying to tell me that operating systems are holy
writ that cannot be discarded and replaced with something
better? Ever hear of TOPS-10, MULTICS, OS/9, VMS? They
are operating systems that people used to use. O/S' come
and go. Windows is "just a phase" (as my parents used to
say when I wanted to dye my hair weird colors in high
school) - it will pass. Maybe.
Post by Ben Nagy
Given that we can't go back to the abacus, we need to work from where we
are, and it is happening.
Why do we need to work from where we are? Where we are is
not good!!! Working harder on it may not make it better. In fact
the preponderance of evidence is that it's getting WORSE.
Do you want to work harder on a situation where hard work
may be rewarded with worsening results? I'm not being
facetious; I am deadly serious. Trying to fix Windows security
has *ONLY* paid off in the stock prices of security companies
and not improved end user experience or system reliability
one iota.
Post by Ben Nagy
I see MS doing GOOD WORK in improving the
fundamental security core of their OS.
I see MS doing GOOD MARKETING in attempting to
unscrew that which is permanently screwed.
Post by Ben Nagy
I nearly passed out when I saw
support for NX memory
It's a nice kludge. MULTICS made the stack grow *up* in memory
back in ~1965 - around the time I was learning to walk upright.
It's a little harder to code that kind of thing in your kernel - if
you're smarter than a chimpanzee - but it means you never have
stack-smashing buffer overruns.

You've all probably heard the old joke, "if computer programmers
built bridges like they write code, the first rainstorm we had would
collapse civilization" - it's wrong. If computer programmers built
bridges like they write code, they'd start off by re-inventing the I-beam
for each bridge - and they'd never get anything done because
they'd be arguing about the relative merits of whatever strongly-hyped
metal alloy was popular that week (XML? couldn't we use XML for that?)
Post by Ben Nagy
no anonymous RPC and host firewall enabled by default
in a general purpose service pack. They've come a long way from VMS. :)
Yes, they have. VMS was so much better, and the gap is growing
rapidly. :)
Post by Ben Nagy
The other option to burning it all and starting again is to "get there from
here". I say it's possible (eventually). Until that happens, we need
auxiliary solutions to prop things up.
I think it's time to start grabbing our stakes and hammers
and getting to work!!
Post by Ben Nagy
Post by Marcus J. Ranum
Well, yeah. If you're using the wrong OS you're an idiot. The
fact that there are a lot of idiots out there doesn't make
them any less idiotic, either.
This line brings a smile to my face every time I read it.
You're right, of course, but lots of people aren't going to admit it when
you rub their nose in it like that. I'm writing this on a Windows box - and
you just told me that your work box is Windows too. I vote that us "idiots"
deserve security too.
I have fabulous security!!! My machine is isolated so that its
manifest weaknesses don't bother me. I accepted the fact
that I have a dumb O/S and because I am a smart guy I
designed around it. I also have terrific backups "just in case" ;)
It's what I mean about understanding your risks and working
around them. The problem is that people don't want to
understand 'em and work around them. They just get as
far as "well, there are risks." and start patching.
Post by Ben Nagy
[...]
Post by Marcus J. Ranum
The idea that code needs to be patched frequently and often
is predicated on the flawed concept that cruddy code is
exposed to untrusted network. That's just dumb.
So this is, again, where we differ in opinion. The desktop - also known as
Cruddy Code Central - is what is causing the problem. You "old school"
geniuses have been telling us "newbies" to build super duper amazing transit
points between networks of different trust levels, which we have been trying
to do.
NO you haven't!!! You're like the guys who want to eat 3 gallons
of ice cream a day and still lose weight using some fad diet.
Those things many people call "firewalls" are just low-carb
feel-good half-hearted nods toward security. Their policies
have been set up by committees with marketing people on
them, and their security posture depends more on which business
unit brings in more money than on actually protecting the
network. I mean these darned things allow attachments
through; they allow ActiveX through, they allow IM through,
etc, etc, etc. That's not a firewall. That's a "slow router."
And these "firewalled" networks are full of users who come
and go with laptops that they just plug in wherever they
want whenever they want and are given an IP address and
off they go. Those "mobile users" are on common segments
with mission critical servers and the only "authentication" they
use is the fact that they're physically there. Did I just describe
the typical corporate network? Can you tell me what is
"firewalled" about *THAT*!?!!? That's not firewalled. That's
low-carb-fat-free-firewalled.
Post by Ben Nagy
The trouble is that malware still gets in. Poot. Them dang worms is
like roaches, I tell ya. Looks 'ifn that there trusted network weren't quite
so trusted after all...
Peter Neumann likes to make sure people use the words "trusted"
and "trustworthy" properly. :) That was a trusted network but not
a trustworthy network. :) oops.
Post by Ben Nagy
There comes a point where we have to admit that "the security architecture
operation was a complete success, but the patient died" is of limited value.
The patient died AND IS STARTING TO SMELL!

mjr.
Paul D. Robertson
2004-06-01 16:49:42 UTC
Permalink
Post by Marcus J. Ranum
Post by Ben Nagy
Post by Marcus J. Ranum
As I said, I think time will tell. :)
I'm horribly torn here. I completely agree with you, but I just don't see
any evidence of change. Essentially what you are claiming, when you say that
"time will tell", is that little green men from the Planet Clue are going to
invade earth with their rectal clue applicators and drag most of the IT
industry in the world off to re-education camps.
I didn't say that!!! I didn't even *THINK* that!!
Yeah, but admit it, you're still wishing it would! ;)
Post by Marcus J. Ranum
I *hope* that in 10 years security practitioners will look back
at the days of "the system-wide patching fad" and laugh.
We wished 10 years ago that people would look at idiots writing code and
laugh; I think your timeframe is way short...
Post by Marcus J. Ranum
We're a society of fads and "get rich quick" schemes. We'd
rather pay 3X as much for special food that has 1/2 the calories
of normal food - instead of eating 1/2 as much of the normal
food (which actually has real flavor). We'd rather follow a fad
diet that destroys our body with saturated fats than simply
"eat lots. work hard. burn lots of energy." We're still in the
era of get.rich.quick low-carb Internet security - perhaps it
will be the aliens with their clue probes that get us out of it, but
it's more likely we'll either stay there or wise up.
The nice thing about carbohydrates is that your brain needs them- I see
the low-carb diet as a self-fixing problem long-term.
Post by Marcus J. Ranum
But it's the first and best place to start. If you don't do something
sensible at the perimeter - or you don't have a perimeter at all -
then all your systems are internet-facing. We've seen how well
*THAT* works, too.
It's a new trend again though!
Post by Marcus J. Ranum
Why do we need to work from where we are? Where we are is
not good!!! Working harder on it may not make it better. In fact
the preponderance of evidence is that it's getting WORSE.
Ah, but that's a civil level of proof, and we're looking at a crime! ;)
Post by Marcus J. Ranum
I see MS doing GOOD MARKETING in attempting to
unscrew that which is permanently screwed.
To be fair, they are making some progress in the right direction, it's
just that they started on Pluto and the goal line is on this planet.

Releasing an operating system with sixty-some thousand known bugs as
"ready to use" should mean hard time.
Post by Marcus J. Ranum
NO you haven't!!! You're like the guys who want to eat 3 gallons
of ice cream a day and still lose weight using some fad diet.
Those things many people call "firewalls" are just low-carb
feel-good half-hearted nods toward security. Their policies
have been set up by committees with marketing people on
them, and their security posture depends more on which business
unit brings in more money than on actually protecting the
network. I mean these darned things allow attachments
through; they allow ActiveX through, they allow IM through,
etc, etc, etc. That's not a firewall. That's a "slow router."
And these "firewalled" networks are full of users who come
and go with laptops that they just plug in wherever they
want whenever they want and are given an IP address and
off they go. Those "mobile users" are on common segments
with mission critical servers and the only "authentication" they
use is the fact that they're physically there. Did I just describe
the typical corporate network? Can you tell me what is
"firewalled" about *THAT*!?!!? That's not firewalled. That's
low-carb-fat-free-firewalled.
Amen, Brother Marcus!
Post by Marcus J. Ranum
Post by Ben Nagy
The trouble is that malware still gets in. Poot. Them dang worms is
like roaches, I tell ya. Looks 'ifn that there trusted network weren't quite
so trusted after all...
Peter Neumann likes to make sure people use the words "trusted"
and "trustworthy" properly. :) That was a trusted network but not
a trustworthy network. :) oops.
I still say that 90% of the problem would disappear overnight if MS
removed the execute bit from Outlook's attachments. That doesn't mean we
wouldn't still have problems- but there'd be a lot less of them.

Paul
-----------------------------------------------------------------------------
Paul D. Robertson "My statements in this message are personal opinions
***@compuwar.net which may have no basis whatsoever in fact."
***@trusecure.com Director of Risk Assessment TruSecure Corporation
R. DuFresne
2004-06-01 17:04:42 UTC
Permalink
[SNIP]
Post by Ben Nagy
[...]
Post by Marcus J. Ranum
I have never had a worm or virus since I got interested in security.
NEVER. And I use Windows as my primary desktop platform.
Because you have one machine to take care of, plus you have some idea what
you are doing maybe?
And yet it's not that hard. In 5 years with a teen (and sometimes two teens)
on their desktops, 8 windows boxen, a few SUNs <running OpenBSD> and a
few intel systems running various levels of slackware, all behind an old
archaic gateway that is mostly open but knows the bad windows-related
ports and the few unix-related ports that can be hit with nasties, only
one system has suffered a virus infection out of the horde that has been
spewed in the past 5 years. That system was infected due to a teen
trusting other teens and getting a /dcc download of a nasty. Course the
virus remained isolated from the rest of the windows boxen due to the AV
sigs being up to date.

The point is, certain windows-related ports should not be passed from
outside in, nor vice versa. M$ has not gotten that right and perhaps
never will, so one has to institute measures to ensure it. Since the M$
packet filtering FW is so bogus as to work only one way, either put
something in front of the windows box that can block inside-out as
well as outside-in, or replace the windows packet filter with something
that does know egress as well as ingress.

Rather than trying to beat the vendor into submission, why not sidestep
the vendors toys with decent safe replacements and be done with it?

Thanks,

Ron DuFresne

<this has been a great thread, and if Ben will allow me, I may scarf up
his little green men and the anal whatch-a-ma-callits line for use later
with mgt>
--
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
admin & senior security consultant: sysinfo.com
http://sysinfo.com

"Cutting the space budget really restores my faith in humanity. It
eliminates dreams, goals, and ideals and lets us get straight to the
business of hate, debauchery, and self-annihilation."
-- Johnny Hart

testing, only testing, and damn good at it too!
Jim Seymour
2004-06-01 13:31:26 UTC
Permalink
"Marcus J. Ranum" <***@ranum.com> wrote:
[snip]
Post by Marcus J. Ranum
Well, yeah. If you're using the wrong OS you're an idiot. The fact that
there are a lot of idiots out there doesn't make them any less idiotic, either.
[snip]
"If fifty million people say a stupid thing,
it is still a stupid thing."
- Anatole France

One of my favourite quotes.
--
Jim Seymour | Spammers sue anti-spammers:
***@LinxNet.com | http://www.LinxNet.com/misc/spam/slapp.php
http://jimsun.LinxNet.com | Please donate to the SpamCon Legal Fund:
| http://www.spamcon.org/legalfund/
Marcus J. Ranum
2004-06-01 16:07:57 UTC
Permalink
Post by Jim Seymour
"If fifty million people say a stupid thing,
it is still a stupid thing."
- Anatole France
One of my favourite quotes.
There used to be a great T-shirt in the 70's that National Lampoon
used to sell. It read:
"Eat Sh*t. 50 billion flies can't all be wrong."



Hey, I run Windows, too! :)

mjr.
Paul D. Robertson
2004-06-01 12:45:54 UTC
Permalink
Post by Ben Nagy
Take a look at the recent security record of MS RPC endpoints. You can't
turn them off. You can't secure them. Windows will break.
Funnily enough, I booted WinXP Pro on my laptop[0] last week to put some
shellcode through a disassembler. There was no danger from any RPC-based
malcode.
Post by Ben Nagy
How _ELSE_ do you want to deal with that problem? Let me put it a different
Strategically, I want to deal with it the right way- either removing the
dependence on RPC (hey, all my Linux systems don't need network-based RPC
anymore) or by getting the developers to give me better separation- MS is
actually starting to do that with
whatever-the-heck-the-next-bug-cluster-is-called.
Post by Ben Nagy
You can only harden up until the OS will let you. If the core service has an
Not true- you can firewall things that the OS won't let you do.
Post by Ben Nagy
exploitable bug then only a patch will fix it. Other solutions (like my
If it can't be attacked, then arguably, it doesn't need to be fixed.
Post by Ben Nagy
Even assuming that you could have pre-hardened a box (it is true that
hardening _might_ have let you dodge Blaster and Sasser, but wait until the
multiple vectored worms really start hitting us) then most people just won't
do it. In any case, having a huge freaking gaping security hole in a core
service is not something I feel comfortable about, same as running a
thousand Win95 boxes "behind a firewall" sends shivers down my spine.
Yet lots of people do it every day and don't have many problems....

Paul
[0] G4 Powerbook, running XP in VirtualPC with the hosting OS providing
firewalling. I find Bochs interesting strategically because you actually
could do kernel-level firewalling.
-----------------------------------------------------------------------------
Paul D. Robertson "My statements in this message are personal opinions
***@compuwar.net which may have no basis whatsoever in fact."
***@trusecure.com Director of Risk Assessment TruSecure Corporation
M. Dodge Mumford
2004-06-01 16:00:33 UTC
Permalink
Post by Paul D. Robertson
If it can't be attacked, then arguably, it doesn't need to be fixed.
That sentiment surprises me a bit. It appears to me to violate the concept
of defense in depth. Blocking the exploit path to a vulnerability may
mitigate the risk greatly, but the vulnerability still remains. In your
instance, the exploit path would involve attacking your host operating
system that's performing the firewalling.

I would think the point of mitigating the risk is to buy you time to fix the
vulnerability. That "time to fix" may be "until Longhorn is released." Which
assumes that Longhorn (or, broadly, version++) will fix the vulnerability.
--
Dodge
Paul D. Robertson
2004-06-01 16:33:25 UTC
Permalink
Post by M. Dodge Mumford
Post by Paul D. Robertson
If it can't be attacked, then arguably, it doesn't need to be fixed.
That sentiment surprises me a bit. It appears to me to violate the concept
of defense in depth. Blocking the exploit path to a vulnerability may
I did say "arguably." However, "can't be attacked" isn't automatically
equatable to "can't be attacked right now," which is what I think you're
getting at. For a large number of vulnerabilities, "can't be attacked
right now" equates to "this quarter/year" sorts of things; for another
large set of vulnerabilities, it equates to "better odds that your
platform vendor of choice will suddenly figure out how to convert all
their software to bug-free code!"

There's an argument to be made for always patching, only patching
occasionally, or never patching- it really all depends on your risk, and
which ways you choose to mitigate risk. While patching is a valid
additional control, it's not the only additional control, and it may well
be that _for_some_things_ it's not the right suspenders to go with the
belt.
Post by M. Dodge Mumford
mitigate the risk greatly, but the vulnerability still remains. In your
instance, the exploit path would involve attacking your host operating
system that's performing the firewalling.
Obviously, which is why infrastructure is more important than ever! It's
also a much easier sell to harden and spend real time on infrastructure
than it is on things which are infrastructure, but are so ubiquitous that
they're not seen as such, like desktop operating systems. Now, guess
which OS gets more care and feeding- even though it's probably less at
risk (after all, the XP subsystem exists to deal with actively malicious
and unknown code of decidedly questionable use.)
Post by M. Dodge Mumford
I would think the point of mitigating the risk is to buy you time to fix the
vulnerability. That "time to fix" may be "until Longhorn is released." Which
assumes that Longhorn (or, broadly, version++) will fix the vulnerability.
Yes, that's a risk mitigation point, however it is possible to sometimes
ignore risk, or marginalize it where the rate of successful attack is near
zero, and it's possible to do the same where the cost of successful attack
is near zero. The proof, of course, is in being able to successfully
predict those things, and move to change the posture when those
predictions aren't correct in a timeframe that still provides sufficient
protection.

It may be that it's more prudent to patch regularly once a quarter, and
block in the meantime, or it may actually be prudent to patch immediately-
there are significant costs with patching (not all of them
labo{u}r-related.) Sometimes, the cure is worse than the disease- and
that has to be factored into any risk assessment.

Paul
-----------------------------------------------------------------------------
Paul D. Robertson "My statements in this message are personal opinions
***@compuwar.net which may have no basis whatsoever in fact."
***@trusecure.com Director of Risk Assessment TruSecure Corporation
Marcus J. Ranum
2004-06-01 17:13:20 UTC
Permalink
Post by M. Dodge Mumford
Post by Paul D. Robertson
If it can't be attacked, then arguably, it doesn't need to be fixed.
That sentiment surprises me a bit. It appears to me to violate the concept
of defense in depth.
This is Peter Tippett's theory of synergistic controls. If you have
several things that each reduce the likelihood of something bad
happening, it's better to do a little bit of each of them than to pour
everything into one, because the marginal returns eventually go down.

So, if separating your network so that "it can't be attacked"
is going to address 95% of the risks (ninjas, nanobots, etc, are still
a problem) and hardening the system is going to address another 95%,
you're best off if you do the easiest/cheapest one first. My "perfect
firewall" usually wins that comparison, since it's almost
always easier and cheaper to NOT DO SOMETHING than to DO
something. The equipment cost for an air gap is low. ;)

What's interesting is that if you have 2 security controls that each
help block (on average, assuming random distribution of attack
vectors - which is an interesting assumption) 50% of the attacks,
then you've got 75% of the attacks blocked. Again, the assumption
of random distribution is an interesting and important problem
in the theory. If the attacks distribute disproportionately - if you
can whack 50% of the network attacks and 90% of the attacks
are networked - then your air gap is going to show a much higher
value (95% of 90%). One of the things that makes firewalls
remain attractive is that a disproportion of attacks are networked
AND the effort factor to install them at a perimeter is low.

The concept of defense in depth is to do some pretty basic
stuff in lots of places. And it works. So if you're willing to
assume in Paul's example that "the system cannot be attacked"
is ONLY 95% effective - then a 50% effective antivirus system
on the desktop behind the airgap bumps your likelihood of an
attack getting through down to a whopping 2.5%. But if you
think about it, your first line of defense makes a lot of the
difference and after that it's all diminishing returns.
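
If you want to check the arithmetic yourself, here's a quick Python
sketch (the numbers are just the made-up ones from above, and it
assumes the controls fail independently - which, as noted, is an
interesting assumption):

    # Chance an attack gets through a stack of independent controls:
    # multiply the miss rates together.
    def residual_risk(effectiveness):
        miss = 1.0
        for e in effectiveness:
            miss *= (1.0 - e)
        return miss

    print(residual_risk([0.50, 0.50]))  # 0.25 -> 75% of attacks blocked
    print(residual_risk([0.95, 0.50]))  # 0.025 -> the whopping 2.5%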

Hmm... Did I just say that "just doing ANYTHING" is a good
start? I think I did. ;) Perhaps that's why we find ourselves
on the fence about the host/network - where do I secure it ?
issue - doing *anything* that's not manifestly stupid helps
a great deal. Doing any 2 things that aren't manifestly
stupid gets you most of the rest of the way 100% for all
intents and purposes. If you accept some of the logic I've
thrown at you above, then it stands to reason that doing
things that help less than 40-50% of the time is probably
a waste of time unless you're doing 3 or more of them.

mjr.
Paul D. Robertson
2004-06-01 17:55:56 UTC
Permalink
Post by Marcus J. Ranum
So, if making your network separated so that "it can't be attacked"
is going to address 95% of the risks (ninjas, nanobots, etc, are still
a problem) and hardening the system is going to address another 95%
you're best off if you do the easiest/cheapest one first. In the case
of using my "perfect firewall" it's usually easier since it's almost
always easier and cheaper to NOT DO SOMETHING than to DO
something. The equipment cost for an air gap is low. ;)
Right, this is why default deny inbound is still the #1 thing you can do
to a network. It negates probably 85% of the attack traffic coming at
you, and it's usually less than 10 minutes to implement on the border
router. When I ran mail servers, I'd refuse to implement AV on my
gateway, proclaiming it a "desktop problem!" That was probably the
silliest stance I've ever taken (but it did keep my gateway maintenance
low) - that's probably good for >80% of e-mail spreading malware. Add a DMZ
for inbound HTTP, and you're sitting pretty good overall for the easy to
do and high protection stuff. From there on, things get complicated,
which is why everything else is not so uniformly implemented. I'd say
that >70% of the outbound trojans use IRC as a control vector- so blocking
6667 outbound (and this is a temporal block, it won't always be true for
the bulk of trojans, but is for now) will reduce the risk of already
compromised machines more than anything other than blocking outbound SMTP
(well, outbound SMTP reduces your risk of hitting anyone else, not your
risk of bad stuff happening _yet_.)
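
Purely as illustration, here's a toy first-match rule list in Python
showing the stance (real border routers have their own syntax, and the
ports are just the ones mentioned above):

    # Toy first-match packet filter: (direction, dest_port, action).
    # A port of None matches any port.
    RULES = [
        ("out", 6667, "deny"),   # temporal block: IRC trojan control channel
        ("out", 25,   "deny"),   # outbound SMTP only via the mail relay
        ("out", None, "allow"),
        ("in",  None, "deny"),   # default deny inbound
    ]

    def check(direction, dest_port):
        for d, p, action in RULES:
            if d == direction and p in (None, dest_port):
                return action
        return "deny"            # fail closed

    print(check("in", 135))      # deny
    print(check("out", 6667))    # deny
    print(check("out", 80))      # allow
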
Post by Marcus J. Ranum
What's interesting is that if you have 2 security controls that each
help block (on average, assuming random distribution of attack
vectors - which is an interesting assumption) 50% of the attacks,
then you've got 75% of the attacks blocked. Again, the assumption
of random distribution is an interesting and important problem
in the theory. If the attacks distribute disproportionately - if you
can whack 50% of the network attacks and 90% of the attacks
are networked - then your air gap is going to show a much higher
value (95% of 90%) One of the things that makes firewalls
remain attractive is that a disproportion of attacks are networked
AND the effort factor to install them at a perimeter is low.
Right, and in this case, defense in depth comes from two things- the first
is doing that blocking in two places, the router *and* the firewall- it's
your most effective control, so it should be duplicated, and the second is
to deal with things that have to be allowed to use that vector (including
desktop browsers, SMTP servers and Web servers.)
Post by Marcus J. Ranum
The concept of defense in depth is to do some pretty basic
stuff in lots of places. And it works. So if you're willing to
assume in Paul's example that "the system cannot be attacked
is ONLY 95% effective - then a 50% effective antivirus system
on the desktop behind the airgap bumps your likelihood of an
attack getting through down to a whopping 2.5%. But if you
Actually, I'd argue that perimeter AV probably gets you that far, and
desktop AV is good for about .5% protection from the same vector. Desktop
AV is really the primary control for machine<->machine worms, but it's
_likely_ that a "personal firewall" is more effective and requires less
change over time.
Post by Marcus J. Ranum
think about it, your first line of defense makes a lot of the
difference and after that it's all diminishing returns.
Hmm... Did I just say that "just doing ANYTHING" is a good
start? I think I did. ;) Perhaps that's why we find ourselves
on the fence about the host/network - where do I secure it ?
issue - doing *anything* that's not manifestly stupid helps
a great deal. Doing any 2 things that aren't manifestly
stupid gets you most of the rest of the way 100% for all
intents and purposes. If you accept some of the logic I've
thrown at you above, then it stands to reason that doing
things that help less than 40-50% of the time is probably
a waste of time unless you're doing 3 or more of them.
Right, the non-obvious point here is that doing 1 thing that's 95%
effective is not always better than doing three things that are 80%
effective. The obvious synergies have to be in the attack vector
protection, but if the 95% thing is really hard and the three 80%'s are
easier, you're much more likely to achieve success with the easy ones.
The other side of the coin is failure modes- if you do 3 80% things and
two of them fail, you're still likely to be better protected than doing
one 95% thing, especially if that were to fail.

So, if we take perimeter AV and desktop AV, both probably ~85% effective,
and map that against hardening a system to not run unsigned code at all,
which is probably 99.5% effective, but requires massive support and
maintenance costs- AV wins. Yes, hardening is better- 99 times out of 100
it'll work, where AV will work 85 times out of 100. Also, you'll get
slightly better protection if the AV product isn't identical, because then
the vendor's update cycles become synergistic.
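
The same sort of back-of-the-envelope Python (same independence
assumption, numbers from above) shows the failure-mode point:

    # One 95% control vs. three 80% controls, assuming independence:
    # 5% of attacks get through the former, 0.8% the latter.
    print(1.0 - 0.95)          # 0.05
    print((1.0 - 0.80) ** 3)   # 0.008

    # Failure modes: lose two of the three 80% controls and one still
    # stops 80%; lose the single 95% control and nothing stops anything.
    print(1.0 - 0.80)          # 0.2 gets through
    print(1.0 - 0.0)           # 1.0 - everything gets through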

Paul
-----------------------------------------------------------------------------
Paul D. Robertson "My statements in this message are personal opinions
***@compuwar.net which may have no basis whatsoever in fact."
***@trusecure.com Director of Risk Assessment TruSecure Corporation
R. DuFresne
2004-06-02 00:05:18 UTC
Permalink
Post by M. Dodge Mumford
Post by Paul D. Robertson
If it can't be attacked, then arguably, it doesn't need to be fixed.
That sentiment surprises me a bit. It appears to me to violate the concept
of defense in depth. Blocking the exploit path to a vulnerability may
mitigate the risk greatly, but the vulnerability still remains. In your
instance, the exploit path would involve attacking your host operating
system that's performing the firewalling.
I would think the point of mitigating the risk is to buy you time to fix the
vulnerability. That "time to fix" may be "until Longhorn is released." Which
assumes that Longhorn (or, broadly, version++) will fix the vulnerability.
blocking the exploit path should be viewed in the context of "defense in
depth", and a person has to avoid tunnel vision;

At my present place of employment one of the CISSP's had tunnel vision to
the effect that, in scanning systems for potentially sploitable services, he
had the impression that if he could not touch a service with his scanner,
that in and of itself was an issue; never mind that our unix toolsets used
a number of apps to provide "defense in depth" and thus his scanner was
'running' into them and they were doing their job, blocking his scans to
those services. Was this a problem? Only in his eyes....


Thanks,

Ron DuFresne
--
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
admin & senior security consultant: sysinfo.com
http://sysinfo.com

"Cutting the space budget really restores my faith in humanity. It
eliminates dreams, goals, and ideals and lets us get straight to the
business of hate, debauchery, and self-annihilation."
-- Johnny Hart

testing, only testing, and damn good at it too!
R. DuFresne
2004-06-01 19:12:44 UTC
Permalink
[snip]
Post by Paul D. Robertson
Funnily enough, I booted WinXP Pro on my laptop[0] last week to put some
shellcode through a disassembler. There was no danger from any RPC-based
malcode.
Post by Ben Nagy
How _ELSE_ do you want to deal with that problem? Let me put it a different
Strategically, I want to deal with it the right way- either removing the
dependence on RPC (hey, all my Linux systems don't need network-based RPC
anymore) or by getting the developers to give me better separation- MS is
actually starting to do that with
whatever-the-heck-the-next-bug-cluster-is-called.
I do recall, not long ago, some of these very same folks trying to work out
how to do the same with SUN systems and RPC, which was then a near
nightmare with SUN's dependence on (or wish to depend upon) RPC for many of
its services. One might have thought that would have been a clue for the
redmond crowd to hook into by now?!

Thanks,

Ron DuFresne
--
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
admin & senior security consultant: sysinfo.com
http://sysinfo.com

"Cutting the space budget really restores my faith in humanity. It
eliminates dreams, goals, and ideals and lets us get straight to the
business of hate, debauchery, and self-annihilation."
-- Johnny Hart

testing, only testing, and damn good at it too!
Paul D. Robertson
2004-06-01 19:35:49 UTC
Permalink
Post by R. DuFresne
Post by Paul D. Robertson
Strategically, I want to deal with it the right way- either removing the
dependence on RPC (hey, all my Linux systems don't need network-based RPC
anymore) or by getting the developers to give me better separation- MS is
actually starting to do that with
whatever-the-heck-the-next-bug-cluster-is-called.
I do recall not long ago, some of these very same folks trying to work out
how to do the same with SUN systems and RPC, which was then, a near
nightmare iwth SUN's dependance or wish to depend upon RPC for many of
it's services. One might have thought that would have been a clue for the
redmond crowd to hook into by now?!
I did a kernel module once that bound daemon sockets to loopback, worked
great for RPC services- but Sun's compiler hated me, so I ended up punting
and doing the POC on Linux (this was before GCC was 64-bit clean on
Sparc.)

Logic went something like "if there's no controlling TTY, and you're
trying to bind to if_any, then force the address for the call to loopback."
I eventually added parent process checking- it was a fun little hack-
unfortunately it only worked for things which went through the syscall
table- guess what- in-kernel network filesystems on Linux don't. *sigh*

Paul
-----------------------------------------------------------------------------
Paul D. Robertson "My statements in this message are personal opinions
***@compuwar.net which may have no basis whatsoever in fact."
***@trusecure.com Director of Risk Assessment TruSecure Corporation
Devdas Bhagat
2004-05-27 16:08:28 UTC
Permalink
Post by Marcus J. Ranum
Post by Ben Nagy
To me, amongst the plethora of product, service and snake oil there are two
evolving solution spaces that solve real problems. Host based vulnerability
mitigation
The big problem with host based anything is that the management effort
scales with the number of hosts. That's just a losing battle in the long-term
Actually, it scales with the number of *unique* hosts. If each host is
unique, then the management effort does scale linearly or worse.
However, if we design the system so that we have fewer combinations of
hosts, then the system is actually easier to manage.
Post by Marcus J. Ranum
because nobody's host-count is shrinking. Basically, the host-side problem
is the same as the system administration problem - and the industry has
made a frightening bodge out of its attempts to "solve" that issue.
http://www.infrastructures.org/ is a good way of designing a solution to
the system administration problem. The same approach can be applied to
the security administration issue.
Personally, I would go with a service centric approach to security,
rather than a host centric approach. This is where most security systems
appeared to lead, until we ended up with too many services to manage.

IMHO, a host centric approach (where "host" maps to a group of identical
systems) is a good idea for system management.

A service oriented approach is a good idea for security management.
To clarify:
Each system [1] offers a "service" [2] to its clients. The task for the
security system [3] is to ensure that only authorized clients are allowed to
access these services.

For example, the task of a MUA is to *display* email. Hence, the MUA
needs to be allowed access to functions that display email, but not to
functions that cause possibly harmful content to execute.

<snip>
Devdas Bhagat
[1] A system is a single host or group of hosts.
[2] A service is an interaction between two processes, not necessarily
on the same system.
[3] The security system includes software, hardware *and* wetware. For
my given example, the security system would consist of not including
code that would execute the harmful content.
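
A minimal Python sketch of the idea (the names here are invented for
illustration):

    # Service-centric security: a client is authorized for specific
    # high-level functions of a service, not for "the host".
    AUTHORIZED = {
        ("mua", "mailstore"): {"fetch", "display"},   # note: no "execute"
    }

    def allowed(client, service, function):
        return function in AUTHORIZED.get((client, service), set())

    print(allowed("mua", "mailstore", "display"))   # True
    print(allowed("mua", "mailstore", "execute"))   # False
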
Marcus J. Ranum
2004-05-27 18:55:40 UTC
Permalink
Post by Devdas Bhagat
Personally, I would go with a service centric approach to security,
rather than a host centric approach. This is where most security systems
appeared to lead, until we ended up with too many services to manage.
You're totally correct.

I used to preach that back in 1990 when I was first teaching firewall
systems analysis. You can see how well it's worked!! <LOL>

When I used to audit clients' firewalls (back in the days when people
actually wanted their firewall policies to be understood and thought
about before implementing them) the first question was "what are
the different roles of computing on your network?" So we'd take
all the roles of computing (back in the days when organizations actually
KNEW what they did with their networks) and we'd draw a connectivity
matrix between those different roles. Internet access was just another
role. The cells of the connectivity matrix got loaded with the services
that were necessary between the different roles. The details of how the
services got back and forth was left to the final stage, once it was
agreed that the service was necessary. Services were treated as
high-level concepts (e.g.: "file transfer" not "FTP" or "port 21")
Then you could walk through and talk about transport for services
and mitigation for attacks at an enterprise-role level. It was always
a very "clarifying" exercise.

Usually part way through someone would stand on their chair and
yell, "this is COMPLICATED!"" Well, yeah. Transitive trust and
transitive access *are* complicated. And if you don't think about
them, you can have firewalls and host security until you're purple
in the face and you've accomplished nothing except making your
firewall and host security vendors happy.

Nobody wants to think about transitive trust and transitive access.
Those are big issues that most organizations treat as "solved" or
"nonexistent" depending on their maturity. In truth, they are extremely
complex problems that should not be swept under the rug lightly.

mjr.
George Capehart
2004-05-27 21:58:06 UTC
Permalink
On Wednesday 26 May 2004 06:30 pm, Marcus J. Ranum wrote:

<snip>
threats and vulnerabilities are, and whack those. That's a really
useless approach in the long run. I'd guess that a significant number
of the firewalls I've seen are being used to knock down "well known
bad things" instead of "only allow a few good things." I did a talk
the other day in which I outlined the "old-school" secure firewall
approach (non-routed networks, proxy everything, default deny, audit
policy violations) and people in the room were amazed: "None of our
users would accept that kind of solution!" they cried. Therein lies
the rub. As long as something so important as security is the tail
trying to wag the dog, it's not going to go anyplace.
*crawls out from under rock, drags out soap box*

Seems to me this is less a case of security being the tail trying to wag
the dog as it is a case of users being the tail that actually wags the
dog. One must wonder who is running the company. These are policy
issues, for crying out loud! Sounds like it's time to introduce a
certification and accreditation process into those organizations.
Doesn't have to be as rigorous as DITSCAP or SP 800-37 . . . just
something that forces the people in the company who are supposed to be
managing the risk to do so . . . or formally, in writing, accept the
risk that they're *not* managing.

My 0.02 $currency_denomination.

Cheers,

George Capehart
Paul D. Robertson
2004-06-01 12:29:03 UTC
Permalink
Post by George Capehart
*crawls out from under rock, drags out soap box*
Seems to me this is less a case of security being the tail trying to wag
the dog as it is a case of users being the tail that actually wags the
dog. One must wonder who is running the company. These are policy
issues, for crying out loud! Sounds like it's time to introduce a
certification and accreditation process into those organizations.
Doesn't have to be as rigorous as DITSCAP or SP 800-37 . . . just
something that forces the people in the company who are supposed to be
managing the risk to do so . . . or formally, in writing, accept the
risk that they're *not* managing.
The main issue I've seen is that traditionally, the person ordered to make
rule changes is not often empowered to reject changes, or even require
written justification.

This is the reason that a security policy is important. If your security
policy enumerates who can authorize changes, what the default stance is,
and how risk is to be investigated- then you're way ahead of the users
dictate policy game.

When you do it right, it works. At my last employer, I had a division
vice president who thought that my insistence that they waddle out from behind
their desks to a second computer in their office to use a new application
they were evaluating was too inconvenient for them. They tried to
override me, and when they scheduled a meeting with the corporate CIO to
go over my head, the CIO invited me to explain the policy.

Because I'd done all the policy stuff up-front, and because I'd regularly
haul the CIO into a conference room and make him understand the risks by
doing a half-hour to hour of whiteboarding, where we discussed the network
and business risks of various things, as well as detection and protection
strategies, I had really good solid backing.

The cost of risk is very important. If people in the business expect that
"open a port on the firewall" sorts of things are all that's needed to
accept new risk, then you'll get lots of requests, and they'll be very
difficult to stop, since almost anything can be justified on a forward-
looking basis (the trick is to get the finance people to account for the
benefits and history of the projections.)- but if people have to pay for
risk mitigation, and it's a part of the process, then you tend to get only
the requests that really have some business merit. "We need to reduce
this connectivity by buying this network infrastructure, these licenses
and 1/2 of a FTE" tends to have a pretty good self-moderating impact on
things that aren't strategically important to the organization.

Paul
-----------------------------------------------------------------------------
Paul D. Robertson "My statements in this message are personal opinions
***@compuwar.net which may have no basis whatsoever in fact."
***@trusecure.com Director of Risk Assessment TruSecure Corporation
George Capehart
2004-06-02 15:57:04 UTC
Permalink
Post by Paul D. Robertson
Post by George Capehart
*crawls out from under rock, drags out soap box*
Seems to me this is less a case of security being the tail trying
to wag the dog as it is a case of users being the tail that
actually wags the dog. One must wonder who is running the company.
These are policy issues, for crying out loud! Sounds like it's
time to introduce a certification and accreditation process into
those organizations. Doesn't have to be as rigorous as DITSCAP or
SP 800-37 . . . just something that forces the people in the
company who are supposed to be managing the risk to do so . . . or
formally, in writing, accept the risk that they're *not* managing.
The main issue I've seen is that traditionally, the person ordered to
make rule changes is not often empowered to reject changes, or even
require written justification.
Yep. That's the problem. :> A C&A process would go a long way to
solving it . . . But, as you point out below, *having* and *enforcing*
a C&A process is, in itself, a matter of policy . . . ;}
Post by Paul D. Robertson
This is the reason that a security policy is important. If your
security policy enumerates who can authorize changes, what the
default stance is, and how risk is to be investigated- then you're
way ahead of the users dictate policy game.
When you do it right, it works. At my last employer, I had a
division vice president who thought that my insistence that they
waddle out from behind their desks to a second computer in their
office to use a new application they were evaluating was too
inconvenient for them. They tried to override me, and when they
scheduled a meeting with the corporate CIO to go over my head, the
CIO invited me to explain the policy.
Because I'd done all the policy stuff up-front, and because I'd
regularly haul the CIO into a conference room and make him understand
the risks by doing a half-hour to hour of whiteboarding, where we
discussed the network and business risks of various things, as well
as detection and protection strategies, I had really good solid
backing.
This is an example of a well-functioning risk management process at
work. The absence of policy and/or policy enforcement is a symptom of
a poor or non-existent risk management process. I submit that an
organization that allows users to "set policy" either has no risk
management process or, if it does, it is so weak that it might as well
not exist. And, anticipating the point made in the next paragraph,
this organization has no idea of the cost of the exposure it has.
Post by Paul D. Robertson
The cost of risk is very important.
Hear, hear!
Post by Paul D. Robertson
If people in the business expect
that "open a port on the firewall" sorts of things are all that's
needed to accept new risk, then you'll get lots of requests, and
they'll be very difficult to stop, since almost anything can be
justified in a forward looking basis (the trick is to get the finance
people to account for the benefits and history of the projections.)-
but if people have to pay for risk mitigation, and it's a part of the
process, then you tend to get only the requests that really have some
business merit. "We need to reduce this connectivity by buying this
network infrastructure, these licenses and 1/2 of a FTE" tends to
have a pretty good self-moderating impact on things that aren't
strategically important to the organization.
And on the flip side, requiring the business owner of a system to
formally acknowledge and accept the residual risk *in writing*, is a
powerful tool in helping to manage^Wminimize residual risk . . . :-)

In the end, I guess, the real challenge is to sell those naive
organizations on the value of actively managing their IT risks and the
costs of not doing so.

George
David Lang
2004-06-02 17:58:18 UTC
Permalink
Post by George Capehart
Post by Paul D. Robertson
The cost of risk is very important.
Hear, hear!
Unfortunately this is much easier to say than to define, especially when
you have disagreements between departments over the likelihood of something
being exploited: "Vendor BIGNAME says that their equipment that will span 5
networks is perfectly safe and can't possibly be compromised because they
don't run an OS" from the folks who want to install something, vs. the
security department's view of the same hardware: "these are x86-based nodes
plugged into every network with an ethernet backplane between them; they
are a very high risk"

let alone the more subtle issues of how expensive the risk is to open one
more port through a firewall.

David Lang
--
"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it." - Brian W. Kernighan
George Capehart
2004-06-03 13:35:46 UTC
Permalink
Post by David Lang
Post by George Capehart
Post by Paul D. Robertson
The cost of risk is very important.
Hear, hear!
Unfortunately this is much easier to say than to define, especially
when you have disagreements between departments over the likelihood of
something being exploited: "Vendor BIGNAME says that their equipment
that will span 5 networks is perfectly safe and can't possibly be
compromised because they don't run an OS" from the folks who want to
install something, vs. the security department's view of the same
hardware: "these are x86-based nodes plugged into every network with
an ethernet backplane between them; they are a very high risk"
let alone the more subtle issues of how expensive the risk is to open
one more port through a firewall.
I certainly agree that sometimes it is hard to quantify risk to two
decimal places. But not all risk assessment schemes require that.
With respect to disagreements among departments over the likelihood of
an exploit, that is non-problem. If the organization's management
style is to achieve consensus, lock 'em all in a room and don't let
them out until they come to agreement. If the organization's
management style is by decree, decree it. Bottom line: either risk is
managed or it's not. A functioning risk management process has
mechanisms it needs in place to ensure that risks are identified and
managed. If those mechanisms are not in place, the organization is not
managing its risk . . .

Cheers,

/g
--
George Capehart

capegeo at opengroup dot org

PGP Key ID: 0x63F0F642 available on most public key servers

"It is always possible to agglutenate multiple separate problems into a
single complex interdependent solution. In most cases this is a bad
idea." -- RFC 1925
Paul D. Robertson
2004-06-03 13:55:09 UTC
Permalink
Post by David Lang
Unfortunately this is much easier to say than to define, especially when
you have disagreements between departments over the likelihood of something
being exploited: "Vendor BIGNAME says that their equipment that will span 5
networks is perfectly safe and can't possibly be compromised because they
don't run an OS" from the folks who want to install something, vs. the
security department's view of the same hardware: "these are x86-based nodes
plugged into every network with an ethernet backplane between them; they
are a very high risk"
That's a function of being in the room when $vendor proclaims that their
code is the only code ever written securely. Asking for proof (or better
yet, formal proofs), metrics, measurements and independent assessments and
explanations of how they handle specific circumstances wins almost every
time. "How many bugs/kloc do your coders produce?" "What's the number of
bugs in your bug database for $product?" "What happens if there's a bug in
your interface and someone does the following..."

You *need* to be in those meetings- because then the users see that the
vendor's sales rep and his support *don't* have all that much security
clue, and really don't know all that much about their products.

While it's fun to make the vendor turn tail and run, the real objective is
to ensure that (a) the vendor sweats enough to make the pricing
negotiations much easier, and (b) you can either shoot down the stupid
ideas, or offer "safe" alternatives to doing things the wrong way.

One of the best quotes yet that I got from a vendor in a meeting was
"Stop! I can't think that fast!" In that case though, the users were
being pressured into evaluating and possibly purchasing something they
didn't want- but politically couldn't dismiss themselves. I got invited
to do the thing they were used to seeing me do- beat up the vendor over
security- but this time it was to their advantage for me to poke holes in
it, since it'd give them ammo for rejecting the whole silly scheme.
Post by David Lang
let alone the more subtle issues of how expensive the risk is to open one
more port through a firewall.
Get some sand, a bucket, a nail and a hammer, and *show* them how much
effectiveness they lose with each port.

Paul
-----------------------------------------------------------------------------
Paul D. Robertson "My statements in this message are personal opinions
***@compuwar.net which may have no basis whatsoever in fact."
***@trusecure.com Director of Risk Assessment TruSecure Corporation
Gwendolynn ferch Elydyr
2004-06-03 14:39:04 UTC
Permalink
Post by Paul D. Robertson
One of the best quotes yet that I got from a vendor in a meeting was
"Stop! I can't think that fast!" In that case though, the users were
being pressured into evaluating and possibly purchasing something they
didn't want- but politically couldn't dismiss themselves. I got invited
to do the thing they were used to seeing me do- beat up the vendor over
security- but this time it was to their advantage for me to poke holes in
it, since it'd give them ammo for rejecting the whole silly scheme.
Wandering somewhat afield, the most remarkable reaction that I've ever
gotten from a vendor was the one who called up, practically in tears,
and proclaimed "You can't do this to me! It's not fair!" [0].

I was completely boggled that they thought that a social attack of that
nature was likely to have any effect other than causing me to flee farther.

More to the point, it also helps when you can go down a litany of
requirements with the vendor, and force them to address each item [1]...
Post by Paul D. Robertson
Get some sand, a bucket, a nail and a hammer, and *show* them how much
effectiveness they lose with each port.
Hrm. I may have to try that... if nothing else, it's a fun example ;>

cheers!
[0] "this" being not including their product in the final evaluation
phase. At the time, they didn't have a TLS gateway, which was a showstopper.
[1] Then again, it's always fun to include "Meets RFC 1149 and 3514".
==========================================================================
"A cat spends her life conflicted between a deep, passionate and profound
desire for fish and an equally deep, passionate and profound desire to
avoid getting wet. This is the defining metaphor of my life right now."
Paul D. Robertson
2004-06-03 15:08:38 UTC
Permalink
Post by Gwendolynn ferch Elydyr
Wandering somewhat afield, the most remarkable reaction that I've ever
gotten from a vendor was the one who called up, practically in tears,
and proclaimed "You can't do this to me! It's not fair!" [0].
Those who can't do, teach.
Those who can't teach, sell.
Those who can't sell, market.
Those who can't do, teach, sell, or market go into management.

Paul
-----------------------------------------------------------------------------
Paul D. Robertson "My statements in this message are personal opinions
***@compuwar.net which may have no basis whatsoever in fact."
***@trusecure.com Director of Risk Assessment TruSecure Corporation
Ben Nagy
2004-06-04 09:11:22 UTC
Permalink
-----Original Message-----
Of Paul D. Robertson[...]
That's a function of being in the room when $vendor proclaims
that their code is the only code ever written securely.
Asking for proof (or better yet formal proofs,) metrics,
measurements and independent assessments and explanations of
how they handle specific circumstances wins almost every
time. "How many bugs/kloc do your coders produce?" "What's
the number of bugs in your bug database for $product?" What
happens if there's a bug in your interface and someone does
the following..."
Current vulnerability research is finding lots and lots of amusing ways to
make software fail in spite of "Secure Development Practices".

The thing is that even good methodology can create bugs that are nearly
impossible to find with manual or automatic code auditing tools - as long as
your language lets you directly screw around with memory and your
architecture lets you treat arbitrary memory as code, things will continue to
suck. That's why the job is easier for the attacker - they don't care why
things fail, they can just hammer on things until they break, and if the
process is well enough instrumented they can start working back from the
fault to find out if/how they can craft input which will trigger it in
an interesting way.

Sure, some attackers will use static analysis or "redpoint" and look for
unsafe calls, pointer arithmetic and the like - but techniques like fault
injection with input fuzzing are quick and dirty but sadly very effective.
Rather than measuring on (known) bugs/kloc I think it would be better to ask
"What is your approach to fault test your own object code?", "How do you
plan for component failure" and that kind of thing. Vendors that don't test
their code using the same methods as attackers will get 0wned. This is why
we're still seeing some security vendors with their names up in lights on
bugtraq. (You were probably about to say all this but replaced it with ...
:)

To hark back to my original point >grin< .... this is why I see value in
"stuff" that can mitigate standard vulnerabilities, or block standard
attacks. It turns out to be a lot easier than waiting for every software
vendor to do the right thing. This goes way beyond "personal firewalls"
though, and is where people start talking about IPS, then things get vague,
then people do marketing hand-waving, then Marcus gets mad, and then, and
then...

For a couple of really fun reads on the vulnerability angle, I really like
Hoglund/McGraw's "Exploiting Software" and "The Shellcoder's Handbook", which
are both recent and both written by real researchers.

This thread has been kinda fun. We've got enough soapboxes to build a couple
of carts now. ;)

Cheers,

ben
Paul D. Robertson
2004-06-04 19:27:16 UTC
Permalink
Post by Ben Nagy
Current vulnerability research is finding lots and lots of amusing ways to
make software fail in spite of "Secure Development Practices".
Yes, but that's mostly immaterial to the point- which was inserting the
security function into the product evaluation phase by making vendors do
the security dance, letting the users see that despite the marketing
glossies, the vendors have No Clue[tm] of what's really in their products
unless they're doing things well- if they are, then they're likely to have
reduced more risk than their competitors.
Post by Ben Nagy
The thing is that even good methodology can create bugs that are nearly
impossible to find with manual or automatic code auditing tools - as long as
Yes, but bad methodology creates more bugs- so it's still a general win.
Post by Ben Nagy
Rather than measuring on (known) bugs/kloc, I think it would be better to ask
"What is your approach to fault-testing your own object code?", "How do you
plan for component failure?" and that kind of thing. Vendors that don't test
their code using the same methods as attackers will get 0wned. This is why
Vendors don't need to be attackers, they just have to know how to code
well, and know what attacks exist, and how to not have them. Like all
things, it's relative, but just like inspecting a configuration versus
scanning a system- when you have the source, you get more accuracy from
looking at it[0] than from trying it[1].

[0] Assuming you know what you're looking at.
[1] Assuming no environmental issues.

Paul
-----------------------------------------------------------------------------
Paul D. Robertson "My statements in this message are personal opinions
***@compuwar.net which may have no basis whatsoever in fact."
***@trusecure.com Director of Risk Assessment TruSecure Corporation
Brian Ford
2004-06-01 17:59:35 UTC
Permalink
Marcus,

Wow. That was quite a chunk.

I agree with a lot of what you have said here. But the patient hasn't
died. We all just live too near a place where it stinks. Sometimes when
the smell gets real bad we do something, like close the windows or open an
air freshener. Other times we just grin and bear it.
Post by Marcus J. Ranum
After a while, the folks who are busy
fighting the bug-of-the-week club down in the trenches are
going to say, "hey! look! that guy over there doesn't have this
problem!" and they'll adapt. Or they'll die out or just keep
cheerfully pounding their heads against the wall. But eventually
it will become clear that their approach is loserly.
I don't agree that best practices are flowing through the community. Lots
of folks are using stuff that isn't working well. They don't know what
else is out there, or how anything other than "their thing" works.

We need to raise awareness about what is out there; what is good and what
is bad. Not by labelling technology or products but by talking about
practices. We can start by just focusing on people on lists like
this. What's working well for you and why? I don't see many messages like
that here (or at any of the conferences) any more.

We need to think about how to grow smarter practitioners. I thought last
year it might be via CISSP or some other "certification". I gave that a
shot. Before that I thought the SANS direction (again with certifications)
was good. I don't know if this will work for as large a portion of the
population as is needed.

Patching isn't great. But it is what we have right now and many folks who
insist on sitting in front of computers can use it. Hey, I wish we didn't
depend on oil for energy. But we do.

The sad reality is that many user type folks insist on doing stuff that is
bad for themselves. They read email they shouldn't read. They surf to
sites they shouldn't surf to. They don't use good passwords. They don't
backup data.

If we really want to make the Internet a better place we should solve these
problems.
- Create strong, effective, cross-border laws and go after spammers and
phishers.
- Ditto that with web sites that feed the problem.
- Push the strong password issue back on the organizations that require
them. Don't allow the costs of fraud to be assumed by customers. If
financial institutions had to pay damages to their customers or others for
info leakage incidents or fraud then financial institutions would work on
developing better password technology.
- Develop an OS that has backup built into the OS.

There is no easy path here. We're somewhere in an unpleasant swamp and we
have to _continue_ to try and find a way out.

Liberty for All,

Brian
Post by Marcus J. Ranum
Date: Tue, 01 Jun 2004 10:38:07 -0400
Subject: RE: [fw-wiz] Vulnerability Response (was: BGP TCP RST Attacks)
Post by Ben Nagy
Post by Marcus J. Ranum
As I said, I think time will tell. :)
I'm horribly torn here. I completely agree with you, but I just don't see
any evidence of change. Essentially what you are claiming, when you say that
"time will tell", is that little green men from the Planet Clue are going to
invade earth with their rectal clue applicators and drag most of the IT
industry in the world off to re-education camps.
I didn't say that!!! I didn't even *THINK* that!!
What I think is going to happen is that people are going to
keep spending huge amounts of money on approaches that
don't work. Some, a small number, are going to say, "well, Duh!
and solve the problem." After a while, the folks who are busy
fighting the bug-of-the-week club down in the trenches are
going to say, "hey! look! that guy over there doesn't have this
problem!" and they'll adapt. Or they'll die out or just keep
cheerfully pounding their heads against the wall. But eventually
it will become clear that their approach is loserly.
Remember, loserly behavior is not a function of population
size. Just because lots of people are doing something dumb
doesn't make it any less dumb. It only means that there are
more people doing it.
I *hope* that in 10 years security practitioners will look back
at the days of "the system-wide patching fad" and laugh.
We're a society of fads and "get rich quick" schemes. We'd
rather pay 3X as much for special food that has 1/2 the calories
of normal food - instead of eating 1/2 as much of the normal
food (which actually has real flavor). We'd rather follow a fad
diet that destroys our body with saturated fats than simply
"eat lots. work hard. burn lots of energy." We're still in the
era of get.rich.quick low-carb Internet security - perhaps it
will be the aliens with their clue probes that get us out of it, but
it's more likely we'll either stay there or wise up.
Post by Ben Nagy
Post by Marcus J. Ranum
Post by Ben Nagy
Take a look at the recent security record of MS RPC endpoints. You
can't turn them off. You can't secure them. Windows will break.
Yes. So? YOU ARE INSANE IF YOU ARE RELYING ON WINDOWS FOR
INTERNET-FACING CRITICAL SYSTEMS.
Trouble is that it's not just internet facing systems that get owned. This
idea of crunchy outside chewy centre has GOT to change. It's dead. Didn't
work. Bye-bye.
I'm not advocating a perimeter-only defense!!! I *NEVER* have.
But it's the first and best place to start. If you don't do something
sensible at the perimeter - or you don't have a perimeter at all -
then all your systems are internet-facing. We've seen how well
*THAT* works, too.
- Every year there are more internet-facing systems by
some huge number, as more homes go online
- Many of those systems rely on endpoint mitigation and
patching as their sole security
- Every year, the number of systems compromised keeps
going up
What does that tell you? That the attackers are getting smarter?
No - they're doing the "same old same old". That the attackers
are working harder? Maybe, but it's largely automated. So
if you have largely automated attacks succeeding wildly against
systems that are using low-carb security - well.... What do you
conclude?
Post by Ben Nagy
Post by Marcus J. Ranum
What do you think? If we install JUST ONE MORE PATCH it's
gonna be SECURE? Heck, no. The only way to secure this crap
is to hold it down and hammer a stake through its heart.
Ah c'mon.
I'm serious.
Back in 1997 (blackhat keynote - you can hear the audio on
http://www.ranum.com/security/computer_security/audio/mjr-blackhat-97.mp3
- it's a cruddy recording and I was a bit hung over when I did
the talk, but the idea remains. There's one major "bug" in the talk:
s/"it would be funny if I wasn't kidding"/"it would be funny if I wasn't
serious"/)
Are you trying to tell me that operating systems are holy
writ that cannot be discarded and replaced with something
better? Ever hear of TOPS-10, MULTICS, OS/9, VMS? They
are operating systems that people used to use. O/S's come
and go. Windows is "just a phase" (as my parents used to
say when I wanted to dye my hair weird colors in high
school); it will pass. Maybe.
Post by Ben Nagy
Given that we can't go back to the abacus, we need to work from where we
are, and it is happening.
Why do we need to work from where we are? Where we are is
not good!!! Working harder on it may not make it better. In fact
the preponderance of evidence is that it's getting WORSE.
Do you want to work harder on a situation where hard work
may be rewarded with worsening results? I'm not being
facetious; I am deadly serious. Trying to fix Windows security
has *ONLY* paid off in the stock prices of security companies
and not improved end user experience or system reliability
one iota.
Post by Ben Nagy
I see MS doing GOOD WORK in improving the
fundamental security core of their OS.
I see MS doing GOOD MARKETING in attempting to
unscrew that which is permanently screwed.
Post by Ben Nagy
I nearly passed out when I saw
support for NX memory
It's a nice kludge. Making the stack grow *up* in memory is better -
MULTICS did that in ~1965, around the time I was learning
to walk upright. It's a little harder to code that kind of thing in
your kernel if you're smarter than a chimpanzee, but it means
you never have buffer overruns.
You've all probably heard the old joke, "if computer programmers
built bridges like they write code, the first rainstorm we had would
collapse civilization" - it's wrong. If computer programmers built
bridges like they write code, they'd start off by re-inventing the I-beam
for each bridge - and they'd never get anything done because
they'd be arguing about the relative merits of whatever strongly-hyped
metal alloy was popular that week (XML? couldn't we use XML for that?)
Post by Ben Nagy
no anonymous RPC and host firewall enabled by default
in a general purpose service pack. They've come a long way from VMS. :)
Yes, they have. VMS was so much better, and the gap is growing
rapidly. :)
Post by Ben Nagy
The other option to burning it all and starting again is to "get there from
here". I say it's possible (eventually). Until that happens, we need
auxiliary solutions to prop things up.
I think it's time to start grabbing our stakes and hammers
and getting to work!!
Post by Ben Nagy
Post by Marcus J. Ranum
Well, yeah. If you're using the wrong OS you're an idiot. The
fact that there are a lot of idiots out there doesn't make
them any less idiotic, either.
This line brings a smile to my face every time I read it.
You're right, of course, but lots of people aren't going to admit it when
you rub their nose in it like that. I'm writing this on a Windows box - and
you just told me that your work box is Windows too. I vote that us "idiots"
deserve security too.
I have fabulous security!!! My machine is isolated so that its
manifest weaknesses don't bother me. I accepted the fact
that I have a dumb O/S and, because I am a smart guy, I
designed around it. I also have terrific backups "just in case" ;)
It's what I mean about understanding your risks and working
around them. The problem is that people don't want to
understand 'em and work around them. They just get as
far as "well, there are risks," and start patching.
Post by Ben Nagy
[...]
Post by Marcus J. Ranum
The idea that code needs to be patched frequently and often
is predicated on the flawed concept that cruddy code is
exposed to untrusted network. That's just dumb.
So this is, again, where we differ in opinion. The desktop - also known as
Cruddy Code Central - is what is causing the problem. You "old school"
geniuses have been telling us "newbies" to build super duper amazing transit
points between networks of different trust levels, which we have been trying
to do.
NO you haven't!!! You're like the guys who want to eat 3 gallons
of ice cream a day and still lose weight using some fad diet.
Those things many people call "firewalls" are just low-carb
feel-good half-hearted nods toward security. Their policies
have been set up by committees with marketing people on
them, and their security posture depends more on which business
unit brings in more money than on actually protecting the
network. I mean these darned things allow attachments
through; they allow ActiveX through, they allow IM through,
etc, etc, etc. That's not a firewall. That's a "slow router."
And these "firewalled" networks are full of users who come
and go with laptops that they just plug in wherever they
want whenever they want and are given an IP address and
off they go. Those "mobile users" are on common segments
with mission critical servers and the only "authentication" they
use is the fact that they're physically there. Did I just describe
the typical corporate network? Can you tell me what is
"firewalled" about *THAT*!?!!? That's not firewalled. That's
low-carb-fat-free-firewalled.
Post by Ben Nagy
The trouble is that malware still gets in. Poot. Them dang worms is
like roaches, I tell ya. Looks 'ifn that there trusted network weren't quite
so trusted after all...
Peter Neumann likes to make sure people use the words "trusted"
and "trustworthy" properly. :) That was a trusted network but not
a trustworthy network. :) oops.
Post by Ben Nagy
There comes a point where we have to admit that "the security architecture
operation was a complete success, but the patient died" is of limited value.
The patient died AND IS STARTING TO SMELL!
mjr.
Brian Ford, CISSP
Consulting Engineer, Security & Integrity Specialist
Office of Strategic Technology Planning
Cisco Systems Inc.
http://wwwin.cisco.com/corpdev/
Marcus J. Ranum
2004-06-01 18:33:15 UTC
Permalink
We need to raise awareness about what is out there; what is good and what is bad. Not by labelling technology or products but by talking about practices. We can start by just focusing on people on lists like this. What's working well for you and why? I don't see many messages like that here (or at any of the conferences) any more.
Well, I know a *lot* of us have posted various "here's what works" - including
me - but it's not what people "want to hear" - that's the problem.

What works is not doing it. What works is understanding your traffic.
What works is log monitoring and strict enforcement of a tight policy.
What works is not having business units jump over the chain of command.
What works is not what people WANT or are ABLE to do.
Fortunately, that's not my problem. :) I'll let Darwinian evolution
take care of it, over time.
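
To make "understanding your traffic" concrete, here's a deliberately
over-simplified sketch. The log format is hypothetical - substitute
whatever your firewall actually emits - and the point is only that the
denies get counted and a human looks at the result:

#!/usr/bin/env python
# Log-monitoring sketch (illustrative only). Assumes a hypothetical
# firewall log in which denied connections look like:
#   2004-06-01T18:33:15 DENY tcp 10.1.2.3 -> 192.168.0.5:25
import re
import sys
from collections import Counter

DENY_RE = re.compile(r"DENY\s+\S+\s+(\S+)\s+->\s+\S+:\d+")
THRESHOLD = 50  # denies per source before a human should look

per_source = Counter()
for line in open(sys.argv[1]):
    m = DENY_RE.search(line)
    if m:
        per_source[m.group(1)] += 1

# With a tight policy every deny means something, and a sudden change
# in these totals means your traffic changed - find out why.
for src, count in per_source.most_common():
    if count >= THRESHOLD:
        print("%s: %d denied connections" % (src, count))
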
We need to think about how to grow smarter practitioners. I thought last year it might be via CISSP or some other "certification". I gave that a shot. Before that I thought the SANs direction (again with certifications) was good. I don't know if this will work for as large a portion of the population as is needed.
If education was going to work, it would have worked by now.

Back in the old days, the population of clueful system administrators was larger,
proportionally, than it is now - largely due to growth in the Internet
population. Was security better? Or proportionally the same? The environment
has shifted too much to tell - but I think that if there were a big amount of
leverage, positive or negative, to be achieved by education, we'd be seeing
it by now, right? The population would be sharply divided into the clued
and the non-clued. Instead, it's not happening that way. I don't have to
prove a negative: show me how education is helping in the big picture...
Patching isn't great. But it is what we have right now
Eat sh*t, 50 billion flies can't all be wrong.
Besides, there's lots of it. Is that what you're
saying?
The sad reality is that many user type folks insist on doing stuff that is bad for themselves. They read email they shouldn't read. They surf to sites they shouldn't surf to. They don't use good passwords. They don't backup data.
Right! That's what I mean. It's too late. It's now a human right to click on
attachments in Outlook. Heck, it's a human right to run Outlook,
apparently. What a crock of dingo's kidneys that is! It's a
public health issue. It's a corporate governance issue. It's a matter
of survival - or of bearing the costs of being stupid. I don't care which.
But people gotta stop whining about the end results of their being
stupid.

"*sniffle* I run Windows and no matter what I do, I get HACKED!"
Duh! Here's your sign!
"*WAAAH!* I have a firewall and it didn't help!"
Duh! Here's your sign, go stand over there!
"Boo-HOO! I put my mission critical stuff on a toy O/S and it crashed and burned
when some co-worker clicked on an attachment in Outlook!"
Duh! Here's your sign, welcome to the club!
If we really want to make the Internet a better place we should solve these problems.
- Create strong, effective, cross country laws and go after spammers and phishers.
Y'know, I saw one go across my radar screen this morning. I'll quote some
of it:
http://news.com.com/2102-1034_3-5218178.html?tag=st.util.print
More than 85% of the 800 million email messages sent every day from
Comcast networks are spam from zombie computers. One reason for the
sheer volume of spam coming from Comcast is that Comcast has a large
number of high-speed Internet customers whose connections are most
desirable for spammers to hijack. Comcast's marketing department
nixed a proposal to block traffic on port 25 because the cost of helping
customers reconfigure their mail programs would be quite high.

DUH! HERE'S YOUR SIGN!

When marketing weenies are worried that *other* people are
too dumb to do something, then you KNOW that sound in
the distance is the hoofbeats of the four horsemen.
- Ditto that with web sites that feed the problem.
What, and ruin the $129 million/year anti-spam industry?
- Push the strong password issue back on the organizations that require them. Don't allow the costs of fraud to be assumed by customers. If financial institutions had to pay damages to their customers or others for info leakage incidents or fraud then financial institutions would work on developing better password technology.
Passwords are pointless to worry about for real when the operating
systems they are being used on are less secure than your average
paper bag. The Orange Book Guys knew all this in the 1970's.
- Develop an OS that has backup built into the OS.
Been done. And that's not counting VMS' file versioning, which was
great though annoying to many.
There is no easy path here. We're somewhere in an unpleasant swamp and we have to _continue_ to try and find a way out.
It's important to have the sense to sometimes say, "WOW! dead end! time
to try a different plan!" If you're lost, running around FASTER only gets you
tired.

mjr.
Ames, Neil
2004-06-02 16:24:20 UTC
Permalink
Ron,
You hit one of my peeves. Some very large organizations have this mindset--that you can't have host-based firewalls or low-end appliances doing firewalling of islands of servers--because they want to be able to scan everything and control everything from their desktops. ("Red is grey and yellow white, and *they* decide which is right and which is an illusion..."--if I may [mis-]quote Moody Blues lyrics) I don't beat my head against the wall anymore. I just watch them scramble every once in a while when the chewy middle goes bad for obvious and preventable reasons.


--Fritz

-----Original Message-----
From: R. DuFresne [mailto:***@sysinfo.com]
Sent: Tue 6/1/2004 8:05 PM
To: M. Dodge Mumford
Cc: Paul D. Robertson; Ben Nagy; 'Marcus J. Ranum'; firewall-***@honor.icsalabs.com
Subject: Re: [fw-wiz] Vulnerability Response (was: BGP TCP RST Attacks)
Post by M. Dodge Mumford
Post by Paul D. Robertson
If it can't be attacked, then arguably, it doesn't need to be fixed.
That sentiment surprises me a bit. It appears to me to violate the concept
of defense in depth. Blocking the exploit path to a vulnerability may
mitigate the risk greatly, but the vulnerability still remains. In your
instance, the exploit path would involve attacking your host operating
system that's performing the firewalling.
I would think the point of mitigating the risk is to buy you time to fix the
vulnerability. That "time to fix" may be "until Longhorn is released." Which
assumes that Longhorn (or, broadly, version++) will fix the vulnerability.
blocking the exploit path should be viewed in the context of "defense in
depth", and a person has to avoid tunnel vision.

At my present place of employment, one of the CISSPs had tunnel vision to
the effect that, in scanning systems for potential sploitable services, he
had the impression that if he could not touch a service with his scanner,
that in and of itself was an issue - never mind that our unix toolsets used
a number of apps to provide "defense in depth", and thus his scanner was
'running' into them and they were doing their job, blocking his scans to
those services. Was this a problem? Only in his eyes....


Thanks,

Ron DuFresne
--
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
admin & senior security consultant: sysinfo.com
http://sysinfo.com

"Cutting the space budget really restores my faith in humanity. It
eliminates dreams, goals, and ideals and lets us get straight to the
business of hate, debauchery, and self-annihilation."
-- Johnny Hart

testing, only testing, and damn good at it too!

_______________________________________________
firewall-wizards mailing list
firewall-***@honor.icsalabs.com
http://honor.icsalabs.com/mailman/listinfo/firewall-wizards
Phil Burg
2004-06-03 00:07:45 UTC
Permalink
Post by David Lang
unfortunately this is much easier to say than to define, especially when
you have disagreements between departments over the likelihood of something
being exploited: "Vendor BIDNAME says that their equipment that will span 5
networks is perfectly safe and can't possibly be compromised because they
don't run an OS" from the folks who want to install something, vs. the
security department's view of the same hardware: "these are x86 based nodes
plugged into every network with an ethernet backplane between them; they
are a very high risk"
This part, IMNSHO, is a key part of your risk management policy /
standard / whatever $YOUR_SITE calls it: you need to clearly
define who evaluates security risks and how they do it, the intention
being to arrive at a situation wherein any suitably qualified person
(for some value of suitably qualified) can pick up your RM documentation
and produce a very similar risk assessment to the one any other suitably
qualified person would produce. And of course it needs to be auditable.
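
As a sketch of what that reproducibility can look like once the rubric is
actually written down - the factor names, weights, and bands below are
invented for illustration, not a recommendation:

# Codified risk-scoring rubric (illustrative sketch only). The point is
# reproducibility: two assessors who agree on the input facts get the
# same score, and the rubric itself is something an auditor can read.
RUBRIC = {
    "internet_facing": 3,  # reachable from untrusted networks
    "handles_auth":    2,  # processes credentials
    "unpatched_crit":  4,  # known critical vuln, patch not applied
    "no_monitoring":   1,  # nobody reads the logs
}

def score(facts):
    """Sum the weights of every factor the assessor marked true."""
    return sum(w for name, w in RUBRIC.items() if facts.get(name))

def rating(points):
    """Map the score onto the bands the policy document defines."""
    return "high" if points >= 7 else "medium" if points >= 4 else "low"

facts = {"internet_facing": True, "unpatched_crit": True}
pts = score(facts)
print("score=%d rating=%s" % (pts, rating(pts)))  # score=7 rating=high

The arguments then move to the facts and the weights - both of which an
auditor can inspect - instead of to gut feel.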

Selling this to management at $YOUR_SITE is left as an exercise to the
reader...

Phil
--
Phil Burg
Senior Security Adviser
IT S&A Security and Governance
Coles Myer Ltd
(03) 9483 7165 / 0409 028 411
Paul D. Robertson
2004-06-03 13:35:34 UTC
Permalink
Post by Phil Burg
This part, IMNSHO, is a key part of your risk management policy /
standard / whatever $YOUR_SITE calls it: you need to clearly
define who evaluates security risks and how they do it, the intention
being to arrive at a situation wherein any suitably qualified person
(for some value of suitably qualified) can pick up your RM documentation
and produce a very similar risk assessment to the one any other suitably
qualified person would produce. And of course it needs to be auditable.
*Exactly.* Someone has to _own_ risk management. The people who don't
own it should have input, but not the ability to nitpick. That means the
organization must be comfortable with the person who owns it being able to
assess not just the security risk, but the business risk and weigh the
two.

Generally, though it seemed like I rejected everything put to me, in fact
almost all of my rejections were "no, we won't just open up $foo and let
you do it on $bar, but if you're willing to buy $baz and move things
thusly..." Naturally, I started with "No!" because I'm always in default
denial ;)
Post by Phil Burg
Selling this to management at $YOUR_SITE is left as an exercise to the
reader...
*No!*[1] This is where we *absolutely* need to share experiences- if it
worked for me, it should work for someone else. Enough of those and we
can make some forward progress industry-wide.

There are half a zillion things dedicated to "How do I block P2P?" We
need more "How do I gain and keep responsibility?"

When I left my last company, I thought they'd throw a huge party. I know
I'd pissed off at least hundreds, if not thousands of my co-workers by not
allowing them lots of cool, fun and potentially profitable services. I
didn't make exceptions (even for me,) didn't give politically correct
answers, and didn't bend one bit on my policy. I upset lots and lots of
people, lots and lots of times. The sentiment I got when I said "Bet
you're glad I'm leaving!" was completely the opposite of what I expected.

They understood that I did my job, and my job was to protect the company.
They knew that the company was going to take on more risk within a week or
two- because like most large corporations, there was a lot of internal
politics, and very few people will take the "more likely to be career
limiting, but right" path.

In the end, the people who I interacted with most for new things had
gotten to realize that it was easier by far to come and ask me how they
should do something new than to fight for the right to do it at all after
sneaking it in.

The years of fighting before that weren't fun (mostly for them- I was the
undefeated NO champion of the Universe!)- but they got to where network
security (and infrastructure) became a part of the "we must cover this"
phase of any project.

Paul
[1.] There I go again!
-----------------------------------------------------------------------------
Paul D. Robertson "My statements in this message are personal opinions
***@compuwar.net which may have no basis whatsoever in fact."
***@trusecure.com Director of Risk Assessment TruSecure Corporation
George Capehart
2004-06-03 16:36:10 UTC
Permalink
Post by Paul D. Robertson
Post by Phil Burg
This part, IMNSHO, is a key part of your risk management policy /
standard / whatever $YOUR_SITE calls it: you need to clearly
define who evaluates security risks and how they do it, the
intention being to arrive at a situation wherein any suitably
qualified person (for some value of suitably qualified) can pick up
your RM documentation and produce a very similar risk assessment to
the one any other suitably qualified person would produce. And of
course it needs to be auditable.
*Exactly.* Someone has to _own_ risk management. The people who
don't own it should have input, but not the ability to nitpick. That
means the organization must be comfortable with the person who owns
it being able to assess not just the security risk, but the business
risk and weigh the two.
At the risk of going *way* OT here, I'm going to indulge in a "Best of
All Possible Worlds" scenario:

In my ideal world, IT Risk Management would be part of the charter of
the corporate risk management process. In this world, Risk Management
(TM) is "owned" by the Board of Directors' Risk Management Committee
and is implemented by the Corporate Risk Management Group (CRMG). The
CRMG is staffed with people who understand the different kinds of risk
that the organization faces, and is responsible for determining the levels
of risk that the corporation is willing to sustain and for setting policy
(including InfoSec policies) that allows the organization to manage to
that level. (This doesn't mean that members of this group must define
and write detailed policy, only that they are responsible for getting
it done and balancing the policy statements that come from the
different areas. There are many ways to skin a cat, and their job is
to make the final decision as to which way to do it). The Executive
Committee is responsible and accountable for the enforcement of the
policy. Inputs into the policy-making process are sought from all
affected parties who are represented on the working watchdog committees
that are chartered with understanding the organization's risk profile,
recommending controls, and monitoring the effectiveness of existing
controls.

Granted, this picture assumes a not-really-small corporate structure,
but the basic logic can be applied to organizations of all types and
sizes. The idea is that risk management is part of organizational
governance, and as such, is "owned" by those who are responsible for
oversight of the organization, whatever its structure. LOB managers
(and I lump the CIO in this category) are responsible for managing the
risk that is generated by their "branch of the tree," and they should
be given some latitude in how they do so. If they elect to do things
in a way that "violates policy," they should be required to sign off on
the fact that they realize that they're violating policy and accept the
additional risk, and that their job is on the line for it.

The main idea that I'm trying to promote here is that risk management is
a governance issue and that "ownership" of the process should be at
that level. It is not fair to the folks "at the pointed end of the
stick" to put them in the position of having to make policy decisions.
Implementation decisions, yes; policy decisions, no. For example, a
policy decision might be "No P2P." An implementation decision would be
how and where to block it. Network security shouldn't be the one that
has to set and defend the policy. They should certainly be
represented on the policy-making committee(s), but the policy should
come from the risk management group, not network security.

Now, having said all that, there will always be things that slip between
the cracks and new threats will evolve for which there is no policy.
This is where the folks at the pointed end of the stick get involved . . .
Post by Paul D. Robertson
Generally, though it seemed like I rejected everything put to me, in
fact, almost all of my rejections were "no, we won't just open up
$foo and let you do it on $bar, but if you're willing to buy $baz and
move things thusly..." Naturally, I started with "No!" because I'm
always in default denial ;)
*snicker* *snicker* *guffaw* *guffaw* Default deny . . . Default denial
. . . *snort* *snort* That was a good one. :-)

Now, in my ideal world, things like this would happen every now and
then, but after the "Hell, no" would come: "Let's present this to this
month's meeting of the risk management group and see what they say."

<snip>
Post by Paul D. Robertson
They understood that I did my job, and my job was to protect the
company. They knew that the company was going to take on more risk
within a week or two- because like most large corporations, there was
a lot of internal politics, and very few people will take the "more
likely to be career limiting, but right" path.
In the end, the people who I interacted with most for new things had
gotten to realize that it was easier by far to come and ask me how
they should do something new than to fight for the right to do it at
all after sneaking it in.
It's been that way in most places I've been, but there have been some
refreshing exceptions . . . sometimes in organizations from which I'd
least expected it.

OK. I've gotten it off my chest. Even if this message doesn't make it
to the list, I feel *much* better . . . ;-)
f***@bellsouth.net
2004-06-03 18:51:31 UTC
Permalink
Not to try and finish your thought, but all too often...

"Those who can't do, teach.
Those who can't teach, sell.
Those who can't sell, market.
Those who can't do, teach, sell, or market go into management."

Those who can't manage become consultants.

[:o)


============================================================
From: "Paul D. Robertson" <***@compuwar.net>
Date: 2004/06/03 Thu AM 11:08:38 EDT
To: Gwendolynn ferch Elydyr <***@reptiles.org>
CC: David Lang <***@digitalinsight.com>,
George Capehart <***@opengroup.org>,
firewall-***@honor.icsalabs.com
Subject: Re: [fw-wiz] Vulnerability Response (was: BGP TCP RST Attacks)
Post by Gwendolynn ferch Elydyr
Wandering somewhat afield, the most remarkable reaction that I've ever
gotten from a vendor was the one who called up, practically in tears,
and proclaimed "You can't do this to me! It's not fair!" [0].
Those who can't do, teach.
Those who can't teach, sell.
Those who can't sell, market.
Those who can't do, teach, sell, or market go into management.

Paul
-----------------------------------------------------------------------------
Paul D. Robertson "My statements in this message are personal opinions
***@compuwar.net which may have no basis whatsoever in fact."
***@trusecure.com Director of Risk Assessment TruSecure Corporation
_______________________________________________
firewall-wizards mailing list
firewall-***@honor.icsalabs.com
http://honor.icsalabs.com/mailman/listinfo/firewall-wizards
============================================================


Mark F.
MCP, CCNA
"You can spend your life any way you want... But you can only spend it once."
Gwendolynn ferch Elydyr
2004-06-03 19:28:46 UTC
Permalink
Post by f***@bellsouth.net
Not to try and finish your thought, but all too often...
"Those who can't do, teach.
Those who can't teach, sell.
Those who can't sell, market.
Those who can't do, teach, sell, or market go into management."
Those who can't manage become consultants.
^
Big5

... but really - it's actually much more difficult to teach, sell, and
market than most of us are willing to admit - and an independent
consultant gets to do it all.

I've also noticed that while employers are frequently willing to excuse
the lack of certifications in their FTEs, they're very eager to see certs
for consultants. Has anybody else seen this?

cheers!
==========================================================================
"A cat spends her life conflicted between a deep, passionate and profound
desire for fish and an equally deep, passionate and profound desire to
avoid getting wet. This is the defining metaphor of my life right now."
f***@bellsouth.net
2004-06-03 20:08:29 UTC
Permalink
Yes, teaching is an art. It can be difficult in and of itself, but I've seen PLENTY of good teachers who wouldn't last a day in the field. Selling is also a talent. My boss is a good example. He was in the tech field for a few years and not half bad at it, then he became an instructor/administrator at a small college and was pretty good. Then he moved up the management track in my company and is now the VP because he is a great salesman/marketer. Though he relies on his technical staff for our technical skill, he could market and sell a line of battery-operated freezers to Eskimos. He drums up the work and contracts while allowing his staff to perform the tasks at hand.

I agree with your observation about consultants needing certs to appear credible. These days, though, one needs certs and a BA to get a job in this field - not counting the old-timers who have been with the same outfit for 10+ years; they have proved their credibility. If they ever DO need to find new employment, though, it can get tough with "just" 10-15 years of experience and no certs. Individuals vary greatly, though, so what you need isn't etched in stone. Who you know, personality, salesmanship, luck, etc. all play a part.

Whoops, been talking way too long. Sorry!

============================================================
From: Gwendolynn ferch Elydyr <***@reptiles.org>
Date: 2004/06/03 Thu PM 03:28:46 EDT
To: ***@bellsouth.net
CC: "Paul D. Robertson" <***@compuwar.net>,
David Lang <***@digitalinsight.com>,
George Capehart <***@opengroup.org>,
<firewall-***@honor.icsalabs.com>
Subject: Re: Re: [fw-wiz] Vulnerability Response (was: BGP TCP RST Attacks)
Post by f***@bellsouth.net
Not to try and finish your thought, but all too often...
"Those who can't do, teach.
Those who can't teach, sell.
Those who can't sell, market.
Those who can't do, teach, sell, or market go into management."
Those who can't manage become consultants.
^
Big5

... but really - it's actually much more difficult to teach, sell, and
market than most of us are willing to admit - and an independent
consultant gets to do it all.

I've also noticed that while employers are frequently willing to excuse
the lack of certifications in their FTE, they're very eager to see certs
for consultants. Has anybody else seen this?

cheers!
==========================================================================
"A cat spends her life conflicted between a deep, passionate and profound
desire for fish and an equally deep, passionate and profound desire to
avoid getting wet. This is the defining metaphor of my life right now."

============================================================


Mark F.
MCP, CCNA
"You can spend your life any way you want... But you can only spend it once."
Margles Singleton
2004-06-04 03:22:11 UTC
Permalink
Post by Brian Ford
I don't agree that best practices are flowing through the community. Lots
of folks are using stuff that isn't working well. They don't know what
else is out there or how anything else other than how "their thing" works.
Speaking as a newbie, these lists are a great thing: I "listen" to how
experienced folks think and argue - and I learn. I believe there are many
folks like myself on these lists, simply listening in order to improve their
skills and knowledge.
Post by Brian Ford
gave that a shot. Before that I thought the SANs direction (again with
certifications) was good. I don't know if this will work for as large a
portion of the population as is needed.
When I moved into security, SANS was decidedly the best thing I ever did for
myself. I was working for a company that had no security
awareness/department, and I had to figure out *everything* for myself. SANS
gave me a road map, and a yardstick by which to measure my progress.

Something I noticed, however: the SANS conferences draw a large crowd - but
a very small percentage of those attending ever certify. I think this
demonstrates that old saw: "You can lead a horse to water, but you can't
make him think...."

Unless - I believe until - security can be packaged in a black box, there
will not be tremendous gains in security. My reasoning? Black boxes are
those technologies that we have faith in working without knowing why:
microwaves, cars, and TV sets are all examples. A NASCAR team will know the
fine details of tuning a car, but the Great Unwashed will not: they will
simply turn the key and go - and this is how it should be - and I believe in
future it will be like that for security as well. In the meantime, I don't
believe there is a more exciting time to be working in the field of security
than NOW, before everything is packaged up in dull, boring, black boxes that
anyone can utilize.

Frankly, I think all you guys and geeks are getting too easily discouraged,
and not recognizing the great job that you are all doing - INCLUDING
communicating....

Margles

Gwendolynn ferch Elydyr
2004-06-04 20:23:47 UTC
Permalink
Post by Margles Singleton
Something I noticed, however: the SANS conferences draw a large crowd - but
a very small percentage of those attending ever certify. I think this
demonstrates that old saw: "You can lead a horse to water, but you can't
make him think...."
I disagree. There's a difference between learning and certification. It's
disingenuous [although lucrative] to confuse the two.

Looking at the costs involved in certification, before addressing the
question of the value of certification:

Trolling through the SANS web pages, it looks like the course
fees vary from ~$600 per tutorial, up to ~$900, if you register
late.

The GIAC certification is a mere $250 -per certification- with
the SANS training - $450 -per cert- without SANS training [all in
USD, of course]. Recertification [which is required every two years]
is $120 [but will cover all the exams that you take].

The CISSP exam appears to be $450 USD - review courses all appear
to be in the $2000+ USD range [~$2500 USD on average].

This is before you factor in travel, lodgings, and meals.

If you can persuade your company that training you is valuable, and not
likely to lead to your immediate departure for greener fields, that's
definitely a bonus.

Otherwise, you're looking at significant out-of-pocket costs unless you
elect to challenge the exams [and even then you're looking at $450+ per
exam] - not to mention time away from work, and travel costs if you
don't live in a major metropolitan area.
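
For a back-of-the-envelope total - one SANS course plus the GIAC exam,
using the figures above, with the travel number a pure guess:

# Rough first-year cost of a single SANS/GIAC certification, using the
# figures quoted above; travel/lodging/meals are assumed and vary widely.
course     = 600    # one tutorial, early registration
giac_exam  = 250    # per certification, with SANS training
travel_etc = 1200   # flights, hotel, meals - a guess
print(course + giac_exam + travel_etc)  # 2050 USD, before recertifying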

Moving on to the merits of certification, there's also the question of
whether a certification actually says anything at all about your retained
knowledge and ability, rather than your ability to cram and regurgitate
enough information to pass an exam.

I rather suspect that most of us succeeded in passing exams in high
school and college/university that we'd be hard pressed to fathom today.

That said, those letters are a quick way to get through the HR filters.

cheers!
==========================================================================
"A cat spends her life conflicted between a deep, passionate and profound
desire for fish and an equally deep, passionate and profound desire to
avoid getting wet. This is the defining metaphor of my life right now."
Laura Taylor
2004-06-13 00:40:34 UTC
Permalink
Certification is only a qualifier of technical skills. From my experience,
there is always an obvious solution for the technical problems. The people
problems are much more difficult to solve, and only years of experience
polishes up a person's people skills. Typically what separates junior level
folks from senior level, or executive level, folks is more often not their
technical skills, but their people skills -- at least in my opinion.

Laura Taylor
Relevant Technologies, Inc.
www.relevanttechnologies.com


-----Original Message-----
From: firewall-wizards-***@honor.icsalabs.com
[mailto:firewall-wizards-***@honor.icsalabs.com]On Behalf Of
Gwendolynn ferch Elydyr
Sent: Friday, June 04, 2004 4:24 PM
To: Margles Singleton
Cc: firewall-***@honor.icsalabs.com
Subject: Certification (was Re:[fw-wiz] Vulnerability Response)
Post by Margles Singleton
Something I noticed, however: the SANS conferences draw a large crowd - but
a very small percentage of those attending ever certify. I think this
demonstrates that old saw: "You can lead a horse to water, but you can't
make him think...."
I disagree. There's a difference between learning and certification. It's
disingenuous [although lucrative] to confuse the two.

Looking at the costs involved in certification, before addressing the
question of the value of certification:

Trolling through the SANS web pages, it looks like the course
fees vary from ~$600 per tutorial, up to ~$900, if you register
late.

The GIAC certification is a mere $250 -per certification- with
the SANS training - $450 -per cert- without SANS training [all in
USD, of course]. Recertification [which is required every two years]
is $120 [but will cover all the exams that you take].

The CISSP exam appears to be $450 USD - review courses all appear
to be in the $2000+ USD range [~$2500 USD on average].

This is before you factor in travel, lodgings, and meals.

If you can persuade your company that training you is valuable, and not
likely to lead to your immediate departure for greener fields, that's
definitely a bonus.

Otherwise, you're looking at significant out-of-pocket costs unless you
elect to challenge the exams [and even then you're looking at $450+ per
exam] - not to mention time away from work, and travel costs if you
don't live in a major metropolitan area.

Moving on to the merits of certification, there's also the question of
whether a certification actually says anything at all about your retained
knowledge and ability, rather than your ability to cram and regurgitate
enough information to pass an exam.

I rather suspect that most of us succeeded in passing exams in high
school and college/university that we'd be hard pressed to fathom today.

That said, those letters are a quick way to get through the HR filters.

cheers!
==========================================================================
"A cat spends her life conflicted between a deep, passionate and profound
desire for fish and an equally deep, passionate and profound desire to
avoid getting wet. This is the defining metaphor of my life right now."
Gwendolynn ferch Elydyr
2004-06-14 15:45:05 UTC
Permalink
Post by Laura Taylor
Certification is only a qualifier of technical skills. From my experience,
there is always an obvious solution for the technical problems. The people
problems are much more difficult to solve, and only years of experience
polishes up a person's people skills. Typically what separates junior level
folks from senior level, or executive level, folks is more often not their
technical skills, but their people skills -- at least in my opinion.
That's an interesting assertion. I don't believe that certification is
in any respect a predictor of technical skills.

The only thing that certification is a predictor of is your ability to
pass the testing requirements at the time that you were tested. Nothing
else.

Further, many certifications are based on soft skills[0], not the
ability to recall which flags a command requires.

The critical skill difference that I see between junior and senior level
people is the ability to see a broader picture, and apply plans and
decisions appropriately, rather than acting reactively.

A junior person will say "You need netmeeting? Okay! Let me open the
ports in the firewall".

A senior person will say "You have a need for video conferencing and
ip telephony? Let's see how we can do this securely, while still meeting
your needs".

The CXX will say "We want to be able to talk to our London office from
New York with live video cheaply".

That's planning and vision, not people skills[1].

cheers!
[0] Take a look at the CISSP curriculum
[1] Having people skills doesn't hurt, but I think we can all come up
with a number of senior and CXX folk that have -terrible- people
skills, and a vast number that have mediocre people skills.
==========================================================================
"A cat spends her life conflicted between a deep, passionate and profound
desire for fish and an equally deep, passionate and profound desire to
avoid getting wet. This is the defining metaphor of my life right now."
Marcus J. Ranum
2004-06-14 16:15:50 UTC
Permalink
Post by Laura Taylor
Typically what separates junior level
folks from senior level, or executive level, folks is more often not their
technical skills, but their people skills -- at least in my opinion
There's also a huge amount of value to having been around long
enough to have made a lot of mistakes and learned from them. ;)

mjr.
Crispin Cowan
2004-06-14 18:21:17 UTC
Permalink
Post by Marcus J. Ranum
Post by Laura Taylor
Typically what separates junior level
folks from senior level, or executive level, folks is more often not their
technical skills, but their people skills -- at least in my opinion
There's also a huge amount of value to having been around long
enough to have made a lot of mistakes and learned from them. ;)
Expert, n: someone who has made all of the mistakes. See "consultant".

Crispin
--
Crispin Cowan, Ph.D. http://immunix.com/~crispin/
CTO, Immunix http://immunix.com
Vladimir Parkhaev
2004-06-14 19:06:12 UTC
Permalink
Post by Crispin Cowan
Expert, n: someone who has made all of the mistakes. See "consultant".
I strongly disagree. $consultant != 'expert'. Most of the consultants
out there have great PR and soft skills and limited technical skills.
There are soooo many clueless consultants out there, way more than
clueful ones...
--
.signature: No such file or directory
Dave Piscitello
2004-06-18 17:56:49 UTC
Permalink
Post by Vladimir Parkhaev
Post by Crispin Cowan
Expert, n: someone who has made all of the mistakes. See "consultant".
I strongly disagree. $consultant != 'expert'. Most of the consultants
out there have great PR and soft skills and limited technical skills.
There are soooo many clueless consultants out there, way more than
clueful ones...
I hope the moderator will put this thread to bed, as it's degenerating
into digital racism. Once we begin blanket condemnation of any group
we aren't acting much like wizards...
Paul D. Robertson
2004-06-18 22:30:51 UTC
Permalink
Post by Dave Piscitello
Post by Vladimir Parkhaev
Post by Crispin Cowan
Expert, n: someone who has made all of the mistakes. See "consultant".
I strongly disagree. $consultant != 'expert'. Most of the consultants
out there have great PR and soft skills and limited technical skills.
There are soooo many clueless consultants out there, way more than
clueful ones...
I hope the moderator will put this thread to bed, as it's degenerating
into digital racism. Once we begin blanket condemnation of any group
we aren't acting much like wizards...
Sure we are, we're smiting! You can't have wizards without a good smiting
or two! Me? I call for a blanket condemnation of attackers.

There, I feel better now. Good thing it's Friday.

Seriously, no need for anyone to get wrapped up in it, there's nothing
wrong with a little give and take- it's really amusing to see otherwise
rational folks defend consulting or certifications, or attack consulting
or certifications based upon their status as such- now, is the status due
to the belief, or the belief due to the status? That's the interesting
question in these threads...

Paul
-----------------------------------------------------------------------------
Paul D. Robertson "My statements in this message are personal opinions
***@compuwar.net which may have no basis whatsoever in fact."
***@trusecure.com Director of Risk Assessment TruSecure Corporation
Dave Piscitello
2004-06-19 00:51:03 UTC
Permalink
Post by Paul D. Robertson
Sure we are, we're smiting! You can't have wizards without a good smiting
or two! Me?
Ah, is that a broad sword 2d5 under your slicker, or are you happy to be here?

For the record, I did not take personal offense at the comment, nor was I
in any way throwing my strength=18/30 body in the way of incoming darts
aimed at consultants at large. I'm just tired of the bickering.

So my final $.02. It's the sum of the qualifications and the measure of the
character of any individual and not any one of experience, charm,
certification, or integrity that we might consider when we evaluate a job
candidate, whether for a LAN admin position or C*O.
Post by Paul D. Robertson
I call for a blanket condemnation of attackers.
Unfortunate timing for such a comment given today's events.
But I couldn't agree more.
If only it were that easy in the virtual and real world...

Now, everyone go home and hug (or call) someone you love...
David M. Piscitello
Core Competence, Inc.
Myrtle Bank Lane HHI, SC 29926
Company: http://www.corecom.com
WebLog: http://hhi.corecom.com/weblogindex.htm
Personal: http://hhi.corecom.com
SIP: ***@fwd.pulver.com

Vladimir Parkhaev
2004-06-18 23:19:42 UTC
Permalink
Post by Dave Piscitello
Post by Vladimir Parkhaev
I strongly disagree. $consultant != 'expert'. Most of the consultants
out there have great PR and soft skills and limited technical skills.
There are soooo many clueless consultants out there, way more than
cluefull ones...
I hope the moderator will put this thread to bed, as it's degenerating
into digital racism. Once we begin blanket condemnation of any group
we aren't acting much like wizards...
Well, sorry Dave (and other clueful consultants out there). I meant to
say that 'consultant' does not automatically imply 'expert'. It is kind
of tricky to express in Perl...

It was totally wrong of me to say
$consultant != 'expert';
since it produces
Argument "expert" isn't numeric in numeric ne (!=) at line whatever.

None of the following describes the meaning correctly:
$consultant ne 'expert';
$consultant !~ /expert/;
perhaps the best is
($consultant && $clueful) ? 'expert' : undef;
but $clueful also seems to be undefined....

;)
--
.signature: No such file or directory
R. DuFresne
2004-06-14 16:38:08 UTC
Permalink
Post by Laura Taylor
Certification is only a qualifier of technical skills.
This depends upon the certification, a CISSP is a management
certification, and reflectys little on the technical skills of the person
possessing. They are far more common these days and of minimal use to the
technicall7y oriented except in getting aresume past HR/recruiters that
lack the ability yo actually determine and define skills in the technical
realm. Not to mention that certifications is a sub-industry of the field
in and of itself and driven by pretty much stricktly monetary modifiers.
Post by Laura Taylor
From my experience,
there is always an obvious solution for the technical problems. The people
problems are much more difficult to solve, and only years of experience
polishes up a person's people skills.
This was once the case, but these days the market is blinded by the
whole certification sub-industry. Try to get a resume lacking the
three-letter syndrome past HR/recruiters these days and see how well you fly.

Useless acronyms mean more than experience or skills. Of course the
question also remains: how far do recruiters and HR folks go in verifying
certification? How about re-certification? <as though these mean much
either>
Post by Laura Taylor
Typically what separates junior level
folks from senior level, or executive level, folks is more often not their
technical skills, but their people skills -- at least in my opinion.
Yes, the more junior the tech, the closer they tend to be to the end-user
level, and they tend to have better people skills; management and level 1
support have often forgotten what they once knew and used as a skill.

Thanks,

Ron DuFresne
Post by Laura Taylor
-----Original Message-----
Gwendolynn ferch Elydyr
Sent: Friday, June 04, 2004 4:24 PM
To: Margles Singleton
Subject: Certification (was Re:[fw-wiz] Vulnerability Response)
Post by Margles Singleton
Something I noticed, however: the SANS conferences draw a large crowd -
but
Post by Margles Singleton
a very small percentage of those attending ever certify. I think this
demonstrates that old saw: "You can lead a horse to water, but you can't
make him think...."
I disagree. There's a difference between learning and certification. It's
disingenuous [although lucrative] to confuse the two.
Looking at the costs involved in certification, before addressing the
Trolling through the SANS web pages, it looks like the course
fees vary from ~$600 per tutorial, up to ~$900, if you register
late.
The GIAC certification is a mere $250 -per certification- with
the SANS training - $450 -per cert- without SANS training [all in
USD, of course]. Recertification [which is required every two years]
is $120 [but will cover all the exams that you take].
The CISSP exam appears to be $450 USD - review courses all appear
to be in the $2000+ USD range [~$2500 USD on average].
This is before you factor in travel, lodgings, and meals.
If you can persuade your company that training you is valuable, and not
likely to lead to your immediate departure for greener fields, that's
definitely a bonus.
Otherwise, you're looking at significant out-of-pocket costs unless you
elect to challenge the exams [and even then you're looking at $450+ per
exam] - not to mention time away from work, and travel costs if you
don't live in a major metropolitan area.
Moving on to the merits of certification, there's also the question of
whether a certification actually says anything at all about your retained
knowledge and ability, rather than your ability to cram and regurgitate
enough information to pass an exam.
I rather suspect that most of us succeeded in passing exams in high
school and college/university that we'd be hard pressed to fathom today.
That said, those letters are a quick way to get through the HR filters.
cheers!
==========================================================================
"A cat spends her life conflicted between a deep, passionate and profound
desire for fish and an equally deep, passionate and profound desire to
avoid getting wet. This is the defining metaphor of my life right now."
_______________________________________________
firewall-wizards mailing list
http://honor.icsalabs.com/mailman/listinfo/firewall-wizards
--
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
admin & senior security consultant: sysinfo.com
http://sysinfo.com

"Cutting the space budget really restores my faith in humanity. It
eliminates dreams, goals, and ideals and lets us get straight to the
business of hate, debauchery, and self-annihilation."
-- Johnny Hart

testing, only testing, and damn good at it too!
Chris Blask
2004-06-08 00:03:34 UTC
Permalink
Post by Margles Singleton
Post by Brian Ford
I don't agree that best practices are flowing through the community. Lots
of folks are using stuff that isn't working well. They don't know what
else is out there, or how anything other than "their thing" works.
Depends what you mean by "flowing". It's not the Nile River, but it keeps
the structure ticking away so far. Brian, you know I understand the need
for canaries in the coal mine, but there's lots of canaries so I'll let
others carry that burden and I'll be the Bluebird of Optimism... ;-)
Post by Margles Singleton
Speaking as a newbie, these lists are a great thing: I "listen" to how
experienced folks think and argue - and I learn. I believe there are many
folks like myself on these lists, simply listening in order to improve
their skills and knowledge.
'Freedom of Speech Proven to Work. Central Control heard to mutter
"damnit" before tripping over a box of Approved Worker Units, falling down
a staircase and breaking its neck.'
Post by Margles Singleton
When I moved into security, SANS was decidedly the best thing I ever did
for myself. I was working for a company that had no security
awareness/department, and I had to figure out *everything* for
myself. SANS gave me a road map, and a yardstick by which to measure my
progress.
Left to your own devices you figured out where to start, worked through a
session some other folks made available for their own self-directed
reasons, then monitored the thoughts of people attempting similar tasks.

Isn't that just incredibly cool?

Never forget that only a few decades ago it was a serious debate among
Learned Folks whether people needed to be Centrally Controlled or were
better off left to their own devices. The moment-by-moment existence of
the Internet is proof that Central Control can go hang itself, quietly,
thank you very much.

In some ways the debate goes on, and we can Never Let Them Win.
Post by Margles Singleton
Something I noticed, however: the SANS conferences draw a large crowd -
but a very small percentage of those attending ever certify. I think this
demonstrates that old saw: "You can lead a horse to water, but you can't
make him think...."
Darwin.

Even better, turns out Darwin works inside individuals - we evolve at meme
speed. There remains hope for many of the un-watered.
Post by Margles Singleton
Unless - I believe until - security can be packaged in a black box, there
will not be tremendous gains in security. My reasoning? Black boxes are
those technologies that we have faith in working without knowing
why: microwaves, cars, and TV sets are all examples. A NASCAR team will
know the fine details of tuning a car, but the Great Unwashed will
not: they will simply turn the key and go - and this is how it should be
- and I believe in future it will be like that for security as well.
True. To an extent it is already. Lots of things that used to take a
great deal of expert handiwork are already available in sheetmetal boxes.

Why trust sheetmetal boxes?

1 - don't.

2 - trust your ability to make informed choices on what sort of trust to
put into each piece of your defenses.

3 - if you take the effort and responsibility to be informed, you can
determine which sheetmetal boxes are being produced by folks who are
following Darwinistic Success Paths and use such boxes in your defense
structure.

You shouldn't have to mine the ore and grind the gunpowder yourself, but a
reliable MK 15 Phalanx Close-In Weapons System sure can come in handy from
time to time...
Post by Margles Singleton
In the meantime, I don't believe there is a more exciting time to be
working in the field of security than NOW, before everything is packaged
up in dull, boring, black boxes that anyone can utilize.
I agree.

Still, I think playing with the boxes and arranging them against bad guys
will be fun for a while yet. There's still a lot of brand new thinking to do.

What Brian and many others are saying remains true - there's a lot of work
to be done and no time for lolly-gagging around. I just have exceptional
trust in individuals' aggregate ability to seek success.
Post by Margles Singleton
Frankly, I think all you guys and geeks are getting too easily
discouraged, and not recognizing the great job that you are all doing -
INCLUDING communicating....
Yep yep!

I love it!

Go Freedom of Speech!

:-)

-chris



Chris Blask
Vice President, Business Development
Protego Networks Inc.

(1) 416 358 9885- Mobile
(1) 408 262 5220 - HQ
(1) 408 262 5280 - Fax

***@protegonetworks.com
www.protegonetworks.com

Protego MARS - Integration, Insight and Control

Integration. Insight. Control.
Margles Singleton
2004-06-08 12:50:31 UTC
Permalink
Post by Chris Blask
Post by Margles Singleton
Unless - I believe until - security can be packaged in a black box, there
will not be tremendous gains in security. My reasoning? Black boxes are
those technologies that we have faith in working without knowing why:
microwaves, cars, and TV sets are all examples. A NASCAR team will know
the fine details of tuning a car, but the Great Unwashed will not: they
will simply turn the key and go - and this is how it should be - and I
believe in future it will be like that for security as well.
True. To an extent it is already. Lots of things that used to take a
great deal of expert handiwork are already available in sheetmetal boxes.
Why trust sheetmetal boxes?
1 - don't.
2 - trust your ability to make informed choices on what sort of trust to
put into each piece of your defenses.
3 - if you take the effort and responsibility to be informed, you can
determine which sheetmetal boxes are being produced by folks who are
following Darwinistic Success Paths and use such boxes in your defense
structure.
You shouldn't have to mine the ore and grind the gunpowder yourself, but a
reliable MK 15 Phalanx Close-In Weapons System sure can come in handy from
time to time...
YES!!!! .....i suspect this is why i tend to avoid gui's as well - at least
when i'm in "learning phase". blind trust is never a good idea...
Post by Chris Blask
Still, I think playing with the boxes and arranging them against bad guys
will be fun for a while yet. There's still a lot of brand new thinking to
do.
What Brian and many others are saying remains true - there's a lot of work
to be done and no time for lolly-gagging around. I just have exceptional
trust in individuals' aggregate ability to seek success.
i think this is called the Muddle Theory of Optimism, and yeppers,
absolutely. it's just that if one dwells on the too-big picture, pessimism
and dismay may set in....

enjoy!!/mas

Don Parker
2004-06-14 16:24:11 UTC
Permalink
The problem with certification is that many times it is the only thing that HR personnel
have to go by: such-and-such a cert must mean that a person knows what they are talking about.
While this is sometimes true, it is not a hard and fast rule. Laura is also quite correct
that people skills are very important. Too often people with computer skills
are a little too arrogant for their own good; making management feel like dummies
is not a good plan. In an ideal world there would be only technical interviews and no
need for certs, but that is not the case.

Cheers,

Don

-------------------------------------------
Don Parker, GCIA
Intrusion Detection Specialist
Rigel Kent Security & Advisory Services Inc
www.rigelksecurity.com
ph :613.233.HACK
fax:613.233.1788
toll: 1-877-777-H8CK
--------------------------------------------

On Jun 12, "Laura Taylor" <***@relevanttechnologies.com> wrote:

Certification is only a qualifier of technical skills. From my experience,
there is always an obvious solution for the technical problems. The people
problems are much more difficult to solve, and only years of experience
polishes up a person's people skills. Typically what separates junior level
folks from senior level, or executive level, folks is more often not their
technical skills, but their people skills -- at least in my opinion.

Laura Taylor
Relevant Technologies, Inc.
www.relevanttechnologies.com


-----Original Message-----
From: firewall-wizards-***@honor.icsalabs.com
[mailto:firewall-wizards-***@honor.icsalabs.com]On Behalf Of
Gwendolynn ferch Elydyr
Sent: Friday, June 04, 2004 4:24 PM
To: Margles Singleton
Cc: firewall-***@honor.icsalabs.com
Subject: Certification (was Re:[fw-wiz] Vulnerability Response)
Post by Margles Singleton
Something I noticed, however: the SANS conferences draw a large crowd - but
a very small percentage of those attending ever certify. I think this
demonstrates that old saw: "You can lead a horse to water, but you can't
make him think...."
I disagree. There's a difference between learning and certification. It's
disingenuous [although lucrative] to confuse the two.

Looking at the costs involved in certification, before addressing the
question of the value of certification:

Trolling through the SANS web pages, it looks like the course
fees vary from ~$600 per tutorial, up to ~$900, if you register
late.

The GIAC certification is a mere $250 -per certification- with
the SANS training - $450 -per cert- without SANS training [all in
USD, of course]. Recertification [which is required every two years]
is $120 [but will cover all the exams that you take].

The CISSP exam appears to be $450 USD - review courses all appear
to be in the $2000+ USD range [~$2500 USD on average].

This is before you factor in travel, lodgings, and meals.

If you can persuade your company that training you is valuable, and not
likely to lead to your immediate departure for greener fields, that's
definitely a bonus.

Otherwise, you're looking at significant out-of-pocket costs unless you
elect to challenge the exams [and even then you're looking at $450+ per
exam] - not to mention time away from work, and travel costs if you
don't live in a major metropolitan area.
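
To put rough totals on either path, a back-of-the-envelope tally in the
list's favourite language - the figures are the ones quoted above; the
travel line is a pure assumption:

  #!/usr/bin/perl
  use strict;
  use warnings;

  # Figures quoted above, in USD; 'travel' is an assumed placeholder
  # for flights, lodging, and meals.
  my %giac  = ( course => 600, exam => 250, travel => 1000 );
  my %cissp = ( review => 2500, exam => 450, travel => 1000 );

  for ([ GIAC => \%giac ], [ CISSP => \%cissp ]) {
      my ($name, $costs) = @$_;
      my $total = 0;
      $total += $_ for values %$costs;
      printf "%-5s roughly \$%d out of pocket\n", $name, $total;
  }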

Moving on to the merits of certification, there's also the question of
whether a certification actually says anything at all about your retained
knowledge and ability, rather than your ability to cram and regurgitate
enough information to pass an exam.

I rather suspect that most of us succeeded in passing exams in high
school and college/university that we'd be hard pressed to fathom today.

That said, those letters are a quick way to get through the HR filters.

cheers!
==========================================================================
"A cat spends her life conflicted between a deep, passionate and profound
desire for fish and an equally deep, passionate and profound desire to
avoid getting wet. This is the defining metaphor of my life right now."
Crissup, John (MBNP is)
2004-06-14 19:21:20 UTC
Permalink
Post by Don Parker
The problem with certification is that many times it is
the only thing that HR personnel have to go by.
When I last applied for an NT admin position, the HR department for the
company was insisting that I have an A+ certification. My MCSE wasn't
sufficient for them, had to be an A+. I told them I would call them back,
ran out that afternoon and took the A+ exam. Called them back on my way
home from the testing center and told them I was now MCSE *and* A+
certified. :)


As for the difference between junior and senior, I've always argued that the
junior guy knows what button to push to accomplish a specific task. But,
the senior guy understands the ramifications of pushing that button and how
to recover if it blows up in his face.


Yinal Ozkan
2004-06-16 05:03:49 UTC
Permalink
Not the certification but the technical tests are very useful for determining
what people "don't" know. I strongly believe in tests where the test
taker's answers are visible to the evaluators. Instead of a pass/fail score,
the individual answers to specific questions clearly show the technical
expertise of the test taker.

The idea is the same as with technical interviews, but tests are closer to
real life since:
a- time is limited
b- they are open book
c- options are visible
d- results are quantifiable

Implementing regular tests for the technical staff will increase the quality
(as well as the tension).

Again, tests are very good at finding out what people don't know (which is a
kind of knowledge level), but they do not measure the upper limits.
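
By way of illustration only - invented questions and an invented answer
key - the difference between a pass/fail mark and visible per-question
answers might look like this:

  #!/usr/bin/perl
  use strict;
  use warnings;

  # Hypothetical answer key and one candidate's answers.
  my %key     = ( q1 => 'b', q2 => 'd', q3 => 'a' );
  my %answers = ( q1 => 'b', q2 => 'c', q3 => 'a' );

  # A pass/fail score hides everything but the total
  # (the pass mark of 2/3 is arbitrary here)...
  my $right = grep { ($answers{$_} || '') eq $key{$_} } keys %key;
  print $right >= 2 ? "PASS\n" : "FAIL\n";

  # ...whereas showing each answer tells the evaluator exactly
  # where the gaps are.
  for my $q (sort keys %key) {
      my $got = defined $answers{$q} ? $answers{$q} : '(blank)';
      printf "%s: answered %s - %s\n", $q, $got,
          $got eq $key{$q} ? 'correct' : 'wrong';
  }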

Certifications on the other hand show one thing clearly. The person with the
certification has some sort of dedication. Either time or money, something
was spent on that title. A certification is proof that the holder is
eager to go further.

cheers,
- yinal ozkan



-----Original Message-----
From: Don Parker [mailto:***@rigelksecurity.com]
Sent: Monday, June 14, 2004 12:24 PM
To: Laura Taylor; 'Gwendolynn ferch Elydyr'; 'Margles Singleton'
Cc: firewall-***@honor.icsalabs.com
Subject: RE: Certification (was Re:[fw-wiz] Vulnerability Response)


The problem with certification is that many times it is the only thing
that HR personnel have to go by: such-and-such a cert must mean that a
person knows what they are talking about. While this is sometimes true,
it is not a hard and fast rule. Laura is also quite correct that people
skills are very important. Too often people with computer skills are a
little too arrogant for their own good; making management feel like
dummies is not a good plan. In an ideal world there would be only
technical interviews and no need for certs, but that is not the case.

Cheers,

Don

-------------------------------------------
Don Parker, GCIA
Intrusion Detection Specialist
Rigel Kent Security & Advisory Services Inc
www.rigelksecurity.com
ph :613.233.HACK
fax:613.233.1788
toll: 1-877-777-H8CK
--------------------------------------------

On Jun 12, "Laura Taylor" <***@relevanttechnologies.com> wrote:

Certification is only a qualifier of technical skills. From my experience,
there is always an obvious solution for the technical problems. The people
problems are much more difficult to solve, and only years of experience
polishes up a person's people skills. Typically what separates junior level
folks from senior level, or executive level, folks is more often not their
technical skills, but their people skills -- at least in my opinion.

Laura Taylor
Relevant Technologies, Inc.
www.relevanttechnologies.com


-----Original Message-----
From: firewall-wizards-***@honor.icsalabs.com
[mailto:firewall-wizards-***@honor.icsalabs.com]On Behalf Of
Gwendolynn ferch Elydyr
Sent: Friday, June 04, 2004 4:24 PM
To: Margles Singleton
Cc: firewall-***@honor.icsalabs.com
Subject: Certification (was Re:[fw-wiz] Vulnerability Response)
Post by Margles Singleton
Something I noticed, however: the SANS conferences draw a large crowd - but
a very small percentage of those attending ever certify. I think this
demonstrates that old saw: "You can lead a horse to water, but you can't
make him think...."
I disagree. There's a difference between learning and certification. It's
disingenuous [although lucrative] to confuse the two.

Looking at the costs involved in certification, before addressing the
question of the value of certification:

Trolling through the SANS web pages, it looks like the course
fees vary from ~$600 per tutorial, up to ~$900, if you register
late.

The GIAC certification is a mere $250 -per certification- with
the SANS training - $450 -per cert- without SANS training [all in
USD, of course]. Recertification [which is required every two years]
is $120 [but will cover all the exams that you take].

The CISSP exam appears to be $450 USD - review courses all appear
to be in the $2000+ USD range [~$2500 USD on average].

This is before you factor in travel, lodgings, and meals.

If you can persuade your company that training you is valuable, and not
likely to lead to your immediate departure for greener fields, that's
definitely a bonus.

Otherwise, you're looking at significant out-of-pocket costs unless you
elect to challenge the exams [and even then you're looking at $450+ per
exam] - not to mention time away from work, and travel costs if you
don't live in a major metropolitan area.

Moving on to the merits of certification, there's also the question of
whether a certification actually says anything at all about your retained
knowledge and ability, rather than your ability to cram and regurgitate
enough information to pass an exam.

I rather suspect that most of us succeeded in passing exams in high
school and college/university that we'd be hard pressed to fathom today.

That said, those letters are a quick way to get through the HR filters.

cheers!
==========================================================================
"A cat spends her life conflicted between a deep, passionate and profound
desire for fish and an equally deep, passionate and profound desire to
avoid getting wet. This is the defining metaphor of my life right now."

_______________________________________________
firewall-wizards mailing list
firewall-***@honor.icsalabs.com
http://honor.icsalabs.com/mailman/listinfo/firewall-wizards


Devdas Bhagat
2004-06-17 02:05:46 UTC
Permalink
On 16/06/04 01:03 -0400, Yinal Ozkan wrote:
<snip>
Post by Yinal Ozkan
Certifications on the other hand show one thing clearly. The person with the
certification has some sort of dedication. Either time or money, something
was spent on that title. A certification is proof that the holder is
eager to go further.
Or that the job market was so bad that the holder couldn't even consider
getting employment without a certificate and was forced to pay for the
certificate. Certificates are great when the market is good and not
every man and his dog have certificates.

Just to illustrate, the minimum qualification for call center work today in
India is a bachelor's degree (3 years). Hence, the "eager to go further"
doesn't even come into consideration. It's more of an essential requirement,
like literacy.
(The call center example is deliberate: no technical skills are
really needed there. IT departments usually demand higher qualifications.)

Devdas Bhagat
DRISCOLL, ROBERT
2004-06-18 22:58:44 UTC
Permalink
I've been lurking on this thread for a while and haven't really had much
to add.

However, I'd like to take a crack.

In the 20+ years that I've been doing IT work, from the Military to
Government to private sector, I've seen people with Certs who couldn't
troubleshoot minor problems, and others who really do some
good work.

But, I've also seen the same from people without Certs.

I guess I see it like this...

In order to have a certificate, at one time in their life a person had
to be familiar enough with the material to pass a test. Whether that
knowledge stays with them or not, is going to vary by the individual.

Hiring managers and upper management do place a value on certificates,
as is evident from job postings that require/desire them. And cert
holders do report earning more money. I guess that's the deciding
factor: if a certificate can get you an interview or earn you a few more
$$$, then it's probably worth it.

-----Original Message-----
From: firewall-wizards-***@honor.icsalabs.com
[mailto:firewall-wizards-***@honor.icsalabs.com] On Behalf Of Paul D.
Robertson
Sent: Friday, June 18, 2004 3:31 PM
To: Dave Piscitello
Cc: Vladimir Parkhaev; firewall-***@honor.icsalabs.com
Subject: Re: Certification (was Re:[fw-wiz] Vulnerability Response)
Post by Dave Piscitello
Post by Vladimir Parkhaev
Post by Crispin Cowan
Expert, n: someone who has made all of the mistakes. See
"consultant".
I strongly disagree. $consultant != 'expert'. Most of the consultants
out there have great PR and soft skills and limited technical skills.
There are soooo many clueless consultants out there, way more than
cluefull ones...
I hope the moderator will put this thread to bed, as it's degenerating
into digital racism. Once we begin blanket condemnation of any group
we aren't acting much like wizards...
Sure we are, we're smiting! You can't have wizards without a good
smiting or two! Me? I call for a blanket condemnation of attackers.

There, I feel better now. Good thing it's Friday.

Seriously, no need for anyone to get wrapped up in it, there's nothing
wrong with a little give and take - it's really amusing to see otherwise
rational folks defend consulting or certifications, or attack consulting
or certifications based upon their status as such- now, is the status
due to the belief, or the belief due to the status? That's the
interesting question in these threads...

Paul
-----------------------------------------------------------------------------
Paul D. Robertson      "My statements in this message are personal opinions
***@compuwar.net        which may have no basis whatsoever in fact."
***@trusecure.com      Director of Risk Assessment, TruSecure Corporation
_______________________________________________
firewall-wizards mailing list
firewall-***@honor.icsalabs.com
http://honor.icsalabs.com/mailman/listinfo/firewall-wizards