Discussion:
Making Better People
Quadibloc
2019-10-31 22:27:15 UTC
A while back I read an article that played with a suggestion that we might use
CRISPR to modify humanity to make people more ethical and compassionate.

While that seems like an outrageous thing to do, perhaps matters are desperate
enough that we might have to resort to it: with technology progressing as it is,
how long will we last in a world where anyone can easily destroy it?

Ah. An answer to the Fermi paradox, then.

I don't really think that, if it's trivially easy for people to destroy the
world, we would be able to engineer people whose wisdom and ethics are reliable
enough for this not to be a problem. Plus, if all the countries on Earth would
consent to participate in such a project, we would already have world peace.

Instead, I suspect this will push the world in the direction of political
structures that are effective at keeping dangerous toys out of most people's hands
- your typical totalitarian dictatorship.

John Savard
a***@yahoo.com
2019-11-01 00:45:50 UTC
Post by Quadibloc
A while back I read an article that played with a suggestion that we might use
CRISPR to modify humanity to make people more ethical and compassionate.
I am not sure CRISPR can do that. It might make them taste better, though.
Robert Carnegie
2019-11-01 01:48:39 UTC
Post by Quadibloc
A while back I read an article that played with a suggestion that we might use
CRISPR to modify humanity to make people more ethical and compassionate.
While that seems like an outrageous thing to do, perhaps matters are so desperate,
we might have to resort to it, since with technology progressing as it is, how
long will we live in a world where anyone can easily destroy the world?
Ah. An answer to the Fermi paradox, then.
I don't really think that if it's trivially easy for people to destroy the world,
that we would be able to engineer people whose wisdom and ethics are reliable
enough so that this is not a problem. Plus, if all the countries on Earth would
consent to participate in such a project, we would already have world peace.
Instead, I suspect this will push the world in the direction of political
structures that are effective at keeping dangerous toys out of most people's hands
- your typical totalitarian dictatorship.
John Savard
These genetically engineered, ethical, compassionate
people would have to do something about the remaining
non-engineered people, or their ethical asses would be
kicked.

Of course, the action they took would be compassionate.
Humane. And guided by the thoughts of Mao.
Johnny1A
2019-11-01 04:12:39 UTC
Post by Robert Carnegie
Post by Quadibloc
A while back I read an article that played with a suggestion that we might use
CRISPR to modify humanity to make people more ethical and compassionate.
While that seems like an outrageous thing to do, perhaps matters are so desperate,
we might have to resort to it, since with technology progressing as it is, how
long will we live in a world where anyone can easily destroy the world?
Ah. An answer to the Fermi paradox, then.
I don't really think that if it's trivially easy for people to destroy the world,
that we would be able to engineer people whose wisdom and ethics are reliable
enough so that this is not a problem. Plus, if all the countries on Earth would
consent to participate in such a project, we would already have world peace.
Instead, I suspect this will push the world in the direction of political
structures that are effective at keeping dangerous toys out of most people's hands
- your typical totalitarian dictatorship.
John Savard
These genetically engineered, ethical, compassionate
people would have to do something about the remaining
non-engineered people, or their ethical asses would be
kicked.
Of course, the action they took would be compassionate.
Humane. And guided by the thoughts of Mao.
I'm not sure that doesn't arise anyway. If we create a 'new man', we've created a rival for our ecological niche, almost by definition.

Also, historically, organizations and groups that espouse the creation of a 'new man' in this world have a fairly nasty track record.
Quadibloc
2019-11-01 16:28:08 UTC
Post by Johnny1A
Also, historically, organizations and groups that espouse the creation of a
'new man' in this world have a fairly nasty track record.
Quite true, I haven't forgotten either Hitler or Stalin.

But the issue is that our technology keeps growing, and the part that's within
reach of almost everybody also keeps growing. If something that could wipe out
humanity gets into that part of our technology, then some angry and frustrated
hacker could wipe out humanity.

Either make everyone "nicer" somehow, or control the spread and advancement of
technology. The latter seems easier, but it doesn't happen - partly because of
market forces, and partly because of military competition between nations.

I don't think there's an easy solution. If we had faster than light travel, we
could disperse to the four corners of the Universe; _that_ would be the
solution.

However, if our only tool is genetic manipulation, I would suggest making
humanity more intelligent. Then it would be more likely to be able to solve hard
problems like this.

John Savard
Quadibloc
2019-11-01 04:58:14 UTC
Post by Robert Carnegie
These genetically engineered, ethical, compassionate
people would have to do something about the remaining
non-engineered people, or their ethical asses would be
kicked.
Of course, the action they took would be compassionate.
Humane. And guided by the thoughts of Mao.
I think the idea is that CRISPR would be used in such a way that the gene-
changing virus would infect *everyone* the world over... so that the natural
succession of generations would take care of that problem with no genocide
required.

If so, I think, however, that the problem will take *two* generations to solve.
In addition to changing people's brains to make them more ethical, the CRISPR
payload should reduce human fertility. Thus, in the second generation, one would
combine more ethical people with a smaller global population - with the current
world population, even highly ethical people will have little choice but to
behave badly.
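
To put rough numbers on that second generation, here is a minimal sketch in
Python. The fertility figure and the 2019 population are illustrative
assumptions only; mortality timing, generation overlap, and uneven uptake of
the payload are all ignored.

    # Rough arithmetic for the two-generation idea above. The post-edit
    # fertility value is an invented assumption; replacement level is ~2.1
    # children per woman.
    population = 7.7e9        # approximate world population, 2019
    children_per_woman = 1.0  # assumed post-edit fertility (assumption)

    for generation in (1, 2):
        population *= children_per_woman / 2.1
        print(f"after generation {generation}: about {population / 1e9:.1f} billion people")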

John Savard
nuny@bid.nes
2019-11-01 05:42:05 UTC
...with the current world population, even highly ethical people
will have little choice but to behave badly.
You don't even see what's wrong with that statement, do you?


Mark L. Fergerson
Robert Carnegie
2019-11-01 10:14:06 UTC
Post by Quadibloc
Post by Robert Carnegie
These genetically engineered, ethical, compassionate
people would have to do something about the remaining
non-engineered people, or their ethical asses would be
kicked.
Of course, the action they took would be compassionate.
Humane. And guided by the thoughts of Mao.
I think the idea is that CRISPR would be used in such a way that the gene-
changing virus would infect *everyone* the world over... so that the natural
succession of generations would take care of that problem with no genocide
required.
Oh, a /weapon/. In that case, wouldn't it be more
effective to use it just to kill everybody on the
enemy side? And illegal, technically.

Or, you change them genetically to be less aggressive,
then conquer them. If you need a work force.

This is the situation with the fictional Daleks; their
long term goal is to "exterminate" anything alive that
isn't them, but they often show up with slaves. These
are usually disposed of in act three.
nuny@bid.nes
2019-11-01 05:51:53 UTC
Post by Quadibloc
A while back I read an article that played with a suggestion that we might use
CRISPR to modify humanity to make people more ethical and compassionate.
Sure, just determine exactly which genes are responsible for selfishness etc.
and... do what, delete them? Genes tend to do more than one thing, you know.

Edit them? How will that impact their interaction with other genes? Genes
tend to interact with other genes, you know.
Post by Quadibloc
While that seems like an outrageous thing to do, perhaps matters are so desperate,
What "matters" are so much more "desperate" than before Malthus put pen to
paper or before aggression was inherited from our last common ancestor with
chimpanzees?
Post by Quadibloc
we might have to resort to it, since with technology progressing as it is, how
long will we live in a world where anyone can easily destroy the world?
To begin with, nobody can "destroy the world"- not easily or with difficulty.

If you're referring to nuclear weapons, going on seventy-five years so far.
Post by Quadibloc
Ah. An answer to the Fermi paradox, then.
That was baked into Drake's equation.
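
For anyone who wants to see where that gets baked in, here is a minimal
back-of-envelope sketch of the Drake equation in Python. Every parameter
value below is an illustrative assumption, not a measurement; the point is
only that L, the average lifetime of a detectable civilization, is the term
that carries the "they destroy themselves" answer.

    # Drake equation: N = R* * fp * ne * fl * fi * fc * L
    # All values are assumptions chosen to show how strongly N depends on L.
    R_star = 1.5   # new stars formed per year in the galaxy (assumed)
    f_p    = 0.9   # fraction of stars with planets (assumed)
    n_e    = 0.5   # habitable planets per star that has planets (assumed)
    f_l    = 0.1   # fraction of those that develop life (assumed)
    f_i    = 0.1   # fraction of those that develop intelligence (assumed)
    f_c    = 0.1   # fraction of those that become detectable (assumed)

    for L in (100, 10_000, 1_000_000):   # years a civilization lasts
        N = R_star * f_p * n_e * f_l * f_i * f_c * L
        print(f"L = {L:>9,} years  ->  N ~ {N:,.2f} detectable civilizations")

With these made-up inputs, civilizations that wipe themselves out within a
century leave the galaxy looking empty (N well below 1), which is the
connection to the Fermi paradox being drawn above.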
Post by Quadibloc
I don't really think
(sigh) We know.
Post by Quadibloc
that if it's trivially easy for people to destroy the world,
It isn't. If it were it would have happened.
Post by Quadibloc
that we would be able to engineer people whose wisdom and ethics are
reliable enough so that this is not a problem.
Sure. Who gets to define wisdom and ethics, the Human Rights Council of the
United Nations?
Post by Quadibloc
Plus, if all the countries on Earth would
consent to participate in such a project, we would already have world peace.
YA THINK?
Post by Quadibloc
Instead, I suspect this will push the world in the direction of political
structures that are effective at keeping dangerous toys out of most
people's hands - your typical totalitarian dictatorship.
Once again, you justify the Second Amendment while probably intending the exact opposite.


Mark L. Fergerson
Quadibloc
2019-11-01 08:51:20 UTC
Post by ***@bid.nes
Post by Quadibloc
that if it's trivially easy for people to destroy the world,
It isn't. If it were it would have happened.
The point of the article I was reading was that it would *become* trivially easy
for people to wipe out humanity - sometime in the future. With a Boy Scientist
nanotechnology kit or something like that.

As technology continues to advance, that is a danger that has to be watched for.

I do think that society is not so stupid, though... we can benefit from nuclear
power despite tight controls on fissionable materials, and similarly,
precautions can be taken with nanotechnology and the like.

Also, long before wiping out the human race becomes trivially easy, someone will
perhaps infect every computer connected to the Internet with a hard-disk wiping
virus. After that happens, we will wake up and take security seriously, and will
be prepared for stuff like that.

And while I could imagine reducing petty crime by genetically altering humans
against impulsive behavior... a gene change that gives 99.9999999999% immunity
to suicidal depression and they got it right on the first try? I admit to having
doubts.

John Savard
nuny@bid.nes
2019-11-02 01:25:59 UTC
Post by Quadibloc
Post by ***@bid.nes
Post by Quadibloc
that if it's trivially easy for people to destroy the world,
It isn't. If it were it would have happened.
The point of the article I was reading was that it would *become*
trivially easy for people to wipe out humanity - sometime in the future.
With a Boy Scientist nanotechnology kit or something like that.
With, say, commercialized CRISPR?
Post by Quadibloc
As technology continues to advance, that is a danger that has to be watched for.
I believe that you're having more and more trouble distinguishing actual
science from science fiction every day.
Post by Quadibloc
I do think that society is not so stupid, though... we can benefit from
nuclear power despite tight controls on fissionable materials, and
similarly, precautions can be taken with nanotechnology and the like.
Any High School kid can build a neutron generator that freaks out Homeland Security.
Post by Quadibloc
Also, long before wiping out the human race becomes trivially easy, someone
will perhaps infect every computer connected to the Internet with a hard-disk
wiping virus.
By which time nobody will be using hard drives any more anyway.
Post by Quadibloc
After that happens, we will wake up and take security seriously, and will
be prepared for stuff like that.
Reread that sentence, please. (hint: barn door, horse)
Post by Quadibloc
And while I could imagine reducing petty crime by genetically altering
humans against impulsive behavior...
Why do you think that would be a good idea? How exactly do you define "impulsive"?
Post by Quadibloc
a gene change that gives 99.9999999999% immunity to suicidal depression
and they got it right on the first try? I admit to having doubts.
How do you determine if someone's suicidal intent is due to depression, and by the way- are you in favor of assisted suicide? If so, why are you against DIY suicide?


Mark L. Fergerson
Quadibloc
2019-11-02 13:50:03 UTC
Post by ***@bid.nes
Post by Quadibloc
And while I could imagine reducing petty crime by genetically altering
humans against impulsive behavior...
Why do you think that would be a good idea? How exactly do you define "impulsive"?
A lot of crime is committed by people who don't think twice about the
consequences if they get caught, or about the likelihood of getting caught. So the
penalties for crime don't deter it as well as we would like.
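
A toy expected-value sketch of that point, in Python, with invented numbers
(the 30% chance of being caught, the five-year sentence, and the weighting
factors are all assumptions for illustration):

    # A penalty deters only through its perceived expected cost: roughly
    # P(caught) * penalty, scaled by how much weight the offender gives to
    # future consequences at the moment of acting.
    def perceived_cost(p_caught, penalty_years, weight_on_future):
        return p_caught * penalty_years * weight_on_future

    # Same law, same odds of arrest; only the offender's time horizon differs.
    deliberate = perceived_cost(p_caught=0.3, penalty_years=5, weight_on_future=0.9)
    impulsive  = perceived_cost(p_caught=0.3, penalty_years=5, weight_on_future=0.05)

    print(f"deliberate offender feels a cost of {deliberate:.2f} expected years")
    print(f"impulsive offender feels a cost of {impulsive:.3f} expected years")
    # Doubling the sentence doubles a number the impulsive offender barely
    # registers, which is why harsher penalties deter less than hoped.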
Post by ***@bid.nes
Post by Quadibloc
a gene change that gives 99.9999999999% immunity to suicidal depression
and they got it right on the first try? I admit to having doubts.
How do you determine if someone's suicidal intent is due to depression, and
by the way- are you in favor of assisted suicide? If so, why are you against
DIY suicide?
I'm thinking about a certain *kind* of suicide, and I thought this was obvious
from context. Someone's girlfriend has jilted him, say, and so he decides to
commit suicide in a spectacular manner, taking others with him. That's the kind
of person who might take the whole world with him if it were convenient.

John Savard
J. Clarke
2019-11-01 11:03:29 UTC
Post by ***@bid.nes
Post by Quadibloc
A while back I read an article that played with a suggestion that we might use
CRISPR to modify humanity to make people more ethical and compassionate.
Sure, just determine exactly which genes are responsible for selfishness etc.
and... do what, delete them? Genes tend to do more than one thing, you know.
Edit them? How will that impact their interaction with other genes? Genes
tend to interact with other genes, you know.
Post by Quadibloc
While that seems like an outrageous thing to do, perhaps matters are so desperate,
What "matters" are so much more "desperate" than before Malthus put pen to
paper or before aggression was inherited from our last common ancestor with
chimpanzees?
Post by Quadibloc
we might have to resort to it, since with technology progressing as it is, how
long will we live in a world where anyone can easily destroy the world?
To begin with, nobody can "destroy the world"- not easily or with difficulty.
If you're referring to nuclear weapons, going on seventy-five years so far.
Post by Quadibloc
Ah. An answer to the Fermi paradox, then.
That was baked into Drake's equation.
Post by Quadibloc
I don't really think
(sigh) We know.
Post by Quadibloc
that if it's trivially easy for people to destroy the world,
It isn't. If it were it would have happened.
Depends on how you define "trivially easy". The scary thing about
CRISPR is that it's within the reach of middle-school students. In
the '50s kids hacked cars. Right now kids hack computers. At some
point in this century the kids are going to be hacking organisms. One
can envision a smart, angry kid creating something that can get ahead
of efforts to control it.
Titus G
2019-11-01 20:32:42 UTC
Post by J. Clarke
Depends on how you define "trivially easy". The scary thing about
CRISPR is that it's within the reach of middle-school students. In
the '50s kids hacked cars. Right now kids hack computers. At some
point in this century the kids are going to be hacking organisms. One
can envision a smart, angry kid creating something that can get ahead
of efforts to control it.
Currently just over halfway through Oryx and Crake by Margaret Atwood.
Johnny1A
2019-11-02 04:43:09 UTC
Post by J. Clarke
Post by ***@bid.nes
Post by Quadibloc
that if it's trivially easy for people to destroy the world,
It isn't. If it were it would have happened.
Depends on how you define "trivially easy". The scary thing about
CRISPR is that it's within the reach of middle-school students. In
the '50s kids hacked cars. Right now kids hack computers. At some
point in this century the kids are going to be hacking organisms. One
can envision a smart, angry kid creating something that can get ahead
of efforts to control it.
True, and frightening. It's possible in principle to do some fairly nasty things with genetic engineering with fairly cheap gear and in not much more than a kitchen.

OTOH, while something getting out of control is entirely possible, reaching world-destroying levels is vastly less probable. The same factors that keep that from happening naturally would mostly apply to an accidental release of something artificial; the 'super-flu' scenario is radically improbable.
J. Clarke
2019-11-02 13:09:54 UTC
On Fri, 1 Nov 2019 21:43:09 -0700 (PDT), Johnny1A
Post by Johnny1A
True, and frightening. It's possible in principle to do some fairly nasty things with genetic engineering with fairly cheap gear and in not much more than a kitchen.
OTOH, while something getting out of control is entirely possible, reaching world-destroying levels is vastly less probable. The same factors that keep that from happening naturally would mostly apply to an accidental release of something artificial, the 'super-flu' scenario is radically improbable.
Nothing natural is purpose-made to destroy the entire human species.
Super-flu is not a good model--flu has an acute phase and it's done. I
can imagine much more effective models in an engineered product.
Johnny1A
2019-11-04 08:17:16 UTC
Post by J. Clarke
On Fri, 1 Nov 2019 21:43:09 -0700 (PDT), Johnny1A
Post by Johnny1A
True, and frightening. It's possible in principle to do some fairly nasty things with genetic engineering with fairly cheap gear and in not much more than a kitchen.
OTOH, while something getting out of control is entirely possible, reaching world-destroying levels is vastly less probable. The same factors that keep that from happening naturally would mostly apply to an accidental release of something artificial, the 'super-flu' scenario is radically improbable.
Nothing natural is purpose-made to destroy the entire human species.
Super-flu is not a good model--flu has an acute phase and it's done. I
can imagine much more effective models in an engineered product.
True, an artificial disease could be made nastier by any of several means, the more so because the designer presumably doesn't care about its long-term evolutionary viability and can ignore such requirements in favor of nastiness.

But that very fact also means that as the Nasty spreads, it'll be under selective pressures to change from Nasty to Adaptive. That's one of the limitations I was talking about. There are others.

I'm not saying a super-nasty bug can't be engineered; I'm saying it would be very, very hard to engineer one capable of world-ruining potency.
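
A toy simulation in Python of that selective-pressure argument, with all
parameters invented for illustration: a strain that kills its hosts within a
couple of days has a shorter infectious period than a milder mutant, so over a
handful of infection generations the milder variant comes to dominate what is
circulating. This ignores mutation, immunity, and countermeasures; it only
illustrates the direction of the selection pressure described above.

    # Each case produces roughly R0 = transmission_rate * infectious_days new
    # cases per generation; lethality that cuts the infectious period short
    # is selected against. Numbers are illustrative assumptions only.
    transmission_rate = 0.5   # new infections per infectious host per day (assumed)
    strains = {
        "engineered, highly lethal": 2,   # days infectious before the host dies (assumed)
        "milder mutant": 10,              # days infectious before recovery (assumed)
    }

    counts = {name: 1.0 for name in strains}   # start with one case of each
    for generation in range(6):                # a few infection generations
        for name, infectious_days in strains.items():
            counts[name] *= transmission_rate * infectious_days

    total = sum(counts.values())
    for name, n in counts.items():
        print(f"{name}: {n:10.1f} cases ({100 * n / total:.4f}% of circulating virus)")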
J. Clarke
2019-11-04 12:31:33 UTC
On Mon, 4 Nov 2019 00:17:16 -0800 (PST), Johnny1A
Post by Johnny1A
True, an artificial disease could be made nastier by any of several means, the more so because the designer presumably doesn't care about its long-term evolutionary viability and can ignore such requirements in favor of nastiness.
But that very fact also means that as the Nasty spreads, it'll be under selective pressures to change from Nasty to Adaptive. That's one of the limitations I was talking about. There are others.
And if the designer took that into consideration and took measures to
delay such mutation?
Post by Johnny1A
I'm not saying a super-nasty bug can't be engineered; I'm saying it would be very, very hard to engineer one capable of world-ruining potency.
You're assuming that anything you don't know how to do is "very very
hard".

In 1492 it took the resources of a powerful empire to cross the
Atlantic. In 2007 a child crossed the Atlantic alone.
Johnny1A
2019-11-06 06:27:10 UTC
Post by J. Clarke
And if the designer took that into consideration and took measures to
delay such mutation?
Post by Johnny1A
I'm not saying a super-nasty bug can't be engineered; I'm saying it would be very, very hard to engineer one capable of world-ruining potency.
You're assuming that anything you don't know how to do is "very very
hard".
In 1492 it took the resources of a powerful empire to cross the
Atlantic. In 2007 a child crossed the Atlantic alone.
But crossing the Atlantic is easier than making a world-ruining pathogen. Much easier, because the operating nature of biology works against it.

Note that I didn't say it was absolutely _impossible_, just very improbable.
J. Clarke
2019-11-06 22:40:01 UTC
On Tue, 5 Nov 2019 22:27:10 -0800 (PST), Johnny1A
Post by Johnny1A
But crossing the Atlantic is easier than making a world-ruining pathogen. Much easier, because the operating nature of biology works against it.
Note that I didn't say it was absolutely _impossible_, just very improbable.
Give it a hundred years, or a thousand, or a million.
Paul S Person
2019-11-04 17:26:50 UTC
On Mon, 4 Nov 2019 00:17:16 -0800 (PST), Johnny1A
Post by Johnny1A
I'm not saying a super-nasty bug can't be engineered; I'm saying it would be very, very hard to engineer one capable of world-ruining potency.
Ever read /The Satan Bug/? Or seen the movie?

It introduced the concept of biological warfare to the world ... in
1965 (film) or a bit earlier (book).

This is, as the pace of technological change goes, a very old concept.

And CRISPR/Cas9 (or whatever) was not needed, any more than it was for
the weaponized anthrax in 2001.
--
"I begin to envy Petronius."
"I have envied him long since."
John Halpenny
2019-11-01 15:18:41 UTC
Post by Quadibloc
A while back I read an article that played with a suggestion that we might use
CRISPR to modify humanity to make people more ethical and compassionate.
While that seems like an outrageous thing to do, perhaps matters are so desperate,
we might have to resort to it, since with technology progressing as it is, how
long will we live in a world where anyone can easily destroy the world?
Ah. An answer to the Fermi paradox, then.
I don't really think that if it's trivially easy for people to destroy the world,
that we would be able to engineer people whose wisdom and ethics are reliable
enough so that this is not a problem. Plus, if all the countries on Earth would
consent to participate in such a project, we would already have world peace.
Instead, I suspect this will push the world in the direction of political
structures that are effective at keeping dangerous toys out of most people's hands
- your typical totalitarian dictatorship.
John Savard
There is an old school approach that may also result in better people.

First we must judge which people are the "bad" people. This is a difficult task, and we can leave it to special people we will call "judges". Then we have to execute the decisions of the judges to remove these people, and we can leave that to those we call "executioners". If we continue the process for a few centuries, we can gradually eliminate undesirables from the population.

Isn't this what has been going on ever since we invented the legal system?

John
Juho Julkunen
2019-11-01 16:10:58 UTC
Post by John Halpenny
There is an old school approach that may also result in better people.
First we must judge which people are the "bad" people. This is a difficult task, and we can leave it to special people we will call "judges". Then we have to execute the decisions of the judges to remove these people, and we can leave that to those we call "executioners". If we continue the process for a few centuries, we can gradually eliminate undesirables from the population.
Isn't this what has been going on ever since we invented the legal system?
This is why there aren't any bad people left in Europe. Alas for the
former colonies.
--
Juho Julkunen
nuny@bid.nes
2019-11-02 01:17:33 UTC
Post by Juho Julkunen
This is why there aren't any bad people left in Europe. Alas for the
former colonies.
AHAHAHAHAHAHAHAHAHA!

"There aren't any bad people left in Europe". That was a good one!

(Uh, Europe was colonized too.)


Mark L. Fergerson
Robert Carnegie
2019-11-01 22:08:09 UTC
Post by John Halpenny
There is an old school approach that may also result in better people.
First we must judge which people are the "bad" people. This is a difficult task, and we can leave it to special people we will call "judges". Then we have to execute the decisions of the judges to remove these people, and we can leave that to those we call "executioners". If we continue the process for a few centuries, we can gradually eliminate undesirables from the population.
Isn't this what has been going on ever since we invented the legal system?
John
In fact this acts to eliminate poor people. And yet
at the moment there seem to be more and more. So it
doesn't work.
John Halpenny
2019-11-02 00:15:54 UTC
Post by Robert Carnegie
In fact this acts to eliminate poor people. And yet
at the moment there seem to be more and more. So it
doesn't work.
Countries with historically developed legal systems seem to have a far lower fraction of poor people than those without.

John
J. Clarke
2019-11-02 00:38:51 UTC
On Fri, 1 Nov 2019 17:15:54 -0700 (PDT), John Halpenny
Post by John Halpenny
Countries with historically developed legal systems seem to have a far lower fraction of poor people than those without.
Define "poor people". Is a !Kung who owns a loincloth and a stick and
the skills to run just about anything that lives in Africa into the
ground and doesn't know or care what money is "poor"?
nuny@bid.nes
2019-11-02 01:18:57 UTC
Post by John Halpenny
Countries with historically developed legal systems seem to have a far
lower fraction of poor people than those without.
(cough) India (cough)


Mark L. Fergerson
David Johnston
2019-11-01 18:59:28 UTC
Post by Quadibloc
A while back I read an article that played with a suggestion that we might use
CRISPR to modify humanity to make people more ethical and compassionate.
The idea that you can genetically program ethics into people seems a bit
silly.
Peter Trei
2019-11-03 03:05:22 UTC
Post by David Johnston
Post by Quadibloc
A while back I read an article that played with a suggestion that we might use
CRISPR to modify humanity to make people more ethical and compassionate.
The idea that you can genetically program ethics into people seems a bit
silly.
There seems to be a good deal of evidence that human aggression has been
decreasing over time.

pt
Titus G
2019-11-03 04:01:50 UTC
Post by Peter Trei
Post by David Johnston
Post by Quadibloc
A while back I read an article that played with a suggestion that we might use
CRISPR to modify humanity to make people more ethical and compassionate.
The idea that you can genetically program ethics into people seems a bit
silly.
There seems to be a good deal of evidence that human aggression has been
decreasing over time.
pt
Oryx and Crake by Margaret Atwood covered docility as follows.

"Gone were its destructive features, the features responsible for the
world’s current illnesses. For instance, racism – or, as they referred
to it in Paradice, pseudospeciation – had been eliminated in the model
group, merely by switching the bonding mechanism: the Paradice people
simply did not register skin colour. Hierarchy could not exist among
them, because they lacked the neural complexes that would have created
it. Since they were neither hunters nor agriculturalists hungry for
land, there was no territoriality: the king-of-the-castle hard-wiring
that had plagued humanity had, in them, been unwired. They ate nothing
but leaves and grass and roots and a berry or two; thus their foods were
plentiful and always available. Their sexuality was not a constant
torment to them, not a cloud of turbulent hormones: they came into heat
at regular intervals, as did most mammals other than man.
In fact, as there would never be anything for these people to inherit,
there would be no family trees, no marriages, and no divorces. They were
perfectly adjusted to their habitat, so they would never have to create
houses or tools or weapons, or, for that matter, clothing. They would
have no need to invent any harmful symbolisms, such as kingdoms, icons,
gods, or money." Chapter Paradice.
Quadibloc
2019-11-03 04:38:58 UTC
Post by Titus G
Oryx and Crake by Margaret Atwood covered docility as follows.
"Gone were its destructive features, the features responsible for the
world’s current illnesses. For instance, racism – or, as they referred
to it in Paradice, pseudospeciation – had been eliminated in the model
group, merely by switching the bonding mechanism: the Paradice people
simply did not register skin colour. Hierarchy could not exist among
them, because they lacked the neural complexes that would have created
it. Since they were neither hunters nor agriculturalists hungry for
land, there was no territoriality: the king-of-the-castle hard-wiring
that had plagued humanity had, in them, been unwired. They ate nothing
but leaves and grass and roots and a berry or two; thus their foods were
plentiful and always available. Their sexuality was not a constant
torment to them, not a cloud of turbulent hormones: they came into heat
at regular intervals, as did most mammals other than man.
In fact, as there would never be anything for these people to inherit,
there would be no family trees, no marriages, and no divorces. They were
perfectly adjusted to their habitat, so they would never have to create
houses or tools or weapons, or, for that matter, clothing. They would
have no need to invent any harmful symbolisms, such as kingdoms, icons,
gods, or money." Chapter Paradice.
Did she mean for us to think of this as a _good_ thing, or as too extreme a
response to our problems, which ended up reducing humans to the level of
animals?

John Savard
Quadibloc
2019-11-03 04:45:54 UTC
Post by Quadibloc
Did she mean for us to think of this as a _good_ thing, or as too extreme a
response to our problems, which ended up reducing humans to the level of
animals?
SPOILER WARNING for Oryx and Crake by Margaret Atwood

I have found information online that enables me to answer my own question.
It turns out Margaret Atwood is channelling Aldous Huxley, specifically Brave
New World.

Oryx and Crake work for a company which intends to re-engineer humanity to solve
its problems... and make a profit. If the re-engineering involves shortening the
human life span, or other changes the customers might object to when it is too
late, they are not concerned.

So the Crakers of Paradice are the outcome of activity of highly dubious ethical
character, rather than a sincere altruistic effort to address humanity's
problems.

John Savard
nuny@bid.nes
2019-11-03 07:00:22 UTC
Post by Quadibloc
Post by Quadibloc
Did she mean for us to think of this as a _good_ thing, or as too extreme a
response to our problems, which ended up reducing humans to the level of
animals?
SPOILER WARNING for Oryx and Crake by Margaret Atwood
I have found information online that enables me to answer my own question.
It turns out Margaret Atwood is channelling Aldous Huxley, specifically Brave
New World.
Oryx and Crake work for a company which intends to re-engineer humanity to
solve its problems... and make a profit. If the re-engineering involves
shortening the human life span, or other changes the customers might object
to when it is too late, they are not concerned.
So the Crakers of Paradice are the outcome of activity of highly dubious
ethical character, rather than a sincere altruistic effort to address
humanity's problems.
You don't get it.

There is no shortage of people who believe that is exactly a sincere altruistic effort and that the stated end situation would be a Good Thing. Any
profit involved in the process would be "plowed back" into whatever it took to
make it happen, including propaganda like the constant refrain of Malthus and
the "First World Guilt" over AGW we hear from the "Progressives".

Now, go back and reread your original post and explain exactly where the breakpoint is between your vision and theirs, and how to measure for it.


Mark L. Fergerson
Quadibloc
2019-11-03 12:35:55 UTC
Permalink
Post by ***@bid.nes
Now, go back and reread your original post and explain exactly where the
breakpoint is between your vision and theirs, and how to measure for it.
This wasn't "my" vision. I learned of this proposal, unfortunately in a
paywalled article.

The article noted that technology keeps advancing, and that means what people
can do at home keeps advancing. If someone who was angry and suicidal could
easily wipe out humanity, humanity wouldn't live long.

So, if one assumes one *can't* stop or control technological progress (because
hackers, or because North Korea), then, despite it being terribly dubious, we may
have no choice but to make people "nicer".

Yes, the idea has lots of problems. Practical ones as well as ethical ones.

I hadn't read Margaret Atwood's book; I just got a slightly better picture, no
doubt flawed, by seeing a course help summary of one chapter. Even if what was
being done was really intended altruistically, it was still bad for obvious
reasons.

John Savard
Titus G
2019-11-03 19:25:10 UTC
Permalink
Post by ***@bid.nes
Post by Quadibloc
Post by Quadibloc
Did she mean for us to think of this as a _good_ thing, or as too extreme a
response to our problems, which ended up reducing humans to the level of
animals?
SPOILER WARNING for Oryx and Crake by Margaret Atwood
I have found information online that enables me to answer my own question.
It turns out Margaret Atwood is channelling Aldous Huxley, specifically Brave
New World.
Oryx and Crake work for a company which intends to re-engineer humanity to
solve its problems... and make a profit. If the re-engineering involves
shortening the human life span, or other changes the customers might object
to when it is too late, they are not concerned.
So the Crakers of Paradice are the outcome of activity of highly dubious
ethical character, rather than a sincere altruistic effort to address
humanity's problems.
The short answer is no. In an earlier post to this thread, which I no
longer have, the point was made that a rogue individual could create a
major catastrophe. Crake is that rogue individual. I recommend
reading the book rather than reviews. Although a depressing topic, there
are plenty of other issues and Atwood's writing maintains interest and
suspense.
Post by ***@bid.nes
You don't get it.
There is no shortage of people who believe that is exactly a sincere altruistic effort and that the stated end situation would be a Good Thing. Any
profit involved in the process would be "plowed back" into whatever it took to
make it happen, including propaganda like the constant refrain of Malthus and
the "First World Guilt" over AGW we hear from the "Progressives".
Now, go back and reread your original post and explain exactly where the breakpoint is between your vision and theirs, and how to measure for it.
Mark L. Fergerson
Robert Carnegie
2019-11-03 13:35:28 UTC
Permalink
Post by Titus G
Post by Peter Trei
Post by David Johnston
Post by Quadibloc
A while back I read an article that played with a suggestion that we might use
CRISPR to modify humanity to make people more ethical and compassionate.
The idea that you can genetically program ethics into people seems a bit
silly.
There seems to be a good deal of evidence that human aggression has been
decreasing over time.
pt
Oryx and Crake by Margaret Atwood covered docility as follows.
"Gone were its destructive features, the features responsible for the
world’s current illnesses. For instance, racism – or, as they referred
to it in Paradice, pseudospeciation – had been eliminated in the model
group, merely by switching the bonding mechanism: the Paradice people
simply did not register skin colour. Hierarchy could not exist among
them, because they lacked the neural complexes that would have created
it. Since they were neither hunters nor agriculturalists hungry for
land, there was no territoriality: the king-of-the-castle hard-wiring
that had plagued humanity had, in them, been unwired. They ate nothing
but leaves and grass and roots and a berry or two; thus their foods were
plentiful and always available. Their sexuality was not a constant
torment to them, not a cloud of turbulent hormones: they came into heat
at regular intervals, as did most mammals other than man.
In fact, as there would never be anything for these people to inherit,
there would be no family trees, no marriages, and no divorces. They were
perfectly adjusted to their habitat, so they would never have to create
houses or tools or weapons, or, for that matter, clothing. They would
have no need to invent any harmful symbolisms, such as kingdoms, icons,
gods, or money." Chapter Paradice.
And then someone discovers unobtanium (sic) under
their land?
Quadibloc
2019-11-03 14:46:53 UTC
Permalink
Post by Robert Carnegie
And then someone discovers unobtanium (sic) under
their land?
No, Margaret Atwood didn't think of that plot twist, and so James Cameron didn't
have to pay her any royalties.

John Savard
Titus G
2019-11-03 19:26:43 UTC
Permalink
Post by Robert Carnegie
Post by Titus G
Post by Peter Trei
Post by David Johnston
Post by Quadibloc
A while back I read an article that played with a suggestion that we might use
CRISPR to modify humanity to make people more ethical and compassionate.
The idea that you can genetically program ethics into people seems a bit
silly.
There seems to be a good deal of evidence that human aggression has been
decreasing over time.
pt
Oryx and Crake by Margaret Atwood covered docility as follows.
"Gone were its destructive features, the features responsible for the
world’s current illnesses. For instance, racism – or, as they referred
to it in Paradice, pseudospeciation – had been eliminated in the model
group, merely by switching the bonding mechanism: the Paradice people
simply did not register skin colour. Hierarchy could not exist among
them, because they lacked the neural complexes that would have created
it. Since they were neither hunters nor agriculturalists hungry for
land, there was no territoriality: the king-of-the-castle hard-wiring
that had plagued humanity had, in them, been unwired. They ate nothing
but leaves and grass and roots and a berry or two; thus their foods were
plentiful and always available. Their sexuality was not a constant
torment to them, not a cloud of turbulent hormones: they came into heat
at regular intervals, as did most mammals other than man.
In fact, as there would never be anything for these people to inherit,
there would be no family trees, no marriages, and no divorces. They were
perfectly adjusted to their habitat, so they would never have to create
houses or tools or weapons, or, for that matter, clothing. They would
have no need to invent any harmful symbolisms, such as kingdoms, icons,
gods, or money." Chapter Paradice.
And then someone discovers unobtanium (sic) under
their land?
The above quote from the character named Crake was addressing the
comment: "The idea that you can genetically program ethics into people
seems a bit silly." and the reply regarding aggression. Oryx and Crake
is the first book of a trilogy and the humans discussed in that quote
are an experimental "model group" in a controlled environment. Atwood's
writing skills are sufficient to obviate the discovery of unobtainium.
p***@hotmail.com
2019-11-04 02:32:40 UTC
Permalink
Post by Titus G
Post by Peter Trei
Post by David Johnston
Post by Quadibloc
A while back I read an article that played with a suggestion that we might use
CRISPR to modify humanity to make people more ethical and compassionate.
The idea that you can genetically program ethics into people seems a bit
silly.
There seems to be a good deal of evidence that human aggression has been
decreasing over time.
pt
Oryx and Crake by Margaret Atwood covered docility as follows.
"Gone were its destructive features, the features responsible for the
world’s current illnesses. For instance, racism – or, as they referred
to it in Paradice, pseudospeciation – had been eliminated in the model
group, merely by switching the bonding mechanism: the Paradice people
simply did not register skin colour. Hierarchy could not exist among
them, because they lacked the neural complexes that would have created
it. Since they were neither hunters nor agriculturalists hungry for
land, there was no territoriality: the king-of-the-castle hard-wiring
that had plagued humanity had, in them, been unwired. They ate nothing
but leaves and grass and roots and a berry or two; thus their foods were
plentiful and always available.
Do these organisms have a digestive tract capable of digesting cellulose?
Every known animal that digests cellulose does so by means of symbiotic
bacteria, which need a place in the digestive tract in which to live
and work. It is also possible for an organism to ingest something without
being able to digest it; for example, giant pandas eat mostly bamboo but
cannot digest cellulose and have to subsist on the cell contents, which
is something like chewing up a maple tree for the sap. As a result, pandas
have to eat (and excrete) a tremendous amount of bamboo and the carrying
capacity per acre of bamboo forest is small.

Also, grass has evolved silica granules in its tissues that tend to wear
out the teeth of animals; successful grass-eaters have evolved measures
to handle this. For example, bovines have molars that grow continuously
throughout the animal's lifetime. Elephants have a series of molars that
move into engagement as the previous set wears out, and the life span of
an elephant in the wild is often determined by when its last set of molars
wears out and it can no longer feed. How do the creatures in the story
eat grass?

Post by Titus G
Their sexuality was not a constant
torment to them, not a cloud of turbulent hormones: they came into heat
at regular intervals, as did most mammals other than man.
In fact, as there would never be anything for these people to inherit,
there would be no family trees, no marriages, and no divorces. They were
perfectly adjusted to their habitat, so they would never have to create
houses or tools or weapons, or, for that matter, clothing. They would
have no need to invent any harmful symbolisms, such as kingdoms, icons,
gods, or money." Chapter Paradice.
Do they live in a temperate zone? How do they survive winter? Hibernate?
Migrate? Being perfectly adjusted to their habitat would have to include
being adjusted to the other organisms that share that habitat, including
predators. There are many successful strategies used by various animals
to deal with predators. Elephants are simply too big for any present
predators. Bison and many African species migrate so far and so fast
that ambush predators can't keep up with them. As our immediate ancestor
Homo erectus spread across Africa over the last two million years they
already had stone tools and stone-tipped spears, and eighty percent of
the African carnivores went extinct over the same time interval. How do
the creatures in _Oryx and Crake_ handle predators?

Peter Wezeman
anti-social Darwinist
Titus G
2019-11-04 09:47:42 UTC
Permalink
Post by p***@hotmail.com
Post by Titus G
Post by Peter Trei
Post by David Johnston
Post by Quadibloc
A while back I read an article that played with a suggestion that we might use
CRISPR to modify humanity to make people more ethical and compassionate.
The idea that you can genetically program ethics into people seems a bit
silly.
There seems to be a good deal of evidence that human aggression has been
decreasing over time.
pt
Oryx and Crake by Margaret Atwood covered docility as follows.
"Gone were its destructive features, the features responsible for the
world’s current illnesses. For instance, racism – or, as they referred
to it in Paradice, pseudospeciation – had been eliminated in the model
group, merely by switching the bonding mechanism: the Paradice people
simply did not register skin colour. Hierarchy could not exist among
them, because they lacked the neural complexes that would have created
it. Since they were neither hunters nor agriculturalists hungry for
land, there was no territoriality: the king-of-the-castle hard-wiring
that had plagued humanity had, in them, been unwired. They ate nothing
but leaves and grass and roots and a berry or two; thus their foods were
plentiful and always available.
Do these organisms have a digestive tract capable of digesting cellulose?
Yes.
I remember a modified appendix and their eating of digestive end product
more than once.
Post by p***@hotmail.com
Post by Titus G
Their sexuality was not a constant
torment to them, not a cloud of turbulent hormones: they came into heat
at regular intervals, as did most mammals other than man.
In fact, as there would never be anything for these people to inherit,
there would be no family trees, no marriages, and no divorces. They were
perfectly adjusted to their habitat, so they would never have to create
houses or tools or weapons, or, for that matter, clothing. They would
have no need to invent any harmful symbolisms, such as kingdoms, icons,
gods, or money." Chapter Paradice.
Do they live in a temperate zone? How do they survive winter? Hibernate?
Migrate? Being perfectly adjusted to their habitat would have to include
being adjusted to the other organisms that share that habitat, including
predators.
In the first book of the trilogy, they are in a predator-free controlled
environment.

Post by p***@hotmail.com
There are many successful strategies used by various animals
to deal with predators. Elephants are simply too big for any present
predators. Bison and many African species migrate so far and so fast
that ambush predators can't keep up with them. As our immediate ancestor
Homo erectus spread across Africa over the last two million years they
already had stone tools and stone-tipped spears, and eighty percent of
the African carnivores went extinct over the same time interval. How do
the creatures in _Oryx and Crake_ handle predators?
Peter Wezeman
anti-social Darwinist
Robert Woodward
2019-11-03 05:17:22 UTC
Permalink
Post by Peter Trei
Post by David Johnston
Post by Quadibloc
A while back I read an article that played with a suggestion that
we might use CRISPR to modify humanity to make people more
ethical and compassionate.
The idea that you can genetically program ethics into people seems a bit
silly.
There seems to be a good deal of evidence that human aggression has been
decreasing over time.
Some forms have decreased, but I think nurture is responsible because
the rate has decreased too fast for nature.
--
"We have advanced to new and surprising levels of bafflement."
Imperial Auditor Miles Vorkosigan describes progress in _Komarr_.
------------------------------------------------------
Robert Woodward ***@drizzle.com
Peter Trei
2019-11-04 03:15:03 UTC
Permalink
Post by Robert Woodward
Post by Peter Trei
Post by David Johnston
Post by Quadibloc
A while back I read an article that played with a suggestion that
we might use CRISPR to modify humanity to make people more
ethical and compassionate.
The idea that you can genetically program ethics into people seems a bit
silly.
There seems to be a good deal of evidence that human aggression has been
decreasing over time.
Some forms have decreased, but I think nurture is responsible because
the rate has decreased too fast for nature.
Estimates based on observed causes of death in prehistoric peoples indicate
that the decline has been underway for tens of thousands of years.

pt
Paul S Person
2019-11-04 17:36:39 UTC
Permalink
On Sun, 3 Nov 2019 19:15:03 -0800 (PST), Peter Trei
Post by Peter Trei
Post by Robert Woodward
Post by Peter Trei
Post by David Johnston
Post by Quadibloc
A while back I read an article that played with a suggestion that
we might use CRISPR to modify humanity to make people more
ethical and compassionate.
The idea that you can genetically program ethics into people seems a bit
silly.
There seems to be a good deal of evidence that human aggression has been
decreasing over time.
Some forms have decreased, but I think nurture is responsible because
the rate has decreased too fast for nature.
Estimates based on observed causes of death in prehistoric peoples indicate
that the decline has been underway for tens of thousands of years.
Ah, but what happens when you factor in /unobserved/ causes?

Or do we have a /complete/ set of corpses from these peoples? That is,
every single individual?
--
"I begin to envy Petronius."
"I have envied him long since."
Jaimie Vandenbergh
2019-11-06 10:16:10 UTC
Permalink
On Mon, 04 Nov 2019 09:36:39 -0800, Paul S Person
Post by Paul S Person
On Sun, 3 Nov 2019 19:15:03 -0800 (PST), Peter Trei
Post by Peter Trei
Post by Robert Woodward
Post by Peter Trei
Post by David Johnston
Post by Quadibloc
A while back I read an article that played with a suggestion that
we might use CRISPR to modify humanity to make people more
ethical and compassionate.
The idea that you can genetically program ethics into people seems a bit
silly.
There seems to be a good deal of evidence that human aggression has been
decreasing over time.
Some forms have decreased, but I think nurture is responsible because
the rate has decreased too fast for nature.
Estimates based on observed causes of death in prehistoric peoples indicate
that the decline has been underway for tens of thousands of years.
Ah, but what happens when you factor in /unobserved/ causes?
Or do we have a /complete/ set of corpses from these peoples? That is,
every single individual?
Are you really saying this with a straight face, expecting population
scientists and historical researchers to go "Oh fuck! We never accounted
for that possibility. We'd better go back and redo everything from
scratch"?

Cheers - Jaimie
--
...most SF writers are small blokes; they spent a lot of time grubbing
around on the floor for old SF mags, not stretching up to the top shelf
for pornography... As an aside, Douglas Adams is quite tall.
- Terry Pratchett
Chrysi Cat
2019-11-08 02:57:18 UTC
Permalink
On Wed, 06 Nov 2019 10:16:10 +0000, Jaimie Vandenbergh
Post by Jaimie Vandenbergh
On Mon, 04 Nov 2019 09:36:39 -0800, Paul S Person
Post by Paul S Person
On Sun, 3 Nov 2019 19:15:03 -0800 (PST), Peter Trei
Post by Peter Trei
Post by Robert Woodward
Post by Peter Trei
Post by David Johnston
Post by Quadibloc
A while back I read an article that played with a suggestion that
we might use CRISPR to modify humanity to make people more
ethical and compassionate.
The idea that you can genetically program ethics into people seems a bit
silly.
There seems to be a good deal of evidence that human aggression has been
decreasing over time.
Some forms have decreased, but I think nurture is responsible because
the rate has decreased too fast for nature.
Estimates based on observed causes of death in prehistoric peoples indicate
that the decline has been underway for tens of thousands of years.
Ah, but what happens when you factor in /unobserved/ causes?
Or do we have a /complete/ set of corpses from these peoples? That is,
every single individual?
Are you really saying this with a straight face, expecting population
scientists and historical researchers to go "Oh fuck! We never accounted
for that possibility. We'd better go back and redo everything from
scratch"?
I am asking whether their sample might not be a little ... biased.
That is, that the very reason they are /finding/ corpses showing
violence is because of how the victims of violence were treated after
death, as opposed to those who died of other causes. For example, they
might have been more likely to be buried together, or buried in
impressive graves that still survive. We may be looking at how the
1%-ers died, while The Rest of Us are unobserved because /our/ bodies
were burned for fuel. Who can say?
And I'm not dealing with "population scientists and historical
researchers". I am dealing with a statement about the "observed causes
of death", which can mean just about anything -- except, of course,
that someone invented time travel, went back, and actually /observed/
how people back then died.
Suppose it turned out that they are saying "when we look at the
prehistoric dead we find, say, 60% died violently, but when we look at
a still-in-use normal (as opposed to military) cemetery, only 10% did
so". How impressive would /that/ be, when we can be fairly certain
that the % of violent deaths in a /military/ cemetery might well be as
high as with the prehistoric dead?
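For what it's worth, the preservation-bias worry above is easy to put rough
numbers on. Here is a minimal sketch in Python, using purely hypothetical
preservation rates - none of these figures come from the thread or from any
real archaeological data:

# Illustrative only: hypothetical numbers showing how differential
# preservation of remains can inflate the observed share of violent deaths.
population = 10_000
true_violent_share = 0.15      # hypothetical true fraction of violent deaths
preserve_violent = 0.60        # hypothetical recovery rate for the violent dead
preserve_nonviolent = 0.10     # hypothetical recovery rate for everyone else

violent = population * true_violent_share
nonviolent = population - violent

found_violent = violent * preserve_violent
found_nonviolent = nonviolent * preserve_nonviolent

observed_share = found_violent / (found_violent + found_nonviolent)
print(f"true violent share:     {true_violent_share:.0%}")   # 15%
print(f"observed violent share: {observed_share:.0%}")       # about 51%

With these made-up rates, a true 15% violent-death share shows up as roughly
50% of the recovered sample; whether real burial practices were actually
skewed that way is exactly what the posts above are arguing over.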
And the fact that you uncritically accept a Holy Book that explicitly
states "There is nothing New under the Sun" and "The fundamental nature
of Man will never change. All is as it will always be" while suggesting
there _is_ a solution that involves letting a dead man actively control
your thoughts, has nothing to do with your insistence that the real
non-violent-death rate was much higher?
--
Chrysi Cat
1/2 anthrocat, nearly 1/2 anthrofox, all magical
Transgoddess, quick to anger.
Call me Chrysi or call me Kat, I'll respond to either!
Chrysi Cat
2019-11-08 03:00:43 UTC
Permalink
Post by Chrysi Cat
On Wed, 06 Nov 2019 10:16:10 +0000, Jaimie Vandenbergh
Post by Jaimie Vandenbergh
On Mon, 04 Nov 2019 09:36:39 -0800, Paul S Person
Post by Paul S Person
On Sun, 3 Nov 2019 19:15:03 -0800 (PST), Peter Trei
Post by Peter Trei
Post by Robert Woodward
Post by Peter Trei
Post by David Johnston
Post by Quadibloc
A while back I read an article that played with a suggestion that
we might use CRISPR to modify humanity to make people more
ethical and compassionate.
The idea that you can genetically program ethics into people seems a bit
silly.
There seems to be a good deal of evidence that human aggression has been
decreasing over time.
Some forms have decreased, but I think nurture is responsible because
the rate has decreased too fast for nature.
Estimates based on observed causes of death in prehistoric peoples indicate
that the decline has been underway for tens of thousands of years.
Ah, but what happens when you factor in /unobserved/ causes?
Or do we have a /complete/ set of corpses from these peoples? That is,
every single individual?
Are you really saying this with a straight face, expecting population
scientists and historical researchers to go "Oh fuck! We never accounted
for that possibility. We'd better go back and redo everything from
scratch"?
I am asking whether their sample might not be a little ... biased.
That is, that the very reason they are /finding/ corpses showing
violence is because of how the victims of violence were treated after
death, as opposed to those who died of other causes. For example, they
might have been more likely to be buried together, or buried in
impressive graves that still survive. We may be looking at how the
1%-ers died, while The Rest of Us are unobserved because /our/ bodies
were burned for fuel. Who can say?
And I'm not dealing with "population scientists and historical
researchers". I am dealing with a statement about the "observed causes
of death", which can mean just about anything -- except, of course,
that someone invented time travel, went back, and actually /observed/
how people back then died.
Suppose it turned out that they are saying "when we look at the
prehistoric dead we find, say, 60% died violently, but when we look at
a still-in-use normal (as opposed to military) cemetery, only 10% did
so". How impressive would /that/ be, when we can be fairly certain
that the % of violent deaths in a /military/ cemetery might well be as
high as with the prehistoric dead?
And the fact that you uncritically accept a Holy Book that explicitly
states "There is nothing New under the Sun" and "The fundamental nature
of Man will never change. All is as it will always be" while suggesting
there _is_ a solution that involves letting a dead man actively control
your thoughts, has nothing to do with your insistence that the real
non-violent-death rate was much higher?
(For the record, if that solution _does_ exist, I'm actively damned,
because I've "Invited Christ" and my thought processes are still
noticeably my own).
--
Chrysi Cat
1/2 anthrocat, nearly 1/2 anthrofox, all magical
Transgoddess, quick to anger.
Call me Chrysi or call me Kat, I'll respond to either!
Quadibloc
2019-11-08 05:34:48 UTC
Permalink
Post by Chrysi Cat
(For the record, if that solution _does_ exist, I'm actively damned,
because I've "Invited Christ" and my thought processes are still
noticeably my own).
Isn't that only true under Calvinism?

John Savard
Paul S Person
2019-11-08 17:09:49 UTC
Permalink
Post by Quadibloc
Post by Chrysi Cat
(For the record, if that solution _does_ exist, I'm actively damned,
because I've "Invited Christ" and my thought processes are still
noticeably my own).
Isn't that only true under Calvinism?
I'm not sure that even Calvinism, as such, believes that the believer
becomes a robot (that is, in Divine mind control).

Some groups, of course, do make statements that might imply this.

However, there can be a great difference between what a group's
/official theology/ means and what individual believers understand it
to mean.

So there is a chance that /no/ group officially believes this,
although clearly at least some members of some group do.

As it happens, I am reading HR Haggard. I just finished
/Regeneration/, which is a book he was asked to write describing the
work of the Salvation Army at a time (1910 or so) when it was being
attacked by many groups, mostly for daring to actually /help/ poor
people.

It includes the "Articles of War" as they were then. This is a
statement of faith. It does include immediate, full regeneration
(leaving the believer sinless) but it also asserts that all can be
saved, which appears to me to be a denial of the Calvinist doctrine of
double predestination. So, some non-Calvinist groups may hold
positions which can be misunderstood by their members as "mind
control".
--
"I begin to envy Petronius."
"I have envied him long since."
Paul S Person
2019-11-08 17:01:06 UTC
Permalink
Post by Chrysi Cat
Post by Chrysi Cat
On Wed, 06 Nov 2019 10:16:10 +0000, Jaimie Vandenbergh
Post by Jaimie Vandenbergh
On Mon, 04 Nov 2019 09:36:39 -0800, Paul S Person
Post by Paul S Person
On Sun, 3 Nov 2019 19:15:03 -0800 (PST), Peter Trei
Post by Peter Trei
Post by Robert Woodward
Post by Peter Trei
Post by David Johnston
Post by Quadibloc
A while back I read an article that played with a suggestion that
we might use CRISPR to modify humanity to make people more
ethical and compassionate.
The idea that you can genetically program ethics into people seems a bit
silly.
There seems to be a good deal of evidence that human aggression has been
decreasing over time.
Some forms have decreased, but I think nurture is responsible because
the rate has decreased too fast for nature.
Estimates based on observed causes of death in prehistoric peoples indicate
that the decline has been underway for tens of thousands of years.
Ah, but what happens when you factor in /unobserved/ causes?
Or do we have a /complete/ set of corpses from these peoples? That is,
every single individual?
Are you really saying this with a straight face, expecting population
scientists and historical researchers to go "Oh fuck! We never accounted
for that possibility. We'd better go back and redo everything from
scratch"?
I am asking whether their sample might not be a little ... biased.
That is, that the very reason they are /finding/ corpses showing
violence is because of how the victims of violence were treated after
death, as opposed to those who died of other causes. For example, they
might have been more likely to be buried together, or buried in
impressive graves that still survive. We may be looking at how the
1%-ers died, while The Rest of Us are unobserved because /our/ bodies
were burned for fuel. Who can say?
And I'm not dealing with "population scientists and historical
researchers". I am dealing with a statement about the "observed causes
of death", which can mean just about anything -- except, of course,
that someone invented time travel, went back, and actually /observed/
how people back then died.
Suppose it turned out that they are saying "when we look at the
prehistoric dead we find, say, 60% died violently, but when we look at
a still-in-use normal (as opposed to military) cemetery, only 10% did
so". How impressive would /that/ be, when we can be fairly certain
that the % of violent deaths in a /military/ cemetery might well be as
high as with the prehistoric dead?
And the fact that you uncritically accept a Holy Book that explicitly
states "There is nothing New under the Sun" and "The fundamental nature
of Man will never change. All is as it will always be" while suggesting
there _is_ a solution that involves letting a dead man actively control
your thoughts, has nothing to do with your insistence that the real
non-violent-death rate was much higher?
(For the record, if that solution _does_ exist, I'm actively damned,
because I've "Invited Christ" and my thought processes are still
noticeably my own).
If you expected to be controlled like a robot, you were (from a
Lutheran perspective) very much misinformed.
--
"I begin to envy Petronius."
"I have envied him long since."
Quadibloc
2019-11-08 05:33:41 UTC
Permalink
Post by Chrysi Cat
And the fact that you uncritically accept a Holy Book that explicitly
states "There is nothing New under the Sun"
In that case, I must accept that I am a fool. But in that case, I wouldn't be a
fool, because I would accept the existence of God.

John Savard
Paul S Person
2019-11-08 16:59:05 UTC
Permalink
Post by Chrysi Cat
On Wed, 06 Nov 2019 10:16:10 +0000, Jaimie Vandenbergh
Post by Jaimie Vandenbergh
On Mon, 04 Nov 2019 09:36:39 -0800, Paul S Person
Post by Paul S Person
On Sun, 3 Nov 2019 19:15:03 -0800 (PST), Peter Trei
Post by Peter Trei
Post by Robert Woodward
Post by Peter Trei
Post by David Johnston
Post by Quadibloc
A while back I read an article that played with a suggestion that
we might use CRISPR to modify humanity to make people more
ethical and compassionate.
The idea that you can genetically program ethics into people seems a bit
silly.
There seems to be a good deal of evidence that human aggression has been
decreasing over time.
Some forms have decreased, but I think nurture is responsible because
the rate has decreased too fast for nature.
Estimates based on observed causes of death in prehistoric peoples indicate
that the decline has been underway for tens of thousands of years.
Ah, but what happens when you factor in /unobserved/ causes?
Or do we have a /complete/ set of corpses from these peoples? That is,
every single individual?
Are you really saying this with a straight face, expecting population
scientists and historical researchers to go "Oh fuck! We never accounted
for that possibility. We'd better go back and redo everything from
scratch"?
I am asking whether their sample might not be a little ... biased.
That is, that the very reason they are /finding/ corpses showing
violence is because of how the victims of violence were treated after
death, as opposed to those who died of other causes. For example, they
might have been more likely to be buried together, or buried in
impressive graves that still survive. We may be looking at how the
1%-ers died, while The Rest of Us are unobserved because /our/ bodies
were burned for fuel. Who can say?
And I'm not dealing with "population scientists and historical
researchers". I am dealing with a statement about the "observed causes
of death", which can mean just about anything -- except, of course,
tha someone invented time travel, went back, and actually /observed/
how people back then died.
Suppose it turned out that they are saying "when we look at the
prehistoric dead we find, say, 60% died violently, but when we look at
a still-in-use normal (as opposed to military) cemetary, only 10% did
so". How impressive would /that/ be, when we can be fairly certain
that the % of violent deaths in a /military/ cemetary might well be as
high as with the prehistoric dead?
And the fact that you uncritically accept a Holy Book that explicitly
states "There is nothing New under the Sun" and "The fundamental nature
of Man will never change. All is as it will always be" while suggesting
there _is_ a solution that involves letting a dead man actively control
your thoughts, has nothing to do with your insistence that the real
non-violent-death rate was much higher?
Why on earth would it?

Even if your understanding of the Bible were not so ... twisted.
--
"I begin to envy Petronius."
"I have envied him long since."
Paul S Person
2019-11-03 17:47:54 UTC
Permalink
On Sat, 2 Nov 2019 20:05:22 -0700 (PDT), Peter Trei
Post by Peter Trei
Post by David Johnston
Post by Quadibloc
A while back I read an article that played with a suggestion that we might use
CRISPR to modify humanity to make people more ethical and compassionate.
The idea that you can genetically program ethics into people seems a bit
silly.
There seems to be a good deal of evidence that human aggression has been
decreasing over time.
In some alternate reality, perhaps.

In some circles, where everyone has enough income to live well,
possibly.

But generally? World-wide? All cultures? I don't think so.
--
"I begin to envy Petronius."
"I have envied him long since."
David Johnston
2019-11-03 22:38:34 UTC
Permalink
Post by Peter Trei
Post by David Johnston
Post by Quadibloc
A while back I read an article that played with a suggestion that we might use
CRISPR to modify humanity to make people more ethical and compassionate.
The idea that you can genetically program ethics into people seems a bit
silly.
There seems to be a good deal of evidence that human aggression has been
decreasing over time.
pt
Reducing aggression isn't at all the same thing as making people more
ethical.
Titus G
2019-11-04 01:53:10 UTC
Permalink
Post by David Johnston
Post by Peter Trei
Post by David Johnston
Post by Quadibloc
A while back I read an article that played with a suggestion
that we might use CRISPR to modify humanity to make people
more ethical and compassionate.
The idea that you can genetically program ethics into people
seems a bit silly.
There seems to be a good deal of evidence that human aggression
has been decreasing over time.
pt
Reducing aggression isn't at all the same thing as making people
more ethical.
No, it is not, but it is a strong nudge in that direction, as most
aggression is unethical despite the opposing message of every children's
cartoon. I have only read the first of Atwood's MaddAddam trilogy and at
this stage the people created in the "lab", an artificial controlled
environment with no predators, completely lack aggression as well as any
concept of ethics, while being extremely compassionate.
In this fiction, the aim is to Make Better People, but the technology is
more advanced than just genetic programming. The pseudo-science was
explained, but I have forgotten that aspect.
David Johnston
2019-11-06 04:24:25 UTC
Permalink
Post by Titus G
Post by David Johnston
Post by Peter Trei
Post by David Johnston
Post by Quadibloc
A while back I read an article that played with a suggestion that
we might use CRISPR to modify humanity to make people
more ethical and compassionate.
The idea that you can genetically program ethics into people seems a
bit silly.
There seems to be a good deal of evidence that human aggression
has been decreasing over time.
pt
Reducing aggression isn't at all the same thing as making people
more ethical.
No. It is not but it is a strong nudge in that direction as most
aggression is unethical despite every children's cartoon's opposing
message.
The problem is that someone incapable of aggression is also incapable of
fighting back.
Quadibloc
2019-11-06 04:49:45 UTC
Permalink
Post by David Johnston
The problem is that someone incapable of aggression is also incapable of
fighting back.
Fighting back is not aggression.

Therefore, fighting back and aggression are two different things.

Therefore, being incapable of aggression does not imply being incapable of
fighting back.

Of course, since both of these things involve fighting, they are sufficiently
similar that most _simplistic_ ways of making people incapable of aggression
would also make them incapable of fighting back.

But going back to where this thread began, the requirement isn't even to make
people incapable of all forms of aggression. Instead, there's just one kind of
thing that needs to be prevented: the willingness to engage in mass destruction
in one moment of despair or anger.

John Savard
David Johnston
2019-11-06 06:14:44 UTC
Permalink
Post by Quadibloc
Post by David Johnston
The problem is that someone incapable of aggression is also incapable of
fighting back.
Fighting back is not aggression.
Therefore, fighting back and aggression are two different things.
Therefore, being incapable of aggression does not imply being incapable of
fighting back.
Your line of reasoning is specious. Doubly so since your definition of
"aggression" is "violence Quadibloc does not support". You have in fact
supported aggression. You simply pretend that initiating violence
doesn't qualify if the victim is sufficiently unlikeable.
Post by Quadibloc
Of course, since both of these things involve fighting, they are sufficiently
similar that most _simplistic_ ways of making people incapable of aggression
would also make them incapable of fighting back.
Genetic engineering would not be capable of controlling behaviour in
complex ways.
Post by Quadibloc
But going back to where this thread began, the requirement isn't even to make
people incapable of all forms of aggression. Instead, there's just one kind of
thing that needs to be prevented: the willingness to engage in mass destruction
in one moment of despair or anger.
It takes more than one moment.
Quadibloc
2019-11-06 14:30:14 UTC
Permalink
Post by David Johnston
You have in fact
supported aggression. You simply pretend that initiating violence
doesn't qualify if the victim is sufficiently unlikeable.
You mean I've claimed that victims can become unlikeable in ways other than
initiating violence?

Of course, I do allow for transfers of governmental authority in jurisdictions
that have failed to suppress initiations of violence with sufficient diligence.

John Savard
Chrysi Cat
2019-11-08 13:57:34 UTC
Permalink
Post by Quadibloc
Post by David Johnston
You have in fact
supported aggression. You simply pretend that initiating violence
doesn't qualify if the victim is sufficiently unlikeable.
You mean I've claimed that victims can become unlikeable in ways other than
initiating violence?
Of course, I do allow for transfers of governmental authority in jurisdictions
that have failed to suppress initiations of violence with sufficient diligence.
John Savard
That had /really/ better not be a fancy way of saying "I agree with the
55% of Republicans who believe that since Mexico refused to /invite/ the
US Army to destroy the cartel problem after the Mormon Massacre, it's
America's responsibility to /invade/ because the Mexicans still can't
take care of their ess".

Then again, it might not be; the ruling class of Mexico, after all, is
rather conspicuously almost as white as the middle class of the US.
--
Chrysi Cat
1/2 anthrocat, nearly 1/2 anthrofox, all magical
Transgoddess, quick to anger.
Call me Chrysi or call me Kat, I'll respond to either!
Quadibloc
2019-11-08 18:54:20 UTC
Permalink
Post by Chrysi Cat
That had /really/ better not be a fancy way of saying "I agree with the
55% of Republicans who believe that since Mexico refused to /invite/ the
US Army to destroy the cartel problem after the Mormon Massacre, it's
America's responsibility to /invade/ because the Mexicans still can't
take care of their ess".
Then again, it might not be; the ruling class of Mexico, after all, is
rather conspicuously almost as white as the middle class of the US.
Actually, I was thinking of invading Egypt to protect its Coptic Christians.

As for Mexico, a U.S. invasion to take care of the Zetas is something that I was
favorably disposed towards back when I first heard of their horrible cruelty.
However, given Donald Trump's infamous remarks about Mexicans, I quite
understand if the current leadership of Mexico were hesitant about allowing the
U.S. military _carte blanche_ to operate on their soil.

John Savard
David Johnston
2019-11-08 19:12:53 UTC
Permalink
Post by Quadibloc
Post by Chrysi Cat
That had /really/ better not be a fancy way of saying "I agree with the
55% of Republicans who believe that since Mexico refused to /invite/ the
US Army to destroy the cartel problem after the Mormon Massacre, it's
America's responsibility to /invade/ because the Mexicans still can't
take care of their ess".
Then again, it might not be; the ruling class of Mexico, after all, is
rather conspicuously almost as white as the middle class of the US.
Actually, I was thinking of invading Egypt to protect its Coptic Christians.
Yup. You were fine with getting a hundred times as many people killed
to "save" people who would actually be worse off after the intervention.
But here's the thing. People programmed not to engage in aggression
could not do such a thing because attacking people to "save" other
people you haven't even seen is not something any set of brain chemicals
could distinguish from aggression.
Quadibloc
2019-11-08 19:28:31 UTC
Permalink
Post by David Johnston
Yup. You were fine with getting a hundred times as many people killed
to "save" people who would actually be worse off after the intervention.
I don't see a contradiction in killing as many Arab Muslims in Egypt as they
choose to make necessary - no one needs to get hurt if they just immediately do
what they're told - to help the Coptic Christians in Egypt. That's not where the
war would be fought.

But sometimes war is not needed. Thus, a trade embargo is a proportionate
response to Japan's mistreatment of its Korean minority, denying them
citizenship and hence passports, or things like this:

https://nationalpost.com/news/world/japanese-women-push-back-against-glasses-ban-that-doesnt-apply-to-men-at-work

Countries should not have human rights violations if they want access to Western
markets. Of course, that means giving up on cheap Chinese products for a long
time to come.

John Savard
J. Clarke
2019-11-08 22:22:47 UTC
Permalink
Post by Quadibloc
Post by David Johnston
Yup. You were fine with getting a hundred times as many people killed
to "save" people who would actually be worse off after the intervention.
I don't see a contradiction in killing as many Arab Muslims in Egypt as they
choose to make necessary - no one needs to get hurt if they just immediately do
what they're told - to help the Coptic Christians in Egypt. That's not where the
war would be fought.
So how many Coptic Christians are going to die in your efforts to
"save" them?
Post by Quadibloc
But sometimes war is not needed. Thus, a trade embargo is a proportionate
response to Japan's mistreatment of its Korean minority, denying them
citizenship and hence passports,
Uh, check again. If a Korean wants to be a Japanese citizen there's
no real problem. However to become Japanese, he or she has to give up
Korean citizenship. Even if the Japanese allowed dual citizenship,
the Koreans do not.
Post by Quadibloc
https://nationalpost.com/news/world/japanese-women-push-back-against-glasses-ban-that-doesnt-apply-to-men-at-work
Countries should not have human rights violations if they want access to Western
markets. Of course, that means giving up on cheap Chinese products for a long
time to come.
John Savard
David Johnston
2019-11-09 01:53:01 UTC
Permalink
Post by Quadibloc
Post by David Johnston
Yup. You were fine with getting a hundred times as many people killed
to "save" people who would actually be worse off after the intervention.
I don't see a contradiction in killing as many Arab Muslims in Egypt as they
choose to make necessary - no one needs to get hurt if they just immediately do
what they're told -
Which of course they won't do.
Post by Quadibloc
to help the Coptic Christians in Egypt.
Which of course wouldn't happen.
Paul S Person
2019-11-09 17:48:01 UTC
Permalink
Post by Quadibloc
Post by David Johnston
Yup. You were fine with getting a hundred times as many people killed
to "save" people who would actually be worse off after the intervention.
I don't see a contradiction in killing as many Arab Muslims in Egypt as they
choose to make necessary - no one needs to get hurt if they just immediately do
what they're told - to help the Coptic Christians in Egypt. That's not where the
war would be fought.
But sometimes war is not needed. Thus, a trade embargo is a proportionate
response to Japan's mistreatment of its Korean minority, denying them
https://nationalpost.com/news/world/japanese-women-push-back-against-glasses-ban-that-doesnt-apply-to-men-at-work
Countries should not have human rights violations if they want access to Western
markets. Of course, that means giving up on cheap Chinese products for a long
time to come.
Giving up cheap Chinese products, judging from the reviews on Amazon
of what are clearly East Asian Technotrash (I have seen this with
video format convertors and 3.5" floppy drives with USB interfaces,
but other products may be affected as well), might not be a bad idea.

I have concluded that buying a desktop computer without a 3.5" drive
was a bit ... premature ... but I plan to pay more (about twice as
much, at least) to get a drive with a brand I recognize and reviews
that provide at least /some/ reason to believe it will actually work.

This is in accord with my general approach to electronics problems:
throw enough money at it, and it will go away. This has always worked
in the past.
--
"I begin to envy Petronius."
"I have envied him long since."
Paul S Person
2019-11-09 17:50:03 UTC
Permalink
Post by Quadibloc
Post by Chrysi Cat
That had /really/ better not be a fancy way of saying "I agree with the
55% of Republicans who believe that since Mexico refused to /invite/ the
US Army to destroy the cartel problem after the Mormon Massacre, it's
America's responsibility to /invade/ because the Mexicans still can't
take care of their ess".
Then again, it might not be; the ruling class of Mexico, after all, is
rather conspicuously almost as white as the middle class of the US.
Actually, I was thinking of invading Egypt to protect its Coptic Christians.
As for Mexico, a U.S. invasion to take care of the Zetas is something that I was
favorably disposed towards back when I first heard of their horrible cruelty.
However, given Donald Trump's infamous remarks about Mexicans, I quite
understand if the current leadership of Mexico were hesitant about allowing the
U.S. military _carte blanche_ to operate on their soil.
Since when did we need their permission to operate in Mexico?

And, besides, how else is Trump going to get them to pay for the Wall?

Which, it appears, can be cut through with a $100 reciprocating saw.
Some Wall!
--
"I begin to envy Petronius."
"I have envied him long since."
Chrysi Cat
2019-11-09 20:41:51 UTC
Permalink
Post by Paul S Person
Post by Quadibloc
Post by Chrysi Cat
That had /really/ better not be a fancy way of saying "I agree with the
55% of Republicans who believe that since Mexico refused to /invite/ the
US Army to destroy the cartel problem after the Mormon Massacre, it's
America's responsibility to /invade/ because the Mexicans still can't
take care of their ess".
Then again, it might not be; the ruling class of Mexico, after all, is
rather conspicuously almost as white as the middle class of the US.
Actually, I was thinking of invading Egypt to protect its Coptic Christians.
As for Mexico, a U.S. invasion to take care of the Zetas is something that I was
favorably disposed towards back when I first heard of their horrible cruelty.
However, given Donald Trump's infamous remarks about Mexicans, I quite
understand if the current leadership of Mexico were hesitant about allowing the
U.S. military _carte blanche_ to operate on their soil.
Since when did we need their permission to operate in Mexico?
Well, this is TECHNICALLY a correct question to ask--and the answer is
"I guess never"--BUT without an invitation, the only way for the US Armed
Forces to be operating in Mexico is via act-of-war. This would likely
precipitate the OFFICIAL breakup of NATO, and since there's a NATO
member on the Security Council, they wouldn't fear its condemnation when
they came in and fired on Americans _at the request of the Obrador
government_.

But assuming you _don't_ want to all-but-officially leave the UN on the
same ash heap of history as the League of Nations, commit the US to
their only potential allies being Russia and a couple other autocracies,
_and_ potentially encourage the secession of most of the territory west
of the continental centre as the Republic of Pacifica, it's a question
whose practical answer is "always". And I don't think there's anyone in
the US who subscribes to a Usenet server and wants that.
Post by Paul S Person
And, besides, how else is Trump going to get them to pay for the Wall?
Which, it appears, can be cut through with a $100 reciprocating saw.
Some Wall!
--
Chrysi Cat
1/2 anthrocat, nearly 1/2 anthrofox, all magical
Transgoddess, quick to anger.
Call me Chrysi or call me Kat, I'll respond to either!
Paul S Person
2019-11-06 17:43:37 UTC
Permalink
On Tue, 5 Nov 2019 23:14:44 -0700, David Johnston
Post by David Johnston
Post by Quadibloc
Post by David Johnston
The problem is that someone incapable of aggression is also incapable of
fighting back.
Fighting back is not aggression.
Therefore, fighting back and aggression are two different things.
Therefore, being incapable of aggression does not imply being incapable of
fighting back.
Your line of reasoning is specious. Doubly so since your definition of
"aggression" is "violence Quadibloc does not support". You have in fact
supported aggression. You simply pretend that initiating violence
doesn't qualify if the victim is sufficiently unlikeable.
Post by Quadibloc
Of course, since both of these things involve fighting, they are sufficiently
similar that most _simplistic_ ways of making people incapable of aggression
would also make them incapable of fighting back.
Genetic engineering would not be capable of controlling behaviour in
complex ways.
I am amazed to learn that we have advanced so far into genetic
engineering as to actually know this.

But then, most science fiction about controlling behavior use drugs.
Post by David Johnston
Post by Quadibloc
But going back to where this thread began, the requirement isn't even to make
people incapable of all forms of aggression. Instead, there's just one kind of
thing that needs to be prevented: the willingness to engage in mass destruction
in one moment of despair or anger.
It takes more than one moment.
--
"I begin to envy Petronius."
"I have envied him long since."
David Johnston
2019-11-07 04:25:52 UTC
Permalink
Post by Paul S Person
On Tue, 5 Nov 2019 23:14:44 -0700, David Johnston
Post by David Johnston
Post by Quadibloc
Post by David Johnston
The problem is that someone incapable of aggression is also incapable of
fighting back.
Fighting back is not aggression.
Therefore, fighting back and aggression are two different things.
Therefore, being incapable of aggression does not imply being incapable of
fighting back.
Your line of reasoning is specious. Doubly so since your definition of
"aggression" is "violence Quadibloc does not support". You have in fact
supported aggression. You simply pretend that initiating violence
doesn't qualify if the victim is sufficiently unlikeable.
Post by Quadibloc
Of course, since both of these things involve fighting, they are sufficiently
similar that most _simplistic_ ways of making people incapable of aggression
would also make them incapable of fighting back.
Genetic engineering would not be capable of controlling behaviour in
complex ways.
I am amazed to learn that we have advanced so far into genetic
engineering as to actually know this.
But then, most science fiction about controlling behavior use drugs.
Because screwing with brain chemistry is how behaviour is controlled.
Paul S Person
2019-11-07 17:23:36 UTC
Permalink
On Wed, 6 Nov 2019 21:25:52 -0700, David Johnston
Post by David Johnston
Post by Paul S Person
On Tue, 5 Nov 2019 23:14:44 -0700, David Johnston
Post by David Johnston
Post by Quadibloc
Post by David Johnston
The problem is that someone incapable of aggression is also incapable of
fighting back.
Fighting back is not aggression.
Therefore, fighting back and aggression are two different things.
Therefore, being incapable of aggression does not imply being incapable of
fighting back.
Your line of reasoning is specious. Doubly so since your definition of
"aggression" is "violence Quadibloc does not support". You have in fact
supported aggression. You simply pretend that initiating violence
doesn't qualify if the victim is sufficiently unlikeable.
Post by Quadibloc
Of course, since both of these things involve fighting, they are sufficiently
similar that most _simplistic_ ways of making people incapable of aggression
would also make them incapable of fighting back.
Genetic engineering would not be capable of controlling behaviour in
complex ways.
I am amazed to learn that we have advanced so far into genetic
engineering as to actually know this.
But then, most science fiction about controlling behavior use drugs.
Because screwing with brain chemistry is how behaviour is controlled.
I was thinking more along the lines of "because drugs don't mutate and
reproduce, thereby morphing into who-knows-what", but your statement
is, of course, perfectly true.

I'm not sure it actually says anything that is not obvious, however.
--
"I begin to envy Petronius."
"I have envied him long since."
David Johnston
2019-11-08 06:25:29 UTC
Permalink
Post by Paul S Person
On Wed, 6 Nov 2019 21:25:52 -0700, David Johnston
Post by David Johnston
Post by Paul S Person
On Tue, 5 Nov 2019 23:14:44 -0700, David Johnston
Post by David Johnston
Post by Quadibloc
Post by David Johnston
The problem is that someone incapable of aggression is also incapable of
fighting back.
Fighting back is not aggression.
Therefore, fighting back and aggression are two different things.
Therefore, being incapable of aggression does not imply being incapable of
fighting back.
Your line of reasoning is specious. Doubly so since your definition of
"aggression" is "violence Quadibloc does not support". You have in fact
supported aggression. You simply pretend that initiating violence
doesn't qualify if the victim is sufficiently unlikeable.
Post by Quadibloc
Of course, since both of these things involve fighting, they are sufficiently
similar that most _simplistic_ ways of making people incapable of aggression
would also make them incapable of fighting back.
Genetic engineering would not be capable of controlling behaviour in
complex ways.
I am amazed to learn that we have advanced so far into genetic
engineering as to actually know this.
But then, most science fiction about controlling behavior use drugs.
Because screwing with brain chemistry is how behaviour is controlled.
I was thinking more along the lines of "because drugs don't mutate and
reproduce, thereby morphing into who-knows-what", but your statement
is, of course, perfectly true.
I'm not sure it actually says anything that is not obvious, however.
It says something that doesn't seem to be obvious to anyone who thinks
that specific concepts and rules can be genetically programmed into a
brain as if it was a computer.
Paul S Person
2019-11-08 17:12:55 UTC
Permalink
On Thu, 7 Nov 2019 23:25:29 -0700, David Johnston
Post by David Johnston
Post by Paul S Person
On Wed, 6 Nov 2019 21:25:52 -0700, David Johnston
Post by David Johnston
Post by Paul S Person
On Tue, 5 Nov 2019 23:14:44 -0700, David Johnston
Post by David Johnston
Post by Quadibloc
Post by David Johnston
The problem is that someone incapable of aggression is also incapable of
fighting back.
Fighting back is not aggression.
Therefore, fighting back and aggression are two different things.
Therefore, being incapable of aggression does not imply being incapable of
fighting back.
Your line of reasoning is specious. Doubly so since your definition of
"aggression" is "violence Quadibloc does not support". You have in fact
supported aggression. You simply pretend that initiating violence
doesn't qualify if the victim is sufficiently unlikeable.
Post by Quadibloc
Of course, since both of these things involve fighting, they are sufficiently
similar that most _simplistic_ ways of making people incapable of aggression
would also make them incapable of fighting back.
Genetic engineering would not be capable of controlling behaviour in
complex ways.
I am amazed to learn that we have advanced so far into genetic
engineering as to actually know this.
But then, most science fiction about controlling behavior uses drugs.
Because screwing with brain chemistry is how behaviour is controlled.
I was thinking more along the lines of "because drugs don't mutate and
reproduce, thereby morphing into who-knows-what", but your statement
is, of course, perfectly true.
I'm not sure it actually says anything that is not obvious, however.
It says something that doesn't seem to be obvious to anyone who thinks
that specific concepts and rules can be genetically programmed into a
brain as if it was a computer.
Not to disagree with your sentiment, but I wasn't aware that it had
actually been established that it was /not/ possible to do this.

Not currently, of course. But that may be because we don't know
enough. Yet.
--
"I begin to envy Petronius."
"I have envied him long since."
J. Clarke
2019-11-06 22:42:07 UTC
Permalink
Post by Quadibloc
Post by David Johnston
The problem is that someone incapable of aggression is also incapable of
fighting back.
Fighting back is not aggression.
Therefore, fighting back and aggression are two different things.
Therefore, being incapable of aggression does not imply being incapable of
fighting back.
Of course, since both of these things involve fighting, they are sufficiently
similar that most _simplistic_ ways of making people incapable of aggression
would also make them incapable of fighting back.
But going back to where this thread began, the requirement isn't even to make
people incapable of all forms of aggression. Instead, there's just one kind of
thing that needs to be prevented: the willingness to engage in mass destruction
in one moment of despair or anger.
So how is engaging in mass destruction after long and careful planning
and preparation better?
Quadibloc
2019-11-07 02:04:20 UTC
Permalink
Post by J. Clarke
Post by Quadibloc
But going back to where this thread began, the requirement isn't even to make
people incapable of all forms of aggression. Instead, there's just one kind of
thing that needs to be prevented: the willingness to engage in mass destruction
in one moment of despair or anger.
So how is engaging in mass destruction after long and careful planning
and preparation better?
Not better. It's just that it happens less often.

John Savard
J. Clarke
2019-11-07 02:14:56 UTC
Permalink
Post by Quadibloc
Post by J. Clarke
Post by Quadibloc
But going back to where this thread began, the requirement isn't even to make
people incapable of all forms of aggression. Instead, there's just one kind of
thing that needs to be prevented: the willingness to engage in mass destruction
in one moment of despair or anger.
So how is engaging in mass destruction after long and careful planning
and preparation better?
Not better. It's just that it happens less often.
It does?

Can you give us some examples where someone engaged in successful mass
destruction in one moment of despair or anger?
Quadibloc
2019-11-07 16:34:23 UTC
Permalink
Post by J. Clarke
Post by Quadibloc
Not better. It's just that it happens less often.
It does?
Can you give us some examples where someone engaged in successful mass
destruction in one moment of despair or anger?
You're quite correct. I should have expressed myself more clearly.

The preplanning required for successful acts of mass destruction with today's
technology... happens quite seldom.

Acts of local violence in one moment of despair or anger, however, are very
common.

If future technology makes it so much easier to succeed at mass destruction, by
putting enormous energies to hand for everyone, that someone in a similar fit of
despair or anger could succeed in mass destruction... we would have a problem.

Making better people was proposed as one way of solving it. Perhaps keeping the
dangerous toys locked up would be an easier way?

John Savard
David Johnston
2019-11-07 04:28:06 UTC
Permalink
Post by Quadibloc
Post by J. Clarke
Post by Quadibloc
But going back to where this thread began, the requirement isn't even to make
people incapable of all forms of aggression. Instead, there's just one kind of
thing that needs to be prevented: the willingness to engage in mass destruction
in one moment of despair or anger.
So how is engaging in mass destruction after long and careful planning
and preparation better?
Not better. It's just that it happens less often.
Not really. Most mass destruction takes preplanning to happen at all.
nuny@bid.nes
2019-11-07 07:41:52 UTC
Permalink
Post by Quadibloc
Post by David Johnston
The problem is that someone incapable of aggression is also incapable of
fighting back.
Fighting back is not aggression.
Therefore, fighting back and aggression are two different things.
Please tell that to those in charge of schools these days.
Post by Quadibloc
Therefore, being incapable of aggression does not imply being incapable of
fighting back.
What is your example of that assertion in Nature?
Post by Quadibloc
Of course, since both of these things involve fighting, they are sufficiently
similar that most _simplistic_ ways of making people incapable of aggression
would also make them incapable of fighting back.
Do you know the difference between, say, Taekwondo and Aikido?
Post by Quadibloc
But going back to where this thread began, the requirement isn't even
to make people incapable of all forms of aggression. Instead, there's
just one kind of thing that needs to be prevented: the willingness to
engage in mass destruction in one moment of despair or anger.
And you actually expect there to be genes for that.


Mark L. Fergerson
Titus G
2019-11-06 05:01:23 UTC
Permalink
Post by David Johnston
Post by Titus G
Post by David Johnston
Post by Peter Trei
Post by David Johnston
Post by Quadibloc
A while back I read an article that played with a suggestion that
we might use CRISPR to modify humanity to make people
more ethical and compassionate.
The idea that you can genetically program ethics into people seems
a bit silly.
There seems to be a good deal of evidence that human aggression
has been decreasing over time.
pt
Reducing aggression isn't at all the same thing as making people
more ethical.
No. It is not but it is a strong nudge in that direction as most
aggression is unethical despite every children's cartoon's opposing
message.
The problem is that someone incapable of aggression is also incapable of
fighting back.
But if everyone is genetically programmed to be incapable of aggression,
there will be no one to fight back? (I realise what a big word
"everyone" is and am 99.99% certain that things will go wrong before the
Atwood MaddAddam trilogy ends.)
Paul S Person
2019-11-06 17:55:04 UTC
Permalink
Post by Titus G
Post by David Johnston
Post by Titus G
Post by David Johnston
Post by Peter Trei
Post by David Johnston
Post by Quadibloc
A while back I read an article that played with a suggestion that
we might use CRISPR to modify humanity to make people
more ethical and compassionate.
The idea that you can genetically program ethics into people seems
a bit silly.
There seems to be a good deal of evidence that human aggression
has been decreasing over time.
pt
Reducing aggression isn't at all the same thing as making people
more ethical.
No. It is not but it is a strong nudge in that direction as most
aggression is unethical despite every children's cartoon's opposing
message.
The problem is that someone incapable of aggression is also incapable of
fighting back.
But if everyone is genetically programmed to be incapable of aggression,
there will be no one to fight back? (I realise what a big word
"everyone" is and am 99.99% certain that things will go wrong before the
Atwood MaddAddam trilogy ends.)
On an impulse, I once bought a DVD of a film named /Serenity/. This
was based (apparently) on a TV series, so some of the
characterizations were a bit ... weird ... but the story itself is
quite exciting:

The peace of a multi-planet system is disturbed not only by the evil
central authorities but also by raids by the "Reavers", horrendously
violent people who kill normal humans in, well, think "Zombies in /28
Days Later/ but who never starve to death" and you'll get the idea.

The evil central authorities tested, on a remote planet, an agent
(biological, IIRC) that made everyone ... peaceful, thus wiping out
all forms of violence. There were just two problems:

1. Those it worked on got so peaceful they stopped breathing.
2. 1% (or maybe it was 0.1%) of the population became Reavers.

And then the race is on to inform the rest of the system what the evil
central authorities have done.

So, yes, I would say that your suspicion that something will go wrong
is well-founded, not only in storytelling terms, but in real-world
terms as well.
--
"I begin to envy Petronius."
"I have envied him long since."
Robert Carnegie
2019-11-07 00:13:57 UTC
Permalink
_Perry's Planet_ by Jack Haldeman is a Star Trek novel
which visits a colony planet that, it turns out, has a
virus to prevent aggressive and even defensive force,
psychologically. This has drawbacks, such as when
the Enterprise crew catches it and then a Klingon ship
arrives.
nuny@bid.nes
2019-11-07 07:51:48 UTC
Permalink
Post by Paul S Person
Post by Titus G
Post by David Johnston
Post by Titus G
Post by David Johnston
Post by Peter Trei
Post by David Johnston
Post by Quadibloc
A while back I read an article that played with a suggestion that
we might use CRISPR to modify humanity to make people
more ethical and compassionate.
The idea that you can genetically program ethics into people seems
a bit silly.
There seems to be a good deal of evidence that human aggression
has been decreasing over time.
pt
Reducing aggression isn't at all the same thing as making people
more ethical.
No. It is not but it is a strong nudge in that direction as most
aggression is unethical despite every children's cartoon's opposing
message.
The problem is that someone incapable of aggression is also incapable of
fighting back.
But if everyone is genetically programmed to be incapable of aggression,
there will be no one to fight back? (I realise what a big word
"everyone" is and am 99.99% certain that things will go wrong before the
Atwood MaddAddam trilogy ends.)
On an impulse, I once bought a DVD of a film named /Serenity/. This
was based (apparently) on a TV series, so some of the
characterizations were a bit ... weird ... but the story itself is
The series was (mostly) much less awful, but also quite exciting.

Still weird though.
Post by Paul S Person
The peace of a multi-planet system is disturbed not only by the evil
central authorities but also by raids by the "Reavers", horrendously
violent people who kill normal humans in, well, think "Zombies in /28
Days Later/ but who never starve to death" and you'll get the idea.
"If they take the ship, they'll rape us to death, eat our flesh, and
sew our skins into their clothing – and if we're very, very lucky,
they'll do it in that order."
Post by Paul S Person
The evil central authorities tested, on a remote planet, an agent
(biological, IIRC) that made everyone ... peaceful, thus wiping out
1. Those it worked on got so peaceful they stopped breathing.
Not quite- they were simply incapable of *active* defense- the ones
who avoided the initial Reaver attacks locked themselves into buildings
that were Reaver-proof and starved to death.
Post by Paul S Person
2. 1% (or maybe it was 0.1%) of the population became Reavers.
And then the race is on to inform the rest of the system what the evil
central authorities have done.
So, yes, I would say that your suspicion that something will go wrong
is well-founded, not only in storytelling terms, but in real-world
terms as well.
Something *always* goes wrong. That's something Quaddie just can't seem to grasp.


Mark L. Fergerson
Paul S Person
2019-11-07 17:36:19 UTC
Permalink
Post by ***@bid.nes
Post by Paul S Person
Post by Titus G
Post by David Johnston
Post by Titus G
Post by David Johnston
Post by Peter Trei
Post by David Johnston
Post by Quadibloc
A while back I read an article that played with a suggestion that
we might use CRISPR to modify humanity to make people
more ethical and compassionate.
The idea that you can genetically program ethics into people seems
a bit silly.
There seems to be a good deal of evidence that human aggression
has been decreasing over time.
pt
Reducing aggression isn't at all the same thing as making people
more ethical.
No. It is not but it is a strong nudge in that direction as most
aggression is unethical despite every children's cartoon's opposing
message.
The problem is that someone incapable of aggression is also incapable of
fighting back.
But if everyone is genetically programmed to be incapable of aggression,
there will be no one to fight back? (I realise what a big word
"everyone" is and am 99.99% certain that things will go wrong before the
Atwood MaddAddam trilogy ends.)
On an impulse, I once bought a DVD of a film named /Serenity/. This
was based (apparently) on a TV series, so some of the
characterizations were a bit ... weird ... but the story itself is
The series was (mostly) much less awful, but also quite exciting.
I didn't find the film awful at all. It just took a few viewings
before I got used to the characters, who I presume were the same as in
the TV series.
Post by ***@bid.nes
Still weird though.
Definitely weird. But, as I /like/ weird, that is a /good/ thing.
Post by ***@bid.nes
Post by Paul S Person
The peace of a multi-planet system is disturbed not only by the evil
central authorities but also by raids by the "Reavers", horrendously
violent people who kill normal humans in, well, think "Zombies in /28
Days Later/ but who never starve to death" and you'll get the idea.
"If they take the ship, they'll rape us to death, eat our flesh, and
sew our skins into their clothing – and if we're very, very lucky,
they'll do it in that order."
Post by Paul S Person
The evil central authorities tested, on a remote planet, an agent
(biological, IIRC) that made everyone ... peaceful, thus wiping out
1. Those it worked on got so peaceful they stopped breathing.
Not quite- they were simply incapable of *active* defense- the ones
who avoided the initial Reaver attacks locked themselves into buildings
that were Reaver-proof and starved to death.
I decided to watch it again last night. I'm not going to quote but the
Research+Rescue person said something like "Then they stopped. They
stopped eating. They stopped going to work. And, finally, they stopped
breathing."

This does /not/ suggest that they starved to death hiding from
Reavers. It suggests they died from the Pax itself, as they relaxed
more and more and more.

Granted, it probably does mean that they died of starvation. Or
possibly thirst. But from laziness, not from fear.

Consider the earlier scene where the "teacher" tells her "class" to
just lie down. And then one of the bodies is shown having done exactly
that. And, clearly, never bothered to get up again.
Post by ***@bid.nes
Post by Paul S Person
2. 1% (or maybe it was 0.1%) of the population became Reavers.
It was, in fact, 0.1%.
Post by ***@bid.nes
Post by Paul S Person
And then the race is on to inform the rest of the system what the evil
central authorities have done.
So, yes, I would say that your suspicion that something will go wrong
is well-founded, not only in storytelling terms, but in real-world
terms as well.
Something *always* goes wrong. That's something Quaddie just can't seem to grasp.
Well, lots of other people have problems with the concept too.
--
"I begin to envy Petronius."
"I have envied him long since."
Kevrob
2019-11-03 03:23:05 UTC
Permalink
Or perhaps an entomologist, on how
entomology suggested capital punishment should be treated.
I would eagerly await such an article by the likes
of E. O. Wilson.

https://en.wikipedia.org/wiki/E._O._Wilson

https://en.wikipedia.org/wiki/Sociobiology

--
Kevin R
a.a #2310
Robert Carnegie
2019-11-04 02:55:47 UTC
Permalink
On Fri, 1 Nov 2019 12:59:28 -0600, David Johnston
Post by David Johnston
Post by Quadibloc
A while back I read an article that played with a suggestion that we might use
CRISPR to modify humanity to make people more ethical and compassionate.
The idea that you can genetically program ethics into people seems a bit
silly.
It might be helpful to know what article that is.
Before they went the anti-religious-fanatic route and I ceased to
renew my subscription, the editor of /Skeptical Inquirer/ asserted
that "science could provide ethics as well as religion" (this is, of
course, a paraphrase.)
He was roundly trashed. I, myself, wrote and suggested that he publish
articles by, say, an astrophysicist on the moral issue of abortion,
explaining how astrophysics (not the scientist, the subject of study)
suggested it be regarded morally. Or perhaps an entomologist, on how
entomology suggested capital punishment should be treated.
The next bimonthly issue the editor explained that he meant that, /if
you happen to believe in Utilitarianism/, science might be helpful in
determining how to produce the greatest good for the greatest number.
And this may be another case of some great mind overreaching itself to
a point where it is almost a satire of itself.
--
"I begin to envy Petronius."
"I have envied him long since."
Come now.

Knowledge of science relevant to an ethical question
is applicable.

Knowledge of science not relevant to a question of science
isn't applicable. Many leading minds have strayed outside
their specialism and stumbled.
Paul S Person
2019-11-04 17:31:51 UTC
Permalink
On Sun, 3 Nov 2019 18:55:47 -0800 (PST), Robert Carnegie
Post by Robert Carnegie
On Fri, 1 Nov 2019 12:59:28 -0600, David Johnston
Post by David Johnston
Post by Quadibloc
A while back I read an article that played with a suggestion that we might use
CRISPR to modify humanity to make people more ethical and compassionate.
The idea that you can genetically program ethics into people seems a bit
silly.
It might be helpful to know what article that is.
Before they went the anti-religious-fanatic route and I ceased to
renew my subscription, the editor of /Skeptical Inquirer/ asserted
that "science could provide ethics as well as religion" (this is, of
course, a paraphrase.)
He was roundly trashed. I, myself, wrote and suggested that he publish
articles by, say, an astrophysicist on the moral issue of abortion,
explaining how astrophysics (not the scientist, the subject of study)
suggested it be regarded morally. Or perhaps an entomologist, on how
entomology suggested capital punishment should be treated.
The next bimonthly issue the editor explained that he meant that, /if
you happen to believe in Utilitarianism/, science might be helpful in
determining how to produce the greatest good for the greatest number.
And this may be another case of some great mind overreaching itself to
a point where it is almost a satire of itself.
--
"I begin to envy Petronius."
"I have envied him long since."
Come now.
Knowledge of science relevant to an ethical question
is applicable.
Applicable, yes. But the original claim was that it was
/determinative/. You know, like the Bible is, for believers -- but, in
the case of science, for skeptics.
Post by Robert Carnegie
Knowledge of science not relevant to a question of science
isn't applicable. Many leading minds have strayed outside
their specialism and stumbled.
Precisely my point.

The editor was /way/ out of bounds.

As he admitted.
--
"I begin to envy Petronius."
"I have envied him long since."
nuny@bid.nes
2019-11-05 08:58:24 UTC
Permalink
Post by Robert Carnegie
On Fri, 1 Nov 2019 12:59:28 -0600, David Johnston
Post by David Johnston
Post by Quadibloc
A while back I read an article that played with a suggestion that we
might use CRISPR to modify humanity to make people more ethical and
compassionate.
The idea that you can genetically program ethics into people seems a bit
silly.
It might be helpful to know what article that is.
Before they went the anti-religious-fanatic route and I ceased to
renew my subscription, the editor of /Skeptical Inquirer/ asserted
that "science could provide ethics as well as religion" (this is, of
course, a paraphrase.)
He was roundly trashed. I, myself, wrote and suggested that he publish
articles by, say, an astrophysicist on the moral issue of abortion,
explaining how astrophysics (not the scientist, the subject of study)
suggested it be regarded morally. Or perhaps an entomologist, on how
entomology suggested capital punishment should be treated.
The next bimonthly issue the editor explained that he meant that, /if
you happen to believe in Utilitarianism/, science might be helpful in
determining how to produce the greatest good for the greatest number.
And this may be another case of some great mind overreaching itself to
a point where it is almost a satire of itself.
--
"I begin to envy Petronius."
"I have envied him long since."
Come now.
Knowledge of science relevant to an ethical question
is applicable.
Knowledge of science not relevant to a question of science
isn't applicable. Many leading minds have strayed outside
their specialism and stumbled.
But that implies you want to limit the discussion of a basis for ethics
to philosophers, who are experts in nothing. Certainly all of their efforts
to this point in history have produced trivial results.


Mark L. Fergerson
Quadibloc
2019-11-05 13:16:46 UTC
Permalink
Post by ***@bid.nes
But that implies you want to limit the discussion of a basis for ethics
to philosophers, who are experts in nothing. Certainly all of their efforts
to this point in history have produced trivial results.
Amateur philosophers, drawing unwarranted analogies from limited fields of
specialization, are hardly to be expected to produce better results. However,
given that the experts haven't gotten anywhere, I suppose out of desperation, if
nothing else, one might reach elsewhere.

At least scientists have the advantage of having _proven_ their intelligence (if
not wisdom) in direct combat with reality. Since what philosophers have done is
trivial, it might not be unreasonable to expect a layperson to be able to rival
them.

John Savard
p***@hotmail.com
2019-11-05 16:50:00 UTC
Permalink
Post by Quadibloc
Post by ***@bid.nes
But that implies you want to limit the discussion of a basis for ethics
to philosophers, who are experts in nothing. Certainly all of their efforts
to this point in history have produced trivial results.
Amateur philosophers, drawing unwarranted analogies from limited fields of
specialization, are hardly to be expected to produce better results. However,
given that the experts haven't gotten anywhere, I suppose out of desperation, if
nothing else, one might reach elsewhere.
At least scientists have the advantage of having _proven_ their intelligence (if
not wisdom) in direct combat with reality. Since what philosophers have done is
trivial, it might not be unreasonable to expect a layperson to be able to rival
them.
OBSF, from _Galactic Patrol_ by Edward E. Smith:

"Now as to the Lens itself. Like every one else, you have known of it ever
since you could talk, but you know nothing of its origin or its nature.
Now that you are Lensmen, I can tell you what little I know about it.
Questions?"

"We have all wondered about the Lens, sir, of course," Maitland ventured.
"The outlaws apparently keep up with us in science. I have always supposed
that what science can build, science can duplicate. Surely more than one
Lens has fallen into the hands of the outlaws?"

"If it had been a scientific invention or discovery it would have been
duplicated long ago," the Commandant made surprising answer. "It is, however,
not essentially scientific in nature. It is almost entirely philosophical,
and was developed for us by the Arisians.

Peter Wezeman
anti-social Darwinist
nuny@bid.nes
2019-11-07 07:57:14 UTC
Permalink
Post by p***@hotmail.com
Post by Quadibloc
Post by ***@bid.nes
But that implies you want to limit the discussion of a basis for
ethics to philosophers, who are experts in nothing. Certainly all of
their efforts to this point in history have produced trivial results.
Amateur philosophers, drawing unwarranted analogies from limited fields
of specialization, are hardly to be expected to produce better results.
However, given that the experts haven't gotten anywhere, I suppose out
of desperation, if nothing else, one might reach elsewhere.
At least scientists have the advantage of having _proven_ their
intelligence (if not wisdom) in direct combat with reality.
Which reminds me of the best possible counterargument against the topic
of this thread: "Just because we *can* do it doesn't mean we *should* do it."
Post by p***@hotmail.com
Post by Quadibloc
Since what philosophers have done is trivial, it might not be unreasonable
to expect a layperson to be able to rival them.
Laypeople manage triviality all the time...
Post by p***@hotmail.com
"Now as to the Lens itself. Like every one else, you have known of it ever
since you could talk, but you know nothing of its origin or its nature.
Now that you are Lensmen, I can tell you what little I know about it.
Questions?"
"We have all wondered about the Lens, sir, of course," Maitland ventured.
"The outlaws apparently keep up with us in science. I have always supposed
that what science can build, science can duplicate. Surely more than one
Lens has fallen into the hands of the outlaws?"
"If it had been a scientific invention or discovery it would have been
duplicated long ago," the Commandant made surprising answer. "It is, however,
not essentially scientific in nature. It is almost entirely philosophical,
and was developed for us by the Arisians.
That was Smith pulling "A Wizard Did It" before it became popular.


Mark L. Fergerson
Paul S Person
2019-11-05 17:29:10 UTC
Permalink
Post by ***@bid.nes
Post by Robert Carnegie
On Fri, 1 Nov 2019 12:59:28 -0600, David Johnston
Post by David Johnston
Post by Quadibloc
A while back I read an article that played with a suggestion that we
might use CRISPR to modify humanity to make people more ethical and
compassionate.
The idea that you can genetically program ethics into people seems a bit
silly.
It might be helpful to know what article that is.
Before they went the anti-religious-fanatic route and I ceased to
renew my subscription, the editor of /Skeptical Inquirer/ asserted
that "science could provide ethics as well as religion" (this is, of
course, a paraphrase.)
He was roundly trashed. I, myself, wrote and suggested that he publish
articles by, say, an astrophysicist on the moral issue of abortion,
explaining how astrophysics (not the scientist, the subject of study)
suggested it be regarded morally. Or perhaps an entomologist, on how
entomology suggested capital punishment should be treated.
The next bimonthly issue the editor explained that he meant that, /if
you happen to believe in Utilitarianism/, science might be helpful in
determining how to produce the greatest good for the greatest number.
And this may be another case of some great mind overreaching itself to
a point where it is almost a satire of itself.
--
"I begin to envy Petronius."
"I have envied him long since."
Come now.
Knowledge of science relevant to an ethical question
is applicable.
Knowledge of science not relevant to a question of science
isn't applicable. Many leading minds have strayed outside
their specialism and stumbled.
But that implies you want to limit the discussion of a basis for ethics
to philosophers, who are experts in nothing. Certainly all of their efforts
to this point in history have produced trivial results.
It actually leaves us with two choices:

1) Scientists that base ethics on their scientific knowledge. Thus, a
physicist dedicated professionally to finding a particular particle in
experimental results might, based on his /knowledge of physics/,
produce an answer to the moral question: should a man have only one
wife? This is, of course, absurd.

2) Scientists that do no such thing. These are, essentially, acting as
philosophers since they have no actual expertise in ethics. There is
(or was) at least one Nobel Prize winner in Physics (or Chemistry) who
parlayed his fame into an authoritative position from which to hawk
vitamin supplements. He could just as well have used it to oppose
abortion or the death penalty -- it would have been just as valid
(which is to say, not valid at all).

Indeed, as recent scandals with plagiarism, falsified data, and
genetically-modified babies would suggest, any moral foundation
scientists may once have had (which, given the history of science,
would almost certainly have been based on Christian ethics) is
eroding. They aren't only no more qualified to speak on these issues
than anyone else, they are showing signs of losing their own moral
compasses (in some cases, not, of course, in general -- so far) and so
being /less/ qualified than the average philosopher.
--
"I begin to envy Petronius."
"I have envied him long since."
nuny@bid.nes
2019-11-07 08:08:50 UTC
Permalink
Post by Paul S Person
Post by ***@bid.nes
Post by Robert Carnegie
On Fri, 1 Nov 2019 12:59:28 -0600, David Johnston
Post by David Johnston
Post by Quadibloc
A while back I read an article that played with a suggestion that we
might use CRISPR to modify humanity to make people more ethical and
compassionate.
The idea that you can genetically program ethics into people seems a bit
silly.
It might be helpful to know what article that is.
Before they went the anti-religious-fanatic route and I ceased to
renew my subscription, the editor of /Skeptical Inquirer/ asserted
that "science could provide ethics as well as religion" (this is, of
course, a paraphrase.)
He was roundly trashed. I, myself, wrote and suggested that he publish
articles by, say, an astrophysicist on the moral issue of abortion,
explaining how astrophysics (not the scientist, the subject of study)
suggested it be regarded morally. Or perhaps an entomologist, on how
entomology suggested capital punishment should be treated.
The next bimonthly issue the editor explained that he meant that, /if
you happen to believe in Utilitarianism/, science might be helpful in
determining how to produce the greatest good for the greatest number.
And this may be another case of some great mind overreaching itself to
a point where it is almost a satire of itself.
--
"I begin to envy Petronius."
"I have envied him long since."
Come now.
Knowledge of science relevant to an ethical question
is applicable.
Knowledge of science not relevant to a question of science
isn't applicable. Many leading minds have strayed outside
their specialism and stumbled.
But that implies you want to limit the discussion of a basis for ethics
to philosophers, who are experts in nothing. Certainly all of their efforts
to this point in history have produced trivial results.
1) Scientists that base ethics on their scientific knowledge. Thus, a
physicist dedicated professionally to finding a particular particle in
experimental results might, based on his /knowledge of physics/,
produce an answer to the moral question: should a man have only one
wife? This is, of course, absurd.
This presumes that an attempt at human ethics would be based on one
person's knowledge of one narrow field *and only that knowledge*.

That's silly on the face of it.
Post by Paul S Person
2) Scientists that do no such thing. These are, essentially, acting as
philosophers since they have no actual expertise in ethics.
Only psychopaths have NO expertise in ethics.

(Most of us know about Pauling. Really poor example.)
Post by Paul S Person
Indeed, as recent scandals with plagiarism, falsified data, and
genetically-modified babies would suggest, any moral foundation
scientists may once have had (which, given the history of science,
would almost certainly have been based on Christian ethics)
The bit about Christianity demolishes what you've written so far all by
itself. Anyone, scientist or otherwise, will take into account the prevailing
moral and ethical systems in the culture(s) they grew up in/are immersed in
whether they are part of that person's belief system or not.
Post by Paul S Person
is eroding.
That is not justified by what went before. Those same faults existed in
the sciences long before they became recently publicized, *and* in every
other field of human endeavor *including philosophy*.
Post by Paul S Person
They aren't only no more qualified to speak on these issues
than anyone else, they are showing signs of losing their own moral
compasses (in some cases, not, of course, in general -- so far) and so
being /less/ qualified than the average philosopher.
Horseshit. Philosophers are no less narrow-minded, arrogant, and flawed
than anyone else.


Mark L. Fergerson
Paul S Person
2019-11-07 17:20:57 UTC
Permalink
Post by ***@bid.nes
Post by Paul S Person
Post by ***@bid.nes
Post by Robert Carnegie
On Fri, 1 Nov 2019 12:59:28 -0600, David Johnston
Post by David Johnston
Post by Quadibloc
A while back I read an article that played with a suggestion that we
might use CRISPR to modify humanity to make people more ethical and
compassionate.
The idea that you can genetically program ethics into people seems a bit
silly.
It might be helpful to know what article that is.
Before they went the anti-religious-fanatic route and I ceased to
renew my subscription, the editor of /Skeptical Inquirer/ asserted
that "science could provide ethics as well as religion" (this is, of
course, a paraphrase.)
He was roundly trashed. I, myself, wrote and suggested that he publish
articles by, say, an astrophysicist on the moral issue of abortion,
explaining how astrophysics (not the scientist, the subject of study)
suggested it be regarded morally. Or perhaps an entomologist, on how
entomology suggested capital punishment should be treated.
The next bimonthly issue the editor explained that he meant that, /if
you happen to believe in Utilitarianism/, science might be helpful in
determining how to produce the greatest good for the greatest number.
And this may be another case of some great mind overreaching itself to
a point where it is almost a satire of itself.
--
"I begin to envy Petronius."
"I have envied him long since."
Come now.
Knowledge of science relevant to an ethical question
is applicable.
Knowledge of science not relevant to a question of science
isn't applicable. Many leading minds have strayed outside
their specialism and stumbled.
But that implies you want to limit the discussion of a basis for ethics
to philosophers, who are experts in nothing. Certainly all of their efforts
to this point in history have produced trivial results.
Fine, provide a third alternative.
Post by ***@bid.nes
Post by Paul S Person
1) Scientists that base ethics on their scientific knowledge. Thus, a
physicist dedicated professionally to finding a particular particle in
experimental results might, based on his /knowledge of physics/,
produce an answer to the moral question: should a man have only one
wife? This is, of course, absurd.
This presumes that an attempt at human ethics would be based on one
person's knowledge of one narrow field *and only that knowledge*.
That's silly on the face of it.
My point exactly: claiming that science can /produce/ ethics is "silly
on the face of it".

That it can help apply ethical principles in some situations is
obvious.
Post by ***@bid.nes
Post by Paul S Person
2) Scientists that do no such thing. These are, essentially, acting as
philosophers since they have no actual expertise in ethics.
Only psychopaths have NO expertise in ethics.
(Most of us know about Pauling. Really poor example.)
Post by Paul S Person
Indeed, as recent scandals with plagiarism, falsified data, and
genetically-modified babies would suggest, any moral foundation
scientists may once have had (which, given the history of science,
would almost certainly have been based on Christian ethics)
The bit about Christianity demolishes what you've written so far all by
itself. Anyone, scientist or otherwise, will take into account the prevailing
moral and ethical systems in the culture(s) they grew up in/are immersed in
whether they are part of that person's belief system or not.
I don't see that it "demolishes" anything.

But thanks for agreeing that the ethics of science, since science
arose in Western Europe when Christianity was in charge, are derived
from that source.
Post by ***@bid.nes
Post by Paul S Person
is eroding.
That is not justified by what went before. Those same faults existed in
the sciences long before they became recently publicized, *and* in every
other field of human endeavor *including philosophy*.
That is what wrong-doers always say: "everybody else is doing it".

Perhaps they were, perhaps they weren't.

But, by your own statement, only recently have they become /so
blatant/ that they became /publicly known/.

For your remark on philosophers to make sense, you would first have to
show at what point they developed a form of ethics sufficient to make,
say, plagiarism wrong.

The ancients mostly preferred to not publish their thoughts.

There is one letter generally considered to have actually been
/written/ by Plato. In it he reveals the function of his dialogues:
for those who have taken the course, they serve as reminders of the
/actual/ teachings; for those who have not (particularly for young men
whose fathers have enough money to pay the fees) they are advertising
circulars for his school.

Aristotle's works exist mostly (if not entirely) because, /after he
was dead and could not object/, his fellow scholars cleaned up his
lecture notes (the ones he used to lecture from), added additional
information to fill gaps as needed from their student notes, and
published them.

Trade secrets were the rule in a world with no copyright and where the
philosopher's income depended on students having to pay to acquire his
knowledge.
Post by ***@bid.nes
Post by Paul S Person
They aren't only no more qualified to speak on these issues
than anyone else, they are showing signs of losing their own moral
compasses (in some cases, not, of course, in general -- so far) and so
being /less/ qualified than the average philosopher.
Horseshit. Philosophers are no less narrow-minded, arrogant, and flawed
than anyone else.
I dare say you are right, but that doesn't make scientists any better
off, now, does it.

At least a competent philosopher has thought long and hard on the
issues involved. A competent scientist has indeed also thought long
and hard -- but on different issues entirely.

It is the /philosopher/ that is speaking about topics he might
reasonably be presumed to know something about when the talk turns to
ethics.
--
"I begin to envy Petronius."
"I have envied him long since."
nuny@bid.nes
2019-11-08 05:45:29 UTC
Permalink
Post by Paul S Person
Post by ***@bid.nes
Post by Paul S Person
Post by ***@bid.nes
Post by Robert Carnegie
On Fri, 1 Nov 2019 12:59:28 -0600, David Johnston
Post by David Johnston
Post by Quadibloc
A while back I read an article that played with a suggestion
that we might use CRISPR to modify humanity to make people
more ethical and compassionate.
The idea that you can genetically program ethics into people
seems a bit silly.
It might be helpful to know what article that is.
Before they went the anti-religious-fanatic route and I ceased to
renew my subscription, the editor of /Skeptical Inquirer/ asserted
that "science could provide ethics as well as religion" (this is, of
course, a paraphrase.)
He was roundly trashed. I, myself, wrote and suggested that he publish
articles by, say, an astrophysicist on the moral issue of abortion,
explaining how astrophysics (not the scientist, the subject of study)
suggested it be regarded morally. Or perhaps an entomologist, on how
entomology suggested capital punishment should be treated.
The next bimonthly issue the editor explained that he meant that, /if
you happen to believe in Utilitarianism/, science might be helpful in
determining how to produce the greatest good for the greatest number.
And this may be another case of some great mind overreaching itself to
a point where it is almost a satire of itself.
--
"I begin to envy Petronius."
"I have envied him long since."
Come now.
Knowledge of science relevant to an ethical question
is applicable.
Knowledge of science not relevant to a question of science
isn't applicable. Many leading minds have strayed outside
their specialism and stumbled.
But that implies you want to limit the discussion of a basis for
ethics to philosophers, who are experts in nothing. Certainly all
of their efforts to this point in history have produced trivial
results.
Fine, provide a third alternative.
You tried to pre-emptively kill it before I could even offer it.
Post by Paul S Person
Post by ***@bid.nes
Post by Paul S Person
1) Scientists that base ethics on their scientific knowledge. Thus, a
physicist dedicated professionally to finding a particular particle in
experimental results might, based on his /knowledge of physics/,
produce an answer to the moral question: should a man have only one
wife? This is, of course, absurd.
This presumes that an attempt at human ethics would be based on one
person's knowledge of one narrow field *and only that knowledge*.
That's silly on the face of it.
My point exactly: claiming that science can /produce/ ethics is "silly
on the face of it".
Read that again: "one person's knowledge of one narrow field *and only that knowledge*." That limitation is what's silly.
Post by Paul S Person
That it can help apply ethical principles in some situations is
obvious.
Yet you still claim that science can *generate* no ethical principles.
Post by Paul S Person
Post by ***@bid.nes
Post by Paul S Person
2) Scientists that do no such thing. These are, essentially, acting as
philosophers since they have no actual expertise in ethics.
Only psychopaths have NO expertise in ethics.
I note that you do not try to refute that, probably because it trashes your entire line of thought.
Post by Paul S Person
Post by ***@bid.nes
(Most of us know about Pauling. Really poor example.)
Or that.
Post by Paul S Person
Post by ***@bid.nes
Post by Paul S Person
Indeed, as recent scandals with plagiarism, falsified data, and
genetically-modified babies would suggest, any moral foundation
scientists may once have had (which, given the history of science,
would almost certainly have been based on Christian ethics)
The bit about Christianity demolishes what you've written so far all by
itself. Anyone, scientist or otherwise, will take into account the prevailing
moral and ethical systems in the culture(s) they grew up in/are immersed in
whether they are part of that person's belief system or not.
I don't see that it "demolishes" anything.
But thanks for agreeing that the ethics of science, since science
arose in Western Europe when Christianity was in charge, are derived
from that source.
You jumped from "take into account" to "derived from" with no justification, and I most certainly do not agree with that leap.

You also seem to think like Quaddie by ascribing all of science to
Westerners and by implication Caucasians. Did you forget accidentally, or on purpose, the Arab, Chinese, and other writings that those white Christian Europeans "took into account"? Did you assume that because a scientist grew up in and is immersed in a Christian culture, that that scientist is automatically a Christian and includes all Christian principles in their thinking?

Counterexamples- Copernicus, Galileo.
Post by Paul S Person
Post by ***@bid.nes
Post by Paul S Person
is eroding.
That is not justified by what went before. Those same faults existed in
the sciences long before they became recently publicized, *and* in every
other field of human endeavor *including philosophy*.
That is what wrong-doers always say: "everbody else is doing it".
Wow, now by implication I'm a "wrong-doer". Now you attempt to refute by ad hominem.

Oh, and I suppose I have to admit that the charge of "falsified data" can't apply to philosophers since they don't work with data.
Post by Paul S Person
Perhaps they were, perhaps they weren't.
Of course they were- you admit it below.
Post by Paul S Person
But, by your own statement, only recently have they become /so
blatant/ that they became /publiciy known/.
That's because information dissemination happens so much faster now.
Post by Paul S Person
For your remark on philosophers to make sense, you would first have to
show at what point they developed a form of ethics sufficient to make,
say, plagiarism wrong.
Who came up with the Golden Rule?
Post by Paul S Person
The ancients mostly preferred to not publish their thoughts.
Really? That's why we have so much of what they said and wrote available
to us then?
Post by Paul S Person
There is one letter generally considered to have actually been
/written/ by Plato. In it he reveals the function of his dialogues:
for those who have taken the course, they serve as reminders of the
/actual/ teachings; for those who have not (particularly for young men
whose fathers have enough money to pay the fees) they are advertising
circulars for his school.
Many of the central figures of philosophy were said to be illiterate too,
and what we know of what they thought and said was written by others.

That applies to most religions too, by the way.
Post by Paul S Person
Aristotle's works exist mostly (if not entirely) because, /after he
was dead and could not object/, his fellow scholars cleaned up his
lecture notes (the ones he used to lecture from), added additional
information to fill gaps as needed from their student notes, and
published them.
And? How many of those faults were added and hidden by those "cleaner-uppers"?

Are you now claiming that Plato and Aristotle were not envious of each other and lied about each other and stole from each other, for just two examples?
Post by Paul S Person
Trade secrets were the rule in a world with no copyright and where the
philosopher's income depended on students having to pay to acquire his
knowledge.
Except those "secrets" could not stay secret when those students had to announce where they learned what they used to justify their "reasoning".
Post by Paul S Person
Post by ***@bid.nes
Post by Paul S Person
They aren't only no more qualified to speak on these issues
than anyone else, they are showing signs of losing their own moral
compasses (in some cases, not, of course, in general -- so far) and so
being /less/ qualified than the average philosopher.
Horseshit. Philosophers are no less narrow-minded, arrogant, and flawed
than anyone else.
I dare say you are right, but that doesn't make scientists any better
off, now, does it.
And yet you place philosophers on such a pedestal...
Post by Paul S Person
At least a competent philosopher has thought long and hard on the
issues involved.
To trivial result. By what criteria do you call a philosopher "competent"?
Post by Paul S Person
A competent scientist has indeed also thought long
and hard -- but on different issues entirely.
Only one scientist? Which one?
Post by Paul S Person
It is the /philosopher/ that is speaking about topics he might
reasonably be presumed to know something about when the talk turns to
ethics.
Why might they be so reasonably presumed? Because they use the words and
claim they know what they're talking about?


Mark L. Fergerson
Paul S Person
2019-11-08 16:57:48 UTC
Permalink
On Thu, 7 Nov 2019 21:45:29 -0800 (PST), "***@bid.nes"
<***@gmail.com> wrote:

This is getting out of hand.

The original theory was very much that /science/ could produce ethics.

And its proposer, when challenged, clarified that he /really/ meant
"could help a utilitarian decide what to do to achieve his goal".
Which is fine.

This was in the pages of /Skeptical Inquirer/ back /before/ its foray
into anti-religious fanaticism, which prompted a name change. As the
name has changed back, it is possible that they have regained their
sanity. But not their trustworthiness.

If you /agree/ with the /original theory/, say so.

If you /don't/ agree with it, but merely believe that scientists have
as much right to discuss ethics as anyone else -- we are in agreement.
Whether you like it or not.

As to the origin of scientific /ethics/: are you claiming that they came
from India or China? If not, that other cultures developed science is
not relevant here. And what is not relevant need not be mentioned.

Perhaps you should ask yourself this question: Voltaire ended up
attacking the RC Church because of a notoriously unfair case. But how
did he know it was unfair? Who taught him the /ethics/ that told him
it was unfair? Was it not that very church?
--
"I begin to envy Petronius."
"I have envied him long since."
Johnny1A
2019-11-04 08:20:14 UTC
Permalink
On Fri, 1 Nov 2019 12:59:28 -0600, David Johnston
Post by David Johnston
Post by Quadibloc
A while back I read an article that played with a suggestion that we might use
CRISPR to modify humanity to make people more ethical and compassionate.
The idea that you can genetically program ethics into people seems a bit
silly.
It might be helpful to know what article that is.
Before they went the anti-religious-fanatic route and I ceased to
renew my subscription, the editor of /Skeptical Inquirer/ asserted
that "science could provide ethics as well as religion" (this is, of
course, a paraphrase.)
He was roundly trashed. I, myself, wrote and suggested that he publish
articles by, say, an astrophysicist on the moral issue of abortion,
explaining how astrophysics (not the scientist, the subject of study)
suggested it be regarded morally. Or perhaps an entomologist, on how
entomology suggested capital punishment should be treated.
The next bimonthly issue the editor explained that he meant that, /if
you happen to believe in Utilitarianism/, science might be helpful in
determining how to produce the greatest good for the greatest number.
And this may be another case of some great mind overreaching itself to
a point where it is almost a satire of itself.
--
"I begin to envy Petronius."
"I have envied him long since."
Several years ago, one of the popular science magazines did a feature asking scientists to opine on current events, politics, and the like. The most interesting aspect of it was that it demonstrated that the offensive stereotype of the clueless naif academic/scientist contains a considerable element of truth. The single unifying trend of most of the comments was naivete about things outside their specialties, esp. politics.
J. Clarke
2019-11-04 12:34:22 UTC
Permalink
On Mon, 4 Nov 2019 00:20:14 -0800 (PST), Johnny1A
Post by Johnny1A
On Fri, 1 Nov 2019 12:59:28 -0600, David Johnston
Post by David Johnston
Post by Quadibloc
A while back I read an article that played with a suggestion that we might use
CRISPR to modify humanity to make people more ethical and compassionate.
The idea that you can genetically program ethics into people seems a bit
silly.
It might be helpful to know what article that is.
Before they went the anti-religious-fanatic route and I ceased to
renew my subscription, the editor of /Skeptical Inquirer/ asserted
that "science could provide ethics as well as religion" (this is, of
course, a paraphrase.
He was roundly trasked. I, myself, wrote and suggested that he publish
articles by, say, an astrophysicist on the moral issue of abortion,
explaining how astrophysics (not the scientist, the subject of study)
suggested it be regarded morally. Or perhaps an entomologist, on how
entomology suggested capital punishment should be treated.
The next bimonthly issue the editor explained that he meant that, /if
you happen to believe in Utilitarianism/, science might be helpful in
determining how to produce the greatest good for the greatest number.
And this may be another case of some great mind overreaching itself to
a point where it is almost a satire of itself.
--
"I begin to envy Petronius."
"I have envied him long since."
Several years ago, one of the popular science magazines did a feature asking scientists to opine on current events, politics, and the like. The most interesting aspect of it was that it demonstrated that the offensive stereotype of the clueless naif academic/scientist contains a considerable element of truth. The single unifying trend of most of the comments was naivete about things outside their specialties, esp. politics.
If you come across that one again or come up with the right keywords
to find it, could you let us know?
h***@gmail.com
2019-11-04 12:46:56 UTC
Permalink
Post by Johnny1A
Several years ago, one of the popular science magazines did a feature asking scientists to opine on current events, politics, and the like. The most interesting aspect of it was that it demonstrated that the offensive stereotype of the clueless naif academic/scientist contains a considerable element of truth. The single unifying trend of most of the comments was naivete about things outside their specialties, esp. politics.
Or at least you thought that because they didn't agree with you.
Johnny1A
2019-11-06 06:28:29 UTC
Permalink
Post by h***@gmail.com
Post by Johnny1A
Several years ago, one of the popular science magazines did a feature asking scientists to opine on current events, politics, and the like. The most interesting aspect of it was that it demonstrated that the offensive stereotype of the clueless naif academic/scientist contains a considerable element of truth. The single unifying trend of most of the comments was naivete about things outside their specialties, esp. politics.
Or at least you thought that because they didn't agree with you.
No, the ones who shared my politics (at least goals wise) also came across as naïve. They overrated the power of logic and good will.