Discussion: Asimov's ideas on robotics inspire EU
a***@gmail.com
2017-01-12 12:09:04 UTC
The EU has come out with a report proposing rules on how humans should interact with robots and AI. The report draws on Asimov's Three Laws of Robotics. Asimov was ahead of his time.

Abhinav Lal
Writer & Investor
Dorothy J Heydt
2017-01-12 13:45:15 UTC
Post by a***@gmail.com
The EU has come out with a report in which they have rules on how humans
interact with robots and AI. The report draws on Asimov's three laws of
robotics. Asimov was ahead of his time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human telling
*any* robot what to do.

And you'll probably have to tweak First Law some, else you get
Williamson's _The Humanoids_: how much judgment does the robot
get as to what can harm a human?

(Though, IIRC [it's been a long time since I read it], the first
thing the Humanoids do is to make people stop smoking. On the
grounds that fire is dangerous. Right action, wrong reason.)
--
Dorothy J. Heydt
Vallejo, California
djheydt at gmail dot com
Dimensional Traveler
2017-01-12 15:48:04 UTC
Post by Dorothy J Heydt
Post by a***@gmail.com
The EU has come out with a report in which they have rules on how humans
interact with robots and AI. The report draws on Asimov's three laws of
robotics. Asimov was ahead of his time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human telling
*any* robot what to do.
And you'll probably have to tweak First Law some, else you get
Williamson's _The Humanoids_: how much judgment does the robot
get as to what can harm a human?
(Though, IIRC [it's been a long time since I read it], the first
thing the Humanoids do is to make people stop smoking. On the
grounds that fire is dangerous. Right action, wrong reason.
AI doesn't "think" like people. I remember hearing about a military
experiment with a learning computer and what it learned from some
intelligence/recon photos. One of the things it learned was that tanks
were a form of mushroom because they kept appearing in a clearing after
it had rained.
--
Running the rec.arts.TV Channels Watched Survey.
Winter 2016 survey began Dec 01 and will end Feb 28
Juho Julkunen
2017-01-12 20:04:11 UTC
Post by Dimensional Traveler
Post by Dorothy J Heydt
Post by a***@gmail.com
The EU has come out with a report in which they have rules on how humans
interact with robots and AI. The report draws on Asimov's three laws of
robotics. Asimov was ahead of his time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human telling
*any* robot what to do.
And you'll probably have to tweak First Law some, else you get
Williamson's _The Humanoids_: how much judgment does the robot
get as to what can harm a human?
(Though, IIRC [it's been a long time since I read it], the first
thing the Humanoids do is to make people stop smoking. On the
grounds that fire is dangerous. Right action, wrong reason.
AI doesn't "think" like people. I remember hearing about a military
experiment with a learning computer and and what it learned from some
intelligence/recon photos. One of the things it learned was that tanks
were a form of mushroom because they kept appearing in a clearing after
it had rained.
The scope of AI is larger than just classifiers.

I remember reading about a military experiment where a neural net
learned to distinguish between photos with grass/not grass, because
certain tanks were always shown on grass. Or between nighttime/daytime,
because certain Russian vehicles were always shown at night. All
versions seem like they were inspired by the same illustrative example,
which may or may not be based on a true event.

You don't know what inputs a neural net uses to reach a conclusion, and
even after a number of successes in a row, you can get a result that
throws you for a loop. Then again, most of the processing in your brain
happens below the threshold of consciousness.
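
To make that concrete, here is a minimal, purely illustrative Python
sketch (synthetic data and scikit-learn; hypothetical, not any of the
rumoured experiments): a classifier that scores perfectly in training
because it keys on a spurious "brightness" cue, then collapses once
that correlation is broken.

# Toy version of the tank-classifier story: every training "photo" with
# a tank is also dark (cloudy day), so the learner can ace training by
# looking at brightness instead of the tank.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_photos(n, tanks_are_dark):
    has_tank = rng.integers(0, 2, n)              # ground truth label
    tank_cue = has_tank + rng.normal(0, 2.0, n)   # genuine but very noisy signal
    if tanks_are_dark:
        brightness = 1 - has_tank + rng.normal(0, 0.1, n)  # spurious, near-perfect cue
    else:
        brightness = rng.normal(0.5, 0.5, n)      # correlation broken at test time
    return np.column_stack([tank_cue, brightness]), has_tank

X_train, y_train = make_photos(500, tanks_are_dark=True)
X_test, y_test = make_photos(500, tanks_are_dark=False)

clf = LogisticRegression().fit(X_train, y_train)
print("train accuracy:", clf.score(X_train, y_train))  # looks excellent
print("test accuracy :", clf.score(X_test, y_test))    # barely better than chance
print("weights (tank cue, brightness):", clf.coef_)    # nearly all weight on brightness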
--
Juho Julkunen
Dorothy J Heydt
2017-01-12 20:13:42 UTC
Post by Juho Julkunen
Post by Dimensional Traveler
Post by Dorothy J Heydt
Post by a***@gmail.com
The EU has come out with a report in which they have rules on how humans
interact with robots and AI. The report draws on Asimov's three laws of
robotics. Asimov was ahead of his time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human telling
*any* robot what to do.
And you'll probably have to tweak First Law some, else you get
Williamson's _The Humanoids_: how much judgment does the robot
get as to what can harm a human?
(Though, IIRC [it's been a long time since I read it], the first
thing the Humanoids do is to make people stop smoking. On the
grounds that fire is dangerous. Right action, wrong reason.
AI doesn't "think" like people. I remember hearing about a military
experiment with a learning computer and and what it learned from some
intelligence/recon photos. One of the things it learned was that tanks
were a form of mushroom because they kept appearing in a clearing after
it had rained.
Scope of AI is larger than just classifiers.
I remember reading about a military experiment where a neural net
learned to distinguish between photos with grass/not grass, because
certain tanks were always shown on grass. Or between nighttime/daytime,
because certain Russian vehicles were always shown at night. All
versions seem like they were inspired by the same illustrative example,
which may or may not be based on a true event.
You don't know what inputs a neural net uses to reach a conclusion, and
even after a number of successes in a row, you can get a result that
throws you for a loop. Then again, most of the processing in your brain
happens below the threshold of consciousness.
Is it germane to recall here the adventure of the military AI in
whose simulation some helicopters buzzed some kangaroos and they
retaliated with anti-aircraft fire?
--
Dorothy J. Heydt
Vallejo, California
djheydt at gmail dot com
unknown
2017-01-12 20:56:26 UTC
Post by Dorothy J Heydt
Post by Juho Julkunen
Post by Dimensional Traveler
Post by Dorothy J Heydt
Post by a***@gmail.com
The EU has come out with a report in which they have rules on how humans
interact with robots and AI. The report draws on Asimov's three laws of
robotics. Asimov was ahead of his time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human telling
*any* robot what to do.
And you'll probably have to tweak First Law some, else you get
Williamson's _The Humanoids_: how much judgment does the robot
get as to what can harm a human?
(Though, IIRC [it's been a long time since I read it], the first
thing the Humanoids do is to make people stop smoking. On the
grounds that fire is dangerous. Right action, wrong reason.
AI doesn't "think" like people. I remember hearing about a military
experiment with a learning computer and and what it learned from some
intelligence/recon photos. One of the things it learned was that tanks
were a form of mushroom because they kept appearing in a clearing after
it had rained.
Scope of AI is larger than just classifiers.
I remember reading about a military experiment where a neural net
learned to distinguish between photos with grass/not grass, because
certain tanks were always shown on grass. Or between nighttime/daytime,
because certain Russian vehicles were always shown at night. All
versions seem like they were inspired by the same illustrative example,
which may or may not be based on a true event.
You don't know what inputs a neural net uses to reach a conclusion, and
even after a number of successes in a row, you can get a result that
throws you for a loop. Then again, most of the processing in your brain
happens below the threshold of consciousness.
Is it germane to recall here the adventure of the military AI in
whose simulation some helicopters buzzed some kangaroos and they
retaliated with anti-aircraft fire?
Not exactly
<http://www.snopes.com/humor/nonsense/kangaroo.asp>
the fire was beach balls
--
Mark
Dorothy J Heydt
2017-01-13 00:02:57 UTC
Post by unknown
Post by Dorothy J Heydt
Post by Juho Julkunen
Post by Dimensional Traveler
Post by Dorothy J Heydt
Post by a***@gmail.com
The EU has come out with a report in which they have rules on how humans
interact with robots and AI. The report draws on Asimov's three laws of
robotics. Asimov was ahead of his time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human telling
*any* robot what to do.
And you'll probably have to tweak First Law some, else you get
Williamson's _The Humanoids_: how much judgment does the robot
get as to what can harm a human?
(Though, IIRC [it's been a long time since I read it], the first
thing the Humanoids do is to make people stop smoking. On the
grounds that fire is dangerous. Right action, wrong reason.
AI doesn't "think" like people. I remember hearing about a military
experiment with a learning computer and and what it learned from some
intelligence/recon photos. One of the things it learned was that tanks
were a form of mushroom because they kept appearing in a clearing after
it had rained.
Scope of AI is larger than just classifiers.
I remember reading about a military experiment where a neural net
learned to distinguish between photos with grass/not grass, because
certain tanks were always shown on grass. Or between nighttime/daytime,
because certain Russian vehicles were always shown at night. All
versions seem like they were inspired by the same illustrative example,
which may or may not be based on a true event.
You don't know what inputs a neural net uses to reach a conclusion, and
even after a number of successes in a row, you can get a result that
throws you for a loop. Then again, most of the processing in your brain
happens below the threshold of consciousness.
Is it germane to recall here the adventure of the military AI in
whose simulation some helicopters buzzed some kangaroos and they
retaliated with anti-aircraft fire?
Not exactly
<http://www.snopes.com/humor/nonsense/kangaroo.asp>
the fire was beach balls
Awwwww. That's so cool. Thanks.
--
Dorothy J. Heydt
Vallejo, California
djheydt at gmail dot com
Stephen Harker
2017-01-13 01:10:35 UTC
Post by Dorothy J Heydt
Post by unknown
Post by Dorothy J Heydt
Post by Juho Julkunen
Post by Dimensional Traveler
Post by Dorothy J Heydt
Post by a***@gmail.com
The EU has come out with a report in which they have rules on how humans
interact with robots and AI. The report draws on Asimov's three laws of
robotics. Asimov was ahead of his time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human telling
*any* robot what to do.
And you'll probably have to tweak First Law some, else you get
Williamson's _The Humanoids_: how much judgment does the robot
get as to what can harm a human?
(Though, IIRC [it's been a long time since I read it], the first
thing the Humanoids do is to make people stop smoking. On the
grounds that fire is dangerous. Right action, wrong reason.
AI doesn't "think" like people. I remember hearing about a military
experiment with a learning computer and and what it learned from some
intelligence/recon photos. One of the things it learned was that tanks
were a form of mushroom because they kept appearing in a clearing after
it had rained.
Scope of AI is larger than just classifiers.
I remember reading about a military experiment where a neural net
learned to distinguish between photos with grass/not grass, because
certain tanks were always shown on grass. Or between nighttime/daytime,
because certain Russian vehicles were always shown at night. All
versions seem like they were inspired by the same illustrative example,
which may or may not be based on a true event.
You don't know what inputs a neural net uses to reach a conclusion, and
even after a number of successes in a row, you can get a result that
throws you for a loop. Then again, most of the processing in your brain
happens below the threshold of consciousness.
Is it germane to recall here the adventure of the military AI in
whose simulation some helicopters buzzed some kangaroos and they
retaliated with anti-aircraft fire?
Not exactly
<http://www.snopes.com/humor/nonsense/kangaroo.asp>
the fire was beach balls
Awwwww. That's so cool. Thanks.
I think the story you recall is:

http://www.netfunny.com/rhf/jokes/99/Jun/caution.html

This is just a VR simulator, not AI.
--
Stephen Harker ***@netspace.net.au
http://sjharker.customer.netspace.net.au/
Dimensional Traveler
2017-01-12 22:08:50 UTC
Post by Dorothy J Heydt
Post by Juho Julkunen
Post by Dimensional Traveler
Post by Dorothy J Heydt
Post by a***@gmail.com
The EU has come out with a report in which they have rules on how humans
interact with robots and AI. The report draws on Asimov's three laws of
robotics. Asimov was ahead of his time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human telling
*any* robot what to do.
And you'll probably have to tweak First Law some, else you get
Williamson's _The Humanoids_: how much judgment does the robot
get as to what can harm a human?
(Though, IIRC [it's been a long time since I read it], the first
thing the Humanoids do is to make people stop smoking. On the
grounds that fire is dangerous. Right action, wrong reason.
AI doesn't "think" like people. I remember hearing about a military
experiment with a learning computer and and what it learned from some
intelligence/recon photos. One of the things it learned was that tanks
were a form of mushroom because they kept appearing in a clearing after
it had rained.
Scope of AI is larger than just classifiers.
I remember reading about a military experiment where a neural net
learned to distinguish between photos with grass/not grass, because
certain tanks were always shown on grass. Or between nighttime/daytime,
because certain Russian vehicles were always shown at night. All
versions seem like they were inspired by the same illustrative example,
which may or may not be based on a true event.
You don't know what inputs a neural net uses to reach a conclusion, and
even after a number of successes in a row, you can get a result that
throws you for a loop. Then again, most of the processing in your brain
happens below the threshold of consciousness.
Is it germane to recall here the adventure of the military AI in
whose simulation some helicopters buzzed some kangaroos and they
retaliated with anti-aircraft fire?
Not related. That was a simulation for human pilots in which the
programmers had done a quick-and-dirty job of creating kangaroos:
they overlaid kangaroo animation on the code for AFVs without changing
the AFVs' stats to do things like remove the SAMs from them. :)
--
Running the rec.arts.TV Channels Watched Survey.
Winter 2016 survey began Dec 01 and will end Feb 28
Dorothy J Heydt
2017-01-12 20:12:16 UTC
Post by Dimensional Traveler
Post by Dorothy J Heydt
Post by a***@gmail.com
The EU has come out with a report in which they have rules on how humans
interact with robots and AI. The report draws on Asimov's three laws of
robotics. Asimov was ahead of his time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human telling
*any* robot what to do.
And you'll probably have to tweak First Law some, else you get
Williamson's _The Humanoids_: how much judgment does the robot
get as to what can harm a human?
(Though, IIRC [it's been a long time since I read it], the first
thing the Humanoids do is to make people stop smoking. On the
grounds that fire is dangerous. Right action, wrong reason.
AI doesn't "think" like people. I remember hearing about a military
experiment with a learning computer and and what it learned from some
intelligence/recon photos. One of the things it learned was that tanks
were a form of mushroom because they kept appearing in a clearing after
it had rained.
O-o-o-kay, why (in real world terms) *did* the tanks appear in
the clearing after it had rained?
--
Dorothy J. Heydt
Vallejo, California
djheydt at gmail dot com
Gutless Umbrella Carrying Sissy
2017-01-12 20:32:48 UTC
Post by Dorothy J Heydt
Post by Dimensional Traveler
In article
Post by a***@gmail.com
The EU has come out with a report in which they have rules on
how humans interact with robots and AI. The report draws on
Asimov's three laws of robotics. Asimov was ahead of his
time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human
telling *any* robot what to do.
And you'll probably have to tweak First Law some, else you get
Williamson's _The Humanoids_: how much judgment does the robot
get as to what can harm a human?
(Though, IIRC [it's been a long time since I read it], the
first thing the Humanoids do is to make people stop smoking.
On the grounds that fire is dangerous. Right action, wrong
reason.
AI doesn't "think" like people. I remember hearing about a
military experiment with a learning computer and and what it
learned from some intelligence/recon photos. One of the things
it learned was that tanks were a form of mushroom because they
kept appearing in a clearing after it had rained.
O-o-o-kay, why (in real world terms) *did* the tanks appear in
the clearing after it had rained?
There are dozens of variations of that story. The one I saw
involved blurry, low-quality pictures of Russian tanks vs. high-quality,
sharply focused pictures of US tanks, so the computer
learned to detect blurry photos. I've also seen "pictures of
camouflaged tanks in trees vs. pictures of trees with no tanks" in
which the tank pictures were taken on a cloudy day, but the no-tank
pictures on a sunny day, so the computer learned to detect clouds.

What I have never seen, however, is enough detail to determine whether
anything remotely like that ever actually took place, so I believe
it's apocryphal.

The point, however, is very valid: when you get into neural net
computers, you can teach them to do something very, very well,
but you can't always tell _what_ you're teaching them to do.

(I did see a credible article once, many years ago, about a simple
neural net - not actually a computer, so much as a bank of
transistors - that was evolved to detect a tone of a certain
frequency, without using a system clock. It was *very* accurate
after a few generations of modifications, but they had no idea how
it worked. There were transistors that had no power going to them,
no input, no output, but if they disconnected them, it stopped
working.)
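
In the same spirit, a small, purely illustrative Python sketch of
"evolved by fitness alone, opaque by construction" (a toy genetic
algorithm, nothing to do with the actual hardware experiment): it
breeds a black-box weight vector until it fires on one tone and not
another, and the winning vector tells you nothing about how it works.

# Toy "evolved tone detector": select only on behaviour, never on structure.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 200)
tone_a = np.sin(2 * np.pi * 5 * t)    # the tone we want detected
tone_b = np.sin(2 * np.pi * 13 * t)   # the tone we want ignored

def respond(weights, signal):
    # an opaque "circuit": one number out, no interpretable parts
    return float(np.tanh(signal @ weights))

def fitness(weights):
    # reward firing on tone A and staying quiet on tone B
    return respond(weights, tone_a) - respond(weights, tone_b)

pop = rng.normal(0, 0.1, size=(50, 200))            # random initial population
for generation in range(200):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-10:]]          # keep the 10 fittest
    children = parents[rng.integers(0, 10, 40)] + rng.normal(0, 0.05, (40, 200))
    pop = np.vstack([parents, children])             # elitism plus mutated offspring

best = pop[np.argmax([fitness(w) for w in pop])]
print("response to tone A:", respond(best, tone_a))  # driven toward +1
print("response to tone B:", respond(best, tone_b))  # driven toward -1
# 'best' demonstrably works, but inspecting its 200 weights explains little.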
--
Terry Austin

Vacation photos from Iceland:
https://plus.google.com/u/0/collection/QaXQkB

"Terry Austin: like the polio vaccine, only with more asshole."
-- David Bilek

Jesus forgives sinners, not criminals.
Dimensional Traveler
2017-01-12 22:06:38 UTC
Post by Dorothy J Heydt
Post by Dimensional Traveler
Post by Dorothy J Heydt
Post by a***@gmail.com
The EU has come out with a report in which they have rules on how humans
interact with robots and AI. The report draws on Asimov's three laws of
robotics. Asimov was ahead of his time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human telling
*any* robot what to do.
And you'll probably have to tweak First Law some, else you get
Williamson's _The Humanoids_: how much judgment does the robot
get as to what can harm a human?
(Though, IIRC [it's been a long time since I read it], the first
thing the Humanoids do is to make people stop smoking. On the
grounds that fire is dangerous. Right action, wrong reason.
AI doesn't "think" like people. I remember hearing about a military
experiment with a learning computer and and what it learned from some
intelligence/recon photos. One of the things it learned was that tanks
were a form of mushroom because they kept appearing in a clearing after
it had rained.
O-o-o-kay, why (in real world terms) *did* the tanks appear in
the clearing after it had rained?
There wasn't any correspondence in the real world. It just happened
that the photos they grabbed to show the neural net (thank you to Juho
Julkunen for reminding me that was the term) only showed tanks after it
had rained. The humans grabbing the pictures didn't think anything of
it because they "knew" tanks were human-made and controlled by humans --
a background assumption from 20+ years of life that the computer
software didn't have.
--
Running the rec.arts.TV Channels Watched Survey.
Winter 2016 survey began Dec 01 and will end Feb 28
Peter Trei
2017-01-12 15:58:26 UTC
Post by Dorothy J Heydt
Post by a***@gmail.com
The EU has come out with a report in which they have rules on how humans
interact with robots and AI. The report draws on Asimov's three laws of
robotics. Asimov was ahead of his time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human telling
*any* robot what to do.
And you'll probably have to tweak First Law some, else you get
Williamson's _The Humanoids_: how much judgment does the robot
get as to what can harm a human?
(Though, IIRC [it's been a long time since I read it], the first
thing the Humanoids do is to make people stop smoking. On the
grounds that fire is dangerous. Right action, wrong reason.
I recall Asimov writing that the Three Laws had enough ambiguity in them
for him to write interesting stories.

I can't recall one where he set up a 'Trolley Problem'. Such a story would
have given us some insight into how he expected them to work.

BTW: Here's the actual EU report:

http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//NONSGML%2BCOMPARL%2BPE-582.443%2B01%2BDOC%2BPDF%2BV0//EN

http://preview.tinyurl.com/gpvzt6m

...and yes, whoever wrote it was familiar with classic robot SF.

It doesn't have much to say about the military use of robots.

pt
Robert Carnegie
2017-01-12 23:58:07 UTC
Post by Peter Trei
Post by Dorothy J Heydt
Post by a***@gmail.com
The EU has come out with a report in which they have rules on how humans
interact with robots and AI. The report draws on Asimov's three laws of
robotics. Asimov was ahead of his time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human telling
*any* robot what to do.
And you'll probably have to tweak First Law some, else you get
Williamson's _The Humanoids_: how much judgment does the robot
get as to what can harm a human?
(Though, IIRC [it's been a long time since I read it], the first
thing the Humanoids do is to make people stop smoking. On the
grounds that fire is dangerous. Right action, wrong reason.
I recall Asimov writing that the 3 Laws had enough ambiguity in them to
write interesting stories.
I can't recall one where he set up a 'Trolley Problem'. Such a story would
have given us some insight into how he expected them to work.
They tend to break and cease to function.
But "Little Lost Robot" involves testing
robots' responses to First-Law scenarios.
David Johnston
2017-01-12 18:52:37 UTC
Post by Dorothy J Heydt
Post by a***@gmail.com
The EU has come out with a report in which they have rules on how humans
interact with robots and AI. The report draws on Asimov's three laws of
robotics. Asimov was ahead of his time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human telling
*any* robot what to do.
And you'll probably have to tweak First Law some, else you get
Williamson's _The Humanoids_: how much judgment does the robot
get as to what can harm a human?
The whole "through inaction cause humans to come to harm" thing is
basically intended to generate problems. There's no reason to have it.
Gutless Umbrella Carrying Sissy
2017-01-12 18:00:32 UTC
Post by David Johnston
In article
Post by a***@gmail.com
The EU has come out with a report in which they have rules on
how humans interact with robots and AI. The report draws on
Asimov's three laws of robotics. Asimov was ahead of his
time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human
telling *any* robot what to do.
And you'll probably have to tweak First Law some, else you get
Williamson's _The Humanoids_: how much judgment does the robot
get as to what can harm a human?
The whole "through inaction cause humans to come to harm" thing
is basically intended to generate problems. There's no reason
to have it.
That's the intentional consequence. Dorothy's is the unintentional
one, that Asimov didn't realize at first. How does a robot respond
when _all_ action - and _all_ inaction - cause harm?

"I cannot let you eat that donut; it will make you gain weight in
an unhealthy way."

"Donuts relieve my depression. If I don't get them, I'll become
suicidal. Now go get me more donuts."

Or, as Star Trek fans might call it, the Captain Kirk Syndrome.
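
A toy model of that bind, with made-up harm numbers (a Python sketch,
nothing to do with Asimov's actual positronics): under a strict reading
of the First Law -- no harm by action or by inaction -- there is simply
no permissible option left; a least-harm variant at least always returns
an answer, which is roughly the kind of tweak being argued for.

# Toy model of the First Law deadlock: every option, including doing
# nothing, carries some predicted harm.
HARM = {
    "serve the donut":    {"unhealthy weight gain": 0.3},
    "withhold the donut": {"worsened depression": 0.7},
    "do nothing":         {"worsened depression": 0.7},   # inaction counts too
}

def strict_first_law_permits(harms):
    # strict reading: any nonzero harm, by action or inaction, is forbidden
    return all(h == 0 for h in harms.values())

permissible = [option for option, harms in HARM.items()
               if strict_first_law_permits(harms)]
if permissible:
    print("Robot may choose from:", permissible)
else:
    print("No permissible option: strict First Law deadlock.")

# A softer variant -- minimise expected harm rather than forbid it --
# always produces *some* decision:
least_bad = min(HARM, key=lambda option: sum(HARM[option].values()))
print("Least-harm fallback picks:", least_bad)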
--
Terry Austin

Vacation photos from Iceland:
https://plus.google.com/u/0/collection/QaXQkB

"Terry Austin: like the polio vaccine, only with more asshole."
-- David Bilek

Jesus forgives sinners, not criminals.
Dorothy J Heydt
2017-01-12 20:15:32 UTC
Post by Gutless Umbrella Carrying Sissy
Post by David Johnston
In article
Post by a***@gmail.com
The EU has come out with a report in which they have rules on
how humans interact with robots and AI. The report draws on
Asimov's three laws of robotics. Asimov was ahead of his
time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human
telling *any* robot what to do.
And you'll probably have to tweak First Law some, else you get
Williamson's _The Humanoids_: how much judgment does the robot
get as to what can harm a human?
The whole "through inaction cause humans to come to harm" thing
is basically intended to generate problems. There's no reason
to have it.
That's the intentional consequence. Dorothy's is the unintentional
one, that Azimov didn't realize at first. How does a robot respond
when _all_ action - and _all_ incation - cause harm?
It freaks out, sometimes fatally. (Fatally to the robot, I
mean.)
Post by Gutless Umbrella Carrying Sissy
"I cannot let you eat that donut, it will make you gain weight in
an unleathy way."
"Donuts relieve my depression. If I don't get them, I'll become
suicidal. Now go get me more donuts."
Or, as Star Trek fans might call it, the Captain Kirk Syndrome.
Yes, we saw a fair amount of that in TOS.
--
Dorothy J. Heydt
Vallejo, California
djheydt at gmail dot com
Gutless Umbrella Carrying Sissy
2017-01-12 20:21:09 UTC
Post by Dorothy J Heydt
Post by Gutless Umbrella Carrying Sissy
Post by David Johnston
In article
Post by a***@gmail.com
The EU has come out with a report in which they have rules
on how humans interact with robots and AI. The report draws
on Asimov's three laws of robotics. Asimov was ahead of his
time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human
telling *any* robot what to do.
And you'll probably have to tweak First Law some, else you
get Williamson's _The Humanoids_: how much judgment does the
robot get as to what can harm a human?
The whole "through inaction cause humans to come to harm"
thing is basically intended to generate problems. There's no
reason to have it.
That's the intentional consequence. Dorothy's is the
unintentional one, that Azimov didn't realize at first. How does
a robot respond when _all_ action - and _all_ incation - cause
harm?
It freaks out, sometimes fatally. (Fatally to the robot, I
mean.)
Fortunately, "freaks out" implies a certain degree of self
awareness that no one alive today is likely to live long enough to
see.
Post by Dorothy J Heydt
Post by Gutless Umbrella Carrying Sissy
"I cannot let you eat that donut, it will make you gain weight
in an unleathy way."
"Donuts relieve my depression. If I don't get them, I'll become
suicidal. Now go get me more donuts."
Or, as Star Trek fans might call it, the Captain Kirk Syndrome.
Yes, we saw a fair amount of that in TOS.
I have a vague memory of someone in one of the other ST series trying
that trick, and the android laughing at them. It might even be a real
memory.
--
Terry Austin

Vacation photos from Iceland:
https://plus.google.com/u/0/collection/QaXQkB

"Terry Austin: like the polio vaccine, only with more asshole."
-- David Bilek

Jesus forgives sinners, not criminals.
Peter Trei
2017-01-12 21:43:55 UTC
Post by Dorothy J Heydt
Post by Gutless Umbrella Carrying Sissy
Post by David Johnston
In article
Post by a***@gmail.com
The EU has come out with a report in which they have rules on
how humans interact with robots and AI. The report draws on
Asimov's three laws of robotics. Asimov was ahead of his
time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human
telling *any* robot what to do.
And you'll probably have to tweak First Law some, else you get
Williamson's _The Humanoids_: how much judgment does the robot
get as to what can harm a human?
The whole "through inaction cause humans to come to harm" thing
is basically intended to generate problems. There's no reason
to have it.
That's the intentional consequence. Dorothy's is the unintentional
one, that Azimov didn't realize at first. How does a robot respond
when _all_ action - and _all_ incation - cause harm?
It freaks out, sometimes fatally. (Fatally to the robot, I
mean.)
IIRC, Asimov did cover this one, in "Liar!"
Post by Dorothy J Heydt
Post by Gutless Umbrella Carrying Sissy
"I cannot let you eat that donut, it will make you gain weight in
an unleathy way."
"Donuts relieve my depression. If I don't get them, I'll become
suicidal. Now go get me more donuts."
Or, as Star Trek fans might call it, the Captain Kirk Syndrome.
Yes, we saw a fair amount of that in TOS.
--
Dorothy J. Heydt
Vallejo, California
djheydt at gmail dot com
Dorothy J Heydt
2017-01-13 00:05:42 UTC
Post by Peter Trei
Post by Dorothy J Heydt
Post by Gutless Umbrella Carrying Sissy
Post by David Johnston
In article
Post by a***@gmail.com
The EU has come out with a report in which they have rules on
how humans interact with robots and AI. The report draws on
Asimov's three laws of robotics. Asimov was ahead of his
time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human
telling *any* robot what to do.
And you'll probably have to tweak First Law some, else you get
Williamson's _The Humanoids_: how much judgment does the robot
get as to what can harm a human?
The whole "through inaction cause humans to come to harm" thing
is basically intended to generate problems. There's no reason
to have it.
That's the intentional consequence. Dorothy's is the unintentional
one, that Azimov didn't realize at first. How does a robot respond
when _all_ action - and _all_ incation - cause harm?
It freaks out, sometimes fatally. (Fatally to the robot, I
mean.)
IIRC, Asimov did cover this one, in "Liar!"
Yup. Calvin did that deliberately, because she was mad as hell.
--
Dorothy J. Heydt
Vallejo, California
djheydt at gmail dot com
Anthony Nance
2017-01-13 13:12:31 UTC
Post by Gutless Umbrella Carrying Sissy
Post by David Johnston
In article
Post by a***@gmail.com
The EU has come out with a report in which they have rules on
how humans interact with robots and AI. The report draws on
Asimov's three laws of robotics. Asimov was ahead of his
time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human
telling *any* robot what to do.
And you'll probably have to tweak First Law some, else you get
Williamson's _The Humanoids_: how much judgment does the robot
get as to what can harm a human?
The whole "through inaction cause humans to come to harm" thing
is basically intended to generate problems. There's no reason
to have it.
That's the intentional consequence. Dorothy's is the unintentional
one, that Azimov didn't realize at first. How does a robot respond
when _all_ action - and _all_ incation - cause harm?
"I cannot let you eat that donut, it will make you gain weight in
an unleathy way."
"Donuts relieve my depression. If I don't get them, I'll become
suicidal. Now go get me more donuts."
Or, as Star Trek fans might call it, the Captain Kirk Syndrome.
HeyWaitJustAMinuteThere - Kirk ate donuts? I completely missed that.
- Tony, whose reading habits sometimes generate amusement at the
expense of accuracy. It's a fair cop.
Dorothy J Heydt
2017-01-13 13:51:47 UTC
Post by Anthony Nance
Post by Gutless Umbrella Carrying Sissy
Post by David Johnston
In article
Post by a***@gmail.com
The EU has come out with a report in which they have rules on
how humans interact with robots and AI. The report draws on
Asimov's three laws of robotics. Asimov was ahead of his
time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human
telling *any* robot what to do.
And you'll probably have to tweak First Law some, else you get
Williamson's _The Humanoids_: how much judgment does the robot
get as to what can harm a human?
The whole "through inaction cause humans to come to harm" thing
is basically intended to generate problems. There's no reason
to have it.
That's the intentional consequence. Dorothy's is the unintentional
one, that Azimov didn't realize at first. How does a robot respond
when _all_ action - and _all_ incation - cause harm?
"I cannot let you eat that donut, it will make you gain weight in
an unleathy way."
"Donuts relieve my depression. If I don't get them, I'll become
suicidal. Now go get me more donuts."
Or, as Star Trek fans might call it, the Captain Kirk Syndrome.
HeyWaitJustAMinuteThere - Kirk ate donuts? I completely missed that.
Well, he did have trouble keeping his weight down, all through
his career, in spite of leading a fairly active life -- he was a
bow hunter and rode a motorcycle. I remember seeing him on a TV
show once, in which a panel of celebrities were shown a weird
object and asked to guess what it was; then they'd be told what
it really was. One of the objects was a maybe eighteen-inch metal
bar with a weight on the end. My husband immediately identified
it as a stabilizer bar for a modern bow -- and so did Shatner.
--
Dorothy J. Heydt
Vallejo, California
djheydt at gmail dot com
J. Clarke
2017-01-13 14:24:03 UTC
Post by Dorothy J Heydt
Post by Anthony Nance
Post by Gutless Umbrella Carrying Sissy
Post by David Johnston
In article
Post by a***@gmail.com
The EU has come out with a report in which they have rules on
how humans interact with robots and AI. The report draws on
Asimov's three laws of robotics. Asimov was ahead of his
time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human
telling *any* robot what to do.
And you'll probably have to tweak First Law some, else you get
Williamson's _The Humanoids_: how much judgment does the robot
get as to what can harm a human?
The whole "through inaction cause humans to come to harm" thing
is basically intended to generate problems. There's no reason
to have it.
That's the intentional consequence. Dorothy's is the unintentional
one, that Azimov didn't realize at first. How does a robot respond
when _all_ action - and _all_ incation - cause harm?
"I cannot let you eat that donut, it will make you gain weight in
an unleathy way."
"Donuts relieve my depression. If I don't get them, I'll become
suicidal. Now go get me more donuts."
Or, as Star Trek fans might call it, the Captain Kirk Syndrome.
HeyWaitJustAMinuteThere - Kirk ate donuts? I completely missed that.
Well, he dud gave trouble keeping his weight down, all through
his career, in spite of leading a fairly active life -- he was a
bow hunter and rode a motorcycle. I remember seeing him on a TV
show once, in which a panel of celebrities were shown a weird
object and asked to guess what it was; then they'd be told what
it really was. One of the object was a maybe eighteen-inch metal
bar with a weight on the end. My husband immediately identified
it as a stabilizer bar for a modern bow -- and so did Shatner.
But how about Captain Kirk?
Gene Wirchenko
2017-01-13 18:42:21 UTC
On Fri, 13 Jan 2017 09:24:03 -0500, "J. Clarke"
[snip]
Post by J. Clarke
Post by Dorothy J Heydt
bar with a weight on the end. My husband immediately identified
it as a stabilizer bar for a modern bow -- and so did Shatner.
But how about Captain Kirk?
Maybe he would have, too. ISTR reading that he had a degree in
history.

Sincerely,

Gene Wirchenko
Kevrob
2017-01-13 15:14:37 UTC
Post by Dorothy J Heydt
Post by Anthony Nance
Post by Gutless Umbrella Carrying Sissy
Post by David Johnston
In article
Post by a***@gmail.com
The EU has come out with a report in which they have rules on
how humans interact with robots and AI. The report draws on
Asimov's three laws of robotics. Asimov was ahead of his
time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human
telling *any* robot what to do.
And you'll probably have to tweak First Law some, else you get
Williamson's _The Humanoids_: how much judgment does the robot
get as to what can harm a human?
The whole "through inaction cause humans to come to harm" thing
is basically intended to generate problems. There's no reason
to have it.
That's the intentional consequence. Dorothy's is the unintentional
one, that Azimov didn't realize at first. How does a robot respond
when _all_ action - and _all_ incation - cause harm?
"I cannot let you eat that donut, it will make you gain weight in
an unleathy way."
"Donuts relieve my depression. If I don't get them, I'll become
suicidal. Now go get me more donuts."
Or, as Star Trek fans might call it, the Captain Kirk Syndrome.
HeyWaitJustAMinuteThere - Kirk ate donuts? I completely missed that.
Well, he dud gave trouble keeping his weight down, all through
his career, in spite of leading a fairly active life -- he was a
bow hunter and rode a motorcycle. I remember seeing him on a TV
show once, in which a panel of celebrities were shown a weird
object and asked to guess what it was; then they'd be told what
it really was. One of the object was a maybe eighteen-inch metal
bar with a weight on the end. My husband immediately identified
it as a stabilizer bar for a modern bow -- and so did Shatner.
Starfleet was sometimes a military force, other times diplomats,
but acted as "space cops" too. So, of course they ate donuts!

Are any ST pastries named for (in)famous people, like a Napoleon?
(aka mille-feuille)

"I'll have a Khan and, Spock, you like a Kodos, don't you?
And a Mugato Horn for Bones."

Kevin R
Gutless Umbrella Carrying Sissy
2017-01-13 16:20:55 UTC
Post by Kevrob
Post by Dorothy J Heydt
Post by Anthony Nance
Post by Gutless Umbrella Carrying Sissy
Post by David Johnston
In article
Post by a***@gmail.com
The EU has come out with a report in which they have
rules on how humans interact with robots and AI. The
report draws on Asimov's three laws of robotics. Asimov
was ahead of his time.
But, as pointed out upthread, the Second and Third Laws
would need a great deal of tweaking; you do not want *any*
human telling *any* robot what to do.
And you'll probably have to tweak First Law some, else you
get Williamson's _The Humanoids_: how much judgment does
the robot get as to what can harm a human?
The whole "through inaction cause humans to come to harm"
thing is basically intended to generate problems. There's
no reason to have it.
That's the intentional consequence. Dorothy's is the
unintentional one, that Azimov didn't realize at first. How
does a robot respond when _all_ action - and _all_ incation
- cause harm?
"I cannot let you eat that donut, it will make you gain
weight in an unleathy way."
"Donuts relieve my depression. If I don't get them, I'll
become suicidal. Now go get me more donuts."
Or, as Star Trek fans might call it, the Captain Kirk
Syndrome.
HeyWaitJustAMinuteThere - Kirk ate donuts? I completely missed that.
Well, he dud gave trouble keeping his weight down, all through
his career, in spite of leading a fairly active life -- he was
a bow hunter and rode a motorcycle. I remember seeing him on a
TV show once, in which a panel of celebrities were shown a
weird object and asked to guess what it was; then they'd be
told what it really was. One of the object was a maybe
eighteen-inch metal bar with a weight on the end. My husband
immediately identified it as a stabilizer bar for a modern bow
-- and so did Shatner.
Starfleet was sometimes a military force, other times diplomats,
but acted as "space cops" too. So, of course they ate donuts!
Space donuts. The sprinkles are shaped like stars and moons.
--
Terry Austin

Vacation photos from Iceland:
https://plus.google.com/u/0/collection/QaXQkB

"Terry Austin: like the polio vaccine, only with more asshole."
-- David Bilek

Jesus forgives sinners, not criminals.
Don Kuenz
2017-01-13 16:34:49 UTC
Post by Dorothy J Heydt
Post by Anthony Nance
Post by Gutless Umbrella Carrying Sissy
Post by David Johnston
In article
Post by a***@gmail.com
The EU has come out with a report in which they have rules on
how humans interact with robots and AI. The report draws on
Asimov's three laws of robotics. Asimov was ahead of his
time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human
telling *any* robot what to do.
And you'll probably have to tweak First Law some, else you get
Williamson's _The Humanoids_: how much judgment does the robot
get as to what can harm a human?
The whole "through inaction cause humans to come to harm" thing
is basically intended to generate problems. There's no reason
to have it.
That's the intentional consequence. Dorothy's is the unintentional
one, that Azimov didn't realize at first. How does a robot respond
when _all_ action - and _all_ incation - cause harm?
"I cannot let you eat that donut, it will make you gain weight in
an unleathy way."
"Donuts relieve my depression. If I don't get them, I'll become
suicidal. Now go get me more donuts."
Or, as Star Trek fans might call it, the Captain Kirk Syndrome.
HeyWaitJustAMinuteThere - Kirk ate donuts? I completely missed that.
Well, he dud gave trouble keeping his weight down, all through
his career, in spite of leading a fairly active life -- he was a
bow hunter and rode a motorcycle. I remember seeing him on a TV
show once, in which a panel of celebrities were shown a weird
object and asked to guess what it was; then they'd be told what
it really was. One of the object was a maybe eighteen-inch metal
bar with a weight on the end. My husband immediately identified
it as a stabilizer bar for a modern bow -- and so did Shatner.
I heard [Shatner's] Harley coming down the mountain road
toward my house long before he shot halfway past the driveway,
decided to course-correct doing 35 on a steep slope, and laid
his bike down in the tarmac with a hideous wounded-beast
screech.
The scar from his wheelie remains to this day in my
driveway, though the 1994 Northridge earthquake splintered it
some.
He came limping up to the door, I opened it, and in came
The Great Actor, to peruse my humble offering.

_The City on the Edge of Forever_ (Ellison)

Thank you,

--
Don Kuenz KB7RPU

I wasn't a kid. I should have heard the sound of creeping actors in the
night. - Ellison
Gutless Umbrella Carrying Sissy
2017-01-13 16:19:31 UTC
Post by Anthony Nance
Post by Gutless Umbrella Carrying Sissy
Post by David Johnston
In article
Post by a***@gmail.com
The EU has come out with a report in which they have rules
on how humans interact with robots and AI. The report draws
on Asimov's three laws of robotics. Asimov was ahead of his
time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human
telling *any* robot what to do.
And you'll probably have to tweak First Law some, else you
get Williamson's _The Humanoids_: how much judgment does the
robot get as to what can harm a human?
The whole "through inaction cause humans to come to harm"
thing is basically intended to generate problems. There's no
reason to have it.
That's the intentional consequence. Dorothy's is the
unintentional one, that Azimov didn't realize at first. How
does a robot respond when _all_ action - and _all_ incation -
cause harm?
"I cannot let you eat that donut, it will make you gain weight
in an unleathy way."
"Donuts relieve my depression. If I don't get them, I'll become
suicidal. Now go get me more donuts."
Or, as Star Trek fans might call it, the Captain Kirk Syndrome.
HeyWaitJustAMinuteThere - Kirk ate donuts? I completely missed
that. - Tony, whose reading habits sometimes generate amusement
at the
expense of accuracy. It's a fair cop.
Well, Shatner kinda looks like a donut most of the time.
--
Terry Austin

Vacation photos from Iceland:
https://plus.google.com/u/0/collection/QaXQkB

"Terry Austin: like the polio vaccine, only with more asshole."
-- David Bilek

Jesus forgives sinners, not criminals.
Ted Nolan <tednolan>
2017-01-12 19:19:43 UTC
Post by David Johnston
Post by Dorothy J Heydt
Post by a***@gmail.com
The EU has come out with a report in which they have rules on how humans
interact with robots and AI. The report draws on Asimov's three laws of
robotics. Asimov was ahead of his time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human telling
*any* robot what to do.
And you'll probably have to tweak First Law some, else you get
Williamson's _The Humanoids_: how much judgment does the robot
get as to what can harm a human?
The whole "through inaction cause humans to come to harm" thing is
basically intended to generate problems. There's no reason to have it.
There was an in-story justification in one of the shorts. Something to the
effect that a malicious robot could drop an anvil towards a human's head,
knowing full well he was fast enough to get there and catch it, and then
choose not to. The action itself caused no harm.

But yeah, the laws were good story generators.
--
------
columbiaclosings.com
What's not in Columbia anymore..
Gutless Umbrella Carrying Sissy
2017-01-12 18:31:45 UTC
Post by Ted Nolan <tednolan>
Post by David Johnston
In article
Post by a***@gmail.com
The EU has come out with a report in which they have rules on
how humans interact with robots and AI. The report draws on
Asimov's three laws of robotics. Asimov was ahead of his
time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human
telling *any* robot what to do.
And you'll probably have to tweak First Law some, else you get
Williamson's _The Humanoids_: how much judgment does the robot
get as to what can harm a human?
The whole "through inaction cause humans to come to harm" thing
is basically intended to generate problems. There's no reason
to have it.
There was an in-story justification in one of the shorts.
Something to the effect a malicious robot could drop an anvil
towards a human's head, knowing full well he was fast enough to
get there and catch it, and then choosing not to. The action
caused no harm..
The decision not to is an action.
Post by Ted Nolan <tednolan>
But yeah, the laws were good story generators.
And that's the purpose.
--
Terry Austin

Vacation photos from Iceland:
https://plus.google.com/u/0/collection/QaXQkB

"Terry Austin: like the polio vaccine, only with more asshole."
-- David Bilek

Jesus forgives sinners, not criminals.
Kevrob
2017-01-12 21:12:18 UTC
Post by Ted Nolan <tednolan>
Post by David Johnston
Post by Dorothy J Heydt
Post by a***@gmail.com
The EU has come out with a report in which they have rules on how humans
interact with robots and AI. The report draws on Asimov's three laws of
robotics. Asimov was ahead of his time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human telling
*any* robot what to do.
And you'll probably have to tweak First Law some, else you get
Williamson's _The Humanoids_: how much judgment does the robot
get as to what can harm a human?
The whole "through inaction cause humans to come to harm" thing is
basically intended to generate problems. There's no reason to have it.
There was an in-story justification in one of the shorts. Something to the
effect a malicious robot could drop an anvil towards a human's head, knowing
full well he was fast enough to get there and catch it, and then choosing
not to. The action caused no harm..
No PHYSICAL harm. Maybe there needs to be a Law 1.1a: "nor shall
it by its action cause psychological harm, or cause a human to
harm itself"? You could hurt yourself pretty badly trying to avoid
a plummeting anvil, and you might hope to be wearing brown pants!
Post by Ted Nolan <tednolan>
But yeah, the laws were good story generators.
Indeed.

The Zeroth law logically leads to a robo-dictatorship, where we
all get treated like pampered pets. Creepy.

Did Asimov ever write about anyone hacking Calvin's positronic
brains? If and when we get autonomous bots, that's what I'd worry
about. We've already had malware infecting automated household
appliances. Imagine a network of Microvacs infecting the Galactic
AC!

Kevin R
Lynn McGuire
2017-01-12 22:18:26 UTC
Post by Kevrob
Post by Ted Nolan <tednolan>
Post by David Johnston
Post by Dorothy J Heydt
Post by a***@gmail.com
The EU has come out with a report in which they have rules on how humans
interact with robots and AI. The report draws on Asimov's three laws of
robotics. Asimov was ahead of his time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human telling
*any* robot what to do.
And you'll probably have to tweak First Law some, else you get
Williamson's _The Humanoids_: how much judgment does the robot
get as to what can harm a human?
The whole "through inaction cause humans to come to harm" thing is
basically intended to generate problems. There's no reason to have it.
There was an in-story justification in one of the shorts. Something to the
effect a malicious robot could drop an anvil towards a human's head, knowing
full well he was fast enough to get there and catch it, and then choosing
not to. The action caused no harm..
No PHYSICAL harm. Maybe there needs to be a Law 1.1a: "nor shall
it by its action cause psychological harm, or cause a human to
harm itself"? You could hurt yourself pretty badly trying to avoid
a plummeting anvil, and you might hope to be wearing brown pants!
Post by Ted Nolan <tednolan>
But yeah, the laws were good story generators.
Indeed.
The Zeroth law logically leads to a robo-dictatorship, where we
all get treated like pampered pets. Creepy.
Did Asimov ever write about anyone hacking Calvin's positronic
brains? If and when we get autonomous bots, that's what I'd worry
about. We've already had malware infecting automated household
appliances. Imagine a network of Microvacs infecting the Galactic
AC!
Kevin R
Ransomware scares the you know what out of me in this case.

Lynn
David Johnston
2017-01-12 21:40:43 UTC
Post by Ted Nolan <tednolan>
Post by David Johnston
Post by Dorothy J Heydt
Post by a***@gmail.com
The EU has come out with a report in which they have rules on how humans
interact with robots and AI. The report draws on Asimov's three laws of
robotics. Asimov was ahead of his time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human telling
*any* robot what to do.
And you'll probably have to tweak First Law some, else you get
Williamson's _The Humanoids_: how much judgment does the robot
get as to what can harm a human?
The whole "through inaction cause humans to come to harm" thing is
basically intended to generate problems. There's no reason to have it.
There was an in-story justification in one of the shorts. Something to the
effect a malicious robot could drop an anvil towards a human's head, knowing
full well he was fast enough to get there and catch it, and then choosing
not to. The action caused no harm..
Ah yes, specious rationalization. On the part of the author.
Gene Wirchenko
2017-01-13 03:55:28 UTC
On Thu, 12 Jan 2017 11:52:37 -0700, David Johnston
<***@yahoo.com> wrote:

[snip]
Post by David Johnston
The whole "through inaction cause humans to come to harm" thing is
basically intended to generate problems. There's no reason to have it.
That was covered in one story. A robot could release a heavy
weight above a human knowing it could stop the weight from hitting the
human. Without the inaction law, once the robot released the weight,
it would be under no obligation to stop the weight from hitting the
human.

Sincerely,

Gene Wirchenko
David Johnston
2017-01-13 06:28:58 UTC
Post by Gene Wirchenko
On Thu, 12 Jan 2017 11:52:37 -0700, David Johnston
[snip]
Post by David Johnston
The whole "through inaction cause humans to come to harm" thing is
basically intended to generate problems. There's no reason to have it.
That was covered in one story. A robot could release a heavy
weight above a human knowing it could stop the weight from hitting the
human.
Except that it would know it had no such intention. The potential
problems in having robots that run amuck in attempts to "save humans
from danger" and can't be ordered to stand down are infinitely greater
than the possibility that robots would be capable of that degree of
double-think.

Post by Gene Wirchenko
Without the inaction law, once the robot released the weight,
it would be under no obligation to stop the weight from hitting the
human.
Gene Wirchenko
2017-01-13 18:43:31 UTC
On Thu, 12 Jan 2017 23:28:58 -0700, David Johnston
Post by David Johnston
Post by Gene Wirchenko
On Thu, 12 Jan 2017 11:52:37 -0700, David Johnston
[snip]
Post by David Johnston
The whole "through inaction cause humans to come to harm" thing is
basically intended to generate problems. There's no reason to have it.
That was covered in one story. A robot could release a heavy
weight above a human knowing it could stop the weight from hitting the
human.
Except that it would know it had no such intention. The potential
Not necessarily the case. What if, in the split-second after,
priorities change?
Post by David Johnston
problems in having robots that run amuck in attempts to "save humans
from danger" and can't be ordered to stand down are infinitely greater
than the possibility that robots would be capable of that degree of
double-think.
Without the inaction law, once the robot released the weight,
Post by Gene Wirchenko
it would be under no obligation to stop the weight from hitting the
human.
Sincerely,

Gene Wirchenko
David Johnston
2017-01-13 20:03:34 UTC
Post by Gene Wirchenko
On Thu, 12 Jan 2017 23:28:58 -0700, David Johnston
Post by David Johnston
Post by Gene Wirchenko
On Thu, 12 Jan 2017 11:52:37 -0700, David Johnston
[snip]
Post by David Johnston
The whole "through inaction cause humans to come to harm" thing is
basically intended to generate problems. There's no reason to have it.
That was covered in one story. A robot could release a heavy
weight above a human knowing it could stop the weight from hitting the
human.
Except that it would know it had no such intention. The potential
Not necessarily the case. What if in the split=second after,
priorities change?
If a robot was dropping an anvil for some totally legitimate reason in
the first place, then a human happening to end up standing under it
while that happened is the kind of accident that happens all the time in
industrial settings and can be handled with mundane safety protocols and
instructions.
Dimensional Traveler
2017-01-13 21:24:08 UTC
Post by David Johnston
Post by Gene Wirchenko
On Thu, 12 Jan 2017 23:28:58 -0700, David Johnston
Post by David Johnston
Post by Gene Wirchenko
On Thu, 12 Jan 2017 11:52:37 -0700, David Johnston
[snip]
Post by David Johnston
The whole "through inaction cause humans to come to harm" thing is
basically intended to generate problems. There's no reason to have it.
That was covered in one story. A robot could release a heavy
weight above a human knowing it could stop the weight from hitting the
human.
Except that it would know it had no such intention. The potential
Not necessarily the case. What if, in the split-second after,
priorities change?
If a robot was dropping an anvil for some totally legitimate reason in
the first place, then a human happening to end up standing under it is
the kind of accident that happens all the time in industrial settings
and can be handled with mundane safety protocols and instructions.
"Mundane" safety protocols and instructions do not cover context and
background learning that every human goes thru over the 15 to 20 years
before they start their first job. This is the big hang-up with
programming robots and such. There are so many layers of information
and experience behind the any procedures written for humans that we are
not even consciously aware of. Trying to become aware of them and then
write them in ways that cover every possible contingency just is not
realistic.
--
Running the rec.arts.TV Channels Watched Survey.
Winter 2016 survey began Dec 01 and will end Feb 28
David Johnston
2017-01-13 21:45:32 UTC
Permalink
Post by Dimensional Traveler
Post by David Johnston
Post by Gene Wirchenko
On Thu, 12 Jan 2017 23:28:58 -0700, David Johnston
Post by David Johnston
Post by Gene Wirchenko
On Thu, 12 Jan 2017 11:52:37 -0700, David Johnston
[snip]
Post by David Johnston
The whole "through inaction cause humans to come to harm" thing is
basically intended to generate problems. There's no reason to have it.
That was covered in one story. A robot could release a heavy
weight above a human knowing it could stop the weight from hitting the
human.
Except that it would know it had no such intention. The potential
Not necessarily the case. What if, in the split-second after,
priorities change?
If a robot was dropping an anvil for some totally legitimate reason in
the first place, then a human happening to end up standing under it is
the kind of accident that happens all the time in industrial settings
and can be handled with mundane safety protocols and instructions.
"Mundane" safety protocols and instructions do not cover context and
background learning that every human goes thru over the 15 to 20 years
before they start their first job. This is the big hang-up with
programming robots and such. There are so many layers of information
and experience behind the any procedures written for humans that we are
not even consciously aware of. Trying to become aware of them and then
write them in ways that cover every possible contingency just is not
realistic.
I was thinking in terms of "Don't stand underneath Anvilbot"
instructions for the human. Expecting the robot to do all the work is
unreasonable.
Gutless Umbrella Carrying Sissy
2017-01-13 20:48:48 UTC
Permalink
Post by David Johnston
Expecting the robot to do all the
work is unreasonable.
While I agree, that's *not* what the people wanting billions in
venture capital to develop better robots are saying. Especially for
self-driving cars.
--
Terry Austin

Vacation photos from Iceland:
https://plus.google.com/u/0/collection/QaXQkB

"Terry Austin: like the polio vaccine, only with more asshole."
-- David Bilek

Jesus forgives sinners, not criminals.
Kevrob
2017-01-13 22:59:01 UTC
Permalink
Post by David Johnston
I was thinking in terms of "Don't stand underneath Anvilbot"
instructions for the human. Expecting the robot to do all the work is
unreasonable.
Anvilbot has to be an Acme product.

Kevin R
Gutless Umbrella Carrying Sissy
2017-01-13 22:06:35 UTC
Permalink
Post by Kevrob
Post by David Johnston
I was thinking in terms of "Don't stand underneath Anvilbot"
instructions for the human. Expecting the robot to do all the
work is unreasonable.
Anvilbot has to be an Acme product.
I note that a Google search for "anvilbot" produces nearly 600
results, and they're not all the same thing.
--
Terry Austin

Vacation photos from Iceland:
https://plus.google.com/u/0/collection/QaXQkB

"Terry Austin: like the polio vaccine, only with more asshole."
-- David Bilek

Jesus forgives sinners, not criminals.
Dimensional Traveler
2017-01-13 23:01:25 UTC
Permalink
Post by David Johnston
Post by Dimensional Traveler
Post by David Johnston
Post by Gene Wirchenko
On Thu, 12 Jan 2017 23:28:58 -0700, David Johnston
Post by David Johnston
Post by Gene Wirchenko
On Thu, 12 Jan 2017 11:52:37 -0700, David Johnston
[snip]
Post by David Johnston
The whole "through inaction cause humans to come to harm" thing is
basically intended to generate problems. There's no reason to have it.
That was covered in one story. A robot could release a heavy
weight above a human knowing it could stop the weight from hitting the
human.
Except that it would know it had no such intention. The potential
Not necessarily the case. What if, in the split-second after,
priorities change?
If a robot was dropping an anvil for some totally legitimate reason in
the first place, then a human happening to end up standing under it is
the kind of accident that happens all the time in industrial settings
and can be handled with mundane safety protocols and instructions.
"Mundane" safety protocols and instructions do not cover context and
background learning that every human goes thru over the 15 to 20 years
before they start their first job. This is the big hang-up with
programming robots and such. There are so many layers of information
and experience behind the any procedures written for humans that we are
not even consciously aware of. Trying to become aware of them and then
write them in ways that cover every possible contingency just is not
realistic.
I was thinking in terms of "Don't stand underneath Anvilbot"
instructions for the human. Expecting the robot to do all the work is
unreasonable.
True. But you know why it's so hard to make something fool-proof, don't
you? The (human) fools are so ingenious at getting into trouble.
--
Running the rec.arts.TV Channels Watched Survey.
Winter 2016 survey began Dec 01 and will end Feb 28
Gutless Umbrella Carrying Sissy
2017-01-13 22:07:15 UTC
Permalink
Post by Dimensional Traveler
Post by David Johnston
Post by Dimensional Traveler
Post by David Johnston
Post by Gene Wirchenko
On Thu, 12 Jan 2017 23:28:58 -0700, David Johnston
Post by David Johnston
Post by Gene Wirchenko
On Thu, 12 Jan 2017 11:52:37 -0700, David Johnston
[snip]
Post by David Johnston
The whole "through inaction cause humans to come to harm"
thing is basically intended to generate problems.
There's no reason to have it.
That was covered in one story. A robot could release a heavy
weight above a human knowing it could stop the weight from hitting the
human.
Except that it would know it had no such intention. The
potential
Not necessarily the case. What if, in the split-second after,
priorities change?
If a robot was dropping an anvil for some totally legitimate
reason in the first place, then a human happening to end up
standing under it is the kind of accident that happens all the
time in industrial settings and can be handled with mundane
safety protocols and instructions.
"Mundane" safety protocols and instructions do not cover
context and background learning that every human goes thru
over the 15 to 20 years before they start their first job.
This is the big hang-up with programming robots and such.
There are so many layers of information and experience behind
the any procedures written for humans that we are not even
consciously aware of. Trying to become aware of them and then
write them in ways that cover every possible contingency just
is not realistic.
I was thinking in terms of "Don't stand underneath Anvilbot"
instructions for the human. Expecting the robot to do all the
work is unreasonable.
True. But you know why it's so hard to make something fool-proof,
don't you? The (human) fools are so ingenious at getting into
trouble.
"If you make something idiot proof, somebody will invent a better
idiot."
--
Terry Austin

Vacation photos from Iceland:
https://plus.google.com/u/0/collection/QaXQkB

"Terry Austin: like the polio vaccine, only with more asshole."
-- David Bilek

Jesus forgives sinners, not criminals.
Gutless Umbrella Carrying Sissy
2017-01-13 20:47:13 UTC
Permalink
Post by Dimensional Traveler
Post by David Johnston
Post by Gene Wirchenko
On Thu, 12 Jan 2017 23:28:58 -0700, David Johnston
Post by David Johnston
Post by Gene Wirchenko
On Thu, 12 Jan 2017 11:52:37 -0700, David Johnston
[snip]
Post by David Johnston
The whole "through inaction cause humans to come to harm"
thing is basically intended to generate problems. There's
no reason to have it.
That was covered in one story. A robot could release a heavy
weight above a human knowing it could stop the weight from
hitting the human.
Except that it would know it had no such intention. The
potential
Not necessarily the case. What if, in the split-second after,
priorities change?
If a robot was dropping an anvil for some totally legitimate
reason in the first place, then a human happening to end up
standing under it is the kind of accident that happens all the
time in industrial settings and can be handled with mundane
safety protocols and instructions.
"Mundane" safety protocols and instructions do not cover context
and background learning that every human goes thru over the 15
to 20 years before they start their first job. This is the big
hang-up with programming robots and such. There are so many
layers of information and experience behind the any procedures
written for humans that we are not even consciously aware of.
Trying to become aware of them and then write them in ways that
cover every possible contingency just is not realistic.
Which is why this entire discussion is irrelevant to real life. For
the three laws to matter, robots need *judgement*, human-level
judgement. And there's no evidence that we are even beginning to
develop that ability. (I'm not sure this rule doesn't apply to
self-driving cars, too. While there is a large potential for them in
certain, controlled circumstances, manual driving won't be going
away during our lifetimes.)
--
Terry Austin

Vacation photos from Iceland:
https://plus.google.com/u/0/collection/QaXQkB

"Terry Austin: like the polio vaccine, only with more asshole."
-- David Bilek

Jesus forgives sinners, not criminals.
Robert Carnegie
2017-01-13 08:26:06 UTC
Permalink
Post by David Johnston
Post by Dorothy J Heydt
Post by a***@gmail.com
The EU has come out with a report in which they have rules on how humans
interact with robots and AI. The report draws on Asimov's three laws of
robotics. Asimov was ahead of his time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human telling
*any* robot what to do.
And you'll probably have to tweak First Law some, else you get
Williamson's _The Humanoids_: how much judgment does the robot
get as to what can harm a human?
The whole "through inaction cause humans to come to harm" thing is
basically intended to generate problems. There's no reason to have it.
IIRC, how Asimov told it is that John Campbell
inferred the Laws from what Asimov made robots do
in his early robot stories. The robot imperative
to act to save a human being from harm features
in the first, "Robbie", where the robot is a
nursemaid - and children put themselves in harm's
way a /lot/. So I suppose that the second part
of the First Law comes from that.

As for robots receiving unauthorised orders...
I think I proposed last time that a robot takes
into account the harm to its owner of being
deprived of an expensive robot's services
if it obeys unauthorised orders. But this
isn't quite right because the robots are all
owned by the company. So maybe the First Law
is that a robot may not harm U.S. Robotics, Inc.
As for harm to human beings, well that would
get the company sued. Harm to the company.

As would a Trolley Problem, usually.

I wonder how my theory applies to the safety of
a person who is /already/ suing the robot company.
David Johnston
2017-01-13 20:11:19 UTC
Permalink
Post by Robert Carnegie
Post by David Johnston
Post by Dorothy J Heydt
Post by a***@gmail.com
The EU has come out with a report in which they have rules on how humans
interact with robots and AI. The report draws on Asimov's three laws of
robotics. Asimov was ahead of his time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human telling
*any* robot what to do.
And you'll probably have to tweak First Law some, else you get
Williamson's _The Humanoids_: how much judgment does the robot
get as to what can harm a human?
The whole "through inaction cause humans to come to harm" thing is
basically intended to generate problems. There's no reason to have it.
IIRC, how Asimov told it is that John Campbell
inferred the Laws from what Asimov made robots do
in his early robot stories. The robot imperative
to act to save a human being from harm features
in the first, "Robbie", where the robot is a
nursemaid - and children put themselves in harm's
way a /lot/.
Note that this means Robbie was just doing his job. You don't need
robots who ignore orders in order to physically restrain humans from
bungee jumping to get a lifeguard bot that will go out and rescue
distressed swimmers. That's just his job. The tricky part is teaching
him how to identify the difference between distressed swimmers and
people playing with a beach ball in the water.


Post by Robert Carnegie
So I suppose that the second part
of the First Law comes from that.
As for robots receiving unauthorised orders...
I think I proposed last time that a robot takes
into account the harm to its owner of being
deprived of an expensive robot's services
if it obeys unauthorised orders. But this
isn't quite right because the robots are all
owned by the company.
The lessee also has an interest in retaining the services of the bot
he's paying for. But that line of reasoning requires a robot with the
kind of understanding of economics you DON'T want in a First Law robot.
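To put a number on how tricky that lifeguard-bot judgment call is, here is a
toy sketch in Python (every feature name, weight, and threshold below is
invented for illustration, not taken from any real system): wherever the
threshold sits, the borderline cases land on the wrong side of it.

# Purely illustrative sketch: a lifeguard bot deciding "distressed or playing?"
# from a few invented observation features. A real system would use trained
# models and far richer sensor data; the point is only that any threshold
# misclassifies borderline cases (a flailing beach-ball game looks "distressed").

def distress_score(head_underwater_fraction, arm_motion_regularity, drift_from_group_m):
    """Combine invented features into a 0..1 'distress' score."""
    score = 0.0
    score += 0.5 * head_underwater_fraction            # often submerged -> more likely distress
    score += 0.3 * (1.0 - arm_motion_regularity)       # erratic, vertical flailing -> distress
    score += 0.2 * min(drift_from_group_m / 20.0, 1.0) # drifting away alone -> distress
    return score

def should_intervene(score, threshold=0.6):
    # Too low a threshold: the bot "rescues" beach-ball players all day.
    # Too high: it watches a quiet drowning (drowning is often undramatic).
    return score >= threshold

if __name__ == "__main__":
    print(should_intervene(distress_score(0.8, 0.2, 15.0)))  # likely distress -> True
    print(should_intervene(distress_score(0.3, 0.3, 2.0)))   # rough play -> False, purely by threshold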
Lynn McGuire
2017-01-12 19:13:26 UTC
Permalink
Post by Dorothy J Heydt
Post by a***@gmail.com
The EU has come out with a report in which they have rules on how humans
interact with robots and AI. The report draws on Asimov's three laws of
robotics. Asimov was ahead of his time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human telling
*any* robot what to do.
<snipped>

Yes, you do want *any* human telling *any* robot what to do. "Get off me" is the most common phrase screamed at industrial robots.
The second most common phrase is "let me go". Sadly, industrial robots rarely have hearing capability.

Lynn
Gutless Umbrella Carrying Sissy
2017-01-12 18:30:22 UTC
Permalink
Post by Lynn McGuire
In article
Post by a***@gmail.com
The EU has come out with a report in which they have rules on
how humans interact with robots and AI. The report draws on
Asimov's three laws of robotics. Asimov was ahead of his
time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human
telling *any* robot what to do.
<snipped>
Yes, you do want *any* human telling *any* robot what to do.
You don't read Freefall, do you?
--
Terry Austin

Vacation photos from Iceland:
https://plus.google.com/u/0/collection/QaXQkB

"Terry Austin: like the polio vaccine, only with more asshole."
-- David Bilek

Jesus forgives sinners, not criminals.
Lynn McGuire
2017-01-12 19:35:46 UTC
Permalink
Post by Gutless Umbrella Carrying Sissy
Post by Lynn McGuire
In article
Post by a***@gmail.com
The EU has come out with a report in which they have rules on
how humans interact with robots and AI. The report draws on
Asimov's three laws of robotics. Asimov was ahead of his
time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human
telling *any* robot what to do.
<snipped>
Yes, you do want *any* human telling *any* robot what to do.
You don't read Freefall, do you?
Yes, I do.

My Mechanical Engineering magazine had an article about industrial robots five or ten years ago (my time sense is gone). The
statistics on people getting run over or grabbed and assembled by industrial robots were grim.

Imagine 200 to 500 lb robots running around the place. The avoidance algorithm for robots will need to be without fault. Better
than the human variant. And no grabbing.
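For what it's worth, the arithmetic of a keep-out rule is the easy part; here is a sketch (Python, with invented numbers) of the "stop if anyone is within reach" check that would have to be right on every single control cycle:

import math

# Sketch only: the simplest possible keep-out rule for a heavy mobile robot.
# The numbers are invented. The hard part isn't this arithmetic -- it's that
# the perception feeding `people_xy` must never miss anyone, ever.

SAFETY_RADIUS_M = 1.5

def too_close(robot_xy, people_xy):
    rx, ry = robot_xy
    return any(math.hypot(px - rx, py - ry) < SAFETY_RADIUS_M for px, py in people_xy)

def next_command(robot_xy, people_xy, planned_velocity):
    if too_close(robot_xy, people_xy):
        return (0.0, 0.0)          # stop; no grabbing, no "finishing the move"
    return planned_velocity

if __name__ == "__main__":
    print(next_command((0, 0), [(4.0, 0.0)], (0.5, 0.0)))  # clear -> keep moving
    print(next_command((0, 0), [(1.0, 0.5)], (0.5, 0.0)))  # person close -> stop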

Lynn
Gutless Umbrella Carrying Sissy
2017-01-12 20:19:30 UTC
Permalink
Post by Lynn McGuire
Post by Gutless Umbrella Carrying Sissy
Post by Lynn McGuire
In article
Post by a***@gmail.com
The EU has come out with a report in which they have rules
on how humans interact with robots and AI. The report draws
on Asimov's three laws of robotics. Asimov was ahead of his
time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human
telling *any* robot what to do.
<snipped>
Yes, you do want *any* human telling *any* robot what to do.
You don't read Freefall, do you?
Yes, I do.
Then you just don't understand it. Note the absence of a question
mark.
Post by Lynn McGuire
My Mechanical Engineering magazine had an article about
industrial robots five or ten years ago (my time sense is gone).
The statistics on people getting run over or grabbed and
assembled by industrial robots was grim.
That does not equate to "any human any robot any time."
Post by Lynn McGuire
Imagine 200 to 500 lb robots running around the place.
Imagine Joe the Thief telling your robot to bring him your bank
account details. Or your car.

That there are circumstances where any human _who is there_ needs
to be able to give _specific_ instructions to any robot _that is
there_ is a far cry from the 2nd law.
--
Terry Austin

Vacation photos from Iceland:
https://plus.google.com/u/0/collection/QaXQkB

"Terry Austin: like the polio vaccine, only with more asshole."
-- David Bilek

Jesus forgives sinners, not criminals.
J. Clarke
2017-01-13 00:13:45 UTC
Permalink
Post by Lynn McGuire
Post by Gutless Umbrella Carrying Sissy
Post by Lynn McGuire
In article
Post by a***@gmail.com
The EU has come out with a report in which they have rules on
how humans interact with robots and AI. The report draws on
Asimov's three laws of robotics. Asimov was ahead of his
time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human
telling *any* robot what to do.
<snipped>
Yes, you do want *any* human telling *any* robot what to do.
You don't read Freefall, do you?
Yes, I do.
My Mechanical Engineering magazine had an article about industrial robots five or ten years ago (my time sense is gone). The
statistics on people getting run over or grabbed and assembled by industrial robots were grim.
Imagine 200 to 500 lb robots running around the place. The avoidance algorithm for robots will need to be without fault. Better
than the human variant. And no grabbing.
It is said that the robot loader in certain
Russian tanks had a tendency to try to load the
gunner, to the detriment of same.
Dorothy J Heydt
2017-01-12 20:19:01 UTC
Permalink
Post by Gutless Umbrella Carrying Sissy
Post by Lynn McGuire
In article
Post by a***@gmail.com
The EU has come out with a report in which they have rules on
how humans interact with robots and AI. The report draws on
Asimov's three laws of robotics. Asimov was ahead of his
time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human
telling *any* robot what to do.
<snipped>
Yes, you do want *any* human telling *any* robot what to do.
You don't read Freefall, do you?
Yes, I do. Note that the robot problem there is in the process
of solving itself.
--
Dorothy J. Heydt
Vallejo, California
djheydt at gmail dot com
Gutless Umbrella Carrying Sissy
2017-01-12 20:22:11 UTC
Permalink
Post by Lynn McGuire
Post by Gutless Umbrella Carrying Sissy
Post by Lynn McGuire
In article
Post by a***@gmail.com
The EU has come out with a report in which they have rules on
how humans interact with robots and AI. The report draws on
Asimov's three laws of robotics. Asimov was ahead of his
time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human
telling *any* robot what to do.
<snipped>
Yes, you do want *any* human telling *any* robot what to do.
You don't read Freefall, do you?
Yes, I do.
That was to Lynn, who claims he does, but clearly doesn't
understand it.
Post by Lynn McGuire
Note that the robot problem there is in the process
of solving itself.
And not by "any human any robot any time." Since the author's not
an idiot.
--
Terry Austin

Vacation photos from Iceland:
https://plus.google.com/u/0/collection/QaXQkB

"Terry Austin: like the polio vaccine, only with more asshole."
-- David Bilek

Jesus forgives sinners, not criminals.
Dorothy J Heydt
2017-01-12 20:18:23 UTC
Permalink
Post by Lynn McGuire
Post by Dorothy J Heydt
Post by a***@gmail.com
The EU has come out with a report in which they have rules on how humans
interact with robots and AI. The report draws on Asimov's three laws of
robotics. Asimov was ahead of his time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human telling
*any* robot what to do.
<snipped>
Yes, you do want *any* human telling *any* robot what to do. "Get off
me" is the most common phrase screamed at industrial robots.
The second most common phrase is "let me go". Sadly, industrial robots
rarely have hearing capability.
Awww. And my WIP practically begins with a couple of
AIs-who-are-not-very-I being instructed to grasp and hold loose
cargo, which they do, not realizing that the "cargo" is three
humans who are trying to attack and loot the transport on which
the bots are riding. They are not, as the protagonist notes at
the time, the sharpest relays in the rack.
--
Dorothy J. Heydt
Vallejo, California
djheydt at gmail dot com
Dimensional Traveler
2017-01-12 22:12:27 UTC
Permalink
Post by Dorothy J Heydt
Post by Lynn McGuire
Post by Dorothy J Heydt
Post by a***@gmail.com
The EU has come out with a report in which they have rules on how humans
interact with robots and AI. The report draws on Asimov's three laws of
robotics. Asimov was ahead of his time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human telling
*any* robot what to do.
<snipped>
Yes, you do want *any* human telling *any* robot what to do. "Get off
me" is the most common phrase screamed at industrial robots.
The second most common phrase is "let me go". Sadly, industrial robots
rarely have hearing capability.
Awww. And my WIP practically begins with a couple of
AIs-who-are-not-very-I being instructed to grasp and hold loose
cargo, which they do, not realizing that the "cargo" is three
humans who are trying to attack and loot the transport on which
the bots are riding. They are not, as the protagonist notes at
the time, the sharpest relays in the rack.
The real issue is authorization. Some commands you want anyone to be
able to give. Some commands you only want some people to be able to
give. Some people you don't want to be able to give any commands. (And
yes, there is a conflict there.) Trying to get the hierarchy of "these
people can give these kinds of commands at these times" in all its
variations is very difficult to get right even after you can get the
robots to understand all the variations in HOW to give just one command.
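As a toy illustration of that hierarchy (Python; the roles, ranks, command
names, and the emergency exception are all invented here), even the simplest
version immediately sprouts special cases:

# Toy sketch of per-command authorization for a robot. Everything here
# is made up to illustrate the hierarchy problem, not taken from any real system.

ROLE_RANK = {"owner": 3, "employee": 2, "guest": 1, "unknown": 0}

# Minimum rank needed to issue each command...
COMMAND_MIN_RANK = {
    "release_grip": 1,   # anyone may say "let me go"
    "halt": 1,           # anyone may say "get off me"
    "move_to": 2,
    "unlock_door": 3,
}

# ...except that some commands are always honored in an emergency,
# which immediately reopens the "who decides it's an emergency?" question.
ALWAYS_ALLOWED_IN_EMERGENCY = {"halt", "release_grip"}

def authorized(role, command, emergency=False):
    if emergency and command in ALWAYS_ALLOWED_IN_EMERGENCY:
        return True
    needed = COMMAND_MIN_RANK.get(command)
    if needed is None:
        return False  # default "no" for commands nobody anticipated
    return ROLE_RANK.get(role, 0) >= needed

if __name__ == "__main__":
    print(authorized("guest", "halt"))                             # True
    print(authorized("guest", "unlock_door"))                      # False
    print(authorized("unknown", "release_grip", emergency=True))   # True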
--
Running the rec.arts.TV Channels Watched Survey.
Winter 2016 survey began Dec 01 and will end Feb 28
Gutless Umbrella Carrying Sissy
2017-01-12 21:22:04 UTC
Permalink
Post by Dimensional Traveler
Post by Dorothy J Heydt
Post by Lynn McGuire
In article
Post by a***@gmail.com
The EU has come out with a report in which they have rules
on how humans interact with robots and AI. The report draws
on Asimov's three laws of robotics. Asimov was ahead of his
time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human
telling *any* robot what to do.
<snipped>
Yes, you do want *any* human telling *any* robot what to do.
"Get off me" is the most common phrase screamed at industrial
robots. The second most common phrase is "let me go". Sadly,
industrial robots rarely have hearing capability.
Awww. And my WIP practically begins with a couple of
AIs-who-are-not-very-I being instructed to grasp and hold loose
cargo, which they do, not realizing that the "cargo" is three
humans who are trying to attack and loot the transport on which
the bots are riding. They are not, as the protagonist notes at
the time, the sharpest relays in the rack.
The real issue is authorization. Some commands you want anyone
to be able to give. Some commands you only want some people to
be able to give. Some people you don't want to be able to give
any commands. (And yes, there is a conflict there.) Trying to
get the hierarchy of "these people can give these kinds of
commands at these times" in all its variations is very difficult
to get right even after you can get the robots to understand all
the variations in HOW to give just one command.
There's also a problem with commands you don't anticipate anyone
might give, in one or both of the first two categories. Default
"no" means people without high-level authorization who need
something specific from a robot can only get what was expected. You
anticipated "Cease all function immediately," sure, but did you
anticipate "Jump in front of the terrorists' explosive-laden van"?
Default "yes" means that anybody can order the robot to do anything
the programmers didn't think to prohibit. You anticipated "Do not
follow orders to kill someone," but did you anticipate "Do not
follow orders to unplug the life support machine," which is not an
action directly upon a human? Even the third category can be fuzzy,
with "default yes" and "default no" having similar opportunity for
confusion. What happens when the CEO of the company gets fired, a la
Robocop?
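Both failure modes fit in a few lines. A sketch (Python; the whitelist and
blacklist contents are made up) of the two defaults, each of which only
covers what somebody thought of in advance:

# Sketch of the two default policies for unanticipated commands.
# Both lists are, by definition, only what someone thought of ahead of time.

ANTICIPATED_ALLOWED = {"cease all function", "fetch toolbox"}
ANTICIPATED_FORBIDDEN = {"kill someone", "throw me off the building"}

def default_no(command):
    # Refuses everything not on the whitelist -- including
    # "jump in front of the terrorists' van", which might save lives.
    return command in ANTICIPATED_ALLOWED

def default_yes(command):
    # Obeys everything not on the blacklist -- including
    # "unplug that cord", which is not an action directly upon a human.
    return command not in ANTICIPATED_FORBIDDEN

if __name__ == "__main__":
    for cmd in ("cease all function", "jump in front of the van", "unplug that cord"):
        print(cmd, "| default-no:", default_no(cmd), "| default-yes:", default_yes(cmd))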
--
Terry Austin

Vacation photos from Iceland:
https://plus.google.com/u/0/collection/QaXQkB

"Terry Austin: like the polio vaccine, only with more asshole."
-- David Bilek

Jesus forgives sinners, not criminals.
Gene Wirchenko
2017-01-13 18:48:56 UTC
Permalink
On Thu, 12 Jan 2017 14:22:04 -0700, Gutless Umbrella Carrying Sissy
<***@gmail.com> wrote:

[snip]
Post by Gutless Umbrella Carrying Sissy
There's also a problem with commands you don't anticipate anyone
might give, in one or both of the first two categories. Default
"no" means people without high level authorizaiton, who need
something specific from a robot can only get what was expected. You
anticipated "Cease all fucntion immediately," sure, but did you
anticipate "Jump in front of the terrorists' explosive laden van"?
Default "yes" means that anybody can order the robot to do anything
the programmers didn't think to prohibit. You anticipated "Do no
follow orders to kill someone," but did you anticipate "Do not
follow orders to unplug the life support machine," which is not an
And an order that is vaguer. "Unplug that cord." (which is for a
life support machine, but the robot does not know that). You and I
might check what is going to get unplugged. Would a robot? Should a
robot?
Post by Gutless Umbrella Carrying Sissy
action directly upon a human? Even the third category can be fuzzy,
with "default yes" and "default no" having similiar opportunity for
confusion. What happens when the CEO of the company gets fired, ala
Robocop?
What happens if all of the people [authorised to authorise [who
can order robots and what they can order]] get fired? I hope the
brackets clarify what modifies what.

Sincerely,

Gene Wirchenko
Gutless Umbrella Carrying Sissy
2017-01-13 17:53:58 UTC
Permalink
Post by Gene Wirchenko
On Thu, 12 Jan 2017 14:22:04 -0700, Gutless Umbrella Carrying
[snip]
Post by Gutless Umbrella Carrying Sissy
There's also a problem with commands you don't anticipate anyone
might give, in one or both of the first two categories. Default
"no" means people without high level authorizaiton, who need
something specific from a robot can only get what was expected.
You anticipated "Cease all fucntion immediately," sure, but did
you anticipate "Jump in front of the terrorists' explosive laden
van"? Default "yes" means that anybody can order the robot to do
anything the programmers didn't think to prohibit. You
anticipated "Do no follow orders to kill someone," but did you
anticipate "Do not follow orders to unplug the life support
machine," which is not an
And an order that is vaguer. "Unplug that cord." (which is for a
life support machine, but the robot does not know that). You
and I might check what is going to get unplugged. Would a
robot? Should a robot?
And more important, can you anticipate that it should or shouldn't?
Post by Gene Wirchenko
Post by Gutless Umbrella Carrying Sissy
action directly upon a human? Even the third category can be
fuzzy, with "default yes" and "default no" having similiar
opportunity for confusion. What happens when the CEO of the
company gets fired, ala Robocop?
What happens if all of the people [authorised to authorise [who
can order robots and what they can order]] get fired? I hope
the brackets clarify what modifies what.
Hence the current discussion in the EU about kill switches, I
guess. The bottom line is, if you have access to the hardware, you
*can* ultimately control the behavior.
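A rough sketch of why that's the bottom line (Python; the control loop and
e-stop flag here are hypothetical, not any real robot's API): a software kill
switch is just a flag the control loop agrees to keep checking, while a
hardware cut-off doesn't need the software's cooperation at all.

import threading
import time

# Hypothetical sketch: a software "kill switch" is only as good as the
# control loop's willingness to check it. A hardware e-stop that cuts
# power doesn't depend on the software cooperating.

class Robot:
    def __init__(self):
        self.estop = threading.Event()   # the flag a software kill switch amounts to

    def control_loop(self):
        while not self.estop.is_set():   # the loop must keep checking the flag
            # ... plan and execute one small motion step ...
            time.sleep(0.1)
        self.brake()                     # fail safe once the flag is seen

    def brake(self):
        print("actuators disabled")

if __name__ == "__main__":
    bot = Robot()
    t = threading.Thread(target=bot.control_loop)
    t.start()
    time.sleep(0.3)
    bot.estop.set()                      # anyone with access can throw the switch
    t.join()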
--
Terry Austin

Vacation photos from Iceland:
https://plus.google.com/u/0/collection/QaXQkB

"Terry Austin: like the polio vaccine, only with more asshole."
-- David Bilek

Jesus forgives sinners, not criminals.
Dorothy J Heydt
2017-01-13 00:11:15 UTC
Permalink
Post by Dimensional Traveler
Post by Dorothy J Heydt
Post by Lynn McGuire
Post by Dorothy J Heydt
Post by a***@gmail.com
The EU has come out with a report in which they have rules on how humans
interact with robots and AI. The report draws on Asimov's three laws of
robotics. Asimov was ahead of his time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human telling
*any* robot what to do.
<snipped>
Yes, you do want *any* human telling *any* robot what to do. "Get off
me" is the most common phrase screamed at industrial robots.
The second most common phrase is "let me go". Sadly, industrial robots
rarely have hearing capability.
Awww. And my WIP practically begins with a couple of
AIs-who-are-not-very-I being instructed to grasp and hold loose
cargo, which they do, not realizing that the "cargo" is three
humans who are trying to attack and loot the transport on which
the bots are riding. They are not, as the protagonist notes at
the time, the sharpest relays in the rack.
The real issue is authorization.
Oh, yes. The protagonist begins by giving the bots his
authorization code (which is pretty high, since he is both a
Patrol officer and an engineer).
--
Dorothy J. Heydt
Vallejo, California
djheydt at gmail dot com
David Johnston
2017-01-12 21:09:27 UTC
Permalink
Post by Lynn McGuire
Post by Dorothy J Heydt
Post by a***@gmail.com
The EU has come out with a report in which they have rules on how humans
interact with robots and AI. The report draws on Asimov's three laws of
robotics. Asimov was ahead of his time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human telling
*any* robot what to do.
<snipped>
Yes, you do want *any* human telling *any* robot what to do. "Get off
me" is the most common phrase screamed at industrial robots. The second
most common phrase is "let me go". Sadly, industrial robots rarely have
hearing capability.
You do not however, want orders by unauthorized users taking equal
priority with orders by owners.
Lynn McGuire
2017-01-12 22:17:22 UTC
Permalink
Post by Lynn McGuire
Post by Dorothy J Heydt
Post by a***@gmail.com
The EU has come out with a report in which they have rules on how humans
interact with robots and AI. The report draws on Asimov's three laws of
robotics. Asimov was ahead of his time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human telling
*any* robot what to do.
<snipped>
Yes, you do want *any* human telling *any* robot what to do. "Get off
me" is the most common phrase screamed at industrial robots. The second
most common phrase is "let me go". Sadly, industrial robots rarely have
hearing capability.
You do not however, want orders by unauthorized users taking equal priority with orders by owners.
It is a tough call to make. Are you willing to be personally responsible for the actions of your robot ?

And the Rogue One robot was most interesting.

Lynn
Gutless Umbrella Carrying Sissy
2017-01-12 21:23:03 UTC
Permalink
Post by Lynn McGuire
Post by David Johnston
Post by Lynn McGuire
In article
Post by a***@gmail.com
The EU has come out with a report in which they have rules
on how humans interact with robots and AI. The report draws
on Asimov's three laws of robotics. Asimov was ahead of his
time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human
telling *any* robot what to do.
<snipped>
Yes, you do want *any* human telling *any* robot what to do.
"Get off me" is the most common phrase screamed at industrial
robots. The second most common phrase is "let me go". Sadly,
industrial robots rarely have hearing capability.
You do not however, want orders by unauthorized users taking
equal priority with orders by owners.
It is a tough call to make. Are you willing to be personally
responsible for the actions of your robot ?
Are you willing to be personally responsible for the actions of
your robot _that were ordered by a complete stranger_? That *is*
what you're proposing, here.
--
Terry Austin

Vacation photos from Iceland:
https://plus.google.com/u/0/collection/QaXQkB

"Terry Austin: like the polio vaccine, only with more asshole."
-- David Bilek

Jesus forgives sinners, not criminals.
Lynn McGuire
2017-01-12 22:41:00 UTC
Permalink
Post by Lynn McGuire
Post by Lynn McGuire
Post by David Johnston
Post by Lynn McGuire
In article
Post by a***@gmail.com
The EU has come out with a report in which they have rules
on how humans interact with robots and AI. The report draws
on Asimov's three laws of robotics. Asimov was ahead of his
time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human
telling *any* robot what to do.
<snipped>
Yes, you do want *any* human telling *any* robot what to do.
"Get off me" is the most common phrase screamed at industrial
robots. The second most common phrase is "let me go". Sadly,
industrial robots rarely have hearing capability.
You do not however, want orders by unauthorized users taking
equal priority with orders by owners.
It is a tough call to make. Are you willing to be personally
responsible for the actions of your robot ?
Are you willing to be personally responsible for the actions of
your robot _that were ordered by a complete stranger_? That *is*
what you're proposing, here.
The robot's black box will say what the stranger did to it.

But, of course the courts will probably ignore that he said "throw me off the building".

Lynn
Gutless Umbrella Carrying Sissy
2017-01-12 21:57:04 UTC
Permalink
Post by Lynn McGuire
Post by Lynn McGuire
Post by Lynn McGuire
Post by David Johnston
Post by Lynn McGuire
In article
Post by a***@gmail.com
The EU has come out with a report in which they have rules
on how humans interact with robots and AI. The report
draws on Asimov's three laws of robotics. Asimov was
ahead of his time.
But, as pointed out upthread, the Second and Third Laws
would need a great deal of tweaking; you do not want *any*
human telling *any* robot what to do.
<snipped>
Yes, you do want *any* human telling *any* robot what to do.
"Get off me" is the most common phrase screamed at
industrial robots. The second most common phrase is "let me
go". Sadly, industrial robots rarely have hearing
capability.
You do not however, want orders by unauthorized users taking
equal priority with orders by owners.
It is a tough call to make. Are you willing to be personally
responsible for the actions of your robot ?
Are you willing to be personally responsible for the actions of
your robot _that were ordered by a complete stranger_? That
*is* what you're proposing, here.
The robot's black box will say what the stranger did to it.
But it's *your* robot.
Post by Lynn McGuire
But, of course the courts will probably ignore that he said
"throw me off the building".
I'll worry about it if I live long enough for self-aware robots to
exist, which is exceedingly unlikely.
--
Terry Austin

Vacation photos from Iceland:
https://plus.google.com/u/0/collection/QaXQkB

"Terry Austin: like the polio vaccine, only with more asshole."
-- David Bilek

Jesus forgives sinners, not criminals.
Bill Dugan
2017-01-13 15:52:41 UTC
Permalink
On Thu, 12 Jan 2017 14:57:04 -0700, Gutless Umbrella Carrying Sissy
Post by Gutless Umbrella Carrying Sissy
Post by Lynn McGuire
Post by Lynn McGuire
Post by Lynn McGuire
Post by David Johnston
Post by Lynn McGuire
In article
Post by a***@gmail.com
The EU has come out with a report in which they have rules
on how humans interact with robots and AI. The report
draws on Asimov's three laws of robotics. Asimov was
ahead of his time.
But, as pointed out upthread, the Second and Third Laws
would need a great deal of tweaking; you do not want *any*
human telling *any* robot what to do.
<snipped>
Yes, you do want *any* human telling *any* robot what to do.
"Get off me" is the most common phrase screamed at
industrial robots. The second most common phrase is "let me
go". Sadly, industrial robots rarely have hearing
capability.
You do not however, want orders by unauthorized users taking
equal priority with orders by owners.
It is a tough call to make. Are you willing to be personally
responsible for the actions of your robot ?
Are you willing to be personally responsible for the actions of
your robot _that were ordered by a complete stranger_? That
*is* what you're proposing, here.
The robot's black box will say what the stranger did to it.
But it's *your* robot.
Post by Lynn McGuire
But, of course the courts will probably ignore that he said
"throw me off the building".
I'll worry about it if I live long enough for self-aware robots to
exist, which is exceedingly unlikely.
They don't have to be self-aware in any meaningful sense for such
problems to arise. Voice controlled home automation systems are
already having some of these authorization issues.
Gutless Umbrella Carrying Sissy
2017-01-13 16:22:41 UTC
Permalink
Post by Bill Dugan
On Thu, 12 Jan 2017 14:57:04 -0700, Gutless Umbrella Carrying
Post by Gutless Umbrella Carrying Sissy
Post by Lynn McGuire
Post by Lynn McGuire
Post by Lynn McGuire
Post by David Johnston
Post by Lynn McGuire
In article
Post by a***@gmail.com
The EU has come out with a report in which they have
rules on how humans interact with robots and AI. The
report draws on Asimov's three laws of robotics. Asimov
was ahead of his time.
But, as pointed out upthread, the Second and Third Laws
would need a great deal of tweaking; you do not want
*any* human telling *any* robot what to do.
<snipped>
Yes, you do want *any* human telling *any* robot what to
do. "Get off me" is the most common phrase screamed at
industrial robots. The second most common phrase is "let
me go". Sadly, industrial robots rarely have hearing
capability.
You do not however, want orders by unauthorized users
taking equal priority with orders by owners.
It is a tough call to make. Are you willing to be
personally responsible for the actions of your robot ?
Are you willing to be personally responsible for the actions
of your robot _that were ordered by a complete stranger_?
That *is* what you're proposing, here.
The robot's black box will say what the stranger did to it.
But it's *your* robot.
Post by Lynn McGuire
But, of course the courts will probably ignore that he said
"throw me off the building".
I'll worry about it if I live long enough for self-aware robots to
exist, which is exceedingly unlikely.
They don't have to be self-aware in any meaningful sense for
such problems to arise. Voice controlled home automation systems
are already having some of these authorization issues.
But aren't making decisions in any meaningful, three laws sense.
There is no computer in the world capable of doing anything other
than what it's programmed to do, and no indication there will be
one any time soon.
--
Terry Austin

Vacation photos from Iceland:
https://plus.google.com/u/0/collection/QaXQkB

"Terry Austin: like the polio vaccine, only with more asshole."
-- David Bilek

Jesus forgives sinners, not criminals.
Bill Dugan
2017-01-13 18:20:41 UTC
Permalink
On Fri, 13 Jan 2017 09:22:41 -0700, Gutless Umbrella Carrying Sissy
Post by Gutless Umbrella Carrying Sissy
Post by Bill Dugan
On Thu, 12 Jan 2017 14:57:04 -0700, Gutless Umbrella Carrying
Post by Gutless Umbrella Carrying Sissy
Post by Lynn McGuire
Post by Lynn McGuire
Post by Lynn McGuire
Post by David Johnston
Post by Lynn McGuire
In article
Post by a***@gmail.com
The EU has come out with a report in which they have
rules on how humans interact with robots and AI. The
report draws on Asimov's three laws of robotics. Asimov
was ahead of his time.
But, as pointed out upthread, the Second and Third Laws
would need a great deal of tweaking; you do not want
*any* human telling *any* robot what to do.
<snipped>
Yes, you do want *any* human telling *any* robot what to
do. "Get off me" is the most common phrase screamed at
industrial robots. The second most common phrase is "let
me go". Sadly, industrial robots rarely have hearing
capability.
You do not however, want orders by unauthorized users
taking equal priority with orders by owners.
It is a tough call to make. Are you willing to be
personally responsible for the actions of your robot ?
Are you willing to be personally responsible for the actions
of your robot _that were ordered by a complete stranger_?
That *is* what you're proposing, here.
The robot's black box will say what the stranger did to it.
But it's *your* robot.
Post by Lynn McGuire
But, of course the courts will probably ignore that he said
"throw me off the building".
I'll worry about it if I live long enough for self-aware robots to
exist, which is exceedingly unlikely.
They don't have to be self-aware in any meaningful sense for
such problems to arise. Voice controlled home automation systems
are already having some of these authorization issues.
But aren't making decisions in any meaningful, three laws sense.
There is no computer in the world capable of doing anything other
than what it's programmed to do, and no indication there will be
one any time soon.
Probably true. I was responding to the discussion about who was
authorized to do what.

Even today you can have issues with voice-controlled systems. I
wouldn't want the small grandchildren or some casual guest to be
able to give all commands.
Gutless Umbrella Carrying Sissy
2017-01-13 17:50:56 UTC
Permalink
Post by Bill Dugan
On Fri, 13 Jan 2017 09:22:41 -0700, Gutless Umbrella Carrying
Post by Gutless Umbrella Carrying Sissy
Post by Bill Dugan
On Thu, 12 Jan 2017 14:57:04 -0700, Gutless Umbrella Carrying
Post by Gutless Umbrella Carrying Sissy
Post by Lynn McGuire
Post by Lynn McGuire
Post by Lynn McGuire
Post by David Johnston
Post by Lynn McGuire
In article
Post by a***@gmail.com
The EU has come out with a report in which they have
rules on how humans interact with robots and AI. The
report draws on Asimov's three laws of robotics.
Asimov was ahead of his time.
But, as pointed out upthread, the Second and Third Laws
would need a great deal of tweaking; you do not want
*any* human telling *any* robot what to do.
<snipped>
Yes, you do want *any* human telling *any* robot what to
do. "Get off me" is the most common phrase screamed at
industrial robots. The second most common phrase is "let
me go". Sadly, industrial robots rarely have hearing
capability.
You do not however, want orders by unauthorized users
taking equal priority with orders by owners.
It is a tough call to make. Are you willing to be
personally responsible for the actions of your robot ?
Are you willing to be personally responsible for the
actions of your robot _that were ordered by a complete
stranger_? That *is* what you're proposing, here.
The robot's black box will say what the stranger did to it.
But it's *your* robot.
Post by Lynn McGuire
But, of course the courts will probably ignore that he said
"throw me off the building".
I'll worry about it if I live long enough for self-aware robots
to exist, which is exceedingly unlikely.
They don't have to be self-aware in any meaningful sense for
such problems to arise. Voice controlled home automation
systems are already having some of these authorization issues.
But aren't making decisions in any meaningful, three laws sense.
There is no computer in the world capable of doing anything
other than what it's programmed to do, and no indication there
will be one any time soon.
Probably true. I was responding to the discussion about who was
authorized to do what.
In the context of whether or not Asimov's three laws should be the
guiding principles for robotics today - with no robots capable of
implementing them in existence, or likely to be in our lifetimes.
From a practical standpoint, it's a meaningless discussion. But
this is an sf group, and it's entirely on-topic.
Post by Bill Dugan
Even today you can have issues with voice-controlled systems. I
wouldn't want the small grandchildren or some casual guest to
be able to give all commands.
Since the technology isn't capable of making the distinction, it's
still a pointless discussion on any practical, real-life level. As
evidenced by the recent news stories about a TV news segment on Alexa
triggering Alexas in many homes watching the news to order a
dollhouse.

There are issues, but they are issues with programming that cannot
do what is claimed by the marketing drones selling the goods. And
that's an issue as old as marketing, and has nothing to do with
AI. That's just an issue with lying marketing drones.
--
Terry Austin

Vacation photos from Iceland:
https://plus.google.com/u/0/collection/QaXQkB

"Terry Austin: like the polio vaccine, only with more asshole."
-- David Bilek

Jesus forgives sinners, not criminals.
J. Clarke
2017-01-13 19:11:46 UTC
Permalink
Post by Bill Dugan
On Fri, 13 Jan 2017 09:22:41 -0700, Gutless Umbrella Carrying Sissy
Post by Gutless Umbrella Carrying Sissy
Post by Bill Dugan
On Thu, 12 Jan 2017 14:57:04 -0700, Gutless Umbrella Carrying
Post by Gutless Umbrella Carrying Sissy
Post by Lynn McGuire
Post by Lynn McGuire
Post by Lynn McGuire
Post by David Johnston
Post by Lynn McGuire
In article
Post by a***@gmail.com
The EU has come out with a report in which they have
rules on how humans interact with robots and AI. The
report draws on Asimov's three laws of robotics. Asimov
was ahead of his time.
But, as pointed out upthread, the Second and Third Laws
would need a great deal of tweaking; you do not want
*any* human telling *any* robot what to do.
<snipped>
Yes, you do want *any* human telling *any* robot what to
do. "Get off me" is the most common phrase screamed at
industrial robots. The second most common phrase is "let
me go". Sadly, industrial robots rarely have hearing
capability.
You do not however, want orders by unauthorized users
taking equal priority with orders by owners.
It is a tough call to make. Are you willing to be
personally responsible for the actions of your robot ?
Are you willing to be personally responsible for the actions
of your robot _that were ordered by a complete stranger_?
That *is* what you're proposing, here.
The robot's black box will say what the stranger did to it.
But it's *your* robot.
Post by Lynn McGuire
But, of course the courts will probably ignore that he said
"throw me off the building".
I'll worry about it if I live long enough for self-aware robots to
exist, which is exceedingly unlikely.
They don't have to be self-aware in any meaningful sense for
such problems to arise. Voice controlled home automation systems
are already having some of these authorization issues.
But aren't making decisions in any meaningful, three laws sense.
There is no computer in the world capable of doing anything other
than what it's programmed to do, and no indication there will be
one any time soon.
Probably true. I was responding to the discussion about who was
authorized to do what.
Even today you can have issues with voice-controlled systems. I
wouldn't want the small grandchildren or some casual guest to be
able to give all commands.
Uh, the downside on all this is that the only
people in the house likely to actually know how
to program the thing are the small
grandchildren. This is why the "v-chip" struck
me as such an abysmally stupid idea.
Juho Julkunen
2017-01-13 20:10:35 UTC
Permalink
Post by J. Clarke
Post by Bill Dugan
On Fri, 13 Jan 2017 09:22:41 -0700, Gutless Umbrella Carrying Sissy
Post by Gutless Umbrella Carrying Sissy
Post by Bill Dugan
On Thu, 12 Jan 2017 14:57:04 -0700, Gutless Umbrella Carrying
Post by Gutless Umbrella Carrying Sissy
Post by Lynn McGuire
Post by Lynn McGuire
Post by Lynn McGuire
Post by David Johnston
Post by Lynn McGuire
In article
Post by a***@gmail.com
The EU has come out with a report in which they have
rules on how humans interact with robots and AI. The
report draws on Asimov's three laws of robotics. Asimov
was ahead of his time.
But, as pointed out upthread, the Second and Third Laws
would need a great deal of tweaking; you do not want
*any* human telling *any* robot what to do.
<snipped>
Yes, you do want *any* human telling *any* robot what to
do. "Get off me" is the most common phrase screamed at
industrial robots. The second most common phrase is "let
me go". Sadly, industrial robots rarely have hearing
capability.
You do not however, want orders by unauthorized users
taking equal priority with orders by owners.
It is a tough call to make. Are you willing to be
personally responsible for the actions of your robot ?
Are you willing to be personally responsible for the actions
of your robot _that were ordered by a complete stranger_?
That *is* what you're proposing, here.
The robot's black box will say what the stranger did to it.
But it's *your* robot.
Post by Lynn McGuire
But, of course the courts will probably ignore that he said
"throw me off the building".
I'll worry about it if I live long enough for self-aware robots to
exist, which is exceedingly unlikely.
They don't have to be self-aware in any meaningful sense for
such problems to arise. Voice controlled home automation systems
are already having some of these authorization issues.
But aren't making decisions in any meaningful, three laws sense.
There is no computer in the world capable of doing anything other
than what it's programmed to do, and no indication there will be
one any time soon.
Probably true. I was responding to the discussion about who was
authorized to do what.
Even today you can have issues with voice-controlled systems. I
wouldn't want the small grandchildren or some casual guest to be
able to give all commands.
Uh, the downside on all this is that the only
people in the house likely to actually know how
to program the thing are the small
grandchildren. This is why the "v-chip" struck
me as such an abysmally stupid idea.
We are past peak tech-savvy. The kids these days have grown up with
tablets and smartphones that don't require much expertise to operate,
unlike the personal computers of the recent past.

Turns out the digital natives don't actually acquire advanced computer
skills without being taught, and are largely consumers of digital
content.
--
Juho Julkunen
J. Clarke
2017-01-13 21:00:09 UTC
Permalink
In article
Post by Juho Julkunen
Post by J. Clarke
Post by Bill Dugan
On Fri, 13 Jan 2017 09:22:41 -0700, Gutless Umbrella Carrying Sissy
Post by Gutless Umbrella Carrying Sissy
Post by Bill Dugan
On Thu, 12 Jan 2017 14:57:04 -0700, Gutless Umbrella Carrying
Post by Gutless Umbrella Carrying Sissy
Post by Lynn McGuire
Post by Lynn McGuire
Post by Lynn McGuire
Post by David Johnston
Post by Lynn McGuire
In article
Post by a***@gmail.com
The EU has come out with a report in which they have
rules on how humans interact with robots and AI. The
report draws on Asimov's three laws of robotics. Asimov
was ahead of his time.
But, as pointed out upthread, the Second and Third Laws
would need a great deal of tweaking; you do not want
*any* human telling *any* robot what to do.
<snipped>
Yes, you do want *any* human telling *any* robot what to
do. "Get off me" is the most common phrase screamed at
industrial robots. The second most common phrase is "let
me go". Sadly, industrial robots rarely have hearing
capability.
You do not however, want orders by unauthorized users
taking equal priority with orders by owners.
It is a tough call to make. Are you willing to be
personally responsible for the actions of your robot ?
Are you willing to be personally responsible for the actions
of your robot _that were ordered by a complete stranger_?
That *is* what you're proposing, here.
The robot's black box will say what the stranger did to it.
But it's *your* robot.
Post by Lynn McGuire
But, of course the courts will probably ignore that he said
"throw me off the building".
I'll worry about it if I live long enough for self-aware robots to
exist, which is exceedingly unlikely.
They don't have to be self-aware in any meaningful sense for
such problems to arise. Voice controlled home automation systems
are already having some of these authorization issues.
But aren't making decisions in any meaningful, three laws sense.
There is no computer in the world capable of doing anything other
than what it's programmed to do, and no indication there will be
one any time soon.
Probably true. I was responding to the discussion about who was
authorized to do what.
Even today you can have issues with voice-controlled systems. I
wouldn't want the small grandchildren or some casual guest to be
able to give all commands.
Uh, the downside on all this is that the only
people in the house likely to actually know how
to program the thing are the small
grandchildren. This is why the "v-chip" struck
me as such an abysmally stupid idea.
We are past peak tech-savvy. The kids these days have grown up with
tablets and smartphones that don't require much expertise to operate,
unlike the personal computers of the recent past.
Turns out the digital natives don't actually acquire advanced computer
skills without being taught, and are largely consumers of digital
content.
Which is why the cell-phone packin' mama still
needs to have the kid set the clock on the
microwave.
Gene Wirchenko
2017-01-14 01:07:34 UTC
Permalink
On Fri, 13 Jan 2017 16:00:09 -0500, "J. Clarke"
In article
[snip]
Post by Juho Julkunen
Turns out the digital natives don't actually acquire advanced computer
skills without being taught, and are largely consumers of digital
content.
Which is why the cell-phone packin' mama still
needs to have the kid set the clock on the
microwave.
Some older folk learn how. Some do not and even refuse to.

Sincerely,

Gene Wirchenko

Gene Wirchenko
2017-01-14 01:06:29 UTC
Permalink
On Fri, 13 Jan 2017 22:10:35 +0200, Juho Julkunen
<***@hotmail.com> wrote:

[snip]
Post by Juho Julkunen
Turns out the digital natives don't actually acquire advanced computer
skills without being taught, and are largely consumers of digital
content.
Quite. I saw this during my uni time. "The young folk who know
so much about computers" know mainly from a consumer perspective, not
a technical one.

Sincerely,

Gene Wirchenko
Don Bruder
2017-01-13 21:44:25 UTC
Permalink
Post by Bill Dugan
On Fri, 13 Jan 2017 09:22:41 -0700, Gutless Umbrella Carrying Sissy
Post by Gutless Umbrella Carrying Sissy
Post by Bill Dugan
On Thu, 12 Jan 2017 14:57:04 -0700, Gutless Umbrella Carrying
Post by Gutless Umbrella Carrying Sissy
Post by Lynn McGuire
Post by Lynn McGuire
Post by Lynn McGuire
Post by David Johnston
Post by Lynn McGuire
In article
Post by a***@gmail.com
The EU has come out with a report in which they have
rules on how humans interact with robots and AI. The
report draws on Asimov's three laws of robotics. Asimov
was ahead of his time.
But, as pointed out upthread, the Second and Third Laws
would need a great deal of tweaking; you do not want
*any* human telling *any* robot what to do.
<snipped>
Yes, you do want *any* human telling *any* robot what to
do. "Get off me" is the most common phrase screamed at
industrial robots. The second most common phrase is "let
me go". Sadly, industrial robots rarely have hearing
capability.
You do not however, want orders by unauthorized users
taking equal priority with orders by owners.
It is a tough call to make. Are you willing to be
personally responsible for the actions of your robot ?
Are you willing to be personally responsible for the actions
of your robot _that were ordered by a complete stranger_?
That *is* what you're proposing, here.
The robot's black box will say what the stranger did to it.
But it's *your* robot.
Post by Lynn McGuire
But, of course the courts will probably ignore that he said
"throw me off the building".
I'll worry about if I live long enough for self-aware robots to
exist, which is exceedingly unlikely.
They don't have to be self-aware in any meaningful sense for
such problems to arise. Voice controlled home automation systems
are already having some of these authorization issues.
But aren't making decisions in any meaningful, three laws sense.
There is no computer in the world capable of doing anything other
than what it's programmed to do, and no indication there will be
any time soon.
Probably true. I was responding to the discussion about who was
authorized to do what.
Even today you can have issues with voice-controlled systems. I
wouldn't want the small grandchildren or some casual guest to be
able to give all commands.
Y'mean like the story I just caught on the morning radio show a day or
three ago?

The gist: Family got one of those alexa things for Xmas. Apparently the
house four-year-old told alexa she wanted some cookies and a dollhouse.
Not long after, 40-odd pounds of sugar cookies and a $600 dollhouse
appeared. Folks not happy, obviously...
--
Brought to you by the letter K and the number .357
Security provided by Horace S. & Dan W.
Gene Wirchenko
2017-01-13 18:51:34 UTC
Permalink
Post by Bill Dugan
On Thu, 12 Jan 2017 14:57:04 -0700, Gutless Umbrella Carrying Sissy
[snip]
Post by Bill Dugan
Post by Gutless Umbrella Carrying Sissy
I'll worry about if I live long enough for self-aware robots to
exist, which is exceedingly unlikely.
They don't have to be self-aware in any meaningful sense for such
problems to arise. Voice controlled home automation systems are
already having some of these authorization issues.
I have seen it in IT news this week. Here is one story:

http://www.cnn.com/2017/01/05/health/amazon-alexa-dollhouse-trnd/index.html

Sincerely,

Gene Wirchenko
Gutless Umbrella Carrying Sissy
2017-01-13 17:55:07 UTC
Permalink
On Fri, 13 Jan 2017 07:52:41 -0800, Bill Dugan
Post by Bill Dugan
On Thu, 12 Jan 2017 14:57:04 -0700, Gutless Umbrella Carrying
[snip]
Post by Bill Dugan
Post by Gutless Umbrella Carrying Sissy
I'll worry about if I live long enough for self-aware robots to
exist, which is exceedingly unlikely.
They don't have to be self-aware in any meaningful sense for
such problems to arise. Voice controlled home automation systems
are already having some of these authorization issues.
http://www.cnn.com/2017/01/05/health/amazon-alexa-dollhouse-trnd/index.html
And while there are approaches to mitigating that, they are band-
aids, at best. The technology to make the distinction simply isn't
there. Hell, the technology to take voice commands and interpret them
correctly as well as a human could isn't there yet.
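To make the "band-aid" concrete: one mitigation in this space is to gate
sensitive requests behind a spoken confirmation code instead of trying to
recognize who is speaking. A minimal sketch, with a made-up skill handler
and intent names (not any real Alexa API):

# Hypothetical assistant skill handler -- illustrative only.
SENSITIVE_INTENTS = {"unlock_door", "place_order"}
CONFIRMATION_CODE = "7 3 1 9"   # set by the owner, spoken aloud when asked

def handle_command(intent, spoken_code=None):
    """Allow harmless requests from anyone; require the code for the rest."""
    if intent not in SENSITIVE_INTENTS:
        return "OK, doing: " + intent
    if spoken_code == CONFIRMATION_CODE:
        return "Code accepted, doing: " + intent
    return "That needs the confirmation code."

print(handle_command("play_music"))                          # anyone may ask
print(handle_command("unlock_door"))                         # refused
print(handle_command("unlock_door", spoken_code="7 3 1 9"))  # allowed

Of course anyone within earshot when the code is spoken can replay it later,
which is why this is a band-aid rather than a fix.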
--
Terry Austin

Vacation photos from Iceland:
https://plus.google.com/u/0/collection/QaXQkB

"Terry Austin: like the polio vaccine, only with more asshole."
-- David Bilek

Jesus forgives sinners, not criminals.
Peter Trei
2017-01-13 19:28:00 UTC
Permalink
Post by Bill Dugan
On Thu, 12 Jan 2017 14:57:04 -0700, Gutless Umbrella Carrying Sissy
Post by Gutless Umbrella Carrying Sissy
Post by Lynn McGuire
Post by Lynn McGuire
Post by Lynn McGuire
Post by David Johnston
Post by Lynn McGuire
In article
Post by a***@gmail.com
The EU has come out with a report in which they have rules
on how humans interact with robots and AI. The report
draws on Asimov's three laws of robotics. Asimov was
ahead of his time.
But, as pointed out upthread, the Second and Third Laws
would need a great deal of tweaking; you do not want *any*
human telling *any* robot what to do.
<snipped>
Yes, you do want *any* human telling *any* robot what to do.
"Get off me" is the most common phrase screamed at
industrial robots. The second most common phrase is "let me
go". Sadly, industrial robots rarely have hearing
capability.
You do not however, want orders by unauthorized users taking
equal priority with orders by owners.
It is a tough call to make. Are you willing to be personally
responsible for the actions of your robot ?
Are you willing to be personally responsible for the actions of
your robot _that were ordered by a complete stranger_? That
*is* what you're proposing, here.
The robot's black box will say what the stranger did to it.
But it's *your* robot.
Post by Lynn McGuire
But, of course the courts will probably ignore that he said
"throw me off the building".
I'll worry about if I live long enough for self-aware robots to
exist, which is exceedingly unlikely.
They don't have to be self-aware in any meaningful sense for such
problems to arise. Voice controlled home automation systems are
already having some of these authorization issues.
I've heard three stories already.

1. Burglar shouts 'Alexa, unlock the front door' while on the doorstep.
Inside, Alexa complies.

2. TV news anchor tries Amazon Echo out: 'Echo, order me a dollhouse.'
It does so, as do dozens of Echoes in houses of people watching that show.

3. Toddler asks for his favorite song, 'Digger Digger'. Alexa responds with available porn film titles.
Gutless Umbrella Carrying Sissy
2017-01-13 18:33:49 UTC
Permalink
Post by Peter Trei
1. Burgler shouts 'Alexa, unlock the front door', while on the
doorstep. Inside, Alexa complies.
Anybody who hooks a device like that up to a door lock deserves to
never own anything, ever again.
--
Terry Austin

Vacation photos from Iceland:
https://plus.google.com/u/0/collection/QaXQkB

"Terry Austin: like the polio vaccine, only with more asshole."
-- David Bilek

Jesus forgives sinners, not criminals.
Peter Trei
2017-01-13 21:04:30 UTC
Permalink
Post by Gutless Umbrella Carrying Sissy
Post by Peter Trei
1. Burgler shouts 'Alexa, unlock the front door', while on the
doorstep. Inside, Alexa complies.
Anybody who hooks a device like that up to a door lock deserves to
never own anything, ever again.
I can't remember the details - it may have been someone testing that
scenario (successfully) rather than an actual burglar. I think the owner
just hadn't considered that the device would respond to voices from outside
the door.

pt
Gutless Umbrella Carrying Sissy
2017-01-13 20:44:42 UTC
Permalink
On Friday, January 13, 2017 at 2:33:55 PM UTC-5, Gutless
Post by Gutless Umbrella Carrying Sissy
Post by Peter Trei
1. Burgler shouts 'Alexa, unlock the front door', while on
the doorstep. Inside, Alexa complies.
Anybody who hooks a device like that up to a door lock deserves
to never own anything, ever again.
I can't remember the details - it may have been someone testing
that scenario (successfully) rather than an actual burglar. I
think the owner just hadn't considered that the device would
respond to voices from outside the door.
Or it's apocryphal. The closest I find is an account of a neighbor
using Siri to open a door, but that doesn't seem to be the same
thing.
--
Terry Austin

Vacation photos from Iceland:
https://plus.google.com/u/0/collection/QaXQkB

"Terry Austin: like the polio vaccine, only with more asshole."
-- David Bilek

Jesus forgives sinners, not criminals.
Peter Trei
2017-01-13 22:18:31 UTC
Permalink
Post by Gutless Umbrella Carrying Sissy
On Friday, January 13, 2017 at 2:33:55 PM UTC-5, Gutless
Post by Gutless Umbrella Carrying Sissy
Post by Peter Trei
1. Burgler shouts 'Alexa, unlock the front door', while on
the doorstep. Inside, Alexa complies.
Anybody who hooks a device like that up to a door lock deserves
to never own anything, ever again.
I can't remember the details - it may have been someone testing
that scenario (successfully) rather than an actual burglar. I
think the owner just hadn't considered that the device would
respond to voices from outside the door.
Or it's apocryphal. The closest I find is an account of a neighbor
using Siri to open a door, but that doesn't seem to be the same
thing.
As I suggested, my memory of the incident is vague. Here's what appears
to be the root story:

https://www.reddit.com/r/technology/comments/532gmg/my_neighbor_just_let_himself_into_my_locked_house/

There are going to be a lot more people wanting to do things like this,
and I wonder how long it will be before a criminal leverages the system.

My wife has suggested that we replace our front door lock with one that
is like my car door - when I pull the handle, it wirelessly checks for a
transponder in my pocket, and unlocks if it's present.

I'm not too keen on the idea.

pt
J. Clarke
2017-01-13 22:34:44 UTC
Permalink
In article <81320a52-cefd-4f59-9396-
Post by Peter Trei
Post by Gutless Umbrella Carrying Sissy
On Friday, January 13, 2017 at 2:33:55 PM UTC-5, Gutless
Post by Gutless Umbrella Carrying Sissy
Post by Peter Trei
1. Burgler shouts 'Alexa, unlock the front door', while on
the doorstep. Inside, Alexa complies.
Anybody who hooks a device like that up to a door lock deserves
to never own anything, ever again.
I can't remember the details - it may have been someone testing
that scenario (successfully) rather than an actual burglar. I
think the owner just hadn't considered that the device would
respond to voices from outside the door.
Or it's apocryphal. The closest I find is an account of a neighbor
using Siri to open a door, but that doesn't seem to be the same
thing.
As I suggested, my memory of the incident is vague. Here's what appears
https://www.reddit.com/r/technology/comments/532gmg/my_neighbor_just_let_himself_into_my_locked_house/
There are going to be a lot more people wanting to do things like this,
and I wonder how long it will be before a criminal leverages the system.
My wife has suggested that we replace our front door lock with one that
is like my car door - when I pull the handle, it wirelessly checks for a
transponder in my pocket, and unlocks if its present.
I'm not too keen on the idea.
Makes more sense that it was Siri than Alexa; OTOH, Siri actually
doing anything when addressed through a door is surprising.

As for the wireless front door, how's that
particular arrangement better than a plain old
fashioned key? Harder to pick maybe but
burglars don't pick locks, they break them.
Gutless Umbrella Carrying Sissy
2017-01-13 21:52:14 UTC
Permalink
Post by J. Clarke
In article <81320a52-cefd-4f59-9396-
On Friday, January 13, 2017 at 4:44:45 PM UTC-5, Gutless
Post by Gutless Umbrella Carrying Sissy
On Friday, January 13, 2017 at 2:33:55 PM UTC-5, Gutless
Post by Gutless Umbrella Carrying Sissy
Post by Peter Trei
1. Burgler shouts 'Alexa, unlock the front door', while
on the doorstep. Inside, Alexa complies.
Anybody who hooks a device like that up to a door lock
deserves to never own anything, ever again.
I can't remember the details - it may have been someone
testing that scenario (successfully) rather than an actual
burglar. I think the owner just hadn't considered that the
device would respond to voices from outside the door.
Or it's apocryphal. The closest I find is an account of a
neighbor using Siri to open a door, but that doesn't seem to
be the same thing.
As I suggested, my memory of the incident is vague. Here's what
https://www.reddit.com/r/technology/comments/532gmg/my_neighbor_just_let_himself_into_my_locked_house/
There are going to be a lot more people wanting to do things
like this, and I wonder how long it will be before a criminal
leverages the system.
My wife has suggested that we replace our front door lock with
one that is like my car door - when I pull the handle, it
wirelessly checks for a transponder in my pocket, and unlocks
if its present.
I'm not too keen on the idea.
Makes more sense that it was Siri than Alexa, otoh,
Siri actually doing anything when
addressed through a door is surprising.
It makes equal sense either way. All you need is the volume, and
frankly, it doesn't take that much if the microphone is near the
door. The technology to distinguish between individuals by voice
print simply isn't there.
Post by J. Clarke
As for the wireless front door, how's that
particular arrangement better than a plain old
fashioned key? Harder to pick maybe but
burglars don't pick locks, they break them.
Or break windows, or any of a dozen other methods of entry. But
traditional burglary techniques at least require physical tools.
With this, someone who is buck naked can get into your house. It's
the same as not having any lock at all.
--
Terry Austin

Vacation photos from Iceland:
https://plus.google.com/u/0/collection/QaXQkB

"Terry Austin: like the polio vaccine, only with more asshole."
-- David Bilek

Jesus forgives sinners, not criminals.
Gutless Umbrella Carrying Sissy
2017-01-13 21:48:36 UTC
Permalink
On Friday, January 13, 2017 at 4:44:45 PM UTC-5, Gutless
Post by Gutless Umbrella Carrying Sissy
On Friday, January 13, 2017 at 2:33:55 PM UTC-5, Gutless
Post by Gutless Umbrella Carrying Sissy
Post by Peter Trei
1. Burgler shouts 'Alexa, unlock the front door', while on
the doorstep. Inside, Alexa complies.
Anybody who hooks a device like that up to a door lock
deserves to never own anything, ever again.
I can't remember the details - it may have been someone
testing that scenario (successfully) rather than an actual
burglar. I think the owner just hadn't considered that the
device would respond to voices from outside the door.
Or it's apocryphal. The closest I find is an account of a
neighbor using Siri to open a door, but that doesn't seem to be
the same thing.
As I suggested, my memory of the incident is vague. Here's what
https://www.reddit.com/r/technology/comments/532gmg/my_neighbor_just_let_himself_into_my_locked_house/
That's the one I saw.
There are going to be a lot more people wanting to do things
like this, and I wonder how long it will be before a criminal
leverages the system.
As I said, anybody who installs this deserves what they get.
My wife has suggested that we replace our front door lock with
one that is like my car door - when I pull the handle, it
wirelessly checks for a transponder in my pocket, and unlocks if
its present.
There isn't a car system that hasn't been hacked, and thoroughly
so. If someone is at all inclined, for less than $100, IIRC, they
can set up a repeater that's near you - wherever you are - that
sends the appropriate signal to a confederate by the door.
Encryption doesn't matter; the legitimate key fob is talking to the
lock. It's been demonstrated on cars.
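A toy sketch of why the relay works even when the cryptography is sound: the
attacker's two radios just shuttle the challenge and response between the
door and the distant fob, so the lock sees a perfectly valid answer. (Assumed
here: a simplified HMAC challenge-response; real keyless-entry protocols
differ in detail but share the weakness.)

import hmac, hashlib, os

SHARED_KEY = os.urandom(16)     # provisioned into both the lock and the fob

def fob_respond(challenge):
    # The genuine fob answers any challenge it hears, wherever it happens to be.
    return hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()

def lock_try_open(respond):
    challenge = os.urandom(16)
    expected = hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(respond(challenge), expected)

def relay_respond(challenge):
    # Stand-in for the attacker's radio link: forward bytes, never touch the key.
    return fob_respond(challenge)

print(lock_try_open(fob_respond))    # True: owner at the door
print(lock_try_open(relay_respond))  # True: burglar at the door, owner far away

The protocol proves the fob knows the key, but nothing about where the fob is.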
I'm not too keen on the idea.
I don't blame you.
--
Terry Austin

Vacation photos from Iceland:
https://plus.google.com/u/0/collection/QaXQkB

"Terry Austin: like the polio vaccine, only with more asshole."
-- David Bilek

Jesus forgives sinners, not criminals.
J. Clarke
2017-01-13 22:27:19 UTC
Permalink
In article <e967c3fe-2bf0-4baf-bf6d-cf4f6f631149@googlegroups.com>, ***@gmail.com says...
Post by Peter Trei
Post by Gutless Umbrella Carrying Sissy
Post by Peter Trei
1. Burgler shouts 'Alexa, unlock the front door', while on the
doorstep. Inside, Alexa complies.
Anybody who hooks a device like that up to a door lock deserves to
never own anything, ever again.
I can't remember the details - it may have been someone testing that
scenario (successfully) rather than an actual burglar. I think the owner
just hadn't considered that the device would respond to voices from outside
the door.
C-Net had enough trouble getting it to unlock
the door for the people for whom it was supposed
to unlock it.

This is most likely an urban legend at this
point--there's just not enough of this stuff out
there and interfacing Alexa to a door lock takes
more fiddling than most people are going to
bother with.
Dimensional Traveler
2017-01-13 22:59:30 UTC
Permalink
Post by Peter Trei
Post by Gutless Umbrella Carrying Sissy
Post by Peter Trei
1. Burgler shouts 'Alexa, unlock the front door', while on the
doorstep. Inside, Alexa complies.
Anybody who hooks a device like that up to a door lock deserves to
never own anything, ever again.
I can't remember the details - it may have been someone testing that
scenario (successfully) rather than an actual burglar. I think the owner
just hadn't considered that the device would respond to voices from outside
the door.
And it's exactly those kinds of unthinking assumptions that make
programming multi-function robots so difficult.
--
Running the rec.arts.TV Channels Watched Survey.
Winter 2016 survey began Dec 01 and will end Feb 28
Gutless Umbrella Carrying Sissy
2017-01-13 22:05:38 UTC
Permalink
Post by Dimensional Traveler
On Friday, January 13, 2017 at 2:33:55 PM UTC-5, Gutless
Post by Gutless Umbrella Carrying Sissy
Post by Peter Trei
1. Burgler shouts 'Alexa, unlock the front door', while on
the doorstep. Inside, Alexa complies.
Anybody who hooks a device like that up to a door lock
deserves to never own anything, ever again.
I can't remember the details - it may have been someone testing
that scenario (successfully) rather than an actual burglar. I
think the owner just hadn't considered that the device would
respond to voices from outside the door.
And its exactly those kinds of unthinking assumptions that make
programming multi-function robots so difficult.
I wouldn't assume a *person* wouldn't respond to a command to open
the door from outside. There are a lot of stupid people in the world.
--
Terry Austin

Vacation photos from Iceland:
https://plus.google.com/u/0/collection/QaXQkB

"Terry Austin: like the polio vaccine, only with more asshole."
-- David Bilek

Jesus forgives sinners, not criminals.
Lynn McGuire
2017-01-13 23:22:53 UTC
Permalink
Post by Gutless Umbrella Carrying Sissy
Post by Peter Trei
1. Burgler shouts 'Alexa, unlock the front door', while on the
doorstep. Inside, Alexa complies.
Anybody who hooks a device like that up to a door lock deserves to
never own anything, ever again.
Locks are to tell honest people where the limits are.

Lynn
Gutless Umbrella Carrying Sissy
2017-01-13 23:37:04 UTC
Permalink
Post by Lynn McGuire
Post by Gutless Umbrella Carrying Sissy
Post by Peter Trei
1. Burgler shouts 'Alexa, unlock the front door', while on the
doorstep. Inside, Alexa complies.
Anybody who hooks a device like that up to a door lock deserves to
never own anything, ever again.
Locks are to tell honest people where the limits are.
A lock that will open for anyone standing outside the door . . .
isn't a lock. This doesn't even require tools to open. I'm not sure
it would even qualify as breaking and entering.
--
Terry Austin

Vacation photos from Iceland:
https://plus.google.com/u/0/collection/QaXQkB

"Terry Austin: like the polio vaccine, only with more asshole."
-- David Bilek

Jesus forgives sinners, not criminals.
Gutless Umbrella Carrying Sissy
2017-01-13 18:35:21 UTC
Permalink
I've just been thinking that even Star Trek recognized the hazards of
relying on voice recognition for high-security applications. The
self-destruct required not only a voice print but also a spoken
personal code to activate.
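The same idea in miniature: require both something the speaker sounds like
and something only they know. A rough sketch with a stubbed-out voiceprint
matcher and invented codes (the stub is, of course, exactly the part that is
hard in practice):

AUTHORIZED = {
    "captain":       {"voiceprint": "captain-print",       "code": "zero zero zero destruct"},
    "first_officer": {"voiceprint": "first-officer-print", "code": "one one alpha"},
}

def matches_voiceprint(audio_sample, stored_print):
    # Stub: pretend speaker identification; in reality this is the weak link.
    return audio_sample == stored_print

def authorize_self_destruct(claimed_user, audio_sample, spoken_code):
    user = AUTHORIZED.get(claimed_user)
    if user is None:
        return False
    # Both factors must pass: who you sound like AND what you say.
    return (matches_voiceprint(audio_sample, user["voiceprint"])
            and spoken_code == user["code"])

print(authorize_self_destruct("captain", "captain-print", "zero zero zero destruct"))   # True
print(authorize_self_destruct("captain", "impostor-print", "zero zero zero destruct"))  # False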
--
Terry Austin

Vacation photos from Iceland:
https://plus.google.com/u/0/collection/QaXQkB

"Terry Austin: like the polio vaccine, only with more asshole."
-- David Bilek

Jesus forgives sinners, not criminals.
Dorothy J Heydt
2017-01-13 00:13:09 UTC
Permalink
Post by Lynn McGuire
Post by Lynn McGuire
Post by David Johnston
Post by Lynn McGuire
In article
Post by a***@gmail.com
The EU has come out with a report in which they have rules
on how humans interact with robots and AI. The report draws
on Asimov's three laws of robotics. Asimov was ahead of his
time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human
telling *any* robot what to do.
<snipped>
Yes, you do want *any* human telling *any* robot what to do.
"Get off me" is the most common phrase screamed at industrial
robots. The second most common phrase is "let me go". Sadly,
industrial robots rarely have hearing capability.
You do not however, want orders by unauthorized users taking
equal priority with orders by owners.
It is a tough call to make. Are you willing to be personally
responsible for the actions of your robot ?
Are you willing to be personally responsible for the actions of
your robot _that were ordered by a complete stranger_? That *is*
what you're proposing, here.
Well, again, my guy gave orders to a robot that were
diametrically opposed to the wishes of its owner, who didn't in
fact know it was possible to give those orders. But it's all in
a good cause.
--
Dorothy J. Heydt
Vallejo, California
djheydt at gmail dot com
Gutless Umbrella Carrying Sissy
2017-01-13 02:28:38 UTC
Permalink
Post by Dorothy J Heydt
Post by Lynn McGuire
Post by Lynn McGuire
Post by David Johnston
Post by Lynn McGuire
In article
Post by a***@gmail.com
The EU has come out with a report in which they have rules
on how humans interact with robots and AI. The report
draws on Asimov's three laws of robotics. Asimov was
ahead of his time.
But, as pointed out upthread, the Second and Third Laws
would need a great deal of tweaking; you do not want *any*
human telling *any* robot what to do.
<snipped>
Yes, you do want *any* human telling *any* robot what to do.
"Get off me" is the most common phrase screamed at
industrial robots. The second most common phrase is "let me
go". Sadly, industrial robots rarely have hearing
capability.
You do not however, want orders by unauthorized users taking
equal priority with orders by owners.
It is a tough call to make. Are you willing to be personally
responsible for the actions of your robot ?
Are you willing to be personally responsible for the actions of
your robot _that were ordered by a complete stranger_? That *is*
what you're proposing, here.
Well, again, my guy gives orders to a robot that were
diametrically opposed to the wishes of its owner, who didn't in
fact know it was possible to give those orders.
If the rule is that anybody can give any robot any order, then not
knowing your robot could receive that order is criminal negligence
on your part. Or stupidity so profound you need to be
institutionalized for your own safety.
--
Terry Austin

Vacation photos from Iceland:
https://plus.google.com/u/0/collection/QaXQkB

"Terry Austin: like the polio vaccine, only with more asshole."
-- David Bilek

Jesus forgives sinners, not criminals.
David Johnston
2017-01-13 04:33:42 UTC
Permalink
Post by Lynn McGuire
Post by David Johnston
Post by Lynn McGuire
Post by Dorothy J Heydt
Post by a***@gmail.com
The EU has come out with a report in which they have rules on how humans
interact with robots and AI. The report draws on Asimov's three laws of
robotics. Asimov was ahead of his time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human telling
*any* robot what to do.
<snipped>
Yes, you do want *any* human telling *any* robot what to do. "Get off
me" is the most common phrase screamed at industrial robots. The second
most common phrase is "let me go". Sadly, industrial robots rarely have
hearing capability.
You do not however, want orders by unauthorized users taking equal
priority with orders by owners.
It is a tough call to make.
Not especially. You just ensure that people with a legitimate reason to
be in proximity to a robot are authorized users. And that the
safeguards against robots hurting people are solid.

Post by Lynn McGuire
Are you willing to be personally responsible for the actions of your robot?
And the Rogue One robot was most interesting.
Lynn
Gene Wirchenko
2017-01-13 18:54:25 UTC
Permalink
On Thu, 12 Jan 2017 21:33:42 -0700, David Johnston
<***@yahoo.com> wrote:

[snip]
Post by David Johnston
Not especially. You just ensure that people with a legitimate reason to
be in proximity to a robot are authorized users. And that the
How? (Implementation can be a real bear.)

Consider a sheriff with a court order to seize a robot. He could
give an innocent-sounding order that could cause big trouble.
Post by David Johnston
safeguards against robot's hurting people are solid.
[snip]

Sincerely,

Gene Wirchenko
David Johnston
2017-01-13 20:15:16 UTC
Permalink
Post by Gene Wirchenko
On Thu, 12 Jan 2017 21:33:42 -0700, David Johnston
[snip]
Post by David Johnston
Not especially. You just ensure that people with a legitimate reason to
be in proximity to a robot are authorized users. And that the
How? (Implementation can be a real bear.)
That's a problem already faced and dealt with on a daily basis with less
advanced computers. And yeah, the safeguards can be worked around or
malfunction. Still better than not having any.
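One way to picture the access model under discussion: safety commands are
honored from anyone, everything else only from an enrolled user. A minimal
sketch with made-up names and commands (not a description of any real
controller):

SAFETY_PHRASES = {"stop", "get off me", "let me go"}

class Robot:
    def __init__(self, owner, authorized_users):
        self.owner = owner
        self.authorized = set(authorized_users) | {owner}

    def receive(self, speaker, command):
        if command in SAFETY_PHRASES:
            return "EMERGENCY STOP"                    # obey anyone, no questions
        if speaker not in self.authorized:
            return "IGNORED (unauthorized speaker)"
        return "EXECUTING: " + command

r = Robot(owner="alice", authorized_users=["bob"])
print(r.receive("stranger", "get off me"))       # EMERGENCY STOP
print(r.receive("stranger", "unlock the door"))  # IGNORED (unauthorized speaker)
print(r.receive("alice", "unlock the door"))     # EXECUTING: unlock the door

The hard part, as noted upthread, is the implementation: deciding who counts
as having a legitimate reason to be there, and keeping that list current.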
Gene Wirchenko
2017-01-13 03:58:58 UTC
Permalink
On Thu, 12 Jan 2017 14:09:27 -0700, David Johnston
[snip]
Post by David Johnston
Post by Lynn McGuire
Yes, you do want *any* human telling *any* robot what to do. "Get off
me" is the most common phrase screamed at industrial robots. The second
most common phrase is "let me go". Sadly, industrial robots rarely have
hearing capability.
You do not however, want orders by unauthorized users taking equal
priority with orders by owners.
I do not want a tour of that plant.

Sincerely,

Gene Wirchenko
Lynn McGuire
2017-01-13 18:58:07 UTC
Permalink
Post by Gene Wirchenko
On Thu, 12 Jan 2017 14:09:27 -0700, David Johnston
[snip]
Post by David Johnston
Post by Lynn McGuire
Yes, you do want *any* human telling *any* robot what to do. "Get off
me" is the most common phrase screamed at industrial robots. The second
most common phrase is "let me go". Sadly, industrial robots rarely have
hearing capability.
You do not however, want orders by unauthorized users taking equal
priority with orders by owners.
I do not want a tour of that plant.
Sincerely,
Gene Wirchenko
Me too. I've seen a picture where an industrial robot tried to assemble into a product a person who had ventured into its space. It was
not pretty. It looked like a lot of screaming was involved before the person expired.

Lynn
Scott Lurndal
2017-01-13 19:11:26 UTC
Permalink
Post by Lynn McGuire
Post by Gene Wirchenko
On Thu, 12 Jan 2017 14:09:27 -0700, David Johnston
[snip]
Post by David Johnston
Post by Lynn McGuire
Yes, you do want *any* human telling *any* robot what to do. "Get off
me" is the most common phrase screamed at industrial robots. The second
most common phrase is "let me go". Sadly, industrial robots rarely have
hearing capability.
You do not however, want orders by unauthorized users taking equal
priority with orders by owners.
I do not want a tour of that plant.
Sincerely,
Gene Wirchenko
Me too. I've seen a picture where an industrial robot tried to assemble a person who ventured into its space into a product. It was
not pretty. It looked like a lot of screaming was involved before the person expired.
Darwin in action, usually.
Lynn McGuire
2017-01-13 19:19:17 UTC
Permalink
Post by Scott Lurndal
Post by Lynn McGuire
Post by Gene Wirchenko
On Thu, 12 Jan 2017 14:09:27 -0700, David Johnston
[snip]
Post by David Johnston
Post by Lynn McGuire
Yes, you do want *any* human telling *any* robot what to do. "Get off
me" is the most common phrase screamed at industrial robots. The second
most common phrase is "let me go". Sadly, industrial robots rarely have
hearing capability.
You do not however, want orders by unauthorized users taking equal
priority with orders by owners.
I do not want a tour of that plant.
Sincerely,
Gene Wirchenko
Me too. I've seen a picture where an industrial robot tried to assemble a person who ventured into its space into a product. It was
not pretty. It looked like a lot of screaming was involved before the person expired.
Darwin in action, usually.
The space was clearly marked on the floor. I believe that a corral has been added to the space now.

Lynn