Discussion:
Self-driving Uber kills Arizona woman in first fatal crash involving pedestrian
Mr. Man-wai Chang
2018-03-20 11:53:13 UTC
Garbage in, garbage out.
Self-driving Uber kills Arizona woman in first fatal crash involving
pedestrian

<https://www.theguardian.com/technology/2018/mar/19/uber-self-driving-car-kills-woman-arizona-tempe>
--
@~@ Remain silent! Drink, Blink, Stretch! Live long and prosper!!
/ v \ Simplicity is Beauty!
/( _ )\ May the Force and farces be with you!
^ ^ (x86_64 Ubuntu 9.10) Linux 2.6.39.3
No borrowing! No fraud! No gambling! No compensated dating! No fighting! No robbery! No suicide! No praying to gods! Please consider Comprehensive Social Security Assistance (CSSA):
http://www.swd.gov.hk/tc/index/site_pubsvc/page_socsecu/sub_addressesa
Unsteadyken
2018-03-20 13:20:14 UTC
Post by Mr. Man-wai Chang
Self-driving Uber kills Arizona woman in first fatal crash involving
pedestrian
If this is the "First fatal crash to involve a pedestrian", then the
implication is that there have been one or more fatal crashes that did
not involve pedestrians.
I think Uber should come clean about these shocking deaths, or headline
writers should learn to right proper English like what I do.
Mr. Man-wai Chang
2018-03-20 14:24:33 UTC
Post by Unsteadyken
If this is the "First fatal crash to involve a pedestrian", then the
implication is that there have been one or more fatal crashes that did
not involve pedestrians.
I think Uber should come clean about these shocking deaths, or headline
writers should learn to right proper English like what I do.
You will need an accurate, complete news database to check whether it's
a first for Uber. Do *ALL* news agencies on Earth have that? As good as
your brain? :)
Big Al
2018-03-20 14:26:35 UTC
Post by Unsteadyken
Post by Mr. Man-wai Chang
Self-driving Uber kills Arizona woman in first fatal crash involving
pedestrian
If this is the "First fatal crash to involve a pedestrian", then the
implication is that there have been one or more fatal crashes that did
not involve pedestrians.
I think Uber should come clean about these shocking deaths, or headline
writers should learn to right proper English like what I do.
The word 'what' is superfluous and 'right' should be 'write'. "learn
to write proper English like I do."
Wolf K
2018-03-20 15:24:47 UTC
Post by Unsteadyken
Post by Mr. Man-wai Chang
Self-driving Uber kills Arizona woman in first fatal crash involving
pedestrian
If this is the "First fatal crash to involve a pedestrian", then the
implication is that there have been one or more fatal crashes that did
not involve pedestrians.
There was, but with a Tesla, not an Uber car. The guy drove on "auto-pilot";
the car failed to recognise a truck-trailer in front of it as an obstacle
and drove straight into it.
Post by Unsteadyken
I think Uber should come clean about these shocking deaths, or headline
writers should learn to right proper English like what I do.
:-)
--
Wolf K
kirkwood40.blogspot.com
"The next conference for the time travel design team will be held two
weeks ago."
Mr. Man-wai Chang
2018-03-20 15:48:06 UTC
Post by Wolf K
There was, but with a Tesla, not an Uber car. The guy drove on "auto-pilot";
the car failed to recognise a truck-trailer in front of it as an obstacle
and drove straight into it.
Did it see the back of that truck as a tunnel? :)
Wolf K
2018-03-20 19:41:57 UTC
Post by Mr. Man-wai Chang
Post by Wolf K
There was, but with a Tesla, not an Uber car. The guy drove on "auto-pilot";
the car failed to recognise a truck-trailer in front of it as an obstacle
and drove straight into it.
Did it see the back of that truck as a tunnel? :)
IIRC, the truck was painted white, and the Tesla's visual system
couldn't tell that it was not a piece of sky. In addition, "auto pilot"
on the car did/does not mean "autonomous".
Scott Lurndal
2018-03-20 19:47:39 UTC
Post by Wolf K
Post by Mr. Man-wai Chang
Post by Wolf K
There was, but with a Tesla, not an Uber car. The guy drove on "auto-pilot";
the car failed to recognise a truck-trailer in front of it as an obstacle
and drove straight into it.
Did it see the back of that truck as a tunnel? :)
IIRC, the truck was painted white, and the Tesla's visual system
couldn't tell that it was not a piece of sky. In addition, "auto pilot"
on the car did/does not mean "autonomous".
https://www.engadget.com/2017/06/20/tesla-driver-in-fatal-autopilot-crash-ignored-safety-warnings/
Paul
2018-03-20 16:20:22 UTC
Post by Wolf K
Post by Unsteadyken
Post by Mr. Man-wai Chang
Self-driving Uber kills Arizona woman in first fatal crash involving
pedestrian
If this is the "First fatal crash to involve a pedestrian", then the
implication is that there have been one or more fatal crashes that did
not involve pedestrians.
There was, but with a Tesla, not an Uber car. The guy drove on "auto-pilot";
the car failed to recognise a truck-trailer in front of it as an obstacle
and drove straight into it.
They've been working on pedestrian protection, so the notion isn't new.

https://www.popsci.com/cars/article/2013-02/volvos-new-airbags-protect-pedestrians-too

The incident in question might not be the fault of the car.

https://arstechnica.com/cars/2018/03/police-chief-uber-self-driving-car-likely-not-at-fault-in-fatal-crash/

Autonomous vehicles simply aren't equipped to predict what
erratic humans are about to do. I don't see a reason why
incidents like this won't be repeated. So we're going to need
more of those Volvos from the first article.

Paul
Boris
2018-03-21 00:22:49 UTC
Post by Paul
[]
They've been working on pedestrian protection, so the notion isn't new.
https://www.popsci.com/cars/article/2013-02/volvos-new-airbags-protect-pedestrians-too
The incident in question might not be the fault of the car.
https://arstechnica.com/cars/2018/03/police-chief-uber-self-driving-car-likely-not-at-fault-in-fatal-crash/
Autonomous vehicles simply aren't equipped to predict what
erratic humans are about to do. I don't see a reason why
incidents like this won't be repeated. So we're going to need
more of those Volvos from the first article.
Paul
You're probably right.

I don't care much for autonomous vehicles. It surprises me that even test
vehicles are allowed on public roads, but of course, that's the best way
to test. I wonder if manufacturers will be so anxious to be the first to
get them to market that they will release them to the public full of buggy
software/hardware, and the public will be, as with so many hi-tech
products rushed to market, the quality assurers. I feel this way about
many high-tech products, esp. cell phones. But those are not deadly if they
fail. Well, maybe Samsung's exploding batteries could kill. OTOH,
government may step in, like the FDA, and say no, no. Not enough
testing.

I think one of the biggest issues with autonomous cars is the decision
making programmed into the car under 'someone has to be sacrificed'
scenarios. There is a problem-solving principle, whose name escapes me at
the moment, which addresses this. (Not "Occam's razor"... if it looks like
a duck, quacks like a duck, and walks like a duck, etc.) It's more like
being "on the horns of a dilemma": there is no favorable outcome, only a
less unfavorable outcome.

For instance, man will make the software choose between two (or more)
unfavorable outcomes. Say you are in your autonomous car, driving in the
city with signal-controlled intersections. You and your wife, an older
couple, are in the car and about to cross an intersection. The light is
green, and a young woman pushing a stroller, with a baby on board,
dashes into the intersection. You, but more to the point your car's software,
calculate that there is no time to continue in a straight path and
stop, to avoid hitting and probably killing both the young woman and baby.
However, your software calculates there is enough time to veer to the
right and crash into a lamp post, which will probably kill you and your
wife.

What the car eventually does is decided by the software, which was coded
by human beings. I kind of think if it was coded by google or facebook
types, my wife and I would be dead.
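That "less unfavorable outcome" rule amounts to picking the minimum of estimated harms. A toy Python sketch, with harm scores invented purely for illustration (no real system is known to score lives this way):

```python
# Toy "least unfavorable outcome" chooser. The harm scores are
# invented for illustration; they come from no real system.
outcomes = {
    "continue_straight": 2.0,    # hypothetical harm: pedestrian and baby killed
    "veer_into_lamp_post": 1.9,  # hypothetical harm: both occupants killed
}

# Pick the option with the smallest estimated harm.
least_bad = min(outcomes, key=outcomes.get)
print(least_bad)  # -> veer_into_lamp_post
```

The hard part, of course, is not the min() call but who gets to write the numbers.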

Anyone know what this 'dilemma' is called?
Rene Lamontagne
2018-03-21 00:30:42 UTC
Post by Boris
[]
Anyone know what this 'dilemma' is called?
"Sigh." The world was doing pretty well until the human race came along.

Rene
Paul
2018-03-21 10:07:39 UTC
Post by Boris
[]
What the car eventually does is decided by the software, which was coded
by human beings. I kind of think if it was coded by google or facebook
types, my wife and I would be dead.
Anyone know what this 'dilemma' is called?
But this smacks of sentience.

These cars aren't sentient, therefore they will never
"perceive" a situation as a balancing of such possibilities.

If the sensors cannot see the woman and baby, the woman and baby
are simply run over.

You're assuming that driving requires sentience, and
the people building these cars don't think so. They think
the process of driving is mechanical enough to put these
vehicles on the road. They're not waiting for sentience.

When anarchists attempt to "prank" these cars, what's
going to happen? Should be interesting.

Paul
J. P. Gilliver (John)
2018-03-21 14:17:01 UTC
[]
Post by Paul
Post by Boris
For instance, man will make the software choose between two (or
more) unfavorable outcomes. Say you are in your autonomous car,
driving in the city with signal-controlled intersections. You and
your wife, an older couple, are in the car and about to cross an
intersection. The light is green, and a young woman pushing a
stroller, with a baby on board, dashes into the intersection. You,
but more to the point your car's software, calculate that there is no
time to continue in a straight path and stop, to avoid hitting and
probably killing both the young woman and baby. However, your
software calculates there is enough time to veer to the right and
crash into a lamp post, which will probably kill you and your wife.
What the car eventually does is decided by the software, which was
coded by human beings. I kind of think if it was coded by google or
facebook types, my wife and I would be dead. Anyone know what
this 'dilemma' is called?
But this smacks of sentience.
These cars aren't sentient, therefore they will never
"perceive" a situation as a balancing of such possibilities.
If the sensors cannot see the woman and baby, the woman and baby
are simply run over.
You're avoiding the question. Boris posed the situation where the car's
sensors _do_ detect the woman and baby, and "know" (have calculated, if
you find "know" too anthropomorphic a word) that it cannot stop before
hitting them or the lamppost, and cannot swerve to avoid either without
(say) turning over.

What does it do? Hit the mother and baby, probably killing them, or hit
the lamppost, probably killing the occupants? [Leaving aside that it
must be a very sturdy lamppost! Substitute wall, or bridge support, or
cliff edge, if necessary, for lamppost.]

I have no answer to this, by the way. If _I_ was driving, instead of a
computer, I'm afraid I'd probably select self-preservation, as I suspect
most humans would - and would do so faster than I could explain or even
think _consciously_ about it. I'm not saying that's the preferred
option, either, just that I think it's human nature.
Post by Paul
You're assuming that driving requires sentience, and
the people building these cars don't think so. They think
the process of driving is mechanical enough, to put these
vehicles on the road. They're not waiting for sentience.
They're never going to be perfect. One argument says that if they are
just safer than human-driven cars, they ought to be allowed - but things
are never that simple, and mostly, they will cause (or be involved in,
subject to, whatever) different _kinds_ of accident to those caused by
humans, which people on both sides of the argument will claim supports
their "side". Plus, most of the accidents _avoided_ by them will tend _not_
to be recorded. (And the same applies to most of the accidents avoided
by humans, too.)
Post by Paul
When anarchists attempt to "prank" these cars, what's
going to happen ? Should be interesting.
That definitely has to be taken into consideration. I would say there's
a strong argument for an absolute ban on _any_ external control, but I
can see law enforcement agencies wanting that. (They will also bring up
the situation of the driver being dead at the wheel - though I think
that's a red herring; such a car should continue safely, or stop,
anyway.)
--
J. P. Gilliver. UMRA: 1960/<1985 MB++G()AL-IS-Ch++(p)***@T+H+Sh0!:`)DNAf

The average age of a single mum in this country is 37
- Jane Rackham, RT 2016/5/28-6/3
Boris
2018-03-21 15:10:19 UTC
Post by Paul
[]
But this smacks of sentience.
These cars aren't sentient, therefore they will never
"perceive" a situation as a balancing of such possibilities.
Of course the autonomous cars of the future will not be sentient, just as
my car today is not sentient. But the programmers are. That's the whole
point.
Post by Paul
If the sensors cannot see the woman and baby, the woman and baby
are simply run over.
Then that autonomous car should not be on the road. Bad QA.
Post by Paul
You're assuming that driving requires sentience, and
the people building these cars don't think so. They think
the process of driving is mechanical enough, to put these
vehicles on the road.
Yes, exactly.
Post by Paul
They're not waiting for sentience.
Are you implying that these cars will at some time be sentient?
Post by Paul
When anarchists attempt to "prank" these cars, what's
going to happen ? Should be interesting.
I believe Chrysler showed how cars of today can be 'pranked' already.
Paul
2018-03-21 17:15:51 UTC
Post by Boris
[]
Of course the autonomous cars of the future will not be sentient, just as
my car today is not sentient. But the programmers are. That's the whole
point.
[]
Are you implying that these cars will at some time be sentient?
[]
I believe Chrysler showed how cars of today can be 'pranked' already.
Some of the hardware used in cars now has classifiers. It can, in
principle, tell the difference between a woman, a dog, a fire hydrant,
and a lamp post. This is done with pattern matching and neural nets.
Some of the overlays on Lidar images show this process in action.

But are the software designers crass enough to assign "weights"
to them, and compare the value of the target to that of the actual
vehicle occupants? Your scenario assumes a chain of if-then-else
statements written in C or something.

All the vehicle can do is:

1) Slam on the brakes.
2) Not aim for the lamp post, because the first directive
is not to hit stationary objects.

The woman and baby get hit if the sensor reach isn't far enough
to detect them in time, or if there isn't sufficient
clearance to drive around them. If the woman is in the
middle of multiple lanes, there might not be sufficient room
to drive around.
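That sensor-reach point reduces to simple stopping-distance kinematics. A rough Python sketch; every number here (speed, sensor range, deceleration, latency) is assumed for illustration:

```python
def stopping_distance(speed_ms, latency_s=0.2, decel_ms2=7.0):
    """Distance covered during system latency plus full braking.

    The 0.2 s latency and 7 m/s^2 deceleration are assumed figures.
    """
    return speed_ms * latency_s + speed_ms ** 2 / (2 * decel_ms2)

def brake_avoids(detection_range_m, speed_ms):
    """True if hard braking alone stops the car inside the sensor reach."""
    return stopping_distance(speed_ms) < detection_range_m

# At ~17.9 m/s (about 40 mph), the car needs roughly 26 m to stop:
print(brake_avoids(60.0, 17.9))  # 60 m of sensor reach: True
print(brake_avoids(25.0, 17.9))  # only 25 m of reach: False, pedestrian is hit
```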

You cannot "aim" for the lamp post, because the lamp post
could fall on a tanker full of gasoline, sparks from the
power lines could ignite the gasoline, and the tanker could explode
and take out a whole apartment building full of people. Now
we've made an incorrect analysis of the "value" and "risks"
of each target. We don't really know that hitting a lamp
post is "safe". The person standing directly behind the
lamp post doesn't think so.

A human driver might be able to analyze a scene and make
these connections. We cannot rely on driving software with
a "real time" requirement to be making these calls in
20 milliseconds. The software is likely to have about
the same event horizon as that of a rabbit brain.
To be successful in making a response, the car has to
start moving the wheel well before it gets even
close to the target. And that doesn't give it time
to mull over multiple possible outcomes.

*******

It's refreshing to know that the decision has already been made.
The problem is apparently called the Trolley Problem. You won't
like the answer, but it's an answer nonetheless.

https://blog.caranddriver.com/self-driving-mercedes-will-prioritize-occupant-safety-over-pedestrians/

https://jalopnik.com/now-mercedes-says-its-driverless-cars-wont-run-over-ped-1787890432

Hahaha. The Volvo approach.

http://www.thedrive.com/tech/13458/waymo-patents-padded-self-driving-car-to-protect-pedestrians

Good clean fun.

https://en.wikipedia.org/wiki/Trolley_problem

Here's a Tesla predicting a crash up ahead, before it happens.
It's obviously using radar, because a camera cannot "see" what is about
to happen; radar is capable of "over the horizon" coverage
like that. The radar detects a delta-V problem between the
two vehicles ahead.
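That "delta-V problem" is essentially a time-to-collision threshold on the radar's range and closing-speed readings. A minimal sketch, with the threshold and numbers made up:

```python
def time_to_collision(range_m, closing_speed_ms):
    """Seconds until impact at the current closing speed."""
    if closing_speed_ms <= 0:  # opening or holding distance: no threat
        return float("inf")
    return range_m / closing_speed_ms

def should_warn(range_m, closing_speed_ms, threshold_s=2.0):
    """Warn (or pre-charge the brakes) when impact is under the threshold."""
    return time_to_collision(range_m, closing_speed_ms) < threshold_s

print(should_warn(40.0, 25.0))  # 1.6 s to impact -> True
print(should_warn(40.0, 5.0))   # 8.0 s to impact -> False
```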



Here's a Tesla failing to predict something, and it's because
of the choice of radar versus Lidar. The radar cannot detect
a stationary object, because the entire landscape is stationary,
and such velocity measurements are hard to distinguish from the
majority of the landscape.

I still cannot find the article that claims a semi-autonomous
car saved a pedestrian. I wanted to see if the pedestrian was
saved via braking or via a steering action.

Paul
Char Jackson
2018-03-21 19:26:38 UTC
Post by Paul
You cannot "aim" for the lamp post, because the lamp post
could fall on a tanker full of gasoline, sparks from the
power lines could ignite the gasoline, and the tanker could explode
and take out a whole apartment building full of people. Now
we've made an incorrect analysis of the "value" and "risks"
of each target. We don't really know that hitting a lamp
post is "safe". The person standing directly behind the
lamp post doesn't think so.
A human driver might be able to analyze a scene and make
these connections. We cannot rely on driving software with
a "real time" requirement to be making these calls in
20 milliseconds.
All we can rely on is that the computer will be able to assess the
situation much faster than the typical human. Once assessed, the
computer can begin to take action, once again, much faster than the
typical human. Those time savings can't help but lead to better outcomes
in general.

No one has ever claimed that human injuries and fatalities will drop to
zero once computers are allowed to take over, but if fatalities only
drop by, say 60-80%, that's still a huge improvement.
Post by Paul
The software is likely to have about
the same event horizon, as that of a rabbit brain.
To be successful in making a response, the car has to
start moving the wheel, well before it gets even
close to the target. And that doesn't give time
to mull over multiple possible outcomes.
How often do humans freeze up and just slam on the brakes rather than
evaluating a situation and taking corrective action? Computers don't
have to do that. They don't have the fears and emotions that humans
have, nor do computers have the distractions that humans have. Computers
will be able to respond much faster, and their response should much more
accurately resolve the situation for the best outcome instead of blindly
slamming into something with only the brakes applied.

Human drivers have amply demonstrated over time that, as a group, they
aren't capable of applying maximum braking *and* evasive steering at the
same time, especially under changing conditions and other distractions,
such as shrieking passengers yelling, "We're all gonna die!". Computers,
on the other hand, even the relatively primitive first-gen systems being
tested today, have demonstrated that they have no problems with that.
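The braking-plus-steering point has a simple physical basis: a tire has one grip budget shared between braking and cornering, the "friction circle". A toy calculation (assumed dry-asphalt friction coefficient, not any real controller's numbers) shows why maximum braking leaves nothing for evasive steering:

```python
# Friction-circle sketch (illustrative physics, not any controller's code):
# a tire's total grip budget limits combined braking and steering,
# sqrt(a_brake^2 + a_lateral^2) <= mu * g.
import math

mu, g = 0.9, 9.81          # assumed dry-asphalt friction, gravity
a_max = mu * g             # total grip budget, m/s^2

def lateral_budget(a_brake):
    """Lateral acceleration still available while braking at a_brake."""
    return math.sqrt(max(a_max ** 2 - a_brake ** 2, 0.0))

print(round(lateral_budget(0.0), 2))    # full grip available for steering: 8.83
print(round(lateral_budget(a_max), 2))  # maximum braking leaves 0.0 to steer
```

A controller that modulates both inputs inside that circle can do what a panicked human slamming the brake pedal cannot.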

I love to drive, and for that reason I'm not fully looking forward to
autonomous vehicles, but the safety aspects make it a total winner. For
safety alone, we can't get there fast enough. The sooner, the better.

In the meantime, we already have adaptive cruise control, blind spot
monitors, lane departure warnings, automatic low speed braking when
obstacles appear, etc. People can drag their feet all they want, but
this autonomous stuff is coming faster than they think, and it
reportedly already works much better than some people think. We live in
interesting times.
Keith Nuttle
2018-03-21 19:47:36 UTC
Permalink
Post by Char Jackson
No one has ever claimed that human injuries and fatalities will drop to
zero once computers are allowed to take over, but if fatalities only
drop by, say 60-80%, that's still a huge improvement.
By this logic, used to promote every safety system in the last 70
years, the implementation of this system will save a percentage of lives.

Over the past 70 years, many safety devices have been implemented:
everything from mirrors that can be adjusted by the driver, to turn
signals, power brakes, high rear brake lights, etc. If the reduction in
traffic fatalities were as advertised, there would be no traffic
fatalities today. 10% x 10% x 10% x 10% x ........... approaches zero
--
2018: The year we learn to play the great game of Euchre
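For what it's worth, the multiplication in that last line actually cuts the other way: each device removes a fraction of the *remaining* risk, so the reductions compound toward, but never reach, zero. A quick sketch with thirty hypothetical 10% improvements (illustrative numbers only):

```python
# Each hypothetical device cuts fatalities by 10%, i.e. leaves 90% of the
# remaining risk; successive reductions multiply rather than subtract.
risk = 1.0
for _ in range(30):      # thirty successive 10% improvements
    risk *= 0.90

print(f"{risk:.3f}")     # about 0.042: ~4% of the original risk, not zero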
Wolf K
2018-03-21 21:18:21 UTC
Permalink
Post by Keith Nuttle
Post by Char Jackson
No one has ever claimed that human injuries and fatalities will drop to
zero once computers are allowed to take over, but if fatalities only
drop by, say 60-80%, that's still a huge improvement.
By this logic, used to promote every safety system in the last 70
years, the implementation of this system will save a percentage of lives.
Over the past 70 years, many safety devices have been implemented:
everything from mirrors that can be adjusted by the driver, to turn
signals, power brakes, high rear brake lights, etc. If the reduction in
traffic fatalities were as advertised, there would be no traffic
fatalities today. 10% x 10% x 10% x 10% x ........... approaches zero
Traffic fatality rates have been dropping pretty steadily since the
early 1900s.

https://en.wikipedia.org/wiki/Motor_vehicle_fatality_rate_in_U.S._by_year

Fatality rate per vehicle mile is now about 5% of what it was 100 years
ago. IOW, despite the vast increase in traffic, you're much more likely
to survive a road trip now than 100 years ago.

The accident rate has also dropped. I couldn't find the same range of
data as for fatalities, so this will have to do:

https://elsnerlawfirm.com/car-accidents-rates-statistics/

The accident rate is about half what it was in the 1950s (I got my
driver's licence in 1956). So you have about twice the chance of coming
home with an unscarred vehicle as back then.

Even so, well over 30,000 people are killed on US highways every year.
--
Wolf K
kirkwood40.blogspot.com
"The next conference for the time travel design team will be held two
weeks ago."
Wolf K
2018-03-21 17:33:41 UTC
Permalink
On 2018-03-21 11:10, Boris wrote:
[...]
Post by Boris
Of course the autonomous cars of the future will not be sentient, just as
my car today is not sentient. But, the programmers are. That's the whole
point.
[...]

You can't program AI. That is the key insight from the latest successes
of "deep learning" neural networks. You can't even explain how they
programmed themselves.

I think this is analogous to the uncomfortable discovery that complex
programs will do things you didn't program them to do. These actions are
usually called "bugs". Simple bugs are simply errors (such as typos),
and are easy to fix. Complex bugs result from unforeseen interactions
between program modules. They may be unfixable. They may not even be
easily reproducible.

Best,
--
Wolf K
kirkwood40.blogspot.com
"The next conference for the time travel design team will be held two
weeks ago."
Boris
2018-03-22 20:17:10 UTC
Permalink
Post by Wolf K
[...]
Post by Boris
Of course the autonomous cars of the future will not be sentient, just
as my car today is not sentient. But, the programmers are. That's the
whole point.
[...]
You can't program AI. That is the key insight from the latest successes
of "deep learning" neural networks. You can't even explain how they
programmed themselves.
I think this is analogous to the uncomfortable discovery that complex
programs will do things you didn't program them to do. These actions are
usually called "bugs". Simple bugs are simply errors (such as typos),
and are easy to fix. Complex bugs result from unforeseen interactions
between program modules. They may be unfixable. They may not even be
easily reproducible.
Best,
You got me interested in the application of neural networks to autonomous
cars. But anything more than a superficial understanding of neural networks
is beyond my capabilities. While doing some reading today, I ran into
this statement:

"Backpropagational neural networks (and many other types of networks) are
in a sense the ultimate 'black boxes'. Apart from defining the general
archetecture of a network and perhaps initially seeding it with a random
numbers, the user has no other role than to feed it input and watch it
train and await the output."

That statement added some understanding (for me) to your post, but I'll
ask to be sure.

So, you are saying that neural networking is not a way to make autonomous
cars safer? It's more of a serial 'if this then that' process?
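The quoted "black box" description can be made concrete with a toy network, pure standard library (all sizes, rates, and the target function are arbitrary choices for illustration): we seed random weights, repeatedly feed inputs and nudge the weights by backpropagation, and the error drops, yet the trained weights are just numbers, not inspectable rules. That is the sense in which the behaviour isn't "programmed".

```python
# Toy 1-3-1 neural network trained by backpropagation to fit y = x^2.
# Pure stdlib; every size and rate here is an arbitrary illustrative choice.
import math
import random

random.seed(0)

w1 = [random.uniform(-1, 1) for _ in range(3)]   # input -> hidden weights
b1 = [0.0, 0.0, 0.0]
w2 = [random.uniform(-1, 1) for _ in range(3)]   # hidden -> output weights
b2 = 0.0

def forward(x):
    h = [math.tanh(w1[i] * x + b1[i]) for i in range(3)]
    return h, sum(w2[i] * h[i] for i in range(3)) + b2

samples = [i / 10.0 - 1.0 for i in range(21)]    # x in [-1, 1]

def mean_sq_error():
    return sum((forward(x)[1] - x * x) ** 2 for x in samples) / len(samples)

before = mean_sq_error()

lr = 0.1
for _ in range(2000):                  # stochastic gradient descent
    x = random.choice(samples)
    h, y = forward(x)
    err = y - x * x                    # d(loss)/d(output), up to a factor of 2
    grads_h = [err * w2[i] * (1.0 - h[i] ** 2) for i in range(3)]
    for i in range(3):                 # backpropagate through the tanh layer
        w2[i] -= lr * err * h[i]
        w1[i] -= lr * grads_h[i] * x
        b1[i] -= lr * grads_h[i]
    b2 -= lr * err

after = mean_sq_error()
print(after < before)   # the error fell, but w1/w2 are just numbers, not rules
```

So it is not a serial "if this then that" process at all: nothing in the trained weights reads like a rule, which is Wolf K's point about not being able to explain what the network taught itself.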

Puddintane
2018-03-22 16:32:19 UTC
Permalink
You mean like Microsoft programmers that cannot program their way out of
a paper bag ? bug after bug after bug .........
Paul
2018-03-22 17:52:55 UTC
Permalink
Post by Puddintane
You mean like Microsoft programmers that cannot program their way out of
a paper bag ? bug after bug after bug .........
https://arstechnica.com/gadgets/2018/03/building-windows-4-million-commits-10-million-work-items/

Github

80,000 user accounts

7,000 developers and 4,000 designers, program managers,
and service engineers working within WDG.

300GB repository with 3.5 million files (six times the files of Chromium :-) )

4 million commits
10 million work items

A delay queue added to github, to ease merges.

Where does the quality come from ?
Why, practice, practice, practice...

If they hire 993,000 more developers, they will then
have "a million monkeys typing". The Great Novel
is bound to come out. It was a dark and stormy night.

*******

By the way, my webcam doesn't work in Win10 again.
It freezes up before I can push the button and take
a picture. Thank you, frameserve. Good thing I could
boot Win7 and do it there. Every copy of Win10 comes
with Win7, right ? Am I right ?

It used to work. In 10586 or so.

Paul
Boris
2018-03-22 19:44:03 UTC
Permalink
Post by Puddintane
You mean like Microsoft programmers that cannot program their way out of
a paper bag ? bug after bug after bug .........
I was seriously tempted to say the same, but I was feeling generous. I'm glad
you did.
Rene Lamontagne
2018-03-22 19:58:14 UTC
Permalink
Post by Puddintane
You mean like Microsoft programmers that cannot program their way out of
a paper bag ?   bug after  bug after bug .........
Don't see many paper bags lately; mostly plastic. :-)


Rene
Gene Wirchenko
2018-03-20 16:51:00 UTC
Permalink
On Tue, 20 Mar 2018 13:20:14 -0000, Unsteadyken
Post by Unsteadyken
Post by Mr. Man-wai Chang
Self-driving Uber kills Arizona woman in first fatal crash involving
pedestrian
If this is the "First fatal crash to involve a pedestrian", then the
implication is that there have been one or more fatal crashes that did
not involve pedestrians.
There is no such implication. It is simply a limited statement
that does not state anything outside the limited area.
Post by Unsteadyken
I think Uber should come clean about these shocking deaths, or headline
writers should learn to right proper English like what I do.
You might find reviewing your English of value.

Sincerely,

Gene Wirchenko
J. P. Gilliver (John)
2018-03-20 19:10:27 UTC
Permalink
Post by Gene Wirchenko
On Tue, 20 Mar 2018 13:20:14 -0000, Unsteadyken
[]
Post by Gene Wirchenko
Post by Unsteadyken
If this is the "First fatal crash to involve a pedestrian", then the
implication is that there have been one or more fatal crashes that did
not involve pedestrians.
There is no such implication. It is simply a limited statement
that does not state anything outside the limited area.
[]
Looking at this purely as a use of English matter, there _is_ such an
implication; if there wasn't, the words "to involve a pedestrian" would
be omitted. Their presence implies what Unsteadyken says they do.
--
J. P. Gilliver. UMRA: 1960/<1985 MB++G()AL-IS-Ch++(p)***@T+H+Sh0!:`)DNAf

"Bother,"saidPoohwhenhisspacebarrefusedtowork.
Wolf K
2018-03-20 19:46:08 UTC
Permalink
Post by Gene Wirchenko
On Tue, 20 Mar 2018 13:20:14 -0000, Unsteadyken
Post by Unsteadyken
Post by Mr. Man-wai Chang
Self-driving Uber kills Arizona woman in first fatal crash involving
pedestrian
If this is the "First fatal crash to involve a pedestrian", then the
implication is that there have been one or more fatal crashes that did
not involve pedestrians.
There is no such implication. It is simply a limited statement
that does not state anything outside the limited area.
I'm afraid there is, according to all the rules of parsing English that
I know of. If the writer intended that it was the first such incident in
Arizona, that qualification should have been added.
--
Wolf K
kirkwood40.blogspot.com
"The next conference for the time travel design team will be held two
weeks ago."
Gene Wirchenko
2018-03-21 16:56:06 UTC
Permalink
Post by Wolf K
Post by Gene Wirchenko
On Tue, 20 Mar 2018 13:20:14 -0000, Unsteadyken
Post by Unsteadyken
Post by Mr. Man-wai Chang
Self-driving Uber kills Arizona woman in first fatal crash involving
pedestrian
If this is the "First fatal crash to involve a pedestrian", then the
implication is that there have been one or more fatal crashes that did
not involve pedestrians.
There is no such implication. It is simply a limited statement
that does not state anything outside the limited area.
I'm afraid there is, according to all the rules of parsing English that
I know of. If the writer intended that it was the first such incident in
Arizona, those qualification should have been added.
Nope. It mentions the first fatal crash involving a pedestrian.
The person who wrote the article might have researched that exact
point and not anything about other types of crashes. Consequently,
responsible reporting would require that he limit his statement
appropriately.

This is similar to how the statement "We will go on a picnic if it
does not rain" says nothing about what happens if it does rain. The
people might still go on a picnic. Or they might not. We do not know.

Sincerely,

Gene Wirchenko
Wolf K
2018-03-21 17:24:04 UTC
Permalink
Post by Gene Wirchenko
Post by Wolf K
Post by Gene Wirchenko
On Tue, 20 Mar 2018 13:20:14 -0000, Unsteadyken
Post by Unsteadyken
Post by Mr. Man-wai Chang
Self-driving Uber kills Arizona woman in first fatal crash involving
pedestrian
If this is the "First fatal crash to involve a pedestrian", then the
implication is that there have been one or more fatal crashes that did
not involve pedestrians.
There is no such implication. It is simply a limited statement
that does not state anything outside the limited area.
I'm afraid there is, according to all the rules of parsing English that
I know of. If the writer intended that it was the first such incident in
Arizona, those qualification should have been added.
Nope. It mentions the first fatal crash involving a pedestrian.
The person who wrote the article might have researched that exact
point and not anything about other types of crashes. Consequently,
responsible reporting would require that he limit his statement
appropriately.
This is similar to a statement "We will go on a picnic if it does
not rain." does not state anything about if it does rain. The people
might still go on a picnic. Or they might not. We do not know.
Sincerely,
Gene Wirchenko
I'm afraid you're confusing logic and English usage. Trust me: English
usage means that "First X with property Y" implies that the speaker is
singling out those Xs that have property Y, and ignoring all the other
Xs that have property Z (etc). Otherwise, the speaker would have merely
referred to X without qualification. That is, in English (and all the
languages I speak/read), any qualification implies a subset. The speaker
may make their ignorance of other possible subsets explicit ("...if
any..."), but if they don't do so, you must infer that they know of or
believe that there are other subsets. Thus, by qualifying X as having
property Y, the speaker implies they know of or believe that there are X
that have property Z (etc).

BTW, it's such differences as these between natural language usage and
logic that must be taught explicitly when teaching students how to parse
an argument. It ain't easy. I taught logic (Aristotelian and Boolean)
for many years.

Best,
--
Wolf K
kirkwood40.blogspot.com
"The next conference for the time travel design team will be held two
weeks ago."
Gene Wirchenko
2018-03-21 23:17:20 UTC
Permalink
Post by Wolf K
Post by Gene Wirchenko
Post by Wolf K
Post by Gene Wirchenko
On Tue, 20 Mar 2018 13:20:14 -0000, Unsteadyken
Post by Unsteadyken
Post by Mr. Man-wai Chang
Self-driving Uber kills Arizona woman in first fatal crash involving
pedestrian
If this is the "First fatal crash to involve a pedestrian", then the
implication is that there have been one or more fatal crashes that did
not involve pedestrians.
There is no such implication. It is simply a limited statement
that does not state anything outside the limited area.
I'm afraid there is, according to all the rules of parsing English that
I know of. If the writer intended that it was the first such incident in
Arizona, those qualification should have been added.
Nope. It mentions the first fatal crash involving a pedestrian.
The person who wrote the article might have researched that exact
point and not anything about other types of crashes. Consequently,
responsible reporting would require that he limit his statement
appropriately.
This is similar to a statement "We will go on a picnic if it does
not rain." does not state anything about if it does rain. The people
might still go on a picnic. Or they might not. We do not know.
I'm afraid you're confusing logic and English usage. Trust me: English
usage means that "First X with property Y" implies that the speaker is
singling out those Xs that have property Y, and ignoring all the other
Xs that have property Z (etc). Otherwise, the speaker would have merely
Ignoring them, because he might well not know anything about
those with property Z.
Post by Wolf K
referred to X without qualification. That is, in English (and all the
languages I speak/read), any qualification implies a subset. The speaker
may make their ignorance of other possible subsets explicit ("...if
any..."), but if they don't do so, you must infer that they know of or
believe that there are other subsets. Thus, by qualifying X as having
No, you do not.

People *might* mean that, but they *need not*.
Post by Wolf K
property Y, the speaker implies they know of or believe that there are X
that have property Z (etc).
First you state "... and ignoring all the other Xs that have
property Z (etc)." and now you state the speaker knows about or
believes about that same group. Which is it?
Post by Wolf K
BTW, it's such differences as these between natural language usage and
logic that must be taught explicitly when teaching students how to parse
an argument. It ain't easy. I taught logic (Aristotelian and Boolean)
for many years.
Well, if you can make a mistake about it, it is no wonder that
many others do, too.

Sincerely,

Gene Wirchenko
Wolf K
2018-03-21 23:49:57 UTC
Permalink
Post by Gene Wirchenko
First you state "... and ignoring all the other Xs that have
property Z (etc)." and now you state the speaker knows about or
believes about that same group. Which is it?
Both. Because "ignoring" in this context means "without explicit
reference to".

See, English is what it is. It ain't logical. Your arguments are valid,
but unsound. That often happens.
--
Wolf K
kirkwood40.blogspot.com
"The next conference for the time travel design team will be held two
weeks ago."
Gene Wirchenko
2018-03-22 18:52:51 UTC
Permalink
Post by Wolf K
Post by Gene Wirchenko
First you state "... and ignoring all the other Xs that have
property Z (etc)." and now you state the speaker knows about or
believes about that same group. Which is it?
Both. Because "ignoring" in this context means "without explicit
reference to".
See, English is what it is. It ain't logical. Your arguments are valid,
but unsound. That often happens.
But it is not what you say it is.

Do you get into trouble with others complaining that you are
putting words in their mouths?

Sincerely,

Gene Wirchenko
Char Jackson
2018-03-20 16:07:42 UTC
Permalink
On Tue, 20 Mar 2018 19:53:13 +0800, "Mr. Man-wai Chang"
Post by Mr. Man-wai Chang
Self-driving Uber kills Arizona woman in first fatal crash involving
pedestrian
<https://www.theguardian.com/technology/2018/mar/19/uber-self-driving-car-kills-woman-arizona-tempe>
[multiple groups removed]

AFAIK, the facts are still being gathered, but I'd be curious to know
how many pedestrians were killed by human drivers over the same period.
If it's more than 1, which I'm assuming is the case, then I'm not
alarmed by this incident other than having sympathy for the deceased and
her family. Initial reports said she was crossing the street, but not in
or near a crosswalk, so I wonder if it would have made a positive
difference if a human had been driving. I hope testing doesn't get
curtailed by this incident.
Gene Wirchenko
2018-03-20 16:53:13 UTC
Permalink
Post by Char Jackson
On Tue, 20 Mar 2018 19:53:13 +0800, "Mr. Man-wai Chang"
Post by Mr. Man-wai Chang
Self-driving Uber kills Arizona woman in first fatal crash involving
pedestrian
<https://www.theguardian.com/technology/2018/mar/19/uber-self-driving-car-kills-woman-arizona-tempe>
[multiple groups removed]
AFAIK, the facts are still being gathered, but I'd be curious to know
how many pedestrians were killed by human drivers over the same period.
If it's more than 1, which I'm assuming is the case, then I'm not
alarmed by this incident other than having sympathy for the deceased and
her family. Initial reports said she was crossing the street, but not in
or near a crosswalk, so I wonder if it would have made a positive
difference if a human had been driving. I hope testing doesn't get
curtailed by this incident.
Why not? The very fact that there is testing suggests that
things can go wrong, and it might be something to do with the car
design.

How many more people are you willing to see killed during
testing? Would you like to be one of them?

Sincerely,

Gene Wirchenko
Jeff Barnett
2018-03-20 21:12:19 UTC
Permalink
Post by Char Jackson
On Tue, 20 Mar 2018 19:53:13 +0800, "Mr. Man-wai Chang"
Post by Mr. Man-wai Chang
Self-driving Uber kills Arizona woman in first fatal crash involving
pedestrian
<https://www.theguardian.com/technology/2018/mar/19/uber-self-driving-car-kills-woman-arizona-tempe>
[multiple groups removed]
AFAIK, the facts are still being gathered, but I'd be curious to know
how many pedestrians were killed by human drivers over the same period.
If it's more than 1, which I'm assuming is the case, then I'm not
alarmed by this incident other than having sympathy for the deceased and
her family. Initial reports said she was crossing the street, but not in
or near a crosswalk, so I wonder if it would have made a positive
difference if a human had been driving. I hope testing doesn't get
curtailed by this incident.
I believe the number last year was 6000. The real question is how
auto-driven car accident statistics compare with those of human drivers.

The Uber accident was not necessarily the car/drivers fault. A woman was
walking a bicycle and started to cross the street, not in the crosswalk,
and was hit just as she went into the street. The car was traveling
around 40-45mph. In other words, it was HIGHLY likely she didn't look
before crossing. The municipal police are still investigating and
certainly have not assigned any blame yet. In fact they have speculated
that this might be one of those where no primary blame is asserted.

As someone else said: You should be wary and afraid of those automated
vehicles. But you should be god-awful more afraid of all those idiots
out there jacking off with their smart phones while driving.

My personal bet is that 5 years from now we will see self-driving cars
doing spectacularly better than human-driven cars - better safety, better
mileage, faster trips - and still a bunch of idiots (the same ones who
opposed autopilots and computer-assisted landings for planes) bitching
about the supremacy of human drivers, vinyl records, doctors reading
x-rays, etc, etc, etc.
--
Jeff Barnett
Char Jackson
2018-03-20 23:08:09 UTC
Permalink
Post by Jeff Barnett
Post by Char Jackson
On Tue, 20 Mar 2018 19:53:13 +0800, "Mr. Man-wai Chang"
Post by Mr. Man-wai Chang
Self-driving Uber kills Arizona woman in first fatal crash involving
pedestrian
<https://www.theguardian.com/technology/2018/mar/19/uber-self-driving-car-kills-woman-arizona-tempe>
[multiple groups removed]
AFAIK, the facts are still being gathered, but I'd be curious to know
how many pedestrians were killed by human drivers over the same period.
If it's more than 1, which I'm assuming is the case, then I'm not
alarmed by this incident other than having sympathy for the deceased and
her family. Initial reports said she was crossing the street, but not in
or near a crosswalk, so I wonder if it would have made a positive
difference if a human had been driving. I hope testing doesn't get
curtailed by this incident.
I believe the number last year was 6000. The real question is how do
auto driven car accident statistics compare with human drivers.
The Uber accident was not necessarily the car/drivers fault. A woman was
walking a bicycle and started to cross the street, not in the crosswalk,
and was hit just as she went into the street. The car was traveling
around 40-45mph. In other words, it was HIGHLY likely she didn't look
before crossing. The municipal police are still investigating and
certainly have not assigned any blame yet. In fact they have speculated
that this might be one of those where no primary blame is asserted.
As someone else said: You should be wary and afraid of those automated
vehicles. But you should be god-awful more afraid of all those idiots
out there jacking off with their smart phones while driving.
My personal bet is that 5 years from now we will see self-driving cars
doing spectacularly better than human-driven cars - better safety, better
mileage, faster trips - and still a bunch of idiots (the same ones who
opposed autopilots and computer-assisted landings for planes) bitching
about the supremacy of human drivers, vinyl records, doctors reading
x-rays, etc, etc, etc.
I'm with you 100%. From everything I've read, the technology is coming
along much faster than I would have ever thought.

There are two major hurdles that I see. The first, of course, is the
technology itself. We already have anti-lock brakes, lane departure
warnings, adaptive cruise control, blind spot monitors, and automatic
parallel parking, oh and 360-degree virtual overhead view on the
dashboard stitched together from multiple exterior cameras, on virtually
all new vehicles. ICBW, but I think all of those things are mandated by
2020. With that much automation already in place, it's a logical (but
difficult) next step to stitch it all together and make it work without
significant human intervention.

The second hurdle is the transition period, where semi-autonomous
vehicles are forced to share the world with us humans. We're the weakest
link by far, so the sooner we can get the humans out of the picture the
better off we'll be. If people insist on playing with Facebook while
they drive, let them play on Facebook while the car drives itself.

Uber Trucking has a good initial approach. A human drops off a semi
trailer at a hub near the edge of a city, then an Uber truck is hooked
up. The Uber truck takes the trailer to the next city or across the
country, where it's once again dropped off at a trucking hub and a human
takes it into the city. Out on the highway, there's still a human in the
truck, but he or she is there just in case, not as a primary driver.
Keith Nuttle
2018-03-21 00:03:17 UTC
Permalink
Post by Char Jackson
Post by Jeff Barnett
Post by Char Jackson
On Tue, 20 Mar 2018 19:53:13 +0800, "Mr. Man-wai Chang"
Post by Mr. Man-wai Chang
Self-driving Uber kills Arizona woman in first fatal crash involving
pedestrian
<https://www.theguardian.com/technology/2018/mar/19/uber-self-driving-car-kills-woman-arizona-tempe>
[multiple groups removed]
AFAIK, the facts are still being gathered, but I'd be curious to know
how many pedestrians were killed by human drivers over the same period.
If it's more than 1, which I'm assuming is the case, then I'm not
alarmed by this incident other than having sympathy for the deceased and
her family. Initial reports said she was crossing the street, but not in
or near a crosswalk, so I wonder if it would have made a positive
difference if a human had been driving. I hope testing doesn't get
curtailed by this incident.
I believe the number last year was 6000. The real question is how do
auto driven car accident statistics compare with human drivers.
The Uber accident was not necessarily the car/drivers fault. A woman was
walking a bicycle and started to cross the street, not in the crosswalk,
and was hit just as she went into the street. The car was traveling
around 40-45mph. In other words, it was HIGHLY likely she didn't look
before crossing. The municipal police are still investigating and
certainly have not assigned any blame yet. In fact they have speculated
that this might be one of those where no primary blame is asserted.
As someone else said: You should be wary and afraid of those automated
vehicles. But you should be god-awful more afraid of all those idiots
out there jacking off with their smart phones while driving.
My personal bet is that 5 years from now we will see self-driving cars
doing spectacularly better than human-driven cars - better safety, better
mileage, faster trips - and still a bunch of idiots (the same ones who
opposed autopilots and computer-assisted landings for planes) bitching
about the supremacy of human drivers, vinyl records, doctors reading
x-rays, etc, etc, etc.
I'm with you 100%. From everything I've read, the technology is coming
along much faster than I would have ever thought.
There are two major hurdles that I see. The first, of course, is the
technology itself. We already have anti-lock brakes, lane departure
warnings, adaptive cruise control, blind spot monitors, and automatic
parallel parking, oh and 360-degree virtual overhead view on the
dashboard stitched together from multiple exterior cameras, on virtually
all new vehicles. ICBW, but I think all of those things are mandated by
2020. With that much automation already in place, it's a logical (but
difficult) next step to stitch it all together and make it work without
significant human intervention.
The second hurdle is the transition period, where semi-autonomous
vehicles are forced to share the world with us humans. We're the weakest
link by far, so the sooner we can get the humans out of the picture the
better off we'll be. If people insist on playing with Facebook while
they drive, let them play on Facebook while the car drives itself.
Uber Trucking has a good initial approach. A human drops off a semi
trailer at a hub near the edge of a city, then an Uber truck is hooked
up. The Uber truck takes the trailer to the next city or across the
country, where it's once again dropped off at a trucking hub and a human
takes it into the city. Out on the highway, there's still a human in the
truck, but he or she is there just in case, not as a primary driver.
While there is currently great enthusiasm for self-driving vehicles, I am
afraid the complexity of the system is more than current technology can
handle.

We have already seen a death where the automated system did not
understand that it was looking under the truck, and the human occupant
was killed. It will be difficult to design software that makes the
inferences that reach slightly beyond the facts.

One of the problems that I see is a simple one. Yes, the car will slow
down and stop in traffic, but what will be used to increase the speed as
traffic thins?

What will trigger start-ups after it stops in stop-and-go traffic?

While in some places the placement of traffic lights is somewhat
standardized, will an autonomous car be able to find the traffic light
on all occasions? What about the stop sign that is mostly hidden by
vegetation; will it recognize that?

How will it be able to detect a person directing traffic? It could be a
policeman, but it could be a construction worker or a civilian directing
traffic around an accident.

The beginning and ending of speed zones will also be a problem. There
are several places I drive where there is a sign as you come into a
small community, but none after you leave the area for 10 miles. Will
the self-driving car know it is supposed to return to the default speed
limit after passing through the community?

Yes, Garmin shows speed limits, but there are times when the posted speed
limits are different from what Garmin shows. How will the autonomous car
know the difference?

What about a brand-new highway that has just opened? Last summer we
drove for 20 miles on a newly opened highway that completely confused
the Garmin. Will a self-driving car be able to handle that situation?

These are just a few common situations that I have encountered. Until
these are reliably resolved, I will put my self-driving car in the
garage next to my flying car. Remember when we got those, about 50 years
ago? What about the Segway that was going to revolutionize
transportation?

There is more to driving than start, stop, and staying in the lane.
--
2018: The year we learn to play the great game of Euchre
Paul
2018-03-21 12:35:03 UTC
Permalink
Post by Keith Nuttle
Post by Char Jackson
Post by Jeff Barnett
Post by Char Jackson
On Tue, 20 Mar 2018 19:53:13 +0800, "Mr. Man-wai Chang"
Post by Mr. Man-wai Chang
Self-driving Uber kills Arizona woman in first fatal crash involving
pedestrian
<https://www.theguardian.com/technology/2018/mar/19/uber-self-driving-car-kills-woman-arizona-tempe>
[multiple groups removed]
AFAIK, the facts are still being gathered, but I'd be curious to know
how many pedestrians were killed by human drivers over the same period.
If it's more than 1, which I'm assuming is the case, then I'm not
alarmed by this incident other than having sympathy for the deceased and
her family. Initial reports said she was crossing the street, but not in
or near a crosswalk, so I wonder if it would have made a positive
difference if a human had been driving. I hope testing doesn't get
curtailed by this incident.
I believe the number last year was 6000. The real question is how do
auto driven car accident statistics compare with human drivers.
The Uber accident was not necessarily the car/drivers fault. A woman was
walking a bicycle and started to cross the street, not in the crosswalk,
and was hit just as she went into the street. The car was traveling
around 40-45mph. In other words, it was HIGHLY likely she didn't look
before crossing. The municipal police are still investigating and
certainly have not assigned any blame yet. In fact they have speculated
that this might be one of those where no primary blame is asserted.
As someone else said: You should be wary and afraid of those automated
vehicles. But you should be god-awful more afraid of all those idiots
out there jacking off with their smart phones while driving.
My personal bet is that 5 years from now we will see self-driving cars
doing spectacularly better than human-driven cars - better safety, better
mileage, faster trips - and still a bunch of idiots (the same ones who
opposed autopilots and computer-assisted landings for planes) bitching
about the supremacy of human drivers, vinyl records, doctors reading
x-rays, etc, etc, etc.
I'm with you 100%. From everything I've read, the technology is coming
along much faster than I would have ever thought.
There are two major hurdles that I see. The first, of course, is the
technology itself. We already have anti-lock brakes, lane departure
warnings, adaptive cruise control, blind spot monitors, and automatic
parallel parking, oh and 360-degree virtual overhead view on the
dashboard stitched together from multiple exterior cameras, on virtually
all new vehicles. ICBW, but I think all of those things are mandated by
2020. With that much automation already in place, it's a logical (but
difficult) next step to stitch it all together and make it work without
significant human intervention.
The second hurdle is the transition period, where semi-autonomous
vehicles are forced to share the world with us humans. We're the weakest
link by far, so the sooner we can get the humans out of the picture the
better off we'll be. If people insist on playing with Facebook while
they drive, let them play on Facebook while the car drives itself.
Uber Trucking has a good initial approach. A human drops off a semi
trailer at a hub near the edge of a city, then an Uber truck is hooked
up. The Uber truck takes the trailer to the next city or across the
country, where it's once again dropped off at a trucking hub and a human
takes it into the city. Out on the highway, there's still a human in the
truck, but he or she is there just in case, not as a primary driver.
While there is currently great enthusiasm for auto driving vehicles I am
afraid the complexity of the system is more than current technology can
handle.
We have already seen a death where the automated system did not
understand that it was looking under the truck, and the human occupant
was killed.
The occupant of that vehicle (a Model S) insisted that a Level 2 design drive at Level 5.

Here's the new DMV written test for Model S [potential] owners.

1) In your Model S, you can be

a) Drunk and slumped over asleep, in the driver seat.
b) Playing Nintendo while the car drives me home.
c) Driving with my hands on the wheel, in case I need to take over.

If you don't answer "C", you can't get your plates for
your Model S.

The Model S has a camera and a radar system (no Lidar). If
the software had paid attention to the radar a bit more, the
car might have stopped in time.

https://techcrunch.com/2016/09/11/tesla-autopilot-8-0-uses-radar-to-prevent-accidents-like-the-fatal-model-s-crash/
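
"Paid attention to the radar a bit more" is a sensor-fusion weighting question, and a toy sketch makes the point (the weights and thresholds here are invented, not Tesla's): let a confident radar return trigger braking even when the camera classifier is unsure, which is the direction the linked Autopilot 8.0 change moved in.

```python
# Toy two-sensor fusion of the kind being described: a confident radar
# return can trigger braking even when the camera classifier is unsure.
# Weights and thresholds are invented for illustration.

def brake_decision(camera_conf, radar_conf):
    """True if the fused obstacle confidence warrants braking."""
    fused = 0.4 * camera_conf + 0.6 * radar_conf   # radar weighted up
    return fused > 0.5 or radar_conf > 0.9         # strong radar alone suffices

# Camera misreads the truck side as sky; radar sees a large flat target:
print(brake_decision(camera_conf=0.1, radar_conf=0.95))  # True
# Both sensors see nothing: no phantom braking.
print(brake_decision(camera_conf=0.1, radar_conf=0.1))   # False
```

The trade-off is the second case: weight radar too heavily and overhead signs and bridges cause phantom braking, which is exactly why the camera was trusted more in the first place.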

Paul
Keith Nuttle
2018-03-21 12:47:33 UTC
Permalink
Post by Paul
Post by Keith Nuttle
[]
While there is currently great enthusiasm for auto driving vehicles I
am afraid the complexity of the system is more than current technology
can handle.
We have already seen a death where the automated system did not
understand that it was looking under the truck, and the human occupant
was killed.
The occupant of that vehicle (Model S), insisted a Level 2 design drive at Level 5.
Here's the new DMV written test for Model S [potential] owners.
1) In your Model S, you can be
   a) Drunk and slumped over asleep, in the driver seat.
   b) Playing Nintendo while the car drives me home.
   c) Driving with my hands on the wheel, in case I need to take over.
If you don't answer "C", you can't get your plates for
your Model S.
The Model S has a camera and a radar system (no Lidar). If
the software had paid attention to the radar a bit more, the
car might have stopped in time.
https://techcrunch.com/2016/09/11/tesla-autopilot-8-0-uses-radar-to-prevent-accidents-like-the-fatal-model-s-crash/
   Paul
Whatever the licensing requirement, the software must be capable of
correctly analyzing the situations I mentioned in my post, plus a
million other situations that do not have yes/no answers.
--
2018: The year we learn to play the great game of Euchre
Paul
2018-03-21 13:56:08 UTC
Permalink
Post by Keith Nuttle
Post by Paul
Post by Keith Nuttle
[]
While there is currently great enthusiasm for auto driving vehicles I
am afraid the complexity of the system is more than current
technology can handle.
We have already seen a death where the automated system did not
understand that it was looking under the truck, and the human
occupant was killed.
The occupant of that vehicle (Model S), insisted a Level 2 design drive at Level 5.
Here's the new DMV written test for Model S [potential] owners.
1) In your Model S, you can be
a) Drunk and slumped over asleep, in the driver seat.
b) Playing Nintendo while the car drives me home.
c) Driving with my hands on the wheel, in case I need to take over.
If you don't answer "C", you can't get your plates for
your Model S.
The Model S has a camera and a radar system (no Lidar). If
the software had paid attention to the radar a bit more, the
car might have stopped in time.
https://techcrunch.com/2016/09/11/tesla-autopilot-8-0-uses-radar-to-prevent-accidents-like-the-fatal-model-s-crash/
Paul
Whatever the licensing requirement, the software must be capable of
correctly analyzing the situations I mentioned in my post, plus a
million other situations that do not have yes/no answers.
Level 5 is the only level that makes no assumptions about the driver.

For everything else, you had better be a Rocket Scientist.

Paul
J. P. Gilliver (John)
2018-03-21 14:23:09 UTC
Permalink
[]
Post by Keith Nuttle
Whatever the licensing requirement, the software must be capable of
correctly analyzing the situations I mentioned in my post, plus a
million other situations that do not have yes/no answers.
[]
Why? For a start, what _is_ the "correct" analysis of a situation that
has no yes/no answer; to go on, do you test human drivers' decisions in
the same situation?

I'm not saying I have the answer (I like technology [presumably like
most here] but do have some concerns about autonomous vehicles, and many
other such matters for that matter); I just view the above as not being
a valid argument _against_, at least _unless_ you say what would be
right for a human in such a situation. [Which you can't if there _is_ no
"correct" choice.]
--
J. P. Gilliver. UMRA: 1960/<1985 MB++G()AL-IS-Ch++(p)***@T+H+Sh0!:`)DNAf

The average age of a single mum in this country is 37
- Jane Rackham, RT 2016/5/28-6/3
Char Jackson
2018-03-21 14:17:01 UTC
Permalink
On Wed, 21 Mar 2018 08:47:33 -0400, Keith Nuttle
Post by Keith Nuttle
Post by Paul
Post by Keith Nuttle
[]
While there is currently great enthusiasm for auto driving vehicles I
am afraid the complexity of the system is more than current technology
can handle.
We have already seen a death where the automated system did not
understand that it was looking under the truck, and the human occupant
was killed.
The occupant of that vehicle (Model S), insisted a Level 2 design drive at Level 5.
Here's the new DMV written test for Model S [potential] owners.
1) In your Model S, you can be
   a) Drunk and slumped over asleep, in the driver seat.
   b) Playing Nintendo while the car drives me home.
   c) Driving with my hands on the wheel, in case I need to take over.
If you don't answer "C", you can't get your plates for
your Model S.
The Model S has a camera and a radar system (no Lidar). If
the software had paid attention to the radar a bit more, the
car might have stopped in time.
https://techcrunch.com/2016/09/11/tesla-autopilot-8-0-uses-radar-to-prevent-accidents-like-the-fatal-model-s-crash/
   Paul
Whatever the licensing requirement, the software must be capable of
correctly analyzing the situations I mentioned in my post, plus a
million other situations that do not have yes/no answers.
I'm guessing you'd be surprised to learn how far the technology has come
in the last couple of years. Most of the situations you brought up have
been put to bed some time ago and they're now working on the edge cases.
mechanic
2018-03-21 17:59:45 UTC
Permalink
Post by Keith Nuttle
Whatever the licensing requirement, the software must be capable
of correctly analyzing the situations I mentioned in my post,
plus a million other situations that do not have yes/no answers.
No, it just has to be better than the average human. That leads to a
reduction in accidents and deaths on the road.
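
The arithmetic behind "just better than average" is worth spelling out. Using rough US ballpark figures (about 1.2 deaths per 100 million vehicle-miles and about 3.2 trillion miles driven per year; both approximate) and a purely hypothetical 20% improvement:

```python
# Back-of-envelope version of "just has to be better than the average
# human". The human baseline (~1.2 deaths per 100M vehicle-miles) and
# annual mileage (~3.2 trillion miles) are rough US figures; the 20%
# improvement is purely hypothetical.

HUMAN_RATE = 1.2 / 1e8     # deaths per vehicle-mile
ANNUAL_MILES = 3.2e12      # miles driven per year
IMPROVEMENT = 0.20         # hypothetical: 20% safer than average

human_deaths = HUMAN_RATE * ANNUAL_MILES
saved = human_deaths * IMPROVEMENT
print(round(human_deaths), round(saved))  # 38400 7680
```

So even a modest relative improvement scales to thousands of lives per year, which is the force of the argument; whether the improvement actually materializes is the open question.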
Gene Wirchenko
2018-03-21 23:22:23 UTC
Permalink
Post by mechanic
Post by Keith Nuttle
Whatever the licensing requirement, the software must be capable
of correctly analyzing the situations I mentioned in my post,
plus a million other situations that do not have yes/no answers.
No it just has to be better than the average human. This leads to a
reduction in accidents/deaths on the road.
Not necessarily. One issue is that people do not know how these
cars will react. If they are supposedly better, but cause accidents
that are ostensibly the motorists' fault yet are actually due to the
cars' unpredictable actions, there might be more accidents overall.
They would then be a net minus.

Parts of systems interact. Just looking at one part is not
enough.

Sincerely,

Gene Wirchenko
Jeff Barnett
2018-03-21 17:39:35 UTC
Permalink
Post by Paul
Post by Keith Nuttle
[]
While there is currently great enthusiasm for auto driving vehicles I
am afraid the complexity of the system is more than current technology
can handle.
We have already seen a death where the automated system did not
understand that it was looking under the truck, and the human occupant
was killed.
The occupant of that vehicle (Model S), insisted a Level 2 design drive at Level 5.
Here's the new DMV written test for Model S [potential] owners.
1) In your Model S, you can be
   a) Drunk and slumped over asleep, in the driver seat.
   b) Playing Nintendo while the car drives me home.
   c) Driving with my hands on the wheel, in case I need to take over.
If you don't answer "C", you can't get your plates for
your Model S.
The Model S has a camera and a radar system (no Lidar). If
the software had paid attention to the radar a bit more, the
car might have stopped in time.
From what I read, she was hit just after stepping into the street by a
car going a legal 45 mph. That doesn't sound like a lack of driver or
computer attention, or a lack of reflexes. I await a full investigation
by LEOs, technologists, and pseudo-pundits (the media). Let's hope they
get their facts straight. Let us also hope that somewhere along the
line we get comparison statistics between humans and automation.
Post by Paul
https://techcrunch.com/2016/09/11/tesla-autopilot-8-0-uses-radar-to-prevent-accidents-like-the-fatal-model-s-crash/
--
Jeff Barnett
Rodney Pont
2018-03-21 21:12:12 UTC
Permalink
Post by Jeff Barnett
From what I read, she was hit just after stepping into the street by a
car going a legal 45 mph. That doesn't sound like lack of driver or
computer attention or lack of reflexes.
I wonder if she saw the lidar on top of it and thought 'it's one of
those computer cars, it'll stop if I step out in front of it'.
Unfortunately since she didn't survive we will never know.
--
Faster, cheaper, quieter than HS2
and built in 5 years;
UKUltraspeed <http://www.500kmh.com/>
Ken Blake
2018-03-21 21:28:08 UTC
Permalink
On Wed, 21 Mar 2018 21:12:12 +0000 (GMT), "Rodney Pont"
Post by Rodney Pont
Post by Jeff Barnett
From what I read, she was hit just after stepping into the street by a
car going a legal 45 mph. That doesn't sound like lack of driver or
computer attention or lack of reflexes.
I wonder if she saw the lidar on top of it and thought 'it's one of
those computer cars, it'll stop if I step out in front of it'.
Unfortunately since she didn't survive we will never know.
Right, there's no way to know for sure. But lots of us have a very
good guess: no, she didn't.

I'm not a gambling man, but I'd put money on that.
Char Jackson
2018-03-22 04:32:50 UTC
Permalink
On Wed, 21 Mar 2018 21:12:12 +0000 (GMT), "Rodney Pont"
Post by Rodney Pont
Post by Jeff Barnett
From what I read, she was hit just after stepping into the street by a
car going a legal 45 mph. That doesn't sound like lack of driver or
computer attention or lack of reflexes.
I wonder if she saw the lidar on top of it and thought 'it's one of
those computer cars, it'll stop if I step out in front of it'.
Unfortunately since she didn't survive we will never know.
I don't think that's a widely held belief, so we can probably safely
assume that the pedestrian didn't think that. The laws of physics are
still in effect.
Paul
2018-03-22 13:49:54 UTC
Permalink
Post by Rodney Pont
Post by Jeff Barnett
From what I read, she was hit just after stepping into the street by a
car going a legal 45 mph. That doesn't sound like lack of driver or
computer attention or lack of reflexes.
I wonder if she saw the lidar on top of it and thought 'it's one of
those computer cars, it'll stop if I step out in front of it'.
Unfortunately since she didn't survive we will never know.
There is video available now.

https://static01.nyt.com/images/2018/03/20/us/self-driving-uber-pedestrian-killed-promo-1521571927974/self-driving-uber-pedestrian-killed-promo-1521571927974-master495.jpg

https://www.theverge.com/2018/3/21/17149958/tempe-police-fatal-crash-self-driving-uber-video-released

She was in the middle of the road.

This wasn't a "take one step off median, get clipped" case.

The car should have detected this. It didn't.

The weather conditions were perfect, and it was nighttime.

Now the question is, what part of the car failed. Did
the computer crash ? Did the classifier hardware crash ?
What exception condition or BSOD was it throwing at the time ?

The car doesn't react at all.

The Safety Driver isn't much of a safety driver.

Paul
J. P. Gilliver (John)
2018-03-22 13:57:44 UTC
Permalink
In message <p90c9v$crp$***@dont-email.me>, Paul <***@needed.invalid>
writes:
[]
Post by Paul
She was in the middle of the road.
This wasn't a "take one step off median, get clipped" case.
The car should have detected this. It didn't.
Indeed.
Post by Paul
The weather conditions are perfect. And, it's nighttime.
Now the question is, what part of the car failed. Did
the computer crash ? Did the classifier hardware crash ?
What exception condition or BSOD was it throwing at the time ?
The car doesn't react at all.
It certainly didn't seem to.
Post by Paul
The Safety Driver isn't much of a safety driver.
No, he was useless.
Post by Paul
Paul
I would be interested to know a couple of things (though these should
_not_ determine any decisions, because you have to allow for deaf
pedestrians):

Was it an electric car (and thus rather quiet)? Was she hard of hearing?
Do any of the autonomous systems currently being developed use the horn?
(I'm pretty sure the answer to that one is no, probably because the
false positives would result in its over-use and thus be used [as
another argument] against autonomous vehicles; however, once things
improve in that respect, I'd say they should - if audible warnings
_inside_ the car are a good thing, then they would be outside too.)
--
J. P. Gilliver. UMRA: 1960/<1985 MB++G()AL-IS-Ch++(p)***@T+H+Sh0!:`)DNAf

"Victory does not bring with it a sense of triumph - rather the dull numbness
of relief..." - Cecil Beaton quoted by Anthony Horowitz, RT 2015/1/3-9
Paul
2018-03-22 14:46:41 UTC
Permalink
Post by J. P. Gilliver (John)
I would be interested to know a couple of things (though these should
_not_ determine any decisions, because you have to allow for deaf
Was it an electric car (and thus rather quiet)? Was she hard of hearing?
Do any of the autonomous systems currently being developed use the horn?
(I'm pretty sure the answer to that one is no, probably because the
false positives would result in its over-use and thus be used [as
another argument] against autonomous vehicles; however, once things
improve in that respect, I'd say they should - if audible warnings
_inside_ the car are a good thing, then they would be outside too.)
I tried to track that information down.

Uber has set up a contract with Volvo, for up to 24,000 cars.
The Volvo is a regular car, with "gubbins" added for smart control.
So, for example, the steering would have an electric motor to
move the rack and pinion. Uber then installs a kit, to provide
"drive" to the steering motor and make it autonomous.

Volvo makes a whole pile of cars, under the stated Uber contract
model number.

The current generation of Volvo offered both gasoline and TDI (diesel)
models, with various transmission offerings.

In 2019, some of the Uber fleet cars they might take delivery
of, would include plug-in hybrid cars. It will be 2019, in theory,
before a "quiet" car could be a problem. Could they have received
prototypes of those ? Not likely.

But, it's pretty hard to get reliable information, as the initial
orders for Uber are filled with "table scraps" on a lot by lot
basis. The overall contract is for UC60 cars, and 100 cars
in Pittsburgh are UV60 (tdi) cars. Apparently. But the reliability
of this information is suspect.

All I can really say at this point, is it's a car, and it
came from the Chinese owner of Volvo.

"Geely Automobile Holdings Ltd"
https://www.reuters.com/article/us-geely-idUSTRE66S1TC20100802

Paul
Char Jackson
2018-03-22 16:54:49 UTC
Permalink
On Thu, 22 Mar 2018 13:57:44 +0000, "J. P. Gilliver (John)"
Post by J. P. Gilliver (John)
Was it an electric car (and thus rather quiet)? Was she hard of hearing?
Do any of the autonomous systems currently being developed use the horn?
(I'm pretty sure the answer to that one is no, probably because the
false positives would result in its over-use and thus be used [as
another argument] against autonomous vehicles; however, once things
improve in that respect, I'd say they should - if audible warnings
_inside_ the car are a good thing, then they would be outside too.)
For a very long time now, I've thought that horns are obsolete. They may
have made sense a century ago when cows roamed freely and drivers hadn't
yet agreed on a common set of driving rules, but not so much anymore. A
non-directional blast of noise has never been very effective or very
efficient.

I think the last time I used a car horn was well over 45 years ago. The
last time I used the horn on my motorcycle was in the summer of 2012. I
made sure I was well out in the country with no other vehicles around. I
just wanted to see if it worked. It would never occur to me to use a
horn where others might hear it, which of course, rather defeats its
purpose.
Paul
2018-03-22 19:04:34 UTC
Permalink
Post by Char Jackson
On Thu, 22 Mar 2018 13:57:44 +0000, "J. P. Gilliver (John)"
Post by J. P. Gilliver (John)
Was it an electric car (and thus rather quiet)? Was she hard of hearing?
Do any of the autonomous systems currently being developed use the horn?
(I'm pretty sure the answer to that one is no, probably because the
false positives would result in its over-use and thus be used [as
another argument] against autonomous vehicles; however, once things
improve in that respect, I'd say they should - if audible warnings
_inside_ the car are a good thing, then they would be outside too.)
For a very long time now, I've thought that horns are obsolete. They may
have made sense a century ago when cows roamed freely and drivers hadn't
yet agreed on a common set of driving rules, but not so much anymore. A
non-directional blast of noise has never been very effective or very
efficient.
I think the last time I used a car horn was well over 45 years ago. The
last time I used the horn on my motorcycle was in the summer of 2012. I
made sure I was well out in the country with no other vehicles around. I
just wanted to see if it worked. It would never occur to me to use a
horn where others might hear it, which of course, rather defeats its
purpose.
You don't have to use a horn solely as an "indicator of displeasure".
It's not an "editorial comment" button.

It can also be used to send positional information to another
driver, who may be showing signs of not realizing you're present.

I have avoided accidents by using the horn for that purpose.

After you've had your first accident where you *should* have
used the horn, you'll be less timid in reaching for it the
next time. Trust me. Ask me some time, what it cost me
to learn that.

It doesn't matter how many years you haven't used it.

And for God's sake, practice :-) The next time you're at a left
turn, and the driver in front of you is a microsecond slow
hitting the accelerator, pretend displeasure by giving them
a "pip" with the horn. That gives you a hardware test opportunity.
I've had horns rust out on a Honda, and be completely inoperable
before (there's a high and a low horn), so an occasional hardware
test is called for. And the practice, of not being afraid to use
the button, will do you good. It makes you less timid in a
situation where you are attempting to "maintain dignity" when
instead you should be "lettin it rip".

A typical place to use a horn, is in an "old person backing up"
incident. Where they cannot see what they're doing, and a
little "pip" on the horn, will get them to pull back into
their hidey hole. Don't say to yourself "surely they can
see me", when you have an opportunity to leave no doubt.

That's what the horn is for.

Paul
Mayayana
2018-03-21 13:20:15 UTC
Permalink
"Keith Nuttle" <***@sbcglobal.net> wrote

| One of the problems that I see is a simple one. Yes the car will slow
| down and stop in traffic, but what will be used to increase the speed as
| traffic thins.
|
| What will trigger start ups after it stops in stop-and-go traffic?
|
| While in some places the placement of traffic lights is somewhat
| standardized, will an auto car be able to find the traffic light in all
| occasions? What about the stop sign that is mostly hidden by
| vegetation: will it recognize it?
|

I saw an interesting case awhile back: Driverless
cars were having a hard time at 4-way stops. Humans
at those intersections often start and then pause,
or wave each other on. It often requires negotiation.
The driverless car can only calculate when its turn
arrives and is confused by the "erratic" behavior
of cooperation between drivers.

| What about the Segway that was going to revolutionize
| transportation.
|

Except for the man who died wheeling over a cliff,
and the fact that there's no suitable venue for
Segways, I think they worked out pretty well.
You can now go to the nation's capital and terrorize
pedestrians with them, while taking in the sights:

https://www.citysegwaytours.com/washington-dc
Gene Wirchenko
2018-03-21 17:17:52 UTC
Permalink
On Tue, 20 Mar 2018 20:03:17 -0400, Keith Nuttle
<***@sbcglobal.net> wrote:

[snip]
Post by Keith Nuttle
While there is currently great enthusiasm for auto-driving vehicles, I am
afraid the complexity of the system is more than current technology can
handle.
Getting the first bits is easy. Getting it all is much more
difficult.
Post by Keith Nuttle
We have already seen a death where the automated system did not
understand that it was looking under the truck, and the human occupant
was killed. It will be difficult for the software to be designed to
make the reaches that are slightly beyond the facts.
And the driver was not doing his job. Having one's hands on the
steering wheel is rather basic. Many automated devices allow a person
to pay less attention. Where attention is then required to handle an
emergency, trouble may ensue.

When you are in a car and not the driver, do you pay as much
attention to the road? I doubt it. (I do not either.)

I had one case where I was driving a company vehicle and my
supervisor queried me about a turn I had just made. It was a
perfectly legal turn. What my non-driver supervisor was not aware of
is that the road had been redone and there were now two turning lanes.
I was in the new one. He did not have to know or even pay attention,
but I did and had.
Post by Keith Nuttle
One of the problems that I see is a simple one. Yes the car will slow
down and stop in traffic, but what will be used to increase the speed as
traffic thins.
What will trigger start ups after it stops in stop-and-go traffic?
Greater distance between the vehicle and the one in front. Oh,
but what if it is the first car in line?
Post by Keith Nuttle
While in some places the placement of traffic lights is somewhat
standardized, will an auto car be able to find the traffic light in all
occasions? What about the stop sign that is mostly hidden by
vegetation: will it recognize it?
Not a fair question. Some people might not be able to see them.
Post by Keith Nuttle
How will it be able to detect a person directing traffic? Could be a
policeman, but could be a construction worker of a civilian directing
traffic around an accident.
Or a prankster.
Post by Keith Nuttle
The beginning and ending of speed zones will also be a problem. There
are several places I drive where there is a sign as you come into a
small community, but none for 10 miles after you leave the area. Will
the self-driving car know it is supposed to return to the default speed
limit after passing through the community?
Good one. The same applies after a construction zone. Since
some highways have varying speed limits, this is not trivial. What if
a regular speed change sign is within the construction zone? It
should get ignored until the "Thank You Resume Speed" sign whereupon
the regular speed is that new speed (and not actually resuming the
previous speed).
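That precedence rule can be sketched as a small state machine. (A hypothetical illustration only: the event names and the default limit are invented here, not taken from any real vehicle's software.)

```python
def effective_limits(events, default=100):
    """Trace the limit in force through a sequence of roadside sign events.

    events is a list of tuples:
      ("limit", kph)     - a regular speed-limit sign
      ("work_zone", kph) - a construction-zone limit begins
      ("resume",)        - the "Thank You Resume Speed" sign
    A regular sign seen *inside* a work zone does not take effect until
    the zone ends, and it is that new limit (not the pre-zone one) that
    then applies.
    """
    regular = default   # most recent regular limit, posted or pending
    active = default    # limit currently in force
    in_zone = False
    trace = []
    for event in events:
        kind = event[0]
        if kind == "work_zone":
            in_zone, active = True, event[1]
        elif kind == "limit":
            regular = event[1]          # remembered even inside a zone
            if not in_zone:
                active = regular
        elif kind == "resume":
            in_zone, active = False, regular
        trace.append(active)
    return trace
```

With `[("limit", 80), ("work_zone", 60), ("limit", 90), ("resume",)]` this yields `[80, 60, 60, 90]`: the 90 sign inside the zone is held back until the resume sign, which is exactly the behaviour described above.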
Post by Keith Nuttle
Yes, Garmin shows speed limits, but there are times when the posted speed
limits are different from what Garmin shows. How will the auto car know
the difference?
For that matter, I do not know how to handle signs of the form
speed X unless otherwise posted. (I just called the police to find
out.)
Post by Keith Nuttle
What about a brand-new highway that has just opened? Last summer we
drove for 20 miles on a newly opened highway that completely confused
the Garmin. Will a self-driving car be able to handle that situation?
If it follows instructions as given by the system I used on one
rental, it would have multiple opportunities to drive off the road. I
sure did.
Post by Keith Nuttle
These are just few common situations that I have encountered. Until
these are reliably resolved, I will put my auto driving car in the
garage next to my flying car. Remember when we got those about 50 years
ago? What about the Segway that was going to revolutionize
transportation.
The owner of the company went off a cliff in one. Wrong kind of
difference.
Post by Keith Nuttle
There is more to driving than start, stop, and staying in the lane.
Quite.

Sincerely,

Gene Wirchenko
Wolf K
2018-03-21 17:41:07 UTC
Permalink
On 2018-03-21 13:17, Gene Wirchenko wrote:

[snip interesting discussion]
Post by Gene Wirchenko
Good one. The same applies after a construction zone. Since
some highways have varying speed limits, this is not trivial. What if
a regular speed change sign is within the construction zone? It
should get ignored until the "Thank You Resume Speed" sign whereupon
the regular speed is that new speed (and not actually resuming the
previous speed).
[...]

AHA!

The speed limit/change signs will have to be "live", and transmit the
data to the car.

IOW, the (car+driver) is part of a system. When the driver is not human,
the highway has to be adapted to it.

Best,
--
Wolf K
kirkwood40.blogspot.com
"The next conference for the time travel design team will be held two
weeks ago."
Keith Nuttle
2018-03-21 18:03:22 UTC
Permalink
Post by Wolf K
[snip interesting discussion]
      Good one.  The same applies after a construction zone.  Since
some highways have varying speed limits, this is not trivial.  What if
a regular speed change sign is within the construction zone?  It
should get ignored until the "Thank You Resume Speed" sign whereupon
the regular speed is that new speed (and not actually resuming the
previous speed).
[...]
AHA!
The speed limit/change signs will have to be "live", and transmit the
data to the car.
IOW, the (car+driver) is part of a system. When the driver is not human,
the highway has to be adapted to it.
Best,
This will take years and a lot of money, which many government
organizations do not have. Even then, what will happen when the live
speed sign is vandalized, hit by a car, or a deer is standing where the
car cannot communicate with the sign?
--
2018: The year we learn to play the great game of Euchre
Char Jackson
2018-03-21 19:03:43 UTC
Permalink
On Wed, 21 Mar 2018 14:03:22 -0400, Keith Nuttle
Post by Keith Nuttle
Post by Wolf K
[snip interesting discussion]
      Good one.  The same applies after a construction zone.  Since
some highways have varying speed limits, this is not trivial.  What if
a regular speed change sign is within the construction zone?  It
should get ignored until the "Thank You Resume Speed" sign whereupon
the regular speed is that new speed (and not actually resuming the
previous speed).
[...]
AHA!
The speed limit/change signs will have to be "live", and transmit the
data to the car.
IOW, the (car+driver) is part of a system. When the driver is not human,
the highway has to be adapted to it.
Best,
This will take years,
True. Most prognosticators that I've read are saying it will take 2-5
years for fully autonomous cars to be roaming among us. That's years,
but not a lot of years.
Post by Keith Nuttle
and a lot of money, which many government
organizations do not have.
I haven't seen anything that puts a financial burden on governments. So
far, it's private industry that's funding the bulk of the R&D.
Obviously, they hope to gain in the long run. That includes Google,
Uber, Toyota, Nissan, Tesla, and probably a few I'm forgetting.
Post by Keith Nuttle
Even then, what will happen when the live
speed sign is vandalized, hit by a car, or a deer is standing where the
car cannot communicate with the sign?
Exactly the same thing that happens when a human driver encounters such
a thing, except that the computer will analyze the situation and make a
decision much faster than humans can.
J. P. Gilliver (John)
2018-03-21 19:42:50 UTC
Permalink
Post by Char Jackson
On Wed, 21 Mar 2018 14:03:22 -0400, Keith Nuttle
[]
Post by Char Jackson
Post by Keith Nuttle
Post by Wolf K
The speed limit/change signs will have to be "live", and transmit the
data to the car.
[]
Post by Char Jackson
I haven't seen anything that puts a financial burden on governments. So
The above would. Not just the cost of the signs, but their supporting
infrastructure, and its and their maintenance. Even just the cost of the
wire, in some areas, would not be insignificant.
Post by Char Jackson
far, it's private industry that's funding the bulk of the R&D.
Obviously, they hope to gain in the long run. That includes Google,
Uber, Toyota, Nissan, Tesla, and probably a few I'm forgetting.
Post by Keith Nuttle
Even then, what will happen when the live
speed sign is vandalized, hit by a car, or a deer is standing where the
car cannot communicate with the sign?
Exactly the same thing that happens when a human driver encounters such
Good point.
Post by Char Jackson
a thing, except that the computer will analyze the situation and make a
decision much faster than humans can.
--
J. P. Gilliver. UMRA: 1960/<1985 MB++G()AL-IS-Ch++(p)***@T+H+Sh0!:`)DNAf

Of course some of it [television] is bad. But some of everything is bad -
books, music, family ... - Melvyn Bragg, RT 2017/7/1-7
Wolf K
2018-03-21 20:49:47 UTC
Permalink
[Live highway signs] would. Not just the cost of the signs, but their supporting
infrastructure, and its and their maintenance. Even just the cost of the
wire, in some areas, would not be insignificant.
Use solar power and rechargeable batteries (already used in a lot of
situations up here, mostly for temp signs and lights in construction
zones), and low power radio to communicate with the car. Easy-peasy (a
hobbyist could probably whip one up on a Raspberry Pi in a day or less).
The chips can be reprogrammed remotely as needed, lots of choice for that.
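As a toy sketch of the kind of beacon being described (everything here is invented for illustration: the sign ID, the port number, and the JSON payload; a real deployment would at minimum need cryptographically signed messages so a prankster couldn't broadcast fake limits):

```python
import json
import socket

SIGN_ID = "SL-0042"    # hypothetical sign identifier
BEACON_PORT = 50042    # hypothetical port; no standard exists for this

def make_beacon(limit_kph):
    """Build the (toy, unauthenticated) payload for this sign's beacon."""
    return json.dumps({"sign": SIGN_ID, "limit_kph": limit_kph}).encode()

def broadcast(limit_kph):
    """Send one beacon as a UDP broadcast on the local network segment."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(make_beacon(limit_kph), ("255.255.255.255", BEACON_PORT))
```

An on-board receiver would listen on the same port and treat the decoded `limit_kph` as one more input alongside its map data and cameras.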
--
Wolf K
kirkwood40.blogspot.com
"The next conference for the time travel design team will be held two
weeks ago."
Gene Wirchenko
2018-03-21 23:27:41 UTC
Permalink
Post by Wolf K
[Live highway signs] would. Not just the cost of the signs, but their supporting
infrastructure, and its and their maintenance. Even just the cost of the
wire, in some areas, would not be insignificant.
Use solar power and rechargeable batteries (already used in a lot of
situations up here, mostly for temp signs and lights in construction
zones), and low power radio to communicate with the car. Easy-peasy (a
hobbyist could probably whip one up on a Raspberry Pi in a day or less).
The chips can be reprogrammed remotely as needed, lots of choice for that.
Snow-covered signs?

Sincerely,

Gene Wirchenko
Wolf K
2018-03-21 23:54:06 UTC
Permalink
Post by Gene Wirchenko
Post by Wolf K
[Live highway signs] would. Not just the cost of the signs, but their supporting
infrastructure, and its and their maintenance. Even just the cost of the
wire, in some areas, would not be insignificant.
Use solar power and rechargeable batteries (already used in a lot of
situations up here, mostly for temp signs and lights in construction
zones), and low power radio to communicate with the car. Easy-peasy (a
hobbyist could probably whip one up on a Raspberry Pi in a day or less).
The chips can be reprogrammed remotely as needed, lots of choice for that.
Snow-covered signs?
????
--
Wolf K
kirkwood40.blogspot.com
"The next conference for the time travel design team will be held two
weeks ago."
Gene Wirchenko
2018-03-22 18:57:30 UTC
Permalink
Post by Gene Wirchenko
Post by Wolf K
[Live highway signs] would. Not just the cost of the signs, but their supporting
infrastructure, and its and their maintenance. Even just the cost of the
wire, in some areas, would not be insignificant.
Use solar power and rechargeable batteries (already used in a lot of
situations up here, mostly for temp signs and lights in construction
zones), and low power radio to communicate with the car. Easy-peasy (a
hobbyist could probably whip one up on a Raspberry Pi in a day or less).
The chips can be reprogrammed remotely as needed, lots of choice for that.
Snow-covered signs?
????
In British Columbia (and probably some other places), snow can
stick to signs to the point where the signs can not be read.

Sincerely,

Gene Wirchenko
J. P. Gilliver (John)
2018-03-22 12:33:51 UTC
Permalink
Post by Wolf K
[Live highway signs] would. Not just the cost of the signs, but their
supporting infrastructure, and its and their maintenance. Even just
the cost of the wire, in some areas, would not be insignificant.
Use solar power and rechargeable batteries (already used in a lot of
situations up here, mostly for temp signs and lights in construction
zones), and low power radio to communicate with the car. Easy-peasy (a
hobbyist could probably whip one up on a Raspberry Pi in a day or
less). The chips can be reprogrammed remotely as needed, lots of choice
for that.
Char said "I haven't seen anything that puts a financial burden on
governments." Then someone else posited live highway signs, which I said
would impose such a burden.

OK, I take your point that solar/rechargeable would save the cost of the
wire; they'd make the cost of the individual sign assemblies more, but
still the total cost would be less (and there'd be less need to maintain
the supplying infrastructure, though the functioning of the panels and
batteries would still have to be checked occasionally). As for the
remote reprogramming, that's only on where mobile coverage exists - or,
you build quite powerful transmitters, or send someone along the road to
do the reprogramming, or use satellite, all of which cost money.

So "active signs" would cost administrations _something_. We can
argue about how much. FWIW, I'd consider it a worthwhile use of
taxpayers' money, if it was carefully monitored, but it isn't nothing.
(If autonomous vehicles develop enough, then the saving in emergency
service provision might exceed the cost of the signs.)

Such signs could be made much cheaper if they _only_ communicated with
autonomous vehicles - i. e. did _not_ have a display human drivers could
read. I would consider that way of thinking dangerous though, unless
roads _only_ open to autonomous vehicles were being considered - which I
would itself consider a dangerous precedent. I'd accept, _maybe_, a road
where autonomous vehicles were allowed to travel _faster_ over the
sections between such signs.

Though thinking about it, having some roads - or, for that matter, all
roads - where autonomous vehicles were restricted to a _lower_ speed
[than the rest of us] might have wider public acceptance - but the
proponents of AVs wouldn't like that precedent.
--
J. P. Gilliver. UMRA: 1960/<1985 MB++G()AL-IS-Ch++(p)***@T+H+Sh0!:`)DNAf

"Mary Poppins is a junkie" - bumper sticker on Julie Andrews' car in the '60s
Wolf K
2018-03-22 13:13:03 UTC
Permalink
Post by J. P. Gilliver (John)
Post by Wolf K
[Live highway signs] would. Not just the cost of the signs, but their
supporting  infrastructure, and its and their maintenance. Even just
the cost of the  wire, in some areas, would not be insignificant.
Use solar power and rechargeable batteries (already used in a lot of
situations up here, mostly for temp signs and lights in construction
zones), and low power radio to communicate with the car. Easy-peasy (a
hobbyist could probably whip one up on a Raspberry Pi in a day or
less). The chips can be reprogrammed remotely as needed, lots of
choice for that.
Char said "I haven't seen anything that puts a financial burden on
governments." Then someone else posited live highway signs, which I said
would impose such a burden.
[...]

You're complicating things. "Active signage" already exists. On
railways. Reprogramming via satellite also already exists. My receiver
has been reprogrammed at least three times. Solar powered road
information signs also already exist, I see dozens of them every summer
during construction season. Interactive signs also already exist for
humans (have you never encountered a "Your Speed is..." sign?) Etc.

Fact also is that roads (and cars) have been adapted for human drivers,
so additional adaptation for autonomous cars is a minor problem IMO.
That redesign is the main reason that accident and fatality rates have
fallen. It happened so long ago that only historians and ancient people
like me can remember what it was like when roads were good enough for
horse and wagon but not for cars. I can recall sideroads "built" in the
prairie by scraping off the topsoil. When it rained, the road was
undriveable. In dry weather, the dust plume was so dense you had to
drive about a mile behind the car in front.

Bottom line: Autonomous vehicles will come sooner than expected, and IMO
will initially be restricted to more or less fixed routes, both by
regulation and by the owner's common sense. That's why I think the
exercise is ultimately futile. We already have vehicles that travel
fixed routes: LRT, trolleys, subways, railroads. They are pretty nearly
autonomous already. But an autonomous vehicle that can navigate a bush
road is IMO still a long way away. We'll see autonomous tanks first: they
make their own roads.

PS: "taxpayers money" is a red herring. You have only one wallet. TANSTAAFL.
--
Wolf K
kirkwood40.blogspot.com
"The next conference for the time travel design team will be held two
weeks ago."
J. P. Gilliver (John)
2018-03-22 13:51:22 UTC
Permalink
Post by Wolf K
Post by J. P. Gilliver (John)
Post by Wolf K
[Live highway signs] would. Not just the cost of the signs, but
their supporting  infrastructure, and its and their maintenance.
Even just the cost of the  wire, in some areas, would not be insignificant.
Use solar power and rechargeable batteries (already used in a lot of
situations up here, mostly for temp signs and lights in construction
zones), and low power radio to communicate with the car. Easy-peasy
(a hobbyist could probably whip one up on a Raspberry Pi in a day or
less). The chips can be reprogrammed remotely as needed, lots of
choice for that.
Char said "I haven't seen anything that puts a financial burden on
governments." Then someone else posited live highway signs, which I
said would impose such a burden.
[...]
You're complicating things. "Active signage" already exists. On
I never said it didn't. Adding it where it doesn't currently, however,
is additional cost. (I wasn't considering development cost - that's
mostly amortized already.)
Post by Wolf K
railways. Reprogramming via satellite also already exists. My receiver
has been reprogrammed at least three times. Solar powered road
information signs also already exist, I see dozens of them every summer
during construction season. Interactive signs also already exist for
humans (have you never encountered a "Your Speed is..." sign?) Etc.
Though I believe most of those (in UK anyway) are not in any way
networked, i. e. they don't generate speeding tickets.
Post by Wolf K
Fact also is that roads (and cars) have been adapted for human drivers,
Agreed ...
Post by Wolf K
so additional adaptation for autonomous cars is a minor problem IMO.
... but not agreed necessarily. I'd accept that any _new_ road being
built could be made autonomous-compatible (for want of a better
word/phrase) at minimal extra cost, but modifying _existing_ ones (or
de-modifying, if you like) would cost something. (_How_ much is
arguable; might be very little.)
[]
Post by Wolf K
Bottom line: Autonomous vehicles will come sooner than expected, and
I think they're here, more or less. Still with a "pilot" except in some
trials, but he's there much like airliner pilots are.
Post by Wolf K
IMO will initially be restricted to more or less fixed routes, both by
regulation and by the owner's common sense. That's why I think the
exercise is ultimately futile. We already have vehicles that travel
fixed routes: LRT, trolleys, subways, railroads. They are pretty nearly
Completely in some cases: we have driverless pods on the (London)
docklands light railway, and I'm pretty certain in some other places
such as airports; I presume ditto in USA.
Post by Wolf K
autonomous already. But an autonomous vehicle that can navigate a bush
road IMO still a long way away. We'll see autonomous tanks first: they
make their own roads.
PS: "taxpayers money" is a red herring. You have only one wallet. TANSTAAFL.
True, but different people have different opinions about how much
control they should have over what it gets spent on. For example, it is
my understanding that most USAnians choose to arrange their healthcare
themselves. Here, I can see some local authorities (or national
governments) arguing that extra expenditure on - say -
autonomous-interacting streetsigns can be offset against less
expenditure being required on the national health service (because of
fewer accidents); I could imagine some states/jurisdictions in the USA
(e. g. ones with a lot of roads, where most of the users of those roads
are just passing through) not wanting to spend on such matters because
the saving on healthcare would not benefit them, but the road users who
are not taxpayers in that area.

On the whole, I like technology, and would be all in favour of enhanced
and automated street signage - and, though I don't think it is actually
proven yet, I think autonomous vehicles will on the whole be a good
thing. (Though I'd be wary of any moves that _dis_advantage users of
non-such vehicles - in the same way as I'm not happy when various bodies
assume that all citizens have good broadband, or a smartphone, and make
changes that disadvantage those who don't.)
--
J. P. Gilliver. UMRA: 1960/<1985 MB++G()AL-IS-Ch++(p)***@T+H+Sh0!:`)DNAf

A biochemist walks into a student bar and says to the barman: "I'd like a pint
of adenosine triphosphate, please." "Certainly," says the barman, "that'll be
ATP." (Quoted in) The Independent, 2013-7-13
Char Jackson
2018-03-22 16:42:15 UTC
Permalink
On Thu, 22 Mar 2018 13:51:22 +0000, "J. P. Gilliver (John)"
Post by J. P. Gilliver (John)
Post by Wolf K
Post by J. P. Gilliver (John)
Post by Wolf K
[Live highway signs] would. Not just the cost of the signs, but
their supporting  infrastructure, and its and their maintenance.
Even just the cost of the  wire, in some areas, would not be insignificant.
Use solar power and rechargeable batteries (already used in a lot of
situations up here, mostly for temp signs and lights in construction
zones), and low power radio to communicate with the car. Easy-peasy
(a hobbyist could probably whip one up on a Raspberry Pi in a day or
less). The chips can be reprogrammed remotely as needed, lots of
choice for that.
Char said "I haven't seen anything that puts a financial burden on
governments." Then someone else posited live highway signs, which I
said would impose such a burden.
[...]
You're complicating things. "Active signage" already exists. On
I never said it didn't. Adding it where it doesn't currently, however,
is additional cost. (I wasn't considering development cost - that's
mostly amortized already.)
Post by Wolf K
railways. Reprogramming via satellite also already exists. My receiver
has been reprogrammed at least three times. Solar powered road
information signs also already exist, I see dozens of them every summer
during construction season. Interactive signs also already exist for
humans (have you never encountered a "Your Speed is..." sign?) Etc.
Though I believe most of those (in UK anyway) are not in any way
networked, i. e. they don't generate speeding tickets.
Post by Wolf K
Fact also is that roads (and cars) have been adapted for human drivers,
Agreed ...
Post by Wolf K
so additional adaptation for autonomous cars is a minor problem IMO.
... but not agreed necessarily. I'd accept that any _new_ road being
built could be made autonomous-compatible (for want of a better
word/phrase) at minimal extra cost, but modifying _existing_ ones (or
de-modifying, if you like) would cost something. (_How_ much is
arguable; might be very little.)
I'm not sure why you're pressing the point of expenditures being
required when the folks who are actively doing testing repeatedly say
they neither need nor expect anything different from what we have now.
Paul
2018-03-22 18:29:12 UTC
Permalink
Post by Char Jackson
On Thu, 22 Mar 2018 13:51:22 +0000, "J. P. Gilliver (John)"
Post by J. P. Gilliver (John)
Post by Wolf K
Post by J. P. Gilliver (John)
Post by Wolf K
[Live highway signs] would. Not just the cost of the signs, but
their supporting infrastructure, and its and their maintenance.
Even just the cost of the wire, in some areas, would not be insignificant.
Use solar power and rechargeable batteries (already used in a lot of
situations up here, mostly for temp signs and lights in construction
zones), and low power radio to communicate with the car. Easy-peasy
(a hobbyist could probably whip one up on a Raspberry Pi in a day or
less). The chips can be reprogrammed remotely as needed, lots of
choice for that.
Char said "I haven't seen anything that puts a financial burden on
governments." Then someone else posited live highway signs, which I
said would impose such a burden.
[...]
You're complicating things. "Active signage" already exists. On
I never said it didn't. Adding it where it doesn't currently, however,
is additional cost. (I wasn't considering development cost - that's
mostly amortized already.)
Post by Wolf K
railways. Reprogramming via satellite also already exists. My receiver
has been reprogrammed at least three times. Solar powered road
information signs also already exist, I see dozens of them every summer
during construction season. Interactive signs also already exist for
humans (have you never encountered a "Your Speed is..." sign?) Etc.
Though I believe most of those (in UK anyway) are not in any way
networked, i. e. they don't generate speeding tickets.
Post by Wolf K
Fact also is that roads (and cars) have been adapted for human drivers,
Agreed ...
Post by Wolf K
so additional adaptation for autonomous cars is a minor problem IMO.
... but not agreed necessarily. I'd accept that any _new_ road being
built could be made autonomous-compatible (for want of a better
word/phrase) at minimal extra cost, but modifying _existing_ ones (or
de-modifying, if you like) would cost something. (_How_ much is
arguable; might be very little.)
I'm not sure why you're pressing the point of expenditures being
required when the folks who are actively doing testing repeatedly say
they neither need nor expect anything different from what we have now.
These things have features such as GPS and Google Maps.

Who needs road signs ?

Of course, a GPS signal isn't always available.

An autonomous car isn't a Roomba.


Paul
Char Jackson
2018-03-22 04:30:22 UTC
Permalink
On Wed, 21 Mar 2018 19:42:50 +0000, "J. P. Gilliver (John)"
Post by J. P. Gilliver (John)
Post by Char Jackson
On Wed, 21 Mar 2018 14:03:22 -0400, Keith Nuttle
[]
Post by Char Jackson
Post by Wolf K
The speed limit/change signs will have to be "live", and transmit the
data to the car.
[]
Post by Char Jackson
I haven't seen anything that puts a financial burden on governments. So
The above would. Not just the cost of the signs, but their supporting
infrastructure, and its and their maintenance. Even just the cost of the
wire, in some areas, would not be insignificant.
"The above would", but no one has proposed such a system, so it's not a
valid argument at this time.
Gene Wirchenko
2018-03-21 23:27:07 UTC
Permalink
On Wed, 21 Mar 2018 14:03:43 -0500, Char Jackson <***@none.invalid>
wrote:

[snip]
Post by Char Jackson
Exactly the same thing that happens when a human driver encounters such
a thing, except that the computer will analyze the situation and make a
decision much faster than humans can.
Will the decision be correct?

Sincerely,

Gene Wirchenko
Char Jackson
2018-03-22 04:49:22 UTC
Permalink
Post by Gene Wirchenko
[snip]
Post by Char Jackson
Exactly the same thing that happens when a human driver encounters such
a thing, except that the computer will analyze the situation and make a
decision much faster than humans can.
Will the decision be correct?
You can ask that question regardless of who or what is "behind the
wheel". Here in early 2018, I think the balance tips in favor of the
computer. As the months and next couple of years go by, I expect it to
tip overwhelmingly in that direction.
Paul
2018-03-22 12:56:16 UTC
Permalink
Post by Char Jackson
Post by Gene Wirchenko
[snip]
Post by Char Jackson
Exactly the same thing that happens when a human driver encounters such
a thing, except that the computer will analyze the situation and make a
decision much faster than humans can.
Will the decision be correct?
You can ask that question regardless of who or what is "behind the
wheel". Here in early 2018, I think the balance tips in favor of the
computer. As the months and next couple of years go by, I expect it to
tip overwhelmingly in that direction.
The street layout in the accident scene.


This is video from in-car. Released by the police. The player wrapper
on the Verge seems to work, while trying the twitter one using that
URL, didn't.

https://www.theverge.com/2018/3/21/17149958/tempe-police-fatal-crash-self-driving-uber-video-released

pic.twitter.com/2dVP72TziQ

The woman hadn't just stepped off the median.

She was out in the middle of the fucking street when hit.

This wasn't "one step off median, clipped by car in left lane".

She was moving.

She has no retro-reflectors on the two bike wheels.

She has white sneakers.

The "Safety Driver" is looking down at something
in his lap, at the time of collision. Playing
with a Smart Phone ?

*******

Sorry, but in this case, the computer loses.

After reviewing a number of research projects and company
claims on websites, this kind of case ("difficult conditions")
is handled by correlation. Multiple sensors combine under
noisy conditions, a "classifier" (which normally works pretty
well in demos), draws a box around "threats".

In this case, the fact that the car doesn't react at all suggests
a portion of the self-driving system was "down". A sensor such
as radar can't "see" stationary objects. She was moving. The
radar should have seen an object at 3 MPH, which is not 0 MPH
like the scenery. She was running, but she wasn't a track star.
There is delta-V between her and the scenery.
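
The delta-V argument above can be sketched in a few lines of Python: stationary clutter returns near-zero radial speed, so even a slow pedestrian stands out. The 1 MPH clutter gate and the sample returns are illustrative assumptions, not anything from the actual Uber stack.

```python
# Sketch: separating a slow-moving pedestrian from stationary clutter
# by radial velocity (delta-V). Threshold and sample data are
# illustrative assumptions only.

MPH_TO_MS = 0.44704

def moving_targets(detections, min_speed_mph=1.0):
    """Keep detections whose radial speed exceeds a clutter gate.

    detections: list of (range_m, radial_speed_m_s) tuples.
    """
    threshold = min_speed_mph * MPH_TO_MS
    return [d for d in detections if abs(d[1]) > threshold]

# Guardrails and parked cars return ~0 m/s; a pedestrian walking a
# bike at ~3 MPH returns ~1.34 m/s radially.
scene = [(40.0, 0.0), (55.0, 0.02), (48.0, 3 * MPH_TO_MS)]
print(moving_targets(scene))  # only the ~1.34 m/s target survives
```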

The Lidars vary. In a student Electrical Engineering class project,
they use a cheap Lidar, with only very few data points per scan.
The correlation between that Lidar and a camera in daylight, picked
out a number of humans correctly (standing next to cars, on the
side of the street). But it drew a box around the side of a couple
of cars, where no human was standing. That sample system (not intended
for commercial usage) had false positives. But, it was also
data starved.

The Lidar on these cars, is 64 lasers (in infrared), collecting
a million data points per second. That scene would have been
lit up like a Christmas tree. But, Lidar has limited range.
(They're allowed to use more optical power, if the scanning lasers
run at 1550nm, which is more "eye safe" for the public on the street.)

For a non-reflective target (like the victim), the "range" is on
the order of 50 meters (150 feet). Cars can be detected from a longer
distance. The scanning assembly doesn't rotate that fast, so there's
potential response latency.
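
A back-of-envelope check of those numbers: with ballpark HDL-64-class figures (1M points/s, 10 Hz spin, 64 beams, ~27 degrees vertical field of view - assumptions, not the actual Uber unit's specs), a pedestrian-sized target at 50 meters should still catch a handful of returns per revolution.

```python
import math

# Rough estimate of lidar returns on a pedestrian-sized target.
# Sensor figures are ballpark HDL-64-class assumptions.

def returns_on_target(width_m, height_m, range_m,
                      points_per_s=1_000_000, beams=64,
                      spin_hz=10, vert_fov_deg=26.9):
    az_samples_per_rev = points_per_s / (beams * spin_hz)
    az_res_deg = 360.0 / az_samples_per_rev          # azimuth step
    vert_res_deg = vert_fov_deg / (beams - 1)        # beam spacing
    target_az_deg = math.degrees(2 * math.atan(width_m / (2 * range_m)))
    target_el_deg = math.degrees(2 * math.atan(height_m / (2 * range_m)))
    cols = max(1, int(target_az_deg / az_res_deg))
    rows = max(1, int(target_el_deg / vert_res_deg))
    return cols * rows

# ~0.5 m wide, 1.7 m tall pedestrian at 50 m: several returns per
# spin, and ten spins per second.
print(returns_on_target(0.5, 1.7, 50.0))
```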

There's absolutely no sign in that video, that the Lidar (by
itself) saw anything. If the Lidar had info, and the vehicle camera
had picked her up on visual, the car would have swerved or braked
well before getting to her. It might still have hit her, but
not at 45MPH.

You would combine the Lidar "cloud of dots" ranging, with a second
sensor. In the video, the recording camera (which may not be the
camera used for driving), shows there wouldn't be a lot of visual.
There should be good detection at 50 meters (150 feet) for the
Lidar. The radar should have picked this up (as she was running).

The classifier used in the Electrical Engineering student project,
is able to pick up stationary people standing next to cars (so the
car functions as "interference" for the test).
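
The correlation step described above can be sketched as "late fusion": a threat is confirmed only when a lidar cluster and a camera detection land in the same place. Boxes here are (x_min, y_min, x_max, y_max) in a shared image frame, and the 0.3 overlap gate is an arbitrary illustrative choice, not a value from any real system.

```python
# Minimal late-fusion sketch: confirm lidar clusters against camera
# detections by intersection-over-union (IoU). Gate value is an
# illustrative assumption.

def iou(a, b):
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def confirmed_threats(lidar_boxes, camera_boxes, gate=0.3):
    """Keep lidar clusters that overlap some camera detection."""
    return [l for l in lidar_boxes
            if any(iou(l, c) >= gate for c in camera_boxes)]

lidar = [(100, 50, 140, 150), (300, 60, 340, 160)]  # two clusters
camera = [(105, 55, 138, 148)]                      # one visual hit
print(confirmed_threats(lidar, camera))  # only the first box survives
```

A system built this way trades false positives for false negatives: a failed camera (or a dark scene) silently vetoes real lidar hits, which is one way "difficult conditions" turn into no reaction at all.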

I can only conclude from the info so far, that the Uber car tech
failed to meet objectives. And in clear dry (night time)
conditions. No raging rain storm. No snow storm. No forest
fire smoke. No fog. Just a clear night. Um, yikes!

If there was a tech failure, you'd think there would be
indicators in the cabin flashing or speaking, indicating
a portion of the system was non-functional. For a computer
crash, you could use watchdogs or majority voting. I have no idea
how redundancy is handled in these self-driving cars. Do they
have any ?
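
For what it's worth, the two mechanisms mentioned are simple to sketch: a 2-of-3 majority vote across redundant channels, and a heartbeat watchdog that flags a subsystem as dead when it stops checking in. This is purely illustrative of the redundancy question, not any vendor's design.

```python
import time

# Toy redundancy sketch: 2-of-3 majority voting plus a heartbeat
# watchdog. Illustrative only.

def majority_vote(readings):
    """Return the value at least two of three channels agree on,
    else None (channels disagree: treat the subsystem as faulted)."""
    for v in readings:
        if readings.count(v) >= 2:
            return v
    return None

class Watchdog:
    def __init__(self, timeout_s):
        self.timeout_s = timeout_s
        self.last_kick = time.monotonic()

    def kick(self):
        """Called by the monitored subsystem on every healthy cycle."""
        self.last_kick = time.monotonic()

    def expired(self):
        """True when the subsystem has missed its deadline."""
        return time.monotonic() - self.last_kick > self.timeout_s

print(majority_vote(["clear", "clear", "object"]))  # -> clear
print(majority_vote(["clear", "object", "fault"]))  # -> None
```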

*******

This is a Tesla Model S threat test carried out by a car owner.

The system worked. But you can tell from the test results, that
there is a certain degree of separation, between camera detected
events and radar detected events. The system doesn't seem to
be combining all the sensors in a classifier sense before acting.
It might be using a slightly simpler method. Some threats generate
a notification, others the car just seems to come to a stop.
Or, the car keeps a distance from the human until they clear off
the road surface. Because the road is narrow, the car is probably
not allowed to violate the center line in the road and drive around.

https://electrek.co/2016/11/15/tesla-autopilot-pedestrian-detection-v8-renders-humans/

The Model S is not nearly as well equipped as the Uber. And
yet Musk thinks some day it will achieve the higher Level
ratings of the other cars. "It's just a software update."

Paul
Mayayana
2018-03-22 15:31:18 UTC
Permalink
"Paul" <***@needed.invalid> wrote

| This is video from in-car. Released by the police. The player wrapper
| on the Verge seems to work, while trying the twitter one using that
| URL, didn't.
|

As I linked below, here's the original on
youtube that all these parasites are linking to:

http://youtu.be/XtTB8hTgHbM

| The woman hadn't just stepped off the median.
|
| She was out in the middle of the fucking street when hit.
|

Yes. And there were almost 2 seconds of visibility.
Also, before she comes into the light there's a reflective
flash on the left, followed by visibility of her sneakers.
This wasn't an unusual situation for a human driver
to be able to stop or swerve.

This also highlights a problem with having a human in
a robot car. A driver will react in a split second. A human
in a robot car will probably take at least 1-2 seconds to
judge whether they need to take over. By then it's too late.
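
That "almost 2 seconds" claim can be checked roughly: at 45 MPH the car covers about 38 m in 1.9 s, and total stopping distance depends heavily on reaction time. Assuming dry-pavement braking at 0.8 g (an assumption, not a measurement from the crash):

```python
# Rough check: distance available in ~1.9 s at 45 MPH versus stopping
# distance for different reaction times. The 0.8 g deceleration is an
# illustrative dry-pavement assumption.

G = 9.81
MPH_TO_MS = 0.44704

def stopping_distance_m(speed_mph, reaction_s, decel_g=0.8):
    """Reaction-time travel plus constant-deceleration braking."""
    v = speed_mph * MPH_TO_MS
    return reaction_s * v + v * v / (2 * decel_g * G)

v = 45 * MPH_TO_MS            # ~20.1 m/s
available = 1.9 * v           # ~38 m of visible travel
for reaction in (0.5, 1.0, 1.5):
    d = stopping_distance_m(45, reaction)
    print(f"reaction {reaction}s: need {d:.0f} m, have {available:.0f} m")
```

On these assumptions, only a sub-second reaction stops fully in time; a 1-2 second "take over" delay does not, though even late braking would have shed much of the 45 MPH.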
Paul
2018-03-22 16:05:00 UTC
Permalink
Post by Mayayana
| This is video from in-car. Released by the police. The player wrapper
| on the Verge seems to work, while trying the twitter one using that
| URL, didn't.
|
As I linked below, here's the original on
http://youtu.be/XtTB8hTgHbM
| The woman hadn't just stepped off the median.
|
| She was out in the middle of the fucking street when hit.
|
Yes. And there were almost 2 seconds of visibility.
Also, before she comes into the light there's a reflective
flash on the left, followed by visibility of her sneakers.
This wasn't an unusual situation for a human driver
to be able to stop or swerve.
This also highlights a problem with having a human in
a robot car. A driver will react in a split second. A human
in a robot car will probably take at least 1-2 seconds to
judge whether they need to take over. By then it's too late.
Yes. The "Supervision Paradox".

That Safety Dude might as well have been stretched out
asleep, in the back seat.

Paul
Gene Wirchenko
2018-03-22 19:02:22 UTC
Permalink
Post by Char Jackson
Post by Gene Wirchenko
[snip]
Post by Char Jackson
Exactly the same thing that happens when a human driver encounters such
a thing, except that the computer will analyze the situation and make a
decision much faster than humans can.
Will the decision be correct?
You can ask that question regardless of who or what is "behind the
wheel". Here in early 2018, I think the balance tips in favor of the
computer. As the months and next couple of years go by, I expect it to
tip overwhelmingly in that direction.
Of course, but much of the speed issue is a red herring. Is the
decision correct, and is it fast enough?

Sincerely,

Gene Wirchenko
Gene Wirchenko
2018-03-21 23:26:09 UTC
Permalink
On Wed, 21 Mar 2018 14:03:22 -0400, Keith Nuttle
<***@sbcglobal.net> wrote:

[snip]
Post by Keith Nuttle
This will take years, and a lot of money which many government
organizations do not have. Even then, what will happen when the live
speed sign is vandalized, hit by a car, or a deer is standing where the
car can not communicate with the sign?
True. It is bad enough when snow covers the sign so it can not
be read. I have encountered this many times in British Columbia,
Canada.

Sincerely,

Gene Wirchenko
Char Jackson
2018-03-22 04:40:04 UTC
Permalink
Post by Char Jackson
On Wed, 21 Mar 2018 14:03:22 -0400, Keith Nuttle
[snip]
Post by Keith Nuttle
This will take years, and a lot of money which many government
organizations do not have. Even then, what will happen when the live
speed sign is vandalized, hit by a car, or a deer is standing where the
car can not communicate with the sign?
True. It is bad enough when snow covers the sign so it can not
be read. I have encountered this many times in British Columbia,
Canada.
And how did you deal with it, and why do you think the car won't deal
with it in a similar fashion?

I've been in the same situation many times and I can't say that it's
ever been a serious problem. Signs can be covered by snow, obscured by
trees or other vehicles, or the sun can be directly behind the sign,
etc. We all deal with similar situations all the time. Software is being
adapted to do likewise.
Gene Wirchenko
2018-03-22 19:09:47 UTC
Permalink
Post by Char Jackson
Post by Char Jackson
On Wed, 21 Mar 2018 14:03:22 -0400, Keith Nuttle
[snip]
Post by Keith Nuttle
This will take years, and a lot of money which many government
organizations do not have. Even then, what will happen when the live
speed sign is vandalized, hit by a car, or a deer is standing where the
car can not communicate with the sign?
True. It is bad enough when snow covers the sign so it can not
be read. I have encountered this many times in British Columbia,
Canada.
And how did you deal with it, and why do you think the car won't deal
with it in a similar fashion?
I knew the route and was expecting the sign. The road was also
steepening and getting curvy. (In case it matters, coming from
Keremeos, BC to the junction of highways 3 and 97 south of Penticton.)

I can not know that the car will have been there before or
otherwise know about it.

Had I not been there before, I might have had an accident. So
could a self-driving car.
Post by Char Jackson
I've been in the same situation many times and I can't say that it's
ever been a serious problem. Signs can be covered by snow, obscured by
trees or other vehicles, or the sun can be directly behind the sign,
etc. We all deal with similar situations all the time. Software is being
adapted to do likewise.
We do. We can not rely on what we can not perceive so we take
other factors into account. It is not as safe -- after all, if it
were, the sign would not be necessary -- but we do what we can.

What will a self-driving car do?

Sincerely,

Gene Wirchenko
Char Jackson
2018-03-21 19:36:39 UTC
Permalink
Post by Gene Wirchenko
On Tue, 20 Mar 2018 20:03:17 -0400, Keith Nuttle
[snip]
Post by Keith Nuttle
While there is currently great enthusiasm for auto driving vehicles I am
afraid the complexity of the system is more than current technology can
handle.
Getting the first bits is easy. Getting it all is much more
difficult.
Post by Keith Nuttle
We have already seen a death where the automated system did not
understand that it was looking under the truck, and the human occupant
was killed. It will be difficult for the software to be designed to
make the reaches that are slightly beyond the facts.
And the driver was not doing his job. Having one's hands on
steering wheel is rather basic. Many automated devices allow a person
to pay less attention. Where attention is then required to handle an
emergency, trouble may ensue.
When you are in a car and not the driver, do you pay as much
attention to the road? I doubt it. (I do not either.)
I had one case where I was driving a company vehicle and my
supervisor queried me about a turn I had just made. It was a
perfectly legal turn. What my non-driver supervisor was not aware of
is that the road had been redone and there were now two turning lanes.
I was in the new one. He did not have to know or even pay attention,
but I did and had.
Post by Keith Nuttle
One of the problems that I see is a simple one. Yes the car will slow
down and stop in traffic, but what will be used to increase the speed as
traffic thins.
What will trigger start ups after it stops in stop-and-go traffic?
Greater distance between the vehicle and the one in front. Oh,
but what if it is the first car in line?
Exactly the same as you do now. You evaluate the information available
to you and respond accordingly.
Post by Gene Wirchenko
Post by Keith Nuttle
The beginning and ending of speed zones will also be a problem. There
are several places I drive where there is a sign as you come into a
small community, but none for 10 miles after you leave the area. Will
the self-driving car know it is supposed to return to the default speed
limit after passing through the community?
Good one. The same applies after a construction zone. Since
some highways have varying speed limits, this is not trivial. What if
a regular speed change sign is within the construction zone? It
should get ignored until the "Thank You Resume Speed" sign whereupon
the regular speed is that new speed (and not actually resuming the
previous speed).
From what I've read, all current testing has abandoned the notion of
driving according to pre-loaded maps, including pre-loaded speed zones.
Think Garmin, for example. Instead, they've moved to an adaptive system
that is closer to how humans do it: pay attention to the surroundings,
including informational road signs, and respond accordingly.
Post by Gene Wirchenko
Post by Keith Nuttle
Yes, Garmin shows speed limits, but there are times when the posted speed
limits are different from what Garmin shows. How will the auto car know
the difference?
See above. The auto car will read the road signs.
Post by Gene Wirchenko
For that matter, I do not know how to handle signs of the form
speed X unless otherwise posted. (I just called the police to find
out.)
Unless you're a brand new teenage driver, I'd say that's a troubling
admission. Most towns in the US seem to use that system, so surely
you've seen it numerous times.
Post by Gene Wirchenko
Post by Keith Nuttle
What about a brand-new highway that has just opened? Last summer we
drove for 20 miles on a newly opened highway that completely confused
the Garmin. Will a self-driving car be able to handle that situation?
If it follows instructions as given by the system I used on one
rental, it would have multiple opportunities to drive off the road. I
sure did.
They no longer use preloaded maps. Instead, they use sensors to drive
where the road goes, just like us humans do.
Gene Wirchenko
2018-03-21 23:48:37 UTC
Permalink
Post by Char Jackson
Post by Gene Wirchenko
On Tue, 20 Mar 2018 20:03:17 -0400, Keith Nuttle
[snip]
Post by Char Jackson
Post by Gene Wirchenko
Post by Keith Nuttle
What will trigger start ups after it stops in stop-and-go traffic?
Greater distance between the vehicle and the one in front. Oh,
but what if it is the first car in line?
Exactly the same as you do now. You evaluate the information available
to you and respond accordingly.
1) The discussion was about self-driving cars, not human drivers.

2) What a non-answer. Of course, you evaluate the information
available and respond accordingly. That does not state what would be
done in that case.

[snip]
Post by Char Jackson
Post by Gene Wirchenko
For that matter, I do not know how to handle signs of the form
speed X unless otherwise posted. (I just called the police to find
out.)
Unless you're a brand new teenage driver, I'd say that's a troubling
admission. Most towns in the US seem to use that system, so surely
you've seen it numerous times.
I have seen it, but I have never understood it.

I just got off the phone with a police officer who said that he
had worked for ten years in traffic, and he said he did not know the
significance of the "unless otherwise posted".

Think about it. When you see the next black-and-white speed
sign, it cancels the previous one. This is true even if there is no
"unless otherwise posted", so what does "unless otherwise posted" add?

[snip]

Sincerely,

Gene Wirchenko
Char Jackson
2018-03-22 05:17:49 UTC
Permalink
Post by Gene Wirchenko
Post by Char Jackson
Post by Gene Wirchenko
For that matter, I do not know how to handle signs of the form
speed X unless otherwise posted. (I just called the police to find
out.)
Unless you're a brand new teenage driver, I'd say that's a troubling
admission. Most towns in the US seem to use that system, so surely
you've seen it numerous times.
I have seen it, but I have never understood it.
As I said above, I find that quite troubling. I wonder what other basic
information you haven't understood. Do you have any questions about turn
signals, or differences between Stop and Yield signs? Are you a licensed
driver? If not, then all is forgiven.
Post by Gene Wirchenko
I just got off the phone with a police officer who said that he
had worked for ten years in traffic, and he said he did not know the
significance of the "unless otherwise posted".
It's the law of averages or something. Sometimes ignorance only finds
more ignorance, and the needle isn't moved. :-)

As for the officer, well, he's a disappointment, but officers clearly
don't know everything, as you've shown. How does a guy work in traffic
for ten years and not know the basics? I wonder if he had ever written a
ticket for excessive speed. One that held up in court, I mean.
Post by Gene Wirchenko
Think about it. When you see the next black-and-white speed
sign, it cancels the previous one. This is true even if there is no
"unless otherwise posted", so what does "unless otherwise posted" add?
I thought about it 50 years ago when I first encountered it as a new
driver and it immediately made sense. It relieves the municipality of
putting speed signs on each and every street. IOW, it allows a
municipality to have a speed limit everywhere without the expense of
having speed limit signs everywhere. They only need to put up speed
signs on main roads and streets where the speed limit is not the
default. How is that not obvious?
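
The rule described above is just a default lookup: a posted sign overrides the blanket limit for the road you are on. The numbers below are the North-Carolina-style defaults quoted later in the thread, used here as an example, not as any jurisdiction's actual law.

```python
# "Unless otherwise posted" as a default lookup: a posted sign wins;
# otherwise the blanket limit for the road type applies. Values are
# example defaults only.

DEFAULT_MPH = {"city": 35, "rural": 55, "interstate": 70}

def speed_limit(road_type, posted_mph=None):
    """Posted sign overrides; otherwise fall back to the default."""
    if posted_mph is not None:
        return posted_mph
    return DEFAULT_MPH[road_type]

print(speed_limit("city"))      # no sign: blanket 35 applies
print(speed_limit("city", 25))  # posted 25 overrides the default
```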

Perhaps written driving tests should address basic things such as this.
J. P. Gilliver (John)
2018-03-22 12:54:30 UTC
Permalink
[]
Post by Char Jackson
Post by Gene Wirchenko
Think about it. When you see the next black-and-white speed
sign, it cancels the previous one. This is true even if there is no
"unless otherwise posted", so what does "unless otherwise posted" add?
I thought about it 50 years ago when I first encountered it as a new
driver and it immediately made sense. It relieves the municipality of
putting speed signs on each and every street. IOW, it allows a
municipality to have a speed limit everywhere without the expense of
having speed limit signs everywhere. They only need to put up speed
signs on main roads and streets where the speed limit is not the
default. How is that not obvious?
Perhaps written driving tests should address basic things such as this.
I don't know if things are different in the US (or some states), but
here (UK), _all_ speed limit signs _do_ apply until the next one is
encountered. So I too don't see how these "unless otherwise posted"
signs work, or at least save anything - unless the ones posting
otherwise say things like "for next quarter mile" or something. Do they?

To give an example: you pass a "30 mph unless otherwise posted" sign.
(The UK normal rating for built-up areas.) You then enter something with
a different limit: you turn into a residential area which has a "20 mph"
sign at its entrance, or maybe come onto a main road which has a "40
mph" sign. When you come out of that (reach the other end of the
residential area, or of the fast section), there must be some indication
that you have reached the (opposite) border of the modified area; in
many cases, this indication may well be on the same post that carries
the 20 or 40 limit for those entering that area from the other
direction.

So unless the modifiers _do_ have a range restriction ("20 mph within
this estate", "40 mph for next 1/4 mile"), I can't see there's any
saving to be made in signs. And here, at least, they don't _really_ like
extra text on speed limit signs, as it's considered to be dangerous
(distracts drivers for longer than a simple limit sign), and also it's
not necessarily obvious when the far end of the modifier has been
reached.

[If this _is_ how they work where you are, are they ever nested - e. g.
a 30 limit, within which is an area that is a 20 (say residential), and
within that a 10 (say passing a school)? If so, then a driver has to
stack in his mind what the limits are (so when he gets past the school
he remembers he can go back up to 20, then 30 when he gets out of the
estate).]
--
J. P. Gilliver. UMRA: 1960/<1985 MB++G()AL-IS-Ch++(p)***@T+H+Sh0!:`)DNAf

"Mary Poppins is a junkie" - bumper sticker on Julie Andrews' car in the '60s
Wolf K
2018-03-22 14:22:35 UTC
Permalink
Post by J. P. Gilliver (John)
[]
Post by Char Jackson
    Think about it.  When you see the next black-and-white speed
sign, it cancels the previous one.  This is true even if there is no
"unless otherwise posted", so what does "unless otherwise posted" add?
I thought about it 50 years ago when I first encountered it as a new
driver and it immediately made sense. It relieves the municipality of
putting speed signs on each and every street. IOW, it allows a
municipality to have a speed limit everywhere without the expense of
having speed limit signs everywhere. They only need to put up speed
signs on main roads and streets where the speed limit is not the
default. How is that not obvious?
Perhaps written driving tests should address basic things such as this.
I don't know if things are different in the US (or some states), but
here (UK), _all_ speed limit signs _do_ apply until the next one is
encountered. So I too don't see how these "unless otherwise posted"
signs work, or at least save anything - unless the ones posting
otherwise say things like "for next quarter mile" or something. Do they?
[...]
It's just a reminder that the speed limit applies where there are no
signs, such as on side streets.
--
Wolf K
kirkwood40.blogspot.com
"The next conference for the time travel design team will be held two
weeks ago."
J. P. Gilliver (John)
2018-03-22 16:04:20 UTC
Permalink
Post by Wolf K
Post by J. P. Gilliver (John)
[]
Post by Char Jackson
    Think about it.  When you see the next black-and-white speed
sign, it cancels the previous one.  This is true even if there is no
"unless otherwise posted", so what does "unless otherwise posted" add?
I thought about it 50 years ago when I first encountered it as a new
driver and it immediately made sense. It relieves the municipality of
putting speed signs on each and every street. IOW, it allows a
municipality to have a speed limit everywhere without the expense of
having speed limit signs everywhere. They only need to put up speed
signs on main roads and streets where the speed limit is not the
default. How is that not obvious?
Perhaps written driving tests should address basic things such as this.
I don't know if things are different in the US (or some states), but
here (UK), _all_ speed limit signs _do_ apply until the next one is
encountered. So I too don't see how these "unless otherwise posted"
signs work, or at least save anything - unless the ones posting
otherwise say things like "for next quarter mile" or something. Do they?
[...]
It's just a reminder that the speed limit applies where there are no
signs, such as on side streets.
So are you saying they're not actually necessary anyway? (That's what
"reminder" would mean to me, but I can see there might be other
interpretations.)

In the UK, _all_ speed limit signs apply from the point you pass them
until the next one you pass, which cancels the last one and sets a new
one, or in some cases (white circle with black diagonal bar) means
"national speed limits apply", which then still applies until the next
sign. [The national speed limits are - all in m.p.h. - 30 in built-up
areas (even on dual carriageways I think), 60 on single carriageways and
70 on dual carriageways outside built-up areas. _In theory_ you could therefore pass
from outside to inside a built-up area - thus a change of limit
downwards - but I haven't for a long time seen such a transition without
an explicit (usually 30, occasionally 40) sign. And I've _never_ seen
the white-circle-with-bar anywhere _except_ when _leaving_ built-up
areas or at the end of roadworks outside built-up areas, so that
_always_ in effect means it's now 60 or 70.]
--
J. P. Gilliver. UMRA: 1960/<1985 MB++G()AL-IS-Ch++(p)***@T+H+Sh0!:`)DNAf

The first objective of any tyrant in Whitehall would be to make Parliament
utterly subservient to his will; and the next to overturn or diminish trial by
jury ..." Lord Devlin (http://www.holbornchambers.co.uk)
Keith Nuttle
2018-03-22 13:19:43 UTC
Permalink
Post by Gene Wirchenko
Think about it. When you see the next black-and-white speed
sign, it cancels the previous one. This is true even if there is no
"unless otherwise posted", so what does "unless otherwise posted" add?
When you get your driver's license in North Carolina, you are expected to
know the Driver's Manual. Unless speed limits are posted otherwise, the
following speed limits prevail (page 51):

https://www.ncdot.gov/download/dmv/handbooks_ncdl_english.pdf

Maximum Speed Limits
In cities and towns 35
For school buses 45
For school activity buses 55
Outside cities and towns 55
For interstates 70

I have seen similar information in Indiana and Ohio. I assume all
states have the default speed limits.
--
2018: The year we learn to play the great game of Euchre
Gene Wirchenko
2018-03-22 19:12:27 UTC
Permalink
On Thu, 22 Mar 2018 09:19:43 -0400, Keith Nuttle
Post by Keith Nuttle
Post by Gene Wirchenko
Think about it. When you see the next black-and-white speed
sign, it cancels the previous one. This is true even if there is no
"unless otherwise posted", so what does "unless otherwise posted" add?
When you get your driver's license in North Carolina, you are expected to
know the Driver's Manual. Unless speed limits are posted otherwise, the
following speed limits prevail (page 51):
https://www.ncdot.gov/download/dmv/handbooks_ncdl_english.pdf
Maximum Speed Limits
In cities and towns 35
For school buses 45
For school activity buses 55
Outside cities and towns 55
For interstates 70
I have seen similar information in Indiana and Ohio. I assume all
states have the default speed limits.
Sure, but the moment you have a sign, these are overridden. I am
discussing signed limits.

Sincerely,

Gene Wirchenko
Ron C
2018-03-21 00:39:10 UTC
Permalink
Post by Char Jackson
Post by Jeff Barnett
Post by Char Jackson
On Tue, 20 Mar 2018 19:53:13 +0800, "Mr. Man-wai Chang"
Post by Mr. Man-wai Chang
Self-driving Uber kills Arizona woman in first fatal crash involving
pedestrian
<https://www.theguardian.com/technology/2018/mar/19/uber-self-driving-car-kills-woman-arizona-tempe>
[multiple groups removed]
AFAIK, the facts are still being gathered, but I'd be curious to know
how many pedestrians were killed by human drivers over the same period.
If it's more than 1, which I'm assuming is the case, then I'm not
alarmed by this incident other than having sympathy for the deceased and
her family. Initial reports said she was crossing the street, but not in
or near a crosswalk, so I wonder if it would have made a positive
difference if a human had been driving. I hope testing doesn't get
curtailed by this incident.
I believe the number last year was 6000. The real question is how do
auto driven car accident statistics compare with human drivers.
The Uber accident was not necessarily the car/drivers fault. A woman was
walking a bicycle and started to cross the street, not in the crosswalk,
and was hit just as she went into the street. The car was traveling
around 40-45mph. In other words, it was HIGHLY likely she didn't look
before crossing. The municipal police are still investigating and
certainly have not assigned any blame yet. In fact they have speculated
that this might be one of those where no primary blame is asserted.
As someone else said: You should be wary and afraid of those automated
vehicles. But you should be god-awful more afraid of all those idiots
out there jacking off with their smart phones while driving.
My personal bet is that 5 years from now we will see self-driving cars
doing spectacularly better than human-driven cars - better safety, better
mileage, faster trips - and still a bunch of idiots (the same ones who
opposed autopilots and computer assisted landings for planes) bitching
about the supremacy of human drivers, vinyl records, doctors reading
x-rays, etc, etc, etc.
I'm with you 100%. From everything I've read, the technology is coming
along much faster than I would have ever thought.
There are two major hurdles that I see. The first, of course, is the
technology itself. We already have anti-lock brakes, lane departure
warnings, adaptive cruise control, blind spot monitors, and automatic
parallel parking, oh and 360-degree virtual overhead view on the
dashboard stitched together from multiple exterior cameras, on virtually
all new vehicles. ICBW, but I think all of those things are mandated by
2020. With that much automation already in place, it's a logical (but
difficult) next step to stitch it all together and make it work without
significant human intervention.
The second hurdle is the transition period, where semi-autonomous
vehicles are forced to share the world with us humans. We're the weakest
link by far, so the sooner we can get the humans out of the picture the
better off we'll be. If people insist on playing with Facebook while
they drive, let them play on Facebook while the car drives itself.
Uber Trucking has a good initial approach. A human drops off a semi
trailer at a hub near the edge of a city, then an Uber truck is hooked
up. The Uber truck takes the trailer to the next city or across the
country, where it's once again dropped off at a trucking hub and a human
takes it into the city. Out on the highway, there's still a human in the
truck, but he or she is there just in case, not as a primary driver.
I'm guessing you missed today's Shannon Luminary Lecture Series at Nokia
Bell Labs
by Vint Cerf, Google's Internet Evangelist ( also widely known as a
“Father of the Internet”) where he talked about many of the pitfalls of
AI being integrated into the real world.
~
Bottom line: Debugging AI algorithms ain't so simple.
~~
[ Just sayin' ]
--
==
Later...
Ron C
--
Wolf K
2018-03-21 02:59:24 UTC
Permalink
On 2018-03-20 20:39, Ron C wrote:
[...]
Post by Ron C
I'm guessing you missed today's Shannon Luminary Lecture Series at Nokia
Bell Labs
by Vint Cerf, Google's Internet Evangelist ( also widely known as a
“Father of the Internet”) where he talked about many of the pitfalls of
AI being integrated into the real world.
~
Bottom line: Debugging AI algorithms ain't so simple.
Quite so. AI is not what many (perhaps most) people think it is. I'm no
expert, but I've tried to keep up with the field.

Key insight so far: You can't program an AI, but you can teach it. "Deep
learning" AIs (usually) do better than humans on single, well-defined,
and well understood problems, such as screening for cancer, or playing
games. But they do less well on vaguely defined, multiplex problems.
Humans are (still) better at interpreting novel images than AIs are.

And worst of all, the people who work on deep learning neural nets have
admitted they don't really know how they do what they do. You can tell
how well the AI is performing, but you can't tell why. Or why not. Well,
so far, anyway. Perhaps never: AFAICT, there is no consensus on what, if
any, limits there are to machine learning.

I think that automated driving assistance will be functional and will
increase safety well before we have a fully functional autonomous car
that's as flexible as a human. I think it's the flexibility that we
really want. Inflexible rides are already available, and much cheaper
than cars: trams, buses, LRT, subways. Guided vehicles. Automated, but
not fully autonomous.
--
Wolf K
kirkwood40.blogspot.com
"The next conference for the time travel design team will be held two
weeks ago."
J. P. Gilliver (John)
2018-03-21 09:41:43 UTC
Permalink
In message <***@4ax.com>, Char Jackson
<***@none.invalid> writes:
[]
Post by Char Jackson
Uber Trucking has a good initial approach. A human drops off a semi
trailer at a hub near the edge of a city, then an Uber truck is hooked
up. The Uber truck takes the trailer to the next city or across the
country, where it's once again dropped off at a trucking hub and a human
takes it into the city. Out on the highway, there's still a human in the
truck, but he or she is there just in case, not as a primary driver.
Isn't that just a variation on rail freight? OK, it's more flexible in
terms of being able to set up new "railheads", and there isn't as much
time spent building up trains (though if it takes off I can see "road
trains" as in Australia being pushed), but the principle isn't that
different as I see it.
--
J. P. Gilliver. UMRA: 1960/<1985 MB++G()AL-IS-Ch++(p)***@T+H+Sh0!:`)DNAf

Astaire was, of course, peerless, but it's worth remembering that Rogers does
everything he does, only backwards and in high heels. - Barry Norman in Radio
Times 5-11 January 2013
Wolf K
2018-03-21 12:34:21 UTC
Permalink
Post by J. P. Gilliver (John)
[]
Post by Char Jackson
Uber Trucking has a good initial approach. A human drops off a semi
trailer at a hub near the edge of a city, then an Uber truck is hooked
up. The Uber truck takes the trailer to the next city or across the
country, where it's once again dropped off at a trucking hub and a human
takes it into the city. Out on the highway, there's still a human in the
truck, but he or she is there just in case, not as a primary driver.
Isn't that just a variation on rail freight? OK, it's more flexible in
terms of being able to set up new "railheads", and there isn't as much
time spent building up trains (though if it takes off I can see "road
trains" as in Australia being pushed), but the principle isn't that
different as I see it.
Rail is much safer. IMO the push for autonomous cars is a sign of a
last desperate bid to maintain the private car, which is the most
inefficient transport we've ever devised.
--
Wolf K
kirkwood40.blogspot.com
"The next conference for the time travel design team will be held two
weeks ago."
Char Jackson
2018-03-21 14:25:47 UTC
Permalink
Post by Wolf K
Post by J. P. Gilliver (John)
[]
Post by Char Jackson
Uber Trucking has a good initial approach. A human drops off a semi
trailer at a hub near the edge of a city, then an Uber truck is hooked
up. The Uber truck takes the trailer to the next city or across the
country, where it's once again dropped off at a trucking hub and a human
takes it into the city. Out on the highway, there's still a human in the
truck, but he or she is there just in case, not as a primary driver.
Isn't that just a variation on rail freight? OK, it's more flexible in
terms of being able to set up new "railheads", and there isn't as much
time spent building up trains (though if it takes off I can see "road
trains" as in Australia being pushed), but the principle isn't that
different as I see it.
Rail is much safer.
I agree.
Post by Wolf K
IMO the push for autonomous cars is a sign of a
last desperate bid to maintain the private car, which is the most
inefficient transport we've ever devised.
There are two points in there. I can't agree with either of them. :)
Char Jackson
2018-03-21 14:31:35 UTC
Permalink
On Wed, 21 Mar 2018 09:41:43 +0000, "J. P. Gilliver (John)"
Post by J. P. Gilliver (John)
[]
Post by Char Jackson
Uber Trucking has a good initial approach. A human drops off a semi
trailer at a hub near the edge of a city, then an Uber truck is hooked
up. The Uber truck takes the trailer to the next city or across the
country, where it's once again dropped off at a trucking hub and a human
takes it into the city. Out on the highway, there's still a human in the
truck, but he or she is there just in case, not as a primary driver.
Isn't that just a variation on rail freight? OK, it's more flexible in
terms of being able to set up new "railheads", and there isn't as much
time spent building up trains (though if it takes off I can see "road
trains" as in Australia being pushed), but the principle isn't that
different as I see it.
No disagreement here, but a country like the U.S. isn't going to make it
on rail freight alone. We don't have the infrastructure for that, and
likely never will. What we do have is a decent highway system. OK, it's
crumbling due to neglect, but that can be fixed. They apparently just
need to build a wall first. They'll get to the highways and bridges
sometime after that. ;-)
Mayayana
2018-03-21 13:11:50 UTC
Permalink
"Jeff Barnett" <***@notatt.com> wrote

| As someone else said: You should be wary and afraid of those automated
| vehicles. But you should be god-awful more afraid of all those idiots
| out there jacking off with their smart phones while driving.
|

I hope you're not referring to me. I referred
to the phone problem in the time zone thread.

In this context I think it's misleading logic.
It's not an either/or choice. Technophiles are
expressing an almost frantic defense of auto-
driven cars following the AZ accident, and
they'll cook up any old logic to make their case.
Even if you think auto-driven cars are the future,
there's no reason they can't be limited to test
tracks until the technology is proven -- or not.

Whether the accident was avoidable is not
really the point. What about the man leaving
his Tesla on auto-pilot and fatally running into
a truck? These cases neither prove nor disprove
the safety of auto-driven cars. But they should
raise questions.

If you do favor auto-driven cars.... why? So
you can safely diddle your phone on your way
to work? Because you don't want to have to bother
to drive? Because you don't want to deal with
other people on the road? What rational reason
is there, after all, to have auto-driven cars? And
if there is a good reason, would it not also apply
to eating, walking and all the other unregulated
activities we do? Where do you draw the line?
Should you trust yourself to wrestle a chicken bone
without choking?
(Of course, it's true that some people don't
walk in any unofficial capacity. They pay a monthly
fee to stand on a treadmill, breathing indoor air,
under fluorescent lights, walking while they read
reports for work. Those people only walk when it's
an official, retail activity, duly recorded on their
computerized watch.... And I suppose we can't
really classify the intake of "power bars" as eating...)

I can see auto-driven cars in a controlled
environment where there are *only* auto-driven
cars (with giant rubber bumpers). Mixing them with
human drivers and uncontrolled circumstances
seems crazy to me. And there's no credible case
for the technology in the first place. It's a case
of "Jetson Futurism Disorder". JFD. It's all the
rage these days. The prescription is to spend a
week in the woods to reconnect with basic
physicality. :)


| My personal bet is that 5 years from now we will see self-driving cars
| doing spectacularly better than human-driven cars - better safety, better
| mileage, faster trips - and still a bunch of idiots (the same ones who
| opposed autopilots and computer assisted landings for planes) bitching
| about the supremacy of human drivers, vinyl records, doctors reading
| x-rays, etc, etc, etc.

And cooking? And dressing yourself? And
what's the problem with doctors reading
x-rays? Doesn't human experience count
for anything? You can't computerize life. It's
not digital.

One of my favorite examples to explain to
people the limits of computers and the marketing
of "AI" is to imagine an android that's programmed
to drive across the country. If such a thing were
done then people would be amazed. We'd be thinking
about buying androids to raise our kids, mow
our lawns..... But what if that android goes all
the way from NYC to Nevada and comes upon
something it's not programmed to deal with?
Say, for example, a road block, a sinkhole in
the road, or maybe a 3-way fork? Then the android
crashes. Either the software, the car, or both.
It made the drive all the way to Nevada only
because it was programmed to deal with the things
it encountered. That's not AI. It only looks intelligent
to the observer. But in reality it's simply complex
software that's limited to numeric, linear operations.
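
   The android point can be made concrete in a few lines (all names
   here are made up for illustration): a lookup table of pre-programmed
   responses handles exactly the cases its authors anticipated, and
   everything else has to hit a fallback or the program dies.

```python
# A rule-based "driver": it looks intelligent only while inputs stay
# inside the cases its authors anticipated.  Names are illustrative.

HANDLERS = {
    "clear_road": lambda: "continue",
    "red_light":  lambda: "stop",
    "pedestrian": lambda: "brake hard",
}

def decide(situation):
    try:
        return HANDLERS[situation]()
    except KeyError:
        # The android in the example has no branch for a sinkhole or a
        # 3-way fork; a fielded system needs an explicit safe fallback
        # rather than a crash.
        return "pull over and stop"

decide("red_light")   # handled: "stop"
decide("sinkhole")    # unanticipated: falls back to "pull over and stop"
```

   Whether the fallback is sensible is exactly the hard part; the table
   itself contributes no intelligence at all.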

Unfortunately that also means that if we come
up with a Cherry 2000 it won't *really* be a lover
but only a high-tech masturbation toy. :)
pyotr filipivich
2018-03-21 14:42:39 UTC
Permalink
Post by Mayayana
If you do favor auto-driven cars.... why? So
you can safely diddle your phone on your way
to work? Because you don't want to have to bother
to drive? Because you don't want to deal with
other people on the road? What rational reason
is there, after all, to have auto-driven cars?
The same reasons as for having someone else drive the car. One
less skillset to master, _I_ do not have to worry about finding
parking, "Tony" can get me home when I'm too "tired and emotional" to
be safe on the streets. And while Tony drives, I can "diddle" the
Times, the Daily Fishwrap, the Fizzbean Prospectus. Or I can say my
prayers, read a book, take a nap, have a cup of coffee.
Or because I can't drive, due to sensory issues, age issues, etc,
etc. For whatever reason I might not be safe on the roadways with a
car.

What we've seen happening over the last decades is the replacement
of humans with 'robots'. Robots "type" my papers, check my spelling,
will check my grammar (for some values of "checking"), etc., etc. The
whole "smart house" idea, smart 'things', the Internet of Things: we've
now got 'bots to replace the hired help. And still, as it always was - it is
so hard to get good help.

That is why I would like to see autonomous cars. So that "someone
else" can drive me & the missus to the gym on our schedule. Sure we
can take the bus - it's only an hour and a half trip and a 55 minute
wait before the class. We don't really have that much else we
consider is important, which we wanted to do today, right?

But I agree - these are the teething problems of a new technology.
And it is so hard to get good help these days.

tschus
pyotr
--
pyotr filipivich
Next month's Panel: Graft - Boon or blessing?
Jeff Barnett
2018-03-21 18:02:06 UTC
Permalink
Post by Mayayana
| As someone else said: You should be wary and afraid of those automated
| vehicles. But you should be god-awful more afraid of all those idiots
| out there jacking off with their smart phones while driving.
|
I hope you're not referring to me. I referred to the phone problem in
the time zone thread.
Not in particular - I don't recall reading the "time zone" thread. What
I said above, my pseudo quote, has been said in one way or another by
almost every rational observer of the evolving technology and its
inherent problems.

The capabilities of average human beings in most areas are rather
pathetic and only awesome in a few. Chess was once considered to be a
test of intelligence but many 1960s computer players could beat 98% of
all humans. So was the computer intelligent? No. We decided that chess
wasn't a good test. Auto drivers will soon be much better than we are;
it's silly to not believe that. Yet we still think we are superior, but
not for long. We have neither the attention span, the reflexes, nor the
scene-recognition speed and accuracy to compete (a few years from now). I for
one will be happy to see texting drivers, road rage idiots, etc, removed
from car control. Hence my pseudo quote.
--
Jeff Barnett
mechanic
2018-03-21 18:07:03 UTC
Permalink
What rational reason is there, after all, to have auto-driven
cars?
Safety.
Mayayana
2018-03-22 02:38:16 UTC
Permalink
"Char Jackson" <***@none.invalid> wrote
...

Here's the video:



Driver looking down. There was a second or two in which
an actual driver would have at least slammed on
the brakes. They might not have been able to avoid
hitting the woman, but they wouldn't have hit her
at 38 mph.
Keith Thompson
2018-03-21 17:21:50 UTC
Permalink
"Mr. Man-wai Chang" <***@gmail.com> writes:
[...]
Post by Mr. Man-wai Chang
Self-driving Uber
[...]

If you want to discuss this, please drop all the irrelevant newsgroups,
particularly comp.lang.c. Don't feed the troll.
--
Keith Thompson (The_Other_Keith) kst-***@mib.org <http://www.ghoti.net/~kst>
Working, but not speaking, for JetHead Development, Inc.
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
Kenny McCormack
2018-03-21 21:39:10 UTC
Permalink
Post by Keith Thompson
[...]
Post by Mr. Man-wai Chang
Self-driving Uber
[...]
If you want to discuss this, please drop all the irrelevant newsgroups,
particularly comp.lang.c. Don't feed the troll.
How/why would I want to do that, assuming both of the following are true:

1) I want to discuss this.

2) CLC is the only relevant (I trust it is clear what I mean by
relevant) newsgroup that I follow/monitor/post-to.

Really, Kiks, you need to think these things through.
--
The randomly chosen signature file that would have appeared here is more than 4
lines long. As such, it violates one or more Usenet RFCs. In order to remain
in compliance with said RFCs, the actual sig can be found at the following URL:
http://user.xmission.com/~gazelle/Sigs/God
Wolf K
2018-03-22 00:13:55 UTC
Permalink
Post by Kenny McCormack
Post by Keith Thompson
[...]
Post by Mr. Man-wai Chang
Self-driving Uber
[...]
If you want to discuss this, please drop all the irrelevant newsgroups,
particularly comp.lang.c. Don't feed the troll.
1) I want to discuss this.
2) CLC is the only relevant (I trust it is clear what I mean by
relevant) newsgroup that I follow/monitor/post-to.
Really, Kiks, you need to think these things through.
The fact that a lot of people still believe that one can program AI
should be enough IMO to make the topic relevant to comp.programming.

You can write a program to play a game, but you can't program it to play
better than a human. To do that you'd not only have to play better than
a human, but understand how come you can play better. However, you can
program a neural net to play lots of go games against itself, and learn
from them. That's how the Go-AI (sorry, I forget the details of
who/what/when/where/why) was able to beat the best human Go players.
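
(The Go program in question was DeepMind's AlphaGo; its successor
AlphaGo Zero learned almost entirely from self-play.) The self-play
idea can be sketched on a toy game. Everything below is illustrative,
with made-up names, and it uses exact table backups where a Go-scale
system needs a neural net, because Go's positions cannot be enumerated:

```python
# Toy illustration of a program discovering a strategy by playing itself
# rather than being told the strategy.  The game: a pile of stones, each
# player removes 1-3, whoever takes the last stone wins.  Q[s][a] holds
# the value (+1 win, -1 loss) of taking a stones when s remain, from the
# mover's point of view.  Repeated sweeps of self-play backups converge
# on perfect play.

def learn(max_pile=20, sweeps=5):
    Q = {s: {a: 0.0 for a in range(1, min(3, s) + 1)}
         for s in range(1, max_pile + 1)}
    for _ in range(sweeps):
        for s in Q:
            for a in Q[s]:
                if s - a == 0:
                    Q[s][a] = 1.0               # taking the last stone wins
                else:
                    # opponent moves next at s-a; their gain is our loss
                    Q[s][a] = -max(Q[s - a].values())
    return Q

Q = learn()
# The table ends up encoding the classical result that positions which
# are multiples of 4 are lost for the player to move: whatever you take,
# the opponent restores a multiple of 4.
```

Nobody typed "avoid multiples of 4" into the program; that rule falls
out of the backups, which is the (very small-scale) point.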

You can program an AI to always win or draw a game with a completely
defined winning strategy, such as for tic-tac-toe or nim. There's no
such strategy for Chess or Go. Nor for driving a car.
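
For multi-pile nim the completely defined strategy is classical: XOR
the pile sizes together (the "nim-sum"); the player to move is lost
exactly when it is zero, and otherwise can always move to make it zero.
A minimal sketch (function names are my own):

```python
from functools import reduce
from operator import xor

def nim_sum(piles):
    """XOR of all pile sizes; the position is lost for the player to
    move exactly when this is zero (with best play on both sides)."""
    return reduce(xor, piles, 0)

def winning_move(piles):
    """Return (pile_index, new_size) leaving the opponent a zero
    nim-sum, or None if the current position is already lost."""
    s = nim_sum(piles)
    if s == 0:
        return None
    for i, p in enumerate(piles):
        if p ^ s < p:          # reducing pile i to p^s zeroes the sum
            return i, p ^ s

piles = [3, 4, 5]              # nim-sum is 3^4^5 = 2, so the mover wins
i, new_size = winning_move(piles)
piles[i] = new_size            # opponent now faces nim-sum 0
```

A dozen lines of arithmetic play perfectly forever, which is exactly
what no such closed form gives you for Chess, Go, or traffic.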

However, I have no doubt that driver-AIs can be trained to handle novel
situations at least as well as the best humans can. How soon? That I
won't even try to estimate. But even the best human drivers will vary in
their ability to handle any given novel situation. So my claim is rather
vague.

OTOH, driver-assisting tech will I think become better and better, so
that drivers will come to rely on it. That will deskill drivers. Eg,
I've lost my ability to maintain constant speed since I started using
cruise control. Before that, I could keep the car within +/- 2km/hr
uphill and down.
--
Wolf K
kirkwood40.blogspot.com
"The next conference for the time travel design team will be held two
weeks ago."