Post by danny burstein
Post by Rhino
Post by danny burstein
Post by Rhino
That must mean you because your second last paragraph truly IS incoherent.
Facts scare you, eh?
Like, oh, this piece from the pretty well regarded MIT Media Lab [a]
https://www.media.mit.edu/articles/facial-recognition-technology-is-both-biased-and-understudied/
[a] let's forget about that little issue with Epstein, ok?
WTF does Epstein have to do with facial recognition? Or me?
Are you proud of demonstrating just how stupid you sound?
On the very slim chance you really don't have a clue: was I remotely
abusive to you? If so, I certainly don't recall that.
Maybe you could quote the offending passage?
Post by danny burstein
The MIT Media Lab is one of those groups way above
any of our pay grades whose pronouncements are
definitely worth paying attention to.
I am well aware of MIT and their expertise in the field of computing.
Post by danny burstein
The Epstein reference was to the recent scandal in which
it turned out he had given them (as well as lots of
other institutions) plenty of contributions. But in
their case, it wasn't just to get attaboys and positive
recognition, but there were some pretty close connections
between him and some of the top people.
And how do Epstein's donations to MIT refute anything I've said?
Here's a little primer for you since you admit to not knowing a whole
lot about computers. (Or at least I *think* you admitted that, but it
was in one of your less coherent paragraphs, so I could be wrong.)
A facial recognition system is going to compare one photograph to a
series of photographs in a database. The photographs will have been
taken by cameras, perhaps without any human intervention. The system
will compare the target picture to each of the ones in the database
until it finds one that is a close match. The computer itself is not
human and has no biases against different "races"; ditto for the cameras
or databases. I defy you to tell me how a computer or camera could be
racist.
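
To make that comparison step concrete, here is a minimal sketch in
Python. Treat everything in it as an assumption for illustration:
real systems first reduce each face to a numeric feature vector (an
"embedding") with a trained model, and the names, vectors, and
function names below are made up, not any real product's API.

    import numpy as np

    def similarity(a: np.ndarray, b: np.ndarray) -> float:
        # Cosine similarity: 1.0 means the two vectors point the
        # same way (a very close match); values near 0 mean the
        # faces are unrelated.
        return float(np.dot(a, b) /
                     (np.linalg.norm(a) * np.linalg.norm(b)))

    def best_match(target: np.ndarray, database: dict[str, np.ndarray]):
        # Compare the target embedding against every entry in the
        # database and return whichever scores highest -- the
        # "compare one photograph to a series of photographs" loop
        # described above.
        best_name, best_score = None, -1.0
        for name, vec in database.items():
            score = similarity(target, vec)
            if score > best_score:
                best_name, best_score = name, score
        return best_name, best_score

Note that the loop itself knows nothing about the person in the
photo; it just ranks vectors by a number.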
Now, the facial recognition system is presumably initiated by a human
being. SOMEONE is going to decide which picture is being searched in the
database. They will presumably supply the clearest available picture
of the person the police are trying to find and feed that to the
system. Assuming they actually supply a picture of the right person
and not some innocent third party, the system will do the rest.
Now, the input parameters for the search MIGHT conceivably include an
option for setting how close the match has to be. In a perfect world,
starting with a perfectly clear and perfectly lit photo, the match would
also have to be 100% and you wouldn't need to specify a percentage. But
the picture the police have may not be perfect; it could be blurry, the
lighting might be bad, and the face in their picture might be at an
angle that makes it hard to compare with the pictures in a mug shot,
which means requiring a 100% match might never turn up ANYTHING in the
database. So, it's possible there is an option to lower the percentage
to, say, 95% or 90%, just to get some suspects.
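
Here's how that threshold option might look, again a hedged sketch
rather than any real product's interface. It reuses the hypothetical
similarity() function from the sketch above, and the 0.95/0.90
cutoffs mirror the percentages in the paragraph:

    def candidates_above(target, database, threshold=0.95):
        # Return every database entry whose similarity to the target
        # meets the threshold, best first. Lowering the threshold
        # (e.g. from 0.95 to 0.90) widens the suspect list -- the
        # trade-off described above for blurry or badly lit photos.
        hits = [(name, similarity(target, vec))
                for name, vec in database.items()]
        return sorted((h for h in hits if h[1] >= threshold),
                      key=lambda h: h[1], reverse=True)

With the threshold at 1.0, a blurry photo would likely return
nothing; at 0.90 you get back several SIMILAR faces, which is exactly
why everything on that list is a possible match and not an
identification.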
An ethical operator will know that any hits that turn up in that search
are only POSSIBLE matches and will advise his police colleagues only
that this person is SIMILAR but not a perfect match. Now, the human
investigators COULD demonstrate some sort of racial bias by preferring
one of the people on the suspect list "because he's black and we all
know those blacks are all criminals at heart" (or similar racist
garbage) but it's NOT the software that's being racist.
I would be happy to discuss this with you further if any of that is not
clear, but if you abuse me again, I *will* killfile you as a waste of my
time.
Your choice: learn something or just fling abuse at people who are
trying to help you understand something outside of your field of
expertise, whatever that is.
--
Rhino