Wednesday, December 10, 2014

Of Google and Gestalt

Maybe I have an odd definition of "amusing," but the near-juxtaposition of two Slashdot article picks today made me smile.

The first item is a thought-provoking rebuttal of Elon Musk's and Dr. Stephen Hawking's warnings about the dangers of artificial intelligence (AI).  Its salient point is that intelligence is not autonomy--i.e., a machine cannot truly have free will.  Certainly, our reliance on AI--as with computers and networks in general--makes us especially vulnerable to its failures.  We're sometimes vulnerable to its successes, too.  (Think obsolete livelihoods, cybercrime, etc.)  And when some fool decides to abdicate critical decisions to an algorithm?  Yeah--I think that most of us know the end of that movie.

There's also a phenomenon known as "the uncanny valley," wherein computer-generated (human) images come oh-so-close-but-no-cigar to lifelike, and we actually react more negatively to them than to something more cartoonish.  (Aside:  If you're among those who were creeped out by The Polar Express but think that the minions of Despicable Me are adorable, now you know why.)  In Star Trek: The Next Generation, the android Data notes that he has been programmed not only to blink, but to do so at semi-random intervals so as not to trigger that vague sense of unease associated with the uncanny valley.
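(For my fellow code-nerds: the trick Data describes boils down to jitter--a human-ish base rhythm nudged by a random offset.  A toy sketch, mine and decidedly not canon:)

    # A toy sketch (mine, not Soong-type firmware) of Data's blink trick:
    # blink on a human-ish base rhythm, plus or minus random jitter, so
    # the timing never settles into a machine-perfect metronome.
    import random
    import time

    def blink_loop(base_interval=4.0, jitter=1.5):
        while True:
            # Wait the base interval, offset by a random amount.
            time.sleep(base_interval + random.uniform(-jitter, jitter))
            print("blink")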

And, even being a programmer, I have to admit to being creeped out myself by the accuracy of voice recognition in some automated phone systems.  In the end, it may well be that the market's response to the uncanny valley will forestall an AI bot takeover before the technology is even capable of posing a threat.

In short, we are (probably) a long, long way off from renegade replicants and time-travelling hit-men for a genocidal AI.  Or so The Matrix wants us to believe...  ;~)

At this point, it's tempting to congratulate ourselves for being such inimitably complex carbon-based beasties.  Until we consider the second Slashdot item, which brings home how easy it is to impersonate a human browsing a website--and not just any human, but a wealthy one.  In related news, Google made headlines last week for stealing a march in the arms race against the bots--or, more aptly, against the people who code them.  (Though I do have to wonder whether the visually impaired will be the collateral damage of that escalation.)
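For the skeptics: the article doesn't publish its code, but the basic impersonation really is a few lines' worth of copied HTTP headers.  A minimal sketch (the URL and header values are illustrative placeholders, not taken from the article):

    # A minimal sketch of a bot dressing up as a "human" browser by
    # copying a handful of HTTP headers.  All values below are
    # illustrative placeholders, not from the article.
    import requests

    HEADERS = {
        # Claim to be desktop Chrome on a pricey Mac...
        "User-Agent": ("Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) "
                       "AppleWebKit/537.36 (KHTML, like Gecko) "
                       "Chrome/39.0.2171.95 Safari/537.36"),
        "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
        "Accept-Language": "en-US,en;q=0.8",
        # ...referred by a luxury retailer, for that "wealthy human" touch.
        "Referer": "https://luxury-retailer.example.com/",
    }

    response = requests.get("https://www.example.com/", headers=HEADERS)
    print(response.status_code, len(response.text))

To an analytics script tallying page views, that request is indistinguishable from an affluent shopper--which is precisely the arms race Google is escalating.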

That's the contrast that made me smile, albeit wryly.  To wit:  The bar for "humanity" is set so high in one area of software development, but so low in another.  (Embarrassingly, that latter area is my own.)

As Mr. Etzioni pointed out, part of our culture's freak-out over the threat of AI is our fear of irrelevance.  Or...do we also fear that we've passed some inflection point where our lazy, self-fetishising parochialism leaves us open to a palace coup by our digital serfs?  Personally, I don't worry about machine learning half so much as I worry about humans who refuse to learn.  But if my Gentle Reader is more of the Musk/Hawking camp, perhaps we can agree that the only viable response is to insist on a higher bar for humanity.