Curb Your (AI)nthusiasm

The Boston Dynamics “dogs” have become something of an inside joke around here: any time we want to suggest that an idea, project, or new technology might have worrisome long-term implications, the robotic canines come up in conversation.  Much of it has to do with their somewhat surreal, uncanny valley look, something familiar enough to be recognizable but alien enough to make you uncomfortable (just ask the writers at Black Mirror).  But really, they’re just a useful shorthand for concerns about the potential unintended consequences of AI and ML developments.

[GIF: Boston Dynamics robot dog]
What could possibly go wrong?

More broadly, I think we have the wrong idea about AI, with unfounded assumptions and, frankly, guesswork seemingly everywhere.  The media latches on to public warnings about the risks of AI from Stephen Hawking or Elon Musk, and the public assumes that we’re just a few years away from Skynet.  Others imply that AI systems are little short of mystical (in fact, Yahoo Finance’s tech critic literally called Google’s AI platform “magic”), leaving people to assume that AI is the answer to every human need and the gateway to a future of ease.

The Boring Middle

Of course, this Manichaean, red-pill/blue-pill approach to an extremely complicated subject does no one any good.  Although AI platforms present many opportunities to change the way we live our lives and run our businesses, they are still nascent tools with plenty of issues to work through.  That’s why the largest tech companies have deployed hundreds, even thousands, of employees and data scientists to help move AI systems from merely interesting to operationally useful.  And while it is certainly true that some of these platforms are already operating and providing value, plenty of others have either burned out or still need a tremendous amount of revision.  For every Google Assistant setting up an appointment for a haircut, there is another Google AI platform that can’t pass a sophomore-year math test.

[GIF: number crunching]
To be fair, at this point, I don’t think I could, either.

A large part of the problem with public perception is a misunderstanding of exactly what AI is.  At root, artificial intelligence is a sorting tool, a decisional platform that analyzes patterns in raw data and comes to a (hopefully) correct conclusion about how the pattern will apply in a given case.  The process involves a substantial amount of training data: in effect, raw examples that the algorithm can analyze for patterns.  As the AI platform reviews more training data, it becomes more sophisticated at recognizing those patterns.  This is one reason why companies that invest heavily in AI are also voracious consumers of data about us: the more training data the algorithm has, the better it will become at identifying patterns.
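
If you want to see that dynamic in miniature, here’s a toy sketch in Python using scikit-learn (my illustration, to be clear, not anything Google is running): the same simple model, trained on progressively larger slices of a handwritten-digits dataset, gets steadily better at recognizing digits it has never seen.

```python
# A minimal sketch of "more training data, better pattern recognition,"
# using scikit-learn's bundled handwritten-digits dataset.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.3, random_state=0
)

# Train the same model on progressively larger slices of the data
# and watch accuracy on unseen digits climb.
for n in (50, 200, 800, len(X_train)):
    model = LogisticRegression(max_iter=5000).fit(X_train[:n], y_train[:n])
    print(f"{n:>5} training examples -> {model.score(X_test, y_test):.2%} accuracy")
```

The exact numbers will vary run to run, but the trend is the point: the pattern-spotter improves as you feed it.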

Even armed with that basic understanding of what AI is, you can start debunking myths about what AI does.  No, it’s not Skynet, “thinking” for itself about what it wants to do.  At least not yet, anyway.  We’re several leaps in processing power and, frankly, human programming skill away from that point, because AI still requires humans to create the algorithms, write the code, and control the inputs.  And while the ability to recognize patterns is a powerful tool (in fact, it’s an important component of how humans think), there is far more to cognition than mere pattern recognition.

So, while we may talk about “computers having dreams” and creating amazing artwork, what we’re really saying is that we’ve created systems capable of identifying patterns in a way that they can, more or less, describe back to us.  That’s why a computer’s “dream” of a dumbbell has arms attached to it: in the training data, every image the AI system saw had an arm lifting the weight.  The AI doesn’t “know” that those arms are not part of the dumbbell, but then, humans don’t “know” things until we learn them through instruction or experience.  It all has to be taught somehow.
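
Here’s a deliberately contrived sketch of that same failure mode (the feature names are invented for illustration): in the training data, every dumbbell also comes with a visible arm, so the model splits its trust between the bar and the arm.  Show it a dumbbell with no arm attached, and it genuinely can’t decide what it’s looking at.

```python
# A toy version of the "dumbbells come with arms" problem. The features
# are hypothetical stand-ins for what an image model might latch onto.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
is_dumbbell = rng.integers(0, 2, n)   # the true label
metal_bar = is_dumbbell               # the genuine signal
arm_visible = is_dumbbell             # spurious: every training dumbbell has an arm
noise = rng.normal(size=n)            # everything else in the image

X = np.column_stack([metal_bar, arm_visible, noise])
model = LogisticRegression().fit(X, is_dumbbell)

# A dumbbell with no arm attached: the model hovers near 50/50,
# because it learned the arm as part of the pattern.
print(model.predict_proba([[1, 0, 0.0]]))
```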

[GIF: Chidi Anagonye, The Good Place]
No, Chidi, I impliedly invoked Kant. Big diff.

The Golden Mean

Reduced to its essence, then, AI is about pattern recognition derived from training data.  Good data means better pattern recognition and outcome prediction, though not necessarily any understanding of what the outcome “means.”  That is a crucial distinction: information is not necessarily meaning, and results are not necessarily wisdom.  Those require a human element, by which I mean an actual human acting in an oversight or revision capacity.

Deploying AI requires an understanding of its limits and possibilities.  Consider its uses in government, one of the areas where both skeptics and advocates are calling for the most thorough scrutiny.  Some experts, like the Oxford Internet Institute’s Helen Margetts, advocate for more extensive use of AI in government services to improve access, reduce waste, and cut down on lost time.  She cautions, though, that any use of AI must be “responsive, efficient, and fair.”  Given the breadth and sensitivity of the data governments possess, that’s very good counsel.  Unthinking application of algorithmic governance is the short route to an unaccountable, opaque government.

[GIF: robot applying lipstick]
At least it’s efficient…oh.

In fact, the accountability model is crucial for AI altogether, which is why so much of the present discussion about AI centers on encoded human weaknesses.  Some of the possibilities that AI might unlock are incredible: everything from predictive modeling that prevents fires and building collapses, to early warnings about health needs, to making pesto taste better.  But without a recognition that those same systems can undermine autonomy, liberty, and human expression, we risk subordinating the values we need and use to define ourselves in favor of an expedience we merely prefer.

The solution is all about balance.  Right thinking and right acting, in the AI context, means weighing the value propositions of forward-looking technology against the potential risks.  It also means evaluating how those value propositions actually benefit users (including businesses), as opposed to “magic wand” thinking about what AI and ML can do.  And, most importantly, it requires critical thinking.  Humans coding bias into AI is a significant, ongoing problem, especially when it is unconscious bias because, you know, it’s unconscious.  The true measure of a balanced approach to AI is how critically you examine the data you use, the algorithms you design, and the meaning you assign to the outputs.  AI has enormous potential as a tool, but it will only be as good as we are at challenging ourselves, and our assumptions, in using it.
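
What does that kind of critical examination look like in practice?  At minimum, something like the sanity check below (a hypothetical example with made-up column names, not a complete fairness audit): before trusting a model’s outputs, compare how it treats different groups in the data.

```python
# A minimal sketch of a first-pass output audit. If the approval rate
# differs sharply across groups, that's a flag to go back and question
# the training data and the algorithm, not a result to deploy as-is.
import pandas as pd

predictions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

print(predictions.groupby("group")["approved"].mean())
# group
# A    0.75
# B    0.25
```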

[GIF: Larry David, Curb Your Enthusiasm]
As always, Larry David understands me best.

