I am an AI Neophyte

This blog was written by Candace Worley, McAfee’s former Vice President and Chief Technical Strategist.

I am an Artificial Intelligence (AI) neophyte. I’m not a data scientist or a computer scientist or even a mathematician. But I am fascinated by AI’s possibilities, enamored with its promise, and at times terrified of its potential consequences.

I have the good fortune to work in the company of amazing data scientists who seek to harness AI’s possibilities. I wonder at their ability to make artificial intelligence systems “almost” human. And I use that term very intentionally.

I mean “almost” human because, to date, AI systems lack the fundamentals of humanness. They possess the mechanics of humanness, qualities like logic, rationality, and analytics, but that is far from what makes us human. Their most human trait is one we would prefer they not inherit: a propensity to perpetuate bias. To be human is to have consciousness. To be sentient. To have common sense. And to be able to use these qualities, and the life experience that informs them, to successfully interpret not just the black and white of our world but the millions of shades of grey.

While data scientists are grappling with many technical challenges associated with AI, there are a couple I find particularly interesting: the first is bias, and the second is a lack of common sense.

AI’s propensity for bias is a monster of our own making. Since AI is largely a slave to the data it is given to learn from, its outputs will reflect all aspects of that data, bias included. We have already seen situations where applications leveraging AI have perpetuated human bias unintentionally, but with disturbing consequences.

For example, many states have started to use risk assessment tools that leverage AI to predict probable rates of recidivism for criminal defendants. These tools produce a score that a judge then uses in determining a defendant’s sentence. The problem is not the tool itself but the data used to train it. There is evidence of significant historical racial bias in our judicial system, so when that data is used to train AI, the resulting output is equally biased.

A report by ProPublica in 2016 found that algorithmic risk assessment tools were likely to falsely flag African American defendants as future criminals at nearly twice the rate of white defendants*. For any of you who saw the Tom Cruise movie Minority Report, it is disturbing to consider the similarities between the fictional technology used in the movie to predict future criminal behavior and this real-life application of AI.
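To make the mechanism concrete, here is a minimal, hypothetical sketch in Python (using scikit-learn) of how a model trained on skewed historical labels reproduces that skew. Every feature, label, and number here is invented for illustration; this is not the actual risk assessment tool, whose data and internals are not public.

```python
# Hypothetical illustration only: a toy model trained on biased
# historical labels reproduces that bias. This is NOT any real
# risk-assessment tool; all data below is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# One feature that genuinely relates to the outcome...
prior_offenses = rng.poisson(2, n)
# ...and a group attribute that should be irrelevant.
group = rng.integers(0, 2, n)  # 0 or 1

# Simulate biased historical labels: group 1 was flagged
# "high risk" more often at the same offense count.
true_risk = 1 / (1 + np.exp(-(prior_offenses - 2)))
biased_flag_rate = np.clip(true_risk + 0.2 * group, 0, 1)
label = rng.random(n) < biased_flag_rate

# Train on the biased labels; the model dutifully learns the skew.
X = np.column_stack([prior_offenses, group])
model = LogisticRegression().fit(X, label)

# Same offense history, different group: different "risk score".
same_history = np.array([[2, 0], [2, 1]])
print(model.predict_proba(same_history)[:, 1])  # group 1 scores higher
```

Nothing in the code “decides” to discriminate; the disparity comes entirely from the labels the model was trained on, which is exactly the problem with biased historical data.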

The second challenge is how to train artificial intelligence to be as good at interpreting nuance as humans are. It is straightforward to train AI to do something like identify an image of a hippopotamus: provide it with hundreds or thousands of images or descriptions of a hippo, and eventually it gets it right most, if not all, of the time.

The accuracy percentage is likely to go down for things that are more difficult to distinguish, such as a picture of a field of sheep versus a picture of popcorn on a green blanket, but with enough training even this is a challenge that can be overcome.
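As a rough sketch of what that training looks like in practice, the loop below (Python, with PyTorch and torchvision as an assumed stack) shows labeled images to a small network over and over until it predicts “hippo” correctly most of the time. The folder layout, model choice, and hyperparameters are placeholder assumptions, not a description of any particular product.

```python
# A minimal sketch of the supervised training loop described above.
# Dataset path and folder names are hypothetical placeholders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

# Each image is labeled by its folder, e.g. images/hippo/ vs images/not_hippo/.
data = datasets.ImageFolder(
    "images/",  # hypothetical path
    transform=transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ]),
)
loader = DataLoader(data, batch_size=32, shuffle=True)

# A small off-the-shelf network; the final layer is resized to our classes.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, len(data.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Show the network thousands of labeled examples; it gradually adjusts
# its weights until its predictions match the labels most of the time.
for epoch in range(10):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```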

Interestingly, the challenge is not limited to things that lack distinguishing characteristics. In fact, things so obvious that they never get stated or documented can be equally difficult for AI to process.

For example, we humans know that a hippopotamus cannot ride a bicycle. We inherently know that if someone says “Jimmy played with his boat in the swimming pool” that, except in very rare instances likely involving eccentric billionaires, the boat was a toy boat and not a full-size catamaran.

No one told us these things – it’s just common sense. The common sense aspects of interpreting these situations could be lost on AI. The technology also lacks the ability to infer emotion or intent from data. If we see someone buying flowers, we can mentally infer why: a romantic dinner, or somebody’s in the doghouse. Not only can we guess why they are buying flowers, but when I say somebody’s in the doghouse, you know exactly what I mean. It’s not that they are literally in a doghouse, but that someone did something stupid and the flowers are an attempt at atonement.

That leap is too big for AI today. Add cultural differences to the mix and the complexity increases exponentially. If a British person says to put something in “the boot,” it is likely to be groceries; if an American says it, it will likely be a foot. Teaching AI common sense is a difficult task, and one that will take significant research and effort on the part of experts in the field.

But the leap from logic, rationality, and analytics to common sense is one we need AI to make for it to truly become the tool we need it to be, in cybersecurity and in every other field of human endeavor.

In my next blog, I’ll discuss the importance of ensuring that this profoundly impactful technology reflects our human values in its infancy, before it starts influencing and shaping them itself.

*ProPublica, “Machine Bias,” May 23, 2016
