Why AI Innovation Must Reflect Our Values in Its Infancy

This blog was written by Candace Worley, McAfee’s former Vice President and Chief Technical Strategist.

In my last blog, I explained that while AI possesses the mechanics of humanness, we need to train the technology to make the leap from mimicking humanness with logic, rationality, and analytics to emulating humanness with common sense. If we evolve AI to make this leap, the impact will be monumental, but it will require our global community to take a more disciplined approach to pervasive AI proliferation. Historically, our enthusiasm for and consumption of new technology has outpaced society’s ability to evolve legal, political, social, and ethical norms.

I spend most of my time thinking about AI in the context of how it will change the way we live: how it will change the way we interact, impact our social systems, and influence our morality. These technologies will permeate society, and the ubiquity of their use will have far-reaching implications. We are already seeing evidence of how AI is changing the way we live and interact with the world around us.

Think Google. It excites our curiosity and puts information at our fingertips. What is tripe – should I order it off the menu? Why do some frogs squirt blood from their eyes? What does exculpatory mean?

AI is weaving the digital world into the fabric of our lives and making information instantaneously available at our fingertips.

AI-enabled technology is also capable of anticipating our needs. Think Alexa. As a security professional I am a holdout on this technology, but its allure is indisputable. It makes the digital world accessible with a voice command. It understands more than we may want it to. Did someone tell Alexa to order coffee pods and toilet tissue, and if not, how did Alexa know to order them? Maybe some things I just don’t want to know.

I also find it a bit creepy when my phone assumes (and gets it right) that I am going straight home from the grocery store letting me know, unsolicited, that it will take 28 minutes with traffic. How does it know I am going home? I could be going to the gym. It’s annoying that it knows I have no intention of working out. A human would at least have the decency to give me the travel time to both, allowing me to maintain the illusion that the gym was an equal possibility.

On a more serious note, AI-enabled technology will also impact our social, political, and legal systems. As we incorporate it into more products and systems, issues related to privacy, morality, and ethics will need to be addressed.

These questions are being asked now, but with AI poised to become embedded in everything we interact with, it is critical that we begin evolving our societal structures to address both the opportunities and the threats that will come with it.

The opportunities associated with AI are exciting. AI shows incredible promise in the medical world, where it is already in use: tools that leverage machine learning are helping doctors identify disease-related patterns in imaging, and research is under way on using AI in the fight against cancer.

For example, in May 2018, The Guardian reported on skin cancer research in which a convolutional neural network (CNN), a form of AI, detected skin cancer 95% of the time, compared to human dermatologists, who detected it 86.6% of the time. Additionally, facial recognition in concert with AI may someday be commonplace in diagnosing rare genetic disorders that today may take months or years to diagnose.
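For readers curious what such a system looks like under the hood, below is a minimal, illustrative sketch of a CNN image classifier of the general kind referenced above. It is not the model from the cited study; the layer sizes, image dimensions, and the dataset directory names are assumptions chosen only to show the shape of this approach.

# Minimal, illustrative CNN for binary skin-lesion classification (a sketch,
# not the model from the cited study; sizes and paths below are hypothetical).
import tensorflow as tf
from tensorflow.keras import layers

def build_model(image_size=(224, 224)):
    """Return a small CNN that outputs the probability an image is malignant."""
    return tf.keras.Sequential([
        layers.Input(shape=image_size + (3,)),   # RGB image input
        layers.Rescaling(1.0 / 255),             # normalize pixel values to [0, 1]
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),   # probability of malignancy
    ])

if __name__ == "__main__":
    model = build_model()
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    model.summary()
    # Training would require a labeled dataset, e.g. images sorted into
    # "lesions/train/benign" and "lesions/train/malignant" (hypothetical paths):
    # train_ds = tf.keras.utils.image_dataset_from_directory(
    #     "lesions/train", image_size=(224, 224), label_mode="binary")
    # model.fit(train_ds, epochs=10)

A real diagnostic system would involve far more than this, of course, including curated clinical data, validation against expert review, and regulatory approval, which is exactly where the questions below begin.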

But what happens when the diagnosis made by a machine is wrong? Who is liable legally? Do AI-based medical devices also need malpractice insurance?

The same types of questions arise with autonomous vehicles. Today it is always assumed a human is behind the wheel in control of the vehicle. Our laws are predicated on this assumption.

How must laws change to account for vehicles that do not have a human driver? Who is liable? How does our road system and infrastructure need to change?

The recent Uber accident case in Arizona determined that Uber was not liable for the death of a pedestrian killed by one of its autonomous vehicles. However, the safety driver, who was watching TV rather than the road, may be charged with manslaughter. How does this change when the car’s occupants are no longer safety drivers but simply passengers in fully autonomous vehicles? How will laws need to evolve at that point for cars and other types of AI-based “active and unaided” technology?

There are also risks to be considered in adopting pervasive AI. Legal and political safeguards need to be considered, either in the form of global guidelines or laws. Machines do not have a moral compass. Given that the definition of morality may differ depending on where you live, it will be extremely difficult to train morality into AI models.

Today most AI models lack the ability to determine right from wrong, ill intent from good intent, morally acceptable outcomes from morally reprehensible ones. AI does not understand whether the person asking the questions, providing it data, or giving it direction has malicious intent.

We may find ourselves on a moral precipice with AI. The safeguards or laws I mention above need to be considered before AI becomes more ubiquitous than it already is. AI will enable humankind to move forward in ways previously unimagined. It will also provide a powerful conduit through which humankind’s greatest shortcomings may be amplified.

The implications of technology that can profile entire segments of a population with little effort are disconcerting in a world where genocide has been a tragic reality, where civil obedience is coerced using social media, and where trust is undermined by those who use misinformation to sow political and societal discontent.

There is no doubt that AI will make this a better world. It gives us hope on so many fronts where technological impasses have impeded progress. Science may advance more rapidly, medical research may progress beyond current roadblocks, and daunting societal challenges around transportation and energy conservation may be solved. It is another tool in our technological arsenal, and the odds are overwhelmingly in favor of it improving the global human condition.

But realizing its advantages while mitigating its risks will require commitment and hard work from many conscientious minds from different quarters of our society. We in the technology community have an obligation to engage key stakeholders across the legal, political, social, and scientific communities to ensure that as a society we define the moral guardrails for AI before it becomes capable of defining them for us, or in spite of us.

Like all technology before it, AI’s social impacts must be anticipated and balanced against the values we hold dear. Like parents raising a child, we need to insist that the technology reflect our values now, while it is still in its infancy.
