New technologies don’t always come with a completed rule book or instruction manual, let alone industry-agreed standards. KPMG Director, Data Science, David Evans asks: how do we take advantage of the benefits AI offers while managing and mitigating risks?
What kind of feelings spring to mind when you hear the words “Artificial Intelligence”? Are they positive or negative? Considering how AI is represented in the media, chances are that those feelings are somewhat negative. There are plenty of examples to support them: racist chatbots, self-driving car crashes, predatory pricing and ‘deepfakes’, to name just a few. Even the positive-sounding stories about “efficiency” and “productivity” mask a rather uncomfortable question for many of the people reading them, one that boils down to: “Will the robots take my job?”
It’s easy to focus on the negative stories, but it’s important to balance this by recognising that, as humans, we’re all susceptible to hard-wired cognitive biases that skew our sense of risk.
That’s not to say that people haven’t been disadvantaged, injured or worse as a result of decisions made by AI, or that we shouldn’t act to remedy and prevent these things. Quite the opposite! But at what point does our focus on the negative stories restrict our view of the overwhelmingly positive outcomes for society that don’t make headlines so easily? Things like new medical breakthroughs or environmental conservation.
Look back through history at technologies we take for granted today as safe and reliable, and consider how they were perceived during their introduction and initial application. Whether it’s electricity and airliners or anaesthetics and organ transplants, a common theme throughout is excitement about new opportunities coupled with apprehension about new risks and bad outcomes.
What makes AI different from innovations of the past is that its risks are amplified when technology can be scaled and applied to entire populations at once. While faulty wiring may only affect the safety of a building and the people in its immediate area, a faulty AI algorithm in a chatbot can easily affect millions of people across the globe at once.
This raises an important question: How do we take advantage of the benefits AI offers while managing and mitigating risks? In other words, how do we let early adopters play their important role, pushing boundaries in society, while making sure they push them safely?
While launching KPMG’s AI in Control risk governance framework in Singapore recently, I explored these very questions. After polling an audience of over 200 early adopters, only one in five said their organisation had policies and procedures in place to manage the risks of using AI. More surprisingly, only one in four said they were confident that their organisation’s AI models were working as intended. It’s no wonder that any talk of AI often has a negative undertone.
New technologies don’t always come with a completed rule book or instruction manual, let alone industry-agreed standards. It’s often ‘after the fact’ that these things are put in place, with the amount of regulation depending on what’s at stake. But in the meantime, anyone developing or applying AI should make use of frameworks and guidelines to reduce the chance of things going wrong – and reduce the fallout if they do. Frameworks for governing AI risks vary from one organisation to another, but they’re generally structured around these main areas:
- Ethics – Just because something can be done technically and legally, doesn’t always mean that it should be done.
- Transparency – Giving people peace of mind about what’s going on under the AI bonnet.
- Reliability – Making sure an AI algorithm is performing the best it can, given its application and the data it feeds off.
- Accountability – Creators of products and services are accountable for what they release into the world. The legal concepts that apply to canned soup and widgets also apply to algorithms in mobile phones and medical scanners.
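To make the reliability point concrete: one simple control is to monitor a model’s live accuracy against the baseline it was validated at, and flag it for human review when performance drifts too far. The sketch below is purely illustrative; the function names, data and threshold are assumptions for the example, not part of any particular governance framework.

```python
def accuracy(predictions, labels):
    """Fraction of predictions that match the true labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def reliability_check(predictions, labels, baseline_accuracy, tolerance=0.05):
    """Compare live accuracy against the validated baseline.

    Returns (live_accuracy, needs_review), where needs_review is True
    when accuracy has dropped more than `tolerance` below the baseline.
    """
    live = accuracy(predictions, labels)
    return live, (baseline_accuracy - live) > tolerance

# Illustrative example: a model validated at 92% accuracy, now scoring
# 60% on a recent batch of labelled data - flagged for review.
live, needs_review = reliability_check(
    predictions=[1, 0, 1, 1, 0],
    labels=[1, 0, 1, 0, 1],
    baseline_accuracy=0.92,
)
```

In practice a check like this would run on a schedule against freshly labelled data, with the threshold set according to what’s at stake in the application.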
If history is anything to go by, we haven’t heard the last of bots behaving badly. But history also tells us a little bit of discipline goes a long way to fixing things, bringing the peace of mind we need to focus our energy on the positive things AI has to offer.