With every seismic shift in technology come new and challenging risks. However, the risks of AI will only grow if we leave their resolution to chance, writes Ken Reid, national managing partner, innovation, digital & data, at KPMG.

We use AI as an umbrella term to describe the use of Big Data, increasingly sophisticated bots, machine learning and deep learning algorithms, and the forms of artificial intelligence that are yet to come. We look forward to all that our nation has to gain from the advances in computer processing power, access to data and ease of storage that are driving developments in AI.

These advances will revolutionise industries and organisations across all sectors, including healthcare, mining, transport and finance. Indeed, AI will pave the way for faster decision-making, more accurate analysis, improved customer experience and increased productivity, as well as a raft of benefits we have yet to conceive.

With every seismic shift in technology come new and challenging risks. However, the risks of AI will only grow if we leave their resolution to chance. We therefore need to shift the dialogue from reactive to proactive. And although it’s easy to get caught up in the well-documented risks of AI, our focus needs to balance these risks against the opportunities AI presents. We should seek ways to encourage the ethical development of AI, maximise its benefits to society, and drive greater investment and more rapid adoption in Australia.

So how do we propose to achieve this?

We believe Australia needs a set of principles to drive engagement with, and the governance of, AI. These principles should be designed to guide all organisations in the investment, development and deployment of AI; help shape regulation and compliance programs; and, if adhered to, improve public trust in AI-related technologies and outcomes. They should be designed to unite the nation around a consistent and meaningful national roadmap for AI.

There are many examples of published principles that could apply here, and they all focus on similar themes: accountability, transparency, fairness, inclusiveness, privacy and security, and social benefit. What already exists provides a great starting point and could fairly quickly be modified to suit Australian needs and adopted as a guide for all stakeholders.

As has always been the case when new technologies are introduced to society, the law inevitably lags; this is already true of AI internationally. As businesses continue to innovate through new and exciting applications of AI, it’s clear that accountability for the ethical implications of its impact must be shared. Although many of the companies leading the development of AI globally already have principles to guide their own investment, many have also called out a gap in the oversight of standards upstream. For example, Microsoft President Brad Smith has called on governments to regulate the use of facial-recognition technology to ensure it does not become a tool for discrimination or surveillance.

Not all areas of AI will need regulation. But we argue that all areas of AI development will benefit from an agreed set of principles and standards to guide them.

So the question then becomes: who should have a say in the design of these principles and standards, monitor their adoption and police their impact as the development and use of AI increase?

It is KPMG’s view that Australia needs a new independent organisation to take the lead here, driving the conversation where the law lags technological progress and developing additional guidance as the need arises. Such an organisation should comprise a broad range of stakeholders from industry, government, the non-profit sector, unions and universities.

What do you think, SR readers? Should AI be regulated? And if so, by whom?
