An ethical approach moves AI from threat to opportunity

Blog
11 June 2019

Tags: Artificial Intelligence (AI)

One of the biggest challenges facing business at present is how to exploit the opportunities afforded by Artificial Intelligence. There is a big temptation to spend a lot of time worrying about the risks: of cyber-attack, or of the reputational damage that can follow when algorithms make decisions that end up damaging the business.

We can see from the debate about Huawei how the public and politicians can see AI as a threat. Do we really want the Chinese Communist Party to know all about us and who we communicate with? Or more particularly, when our motor insurance company promises to cut our premiums if we install a black box to monitor our driving, do we really want it to know where we have been going? Can we trust it not to pass the information on to someone who will use it against us?

Yet to conclude from this that we need to restrict the usage of AI is also to forgo many of the benefits it brings. The right answer is for companies to build ethical standards into the way they apply the new technology. This will allow them to exploit the opportunities in a climate of trust, and that can confer an important business advantage.

We can see two damaging extremes in China and the US. In China the government is using its powers of surveillance to penalize those it regards as bad citizens, who may suddenly find themselves unable to buy rail tickets. In the US, the determination of Facebook and others to milk the information they acquire about their customers for their own commercial advantage has resulted in huge damage to reputation, regulatory initiatives and fines.

There is a middle road. As we learn to apply AI to enhance our business, there is a real opportunity to strike a better balance. This involves setting an ethical framework, including developing and embedding a values system that determines how the new technology should be used. This creates a natural advantage: people will buy from companies they trust, and they will also want to work for them.

The process starts with boards and senior management. Some boards are worried about dealing with technology because directors feel they do not understand it. Yet once they look into the matter, they will find that this is simply an extension of the work they are already doing on risk appetite and risk oversight. 

The questions they confront are sometimes difficult, often because they involve low-probability, high-impact situations which arise if, for any reason, the technology fails. But that is no reason for passing up the opportunity altogether. In AI as in other areas, an ethical approach creates the confidence which frees companies to get stuck in.

Author

Peter Montagnon

Formerly Associate Director