Blog: Real Intelligence about AI

Directors' Blog: Professor Chris Cowton

Some 25 years ago, I was giving a lecture at one of the UK’s ‘ancient’ universities. Picture a large, tiered lecture theatre, but more ‘60s brutalist architecture than ivy-clad medieval. Suddenly, as I was in full flow, a door at the back of the lecture theatre flew open and a young man called down to me:
‘Is this artificial intelligence?’ he asked.  

‘No,’ I replied, ‘it’s the real thing’.  

He promptly exited and therefore missed what I had to say about accounting. His loss – or maybe not.

Massive advances in computing power and its penetration into so many areas mean that AI is well and truly out of the lecture theatre and in our everyday lives. We’re past a tipping point. Some of the sci-fi has become reality.

It’s a familiar human experience that rapid advances in technology outrun our ability to manage them well. New ethical risks take time to recognise, think through and manage, to ensure that these advances contribute consistently to flourishing lives and a better society. We can find ourselves playing moral catch-up.

What do right and wrong mean, in an increasingly digital world? 

When thinking about AI, we have to forget the sci-fi depiction of robots that look, think and feel like humans and, with the honourable exception of the Tin Man in The Wizard of Oz, harbour evil intentions to take over the world. AI technologies are not ethical or unethical per se. The real issue is the uses we make of AI, which should never undermine human ethical values.
The uninitiated might assume that machine intelligence is neutral whereas humans are biased, but there is a saying in computer science: "garbage in, garbage out". If AI learns from human prejudices rather than human values, it will mimic the worst of humanity.

One example is the AI recruitment tool – ironically introduced, among other reasons, to reduce bias and discrimination in the recruitment process – which Amazon scrapped when it emerged that it was biased against women. Or the Google image recognition programme that labelled the faces of several black people as gorillas, and image searches for ‘CEO’ that returned overwhelmingly white men. How do we protect ourselves from unintended consequences, where the logical outcome may not necessarily be the ethical one?

But while there are ethical risks, AI and other digital technologies also open up positive possibilities for us to express our ethical values. For example, what new opportunities might exist to empower colleagues with disabilities? Or in transforming healthcare and early detection of diseases like cancer? How might AI be harnessed to help us deal with the climate crisis?

The main challenge now is not about the next technological innovation; we’ve already shifted to a digital society. Technological progress will continue and in ten years’ time we will probably have even more amazing devices in our pockets, or even inside us, that now we can’t even imagine. 

The real challenge is to focus on the governance of the digital, which can help us address the grey areas that arise from the use of AI. Once we agree on the direction we want to move in, the speed of technological development (which can seem scarily fast at times) will be less of a concern: it will simply help us get where we want to go faster.

At the IBE we’ve been putting together some resources to help organisations get to grips with these issues. Our recent Board Briefing, Corporate Ethics in a Digital Age, is an excellent starting point. Our earlier Briefing, Business Ethics and Artificial Intelligence, offers a framework of fundamental values and principles for the use of AI in business.
Together they provide some real intelligence, of the ethical variety, about intelligence of the artificial kind.

Or why not join us at our event on Thursday 10 October 2019: Business Ethics in the Digital Age: Utopia or Dystopia? Professor Christoph Lütge, who is the Peter Löscher Endowed Chair of Business Ethics at the Technical University of Munich and Director of the new TUM Institute for Ethics in Artificial Intelligence, will give a short talk entitled AI: Ethical Challenge or Opportunity?, after which he will join a panel, including Emmanuel Bloch from Thales and the IBE's Head of Research Guendalina Dondé, to answer questions from the audience.

Posted: 08/10/2019

