Real Intelligence about AI

Blog
08 October 2019

Tags: Artificial Intelligence (AI)

It’s a familiar human experience that rapid advances in technology outrun our ability to manage them well. New ethical risks take time to recognise, think through and manage, to ensure that these advances contribute consistently to flourishing lives and a better society. We can find ourselves playing moral catch-up. What do right and wrong mean, in an increasingly digital world? asks Professor Chris Cowton.

Some 25 years ago, I was giving a lecture at one of the UK’s ‘ancient’ universities. Picture a large, tiered lecture theatre, but more ‘60s brutalist architecture than ivy-clad medieval. Suddenly, as I was in full flow, a door at the back of the lecture theatre flew open and a young man called down to me:

‘Is this artificial intelligence?’ he asked.  

‘No,’ I replied, ‘it’s the real thing’.  

He promptly exited and therefore missed what I had to say about accounting. His loss – or maybe not.

Massive advances in computing power and its penetration into so many areas mean that AI is well and truly out of the lecture theatre and in our everyday lives. We’re past a tipping point. Some of the sci-fi has become reality.


When thinking about AI, we have to forget the sci-fi depiction of robots that look, think and feel like humans and, with the honourable exception of the Tin Man in The Wizard of Oz, harbour evil intentions to take over the world. AI technologies are not ethical or unethical per se. The real issue is the uses that we make of AI, which should never undermine human ethical values.

The uninitiated might assume that machine intelligence is neutral whereas humans are biased, but there is a saying in computer science: “garbage in, garbage out”. If AI learns from human prejudices, rather than human values, it will mimic the worst of humanity.

One example is the AI recruitment tool – ironically introduced, among other reasons, to reduce bias and discrimination in the recruitment process – which Amazon scrapped when it emerged that it was biased against women. Another is the Google image recognition programme that labelled the faces of several black people as gorillas, or image searches for ‘CEO’ that returned pictures of white men. How do we protect ourselves from unintended consequences, where the logical outcome may not necessarily be the ethical one?

But while there are ethical risks, AI and other digital technologies also open up positive possibilities for us to express our ethical values. For example, what new opportunities might exist to empower colleagues with disabilities? Or to transform healthcare through the early detection of diseases like cancer? How might AI be harnessed to help us deal with the climate crisis?

The main challenge now is not the next technological innovation; we have already shifted to a digital society. Technological progress will continue, and in ten years’ time we will probably have even more amazing devices in our pockets, or even inside us, that we can’t yet imagine.

The real challenge is to focus on the governance of the digital, which can help us address the grey areas that arise from the use of AI. Once we agree on the direction we want to move in, the speed of technological developments (which can seem scarily fast at times) will be less of a concern: it will simply get us where we want to go faster.

At the IBE we’ve been putting together some resources to help organisations get to grips with these issues. Our Board Briefing, Corporate Ethics in a Digital Age, is an excellent starting point. Our earlier Report, Business Ethics and Artificial Intelligence, offers a framework of fundamental values and principles for the use of AI in business.

Together they provide some real intelligence, of the ethical variety, about intelligence of the artificial kind.

Author

Professor Chris Cowton

Associate

Chris served the IBE as part-time Associate Director from 2019 to 2023, having previously been a Trustee. He continues to contribute to our work from time to time as an Associate.

Chris originally joined the IBE staff following a long career of leadership, research and teaching in the higher education sector. He is Emeritus Professor at the University of Huddersfield, where he served as Professor of Accounting (1996-2016), Professor of Financial Ethics (2016-2019) and Dean of the Business School (2008-2016). He moved to Huddersfield after ten years lecturing at the University of Oxford. He has also been a Visiting Professor at Leeds University’s Inter-Disciplinary Ethics Applied Centre, the University of Bergamo (Italy) and the University of the Basque Country, Bilbao (Spain).

He is internationally recognised for his contributions to business ethics, especially his pioneering work on financial ethics. In 2013 he was awarded the University of Huddersfield’s first DLitt (Doctor of Letters, a higher doctorate) in recognition of his contribution to the advancement of knowledge in business and financial ethics.

He is the author of more than 70 journal papers, has edited three books and has written many book chapters and other publications. He served 10-year terms as Editor of the journal Business Ethics: A European Review (2004-2013) and as a member of the Ethics Standards Committee of the Institute of Chartered Accountants in England and Wales (2009-2018). He continues to write extensively and to speak to both academic and practitioner audiences. 
