It’s a familiar human experience that rapid advances in technology outrun our ability to manage them well. New ethical risks take time to recognise, think through and manage, to ensure that these advances contribute consistently to flourishing lives and a better society. We can find ourselves playing moral catch-up. What do right and wrong mean, in an increasingly digital world? asks Professor Chris Cowton.
Some 25 years ago, I was giving a lecture at one of the UK’s ‘ancient’ universities. Picture a large, tiered lecture theatre, but more ‘60s brutalist architecture than ivy-clad medieval. Suddenly, as I was in full flow, a door at the back of the lecture theatre flew open and a young man called down to me:
‘Is this artificial intelligence?’ he asked.
‘No,’ I replied, ‘it’s the real thing’.
He promptly exited and therefore missed what I had to say about accounting. His loss – or maybe not.
Massive advances in computing power and its penetration into so many areas mean that AI is well and truly out of the lecture theatre and in our everyday lives. We’re past a tipping point. Some of the sci-fi has become reality.
When thinking about AI, we have to forget the sci-fi depiction of robots that look, think and feel like a human but – with the exception of the Tin Man in The Wizard of Oz – harbour evil intentions to take over the world. AI technologies are not ethical or unethical per se. The real issue is the uses we make of AI, which should never undermine human ethical values.
The uninitiated might assume that machine intelligence is neutral whereas humans are biased, but there is a saying in computer science: ‘garbage in, garbage out’. If AI learns from human prejudices rather than human values, it will mimic the worst of humanity.
One example is the AI recruitment tool – ironically introduced, among other reasons, to reduce bias and discrimination in the recruitment process – which Amazon scrapped when it emerged that it was biased against women. Or the Google image recognition programme that labelled the faces of several black people as gorillas, and image searches for ‘CEO’ that returned pictures of white men. How do we protect ourselves from unintended consequences, where the logical outcome may not necessarily be the ethical one?
But while there are ethical risks, AI and other digital technologies also open up positive possibilities for us to express our ethical values. For example, what new opportunities might exist to empower colleagues with disabilities? Or in transforming healthcare and early detection of diseases like cancer? How might AI be harnessed to help us deal with the climate crisis?
The main challenge now is not the next technological innovation; we’ve already shifted to a digital society. Technological progress will continue, and in ten years’ time we will probably have even more amazing devices in our pockets – or even inside us – that we can’t yet imagine.
The real challenge is to focus on the governance of the digital, which can help us address the grey areas that arise from the use of AI. Once we agree on the direction we want to move in, the speed of technological development (which can seem scarily fast at times) will be less of a concern: it will simply get us where we want to go faster.
At the IBE we’ve been putting together some resources to help organisations get to grips with these issues. Our Board Briefing, Corporate Ethics in a Digital Age, is an excellent starting point. Our earlier report, Business Ethics and Artificial Intelligence, offers a framework of fundamental values and principles for the use of AI in business.
Together they provide some real intelligence, of the ethical variety, about intelligence of the artificial kind.
Professor Chris Cowton
Associate Director (Research), IBE, email@example.com
Chris brings his vast experience of researching business ethics issues to the work of the IBE Research Hub, with a remit to strengthen its widely respected applied research and a specific role of further developing our engagement with higher education.
Chris Cowton is Emeritus Professor at the University of Huddersfield and Visiting Professor at Leeds University’s Inter-Disciplinary Ethics Applied Centre. He was previously Professor of Accounting (1996-2016), Professor of Financial Ethics (2016-2019) and Dean of the Business School (2008-2016) at Huddersfield, having joined after ten years lecturing at the University of Oxford.
He is internationally recognised for his contributions to business ethics, especially his pioneering work on financial ethics. In 2013 he was awarded the University of Huddersfield’s first DLitt (Doctor of Letters, a higher doctorate) in recognition of his contribution to the advancement of knowledge in business and financial ethics.
He is the author of more than 60 journal papers, has edited three books and has written many book chapters. He was Editor of the journal Business Ethics: A European Review for a decade (2004-2013).
He is also a visiting professor at University of the Basque Country, Bilbao (Spain), and has been a visiting professor at the University of Bergamo (Italy) and a member of the Ethics Standards Committee of the Institute of Chartered Accountants in England and Wales (2009-2018).
Discussion of ethics in public life, including business, often makes unhelpful sweeping generalisations that take us nowhere. ‘Politicians are only in it for themselves’, ‘businesses manipulate consumers’, etc. Such comments describe one end of a spectrum, perhaps, but they do a disservice to those who are trying to do so much better. The IBE plays a key role in supporting high standards of business behaviour, and I am delighted to use my research expertise to contribute to that mission.