Peter Montagnon of the Institute of Business Ethics says the skills needed to address the challenges of AI depend less on a technical mastery of its inner workings than on directors knowing how to ask the right questions, and being able to trust the answers they receive

With the new series of Black Mirror being trailed on Netflix, our paranoia about artificial intelligence is bound to increase dramatically. The drama series thrives on exploiting our fear of sinister technologies we don’t understand, but which seem chillingly close to the present day.

The corporate journey into the world of AI is only just beginning. While we can see the potential for enormous change on the horizon, we are uncertain about where AI will take us. Business leaders are concerned about how they will cope with something whose dimensions keep changing just as they think they have begun to understand them.

While AI technologies bring positive opportunities, the current debate about Huawei shows how the public and politicians can come to see AI as a threat. Do we really want the Chinese Communist Party to know all about us and who we communicate with? A more personal example is our conflicted relationship with Google, where we love the convenience, but have an uneasy sense that it knows more about us than we know about ourselves. Can we trust the company not to pass the information on to someone who will use it against us?


Growing reliance on data and the integration of AI into business activity have thrown up some large challenges for governance. Boards not only have to manage a new set of risks and opportunities – they have to do so in a world that is changing rapidly, and in ways that make it harder for them to exercise control. But this challenge cannot be filed under “too difficult” in the hope that it will go away. The technology is already here, and directors can no longer abstain from the challenge of managing the consequences of AI.

At their core, these challenges fall naturally within the board’s responsibility for risk appetite and risk oversight. They should not be ignored or put in a silo just because the technology is complicated. Most of the key decisions are actually about how the technology is applied, and the dilemmas are primarily philosophical and ethical.

AI has a strong ethical dimension because personal data has acquired an economic value. Even if we all stopped logging into Facebook today, the company would still hold considerable assets in the data it has about us. As one commentator put it: “You may forget Facebook; it could happen sooner than you expect. But it’s not likely to forget you.” While, in principle, the value ought to reside with the subject of the data, it is usually others who are best placed to exploit it. Access to the data creates an information asymmetry that confers power on those who have it and vulnerability on those who do not.

If we stopped logging into Facebook today, it would still have a lot of data about us. (Credit: PK Studio/Shutterstock)

Ethics matters because an ethical approach inspires trust, and trust is needed to build public confidence in organisations that control data with such power over people’s lives. This is not a reason for seeking to curtail the adoption of new technology. It is instead an opportunity to adopt it in a way that delivers clear benefits within a trusted framework.

Boards have to decide where to draw the line between the opportunities of using technology to further business objectives and the risk that inadequate controls end up infringing individual rights or otherwise endangering the company’s reputation. Perhaps counterintuitively, the skills needed to address these challenges require less a technical mastery of the inner workings of AI than a philosophical and ethical approach to resolving the issues it throws up. In that sense, the decisions that boards must take fit naturally into their general view of risk appetite, risk management and oversight.


Ethical lapses cannot simply be blamed on AI. Someone has to be accountable, and in the corporate world, accountability rests with the board. It is imperative that directors know how to ask the right questions and can trust the answers they receive. That’s why the Institute of Business Ethics has just published a board briefing, Corporate Ethics in a Digital Age, to guide directors and help give them the confidence to discuss and challenge the ethical implications of AI technology in their businesses.

Those who consider and respond to the ethical challenges of AI, and are true to their values, are more likely to be trusted. And those who are trusted are more likely to survive and prosper in the long run. Companies that can do this will set themselves apart, finding it easier to comply with data protection requirements and to mitigate reputational and cyber risks. That is where competitive advantage lies; indeed, it is the real opportunity.

Peter Montagnon is associate director of the Institute of Business Ethics.

Main picture credit: Channel 4
 