Exploring the morality of AI
When do the benefits outweigh the risks?
With great power comes great responsibility. AI can be a force for good. It can power innovation. But leaders will need to understand the potential risks first.
Artificial intelligence (AI) poses a real-life tug-of-war between good and evil. And leaders play a significant role in deciding which side will prevail.
“The fundamental challenge with AI is its dual-use – there are good uses and there are poor uses of the technology,” explained Scientia Professor Toby Walsh, UNSW Laureate Fellow and author, during AGSM’s 2022 Professional Forum: Ethical AI in an Accelerating World.
“We need to make the right choices, to ensure we use technology in a way that adds economic benefit, and also makes our society more inclusive.”
During his keynote address, Machines Behaving Badly: The Morality of AI, Professor Walsh cautioned the leaders attending to consider both sides of AI.
While AI allows businesses and governments to dramatically impact the scale, speed, and cost of what they can do, it also poses significant risks.
“All technological change is a trade-off. You can cause untold damage as much as untold good,” said Professor Walsh. “And that’s where you come in. With any technology you should ask the question, what are we getting, and what are we giving up? And is it worth it?”
Trading life skills for convenience
While AI is still in its infancy, we already depend on the technology’s applications in our daily lives.
For example, Google Maps, along with other navigation apps built on AI technology, has been helping us find our way from A to B for years. The algorithm can suggest the fastest routes and ways to avoid traffic. But has this modern convenience taken away valuable life skills?
“We are likely to be the last generation that knows how to read a map,” said Professor Walsh.
“As we continue to outsource these skills to machines, it will change us physically as well as our understanding of the world,” he said, noting that the knowledge of how to navigate by the sun’s movements has shaped our culture and society for centuries.
See also: Ethical AI: How can leaders use technology for good?
Managing an ambiguous process
Professor Walsh also urged leaders to consider the unintended future consequences of technology.
Short-term impacts can be relatively easy to predict – such as autonomous trucks reducing the need for drivers. But longer-term effects can be difficult to identify.
When the first Boeing 747 took off in 1969, for example, it heralded the jumbo jet age – which quickly made the world smaller and more connected. However, it would have been almost impossible to foresee the consequences in 2020, when air travel accelerated the pandemic’s spread globally.
Professor Walsh implored leaders to stop and consider all possible consequences before rolling out new technology.
“Technological change is not an additive, but a vast, exponential and unpredictable process. And while unintended consequences are almost impossible to predict, leaders need to ask – is this going to make the world a better place? Will it make our business better for our customers? What could possibly go wrong?”
Asking the right questions: why diversity matters
There are many examples of organisations that hastily rolled out AI solutions without adequate consideration of potential trade-offs – leading to unintended financial and human consequences.
For example, Australia’s Robodebt program developed a data-matching algorithm to identify individuals who had been overpaid social security benefits. However, the algorithm raised debts for many legitimate welfare recipients, ultimately costing the government $1.8 billion in a class action settlement.
After a year of remote schooling during the pandemic response, the British government tried using an algorithm to determine end-of-year results. But the automated model backfired, unfairly downgrading disadvantaged students compared with those from affluent areas and private schools.
One reason these setbacks continue to happen is the homogeneity of the groups building the technologies, said Professor Walsh.
“There aren't enough women, people of colour or minorities helping develop these technologies. So, the right questions aren't being asked. We need a diverse set of people to help us think each problem through.”
Stela Solar, Director of the National Artificial Intelligence Centre at CSIRO’s Data61, also highlighted the importance of diversity in balancing risks with benefits during her Masterclass panel discussion.
“Diversity is an incredibly important element – it serves as a compass. It is those many eyes together that can come up with different ways and different pathways forward and help navigate the risks, blockers and concerns.”
See also: Steer it – don’t fear it: navigating AI with confidence
When are the benefits worth it?
One way to weigh up the benefits and risks – and continue making progress – is by categorising them and offering context, said Lee Hickin, CTO at Microsoft Australia, during the Masterclass.
“There’s brand damage, and then there's significant damage,” he said. “You could demean someone, or segregate parts of society, or prevent access. It’s easy to get worried because AI does offer these terrible potential scenarios. But you have to also consider that some risks are manageable or can be tolerated in a business decision.”
Clearly, a lot of work needs to be done before new technologies can be adopted – from research with diverse user groups to developing robust ethical frameworks. But Stela Solar urged leaders to embrace the challenges and keep pushing forward.
“I encourage every leader to steer AI rather than fear it – and help shape the future of AI so it can reach its full potential,” she said.
AI will impact everyone’s lives and potentially change the fabric of our society. And today’s leaders need to carefully consider all ethical implications, because they are shaping the future for new generations.
“We're at a critical stage of our era, where we're reassessing some of our values,” Professor Walsh said.
“And these technologies pose questions that test some of those values. That’s why we need to find ways to build technology that reflects our human values, rather than undermines who we are as a society.”
To find out more, visit AGSM @ UNSW Business School.
To learn more about the topic of Ethical AI, listen to AGSM’s Business of Leadership podcast: The Business of AI.