Leaders play an important role in minimising the risks and maximising the benefits of technology – and adopting responsible business practices is critical to getting this right. These were the resounding takeaways from the AGSM @ UNSW Business School 2022 Professional Forum: Ethical Leadership in an Accelerating World.

Professor Nick Wailes, Senior Deputy Dean, UNSW Business School and Director AGSM, opened the event by acknowledging the complexities current and future leaders are facing when it comes to harnessing the power of AI and technology.

“We stand at a very interesting time when technology creates opportunities to know your customer, understand your data, improve decision making and service provision, and reduce costs,” Professor Wailes said. 

“At the same time, it raises very significant challenges. Do we know enough about this technology to ensure that we protect our customers’ privacy? Have we thought through the implications of this for the culture of our organisation and the type of services we create? Do our teams have the requisite knowledge and skills to be able to harness this technology in an effective way?”

Leaders have to think about the implications of AI use for privacy, their organisation and its culture. They also need to ensure they have the right skills to harness technology’s power – creating frameworks and best practices so new and existing technology can be used to benefit the organisation without causing harm.

See also: AGSM’s Business of Leadership podcast: The Business of AI.


Steering the impact of technology

AI presents an immense opportunity for businesses and for society as a whole and it’s starting to have a significant impact on many aspects of our lives.

According to PwC, AI could add $15.7 trillion to the global economy by 2030, which is roughly equivalent to the combined GDPs of China and India.

“It’s like we’re discovering a new continent,” said Scientia Professor Toby Walsh, UNSW Laureate Fellow and author, in his keynote address, Machines Behaving Badly, the Morality of AI.

“The rewards of AI are only as limited as our imagination. And its benefits can already be felt across a number of industries – from new drug discoveries and assisting people with disabilities to creating efficiencies and improving customer service.”

Ed Santow, Former Australian Human Rights Commissioner and Industry Professor for Responsible Technology at University of Technology Sydney, agrees.

“AI is being integrated into products and services in truly extraordinary ways that are not only feeding economic development, but are literally making our world more inclusive,” he said, speaking during the ‘Embedding responsible AI in your organisation’ masterclass discussion.

Like other technologies before it, AI can change the speed and cost of what we do and scale things seamlessly.

“But therein lies the risk. This ability to scale can cause as much untold damage as untold good,” said Professor Walsh, and this is where leaders can play an incredibly important role.

“This is where you come in. It’s about making the right choices to make sure that we use technology in a way that both adds economic benefit and makes our society more inclusive.”

See also: Exploring the morality of AI: When do the benefits outweigh the risks?


Bias and diversity: Challenges of responsible AI

Bias is one of the biggest management challenges that leaders face when it comes to AI. 

“Humans create AI and humans are inherently biased,” said Lorenn Ruster, Responsible Tech Collaborator at the Centre for Public Impact, speaking as part of ‘The risks and rewards of AI: making informed decisions in the era of artificial intelligence’ panel discussion. 

“Once bias is embedded in AI software, it is amplified and scalable. Something that could seem quite small becomes quite large and can easily be replicated and influence a lot of people.”

Leaders also face the challenge of bringing more diverse talent into the design and building process of technology, said fellow panel member, Professor Mary-Anne Williams, UNSW Michael J Crouch Chair in Innovation and Deputy Director, UNSW AI Institute.

“Not having those diverse perspectives is a very real and deep problem,” she said. “It’s not only about finding the right people, it’s also about changing mindsets. We have to engage different people early and understand what gets them excited about technology to overcome this challenge.”

AI can also worsen bias in how a business treats its customers. Call centres, for example, can inadvertently prioritise higher-value customers over lower-value ones, raising questions around transparency.

Frameworks suggest leaders should be very clear about how they use AI, but inevitably not every company will be. This can create competitive advantages or disadvantages as leaders interact with other companies that are not always doing the right thing, according to Associate Professor Sam Kirshner, School of Information Systems and Technology Management, UNSW Business School.

“Leaders need to consider the black box that is other companies they interact with. Even if you are trying to be as ethical as possible, you're still going to be potentially liable, because your end result won't necessarily be ethical,” said Associate Professor Kirshner.

Creating a common ethical ground for all

Building responsible technology practices is essential to ensure that organisations use AI in an ethical way that reinforces and respects human rights. 

This starts with creating a common ground when it comes to implementing ethical principles, said Stela Solar, Director, National Artificial Intelligence Centre, CSIRO's Data61, during the 'Embedding responsible AI in your organisation’ masterclass discussion.

“We all agree with the principles and ethical frameworks that are in place, in theory. But we’re not sure how to actually implement them,” she explained. 

“While there is no blueprint for implementation, co-creation and co-design are critical parts of the process, along with diversity.”

Lee Hickin, CTO at Microsoft Australia, Responsible AI Lead, and member of the NSW Government AI Advisory and Review Committee, sees transparency and accountability as the most important ethical considerations for leaders when it comes to designing, procuring, and implementing technology.

“For me, transparency is about why a decision has been made. One of the biggest challenges in AI is you implement a tool to make a decision or to assimilate a larger set of data to make a quicker decision than a human could make. But if you can't explain why a decision was made, you have a problem,” he said during the masterclass panel discussion.

Lee also reminded leaders that AI is about helping humans do what they do better and smarter, but it’s not about taking decisions out of our hands – accountability stays with us. 

And while all speakers at the AGSM 2022 Professional Forum agreed that regulation will be critical to ensure a level playing field, Ed Santow pointed out that we need clarity on existing rules before we develop new ones.

“AI generally doesn't allow us to do new things, we're simply doing things we've always done in a new way. And that means the vast majority of existing laws are still applicable,” he said.

See also: Steer it – don’t fear it: navigating AI with confidence


As AI gains momentum, leaders need to identify the right frameworks for their business and implement truly responsible practices that minimise risk and allow technology and AI to have a positive impact on their business – and society. 

And if they’re successful, they will unearth a significant point of differentiation in the market. 

“People will start to really care about responsible AI, as news of the technology spreads,” explained Associate Professor Kirshner. 

“If you meaningfully invest in sound, responsible AI principles and business practices, you will actually have a real source of competitive advantage.”


To find out more about AGSM @ UNSW Business School, click here.

To learn more about the topic of Ethical AI, listen to AGSM’s Business of Leadership podcast: The Business of AI.