The imperative of AI strategy: Navigating trust and value in the corporate landscape
By Paul Clifford, Advisor at Six O’Clock Advisory.
“A third of executives are at risk of losing their jobs if they fail to learn about artificial intelligence and lead their companies to adopt it,” began a recent piece in the Australian Financial Review.
The reporting reflected comments made at the publication’s inaugural AI Summit. Whether one agrees or disagrees with the statement, it is indisputable that AI has permeated the corporate landscape.
AI is cited as the primary driver of Nvidia’s extraordinary gains, while closer to home, stock pickers are backing a variety of ASX companies with links to the AI ‘boom’ in one way or another.
The knock-on effect is that corporates are intent on spruiking their use of AI wherever possible.
Everyone is jumping on the bandwagon. Yet trying to find a published AI strategy among the long list of companies willingly talking up their wares in the space yields limited results.
Amid the gold rush, there are important reputational and communication considerations for businesses. Chief among them should be the development and publication of an AI strategy.
A well-defined AI strategy should serve as the foundation for an organisation’s AI initiatives, as well as the basis for how it communicates those initiatives.
Building trust through transparency
The lack of comprehensive regulation in AI makes it challenging for stakeholders to trust the technology. This regulatory gap necessitates that companies take proactive steps to build trust with their stakeholders.
Transparency about how AI is used, and about the measures in place to mitigate risks, should be a key component of building stakeholder trust.
We have already seen the first cases of ‘AI washing’, a newly coined term (and one we are likely to become far more familiar with), in the US, where investment advisers were found to have overstated their use of AI.
Companies need to avoid exaggerating their AI capabilities and, instead, focus on authentic, transparent dialogue.
Communicating value
In addition to trusting AI, stakeholders need to understand the benefits of a company’s enhanced adoption of it. Organisations should focus on their core stakeholder groups, ensuring each group understands the value AI offers them.
That value may be that AI enables employees to focus on higher-value work, ultimately improving productivity and leading to greater shareholder returns. Or, perhaps, it will reduce contact centre wait times, leading to a better customer experience.
It boils down to this: company leaders need to enable their stakeholders, internal and external, to understand where the value in the organisation’s AI strategy resides for them.
Mitigating risk
There is a broader conversation around AI and its ethical implications, one that is bigger than any one company. Meaningful concerns range from its impact on energy security to its potential to replace millions of jobs globally.
While a company might not have all the answers, it should consider these broader issues and emphasise ethical AI practices in its strategy to mitigate risks and maintain trust with stakeholders.
Central to this should be ensuring that human oversight and intervention play a crucial role in a company’s use of AI. This oversight helps to address issues of bias, fairness, and accountability, ensuring that AI technologies align with the organisation’s values and societal norms.
The ability to effectively communicate these considerations will be increasingly important as organisations seek to achieve the strategic and operational benefits of enhanced AI adoption.
The opportunity
As AI continues to evolve, companies that prioritise clear and effective communication about their AI strategies will be better positioned to navigate the complexities of technology’s newest frontier and reap its full benefits.
By doing so, they can enhance their operational capabilities while improving brand reputation among their stakeholders.