AI presents both high risk and high reward to any business. PwC reckons that AI will generate $15.7 trillion in global economic growth. But generative AI in particular presents real risks for companies, exposing them to potential legal liability if their gen AI tools violate copyright, as well as to potential data privacy breaches. It wasn’t long ago that Samsung banned its employees from using ChatGPT over concerns about privacy lapses. Which raises the question: how much should businesses be reporting about how they use AI to power their futures?
I’m not talking about the ethics of AI (although I certainly have thoughts about that) – I’m referring to the fiduciary perspective, meaning what businesses are bound to share with investors. In that regard, the signs are certainly pointing in the direction of businesses treating AI like they do sustainability – a hot-button topic that exposes them to risk if managed poorly, and reward if managed well. Consider some of these factors:
Around the world, governments are making it clear that they want more transparency from businesses in how they use AI. To cite one example: the EU Artificial Intelligence Act (AIA) is a proposed regulation that aims to establish a harmonized legal framework for AI in the European Union (EU). The proposed Act includes several transparency requirements that would apply to businesses that develop, deploy, or use AI – including disclosure of an AI system’s capabilities and limitations, such as the types of data it is trained on, the decisions it can make, and the potential risks of using the system. According to the Center for Data Innovation, the EU AIA could cost the European economy €31 billion.
Do investors care about how businesses use AI? Yes, they do, according to PwC’s latest Global Investor Survey. Sixty-one percent of investors say companies’ faster adoption of AI is very or extremely important in their investing decisions, and 77 percent say that reporting on the use and deployment of new and emerging technologies such as AI is important or very important to their investment analysis. In particular, they’re concerned about how much AI exposes a company to risks in data security/privacy, lack of governance/process controls, the spread of disinformation, and compounding bias and discrimination.
A reasonable investor relations officer might well ask what exactly they should be disclosing about such a fast-moving technology. I’ll add a second question: what do they seek to gain by disclosing? This second question, I think, is just as vital and should help answer the first. The fact of the matter is this: a company can demonstrate vision by sharing with all its stakeholders how it develops and uses AI. In fact, some are doing so already:
Typically, businesses are at the stage of disclosing how they use AI fairly and responsibly. Responsible AI is important, but as investors cry out for more data and governments pressure businesses to be more transparent, I think AI disclosure will evolve well beyond issues of governance and fairness.
What would an ideal AI disclosure page on a website cover?
I think, first of all, the risk management aspects:
But it’s also important that companies report how AI relates to profitable growth – the really interesting stuff that can and should articulate a strong investment case:
In my experience, AI is no different than any other emerging trend in one important way: investor relations teams that get out in front of this issue and take steps now to address it will win. They’ll court new investors and build trust with their current ones by demonstrating insight into a far-reaching trend, and by connecting AI to what matters most: profitable growth.

Simon Gittings is Head of IR and Corporate Communications at IDX.
Get in touch with Simon Gittings today to learn more about our Investor Relations offerings.