The outcry for responsible AI is getting louder. While ethical conversations around AI have progressed in parallel with artificial intelligence as a whole, the rapid progression and potential implications of generative AI interfaces like ChatGPT have the world reeling anew, from businesses to consumers to policymakers. How many jobs will be lost to generative AI? How many content creators are having their work stolen, or replaced, by it? How can generative AI become more accurate and less biased? How can we prevent nefarious actors from using it in manipulative or harmful ways?
These are among the many questions swirling around generative AI in particular and, consequently, AI in general. Policies governing the use of AI are already emerging from the public and private sectors. Even the creator of ChatGPT, the global symbol of generative AI, wants generative AI regulated. Indeed, as Boston Consulting Group noted recently, responsible AI belongs on the CEO's agenda.
I predict responsible AI will be factored into how companies report on ESG (environmental, social and governance) in years to come. AI cuts across all aspects of ESG, and generative AI creates ESG issues all its own — although the risk to investors may not be as clear-cut. I argue that businesses integrating AI into their offering portfolios, marketing and communication workflows, or internal processes should start thinking about responsible use now to avoid significant risk, both today and tomorrow.
Environmental, social, and governance (ESG) criteria are the three main factors used to evaluate a company's sustainability and ethical impact and, in turn, the risk of investing in it. ESG criteria measure a company's performance in areas such as its impact on the environment, its treatment of employees and other stakeholders, and the quality of its corporate governance practices.
Investors use ESG criteria to make informed decisions about where to invest their money, based on the values and priorities that align with their investment goals. Companies that perform well on ESG criteria may be seen as more sustainable and responsible and may be more attractive to socially conscious investors. In fact, 76% of investors consider ESG risks and opportunities an important factor in investment decision-making, according to a recent PwC survey, and 49% are willing to divest from companies that aren't taking sufficient action on ESG issues.
In recent years, ESG has become a major CMO-level issue, too. That’s because investors aren’t the only ones concerned about ESG — so are employees, clients, job seekers, and virtually all stakeholders.
Governmental bodies are increasing pressure on businesses to report their ESG risks. Almost a decade ago, the European Union (EU) began requiring companies to report non-financial information to make them more accountable for ESG issues. That was the first time disclosure requirements included the concept of double materiality.
The EU is now tightening its regulations with the Corporate Sustainability Reporting Directive (CSRD), which will be phased in for the 27 EU countries starting in 2024. The CSRD will create new, detailed sustainability reporting requirements and significantly expand the number of EU and non-EU companies subject to the EU sustainability reporting framework. The required disclosures will go beyond environmental and climate change reporting to include social and governance measures, such as respect for employee and human rights, anti-corruption and bribery, corporate governance, and DEI.
The United States is not far behind. In 2022, the Securities and Exchange Commission (SEC) proposed a rule that would require domestic and foreign registrants to include certain climate-related information in their registration statements and periodic reports, such as Form 10-K. Although many businesses already disclose sustainability data voluntarily, the SEC rule would make sustainability reporting mandatory, and the proposal is expected to be finalized in 2023.
Once you understand the purview of ESG reporting, it's easy to see how the responsible use of AI could eventually be measured alongside already approved or proposed ESG reporting frameworks, especially since Europe and the US have already drafted AI policies (even if those policies are struggling to keep pace with the speed of AI's development).
We know AI's impact on the environment is mixed. On one hand, AI can analyze vast amounts of data to inform environmental research at scale, such as identifying climate patterns and predicting natural disasters. AI-powered devices can optimize energy consumption by matching power draw to usage patterns, and they can predict when machinery and equipment are likely to fail, helping to mitigate the effects of unplanned interruptions or sudden catastrophes.
On the other hand, the most glaring negative environmental impact of AI is energy consumption. Training and operating large AI and machine learning models requires vast amounts of energy, driving up air pollution, water usage, and carbon emissions that can accelerate climate change. (What benefits can AI truly bring if it destroys the planet in the process?)
Generative AI is a microcosm of these broader issues. ChatGPT runs on one of the largest large language models (LLMs) in existence, with its latest iteration, GPT-4, rumored to include over 1 trillion parameters. One source estimated ChatGPT's potential daily carbon footprint, when it ran on the previous model, GPT-3.5, at 23.04 kgCO2e, a number that grows as more users adopt the tool.
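For context on where a figure like that comes from: daily-footprint estimates for a hosted model are typically built from a handful of multiplications, roughly the number of accelerators serving the model, their average power draw, the hours they run, a data-center overhead factor, and the carbon intensity of the local grid. The Python sketch below shows only that structure; every value in it is an illustrative assumption, not a figure from the cited estimate, and it does not attempt to reproduce the 23.04 kgCO2e number.

```python
# Back-of-envelope structure of a daily carbon-footprint estimate for a hosted LLM.
# All values are illustrative assumptions, not measured or reported figures.

gpu_count = 16                    # assumed number of accelerators serving the model
avg_power_per_gpu_kw = 0.3        # assumed average draw per accelerator, in kW
pue = 1.2                         # assumed data-center power usage effectiveness (overhead)
hours_per_day = 24                # assumed continuous operation
grid_intensity_kg_per_kwh = 0.4   # assumed grid carbon intensity, kgCO2e per kWh

daily_energy_kwh = gpu_count * avg_power_per_gpu_kw * pue * hours_per_day
daily_footprint_kg = daily_energy_kwh * grid_intensity_kg_per_kwh

print(f"Assumed energy use: {daily_energy_kwh:.1f} kWh/day")
print(f"Estimated footprint: {daily_footprint_kg:.2f} kgCO2e/day")
```

The point for ESG reporting is that each of those inputs (hardware utilization, data-center efficiency, regional grid mix) is a measurable, disclosable quantity.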
One thing is certain: AI is not a neutral force. It can potentially help with analysis that leads to more sustainable decision-making, but it also creates a threat to sustainability in ways we are still uncovering. How will investors assess AI companies (and those who use AI technologies) from an ESG lens?
The social impact of AI starts with jobs — as in jobs lost. According to a McKinsey study, AI could displace roughly 15% of workers (400 million people) between 2016 and 2030. In a scenario of wide AI adoption, the firm found that the share of displaced jobs could rise to as much as 30%.
The CEO of OpenAI, Sam Altman, admits the company is “a little bit scared” of ChatGPT and says it will “eliminate” many jobs. But he also notes that AI could create “much better ones.” Even so, how much could large-scale re-skilling or unemployment add to the workforce and social welfare costs of a company, or of an entire society? And in the long run, will job displacement give way to job enrichment?
If AI leads to decreased payrolls and increased profits, investors might find a company more attractive. (How many times have we seen a company's stock price rise when it announces job cuts?) That potential persists even, or especially, at the top: one Chinese company replaced its CEO with an AI that takes no salary, works around the clock, and has presided over share performance that beat the Hang Seng Index.
Beyond the direct impact on jobs, generative AI in particular poses indirect social threats, from copyright infringement to privacy violations. Even the economic benefits of increased efficiency may be counterbalanced by a heightened risk of litigation, social welfare costs, or reputational damage.
Investors look at ESG from the standpoint of investment risk, but that risk may not be easy to quantify from a social perspective. What will AI's long-term impact on society be, and how will we measure it?
As noted above, generative AI has created corporate governance challenges from the start because the technology can violate copyright when it sources content, among other issues. Any business that adopts AI needs to consider the potential for fines or litigation stemming from improper use of AI or improper disclosures about that use. Some companies are not taking chances: several Wall Street firms, including Bank of America and Goldman Sachs Group, have already banned ChatGPT.
Online news outlet CNET offers an excellent example of why these companies have cause for concern. Early this year, the company quietly tested using generative AI to produce some financial advice columns. When independent reporting called out factual inaccuracies in those stories (and social media piled on with complaints of misleading authorship disclosures), CNET was forced to respond, pause the test, and reset expectations.
And then there is the case of Alphabet, whose stock suffered a $100 billion hit when its own generative AI tool, Bard, made factual mistakes during a demo, before Bard was even released to the public. Microsoft's earlier rollout of AI-enabled chat in Bing search likely prompted Alphabet's rush to release Bard, and Microsoft's own experiment quickly produced reports of toxic, inappropriate, and even abusive responses from the tool, ultimately requiring much tighter controls.
Microsoft also recently laid off the team responsible for AI ethics, even as it races to expand its use of AI-enabled search. Given the potential risks of poor AI governance outlined above, the move seems highly short-sighted, and I'm not alone in thinking so. A University of Washington expert on ethical issues in natural-language processing denounced Microsoft's decision to dissolve its ethics and society team, and the company continues to face widespread condemnation for it.
It seems evident that the repercussions of bad AI governance (both actual and potential) come with a cost, and investors are not likely to ignore them.
In a discussion group of professionals and enthusiasts following AI Tech, Culture, and Policy (of which I'm a member), Stephen Gilbert, a fellow Princeton alum and director of Iowa State's Graduate Program in Human-Computer Interaction, imagines a world where human-agent teaming (HAT) is a normal and productive way of both evaluating AI tools and improving our outputs with those tools. He writes:
“One day soon, just like people review restaurants and products, they'll be reviewing agents. 'Oh, I love working with the new Selma3; she understands me.' And when Consumer Reports or CNET does a run down of the top 10 productivity agents, I'm interested in what the criteria will be and how they'll measure them: Trustworthiness: 4.5/5. Understands what you meant to say: 3.8/5. Has got your back in a pinch: 2/5. Can write well: 5/5. Factual correctness: 2/5…”
This aligns with my prediction that AI will change how content creators work, and that, eventually, the world will find a way to acknowledge, accept, and adapt to that change. But in a global economy where businesses decide when and where to integrate artificial intelligence into their products and processes, I argue that transparency on the ESG impact of these decisions is equally likely, and essential.
We don't yet know exactly what businesses will be required to report regarding AI's impact on ESG, but greater transparency is the likely starting point. Consider the CSRD and the proposed SEC climate rule as proving grounds for how businesses will have to address the ESG issues surrounding AI discussed here, a question that will only grow more pressing once the CSRD phase-in begins in 2024.
As governments and global bodies advance regulations and requirements around ESG, will you be prepared for the inclusion of AI metrics? As so many businesses race toward adopting the efficiencies of AI, the time to consider the holistic impact of your choices around this technology is now — not only for your profits but also for your people and the planet.
At IDX, we create investor solutions strategies that can make your content, audience and platforms sing. We do all this with a pulse on the tech changing the demands on content, like ChatGPT, omnichannel marketing, headless CMS, and more. Contact us to learn more about the future of content and how we can help you prepare.