Transparency has emerged as a hot-button topic in the rapidly evolving field of generative AI, in which tools such as ChatGPT can quickly create remarkably human-sounding content ranging from blog posts to term papers. In fact, generative AI can be used to write complete drafts efficiently. But when and how should a content creator acknowledge that they’ve used AI? This question has already rocked institutions from publishing to higher education. In financial services, transparency has become such a thorny legal issue that Wall Street firms are scrutinizing the adoption of generative AI amid concerns about legal exposure.
The potential impact of ChatGPT on learning is reverberating through the halls of higher education. Recently, Jonathan Choi, a professor at the University of Minnesota Law School, gave ChatGPT the same tests faced by students, consisting of 95 multiple-choice questions and 12 essay questions. Although ChatGPT answered some questions spectacularly poorly, it passed the exams after writing essays on topics such as constitutional law and taxation. This is just one instance that has left educators asking how they can know whether students are completing coursework without the help of AI.
Meanwhile, the publishing world was shaken by the revelation that a respected news outlet, CNET, had been using AI to write stories without clearly telling its readers. As reported by Futurism, the articles were published under the byline “CNET Money Staff” and covered topics like “What is Zelle and How Does It Work?” CNET was not transparent about its use of AI for these explainer-type articles, which outraged readers and staff alike. Since Futurism’s report was published, CNET has added a visible disclaimer to the CNET Money Staff section: “This article was assisted by an AI engine and reviewed, fact-checked and edited by our editorial staff.”
There is no end to these examples of ChatGPT fooling people, but here’s one more to drive home the point: a recruitment team unknowingly recommended ChatGPT for a job interview after the AI was used to complete an application task.
These are only a few of the many emerging examples in which the undisclosed use of AI can create enormous trust issues.
But generative AI isn’t going away. In fact, ChatGPT is just one of many AI assistants available to writers. There are no easy answers, but organizations clearly need to get out in front of this issue and start setting ground rules. Until they do, generative AI (and AI more broadly) will suffer from a “black box” problem, which stems from the difficulty of understanding how AI systems and machine learning models process data and generate predictions or decisions. These models often rely on intricate algorithms that are not easily understandable to humans, leading to a lack of accountability and trust.
For instance, Ethan Mollick, an associate professor at Wharton, recently commented on LinkedIn that he has added an AI policy to his syllabus in order to teach students how to use AI responsibly.
He also published a tutorial to help students, and anyone else involved in content creation, get the most out of these tools. Clearly, he understands that AI assistants will change the academic landscape, and that this shift requires new frameworks for prompting and accountability: how to get valuable outputs from the tools while still acknowledging their limitations.
Businesses need to do the same thing. There are no fixed rules here, but to protect its reputation, any organization needs to make clear to its writers when AI may be used in their work and when that use must be disclosed.
There are inevitably grey areas to deal with here. Should content creators disclose every instance of AI in their workflow? What if AI was used for ideation? Or to write the headlines and subheads of a 3,000-word article whose body was written entirely by a human? Here is a general guideline to follow: if AI is doing your work for you, you absolutely need to disclose that. If it’s merely being used as a content assistant? Probably not.
An even better way to sidestep this debate altogether is to create great content that is authentically your own. AI can produce human-sounding text, manipulated visuals, and cloned voices, but that doesn’t mean the output is good or even original. So, while AI can pull together multiple sources, humans can stand out from their machine counterparts by adding their own voice, personality, and character to their creative work.
At Investis Digital, we create content and the strategies to make your content sing, from the platforms it's on to the people who engage with it to the business results it drives. We do all this with a pulse on the tech that’s changing the demands on content: omnichannel marketing, headless CMS (content management systems), and more. Check out our content and creative services or contact us to learn more about how we can help you keep up with your content demands.