The Challenges of AI in Journalism: Bias, Accuracy, and Trust

By Associate Professor Wendy Sloane (and ChatGPT)

Date: 14 March 2025

Generative AI presents new challenges for journalists, as it essentially creates new material from scratch, Tami Hoffman, a specialist in AI and journalism, told Journalism BA and Fashion Marketing and Journalism BA students on Wednesday.
 
“In effect, it asks you: ‘What do you want to hear?’, which is fundamentally at odds with journalism,” said Hoffman, who is currently Director of Commercial Innovation at ITN and its lead on AI. She is about to take on a new job as Director of Public Policy at the Guardian.
 
AI has the potential to shape the news cycle, often unfairly, if used without human oversight, she said. “AI carries inherent biases that can influence how news is reported if journalists fail to scrutinise it properly, with a risk that historical stereotypes will be perpetuated in the digital future,” she explained.
 
To mitigate these biases, journalists must refine their prompts carefully. “The better the prompts, the better the output. Use AI with your eyes wide open—don’t be seduced by its surface-level charm.”
 
Another major challenge is AI ‘hallucinations’—instances where the system generates incorrect or entirely fabricated information. This poses a significant risk to journalists, as it can lead to misinformation and erode public trust. “If journalists rely too much on AI, their work could contain AI hallucinations, and the trust could be lost.”
 
To navigate these risks, ITN has chosen to incorporate AI in production while keeping it separate from the core journalistic process. For example, it uses AI for tasks like colour grading in post-production rather than content creation. 
 
While AI can streamline workflows and aid in the dissemination of news, Hoffman cautioned against over-reliance, particularly for summarising stories. “It’s problematic using AI for news summaries, as you don’t know where the information comes from.”
 
ITN has established clear AI guidelines emphasising editorial independence, rigorous content verification, continual scrutiny of AI-generated stereotypes and biases, and awareness of legal implications. Crucially, AI usage must be transparently disclosed, and editorial decisions must “always remain under human control”, Hoffman said.
 
Ultimately, maintaining human oversight is critical. “If a human is not invested in the creation of something… they are likely to just rubber-stamp it,” she said. “If we get something wrong in the newsroom because of AI, it’s not AI’s responsibility but that of the newsroom, which can be damaged reputationally.”

Pic: AI and journalism specialist, Tami Hoffman.

----
This article was written with input from ChatGPT. While it very slightly improved the structure of the original article, it also, notably, changed the quotes so that they read less like spoken speech. In keeping with journalistic integrity, the quotes were restored to the original wording, and this disclaimer was added.