Good, bad, questionable of AI and journalism

By Celia Wexler

When it comes to ChatGPT, journalists should approach with low expectations and an abundance of caution. That was one of the conclusions of the webinar, “ChatGPT: What Could Go Wrong? Or Right?,” hosted April 28 by the SPJ DC Pro Chapter in observance of SPJ Ethics Week.

Panelists were Alex Mahadevan, director of the MediaWise program at the Poynter Institute and an expert on digital media literacy; Samantha Sunne, author of the best-selling textbook “Data + Journalism: A Story-Driven Approach to Learning Data Reporting”; and James Goodwin, senior policy analyst at the Center for Progressive Reform, who helps average citizens understand the impact of federal rules on public health and safety and how they can influence the regulatory process.

During the hourlong webinar, the panelists discussed this new technology from both an ethical and practical perspective.

Sunne said that she found the technology has limited usefulness. For example, she might use it to write a breezy introduction to her newsletter, but not for the newsletter’s content. Likewise, she may use it to write an email. But she would avoid using it for anything “where you depend on fact.” Computers, she added, remain “light years away from the human brain.”

“It generates frustratingly boring copy,” Mahadevan added, comparing it to the work of middle and high schoolers. In the newsroom, he said, for any copy that would be “public-facing … it would make your job harder,” given all the fact-checking and editing a ChatGPT-produced story would require.

However, in some limited situations, the tech might prove useful, he added. One example: filing Freedom of Information Act requests. Sunne agreed. Such requests are formulaic, contain a “limited number of facts that you could get wrong,” and generally are seen by a single government records official. And since such a request usually requires fact-checking before it’s sent out anyway, using the tool doesn’t add to the workload.

Panelists agreed that a nonprofit news site might also use the technology to apply for a grant from a foundation. Goodwin added that it could be a boon to small environmental nonprofits seeking federal grants, helping them complete “boilerplate” government forms and apply for the money.

Goodwin also speculated that ChatGPT might help citizens communicate with their government. The rule-making process requires public comment, but many average people feel uncomfortable writing comments or are not comfortable speaking English. If ChatGPT could translate the comments of, say, Hispanic activists concerned about air pollution, that could benefit citizen engagement, he said.

Of course, you can’t discuss AI without bringing up disinformation, and this clearly was an area where panelists agreed that ChatGPT could cause real harm.

Using ChatGPT, Mahadevan spent just 10 minutes creating the Suncoast Sentinel, an entire fake news site, complete with a staff. He called his product “plausible BS” to any uninformed reader. “The barrier to entry to disinformation just went down a lot,” he said.

Fighting misinformation now “is a huge field,” Sunne agreed. There are efforts afoot to have computers assist, but they lack “critical thinking” skills, so they need “computational indicators” to guide them, she explained. For example, a computer could assess how many other websites are linking to a questionable website to gauge its credibility. “That’s something I know people are working on.”

Poynter is already doing “prebunking” training, Mahadevan added. “Prebunking operates under the idea of inoculation theory. … We expose people to fake information” and help them detect what is untrue, using strategies like googling the outlet and authors, or doing a reverse image search.

“Fact-checkers are using the same tools as before, but the scale is bigger,” he added. “We need more resources.”

The panel was organized at the behest of the national SPJ Professional Standards and Ethics Committee and moderated by Celia Wexler (seen in the upper left pane of the screen grab from the April 28 webinar).


Celia Wexler, a member of the board of directors of the Washington, D.C., Professional Chapter of the Society of Professional Journalists, is also the author of Catholic Women Confront Their Church (Rowman & Littlefield, 2016) and Out of the News (McFarland, 2012), winner of the Society of Professional Journalists’ Excellence in Journalism award for journalism history.


Additional commentary suggested by panelists

General commentary and newsroom experience

A fake news frenzy: why ChatGPT could be disastrous for truth in journalism (Columbia’s Emily Bell)

Samantha Sunne

Your one stop shop for AI R & D (ChatGPT might not be the best AI tool for newsrooms)

Do News Bots Dream of Electric Sheep? (from 2016)

AI and Productivity Tools (slide presentation)

Wrangling the robots: Leveraging smart data-driven software for newsmaking

Alex Mahadevan

This newspaper doesn’t exist: How ChatGPT can launch fake news sites in minutes

James Goodwin

Could AI be used to sway federal rule-making? (James Goodwin says ChatGPT could make it easier for the public to comment on regulations … )

‘Sorry in Advance!’ Rapid Rush to Deploy Generative A.I. Risks a Wide Array of Automated Harms (James Goodwin says … but also potential for misinformation and manipulation of consumers)

Watch the April 28 session on YouTube at the SPJ DC channel.