A few years ago, a reporter at The Washington Post pulled off a feat few writers can claim. In 2016, within just their first year of writing for The Post, the reporter, pen name Heliograf, wrote and published 850 articles.
Heliograf covered both sports and politics, and nearly all 850 of those articles came from just those two beats. They wrote 300 short reports and alerts about the Rio Olympics and 500 articles about congressional and gubernatorial races on Election Day.
Heliograf isn’t a real person. It is The Post’s in-house automated storytelling technology, one of a growing number of artificial intelligence programs being adopted by news outlets.
From code that identifies trends in labyrinths of data to technology that generates stories about the Olympics, AI is becoming a key part of how news outlets operate. Even though AI can make a reporter’s job easier in certain ways, journalists are debating whether the technology can be ethically implemented in a field that is so heavily human.
How is AI being used in newsrooms?
Dr. Andrea L. Guzman, associate professor in the Department of Communication at Northern Illinois University, researches human-machine communication, which is how “we communicate with technology and how it affects communication with other people.” Guzman said there are two ways AI is being used by journalists: workflow automation, which is more common, and content creation.
Regarding workflow automation, newsrooms use AI to sift through heaps of information. For example, Reuters uses Lynx Insight to surface trends, facts and anomalies within large volumes of data. Other outlets, like the Associated Press, employ social media-powered alerts that analyze media feeds and detect breaking news.
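Under the hood, that kind of anomaly-spotting can be as simple as a statistical filter run over a data series. The sketch below is a hypothetical illustration of the idea, not how Lynx Insight or any real newsroom tool works, and the donation figures are invented.

```python
# A toy illustration of anomaly-spotting in newsroom data: flag any
# month whose donation total is far out of line with the rest, so a
# reporter knows where to dig. All figures here are made up.
import statistics

monthly_donations = [12_400, 11_900, 13_100, 12_700, 48_500, 12_200]
median = statistics.median(monthly_donations)

for month, amount in enumerate(monthly_donations, start=1):
    # Flag totals more than double the typical (median) month.
    if amount > 2 * median:
        print(f"Month {month}: ${amount:,} is an outlier (median: ${median:,.0f})")
```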
Another workflow use is transcribing interview audio with platforms like Otter.ai. Some journalists also use AI to scrape data from websites, digitize archives and fact-check.
Guzman said using AI for these “mundane tasks” lets the reporter focus on more important responsibilities while still producing content. AI can also get more done in less time than a human journalist could.
“(AI) is assisting human journalists in doing their jobs because what AI can process and the information it can provide is at a larger scale than what would have taken months, if even possible, for a human journalist to do,” Guzman said.
Regarding content creation, Guzman said some outlets are using AI to write basic, “data-driven” reports on topics such as stocks, earnings and sports games. For these reports, human programmers generate templates, and the AI fills in the data. These are the types of stories that programs like Heliograf write.
“Think of it similar to Mad Libs,” Guzman said. “There is a templating process, and there are certain parts left empty that are filled in by current data.”
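In code, that Mad Libs process can be as simple as a string template with named slots. The sketch below is a hypothetical illustration, not The Post’s actual system; the template and game data are invented.

```python
# A hypothetical sketch of template-based story generation: a human
# writes the template, and the program fills the blanks with
# structured data. The template and game data are made up.
TEMPLATE = (
    "{winner} beat {loser} {winner_score}-{loser_score} on {day}, "
    "extending their win streak to {streak} games."
)

game_data = {
    "winner": "Illinois",
    "loser": "Purdue",
    "winner_score": 78,
    "loser_score": 70,
    "day": "Saturday",
    "streak": 4,
}

# Prints: Illinois beat Purdue 78-70 on Saturday, extending their
# win streak to 4 games.
print(TEMPLATE.format(**game_data))
```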
Not all news outlets are able to use AI in the same capacity. Guzman said that because of limited budgets, local and smaller newsrooms aren’t using AI as much as large ones.
“The degree to which news outlets are experimenting with AI and using AI really hinges on the news outlet’s budget as well as output,” Guzman said. “If you’re someone like the AP that needs to pump out a lot of content, but you’re also really large and have a lot of resources, you have more space to experiment with these types of technologies.”
That doesn’t stop smaller newsrooms from trying, though. Brant Houston, professor in Media, oversees the University’s online newsroom, CU-CitizenAccess.org. Houston said his newsroom is working toward automating the download and organization of board, city council and government agency agendas. He also wants AI to identify key phrases within the downloaded documents.
“That then frees up the journalist to focus on the really important stuff, because there’s not enough of us to do what we need to do,” Houston said.
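A first pass at that kind of automation can fit in a few lines. The sketch below is hypothetical, not CU-CitizenAccess.org’s actual pipeline; the URL and key phrases are invented, and real agendas are often PDFs that need extra parsing.

```python
# A hypothetical sketch of the agenda workflow Houston describes:
# download a posted agenda and flag lines containing key phrases,
# so a reporter knows which meetings deserve a closer look.
import urllib.request

AGENDA_URL = "https://example.gov/city-council/agenda.txt"  # invented URL
KEY_PHRASES = ["budget", "zoning", "police", "tax levy"]

def flag_agenda_items(url: str) -> list[str]:
    with urllib.request.urlopen(url) as resp:
        text = resp.read().decode("utf-8", errors="replace")
    # Keep any line that mentions a phrase on the watch list.
    return [
        line.strip()
        for line in text.splitlines()
        if any(phrase in line.lower() for phrase in KEY_PHRASES)
    ]

for item in flag_agenda_items(AGENDA_URL):
    print(item)
```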
Where problems arise: accuracy
In January, tech newsroom CNET began issuing lengthy corrections to some of its articles. A sister publication, Bankrate, disclosed similar concerns about the accuracy of articles it had published since November.
In both publications, the articles that required corrections were AI-generated.
With the introduction of free-to-use generative text programs like ChatGPT, journalists are now able to write entire stories using AI. Rather than relying on a system of templates, these stories are assembled by an AI that draws on a massive database of training text.
But like all automation, these programs are flawed. ChatGPT’s training data, for example, doesn’t include any information after 2021. This means there’s a chance the AI could use outdated or incorrect information.
According to Guzman, AI accuracy depends on three factors: how the programming was put together, the accuracy of the information the AI is pulling from and the bias of its creators. When the programming is flawed and the database is old, inaccurate or biased, AI models can produce stories with false or misrepresented information.
“It shows that (newsrooms) do not understand the way these technologies work, or that they’re completely disregarding how they work, and potentially putting out misinformation without guard rails in place,” Guzman said. “We already have enough problems with misinformation.”
So, when an AI-generated story contains errors, who is held accountable?
Houston said that to prevent accuracy and bias issues, humans must be involved in everything AI creates in the newsroom. Ultimately, humans are still responsible for the programs they create.
“You should never let the machine do everything,” Houston said. “We have an ethical responsibility to make sure that it’s not misleading people, and it’s not introducing bias into whatever it’s doing.”
Guzman said that when designing AI programs, developers need to consider what journalism is really about.
“Journalists work with the designers to think about what are the values of journalism, what is valued in journalism, what is seen as ethical and think about how those get put into the design of the technology,” Guzman said.
Other ethical issues
Besides accuracy, Guzman said there are other ethical questions regarding the use of AI in journalism. For example, should newsrooms be transparent about their use of AI to produce content? Right now, “there’s no standard within journalism” that gives an answer.
But Guzman said she thinks there needs to be transparency.
“I can tell you my research has shown that yes, the audience wants to know where their information is coming from,” Guzman said. “If it’s a story created by AI, it has to be bylined as such.”
Guzman also asked if there’s a limit to the types of stories that can be written by AI. Should journalists automate stories about murder? What about obituaries?
“Some people I talked to said obits are fine, but others … they wouldn’t want AI to write about death because computers themselves can’t die,” Guzman said.
Some journalists, like Victoria Olaleye, sophomore in Media, think that using AI takes away from what it means to be a journalist, particularly when it handles workflow tasks.
“I think that it’s part of the job to go through all of the public records and everything that we have to investigate,” Olaleye said. “I know that we’re in a place where we’re looking to make our lives easier with technology, but I also think it kind of takes away from our purpose.”
Elizabeth Kelly, junior in Media, said she is worried about AI taking away the “human connection” of the craft. To her, AI will never be able to replicate that.
“We’re able to interview people and convey their emotions through telling their stories,” Kelly said. “That’s something, at least to my knowledge, that AI can’t do.”
In the future
Technology will never stop evolving, and there are growing concerns about AI taking over human jobs. Guzman is all too familiar with the question of whether technology like The Post’s Heliograf will replace human journalists.
Olaleye said she is “absolutely” concerned about this question. She pointed out that journalism is already in an unsteady position.
“Especially in media, jobs are already becoming less and less available, even with print dying out,” Olaleye said. “I feel like adding another aspect like AI would be damaging to something that’s already being broken.”
Guzman does not have a solid answer to that question. To her, it’s up to humanity to decide whether technology will replace human journalists.
“This is not a technology question — this is a human question,” Guzman said. “That is a decision for humans to make.”
While the idea of a technological reporter that can write 850 articles in a year may be scary to some, Guzman encouraged journalists to be open to AI’s advantages.
“What I encourage people to do is not to fear it, but to better understand its benefits and challenges and really learn about the ethical implications,” Guzman said. “Think about ethics to continue to make sure we’re making better decisions.”