How an AI-written Star Wars story backfired on Gizmodo

James Whitbrook, Gizmodo’s deputy editor specializing in science fiction, began his workday on Wednesday with a surprising note from his editor in chief: within the next 12 hours, Gizmodo planned to publish articles written by artificial intelligence. Barely 10 minutes later, a post on the chronology of the Star Wars movies and TV series appeared under the byline “Gizmodo Bot.”

Upon reading the article, which he had neither requested nor reviewed before publication, Whitbrook compiled 18 concerns, corrections, and comments. He emailed his findings to Dan Ackerman, Gizmodo’s editor in chief, flagging errors such as the incorrect chronological placement of the TV series “Star Wars: The Clone Wars,” the omission of the series “Star Wars: Andor” and the 2008 film “Star Wars: The Clone Wars,” poorly formatted movie titles, and a repetitive narrative style. He also noted that nothing in the article explicitly disclosed that its author was an AI, apart from the “Gizmodo Bot” byline.

The AI-written piece spurred immediate protest from Gizmodo employees, who argued on the company’s internal Slack that the flawed article was tarnishing their credibility and showed a complete lack of respect for journalism. They demanded its immediate removal, according to messages obtained by The Washington Post. The story had been produced using Google Bard and ChatGPT, according to a G/O Media staff member. (G/O Media owns several digital media platforms, including Gizmodo, Deadspin, The Root, Jezebel, and The Onion.)

In an interview, Whitbrook voiced his frustration over the bot’s shortcomings. “These AI [chatbots] can’t even arrange Star Wars movies sequentially, which undermines my confidence in their ability to report any kind of factual information,” he said.

It was an awkward moment for Gizmodo, a site dedicated to covering technology. On June 29, Merrill Brown, G/O Media’s editorial director, had argued that the organization’s technology-focused editorial mission gave it a duty to engage with AI.

“These AI-based functions aren’t replacing any existing writer or editor jobs,” Brown said when announcing a trial run for testing “our editorial and technological thoughts about AI utilization.” He acknowledged that mistakes would occur, but promised swift corrections.

The experiment at Gizmodo exposed a larger debate about AI’s role in journalism. Several journalists said they do not trust chatbots to deliver well-researched, fact-checked articles, and worried that the rush to integrate AI into newsrooms could damage outlets’ reputations and sink employee morale.

AI experts echoed these concerns, pointing out that many AI systems have technical shortcomings that make them unreliable for journalism unless closely monitored by humans. Left unchecked, they warned, AI-generated content could fuel misinformation, stoke political unrest, and harm media organizations.

Nick Diakopoulos, an associate professor of communication studies and computer science at Northwestern University, warned of potential reputational damage to news outlets that publish inaccurate AI-generated content.

Mark Neschis, a spokesperson for G/O Media, defended the company’s AI experiments as necessary for progress. While acknowledging the trials’ shortcomings, he said the company has no intention of reducing its editorial staff because of AI activities.

Brown sought to placate displeased employees in a Slack message by emphasizing the company’s commitment to collecting and acting upon feedback. However, his words were met with derisive emojis from employees.

The use of AI chatbots in newsrooms is a subject of ongoing debate. Several media outlets that have experimented with AI in journalism have faced significant setbacks. Despite this, G/O Media remains undaunted.

Lea Goldman, G/O Media’s deputy editorial director, informed employees earlier this week about the company’s limited testing of AI-generated stories on its platforms. Despite being aware of employees’ objections and skepticism, Goldman went ahead with the testing.

The Star Wars story on Gizmodo’s io9 vertical, along with several other AI-generated pieces on the company’s sites, drew criticism for its errors. The articles were later corrected without any note acknowledging the mistakes.

In response to the AI-generated articles, Gizmodo’s union issued a statement on Twitter criticizing the move as unethical and unacceptable. Readers are alerted to AI-generated stories only by the “Bot” byline.

Diakopoulos from Northwestern University cautioned against the poor quality of chatbot-produced articles. He suggested that if news outlets decide to use bots, there needs to be robust editorial oversight to ensure accuracy.

Researchers warn that the growing trend of AI-generated content might not only undermine media organizations’ credibility but also fuel misinformation and political turmoil.

NewsGuard, a media watchdog, has identified more than 300 AI-generated news sites that operate without human supervision, sometimes publishing false content in multiple languages. Ad placements on such sites create an economic incentive to use AI bots to generate as many articles as possible.

Lauren Leffer, a Gizmodo reporter and union member, described G/O Media’s move as a transparent attempt to boost ad revenue. She said the approach has demoralized the newsroom, with employees’ concerns about the company’s AI strategy falling on deaf ears. According to Chartbeat, a news-traffic tracking tool, the AI-written articles have not attracted as many readers as human-authored stories.

“If you aim to trick people into clicking on [content], then [AI] might be worth your time,” Leffer said. “But if you’re running a media company, you might want to trust your editorial staff to understand what readers want.”