Google Tests AI Tool for Writing News Stories and Pitches It to Major Publications

Google is testing a new AI tool designed to automatically generate news stories, as reported by The New York Times. The technology, code-named “Genesis”, has been pitched to prominent publications, including The New York Times, The Washington Post, and News Corp, the owner of The Wall Street Journal. Google has reportedly been building the tool over the last few years.

The tool is primarily intended to function as a personal assistant for journalists, automating certain tasks and freeing up time for other important aspects of their work. Google envisions it as a form of “responsible technology.”

However, not everyone seems comfortable with the idea. Some executives who were presented with the tool found it “unsettling” as they believed it might undermine the effort invested in producing accurate news stories.

A Google spokesperson responded to the report by stating that the company is exploring the possibility of providing AI-enabled tools to assist journalists, particularly those at smaller publications, in their work.

These tools could help with tasks like generating headline options or suggesting different writing styles. The ultimate goal is to enhance journalists’ productivity while still acknowledging the essential role they play in reporting, creating, and fact-checking their articles.

It’s worth noting that several news organizations, including NPR and Insider, have also expressed their intent to explore the responsible use of AI in their newsrooms.

While AI has been used by some news organizations, such as The Associated Press, to generate stories related to corporate earnings, the majority of their articles are still written by human journalists.

The emergence of Google’s new AI tool raises concerns, as it has the potential to contribute to the spread of misinformation if AI-generated articles are not fact-checked or thoroughly edited.

Earlier this year, CNET, an American media website, experimented with generative AI for article production, but the move backfired.

The company had to issue corrections for over half of the AI-generated articles, as some contained factual errors or even plagiarized material.

To address this, some articles now carry an editor’s note stating that an AI engine assisted in creating an earlier version, which has since been substantially updated by a staff writer.
