Quartz’s AI News: A Cycle of Regurgitated Content Raises Concerns


Quartz’s AI newsroom combines legitimate sources with AI-generated content, resulting in a mix of reliable and dubious information. Critics argue that AI-generated journalism lacks originality and accuracy, raising concerns about the integrity of news.

In the ever-evolving landscape of journalism, Quartz has recently introduced an AI-driven newsroom that aggregates content from various sources, including reputable outlets like Reuters and NPR as well as AI-generated content farms like Devdiscourse. This approach produces news articles that are essentially AI-generated summaries or rehashes of existing stories, often lacking depth and originality.

The Quartz Intelligence Newsroom, part of the G/O Media empire, acknowledges the experimental nature of its technology and provides source links. However, the quality and reliability of its content are questionable. The use of AI in journalism raises significant concerns about the integrity of news: AI systems recycle and repurpose content, sometimes from dubious sources, producing a cycle of AI-generated information that lacks authenticity and accuracy.
Critics argue that this reliance on AI-generated content reflects a broader trend in media, where cost-cutting measures prioritize AI over human journalists. While AI tools can automate repetitive tasks and generate content quickly, they often lack the nuance and context that human writers bring to their work. The result can be superficial amalgamations of information that fail to provide meaningful insights or original perspectives.
One of the primary issues with AI text rewriters is accuracy and factual consistency. These tools are trained on vast datasets but have no true understanding of the world. As a result, they can inadvertently change or misrepresent key details when rewriting text. For example, an AI might change the year a historical event occurred, or alter statistics and numbers within an article.
The rewritten text reads fluently, but often contains inaccuracies and contradictions that a human would easily detect. The average reader, however, may not realize the details are wrong. For any content where factual correctness matters, such as news reports, technical documentation, and academic writing, this is a major concern.
Moreover, AI text rewriters essentially work by analyzing patterns in the data they were trained on. They identify common ways ideas are expressed, then remix and reuse those expressions in their output. This can produce passages that are partially plagiarized from the source text or other online content. The tools may also repeat the same phrases or sentences within a piece of rewritten text, leading to unnatural levels of repetition.
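This remix-and-repeat behavior can be illustrated with a toy bigram (Markov chain) text generator. This is only an illustrative sketch of pattern-based remixing, not Quartz’s actual system or any production rewriter; all function names and the sample corpus below are hypothetical.

```python
import random
from collections import defaultdict

def build_bigram_model(text):
    """Map each word to the list of words that follow it in the corpus."""
    words = text.split()
    model = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model, start, length=12, seed=0):
    """Remix the corpus by repeatedly sampling a recorded successor word."""
    rng = random.Random(seed)
    word = start
    output = [word]
    for _ in range(length - 1):
        successors = model.get(word)
        if not successors:
            break  # dead end: the word never had a successor in the corpus
        word = rng.choice(successors)
        output.append(word)
    return " ".join(output)

corpus = ("the newsroom published the story and the newsroom "
          "repeated the story because the story was popular")
model = build_bigram_model(corpus)
print(generate(model, "the"))
```

Because the generator can only emit word sequences it has already seen, its output is entirely recombined source material, and frequent phrases recur at unnatural rates. Real rewriters are vastly more sophisticated, but the underlying limitation is the same: statistical recombination without comprehension.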
For applications demanding complete originality, such as academic work, marketing content, or journalism, this remains an obstacle. At their core, AI text rewriters do not comprehend the meaning of the documents they process. They have no sense of semantics, ideas, or topics beyond pattern recognition, so they may violate basic common sense or represent a concept inaccurately.
In conclusion, while AI tools have impressive capabilities, their limitations in terms of accuracy, originality, and context make them unsuitable for high-stakes content creation such as news journalism. The reliance on AI-generated content raises serious concerns about the integrity of news and highlights the need for human oversight and editing to ensure the quality and reliability of the information presented.


1. What is Quartz’s AI newsroom doing?
Answer: Quartz’s AI newsroom is using generative AI to produce news stories by aggregating content from various sources, including reputable outlets and AI-generated content farms.

2. Why is the quality of Quartz’s AI-generated content questionable?
Answer: The quality is questionable because AI systems recycle and repurpose content, sometimes from dubious sources, resulting in a cycle of AI-generated information that lacks authenticity and accuracy.

3. What are the primary issues with AI text rewriters?
Answer: The primary issues are accuracy and factual consistency. AI tools can unknowingly change or misrepresent key details when rewriting text, and they often lack the nuance and context that human writers bring to their work.

4. How do AI text rewriters work?
Answer: AI text rewriters work by analyzing patterns in the data they are trained on and then remixing and reusing those expressions in their output, which can result in passages that are partially plagiarized or repetitive.

5. Why is human oversight necessary for AI-generated content?
Answer: Human oversight is necessary because AI tools do not comprehend the meaning of documents and may produce inaccurate or misleading information. Human editors can verify the accuracy of content before publishing it.


The use of AI in journalism, as exemplified by Quartz’s AI newsroom, raises significant concerns about the integrity of news. AI tools can automate repetitive tasks and generate content quickly, but without human oversight and editing, the quality and reliability of the resulting information cannot be assured.

