Hundreds of news articles published every day across major and mid-tier media outlets are written entirely or substantially by AI, without disclosure to readers. Not experimental articles. Not clearly labelled AI content. Regular bylined news stories, sports recaps, financial earnings summaries, and local news briefs that appear in your feed, in search results, and on the homepages of outlets you have trusted for years.
Some of this is known and technically disclosed in terms of service that nobody reads. Most of it is not disclosed at all.
Where It Is Happening
The Associated Press has been using AI to generate earnings reports and sports statistics stories since 2014. It was early and, to its credit, relatively transparent: the AP's AI-generated stories carry a disclosure tag. Most outlets doing the same thing follow no such standard.
News outlets including CNET, Bankrate, and Sports Illustrated were caught publishing undisclosed AI-generated articles in 2023 and 2024. CNET published dozens of AI-written financial explainer articles under the opaque byline "CNET Money Staff" before being exposed by Futurism. Some of those articles contained factual errors that a human editor would likely have caught. The corrections were issued quietly.
Sports Illustrated published AI-generated articles attributed to writers who do not exist, complete with AI-generated headshot photos. Not a template, not a ghostwriter, not a questionable editorial practice. Fabricated people, writing fabricated content, published under the Sports Illustrated brand.
The Scale of the Problem in 2026
The 2023 and 2024 cases were embarrassing enough to generate coverage. Since then, the practice has become more widespread, more sophisticated, and harder to detect. Early AI-generated news content was identifiable by its flatness, its lack of specific detail, and its tendency to state obvious facts in a slightly robotic cadence. The current generation of models produces text that is much harder to distinguish from human writing.
NewsGuard, an organisation that rates news websites for credibility, identified over 1,200 websites in 2025 that appeared to be publishing primarily AI-generated news content with little or no human editorial oversight. These sites look like news outlets. They have logos, about pages, and consistent publication schedules. They publish articles on real news topics. They are indexed by Google and appear in search results alongside legitimate journalism.
The key difference is that nobody wrote those articles in any meaningful sense, and nobody is responsible for their accuracy.
Why Disclosure Is the Exception Rather Than the Rule
Disclosing AI-generated content creates a credibility problem for outlets that use it. Readers trust journalism partly because they believe a human journalist investigated, verified, and took responsibility for what was written. An AI label on an article removes that implied assurance. So the commercial incentive is to not disclose.
The same dynamic driving AI use in academic research is driving it in journalism: pressure to produce more content, faster, at lower cost, combined with weak accountability systems and the difficulty of detection.
What to Do About It
The honest answer is that there is currently no reliable way for a reader to tell whether an article was written by a human. AI detection tools have documented false-positive rates, regularly flagging human-written text as machine-generated. The detectors used to catch students cheating are the same tools available to journalists and fact-checkers, and they carry the same limitations.
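To make that limitation concrete, here is a minimal sketch of one common heuristic behind such tools: perplexity scoring under a reference language model. This illustrates the general technique only, not any specific commercial detector; the choice of "gpt2" as the reference model and the cut-off value are assumptions made for the example.

```python
# A minimal sketch of perplexity-based AI-text detection. The reference
# model ("gpt2") and the threshold are illustrative assumptions, not the
# method of any real detection product.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under the reference model (lower = more predictable)."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels makes the model return mean cross-entropy over tokens.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

THRESHOLD = 40.0  # hypothetical cut-off, chosen for illustration

text = "The company reported quarterly revenue of 2.1 billion dollars, up four percent from the prior year."
score = perplexity(text)
# Formulaic human writing (earnings summaries, sports recaps) also scores
# low -- which is exactly where false positives come from.
print(f"perplexity={score:.1f} -> {'flagged as AI' if score < THRESHOLD else 'passes'}")
```

The design flaw is visible in the sketch: the heuristic measures predictability, not authorship, and the kinds of stories most often automated, such as earnings summaries and box scores, are predictable whoever writes them.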
Demand disclosure. Support outlets that disclose clearly. Treat undisclosed AI content as a credibility signal about the outlet as a whole, not just the specific article. And read everything with slightly more scepticism than you already do, because the bar for publishing something that looks like journalism has never been lower.