
Trevor I. Lasn

Builder, founder, based in Tartu, Estonia. Been coding for over a decade, led engineering teams, writing since 2015.

The Internet is Becoming an Ocean of LLM-Generated Junk

The internet is full of content, but most of it is becoming junk. I'm talking about the stuff generated by Large Language Models (LLMs). These AI tools are cranking out endless articles, and the quality is bad. Really bad. Most of it is regurgitated word salad, and I'm not saying that lightly.

When you read LLM-generated content, something feels off. It’s usually repetitive, overly wordy, and lacks depth. Sure, it can be grammatically correct and sound “professional.” But does it add real value? Most times, no.

LLMs don’t understand the content they generate. They’re just parroting back patterns they’ve seen from massive datasets. It’s like having a parrot that learned how to mimic conversations. Yeah, it can say things that make sense, but it doesn’t understand what it’s saying.

I’m Skeptical of Almost Everything I Read Now

Here’s the frustrating part: I’m skeptical of almost everything I read online now. I wonder, “Was this written by a human or generated by an LLM?” It’s not just that the quality is poor—it’s that the trust is gone. Even if the content seems polished, I still second-guess its accuracy and depth.

I’m sure you’ve felt the same way. You’re reading an article or documentation, and it feels oddly familiar. It’s like you’ve seen the same phrasing in three other articles. You start questioning, “Is this just recycled LLM output?”

The problem is that LLM-generated content isn’t just clogging up blogs or listicles—it’s starting to leak into everything. From tutorials to documentation, I spend more time verifying whether what I’m reading is trustworthy. And that’s time I could be spending learning or building something, not playing detective.

It's getting harder to tell whether something was written by a human or a machine, but there are some telltale signs. If an article feels like it's dragging on, padded with filler that never adds real value, that's a red flag. LLMs tend to produce a lot of fluff to make the content seem longer or more thorough.

Another sign is repetition. If the same points keep popping up in slightly different wording, you’re probably reading machine-generated content. It’s like the AI doesn’t know when it’s already made a point, so it just keeps going in circles.
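
To make that repetition check concrete, here's a rough sketch of the kind of heuristic I mean: count how often the same word n-grams show up in a piece of text. This is just an illustration I put together for this post, not a real detector. The function name and approach are made up, and plenty of junk will sail right past it.

```python
from collections import Counter

def repetition_score(text: str, n: int = 3) -> float:
    """Fraction of word n-grams that belong to an n-gram seen more than once.

    Higher scores suggest the text keeps restating the same points in
    slightly different wording. Rough heuristic only; it says nothing
    definitive about whether a human or an LLM wrote the text.
    """
    words = text.lower().split()
    ngrams = [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
    if not ngrams:
        return 0.0
    counts = Counter(ngrams)
    repeated = sum(c for c in counts.values() if c > 1)
    return repeated / len(ngrams)

# A paragraph that circles back to the same phrasing scores higher
# than one that moves on to new points.
print(repetition_score("the model is very useful because the model is very useful"))
```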

Here’s my advice: If you’re writing content, don’t just scratch the surface. Provide depth, real-world examples, and explanations that go beyond the basics. Otherwise, you’re just adding to the growing ocean of junk.

Note: This article is me blowing off steam. I don’t have any solutions to fix the issue.





This article was originally published on https://www.trevorlasn.com/blog/the-internet-is-becoming-an-ocean-of-llm-generated-junk. It was written by a human and polished using grammar tools for clarity.