{"id":2513,"date":"2026-04-05T15:10:25","date_gmt":"2026-04-05T13:10:25","guid":{"rendered":"https:\/\/blogs.gentoo.org\/mgorny\/?p=2513"},"modified":"2026-04-05T15:15:27","modified_gmt":"2026-04-05T13:15:27","slug":"the-pinnacle-of-enshittification-or-large-language-models","status":"publish","type":"post","link":"https:\/\/blogs.gentoo.org\/mgorny\/2026\/04\/05\/the-pinnacle-of-enshittification-or-large-language-models\/","title":{"rendered":"The pinnacle of enshittification, or Large Language Models"},"content":{"rendered":"<p>Honestly, I hate that I read about <abbr title=\"Large Language Model\">LLM<\/abbr>s all the time.  I hate all the marketing bullshit, but also all the critical pieces.  Not because the criticism is wrong.  I hate them precisely because they&#8217;re right.  And I hate the feeling that I have to write yet another piece on that same topic, to collect some of the thoughts I have had over the recent months.<\/p>\n<p>Machine learning isn&#8217;t anything new.  Neither is calling it &#8220;artificial intelligence&#8221;.  Not only pop science writers and journalists, but even more technical folk have been using the term, and I never complained.  I didn&#8217;t complain about games having &#8220;AI&#8221; either.  It was always clear that this is a special use of &#8220;intelligence&#8221;, one far from what animals truly possess.  This changed recently.<\/p>\n<p>When LLMs enabled chatbots to use human language, the misuse of the term exploded.  Obviously, the marketing people loved calling it &#8220;artificial intelligence&#8221;.  The media, the users and the whole <abbr title=\"Information Technology\">IT<\/abbr> industry followed.  Even people who knew better stopped bothering.  On top of that, anthropomorphisms became commonplace.  LLMs could be said to be &#8220;thinking&#8221;, &#8220;lying&#8221;, &#8220;hallucinating&#8221;, to &#8220;approve&#8221; or &#8220;disapprove&#8221;, &#8220;like&#8221; or &#8220;dislike&#8221;\u2026<\/p>\n<p>Perhaps it wouldn&#8217;t be so bad if not for the fact that LLMs are <em>so good<\/em> at imitating human intelligence.  The problem is not really how people call them.  The problem is that there is a number of people who start actually <em>believing<\/em> that their chatbots are conscious.  And I can see why that would be happening\u2026<br \/>\n<!-- more --><\/p>\n<h2>In pursuit of nonhuman intelligence<\/h2>\n<p>I suppose it is no surprise to anyone that humans always sought nonhuman intelligence.  The research on intelligence in other animals, projects such as <abbr title=\"Search for Extraterrestrial Intelligence\">SETI<\/abbr> or simply the widespread belief in aliens, often strengthened in the face of lack of evidence, all confirm that.  Naturally, artificial intelligence was always a hot topic in science-fiction writing and moral debates as well.  Conscious machines: a dream or a nightmare?<\/p>\n<p>At the same time, we managed to have a pretty specific idea of what intelligence is.  We find it much easier to consider an animal intelligent if it&#8217;s good at communicating with humans or behaving like humans do.  We can be amazed by how complex the waggle dance of bees is, but we feel much more at home with tricks like imitating human speech.  Tools capable of producing elaborate and consistent sentences easily hit that soft spot.<\/p>\n<p>On top of that, as a society we have embraced bullshit.  In the recent decades, our whole society has been reorganized around it.  
From students bullshitting their way through exams without the actual knowledge, to workers spending all their working time (and sometimes overtime) justifying their existence by pouring out paragraph upon paragraph of meaningless bullshit.  Models trained on all that bullshit and optimized for producing more of it feel right at home.  And just like the students, they actually succeed in convincing us that they gave us the answer.<\/p>\n<h2>If it speaks like a human\u2026<\/h2>\n<p>So how do LLMs actually work?  Well, you basically give them a bunch of context, and they predict what comes next.  You give them a question (and a system prompt, and the earlier dialog as context), and they predict that an answer should come next.  Or well, something resembling an answer.  You give them a programming task, and they predict that a specific program should come next.<\/p>\n<p>The key point is that they are not capable of <em>comprehension<\/em>.  You can train a parrot to respond to some questions, but said parrot will comprehend neither the questions nor the answers it gives; it just imitates the sound of human speech in response to specific triggers.  The same is true of an LLM: it&#8217;s not intelligent, it does not understand human speech or programming languages, it does not have the capability of coming up with anything truly new and unique.  All it does is predict how to remix its own data set.  And I know it&#8217;s really hard to believe that, because it&#8217;s <em>so damn good<\/em> at it.<\/p>\n<p>It can clearly produce new text in a desired style, or write a new program.  It&#8217;s much better than your average bullshit student.  I honestly can&#8217;t imagine how these algorithms manage that.  But they do, and clearly the sheer scale is what makes all the difference.<\/p>\n<h2>All about the data (and lots of it)<\/h2>\n<p>The data is the key.  To achieve their bewildering efficiency, LLMs need lots of training data.  You can&#8217;t speak human without learning human speech.  But humans do have intelligence, and they can use that intelligence to adapt their limited knowledge to achieve great feats of communication.  LLMs don&#8217;t have that, so they instead need lots and lots of human speech to work with.  They need to brute-force their way through.<\/p>\n<p>The really big LLM companies use all the data they can get.  They <a rel=\"external\" href=\"https:\/\/arstechnica.com\/ai\/2025\/06\/anthropic-destroyed-millions-of-print-books-to-build-its-ai-models\/\" title=\"Anthropic destroyed millions of print books to build its AI models\">destructively scan millions of books<\/a>.  They <a rel=\"external\" href=\"https:\/\/www.tomshardware.com\/tech-industry\/artificial-intelligence\/meta-staff-torrented-nearly-82tb-of-pirated-books-for-ai-training-court-records-reveal-copyright-violations\" title=\"Meta staff torrented nearly 82TB of pirated books for AI training \u2014 court records reveal copyright violations\">torrent terabytes of pirated media<\/a>.  They process all the code stored on public forges.  But most importantly, pretty much all LLM companies, big and small, scrape the web.<\/p>\n<p>Now, scraping is nothing new.  Search engines have been doing this for a long time already.  It used to be a solved problem, with clear rules on how to do it, and how to be nice to one another.<\/p>\n
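<p>Those rules were never complicated.  Here&#8217;s a minimal sketch of a well-behaved crawler in plain Python (standard library only; the bot name and site are made up for illustration): honor the robots.txt exclusions, throttle your requests, and identify yourself honestly.<\/p>\n<pre><code># Illustrative sketch only: the long-established rules of polite scraping.\nimport time\nimport urllib.request\nimport urllib.robotparser\n\nUSER_AGENT = \"ExampleBot\/1.0\"  # made-up name; real crawlers identify themselves\n\nrobots = urllib.robotparser.RobotFileParser()\nrobots.set_url(\"https:\/\/example.com\/robots.txt\")\nrobots.read()\n\ndef fetch(url):\n    # Rule 1: respect the robots.txt exclusions.\n    if not robots.can_fetch(USER_AGENT, url):\n        return None\n    # Rule 2: throttle, honoring the crawl delay the site asks for.\n    time.sleep(robots.crawl_delay(USER_AGENT) or 1.0)\n    # Rule 3: send an honest User-Agent, so admins know whom to contact.\n    request = urllib.request.Request(url, headers={\"User-Agent\": USER_AGENT})\n    with urllib.request.urlopen(request) as response:\n        return response.read()\n<\/code><\/pre>\n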
<p>But then came the LLM companies, lots of them, all fiercely competing to get as much data as quickly as possible.  And they&#8217;ve started bombarding sites with low-quality, unethical scrapers: widely distributed, with no throttling, no caching, no respect for the rules, and many of them hitting all at once.  <a rel=\"external\" href=\"https:\/\/lwn.net\/Articles\/1008897\/\" title=\"Fighting the AI scraperbot scourge\">It is like a constant <abbr title=\"Distributed Denial-of-Service\">DDoS<\/abbr> attack on independent infrastructure<\/a>.<\/p>\n<p>Of course, sites simply can&#8217;t withstand that.  And I&#8217;m not even talking about sites that do not want to be scraped for LLM training data at all.  I&#8217;m talking about services such as Gentoo Bugzilla that have clear robot rules that enabled (search engine) scrapers to get all the data they needed without much duplication or unnecessary server load.  LLM scrapers are ignoring these rules, firing off Bugzilla search after search, report after report.  The server is churning like crazy, the database is churning like crazy, and real users who actually need to find a bug report or file one are suffering because of it.<\/p>\n<p>Users are hit either by service downtime, or by additional anti-LLM protection mechanisms that can range from an annoyance and a waste of energy to a complete accessibility blocker.  You put up measures against total assholes, but you end up harming the most vulnerable.  Or you take on a lot of extra work banning hundreds of bad actors just to keep the services online (and hope you don&#8217;t accidentally ban real users in the process).<\/p>\n<p>And sometimes, people are simply giving up.  The Internet is losing its independent websites.  It is losing its best, and becoming an enshittified bigtech mess.<\/p>\n<h2>Suffering via the free market<\/h2>\n<p>Processing lots of data requires lots of powerful hardware.  So the LLM companies are buying heavily, and hardware manufacturers are diverting their production away from less profitable consumer hardware.  New hardware is becoming scarce, and prices are rising.  We&#8217;ve already seen <a rel=\"external\" href=\"https:\/\/www.npr.org\/2025\/12\/28\/nx-s1-5656190\/ai-chips-memory-prices-ram\" title=\"Memory loss: As AI gobbles up chips, prices for devices may rise\">prices of video cards and memory skyrocket<\/a>.  Apparently, <a rel=\"external\" href=\"https:\/\/www.techspot.com\/news\/111831-not-memory-anymore-ai-data-centers-taking-all.html\" title=\"It's not just memory anymore: AI data centers are taking all the CPUs, too\">CPUs are coming up next<\/a>.  People are already moving away from true personal computers in favor of smartphones and devices that are effectively dumb terminals for &#8220;the cloud&#8221;, and this will only accelerate that trend.  People are going to be increasingly dependent on bigtech companies renting computing power to them, and these companies will be taking advantage of that.<\/p>\n<p>On top of that, the fierce competition in the LLM market means that everyone is constantly in need of new hardware.  It is unlikely that the old video cards will be repurposed or sold second-hand.  More likely, <a rel=\"external\" href=\"https:\/\/www.scientificamerican.com\/article\/generative-ai-could-generate-millions-more-tons-of-e-waste-by-2030\/\" title=\"Generative AI Is Poised to Worsen the E-Waste Crisis\">we&#8217;re just going to see huge piles of e-waste<\/a>.<\/p>\n<p>Running lots of powerful hardware requires a lot of electric power.  
This in turn means lots of waste heat, and <a rel=\"external\" href=\"https:\/\/www.eesi.org\/articles\/view\/data-centers-and-water-consumption\" title=\"Data Centers and Water Consumption\">significant use of water to cool it all<\/a>.  Where new data centers are being built, <a rel=\"external\" href=\"https:\/\/www.cnet.com\/home\/energy-and-utilities\/the-ai-data-center-boom-is-driving-up-electricity-costs-research-shows\/\" title=\"The AI Data Center Boom Is Driving Up Electricity Costs, Research Shows\">people are already suffering from rising electricity prices<\/a>.  The climate crisis is accelerating, and all the companies are <a rel=\"external\" href=\"https:\/\/www.tomshardware.com\/tech-industry\/google-quietly-removes-net-zero-carbon-goal-from-website-amid-rapid-power-hungry-ai-data-center-buildout-industry-first-sustainability-pledge-moved-to-background-amidst-ai-energy-crisis\" title=\"Google quietly removes net-zero carbon goal from website amid rapid power-hungry AI data center buildout \u2014 industry-first sustainability pledge moved to background amidst AI energy crisis\">quietly removing their sustainability pledges<\/a> amid the new hype.<\/p>\n<h2>The copyright mess<\/h2>\n<p>As I&#8217;ve mentioned before, LLM companies are ravenous for training data.  They use pretty much everything they can get, including books, source code (both open and proprietary), and websites.  Increasingly, they also go for private user data, with an opt-out approach: from private repositories to e-mail and chat messages (and often <a rel=\"external\" href=\"https:\/\/www.legitsecurity.com\/blog\/camoleak-critical-github-copilot-vulnerability-leaks-private-source-code\" title=\"CamoLeak: Critical GitHub Copilot Vulnerability Leaks Private Source Code\">leak them in the process<\/a>).  All of this raises serious copyright, privacy, and especially ethical concerns.<\/p>\n<p>LLM advocates have been <a rel=\"external\" href=\"https:\/\/www.forbes.com\/sites\/roomykhan\/2024\/10\/04\/ai-training-data-dilemma-legal-experts-argue-for-fair-use\/\" title=\"AI Training Data Dilemma: Legal Experts Argue For 'Fair Use'\">arguing that training models constitutes &#8220;fair use&#8221;<\/a>.  On the other side, people have repeatedly proved that <a rel=\"external\" href=\"https:\/\/www.ibtimes.co.uk\/ai-models-reproduce-copyrighted-books-study-1788933\" title=\"Researchers Challenge OpenAI Defence After Claiming ChatGPT Can Output Near-Verbatim Copies of Published Books\">chatbots can easily output near-verbatim copies of published books<\/a>.  A case in point is <a rel=\"external\" href=\"https:\/\/www.bbc.com\/news\/articles\/c5y4jpg922qo\" title=\"AI firm Anthropic agrees to pay authors $1.5bn to settle piracy lawsuit\">Anthropic agreeing to pay a settlement in a piracy lawsuit<\/a>.<\/p>\n<p>A related but perhaps even more important problem is the copyright of LLM-produced works.  The same problem applies to pretty much any kind of LLM output, though as a software craftsman, I&#8217;m going to focus on code.<\/p>\n<p>If you use a part of another project in your software, the case is clear: you&#8217;re creating a derivative work.  The author needs to permit that.  Most open source licenses permit derivative works but require attribution.  Copyleft licenses additionally require that you use an appropriate license.<\/p>\n<p>But what happens when you use an LLM?  
The model is trained on millions of different projects, likely including both copyleft and proprietary software, and provides no clear way of establishing which particular projects (all of them?) were used to construct a particular output.  In fact, even if a model was trained purely on permissively licensed data, there&#8217;s simply no way to meet the attribution requirement.  And I&#8217;m not even considering all the proprietary English text used to hone its language capabilities.<\/p>\n<p>LLM enthusiasts often bring up <a rel=\"external\" href=\"https:\/\/www.reuters.com\/legal\/government\/us-supreme-court-declines-hear-dispute-over-copyrights-ai-generated-material-2026-03-02\/\" title=\"US Supreme Court declines to hear dispute over copyrights for AI-generated material\">the US Supreme Court declining to hear copyright disputes over LLM-generated material<\/a> as justification that LLM-generated code is not copyrighted.  This has already been used to justify a copywashing case where <a rel=\"external\" href=\"https:\/\/github.com\/chardet\/chardet\/issues\/327\">the chardet project was recreated using an LLM to remove the original attribution and change the license<\/a>.  However, the same claim has already backfired on LLM companies: <a rel=\"external\" href=\"https:\/\/www.linkedin.com\/posts\/gergelyorosz_this-is-either-brilliant-or-scary-anthropic-share-7444752685896986624-P8Za\">when the code of Claude Code leaked, it was promptly LLM-copywashed to stop DMCA takedowns<\/a>.<\/p>\n<p>Even if we accept the assumption that LLM-generated code is not copyrightable, this prompts further questions: how does that affect the copyright status of projects that increasingly include LLM-generated code?  Can you transfer the copyright (per an open source project&#8217;s <abbr>CLA<\/abbr> or a work contract) over code created with LLM assistance?<\/p>\n<p>In the end, the legal status of LLM coding is largely indeterminate right now.  However, the ethical concerns are clear: LLMs are trained by abusing other people&#8217;s work, and they are used in abusive ways.<\/p>\n<h2>Fracturing the community<\/h2>\n<p><abbr title=\"Free, Libre, Open Source Software\">FLOSS<\/abbr> projects are taking a variety of stances on LLM coding: from rejecting all LLM contributions, through requiring all the code to be reviewed and understood by a human, to largely relying on LLM coding themselves.  The last group is especially worrying to downstream maintainers.  We are already overburdened with having to maintain thousands of different packages, and we cannot reasonably be expected to deal with an even greater increase in code churn prompted by LLM use.  And we definitely don&#8217;t feel comfortable bumping a package that has been dormant for years and is now making releases at an unprecedented rate, with 10000-line changes that clearly could not have been reviewed by a human.<\/p>\n<p>LLMs aren&#8217;t limited to generating code.  They are being used to create and translate documentation, review patches (and sometimes <a rel=\"external\" href=\"https:\/\/www.webasha.com\/blog\/amazon-ai-coding-agent-hack-how-prompt-injection-exposed-supply-chain-security-gaps-in-ai-tools\" title=\"Amazon AI Coding Agent Hack | How Prompt Injection Exposed Supply Chain Security Gaps in AI Tools\">autonomously merge malicious pull requests<\/a>), and handle bug reports and other user communications.  It&#8217;s easy to believe that such tools can help overburdened, burned-out maintainers.  
After all, they&#8217;re deferring the tedious but necessary tasks to automation, while letting the developers focus on the work they enjoy (unless they don&#8217;t enjoy any of it, and the project is run entirely by LLMs).  However, this can create a great rift between the project and its users.<\/p>\n<p>Believe me, it is awfully frustrating when you spend an hour packaging a release, debugging an issue and preparing a proper bug report, only to be met by an <a rel=\"external\" href=\"https:\/\/github.com\/crossbario\/autobahn-python\/issues\/1716#issuecomment-3409901490\" title=\"autobahn-python: Reply to [ISSUE] 25.9.1 source distribution contains trailing garbage, causing tar to fail to unpack it\">LLM-generated bug analysis<\/a> (that sounds plausible at first but is entirely wrong), followed by <a rel=\"external\" href=\"https:\/\/github.com\/crossbario\/autobahn-python\/pull\/1715\" title=\"autobahn-python: Release v25.10.1\">a vibe-coded, overengineered solution<\/a> (that originally contained a verification step that didn&#8217;t do anything), followed by <a rel=\"external\" href=\"https:\/\/github.com\/crossbario\/autobahn-python\/issues\/1735\" title=\"autobahn-python: [ISSUE] 25.10.1 crc errors in .tar.gz\">another broken release<\/a> (with an equally weird issue).  This is not how you save time.  This is how you erode trust and discourage people from interacting with you (not that you bothered <em>actually<\/em> communicating with them in the first place).<\/p>\n<p>On the other end of the pipeline, LLMs are used to report bugs.  There are people who use them to create professional-looking bug reports, because they aren&#8217;t fluent in English and mistakenly believe that such a report would be easier to understand.  But there are also people who run industrial-scale operations to find and report bugs in open source projects, with motivations ranging from wanting to do something good, through trying to reap bug bounties, to actively trying to sell a product.<\/p>\n<p>The result is an increasing number of slop reports.  In the best case, they are valid but unnecessarily verbose and hard to understand.  In the worst case, they sound plausible at first but turn out to be entirely wrong \u2014 wasting a great deal of a human maintainer&#8217;s time.  <a rel=\"external\" href=\"https:\/\/daniel.haxx.se\/blog\/2026\/01\/26\/the-end-of-the-curl-bug-bounty\/\" title=\"The end of the curl bug-bounty\">The curl project ended its bug-bounty program<\/a> over the explosion of slop reports.  A few projects went as far as to disable public bug trackers altogether; and while I can understand the frustration, it&#8217;s not a decision I can appreciate as a downstream maintainer who frequently needs to report bugs.  All of this is truly harming the integrity of FLOSS.<\/p>\n<p>On top of that, trust issues are growing.  People stop trusting projects over LLM usage.  Myself, I have serious concerns about software that has started heavily relying on LLM coding, or has made some major mistake using it, even though I fully realize that if my distrust were justified, then I shouldn&#8217;t have trusted said project even before LLMs were used.<\/p>\n<p>People are being ostracized over using LLMs.  While some projects require explicit disclosure, others actively seek to hide LLM usage.  Even when projects explicitly disallow LLM contributions, people sometimes submit something suspicious.  When I see a really weird bug in the submitted code, I keep asking myself: did a human really make that mistake, or is this slop?  
And if the latter, did someone miss our policy, or did they deliberately ignore it?<\/p>\n<p>In the end, real humans are hurt by the distrust.  I have not been accused of using LLMs yet; however, a coworker already cautioned me about using em-dashes.  I have been using em-dashes at least since I learned LaTeX as a teenager, and perhaps even earlier \u2014 and today they&#8217;re seen not as a sign of skill, but as a sign of slop.  <a rel=\"external\" href=\"https:\/\/marcusolang.substack.com\/p\/im-kenyan-i-dont-write-like-chatgpt\" title=\"I'm Kenyan. I Don't Write Like ChatGPT. ChatGPT Writes Like Me.\">People educated in formal English are accused of using ChatGPT<\/a>, even though they clearly put much more effort into writing than most of us do.  On the other end of the spectrum, people are apparently starting to mimic the chatbot style, which is a horrible sign of the times: people believing that bullshitting makes them sound more professional.<\/p>\n<h2>In evil hands\u2026<\/h2>\n<p>Replacing workers with machines has always been the capital owner&#8217;s wet dream.  Robots are perfect: cost-effective and scalable.  They don&#8217;t need to be paid a living wage, and they don&#8217;t need to be recruited or trained individually.  They don&#8217;t make mistakes like humans do (and when they do, it&#8217;s a human&#8217;s fault anyway!).  They don&#8217;t slack, they don&#8217;t read social media at work, they don&#8217;t take bathroom breaks.  They don&#8217;t have families to take care of, and they don&#8217;t get sick (we&#8217;ll silently ignore the frequent LLM downtimes that bring everything to a halt).<\/p>\n<p>Of course, one could ask: why am I criticizing LLMs?  They&#8217;re just a tool!  It&#8217;s up to the people how they&#8217;re going to use them.  Employers were looking for excuses to cut personnel before LLMs were available, and they would do so even if LLMs never came to be.  It&#8217;s the &#8220;guns don&#8217;t kill people&#8221; argument all over again.<\/p>\n<p>However, the fact is that LLMs are marketed as a replacement for human labor, and they are purchased as such.  It doesn&#8217;t matter whether this is framed as &#8220;increasing productivity&#8221; or &#8220;reducing costs&#8221;.  Either way, the usual result is that sooner or later some people are laid off over being &#8220;unnecessary&#8221;, some seniors are replaced by juniors (because an LLM will paper over the skill difference), and\/or the remaining employees are expected to be &#8220;more productive&#8221;.  In the most extreme cases, they are not only expected to work more in spite of LLMs; they are expected to actively use LLMs and prove that the LLMs make them more productive.<\/p>\n<p>This isn&#8217;t just affecting coding or other workplaces that are supposedly being &#8220;replaced&#8221;.  It also affects artists who are losing commissions, because companies and individuals prefer to get an LLM to generate free (or cheap) assets for them instead.  Whole translation studios are being closed as companies switch to LLM translations.<\/p>\n<p>It&#8217;s not just human beings losing jobs in an increasingly hostile and unequal world.  It&#8217;s also a major loss of creativity, where originality is being replaced by bleak, repetitive slop.  It&#8217;s a major loss of quality, where software and books are released to the world with translations that are plain wrong.  
And even if you can claim that an LLM can code as well as you do (not really a compliment to you), it is entirely dependent on prior human creativity as its training material.<\/p>\n<p>LLMs serve hundreds of malicious purposes.  Admittedly, most of them existed before, but LLMs made things exponentially worse.  You can thank them for the spam that passes through your mail filter (likely also LLM-based: the poison and the antidote) and that requires ever more human effort to distinguish from plausibly valid mail.  Automated spam phone calls that can&#8217;t be outright rejected, because legitimate organizations are also starting to use LLMs to call you.  Support lines that require you to plow through useless LLMs only to end up in a long queue for the last human operator.  Websites filled with slop that litter the search results; and when you finally find something human, you don&#8217;t know whether the human simply repeated the answer from one of the slop sites.  Spreading fake news and propaganda, flaming and provoking people, all at an industrial scale.  Deep fakes used in disinformation and scam attempts.<\/p>\n<p>Machine learning has always been useful in many scientific endeavors, provided that it was used correctly.  However, LLMs enable &#8220;everyone&#8221; to create their own &#8220;artificial intelligence&#8221; decision algorithms, and what&#8217;s perhaps worse is that these can be plausibly correct in the most common cases.  Most importantly, deferring decisions to machines frees people from responsibility.  Well, people in high positions \u2014 because a low-paid programmer, who had serious doubts about the project from the very beginning, makes a perfect scapegoat.<\/p>\n<h2>The drug<\/h2>\n<p>Can things get worse?  Actually, they already are getting worse.  People are becoming addicted to LLMs.  And it&#8217;s worse than your average smoking addiction, which harms your health, your wallet and all the people around you.  It&#8217;s not just the overarching dependence on LLMs for work that leads to people in workplaces immediately informing everyone else that &#8220;Claude Code is down&#8221;.  It&#8217;s not just the loss of skills.  It&#8217;s people starting to believe that their LLMs are sentient, or have feelings.<\/p>\n<p>It&#8217;s all about people who are entirely dependent on LLMs for their mental well-being.  They suffer whenever their chatbot &#8220;friend&#8221; is down.  They are at the mercy of a company that can raise prices beyond their means, and that may eventually decide to discontinue the service because it&#8217;s not profitable.  They are subject to manipulation, and to deliberate or accidental harm, as the algorithm may go haywire.  It&#8217;s worse than trying to be friends with a bully who is nice to you only because they need your help.<\/p>\n<p>And let&#8217;s be honest: the LLM companies are entirely unsustainable, and I&#8217;m talking about money here.  They are losing billions, and staying afloat only thanks to constant subsidies and <a rel=\"external\" href=\"https:\/\/www.bloomberg.com\/graphics\/2026-ai-circular-deals\/\" title=\"A Guide to the Circular Deals Underpinning the AI Boom\">circular deals<\/a>.  Short of a miracle, there&#8217;s no chance of salvaging this.  There is just no way they could charge people enough to start making profits, and very little chance that the dead-end research will suddenly bring a breakthrough justifying their parasitic existence.  
Of course, there&#8217;s the possibility that, given the scale of the widespread addiction, the subsidies will continue: the billionaires, the governments, the criminals will keep pouring cash just to prevent the boat from sinking.<\/p>\n<p>Some people think that&#8217;s fine: we&#8217;ll just switch over to smaller models that are more sustainable, we&#8217;ll self-host.  But at least according to the people I&#8217;ve heard from, the difference between an industry-scale model and a local model resembles the difference between a tool that can implement a whole feature for you, and one that can write a relatively simple function.  It&#8217;s the difference between hiring an architect with a construction crew, and hiring a single off-hours construction worker who expects you to tell them what exactly to do (and you may need to show them how to use a water level).<\/p>\n<p>Some people think they&#8217;re not affected.  They&#8217;re in it just for the immediate gain.  Perhaps that&#8217;s true.  Perhaps they&#8217;ll just reap some cash, deploy some projects, then move on and forget about it.  Maybe they won&#8217;t lose much skill, and will be able to go back to working without LLMs.  Maybe they didn&#8217;t have the skill in the first place.  Maybe they&#8217;ll just find another job, find another hype to milk.<\/p>\n<h2>Final words<\/h2>\n<p>I think I&#8217;ve covered a fair share of the harm done by LLMs.  This post is by no means meant to be complete or final.  It just ends at an arbitrary cutoff point where I feel like I&#8217;ve been typing for far too long, and I doubt that making more points will change anyone&#8217;s mind.<\/p>\n<p>You may have noticed that I didn&#8217;t talk about quality per se.  I don&#8217;t think there&#8217;s a point in doing that.  I believe that LLMs sometimes spit out quality slop, and sometimes they don&#8217;t.  People who claim that they are &#8220;getting better and better&#8221; are probably right.  Perhaps they will continue getting better, or perhaps they&#8217;ll suddenly start collapsing after eating too much of their own shit.  That&#8217;s beside the point.<\/p>\n<p>The point is, however you look at it, LLMs are unethical.  They may be useful, but they are poison \u2014 just like asbestos.  They are trained in an unethical way, they are sold with immoral goals, and they are used to do a lot of evil.  Yes, maybe they can make your life a little easier, a little more comfortable (just like cheap goods manufactured through slave labor).  But is it something worth losing our humanity for?<\/p>\n<p>You can just say &#8220;no&#8221;.  Getting left behind can actually be a good thing.<\/p>\n<p><small>Final note: throughout the post, I&#8217;ve been randomly shuffling between different sources to avoid promoting anything specific.  In retrospect, that may have been a bad idea.<\/small><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Honestly, I hate that I read about LLMs all the time. I hate all the marketing bullshit, but also all the critical pieces. Not because the criticism is wrong. I hate them precisely because they&#8217;re right. 
And I hate the feeling that I have to write yet another piece on that same topic, to collect &hellip; <a href=\"https:\/\/blogs.gentoo.org\/mgorny\/2026\/04\/05\/the-pinnacle-of-enshittification-or-large-language-models\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;The pinnacle of enshittification, or Large Language Models&#8221;<\/span><\/a><\/p>\n","protected":false},"author":137,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"jetpack_publicize_message":"","jetpack_is_tweetstorm":false,"jetpack_publicize_feature_enabled":true},"categories":[8],"tags":[],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","_links":{"self":[{"href":"https:\/\/blogs.gentoo.org\/mgorny\/wp-json\/wp\/v2\/posts\/2513"}],"collection":[{"href":"https:\/\/blogs.gentoo.org\/mgorny\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.gentoo.org\/mgorny\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.gentoo.org\/mgorny\/wp-json\/wp\/v2\/users\/137"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.gentoo.org\/mgorny\/wp-json\/wp\/v2\/comments?post=2513"}],"version-history":[{"count":109,"href":"https:\/\/blogs.gentoo.org\/mgorny\/wp-json\/wp\/v2\/posts\/2513\/revisions"}],"predecessor-version":[{"id":2623,"href":"https:\/\/blogs.gentoo.org\/mgorny\/wp-json\/wp\/v2\/posts\/2513\/revisions\/2623"}],"wp:attachment":[{"href":"https:\/\/blogs.gentoo.org\/mgorny\/wp-json\/wp\/v2\/media?parent=2513"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.gentoo.org\/mgorny\/wp-json\/wp\/v2\/categories?post=2513"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.gentoo.org\/mgorny\/wp-json\/wp\/v2\/tags?post=2513"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}