Over the past four years, I've significantly reduced my social media footprint. There are countless reasons for this, all of which are beyond the scope of this article, but the point I want to make is this: despite my growing apathy and downright hostility towards social platforms, I've found YouTube to be an oasis of sorts.
I am not going to pretend that YouTube hasn't played its part in the global disinformation epidemic or that it has somehow escaped the claws of enshittification. What I will say is that, unlike the feeds of its competitors, YouTube's feed is malleable using browser-based plugins, tools such as subscription managers. It is one of my primary learning platforms; without its vast array of tutorials, there is no way that I, a non-programmer, would have learnt Linux as fast, or become as comfortable in a FOSS-based computing environment, as I have since the pandemic.
But enshittification is, like death and taxes, a certainty now. Which brings us to the subject of this column: AI moderation on YouTube.

The incident
Ars Technica reported that popular Windows 11 workaround videos, guides to installing the OS on unsupported hardware or bypassing the online account requirement, were flagged as “dangerous” or “harmful” and removed. The incident was brought to wider notice by well-known YouTubers, including Enderman.
Some appeals were rejected in under a minute. YouTube later reinstated the videos and denied that automation was responsible for either the removals or the appeal decisions. That denial clarified little, because the creators experienced unusually quick flagging of videos, uniform phrasing, and a lack of escalation. If it walks and quacks like an AI...
In parallel, large channels (Enderman among others) were suddenly terminated, allegedly due to a mistaken association with an unrelated Japanese account previously banned for copyright strikes. After significant public pressure, YouTube reinstated those channels and again said automation wasn’t the cause. The pattern holds: sudden enforcement with minimal clarity, followed by restoration without systemic explanation.
The moderation reality
YouTube claims a “combination of automated and human review,” with automated decisions only when systems have “a high degree of confidence.” That framing sounds reasonable until you overlay three facts:
- Scale compels automation. It's implausible that human reviewers alone, operating at platform scale, could watch a multi‑minute video and reject its appeal in under a minute.
- Incentives penalize false negatives more than false positives. Platforms are punished more for letting harmful content slip through than for removing legitimate content, so systems bias toward removal (the sketch after this list makes the asymmetry concrete).
- Opaque processes erode trust. If creators can’t see the decision path or escalate to a clear human adjudication, the default is fear and self‑censorship.
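To make that incentive bias concrete, here is a minimal, purely illustrative sketch in Python. Nothing in it reflects YouTube's actual logic, and the cost figures are invented; the point is only the shape of the problem: once a missed harm is costed far above a wrongful takedown, the "rational" automated policy is to over-remove.

```python
# Illustrative only: how asymmetric costs push an automated moderator toward removal.
# The cost values below are invented; nothing here reflects YouTube's actual system.

def removal_threshold(cost_false_positive: float, cost_false_negative: float) -> float:
    """Expected-cost break-even: removing is 'cheaper' once P(harmful) exceeds this."""
    return cost_false_positive / (cost_false_positive + cost_false_negative)

# Say a wrongful takedown "costs" 1 unit (an annoyed creator, an appeal to process),
# while a missed harmful video "costs" 50 units (press coverage, regulator attention).
threshold = removal_threshold(cost_false_positive=1.0, cost_false_negative=50.0)
print(f"Remove anything scored above {threshold:.3f}")  # ~0.020

# Under those costs, a video the classifier rates as only 5% likely to be harmful
# still gets removed, because 0.05 > 0.02. Legitimate tutorials become collateral damage.
```

The precise numbers are beside the point. Any cost asymmetry of that order drags the removal threshold toward zero, and honest tutorials sit well above zero.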
What “AI moderation” actually does
Modern moderation blends classifiers, large language models, heuristic rules, and partner tools; even if the final button‑press is human, automated triage determines what humans see, how fast, and with what recommended action. When appeals are denied at bot‑speed, creators don’t much care whether the last click was a person. They encounter machines.
The tutorial case is telling. Guides to bypass Microsoft’s online account requirement aren’t piracy when they require a valid license. They’re consumer‑choice workarounds. But “bypass,” “workaround,” and registry/OOBE steps are tokens that trip automated risk signals. If you model danger coarsely, you miss context and punish legitimate education.
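To see how coarse modeling fails, consider a deliberately simplified, hypothetical triage routine. The keyword weights and thresholds below are invented for illustration and are not drawn from YouTube or any real moderation system; the point is only that token matching without context routes exactly the wrong videos to the chopping block.

```python
# Hypothetical, deliberately coarse triage: score a video's metadata on risky-sounding
# tokens, then route it. Keywords, weights, and thresholds are invented for illustration.

RISKY_TOKENS = {"bypass": 0.4, "workaround": 0.2, "registry": 0.2, "oobe": 0.3}

def risk_score(title: str, description: str) -> float:
    text = f"{title} {description}".lower()
    return min(1.0, sum(w for token, w in RISKY_TOKENS.items() if token in text))

def triage(title: str, description: str) -> str:
    score = risk_score(title, description)
    if score >= 0.8:
        return "auto-remove"         # no human sees it before the takedown
    if score >= 0.4:
        return "human review queue"  # a person sees it, but primed by the score
    return "allow"

# A legitimate, license-compliant tutorial trips the same tokens as genuinely risky content:
print(triage("Install Windows 11 with a local account",
             "Bypass the online account requirement via OOBE and a registry edit"))
# -> "auto-remove" (0.4 + 0.3 + 0.2 = 0.9); the educational context never enters the decision.
```

A context-aware reviewer, human or otherwise, would weigh the stated license requirement and educational intent; a token counter cannot, which is precisely the failure mode the tutorial takedowns suggest.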
Why this breaks creators
- Economic fragility. A single strike can kill monetization and spook sponsors. False positives have outsized impact on independent channels.
- Guidance contradictions. YouTube’s own creator tools suggest topics that moderation later flags. That mismatch pushes channels into uncertainty and throttles their output.
- Appeal fatigue. When appeals feel automated and non‑explanatory, creators stop appealing. The system silently shifts the burden onto them to “avoid risky topics,” which is de facto policy change without a written policy change.
The AI slop backdrop
All this is happening in the first age of AI slop. Despite my attempts at moulding the YouTube algorithm to my needs, the platform's worst content still drips into my experience. My suggested videos increasingly feature uploads from unknown creators that are obviously AI-made: the thumbnails, the titles, and, more and more, the voice-overs. It leaves you wondering how many gallons of water and other finite resources were wasted to make this nonsense.
Put yourself in the shoes of YouTubers who focus on unsexy content like tutorials: you are watching this low-effort drivel siphon off your audience and views. What is the point?
YouTube says it’s taking action against “mass‑produced” content, but the enforcement signal is inconsistent. When moderation catches detailed, hands‑on tutorials yet lets high‑volume synthetic content ride, the platform’s quality incentives look inverted.
What YouTube should do
- Publish a granular policy note for tutorials that alter OS setup, firmware, or device limits. Spell out permissible cases with examples, and draw lines based on harm, not optics.
- Add genuine human escalation for appeals with creator‑visible audit trails—timestamps, reviewer handoff, specific policy citations—so trust can be rebuilt.
- Separate “dangerous acts” policy from “software configuration” policy.
- Align creator‑tool recommendations with enforcement. If the Ideas tool suggests a topic, the policy should not instantly penalize it.
What creators should do now
- Document context in‑video and in descriptions. State the license requirement explicitly, the safety boundary, and a clear policy rationale (“educational content, not circumvention of paid features”). It’s extra work, but it gives human reviewers anchors to overturn automated flags.
- Diversify distribution. Mirror videos on PeerTube, Odysee, or self‑hosted pages. Link out in pinned comments. Redundancy is an economic safeguard.
- Keep a moderation dossier. Track removals, timestamps, appeal language, and topic overlap. Patterns help challenge decisions and inform sponsors.
- Avoid loaded phrasing that trips risk models. “Bypass online account” can become “Install with a local account using supported setup steps,” while preserving substance.
The deeper tension
Platforms face three simultaneous pressures:
- Legal and regulatory risk pushes more preemptive removal.
- Scale demands automation.
- Creator economies require predictability.
When “safety” is defined loosely and enforced opaquely, automation becomes a blunt instrument. The solution isn’t less automation. It’s better policy granularity, transparent appeal channels, and metrics that penalize wrongful removals as much as missed takedowns.
The FOSS angle
For Linux and open‑source communities, this matters beyond YouTube. Tutorials on bootloaders, firmware flashes, and kernel flags are core to user autonomy; my own story is an embodiment of that truth.
If mainstream platforms conflate technical education with harm, communities must own their distribution. Self‑hosting, federated video, and mirrored documentation aren’t ideological luxuries. They’re resilience strategies.
A sober take on “AI vs. human”
The question isn’t whether AI pressed the ban button. It’s whether automated signals dominated the path, whether appeals were truly reviewed, and whether creators can predict outcomes. Right now, too much uncertainty sits between upload and livelihood.
Make no mistake: YouTube can fix this. Publish specific tutorial allowances, make appeal escalation real, and tune risk models with creator input. Until then, expect more cautious creators, fewer hands‑on guides, and more formulaic slop.
If you care about a healthy creator ecosystem, the goal is simple: make the safest path the most transparent one. Not the quietest. Or we can count YouTube as another casualty of the enshittification era.