The end of the free internet era. An experiment in human control in Britain

Introduction

00:00:00

From global crackdowns to a British two-tier internet
A world map of online control paints China, Iran, and North Korea in the deepest red, with Russia drifting toward their tier. Forecasts of an “internet by passport” arrive just as Britain, despite its high freedom rankings, detains about 30 people a day for online posts and warns citizens to “Think before you post.” On July 25, 2025, the UK activates a system that splits access by verified age: verified adults receive the full internet; everyone else sees a censored, cut-down version.

Cheap AI vision makes the ID-internet feasible
Plummeting costs in technologies like machine vision turn robust age checks from fantasy into commodity. The same deflation that took genome sequencing from $100M per genome to cheap consumer tests now powers face-based verification at scale. Banks, carriers, passports, and AI all converge to confirm age with high confidence. That convergence enables a practical, policy-grade “internet by passport.”

A tragedy, an algorithmic spiral, and a checklist of failures
In 2017, 14-year-old Molly Russell died by suicide after sustained exposure to self-harm and depression content, with roughly 2,000 of her 16,000 saved Instagram posts tied to such themes. Recommendation systems fed a relentless stream, forming an inescapable rabbit hole. A consulting psychiatrist described the material as so disturbing he lost sleep for weeks. The coroner flagged missing age modes, absent age checks, algorithm-driven delivery, no parental visibility or control, and no way to link child and parent accounts.

The Online Safety Act reframes platforms’ duties
Within a year of the coroner’s concerns, Britain advances the Online Safety Act, enacted on October 26, 2023. The law focuses on shielding children from pornography and self-harm content and on reducing bullying and hate. Platforms must publish transparency reports on how their algorithms operate and affect users. The policy’s spine becomes a two-internet design: one for adults, one for children.

High-assurance age checks and punitive fines
Self-attested “I am 18” clicks are invalid under the law. Verification must be highly effective, relying on passports, banks, mobile operators, or AI facial analysis. Noncompliance risks penalties of £18 million or 10% of global revenue, whichever is higher, implying multi-billion-pound exposure for the giants. Adult access now hinges on verified identity, which shocked many users.
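
To see why the cap implies multi-billion-pound exposure, here is a minimal sketch of the “whichever is higher” rule; the function name and sample revenue figures are illustrative, not drawn from the Act:

```python
# Penalty ceiling as described above: the greater of a flat GBP 18M
# or 10% of qualifying worldwide revenue. Figures below are invented.
def max_penalty_gbp(global_revenue_gbp: float) -> float:
    return max(18_000_000.0, 0.10 * global_revenue_gbp)

# The flat GBP 18M dominates below GBP 180M in revenue; past that point
# the 10% share takes over, so a GBP 100B giant faces a GBP 10B ceiling.
print(f"{max_penalty_gbp(50e6):,.0f}")   # small operator: 18,000,000
print(f"{max_penalty_gbp(100e9):,.0f}")  # global giant:  10,000,000,000
```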

Phases and the sweep beyond adult sites
Phase one, running through late 2024, covered illegal-content duties and risk assessments. Phase two, on child safety, started with mandatory gates on adult sites, then extended scrutiny to wherever children spend time: social networks, games, video services, messengers, and marketplaces. Any UK-accessing account must either prove adulthood or be served a child version. Attempts to enter adult features redirect users to third-party verifiers for high-assurance checks.
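
A rough sketch of that gating flow, with an invented account type and a placeholder verifier URL (real platforms implement this differently):

```python
# Hypothetical age-gate: UK traffic defaults to the child-safe version unless
# adulthood has been proven; requests for adult features hand off to a
# third-party verifier. All names here are illustrative.
from dataclasses import dataclass

@dataclass
class Account:
    uk_access: bool      # request originates from the UK
    age_verified: bool   # adulthood proven via a high-assurance check

VERIFIER_URL = "https://age-verifier.example/check"  # placeholder third party

def serve(account: Account, wants_adult_feature: bool) -> str:
    if not account.uk_access:
        return "full version"                  # duties attach to UK access only
    if account.age_verified:
        return "full version"                  # verified adults see everything
    if wants_adult_feature:
        return f"redirect -> {VERIFIER_URL}"   # off-platform verification
    return "child-safe version"                # the default for everyone else

print(serve(Account(uk_access=True, age_verified=False), wants_adult_feature=True))
# redirect -> https://age-verifier.example/check
```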

Edge-case harm and blunt filters
Keyword-driven moderation initially suppressed helpful threads like “how to quit smoking/drinking” simply because they contained flagged terms. Even a menstruation subreddit was swept up despite its obvious relevance to minors. One-off misclassifications ranged from flagging a SpongeBob GIF as confidential to blocking a post about the Crusades. Some reversals followed, but the structural split entrenches conservative, overbroad filtering.
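
A toy example of why pure keyword matching over-blocks; the flagged-term list and sample posts are invented for illustration:

```python
# Keyword-driven moderation in miniature: any post containing a flagged term
# is suppressed, regardless of intent or context.
import string

FLAGGED_TERMS = {"smoking", "drinking", "self-harm"}

def is_suppressed(post: str) -> bool:
    # Strip punctuation so "drinking," still matches "drinking".
    words = {w.strip(string.punctuation) for w in post.lower().split()}
    return bool(words & FLAGGED_TERMS)

posts = [
    "how to quit smoking after 20 years",            # helpful, still blocked
    "one year without drinking, ask me anything",    # helpful, still blocked
    "weekend hiking photos from the Lake District",  # passes
]
for post in posts:
    print("BLOCKED" if is_suppressed(post) else "ok     ", "|", post)
```

The filter has no notion of intent, which is exactly how quit-smoking threads end up suppressed alongside the content the law actually targets.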

Games, consoles, and streaming retool for age gates
Publishers lean on credit cards to infer 18+ status, blocking purchases of 18-rated titles without a card. Xbox restricts communication with strangers for unverified users, while marquee releases like GTA 6 demand adult proof; by contrast, Dota 2’s rating leaves younger players unaffected. Spotify rolled out highly effective age checks and deletes the accounts of under-13s. Companies must either fund costly verification or exit, a choice many small communities cannot afford.

The quiet extinction of small sites
Because any forum could theoretically host 18+ material, small operators face the full compliance burden. Many cannot build verification pipelines or bear liability and shut down. Trackers now document these closures as collateral damage. It is not mass blocking of giants, but it is reshaping the long tail of the web.

Safety beats privacy in official polling, but not in practice
A 2019 government White Paper spotlighted broad worry about online harm. Later regulator polling found majorities expecting child-safety gains and endorsing platform compliance. Independent surveys, by contrast, show reluctance to upload IDs (only 14% would do so for porn sites) amid fears of leaks and blackmail, with over half expecting eventual state censorship. Only about a third believe the Act will actually achieve its aims.

Circumvention via VPNs and verification exploits
Because the adult/child split hinges on UK access, VPNs became the straightforward bypass. After July 25, searches for VPNs spiked and VPN apps climbed the UK store rankings. Users even tricked Discord’s face verification with Death Stranding’s photo mode, where the AI accepted a lifelike game character as an adult. These gaps sap effectiveness while normalizing privacy tools.

A half-million-signature pushback meets a hard wall
On July 22, 2025, a petition to repeal the law gathered 550,000 signatures, entering the site’s all-time top five. The government issued a lengthy refusal, promising assistance to small sites but rejecting repeal. Given years of official groundwork, reversal was unlikely. Policy momentum outweighed public dissent.

From Green Paper to White Paper to a spotlighted case
A 2017 Green Paper floated regulation of online harm; in 2018 the government promised a detailed White Paper and new legislation targeting major platforms first. Molly Russell’s case stayed obscure for 15 months, then erupted in early 2019 with the BBC’s “Instagram helped kill my daughter,” drawing immediate ministerial condemnation. The family’s foundation, bolstered by the NSPCC, sent letters to tech leaders and gained a parliamentary platform. Over time, its mission emphasized social-platform regulation, aligning with the legislative agenda.

Timing, alignment, and debate-killing binaries
In September 2022, a new government paused the bill to reassess “legal but harmful,” alarming its backers. Just 24 days later, the coroner’s report appeared, pinning blame on social platforms and prescribing solutions that mirrored the bill. The sequencing and tight alignment fuel suspicion of an orchestrated narrative rather than coincidence. Critics were smeared through binaries like “opposed equals pro-suicide” or even “pedophile,” chilling substantive debate.

No monopoly vendor, but a new data-risk surface
Multiple firms, including foreign vendors, supply age-verification tech; there is no single monopolist. Rather than imposing outright blocks, platforms hold back adult features, graphic or depressive content, and stranger messaging until age is proven. Verification restores full functionality but creates a large, attractive trove of sensitive data. The model spreads regardless: Roblox ties chat access to face scans and says it deletes biometrics after checking.

The near-miss of policing “legal but harmful” speech
An earlier draft aimed to moderate not just illegal content for minors but also “legal but harmful” material for adults. Such elasticity invites creep: from quack cures to criticism of the economy, protests, or opposition reporting, all framed as mental or moral harm. The government removed that clause in late 2022, averting a path to broad political censorship. The door could reopen in future revisions.

A proactively sterilized internet, with trade-offs
Authorities increasingly treat the web as a generator of hatred and disorder to be preemptively sanitized. Two internets, mandatory age checks, and algorithmic transparency embody the new default. Safety goals resonate, yet overblocking, privacy loss, and chilled dissent shadow the model. Easy circumvention leaves effectiveness uncertain even as proactive control tightens.