
UK Online Safety Act: What It Means for Your Privacy in 2026

Lunyb Security Team · 10 min read

The UK Online Safety Act is the most sweeping piece of internet legislation Britain has ever passed. Designed to make the UK "the safest place in the world to be online," the Act introduces extensive new duties for tech platforms, age verification requirements for adult content, and powers that could fundamentally reshape how encrypted messaging works. But behind the headlines about protecting children lies a more complicated story — one with serious implications for the privacy of every UK internet user.

This guide explains what the Online Safety Act actually does, how it affects your day-to-day privacy, what age verification means in practice, and the ongoing battle over end-to-end encryption.

What Is the UK Online Safety Act?

The UK Online Safety Act 2023 is a law that places legal duties on online platforms to protect users — particularly children — from illegal and harmful content. It is enforced by Ofcom, the UK's communications regulator, which can fine companies up to £18 million or 10% of global annual turnover, whichever is greater, for non-compliance.

The Act received Royal Assent in October 2023 after years of political wrangling, and its provisions are being phased in throughout 2025 and 2026. It applies to any service with a "significant number of UK users" — meaning even non-UK companies must comply if British people use their platform.

Who Does the Act Apply To?

The Act covers a remarkably wide range of services:

  • User-to-user services: Social media (Facebook, X, TikTok), forums, dating apps, messaging services
  • Search services: Google, Bing, DuckDuckGo
  • Pornography providers: Both commercial sites and platforms hosting adult content
  • File-sharing and cloud services where users can share content
  • Online games with chat or user-generated content

The Three Categories of Content Under the Act

The Online Safety Act creates a tiered system of content that platforms must address. Understanding these categories is key to understanding how the Act affects your privacy.

1. Illegal Content

All in-scope services must take proactive steps to prevent users encountering illegal material. This includes child sexual abuse material (CSAM), terrorism content, fraud, threats to kill, controlling and coercive behaviour, intimate image abuse, and content encouraging suicide or serious self-harm.

2. Content Harmful to Children

Services likely to be accessed by children must protect them from "primary priority" content (pornography, suicide content, eating disorder content) and "priority" content (bullying, violence, harmful substances). This is what triggers the controversial age verification requirements.

3. Content Harmful to Adults ("Category 1" Services)

The largest platforms must give adult users tools to filter out legal-but-harmful content if they choose. The original "legal but harmful for adults" duties were watered down before the Act passed, but Category 1 services still face additional transparency requirements.

Age Verification: The Biggest Practical Change

From July 2025, any site hosting pornography or content harmful to children that is "likely to be accessed by under-18s" must implement "highly effective" age assurance. This is the most visible change for ordinary users — and the one with the deepest privacy implications.

How Age Verification Works

Ofcom has approved several methods of age assurance, including:

  1. Photo-ID matching: Uploading a passport or driving licence
  2. Facial age estimation: AI analyses a selfie to estimate your age
  3. Open banking checks: Your bank confirms you're over 18
  4. Mobile network operator checks: Your phone provider confirms your age
  5. Credit card verification: Holding a credit card implies adult status
  6. Digital identity wallets: Reusable verified credentials

The Privacy Concerns

Age verification creates an entirely new category of risk. Every time you prove your age to access an adult site, dating app, or even a Wikipedia-style platform, that data has to be processed by someone. The concerns include:

  • Data breaches: A leak of "who visited which adult site" would be catastrophic. The 2015 Ashley Madison breach showed exactly how dangerous such data can be.
  • Linking browsing to identity: Verification systems can theoretically link your real legal identity to specific content you view.
  • Third-party processors: Most platforms outsource age checks to third-party companies, expanding the chain of custody for your personal data.
  • Function creep: Once infrastructure exists, it can be repurposed. What starts as porn-site verification could expand to all manner of online activities.

Ofcom requires age-assurance providers to follow data minimisation principles, but enforcement is patchy and the technology is new. If you do need to verify your age online, look for providers offering "double-blind" systems where the verification provider doesn't know which site you're visiting, and the site doesn't see your ID.
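To make the "double-blind" idea concrete, here is a deliberately simplified Python sketch. Everything in it is hypothetical: real schemes use blind signatures or zero-knowledge proofs rather than a shared HMAC key, but the flow of information is the point. The verifier emits only a signed yes/no claim, and the site checks the signature without ever seeing an ID.

```python
# Hypothetical sketch of a double-blind age check (illustration only).
# The verifier checks your ID privately and signs a bare "over 18" claim;
# the site validates the signature but never sees the ID document.
import hashlib
import hmac
import json
import time

VERIFIER_KEY = b"demo-key"  # invented shared secret; real schemes use public-key crypto

def issue_age_token(user_is_over_18: bool) -> dict:
    """Verifier side: emits only a signed yes/no claim, never the ID itself."""
    claim = {"over_18": user_is_over_18, "issued": int(time.time())}
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(VERIFIER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def site_accepts(token: dict) -> bool:
    """Site side: checks the signature; learns nothing but the yes/no flag."""
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    expected = hmac.new(VERIFIER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["sig"]) and token["claim"]["over_18"]

print(site_accepts(issue_age_token(True)))  # True
```

Note what each party learns: the site gets a boolean, and the verifier never receives the site's name in the request. That separation, not any particular signature scheme, is what "double-blind" means.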

The Encryption Battle: Clause 122 and the Backdoor Debate

The most controversial part of the Online Safety Act is Section 121 (originally Clause 122), which gives Ofcom the power to require platforms to use "accredited technology" to identify CSAM in private messages — including those protected by end-to-end encryption.

Why This Matters

End-to-end encryption is the technology that ensures only you and the person you're messaging can read your conversations. Not Meta, not Apple, not the government. End-to-end encryption protects everything from family photos to journalistic sources, and it's foundational to digital privacy.
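A toy key exchange shows why intermediaries are locked out. This is an illustration only, with parameters far too small to be secure and no authentication; real messengers use X25519 and ratcheting protocols. But the principle is the same: each device keeps a secret that never crosses the network, yet both ends derive the same key.

```python
# Toy Diffie-Hellman exchange: the server relays only public values,
# so it can never compute the key the two endpoints share.
# Illustrative only; these parameters are NOT cryptographically secure.
import secrets

P = 2**127 - 1  # a Mersenne prime; toy-sized modulus
G = 3

alice_secret = secrets.randbelow(P - 2) + 1  # never leaves Alice's device
bob_secret = secrets.randbelow(P - 2) + 1    # never leaves Bob's device

alice_public = pow(G, alice_secret, P)  # sent over the wire
bob_public = pow(G, bob_secret, P)      # sent over the wire

# Each side combines its own secret with the other's public value.
alice_key = pow(bob_public, alice_secret, P)
bob_key = pow(alice_public, bob_secret, P)

print(alice_key == bob_key)  # True — the server only ever saw the public values
```

Anyone watching the wire sees `alice_public` and `bob_public` but cannot feasibly recover either secret, which is why a platform cannot hand over message keys it never possessed.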

The problem: there is no known way to scan encrypted messages for illegal content without breaking the encryption itself. The proposed solution — "client-side scanning," where your device checks messages before they're encrypted — is widely regarded by cryptographers as introducing a permanent backdoor.
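A minimal sketch makes the architectural point. The blocklist, hashes, and function names below are invented for illustration; real proposals use perceptual hashes that also match near-duplicates and can false-positive. But in any variant the check runs on your device before encryption, which is why cryptographers describe it as a scanning point inside the supposedly encrypted channel.

```python
# Sketch of why "client-side scanning" sits before encryption (illustration only).
# A hypothetical blocklist of content hashes is checked on-device; matches are
# reported before end-to-end encryption ever runs.
import hashlib

BLOCKLIST = {hashlib.sha256(b"known-illegal-sample").hexdigest()}  # hypothetical

def send_message(plaintext: bytes) -> str:
    digest = hashlib.sha256(plaintext).hexdigest()
    if digest in BLOCKLIST:
        return "reported"        # flagged before encryption happens
    return "encrypted-and-sent"  # normal E2EE path (encryption omitted here)

print(send_message(b"hello"))                 # encrypted-and-sent
print(send_message(b"known-illegal-sample"))  # reported
```

The critical design question is who controls `BLOCKLIST`: whatever it contains gets reported, and nothing in the architecture limits it to CSAM. That is the "permanent backdoor" objection in a nutshell.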

The Government's Climbdown

During the Bill's passage, the government issued a written statement saying Ofcom would only require scanning "when technically feasible." Critics, including Signal, WhatsApp and the Internet Society, called this a face-saving compromise rather than a real concession — the legal power remains on the statute book and could be activated whenever the government decides scanning has become "feasible."

Signal President Meredith Whittaker has stated that Signal would withdraw from the UK rather than implement client-side scanning. WhatsApp's Will Cathcart has made similar statements. As of 2026, no Section 121 notice has been issued, but the threat remains.

What the Act Means for Your Privacy in Practice

Even if you never visit an adult website and never send a sensitive message, the Online Safety Act changes the privacy landscape in ways you should understand.

1. More Identity Checks for Everyday Services

Platforms err on the side of caution. Reddit, X, Bluesky, and even Wikipedia have had to consider age assurance for UK users. You'll increasingly be asked to prove who you are to access services that previously required nothing.

2. Pressure on Anonymous Speech

Category 1 services must offer "user verification" tools so adults can choose to interact only with verified users. While voluntary, this creates a two-tier internet where pseudonymous accounts — used by whistleblowers, abuse survivors, and dissidents — become marginalised.

3. Greater Use of AI Moderation

Compliance at scale means automated content scanning. Algorithms decide what gets removed, suspended, or reported. False positives are inevitable, and appeals processes are often inadequate.

4. The Risk of VPN-Driven Workarounds

Many UK users have already turned to VPNs to bypass age checks. While this restores some privacy, it also creates new risks: choose your VPN provider carefully, and remember that a VPN is worth using on public Wi-Fi networks regardless of regulatory issues.

How to Protect Your Privacy Under the Online Safety Act

The Act is the law, but you still have agency over how much of your data you expose. Here's a practical checklist:

  1. Use encrypted messengers: Signal, WhatsApp, and iMessage all use end-to-end encryption. Use them for anything sensitive.
  2. Choose privacy-respecting age verification: When you must verify, pick providers that use zero-knowledge proofs or facial age estimation rather than full ID upload.
  3. Minimise account linkage: Don't use "Sign in with Google/Facebook" for sensitive services. Use separate email aliases.
  4. Use a reputable VPN: A no-logs VPN protects your IP address from sites and your browsing from your ISP.
  5. Be careful with link shorteners: Some shorteners track every click. Privacy-respecting tools like Lunyb let you share links without exposing your audience to invasive tracking — useful for journalists, researchers and anyone sharing sensitive material under the new regime.
  6. Review platform privacy settings: Many Category 1 services now have new "user empowerment" toggles. Use them.
  7. Keep your software updated: Many privacy protections only work on current OS versions.

Comparing the UK Approach to Other Jurisdictions

The UK isn't alone in regulating online content, but its approach is among the most aggressive. Here's how it compares.

Jurisdiction | Key Law | Age Verification | Encryption Stance | Max Fine
United Kingdom | Online Safety Act 2023 | Mandatory for adult content | Power to require scanning (dormant) | £18m or 10% turnover
European Union | Digital Services Act | Risk-based, not mandatory | Encryption protected (currently) | 6% global turnover
Australia | Online Safety Act 2021 | Trial ongoing | Assistance and Access Act allows requests | AU$782,500 per breach
United States | Patchwork (state-level) | Varies by state (Texas, Utah, etc.) | Generally protected | Varies
Canada | Online Harms Act (proposed) | Not yet defined | Encryption to be protected | Up to 6% global revenue

For Australian readers, our guide to protecting your privacy online in Australia covers the equivalent local concerns in detail.

Pros and Cons of the Online Safety Act

Pros

  • Creates real legal accountability for platforms that have long resisted moderation
  • Provides meaningful protections against CSAM, terrorism content, and intimate image abuse
  • Forces transparency on how the largest platforms operate
  • Gives users new tools to control their experience on Category 1 services
  • Raises the floor for child safety online

Cons

  • Threatens end-to-end encryption that protects all users
  • Creates honeypots of sensitive identity data via age verification
  • Pushes towards a verified, less anonymous internet
  • Enforcement burden may push smaller platforms out of the UK market
  • Vague definitions of "harm" risk over-removal of legitimate speech
  • Significant compliance costs raise barriers to new entrants

What Comes Next?

Ofcom is still rolling out codes of practice through 2026 and beyond. Expect more services to introduce age checks, more aggressive moderation of borderline content, and continued political pressure on encryption. The Information Commissioner's Office (ICO) is also publishing guidance on how the Act interacts with UK GDPR — particularly around the lawful basis for processing identity documents.

For users, the long-term direction is clear: the era of casual, anonymous browsing in the UK is ending. Privacy will increasingly require active effort — choosing the right tools, reading privacy policies, and being deliberate about what you verify and where.

Frequently Asked Questions

Does the UK Online Safety Act apply to small websites?

Technically yes, if your site has UK users and allows user-generated content or hosts content harmful to children. However, Ofcom has signalled it will focus enforcement on larger platforms. Small forum operators should still complete an illegal content risk assessment and document their moderation policies.

Will the Online Safety Act break WhatsApp or Signal?

Not immediately. The power to require message scanning exists in law but has not been activated. Both Signal and WhatsApp have said they would withdraw from the UK rather than weaken their encryption. The standoff continues, with the government's "only when technically feasible" caveat acting as a temporary truce.

Can I use a VPN to avoid age verification?

Using a VPN to access a UK-blocked or UK-gated site isn't illegal for you as a user, and VPN sign-ups in the UK have surged since age checks rolled out. However, platforms may detect VPN traffic and block it, and you should still pick a trustworthy no-logs provider — your VPN sees everything your ISP would otherwise see.

What happens to my ID after age verification?

It depends entirely on the provider. Reputable age-assurance services delete the document immediately after verification, retaining only a yes/no flag. Less scrupulous ones may retain data longer. Always check the privacy policy of both the platform and the verification provider before uploading documents — and prefer methods like facial age estimation that don't require ID at all.

Does the Online Safety Act protect free speech?

The Act includes specific duties on Category 1 services to protect "content of democratic importance" and journalistic content, and it removed the original "legal but harmful for adults" provisions before passing. Critics argue these protections are weak in practice because automated moderation tends to over-remove. The real-world impact on free expression will be tested in the courts over the coming years.

How do I report a platform that isn't complying?

You can complain directly to Ofcom via their online complaints form once their full enforcement regime is active. For data protection concerns specifically — including misuse of age verification data — complain to the ICO at ico.org.uk.

Protect your links with Lunyb

Create secure, trackable short links and QR codes in seconds.

Get Started Free
