News

Online Safety Act: New data shows broad support, but almost half worried about censorship

New data from verification and anti-fraud platform Sumsub shows that while 64% agree with the rules, many are worried about Government overreach and poor enforcement 

  • 64% of people agree it protects children – with support higher among parents of young children and lower among ‘empty nest’ parents
  • 48% concerned it will lead to censorship 
  • 57% of those who disagree with the checks believe they “are already too easily bypassed” and “aren’t fit for purpose”
  • 35% report seeing age restrictions for ‘safe for work’ (non-harmful and not just for adults) content, while 32% have seen unrestricted adult content 
  • 26% still don’t trust AI-augmented facial recognition scans to accurately estimate age

Just over a month after the Online Safety Act came into force, compelling websites hosting adult content to verify users’ ages to protect children, many are seriously concerned about its ethical and practical implications.

Early data from a survey of 2,000 UK consumers shows people are worried that the Government and OFCOM are incapable of properly enforcing the Act. Respondents were also concerned about censorship through overreach – flagging that online content they’d view as ‘safe for all’ has been impacted, as well as harmful content being left unrestricted.

Despite this, there is clear support for the Act and its aims; indeed, many of those who disagree with the Act do so because it is inaccurately or insufficiently enforced.

Early support for new online regime – parents of young children especially

Findings from global verification and anti-fraud leader Sumsub show broad support for these new online processes.

  • 64% of people agree verification checks protect children from potentially harmful content online.
  • Support is lower among ‘empty nest’ parents with adult children (57%) and higher among parents with children under 18 living at home (78%)

Uneven application causing problems

Despite broad support, many are finding the Online Safety Act either excessively or insufficiently enforced.

  • 35% report seeing age restrictions for ‘safe for work’ (non-harmful and not just for adults) content
  • Almost half (48%) worry that these new rules, and how they’re being adopted, increase the risk of censorship, given the types of non-pornographic and non-harmful content requiring age verification
  • Conversely, 32% report regularly viewing adult content on the internet that should be restricted but isn’t.

One in four people don’t trust AI to accurately estimate age

  • 26% report not trusting AI-augmented facial recognition scans to accurately estimate age.
  • Interestingly, those aged 25-34 – typically the group with the most digital literacy and exposure to AI – are the most trusting, with 69% trusting the technology.
  • Conversely, those aged 55 and above were the most suspicious of facial scan-led verification methods, with a third not trusting the technology.

Among those who disagreed that these new checks were helping to protect children:

  • 57% believed that “they are already too easily bypassed” and “aren’t fit for purpose”.
  • 50% don’t think the Government and OFCOM can properly enforce the Act – leaving harmful content online and viewable by children

Deepfakes and the OSA

The growing sophistication and availability of AI-generated images, videos and documents presents a new and growing threat to traditional methods of identity verification – including age estimation. From Q1 2024 to Q1 2025 in the UK, Sumsub’s data shows that:

  • Deepfakes have increased by 900%
  • Synthetic document forgery, where generative AI is used to create artificial identity documents, increased by 275%

Pavel Goldman-Kalaydin, Head of AI/ML at Sumsub, said: “We’ve seen countless examples of people getting past age verification checks – from sophisticated 3D models to simple screenshots from low resolution games. Clearly more has to be done to plug the gaps and ultimately protect children from harmful digital content. Online firms have a role to play too. Facial recognition technology isn’t foolproof. Adopting multiple layers of verification – such as a scan of an ID document like a passport alongside a live facial scan – is vital. This will make it that much harder to bypass checks while minimising unnecessary user friction to encourage compliance.”

Kat Cloud, Head of Government Relations at Sumsub, said: “Our survey clearly shows that the Online Safety Act has laudable aims, and that the Government is right in its attempts to protect children from harmful digital content. It has fallen short, however. Whether it’s due to sites being reluctant to limit traffic, a fear of falling foul of the regulator, or a lack of clarity in exactly what counts as ‘adult content’ and therefore what should be restricted, the Act is being applied unevenly. OFCOM needs the will and resources to tackle websites that don’t comply, and websites need to know exactly what does or doesn’t need age verification.”

This representative survey of 2,000 UK respondents was conducted 18-20 August, around 3.5 weeks after the UK’s Online Safety Act came into force on Friday 25 July. The full results can be accessed here.

  • September 11, 2025
  • Corporate
