Ofcom finds UK children exposed to unavoidable violent online content
Every child interviewed by the media watchdog had viewed violent content on the internet
Research from the media watchdog has found that violent online content is now “unavoidable” for children in the UK, with many first exposed to it while still in primary school. Every British child interviewed for the Ofcom study had watched violent material on the internet. This ranged from videos of local school and street fights shared in group chats to explicit and extreme graphic violence, including gang-related content. Children were aware that even more extreme material was available in the web’s deeper recesses but had not actively sought it out. The findings prompted the NSPCC to accuse tech platforms of neglecting their duty of care to young users.
Rani Govender, a senior policy officer for child safety online at the NSPCC, said: “It is deeply concerning that children are telling us that being unintentionally exposed to violent content has become a normal part of their online lives. It is unacceptable that algorithms are continuing to push out harmful content that we know can have devastating mental and emotional consequences for young people.”
The research, conducted by the Family, Kids, and Youth agency, is part of Ofcom’s preparation for its new responsibilities under the Online Safety Act, passed last year. This act granted the regulator the power to crack down on social networks that fail to protect their users, particularly children.
Gill Whitehead, Ofcom’s online safety group director, remarked, “Children should not feel that seriously harmful content – including material depicting violence or promoting self-injury – is an inevitable or unavoidable part of their lives online. Today’s research sends a powerful message to tech firms that now is the time to act so they’re ready to meet their child protection duties under new online safety laws. Later this spring, we’ll consult on how we expect the industry to ensure that children can enjoy an age-appropriate, safer online experience.”
Almost every leading tech firm was named by the children and young people interviewed for the study, but Snapchat and Meta’s apps Instagram and WhatsApp came up most frequently.
“Children described private, often anonymous accounts dedicated to sharing violent content, such as videos of local school and street fights,” the report states. “Almost all children who had engaged with these accounts mentioned finding them on either Instagram or Snapchat.”
One 11-year-old girl said: “There’s peer pressure to pretend it’s funny. You feel uncomfortable on the inside, but pretend it’s funny on the outside.” A 12-year-old girl said she felt “slightly traumatized” after being shown a video of animal cruelty, noting that “everyone was joking about it”.
Several older children in the study “appeared to have become desensitized to the violent content they were encountering”, the report found. Professionals also expressed concern about violent content normalizing violence offline, reporting that children often laughed and joked about serious violent incidents.
On some social networks, exposure to graphic violence comes from the top. Recently, Twitter, now rebranded as X following its acquisition by Elon Musk, removed a graphic clip depicting sexual mutilation and cannibalism in Haiti. The clip had gone viral on the platform and had been reposted by Musk himself, in response to a report by the news channel NBC accusing him and other right-wing influencers of spreading unverified claims about the situation in Haiti.
On other platforms, tools intended to help children avoid violent content offer little help in practice. Many children, some as young as eight, told researchers they knew they could report content they did not want to see, but they did not trust that reporting would make a difference.
In private chats, children expressed concerns that reporting violent content could label them as “snitches,” leading to potential embarrassment or punishment from peers. They also lacked confidence that platforms would effectively penalize those who posted such content.
The prevalence of algorithmic timelines, particularly on platforms such as TikTok and Instagram, added a further complication: children believed that any interaction with violent content, even reporting it, would make them more likely to be recommended similar material.
Professionals involved in the study highlighted concerns about the impact of violent content on children’s mental health. A separate report by the children’s commissioner for England, released on Thursday, revealed that over 250,000 children and young people were awaiting mental health support after being referred to NHS services. This means that approximately one in every 50 children in England is on a waiting list. While the average waiting time for those who accessed support was 35 days, nearly 40,000 children experienced waits exceeding two years in the past year.
A spokesperson from Snapchat stated, “Violent content or threatening behavior has no place on Snapchat. We promptly remove such content and take necessary action against the offending account.
“We provide easy-to-use, confidential in-app reporting tools and collaborate with law enforcement to aid their investigations. We support the objectives of the Online Safety Act to safeguard individuals from online harms and remain actively engaged with Ofcom regarding its implementation.”