Internet users are getting younger; now the UK is weighing whether AI can help protect them.

Artificial intelligence has been in the sights of governments concerned about how it could be misused for fraud, disinformation and other malicious activity online. Now, in the UK, a regulator is preparing to explore how AI can be used in the fight against some of the same, specifically when it comes to content harmful to children.

Ofcom, the regulator charged with enforcing the UK's Online Safety Act, announced that it plans to launch a consultation on how AI and other automated tools are used today, and how they could be used in the future, to proactively detect and remove illegal content online — specifically to protect children from harmful content and to identify child sexual abuse material that was previously difficult to detect.

The tools would be part of a wider set of proposals Ofcom is putting together focused on child online safety. Consultations for the comprehensive proposals will begin in the coming weeks and the consultation on AI will take place later this year, Ofcom said.

Mark Bunting, director of Ofcom's Online Safety Group, says the regulator's interest in AI starts with a look at how well it works as a detection tool today.

“Some services already use these tools to identify and protect children from this content,” he said in an interview with TechCrunch. “But there isn’t much information about how accurate and effective those tools are. We want to look at ways to ensure that the industry evaluates [that] when they use them, ensuring that risks to freedom of expression and privacy are managed.”

A likely outcome will be Ofcom recommending how and what platforms should evaluate, which could lead not only to platforms adopting more sophisticated tools, but also to fines if they fail to deliver improvements in blocking content or in keeping younger users from seeing it.

“As with many online safety regulations, the onus is on companies to ensure they are taking the right measures and using the right tools to protect users,” he said.

There will be both critics and supporters of the measures. AI researchers are finding increasingly sophisticated ways to use AI to detect, for example, deepfakes, as well as to verify users online. However, there are just as many skeptics who point out that AI detection is far from infallible.

Ofcom announced the consultation on AI tools at the same time as publishing its latest research into how children interact online in the UK, which found that, overall, more younger children are online than ever before — to the point that Ofcom is now breaking out activity among ever-younger age groups.

Nearly a quarter (24%) of all children aged 5 to 7 now own their own smartphone, and when tablets are included the figure rises to 76%, according to a survey of UK parents. That same age group is also using media far more on those devices: 65% have made voice and video calls (up from 59% just a year ago), and half of the children (up from 39% a year ago) are watching streaming media.

Age restrictions on some mainstream social media apps may be getting lower, but whatever the limits are, they don't appear to be respected in the UK anyway. Around 38% of children aged 5 to 7 use social media, Ofcom found. Meta's WhatsApp, at 37%, is the most popular app among them. And in possibly the first instance of Meta's flagship image app being relieved to be less popular than ByteDance's viral sensation, TikTok was found to be used by 30% of 5- to 7-year-olds, while Instagram was at "only" 22%. Discord rounded out the list but is significantly less popular, at just 4%.

About a third (32%) of children this age go online on their own, and 30% of parents said they were fine with their underage children having social media profiles. YouTube Kids remains the most popular network among younger users, at 48%.

Gaming, a perennial favorite among children, is now used by 41% of children aged 5 to 7, and 15% of this age group play shooter games.

While 76% of parents surveyed said they had spoken to their young children about how to stay safe online, Ofcom points out that there are question marks over the gap between what a child sees and what that child might report. In researching older children aged 8 to 17, Ofcom interviewed them directly. It found that 32% of the children reported they had seen worrying content online, but only 20% of their parents said their children had reported anything.

Even allowing for some inconsistencies in reporting, "research suggests a disconnect between older children's exposure to potentially harmful online content and what they share with their parents about their online experiences," Ofcom writes. And worrying content is just one challenge: deepfakes are also a problem. Among 16- to 17-year-olds, Ofcom said, 25% said they were not confident in their ability to distinguish fake from real online.