Ofcom Pushes Tech Firms to Strengthen Online Safety
The post Ofcom Pushes Tech Firms to Strengthen Online Safety appeared first on DIGIT.
More than 70 risk assessments have been legally mandated from 40 of the ‘largest and riskiest’ sites and apps across the globe.
Ofcom has suggested that these assessments are a crucial part of keeping users safe online, and act as guides to putting appropriate safety measures in place. The guardrails are supposed to keep all users safe – but with an emphasis on children.
Risk audits require platform owners to identify how their platforms and features could harm users, and to put mitigation strategies in place. The UK’s Online Safety Act mandates that tech firms assess and mitigate the risks of people encountering illegal content, and of under-18s being exposed to certain types of harmful material (e.g. self-harm content and pornography).
Best practice suggests that these safety reviews be conducted annually and at critical times, e.g. when a new design is rolled out. In a move to hold tech firms to account, risk assessments will become public by the end of this year, allowing users to review potential platform risks and the provisions platform owners have put in place to mitigate them.
Failure to comply with these regulations can result in legal action and, in some cases, financial penalties.
Ofcom claimed the process is having the necessary effect. Last year, Snapchat’s risk assessments were flagged by Ofcom as having concerning results; in response, the company put additional safety measures in place to reduce its illegal-content risks.
The risk assessments form part of the broader regulation that tech companies are being pushed to adhere to in an effort to police online platforms.