Peppa Pig and Transformers owner Hasbro hit by cyber-attack
The US company filed a report with the Securities and Exchange Commission saying it had discovered hackers in its systems on 28 March.
Imran Rahman-Jones, Technology reporter
Toy and entertainment giant Hasbro - which owns brands including Peppa Pig, Transformers and Monopoly - has been hacked.
Parts of its website and those of its brands were showing an error message on Wednesday afternoon, with the company warning the cyber-attack could delay product deliveries.
Other Hasbro lines include Play-Doh, Power Rangers, Nerf and Dungeons & Dragons.
In its filing to the Securities and Exchange Commission (SEC), Hasbro said the breach was discovered on 28 March.
It is not known whether the cyber-criminals are still in the company's systems, whether they have contacted Hasbro, or whether customer data has been compromised.
"While this is an unfortunate incident, Hasbro's business operations remain open," a Hasbro spokesperson told BBC News.
They added: "We have taken swift action to protect our systems and data," including taking some systems offline.
In its SEC filing, Hasbro said it had put measures in place so it could continue taking and shipping orders, but that these could remain in place "for several weeks" and "may result in some delays".
Over its 103-year history, Hasbro has become the owner of some of the world's most recognisable toy brands.
An error page showed the Peppa Pig website was down
Around Easter 2025, a number of retail businesses in the UK fell victim to cyber-attacks, including M&S, Co-op and Harrods.