Even Microsoft knows Copilot shouldn't be trusted with anything important
Terms admit it is for entertainment only and may get things wrong

A recent surge of interest in Microsoft's Terms of Use for Copilot is a reminder that AI helpers are really just a bit of fun.
Despite being last updated in late 2025, the Terms of Use document for Copilot for Individuals recently attracted fresh attention from netizens. It includes this gem: "Copilot is for entertainment purposes only. It can make mistakes, and it may not work as intended. Don't rely on Copilot for important advice. Use Copilot at your own risk."
Regular readers of The Register won't be shocked by Microsoft's warning that Copilot gets things wrong and should not be relied on. The company itself has long acknowledged the assistant's limitations. During the London leg of its AI tour, for example, every demonstration of Copilot wizardry came with a warning that the tool could not be fully trusted and that human verification was required.
The same applies to any other AI assistant: they can be useful, but their output still needs checking, particularly on anything consequential like medical advice or an investment plan.
- Welsh government used Copilot for review to justify closing organization
- Microsoft 365 pauses Copilot creep after admins cry foul
- Gartner suggests Friday afternoon Copilot ban because tired users may be too lazy to check its mistakes
- Microsoft Copilot to hijack your browser... for your own convenience
As one commenter on Hacker News pointed out: "Anthropic does a somewhat similar thing. If you visit their ToS (the one for Max/Pro plans) from a European IP address, they replace one section with this: Non-commercial use only. You agree not to use our Services for any commercial or business purposes and we (and our Providers) have no liability to you for any loss of profit, loss of business, business interruption, or loss of business opportunity." (The Register checked this from a US and a European IP and can confirm this is the case.)
The commenter added: "It's funny that a plan called 'Pro' cannot be used professionally."
As for Copilot's Terms of Use, they may not be new, but the attention is useful for two reasons. It is a reminder to read the text users so often click through, and it underlines that chatbots such as Copilot are neither companions nor dependable sources of advice.
Instead, they are error-prone tools that can be helpful one moment and confidently wrong the next. Some in the tech industry may market AI assistants as though they put a genius in every laptop, but Microsoft's own warning is rather less grand: "It can make mistakes, and it may not work as intended."
Copilot for Individuals may be for entertainment purposes only. Microsoft 365 Copilot, meanwhile, can be just as inaccurate, only with fewer laughs. ®
The Register AI/ML
https://go.theregister.com/feed/www.theregister.com/2026/04/02/copilot_terms_of_service/