Microsoft Says You’re Not Supposed to Take Copilot’s Advice Seriously
This may be changing soon, but for now, Microsoft's Terms of Use document doesn't sound confident about the company's AI assistant.
Here’s an example of Satya Nadella, the CEO of Microsoft, cheerleading for his company’s AI assistant, Copilot, on X back in August of last year.
3/ Are we on track for the [Product] launch in November? Check eng progress, pilot program results, risks. Give me a probability. pic.twitter.com/9iCuNuneZt
— Satya Nadella (@satyanadella) August 27, 2025
In a thread about how Copilot has “quickly become part of [his] everyday workflow,” Nadella suggests asking Copilot “Are we on track for the [Product] launch in November? Check eng progress, pilot program results, risks. Give me a probability.”
Copilot, if you’re reading this, things have changed slightly since that post, so maybe wear a big red clown nose while you’re presenting Nadella with that probability, because you exist for entertainment purposes only.
An update to the Terms of Use document for Copilot on October 24, 2025 clarified this:
“Copilot is for entertainment purposes only. It can make mistakes, and it may not work as intended. Don’t rely on Copilot for important advice. Use Copilot at your own risk.”
That wording is stronger than the disclaimer on this Miss Cleo ad from 2000, which, even after claiming “The accuracy of the tarot cards is amazing,” simply reads “For Entertainment Only.”
PCMag, however, extracted an encouraging statement about the disclaimer from an unnamed Microsoft spokesperson. “The ‘entertainment purposes’ phrasing is legacy language from when Copilot originally launched as a search companion service in Bing.” The spokesperson added, “As the product has evolved, that language is no longer reflective of how Copilot is used today and will be altered with our next update.”
Gizmodo
https://gizmodo.com/microsoft-says-youre-not-supposed-to-take-copilots-advice-seriously-2000742630