Analyst Warns Against Using Microsoft’s Copilot AI on Friday Afternoons
"Copilot makes over-shared documents more accessible." The post Analyst Warns Against Using Microsoft’s Copilot AI on Friday Afternoons appeared first on Futurism .
Illustration by Tag Hartman-Simkins / Futurism. Source: Getty Images
As Microsoft has aggressively pushed its Copilot AI, it’s logged more than a few high-profile errors. Copilot has been found hallucinating police reports, exposing secure passwords, and digesting confidential emails — prompting security fears as its use in corporate and government settings becomes more common.
Dennis Xu, a research analyst at the firm Gartner, went as far as to suggest that companies using Copilot should ban it on Friday afternoons, because by that late juncture in the week, workers might be too checked out to double-check its work.
According to The Register, that warning — half-joking but half-serious — came at a Gartner panel called “Mitigating the Top 5 Microsoft 365 Copilot Security Risks,” held in Sydney, Australia this week.
“Copilot makes over-shared documents more accessible,” Xu warned. “This is not a net new risk, but a known risk amplified by AI.”
Per The Register, Xu spent 30 minutes discussing the five risks, 20 of those minutes devoted to Copilot’s penchant for exposing sensitive data when users fail to take the necessary precautions.
Let’s face it: given the dangerous hallucinations and reputational blunders that other models have facilitated, it might not be a bad idea to extend Xu’s Friday-afternoon ban to AI chatbots across the board.
More on AI: Meta’s Head of AI Safety Just Made a Mistake That May Cause You a Certain Amount of Alarm