[D] ICML Reviewer Acknowledgement
Hi, I'm a little confused about the ICML discussion period. Has the period for reviewers to acknowledge author responses already ended? One of the four reviewers has not posted any response on one of my papers. Do you know if that reviewer can still change their score before April 7th? There is also a reviewer comment that I will answer on Monday. Will the reviewer be able to update their score after seeing my answer? Thanks! submitted by /u/Massive_Horror9038
Read on Reddit (r/MachineLearning): https://www.reddit.com/r/MachineLearning/comments/1scaajx/d_icml_reviewer_acknowledgement/
