Google Joins AI IDE Race to Compete with VS Code, Apparently Forking VS Code
Google has introduced Antigravity, an agent-first IDE built around Gemini 3 and other LLMs. It enters the AI IDE arena alongside Visual Studio Code, amid numerous reports and clues suggesting it may be based on a VS Code fork.
Read on Visual Studio Magazine →
https://visualstudiomagazine.com/articles/2025/11/20/google-joins-ai-ide-race-to-compete-with-vs-code-apparently-forking-vs-code.aspx

More about: gemini, report, agent

ContractSkill: Repairable Contract-Based Skills for Multimodal Web Agents
arXiv:2603.20340v2 Announce Type: replace Abstract: Self-generated skills for web agents are often unstable and can even hurt performance relative to direct acting. We argue that the key bottleneck is not only skill generation quality, but the fact that web skills remain implicit and therefore cannot be checked or locally repaired. To address this, we present ContractSkill, a framework that converts a draft skill into an executable artifact with explicit procedural structure, enabling deterministic verification, fault localization, and minimal local repair. This turns skill refinement from full rewriting into localized editing of a single skill artifact. Experiments on VisualWebArena show that ContractSkill is effective in realistic web environments, while MiniWoB provides a controlled t
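The abstract's core idea, an executable skill with explicit contracts that can be verified step by step and repaired locally, can be illustrated with a minimal sketch. All names here (Step, Skill, repair) are hypothetical; the paper's actual interface is not shown in the excerpt.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

# Hypothetical contract-based skill: each step carries an explicit
# pre/postcondition, so a failure can be localized to one step and
# repaired without regenerating the whole skill.
@dataclass
class Step:
    name: str
    action: Callable[[dict], dict]          # transforms page state
    precondition: Callable[[dict], bool]    # must hold before acting
    postcondition: Callable[[dict], bool]   # must hold after acting

@dataclass
class Skill:
    steps: list = field(default_factory=list)

    def run(self, state: dict) -> tuple:
        """Execute steps; return final state and the name of the first
        step whose contract failed (None if all contracts held)."""
        for step in self.steps:
            if not step.precondition(state):
                return state, step.name      # fault localized here
            state = step.action(state)
            if not step.postcondition(state):
                return state, step.name
        return state, None

# Localized repair: replace only the faulty step instead of rewriting
# the entire skill artifact.
def repair(skill: Skill, faulty: str, new_step: Step) -> Skill:
    steps = [new_step if s.name == faulty else s for s in skill.steps]
    return Skill(steps=steps)
```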
Structured Intent as a Protocol-Like Communication Layer: Cross-Model Robustness, Framework Comparison, and the Weak-Model Compensation Effect
arXiv:2603.29953v1 Announce Type: cross Abstract: How reliably can structured intent representations preserve user goals across different AI models, languages, and prompting frameworks? Prior work showed that PPS (Prompt Protocol Specification), a 5W3H-based structured intent framework, improves goal alignment in Chinese and generalizes to English and Japanese. This paper extends that line of inquiry in three directions: cross-model robustness across Claude, GPT-4o, and Gemini 2.5 Pro; controlled comparison with CO-STAR and RISEN; and a user study (N=50) of AI-assisted intent expansion in ecologically valid settings. Across 3,240 model outputs (3 languages x 6 conditions x 3 models x 3 domains x 20 tasks), evaluated by an independent judge (DeepSeek-V3), we find that structured prompting s
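As a rough illustration of a 5W3H-style structured intent, here is a minimal sketch that serializes the eight fields into a protocol-like block. The field names follow the generic 5W3H convention (who, what, when, where, why, how, how much, how many); the PPS paper's concrete schema may differ, and the example values are invented.

```python
from dataclasses import dataclass, asdict
import json

# Hypothetical 5W3H structured intent; serialized as JSON so any model
# receives the same protocol-like representation of the user's goal.
@dataclass
class StructuredIntent:
    who: str
    what: str
    when: str
    where: str
    why: str
    how: str
    how_much: str
    how_many: str

    def to_prompt(self) -> str:
        return json.dumps(asdict(self), indent=2, ensure_ascii=False)

intent = StructuredIntent(
    who="a small-business owner",
    what="draft a refund policy page",
    when="before the holiday sale",
    where="the online storefront",
    why="reduce support tickets",
    how="plain language, bulleted terms",
    how_much="under 300 words",
    how_many="one page",
)
print(intent.to_prompt())
```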
Dynamic Cogeneration of Bug Reproduction Test in Agentic Program Repair
arXiv:2601.19066v2 Announce Type: replace Abstract: Bug Reproduction Tests (BRTs) have been used in many Automated Program Repair (APR) systems, primarily for validating promising fixes and aiding fix generation. In practice, when developers submit a patch, they often implement the BRT alongside the fix. Our experience deploying agentic APR reveals that developers similarly desire a BRT within AI-generated patches to increase their confidence. However, canonical APR systems tend to generate BRTs and fixes separately, and focus on producing only the fix in the final patch. In this paper, we study agentic APR in the context of cogeneration, where the APR agent is instructed to generate both a fix and a BRT in the same patch. We evaluate the effectiveness of different cogeneration strategies
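The acceptance criterion that cogeneration implies can be sketched as a simple two-sided check: the bug reproduction test should fail on the unpatched tree and pass once the fix from the same patch is applied. The pytest invocation and directory layout below are illustrative assumptions, not the paper's harness.

```python
import subprocess

def test_passes(repo_dir: str, test_id: str) -> bool:
    # Run a single test in the given working tree; illustrative command.
    result = subprocess.run(
        ["python", "-m", "pytest", test_id, "-x", "-q"],
        cwd=repo_dir, capture_output=True,
    )
    return result.returncode == 0

def validate_cogenerated_patch(buggy_dir: str, patched_dir: str,
                               brt_id: str) -> bool:
    fails_before = not test_passes(buggy_dir, brt_id)   # BRT reproduces the bug
    passes_after = test_passes(patched_dir, brt_id)     # fix resolves it
    return fails_before and passes_after
```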
More in Models
"It's trained by non-disabled people": Evaluating How Image Quality Affects Product Captioning with Vision-Language Models
arXiv:2511.08917v3 Announce Type: replace Abstract: Vision-Language Models (VLMs) are increasingly used by blind and low-vision (BLV) people to identify and understand products in their everyday lives, such as food, personal care items, and household goods. Despite their prevalence, we lack an empirical understanding of how common image quality issues--such as blur, misframing, and rotation--affect the accuracy of VLM-generated captions and whether the resulting captions meet BLV people's information needs. Based on a survey of 86 BLV participants, we develop an annotated dataset of 1,859 product images from BLV people to systematically evaluate how image quality issues affect VLM-generated captions. While the best VLM achieves 98% accuracy on images with no quality issues, accuracy drops
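For intuition, here is a minimal sketch of synthetic versions of the quality issues the study measures (blur, rotation, misframing) using Pillow. Note this is only an assumption-laden illustration: the dataset itself consists of real photos taken by BLV participants, not synthetic degradations.

```python
from PIL import Image, ImageFilter

def degrade(img: Image.Image, issue: str) -> Image.Image:
    # Synthetic stand-ins for the quality issues named in the abstract.
    if issue == "blur":
        return img.filter(ImageFilter.GaussianBlur(radius=6))
    if issue == "rotation":
        return img.rotate(90, expand=True)
    if issue == "misframing":
        w, h = img.size
        return img.crop((w // 2, h // 2, w, h))  # product half out of frame
    return img

# One could then caption each variant with a VLM and compare the result
# against the caption produced for the clean image.
```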
FIRMED: A Peak-Centered Multimodal Dataset with Fine-Grained Annotation for Emotion Recognition
arXiv:2507.02350v3 Announce Type: replace Abstract: Traditional video-induced physiological datasets usually rely on whole-trial labels, which introduce temporal label noise in dynamic emotion recognition. We present FIRMED, a peak-centered multimodal dataset based on an immediate-recall annotation paradigm, with synchronized EEG, ECG, GSR, PPG, and facial recordings from 35 participants. FIRMED provides event-centered timestamps, emotion labels, and intensity annotations, and its annotation quality is supported by subjective and physiological validation. Benchmark experiments show that FIRMED consistently outperforms whole-trial labeling, yielding an average gain of 3.8 percentage points across eight EEG-based classifiers, with further improvements under multimodal fusion. FIRMED provides
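The contrast with whole-trial labeling can be sketched as a windowing step: cut a fixed window around each annotated peak timestamp rather than assigning one label to the entire trial. The sampling rate and window length below are illustrative, not FIRMED's actual parameters.

```python
import numpy as np

def peak_windows(signal: np.ndarray, peak_times_s: list,
                 fs: int = 256, half_window_s: float = 2.0) -> np.ndarray:
    # Extract a window of +/- half_window_s around each peak timestamp.
    half = int(half_window_s * fs)
    windows = []
    for t in peak_times_s:
        center = int(t * fs)
        lo, hi = center - half, center + half
        if lo >= 0 and hi <= len(signal):
            windows.append(signal[lo:hi])
    return np.stack(windows) if windows else np.empty((0, 2 * half))

eeg = np.random.randn(60 * 256)          # one minute of fake EEG at 256 Hz
segments = peak_windows(eeg, peak_times_s=[12.5, 33.0, 48.2])
print(segments.shape)                     # (3, 1024)
```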
AI In Cybersecurity Education -- Scalable Agentic CTF Design Principles and Educational Outcomes
arXiv:2603.21551v2 Announce Type: replace Abstract: Large language models are rapidly changing how learners acquire and demonstrate cybersecurity skills. However, when human--AI collaboration is allowed, educators still lack validated competition designs and evaluation practices that remain fair and evidence-based. This paper presents a cross-regional study of LLM-centered Capture-the-Flag competitions built on the Cyber Security Awareness Week competition system. To understand how autonomy levels and participants' knowledge backgrounds influence problem-solving performance and learning-related behaviors, we formalize three autonomy levels: human-in-the-loop, autonomous agent frameworks, and hybrid. To enable verification, we require traceable submissions including conversation logs, agent
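A minimal sketch of how the three autonomy levels and the traceability requirement might be encoded; the enum values mirror the abstract, while the submission fields are hypothetical, not the competition system's schema.

```python
from enum import Enum
from dataclasses import dataclass, field

class Autonomy(Enum):
    HUMAN_IN_THE_LOOP = "human-in-the-loop"
    AUTONOMOUS_AGENT = "autonomous agent framework"
    HYBRID = "hybrid"

@dataclass
class Submission:
    team: str
    challenge: str
    flag: str
    autonomy: Autonomy
    # Traceable evidence required for verification, per the abstract.
    conversation_log: list = field(default_factory=list)
    agent_trace: list = field(default_factory=list)
```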
Gesture Recognition from Body-Worn RFID under Missing Data
arXiv:2601.16301v2 Announce Type: replace Abstract: We explore hand-gesture recognition through the use of passive body-worn reflective tags. A data processing pipeline is proposed to address the issue of missing data. Specifically, missing information is recovered through linear and exponential interpolation and extrapolation. Furthermore, imputation and proximity-based inference are employed. We represent tags as nodes in a temporal graph, with edges formed based on correlations between received signal strength (RSS) and phase values across successive timestamps, and we train a graph-based convolutional neural network that exploits graph-based self-attention. The system outperforms state-of-the-art methods with an accuracy of 98.13% for the recognition of 21 gestures. We achieve 89.28% a
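The missing-data recovery step can be sketched with pandas: linear interpolation fills interior gaps in per-tag RSS traces, with edge values filled in both directions, and edges between tags can then be weighted by correlation of the recovered traces. The values and the single correlation call below are synthetic illustrations, not the paper's pipeline.

```python
import numpy as np
import pandas as pd

# Synthetic RSS readings (dBm) from two body-worn tags with gaps.
rss = pd.DataFrame(
    {"tag_1": [-52.0, np.nan, -55.0, np.nan, -58.0],
     "tag_2": [np.nan, -60.0, np.nan, -63.0, np.nan]},
)

# Linear interpolation for interior gaps; limit_direction="both" also
# fills the leading/trailing missing values.
recovered = rss.interpolate(method="linear", limit_direction="both")
print(recovered)

# A temporal-graph edge weight between two tags could then be derived
# from the correlation of their recovered RSS traces.
corr = np.corrcoef(recovered["tag_1"], recovered["tag_2"])[0, 1]
```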