Microsoft Brings Anthropic's Claude Opus 4.5 to Foundry Preview
Microsoft has added Anthropic's Claude Opus 4.5 to Microsoft Foundry in public preview, highlighting benchmark gains for coding and tool use, new agent-building controls in Foundry, expanded vision and computer-use capabilities, and updated pricing and regional availability.
Read on Visual Studio Magazine → https://visualstudiomagazine.com/articles/2025/12/02/microsoft-brings-anthropics-claude-opus-4-5-to-foundry-preview.aspx

Governing AI Under Fire in Ukraine - The Cairo Review of Global Affairs
https://news.google.com/rss/articles/CBMifkFVX3lxTE5sTllMMG1Sek56LVJidWJXUkJjNDR1THFkN1E4N2I4V2RQX2dSRFZJOFlkOGI1TU96T2k4dFA1WlFTUUdiSWE4R2luRlBGS25YZFI1NDlKMGV5dFNJQUE5OE9vVU9CaHNZMGZTSzIzTHZncVJPZTJickY4THgxQQ?oc=5

LightHarmony3D: Harmonizing Illumination and Shadows for Object Insertion in 3D Gaussian Splatting
arXiv:2603.29209v1 Announce Type: new Abstract: 3D Gaussian Splatting (3DGS) enables high-fidelity reconstruction of scene geometry and appearance. Building on this capability, inserting external mesh objects into reconstructed 3DGS scenes enables interactive editing and content augmentation for immersive applications such as AR/VR, virtual staging, and digital content creation. However, achieving physically consistent lighting and shadows for mesh insertion remains challenging, as it requires accurate scene illumination estimation and multi-view consistent rendering. To address this challenge, we present LightHarmony3D, a novel framework for illumination-consistent mesh insertion in 3DGS scenes. Central to our approach is our proposed generative module that predicts a full 360° HDR e

M-MiniGPT4: Multilingual VLLM Alignment via Translated Data
arXiv:2603.29467v1 Announce Type: new Abstract: This paper presents a Multilingual Vision Large Language Model, named M-MiniGPT4. Our model exhibits strong vision-language understanding (VLU) capabilities across 11 languages. We utilize a mixture of native multilingual and translated data to push the multilingual VLU performance of the MiniGPT4 architecture. In addition, we propose a multilingual alignment training stage that uses parallel text corpora to further enhance the multilingual capabilities of our model. M-MiniGPT4 achieves 36% accuracy on the multilingual MMMU benchmark, outperforming state-of-the-art models in the same weight class, including foundation models released after the majority of this work was completed. We open-source our models, code, and translated datasets to fac
More in Models
Tunisia: President accuses artificial intelligence of ‘conspiring’ against humans - Middle East Monitor
https://news.google.com/rss/articles/CBMivwFBVV95cUxPa1lNZm5YQXhuQ0xUaGlKSVM0ekNCMnk2WS1rUGprV3VQLXpldWw1a2RpR2VmQUpueDgzNlc1Y2h6WnZIRXlQLW1mQmxKTmJZNmMtalVTdEhFMjIwWWJBSmRrc25oc1h4T3d1Y01CWVV2bTBQa2dYSzBjMUpMcC1BWFRxUzkwQ2g3XzJ3dHdYWTM1T1JYUVF2eWY3VzY1b01qY2NubVRfanQ1a3VqX3FyQXRDZHlrbXB5M0owMGdOd9IBxAFBVV95cUxNMndIZE5KelVCUWh5d3JLZDluaWx4Wk1jZTNvS0RQVDVLdE1rbDJRbTllMTNlaEozMGIwOVBhdWFSenNWM1VjOTc5LVAyS0ZUMXhITG1iSy1vUURJZzBVOGtTdFVFc2tfSHdfa2tIQlNHaXUtYnMyUVRwdjYyZUN2X2E2OWVodWtIN01MVVNodWhobXBkSnJfdnIwcGVFSzBldkVBcHJEUnFNekg4SjdKMVBqN2RpRzIyaVVmUXNYdTJqWVJa?oc=5

MemFactory: Unified Inference & Training Framework for Agent Memory
arXiv:2603.29493v1 Announce Type: new Abstract: Memory-augmented Large Language Models (LLMs) are essential for developing capable, long-term AI agents. Recently, applying Reinforcement Learning (RL) to optimize memory operations, such as extraction, updating, and retrieval, has emerged as a highly promising research direction. However, existing implementations remain highly fragmented and task-specific, lacking a unified infrastructure to streamline the integration, training, and evaluation of these complex pipelines. To address this gap, we present MemFactory, the first unified, highly modular training and inference framework specifically designed for memory-augmented agents. Inspired by the success of unified fine-tuning frameworks like LLaMA-Factory, MemFactory abstracts the memory lif
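The memory lifecycle the abstract names (extraction, updating, retrieval) can be illustrated with a minimal sketch. The `MemoryStore` class, its `FACT:` extraction rule, and the word-overlap retrieval below are illustrative assumptions for this digest, not MemFactory's actual API; in the framework these operations would be policies optimized with RL.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryStore:
    """Toy store illustrating the extract/update/retrieve memory lifecycle."""
    entries: list = field(default_factory=list)

    def extract(self, transcript: str) -> list:
        # Hypothetical extraction rule: keep only lines flagged as facts.
        return [ln[len("FACT:"):].strip() for ln in transcript.splitlines()
                if ln.startswith("FACT:")]

    def update(self, facts: list) -> None:
        # Deduplicate on insert; a learned policy would decide merge/overwrite.
        for fact in facts:
            if fact not in self.entries:
                self.entries.append(fact)

    def retrieve(self, query: str, k: int = 3) -> list:
        # Rank entries by naive word overlap with the query.
        query_words = set(query.lower().split())
        scored = sorted(self.entries,
                        key=lambda e: -len(query_words & set(e.lower().split())))
        return scored[:k]

mem = MemoryStore()
mem.update(mem.extract("FACT: user prefers Python\nchit-chat\nFACT: deadline is Friday"))
print(mem.retrieve("what language does the user prefer?", k=1))
```

Each stage is a swappable component, which is the kind of modularity a unified framework would expose for training and evaluation.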

From Physics to Surrogate Intelligence: A Unified Electro-Thermo-Optimization Framework for TSV Networks
arXiv:2603.29268v1 Announce Type: new Abstract: High-density through-substrate vias (TSVs) enable 2.5D/3D heterogeneous integration but introduce significant signal-integrity and thermal-reliability challenges due to electrical coupling, insertion loss, and self-heating. Conventional full-wave finite-element method (FEM) simulations provide high accuracy but become computationally prohibitive for large design-space exploration. This work presents a scalable electro-thermal modeling and optimization framework that combines physics-informed analytical modeling, graph neural network (GNN) surrogates, and full-wave sign-off validation. A multi-conductor analytical model computes broadband S-parameters and effective anisotropic thermal conductivities of TSV arrays, achieving 5%-10% relative
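The surrogate idea in the abstract (replace expensive full-wave simulation with a cheap fitted model for design-space sweeps) can be sketched in a few lines. The `expensive_sim` function and the 1/pitch basis below are placeholder assumptions, not the paper's physics or its GNN surrogate; the point is only the workflow of fitting a fast stand-in on a handful of expensive evaluations.

```python
import numpy as np

def expensive_sim(pitch_um):
    # Stand-in for a costly FEM run: insertion loss (dB) vs. TSV pitch (um).
    # Placeholder relation, not a real electrical model.
    return 0.5 + 40.0 / pitch_um

rng = np.random.default_rng(0)
pitch = rng.uniform(20.0, 100.0, size=200)              # sampled designs
loss = expensive_sim(pitch) + rng.normal(0.0, 0.01, 200)  # noisy "simulations"

# Cheap surrogate: least-squares fit of loss ~ a + b/pitch.
X = np.column_stack([np.ones_like(pitch), 1.0 / pitch])
coef, *_ = np.linalg.lstsq(X, loss, rcond=None)

def surrogate(pitch_um):
    return coef[0] + coef[1] / pitch_um

print(surrogate(50.0))  # evaluated in microseconds instead of hours
```

Once fitted, the surrogate makes thousands of design-space queries affordable, with the full-wave solver reserved for sign-off on the shortlisted designs, as the abstract describes.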

Lie Generator Networks for Nonlinear Partial Differential Equations
arXiv:2603.29264v1 Announce Type: new Abstract: Linear dynamical systems are fully characterized by their eigenspectra, accessible directly from the generator of the dynamics. For nonlinear systems governed by partial differential equations, no equivalent theory exists. We introduce Lie Generator Network--Koopman (LGN-KM), a neural operator that lifts nonlinear dynamics into a linear latent space and learns the continuous-time Koopman generator ($L_k$) through a decomposition $L_k = S - D_k$, where $S$ is skew-symmetric representing conservative inter-modal coupling, and $D_k$ is a positive-definite diagonal encoding modal dissipation. This architectural decomposition enforces stability and enables interpretability through direct spectral access to the learned dynamics. On two-dimensional
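The stability claim behind the decomposition $L_k = S - D_k$ follows from linear algebra: for any eigenpair $L_k v = \lambda v$, the quantity $v^* S v$ is purely imaginary when $S$ is real skew-symmetric, so $\mathrm{Re}(\lambda) = -v^* D_k v < 0$ whenever $D_k$ is positive definite. A minimal numerical check (random matrices standing in for the learned operators, not the paper's trained networks):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
A = rng.standard_normal((n, n))
S = A - A.T                               # skew-symmetric: conservative coupling
D = np.diag(rng.uniform(0.1, 1.0, n))     # positive-definite diagonal: dissipation
L = S - D                                 # generator in the form L_k = S - D_k

# Every eigenvalue of L must have strictly negative real part,
# so the latent linear dynamics dz/dt = L z are stable by construction.
eigs = np.linalg.eigvals(L)
print(eigs.real)
```

This is the sense in which the architecture "enforces stability": it holds for any parameter values of $S$ and $D_k$, not just trained ones, and the spectrum of $L$ is directly inspectable.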