App Store submissions now open for the latest OS releases
September 9, 2025
iOS 26, iPadOS 26, macOS Tahoe 26, tvOS 26, visionOS 26, and watchOS 26 will soon be available to customers worldwide — which means you can now submit apps and games that take advantage of Apple’s broadest design update ever.
Build your apps and games using the Xcode 26 Release Candidate and latest SDKs, test with TestFlight, and submit for review to the App Store. By taking advantage of the new design and Liquid Glass, the Foundation Models framework, the new Apple Games app, and more, you can deliver even more distinctive experiences on Apple platforms.
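As a rough sketch of two of the new APIs mentioned above (not Apple sample code): `LanguageModelSession` from the Foundation Models framework and the `glassEffect()` / `.glass` modifiers from SwiftUI are part of the iOS 26 SDK, while the view name and prompt below are purely illustrative.

```swift
import SwiftUI
import FoundationModels

// Illustrative view combining an on-device Foundation Models prompt
// with Liquid Glass styling. `PromptView` and the prompt text are
// hypothetical; the framework APIs are from the iOS 26 SDK.
struct PromptView: View {
    @State private var answer = ""

    var body: some View {
        VStack(spacing: 12) {
            Text(answer)
            Button("Ask the on-device model") {
                Task {
                    // Foundation Models: run a prompt entirely on device.
                    let session = LanguageModelSession()
                    let response = try? await session.respond(
                        to: "Suggest a name for a hiking app."
                    )
                    answer = response?.content ?? "No response"
                }
            }
            .buttonStyle(.glass)  // Liquid Glass button style
        }
        .padding()
        .glassEffect()            // Liquid Glass material on the container
    }
}
```

Building this view requires Xcode 26 with the iOS 26 SDK; Foundation Models responses also require a device that supports Apple Intelligence.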
Starting April 2026, apps and games uploaded to App Store Connect need to meet the following minimum requirements.
- iOS and iPadOS apps must be built with the iOS 26 & iPadOS 26 SDK or later
- tvOS apps must be built with the tvOS 26 SDK or later
- visionOS apps must be built with the visionOS 26 SDK or later
- watchOS apps must be built with the watchOS 26 SDK or later
Learn more about submitting