
ShotStream: Streaming Multi-Shot Video Generation for Interactive Storytelling

HuggingFace Papers · by Yawen Luo · March 26, 2026 · 2 min read

ShotStream enables real-time interactive multi-shot video generation through causal architecture design, dual-cache memory mechanisms, and two-stage distillation to maintain visual consistency and reduce latency. (45 upvotes on HuggingFace)

Abstract

Multi-shot video generation is crucial for long narrative storytelling, yet current bidirectional architectures suffer from limited interactivity and high latency. We propose ShotStream, a novel causal multi-shot architecture that enables interactive storytelling and efficient on-the-fly frame generation. By reformulating the task as next-shot generation conditioned on historical context, ShotStream allows users to dynamically instruct ongoing narratives via streaming prompts. We achieve this by first fine-tuning a text-to-video model into a bidirectional next-shot generator, which is then distilled into a causal student via Distribution Matching Distillation. To overcome the challenges of inter-shot consistency and error accumulation inherent in autoregressive generation, we introduce two key innovations. First, a dual-cache memory mechanism preserves visual coherence: a global context cache retains conditional frames for inter-shot consistency, while a local context cache holds generated frames within the current shot for intra-shot consistency. A RoPE discontinuity indicator explicitly distinguishes the two caches, eliminating ambiguity between them. Second, to mitigate error accumulation, we propose a two-stage distillation strategy: it begins with intra-shot self-forcing conditioned on ground-truth historical shots and progressively extends to inter-shot self-forcing using self-generated histories, effectively bridging the train-test gap. Extensive experiments demonstrate that ShotStream generates coherent multi-shot videos with sub-second latency, achieving 16 FPS on a single GPU. It matches or exceeds the quality of slower bidirectional models, paving the way for real-time interactive storytelling. Training and inference code, as well as the models, are available on our project page.
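The dual-cache memory mechanism described in the abstract can be sketched at a high level. The sketch below is a hypothetical illustration, not code from the ShotStream release: lists of frame IDs stand in for KV-cache feature entries, and the class, method names, and the `<shot-boundary>` marker (playing the role of the RoPE discontinuity indicator) are all assumptions for illustration.

```python
class DualCacheMemory:
    """Illustrative sketch of a dual-cache memory for multi-shot generation.

    The global cache retains conditional frames from past shots (inter-shot
    consistency); the local cache holds frames generated within the current
    shot (intra-shot consistency).
    """

    def __init__(self, global_capacity: int, local_capacity: int):
        self.global_cache = []  # frames carried across shot boundaries
        self.local_cache = []   # frames generated within the ongoing shot
        self.global_capacity = global_capacity
        self.local_capacity = local_capacity

    def add_frame(self, frame):
        # Newly generated frames enter the local cache; evict the oldest
        # entry once the intra-shot window is full.
        self.local_cache.append(frame)
        if len(self.local_cache) > self.local_capacity:
            self.local_cache.pop(0)

    def end_shot(self):
        # At a shot boundary, promote a few representative frames of the
        # finished shot into the global cache, then reset the local cache
        # for the next shot.
        self.global_cache.extend(self.local_cache[-2:])
        self.global_cache = self.global_cache[-self.global_capacity:]
        self.local_cache = []

    def context(self):
        # Conditioning context for the next frame. The boundary marker
        # stands in for the RoPE discontinuity indicator, which tells the
        # model where cross-shot entries end and current-shot entries begin.
        return self.global_cache + ["<shot-boundary>"] + self.local_cache
```

For example, after generating four frames with a local capacity of three, the local cache holds only the last three; calling `end_shot()` promotes the two most recent frames into the global cache and clears the local one, so the next shot is conditioned on a compact cross-shot history rather than every past frame.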


Get this paper in your agent:

hf papers read 2603.25746

Don't have the latest CLI?

curl -LsSf https://hf.co/cli/install.sh | bash

Models citing this paper: 1 · Datasets citing this paper: 0 · Spaces citing this paper: 0 · Collections including this paper: 2
