ShotStream: Streaming Multi-Shot Video Generation for Interactive Storytelling
Abstract
ShotStream enables real-time interactive multi-shot video generation through causal architecture design, dual-cache memory mechanisms, and two-stage distillation to maintain visual consistency and reduce latency.
AI-generated summary
Multi-shot video generation is crucial for long narrative storytelling, yet current bidirectional architectures suffer from limited interactivity and high latency. We propose ShotStream, a novel causal multi-shot architecture that enables interactive storytelling and efficient on-the-fly frame generation. By reformulating the task as next-shot generation conditioned on historical context, ShotStream allows users to dynamically instruct ongoing narratives via streaming prompts. We achieve this by first fine-tuning a text-to-video model into a bidirectional next-shot generator, which is then distilled into a causal student via Distribution Matching Distillation. To overcome the challenges of inter-shot consistency and error accumulation inherent in autoregressive generation, we introduce two key innovations. First, a dual-cache memory mechanism preserves visual coherence: a global context cache retains conditional frames for inter-shot consistency, while a local context cache holds generated frames within the current shot for intra-shot consistency; a RoPE discontinuity indicator explicitly distinguishes the two caches to eliminate ambiguity. Second, to mitigate error accumulation, we propose a two-stage distillation strategy: it begins with intra-shot self-forcing conditioned on ground-truth historical shots and progressively extends to inter-shot self-forcing using self-generated histories, effectively bridging the train-test gap. Extensive experiments demonstrate that ShotStream generates coherent multi-shot videos with sub-second latency, achieving 16 FPS on a single GPU. It matches or exceeds the quality of slower bidirectional models, paving the way for real-time interactive storytelling. Training and inference code, as well as the models, are available on our project page.
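The dual-cache idea in the abstract can be sketched in a few lines. The snippet below is an illustrative mock-up, not the authors' implementation: class and method names (`DualCacheMemory`, `end_shot`, `context`) are invented, and the boolean flag stands in for the RoPE discontinuity indicator that separates historical (global) context from current-shot (local) context.

```python
from collections import deque

class DualCacheMemory:
    """Illustrative sketch of a dual-cache memory for causal multi-shot
    generation (hypothetical names; capacities are arbitrary)."""

    def __init__(self, global_capacity=8, local_capacity=16):
        # Global cache: conditional frames kept across shots (inter-shot consistency).
        self.global_cache = deque(maxlen=global_capacity)
        # Local cache: frames generated within the current shot (intra-shot consistency).
        self.local_cache = deque(maxlen=local_capacity)

    def add_frame(self, frame):
        # Each newly generated frame conditions later frames in the same shot.
        self.local_cache.append(frame)

    def end_shot(self):
        # At a shot boundary, promote a representative frame (here: the last one)
        # into the global cache, then reset the local cache for the next shot.
        if self.local_cache:
            self.global_cache.append(self.local_cache[-1])
        self.local_cache.clear()

    def context(self):
        # Context the generator attends to; the flag marks which cache a frame
        # came from, mimicking the RoPE discontinuity indicator (True = global).
        return ([(f, True) for f in self.global_cache]
                + [(f, False) for f in self.local_cache])
```

A short usage trace: after generating two frames, closing the shot, and generating one more, the context holds one global entry and one local entry, each tagged with its cache of origin.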
arXiv: arxiv.org/abs/2603.25746