You can now fine-tune open-source video models
Train your own versions of Tencent's HunyuanVideo for style, motion, and characters on Replicate.
AI video generation has gotten really good.
Some of the best video models like tencent/hunyuan-video are open-source, and the community has been hard at work building on top of them. We’ve adapted the Musubi Tuner by @kohya_tech to run on Replicate, so you can fine-tune HunyuanVideo on your own visual content.
Never Gonna Give You Up animal edition, courtesy of @flngr and @fofr.
HunyuanVideo is good at capturing the style of the training data, not only in the visual appearance of the imagery and the color grading, but also in the motion of the camera and the way the characters move.
This in-motion style transfer is unique to this implementation: models fine-tuned only on still images can't capture it.
Here are some examples of videos created using different fine-tunes, all with the same settings, size, prompt and seed:
Twin Peaks · Pixar · Cowboy Bebop · Westworld
You can make your own fine-tuned video model to:
- Create videos in a specific visual style
- Generate animations of particular characters
- Capture specific types of motion or movement
- Build custom video effects
In this post, we’ll show you how to gather training data, create a fine-tuned video model, and generate videos with it.
Prerequisites
- A Replicate account
- A video or YouTube URL to use as training data
Step 1: Create your training data
To train a video model, you’ll need a dataset of video clips and text captions describing each video.
This process can be time-consuming, so we’ve created a model to make it easier: zsxkib/create-video-dataset takes a video file or YouTube URL as input, slices it into smaller clips, and generates captions for each clip.
Here’s how to create training data right in your browser with just a few clicks:
1. Find a YouTube URL (or video file) that you want to use for training.
2. Go to replicate.com/zsxkib/create-video-dataset
3. Paste your video URL, or upload a video file from your computer.
4. Choose a unique trigger word like RCKRLL. Avoid using real words that have existing associations.
5. Click Run and download the resulting ZIP file.
Optional: Check the logs from your run if you want to see the auto-generated captions for each clip.
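You can also run the dataset step programmatically with the Replicate Python client. This is only a sketch: the input key names below (`video_url`, `trigger_word`) are assumptions, so check the model's API tab on Replicate for the exact schema before using it.

```python
import os

# Input for zsxkib/create-video-dataset. These key names are illustrative
# assumptions -- check the model's API tab for the real input schema.
dataset_input = {
    "video_url": "https://www.youtube.com/watch?v=dQw4w9WgXcQ",
    "trigger_word": "RCKRLL",
}

# Only call the API when a token is configured, so the script is
# harmless to run without credentials (requires `pip install replicate`).
if os.environ.get("REPLICATE_API_TOKEN"):
    import replicate

    # The output is the ZIP of clips and captions used for training.
    output = replicate.run("zsxkib/create-video-dataset", input=dataset_input)
    print(output)
```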
Step 2: Train your model
Now you’ll create your own fine-tuned video generation model using the training data you just compiled.
1. Go to replicate.com/zsxkib/hunyuan-video-lora/train
2. Choose a name for your model.
3. For the input_videos input, upload the ZIP file you just downloaded.
4. Enter the same trigger word you used before, e.g. RCKRLL.
5. Adjust training settings (we recommend starting with 2 epochs).
6. Click Create training.
Training typically takes about 5-10 minutes with default settings, but depends on the size and number of clips.
Step 3: Run your model
Once the training is complete, you can generate new videos in several ways:
- Run the model in your browser directly from your model’s page.
- Run your model in Replicate’s Playground: go to “Manage models” and type your model name.
- Use the API: go to your model’s page and click the API tab for code snippets.
You can run your model as an API with just a few lines of code.
Here’s an example using the replicate-javascript client:
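A minimal sketch of that call is below. The model name is a placeholder for your own fine-tune, the prompt is illustrative, and the API call is guarded behind the token check so the file runs harmlessly without credentials.

```javascript
// Build the prediction input first; the prompt must contain your trigger word.
const input = {
  prompt: "In the style of RCKRLL, a red fox running through tall grass",
};

async function generateVideo() {
  // Requires `npm install replicate` and REPLICATE_API_TOKEN in the environment.
  const { default: Replicate } = await import("replicate");
  const replicate = new Replicate({ auth: process.env.REPLICATE_API_TOKEN });

  // "your-username/your-model" is a placeholder for your fine-tuned model.
  return replicate.run("your-username/your-model", { input });
}

if (process.env.REPLICATE_API_TOKEN) {
  generateVideo().then((output) => console.log(output));
}
```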
Step 4: Experiment for best results
Video fine-tuning is pretty new, so we’re still learning what works best.
Here are some early tips:
- Use a unique trigger word that doesn’t have associations with real words.
- Experiment with training settings:
  - More epochs generally means better quality, but longer training time
  - Adjust the LoRA rank
  - Increase batch size to speed up training
  - Use max_steps to control training duration precisely
- If training looks like it’s going to take several hours, cancel it and try:
  - Reducing the number of epochs
  - Reducing the rank
  - Increasing batch size
- Check the GitHub README for detailed parameter explanations.
Extra credit: Train new models programmatically
If you want to automate the process or build applications, you can use our API.
Here’s an example of how to train a new model programmatically using the Replicate Python client:
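The sketch below shows one way to do it. The trainer version ID and destination model name are placeholders, and the input key names (`input_videos`, `trigger_word`, `epochs`) are assumptions based on the form fields described in step 2 — verify them on the training page's API tab.

```python
import os

# Training inputs mirroring the form fields from step 2. The key names are
# assumptions -- verify them on the trainer's API tab before running.
training_input = {
    "input_videos": "https://example.com/my-dataset.zip",  # the ZIP from step 1
    "trigger_word": "RCKRLL",
    "epochs": 2,  # the recommended starting point
}

# Guarded so the script is safe to run without credentials
# (requires `pip install replicate`).
if os.environ.get("REPLICATE_API_TOKEN"):
    import replicate

    training = replicate.trainings.create(
        # Pin a specific trainer version; VERSION_ID is a placeholder.
        version="zsxkib/hunyuan-video-lora:VERSION_ID",
        input=training_input,
        # The destination model must already exist on your account.
        destination="your-username/my-hunyuan-fine-tune",
    )
    print(training.id, training.status)
```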
What’s next?
Fine-tuning video models is in its early days, so we don’t yet know everything that’s possible, or what might be built on top of it.
Give it a try and show us what you’ve made on Discord, or tag @replicate on X.