ByteDance adds watermarking and IP guardrails to Seedance 2.0 as it begins cautious global rollout
Six weeks ago, a video of Tom Cruise fighting Brad Pitt on a rooftop went viral. It was, of course, not real. It was generated by Seedance 2.0, ByteDance’s AI video model, and it set off a firestorm that drew cease-and-desist letters from six major Hollywood studios, a formal denunciation from the Motion Picture Association, and a pointed rebuke from SAG-AFTRA over the unauthorised use of its members’ likenesses. Rhett Reese, the screenwriter behind the Deadpool films, watched the clip and offered a blunt assessment of the technology’s implications for his profession.
Now ByteDance is attempting something delicate: relaunching the very tool that provoked that backlash, but with enough safeguards to make the case that it has heard the criticism. On Wednesday, the TikTok parent company said its global safety and intellectual property teams had worked with a third-party red-teaming partner to bolster Seedance 2.0 ahead of its international release through CapCut, ByteDance’s video editing platform, which reports more than 400 million monthly active users.
The new safeguards are substantive, at least on paper. Seedance 2.0 now blocks video generation from images or videos containing real faces, a direct response to the deepfake controversy that engulfed the model in February. CapCut will also block the unauthorised generation of copyrighted characters, addressing the parade of AI-rendered Shreks, SpongeBobs, Darth Vaders, and Deadpools that the MPA had cited in its complaint.
On the transparency front, all output will carry both visible watermarks and embedded C2PA Content Credentials, the industry-standard protocol for identifying AI-generated content across platforms. ByteDance is also introducing what it calls an “advanced invisible watermarking” technology designed to identify content made with the model even after it has been shared or altered off-platform, and the company says it will conduct proactive monitoring for IP violations.
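ByteDance has not published how its "advanced invisible watermarking" works, and robust watermarks designed to survive re-encoding and editing typically operate in the frequency domain rather than on raw pixels. Purely as a toy illustration of the general idea of embedding a recoverable identifier into media data, here is a minimal least-significant-bit (LSB) sketch in Python; every function name is hypothetical and this is not ByteDance's method:

```python
def embed_watermark(pixels, bits):
    """Hide watermark bits in the least significant bit of each pixel value."""
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # clear LSB, then set it to the mark bit
    return out

def extract_watermark(pixels, n_bits):
    """Recover the first n_bits of the embedded mark."""
    return [p & 1 for p in pixels[:n_bits]]

pixels = [200, 17, 34, 129, 64, 250, 91, 3]   # toy 8-pixel "frame"
mark = [1, 0, 1, 1, 0, 0, 1, 0]               # identifier to embed
stamped = embed_watermark(pixels, mark)
assert extract_watermark(stamped, len(mark)) == mark
```

An LSB mark like this is destroyed by the first lossy re-encode, which is precisely why production systems claiming to survive off-platform sharing must embed the signal far more redundantly.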
The rollout itself reflects a calculated caution. CapCut will initially make Seedance 2.0 available to paid users in Brazil, Indonesia, Malaysia, Mexico, the Philippines, Thailand, and Vietnam. Conspicuously absent from the list are the United States and India, ByteDance’s two most complex regulatory markets. Wider availability across Europe, Africa, South America, and Southeast Asia is expected to follow, according to the company, though no firm timeline has been offered for the US.
The AI video arms race
The timing of the relaunch is notable. Just days earlier, OpenAI announced it was shutting down Sora, its own AI video generation tool, after downloads fell 45 per cent by January and a licensing deal with Disney collapsed. Where OpenAI retreated, ByteDance is advancing, though into a market now acutely sensitised to the regulatory questions that AI-generated content raises.
The EU AI Act’s transparency requirements, which take effect in August 2026, will mandate that providers of generative AI systems mark their output in machine-readable formats and disclose the artificial origin of deepfakes. ByteDance’s adoption of C2PA watermarking and invisible marking appears to anticipate these obligations, though whether its safeguards will satisfy European regulators remains to be seen.
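To make the "machine-readable format" requirement concrete, the sketch below shows roughly the kind of provenance data a C2PA Content Credential carries. The field names follow the conventions of the published C2PA specification (a `c2pa.actions` assertion and the IPTC `trainedAlgorithmicMedia` digital source type), but this plain dict is only illustrative: a real manifest is cryptographically signed by a C2PA SDK and embedded in the media file, and the tool name here is hypothetical.

```python
import json

# Simplified, unsigned sketch of a C2PA-style manifest declaring a
# video as AI-generated. A real Content Credential is produced and
# signed by C2PA tooling and bound to the file's bytes.
manifest = {
    "claim_generator": "example-video-tool/1.0",  # hypothetical generator name
    "assertions": [
        {
            "label": "c2pa.actions",
            "data": {
                "actions": [
                    {
                        "action": "c2pa.created",
                        # IPTC code identifying media created by a trained model
                        "digitalSourceType": (
                            "http://cv.iptc.org/newscodes/digitalsourcetype/"
                            "trainedAlgorithmicMedia"
                        ),
                    }
                ]
            },
        }
    ],
}

print(json.dumps(manifest, indent=2))
```

Because the declaration lives in structured metadata rather than a visual overlay, downstream platforms can read it programmatically, which is what the AI Act's machine-readability language is driving at.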
Red-teaming reports suggest the guardrails are not impenetrable. According to testing documented by industry observers, creative prompting can still bypass the filters to produce what have been described as “likeness-adjacent” characters, content that evokes a real person or copyrighted figure without technically reproducing them. It is a familiar challenge in AI governance: the gap between what a policy forbids and what a model can be coaxed into producing.
ByteDance’s vertical integration gives it a unique position in this contest. It builds the AI model, owns the editing platform where it is deployed, and controls TikTok, the dominant short-form video distribution channel. That control means it can, in theory, enforce IP protections across the entire pipeline from generation to distribution. Whether it will do so with sufficient rigour to satisfy Hollywood and its lawyers is another matter entirely.
The AI boom of 2025 produced a generation of tools that could generate text, images, and code at scale. Video was always the next frontier, and the hardest to govern. ByteDance’s bet is that it can be the company to commercialise AI video generation globally without drowning in litigation. The safeguards it has added to Seedance 2.0 are a necessary first step. Whether they are sufficient is a question that Hollywood, regulators, and policymakers across multiple jurisdictions will be answering for months to come.