China’s new AI video tools close the uncanny valley for good



Every TV and movie critic loves to hate on Darren Aronofsky these days. The Academy Award-nominated filmmaker—creator of lyrical, surreal, and deeply human movies like Black Swan, The Whale, Mother!, and Pi—has released an AI-generated series called On This Day . . . 1776 to commemorate the semiquincentennial of the American Revolution. Though the series has garnered millions of views, commentators everywhere call it “a horror,” slamming Aronofsky’s work for how stiff the faces look and how everything morphs unrealistically. Although calling it a “requiem for a filmmaker” seems excessive, they are not wrong about these faults.

The series, created using real human voice-overs and Google’s generative video AI, does suffer from “uncanny valley syndrome”: our brains very easily detect what’s off about a face, refuse to buy it as real, and register an automatic repulsion. But this month, two new generative AI models from China have closed the valley’s gap: Kling 3.0 and Seedance 2.0. For the first time, AI is generating video content that is truly indistinguishable from film, with the temporal and subject coherence that will make the 2020s “It’s AI slop!” crybabies disappear like their predecessors in the aughts (“It’s CGI!”) and the 1990s (“It’s Photoshop!”).

Seedance 2.0, developed by TikTok parent company ByteDance, was released in beta on February 9—exclusively in China for now. It’s widely considered the first “director’s tool.” Unlike previous models, which gave the feeling you were pulling a slot machine lever and hoping for a coherent result, Seedance allows for what analysts at Chinese investment firm Kaiyuan Securities call director-level control.

It achieves this through a breakthrough multimodal input system. ByteDance has redesigned its model to accept images, videos, audio, and text simultaneously as inputs, rather than relying on text prompts alone. A creator can upload up to a dozen reference files—mixing character sheets, specific camera movement demos, and audio tracks—and the AI will synthesize them into a scene that follows cinematic logic.
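To make that input model concrete, here is a minimal sketch of what such a mixed-media request might look like from a creator’s side. ByteDance has not published a public Seedance API, so every class, field, and file name below is a hypothetical illustration of the payload described above, not actual ByteDance code.

```python
import json
from dataclasses import dataclass, field, asdict

# Hypothetical request structure -- Seedance 2.0 has no public API yet,
# so all names here are illustrative assumptions, not official fields.
@dataclass
class ReferenceFile:
    kind: str   # "image" | "video" | "audio"
    path: str   # local file the creator uploads
    role: str   # e.g. "character_sheet", "camera_move_demo", "soundtrack"

@dataclass
class GenerationRequest:
    prompt: str                                      # the text component
    references: list = field(default_factory=list)   # up to ~a dozen mixed-media refs
    duration_seconds: int = 10

request = GenerationRequest(
    prompt="A colonial printing shop at dawn, slow dolly-in on the press",
    references=[
        ReferenceFile("image", "protagonist_character_sheet.png", "character_sheet"),
        ReferenceFile("video", "dolly_in_demo.mp4", "camera_move_demo"),
        ReferenceFile("audio", "voiceover_take3.wav", "soundtrack"),
    ],
)

# The model would synthesize all of these inputs into one scene; here we
# just show the combined payload a creator's tool might assemble and send.
print(json.dumps(asdict(request), indent=2))
```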

The results have been startling. “With its reality enhancements, I feel it’s very hard to tell whether a video is generated by AI,” says Wang Lei, a programmer in Guangdong who tested the model to generate a 10-second history of humanity. He described the output as “smooth in storytelling with cinematic grandeur.” Part of the trick is that ByteDance trained the model on the vast video dataset of Douyin (China’s TikTok), which gave it the capacity to understand human nuance. That nuance shows in the everyday shots it produces alongside the Hollywood-level cinematic shots it can create.

And then there’s Kling

If Seedance is the visionary director, Kling 3.0 is the rigorous cinematographer. Launched February 5 by Kuaishou Technology, Kling 3.0 has earned the moniker “Motion Engine.” While other models struggle with the basic laws of physics—cars floating, people walking through walls—Kling 3.0 respects gravity and light.


“The physics simulation finally lets you art direct motion instead of hoping for it,” Bilawal Sidhu, a former Google product manager and AI strategist, said on LinkedIn. This makes the model uniquely suited to commercial work, where a product must look and behave like a real object. Commenters on Reddit were in awe of its new abilities, especially for long takes and multishot sequences.

Kling’s major breakthrough is its Elements feature, which lets users upload reference videos to lock in character consistency. Previously, generative video AI would change characters’ faces at random, as in Aronofsky’s series. With Kling, characters look exactly the same in every shot it generates—a holy grail feature for filmmakers who need actors to look like the same person from shot to shot. The model doesn’t just generate pixels; it understands narrative pacing, cutting, and continuity.
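As a rough mental model of how such a consistency feature could slot into a creator’s pipeline, consider the sketch below. Kuaishou has not documented a public Kling API, so the registry, handles, and method names are all assumptions invented for illustration; the point is simply that a character reference is registered once, and the same handle is reused for every shot.

```python
import uuid

# Illustrative sketch only: Kuaishou has not published a Kling 3.0 API,
# so this registry/handle pattern is an assumption about how an
# Elements-style feature behaves, not real Kling code.
class ElementRegistry:
    def __init__(self) -> None:
        self._references: dict[str, str] = {}

    def register_character(self, reference_video: str) -> str:
        """Lock a character's appearance from a reference clip; return a reusable handle."""
        handle = str(uuid.uuid4())
        self._references[handle] = reference_video
        return handle

    def generate_shot(self, prompt: str, character: str) -> str:
        """Stand-in for generation: every shot reuses the locked character identity."""
        ref = self._references[character]
        return f"<shot: {prompt!r}, face locked to {ref}>"

registry = ElementRegistry()
hero = registry.register_character("lead_actor_reference.mp4")

# Reusing the same handle is what keeps the face consistent shot to shot.
for prompt in ("wide shot, hero crosses a rainy street",
               "close-up, hero reacts to the verdict"):
    print(registry.generate_shot(prompt, character=hero))
```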

The level of realism is so high that Kaiyuan Securities believes the new model is positioned to be “widely adopted first in AI manga and short drama areas, bringing down costs and improving efficiency to benefit companies with large holdings of intellectual property or traffic.”

The markets agreed. The release of these models immediately sent shockwaves through the Chinese tech sector. Shares in digital content company COL Group skyrocketed on the anticipation that it will use these models, while studio giant Huace Media and game developer Perfect World rallied 7% and 10%, respectively. Investors aren’t betting on a toy; they’re betting on the total replacement of traditional production pipelines in gaming, film, and publishing.

An industrial revolution for the visual arts

For many professionals in the trenches, generative AI tools are not toys; they are the new standard. Julian Muller, an award-winning director and creative producer, told me the shift is already visible to everyone. “Just from what I saw in the Super Bowl commercials on Sunday, many incorporated AI elements to achieve creative results. We are definitely at the beginning of a shift in what is possible under tighter timelines and leaner production investments,” Muller says.

“I’d say these models [Seedance 2.0 and Kling 3.0] clearly can produce stunning visual results,” Muller tells me, noting, however, that they’re not perfect. “They are very close to being indistinguishable from real production footage, yet I think there is still a detectable artificial quality to it.” 

Muller does believe that we have passed the point of no return. “Directors and producers who don’t use AI tools to enhance their projects will soon become the exception and not the rule,” he says. “This is the future, and we’re definitely not going back.”

This sentiment is echoed by Tim Simmons, a 17-year Hollywood veteran who analyzes the industry on the YouTube channel Theoretically Media. He told me that while big studios are paralyzed by their own infrastructure, indie creators are adapting.

“Adoption at the large studios will remain slower because of the rigid postproduction specs that necessitate building customized AI workflows,” he says. “The challenge is the time required to build such a workflow versus the speed at which AI models are evolving.” Basically, he points out, by the time the studios have finished constructing the bridge, the river has moved 150 miles to the north.

“Setting aside the complex discussions regarding unions and talent for a moment, it’s safe to say that through 2026, you’ll see tentative steps from larger studios,” Simmons says. “But for indie studios and international production houses working outside the traditional Hollywood system? Utilization will rise rapidly.”

No soul in the machine

Not everyone is ready to embrace the algorithm, of course. While the technology has nearly conquered the visual uncanny valley, a deeper, emotional chasm remains. “I don’t think we’ve ever been amazed and saddened like we are today,” Peter Quinn, a VFX artist and director known for his surreal, handcrafted effects, told me via email. “Spectacular ‘art’ has just become so dull,” he says.

Quinn argues that we value art not just for the final image, but for the human struggle behind it—the painter mixing colors, the stop-motion artist moving a puppet millimeter by millimeter. “Kling 3.0 and Seedance 2.0, while spectacular, are 2026’s latest shiny AI toys . . . capable of generating soulless marvels, birthed in a data center somewhere,” he says. “It’s interesting how the ‘wow’ fades when we hear it’s AI.”

In fact, Quinn is in the process of creating an anti-AI TV docuseries. Titled The Creators, it intentionally features dozens of “real” artists who’ve found interesting ways to express creativity, leaning heavily into showing the process, time, and effort it takes to make something.

“We see a painter mixing and painstakingly applying paint to a canvas over days, a stop-motion artist’s time-lapse of weeks of tiny, well-considered adjustments, a dancer getting it wrong, a collage artist cutting hundreds of pieces by hand, an artist who can create photo-real pencil sketches, a sculptor who knows the nuance of clay, or a photographer who sees something nobody else does,” he tells me. “It just feels like it’s time. [The] time it takes is what makes it valuable and worthy of looking at or hanging on a wall.”

Titans of the industry share his skepticism. Guillermo del Toro has famously dismissed AI art as “an insult to life itself,” while Breaking Bad and Better Call Saul creator Vince Gilligan says he won’t use tools that remove the human element from storytelling; the credits of his show Pluribus include a line declaring that it was proudly made by humans. Maybe TV and cinema will bifurcate: a minority of human-only art for the galleries and the purists, and algorithmic content for the masses, just as fanatics of real film like Christopher Nolan and Quentin Tarantino refuse to use digital cameras like everyone else in the industry.

The new impressionism

I understand Quinn, del Toro, Nolan, and every purist out there. But from a historical perspective, their stance doesn’t make a lot of sense. Despite the existential angst—and leaving aside the huge problems this will cause in terms of jobs and copyright, a topic for another article—there is reason for deep optimism.

We are standing at a moment in history that mirrors the state of art in the late 19th century. Before the industrial revolution brought us the collapsible paint tube and the pre-stretched, factory-made canvas, painting was an expensive, studio-bound endeavor reserved for the elite: artists with patrons who paid them enough to afford grinding their own pigments. The industrial revolution in paint manufacturing liberated every artist. It allowed Monet and Renoir to leave the studio, go outside, and paint the light. It birthed Impressionism.

Seedance 2.0 and Kling 3.0 may be the paint tubes of cinema and TV, an art form whose costs have fallen with the analog and video revolutions but that remains reserved for a very few. These models—and the ones that will come next from Google and others—open the gates for AI-generated stories that will feel as real as the ones produced with real people, whether the purists like it or not.

Simmons believes “there is a ‘new media’ coming that isn’t ‘just movies but cheaper.’” It will be interactive, generative, and personalized in ways we can’t fully articulate yet, he says. “I don’t think we have the language for it yet. Right now, we are looking at the internet in 1990 and asking, ‘How will this change the fax machine?’ The answer wasn’t a better fax machine.”

I believe that he is right. By lowering the barrier to entry to zero, Seedance and Kling are inviting billions of people who have never held a camera to tell their stories. With the uncanny valley closed, the gatekeepers are gone. The only thing left is to see what humanity decides to paint with this terrifying, wonderful new brush.

