
The End of the Wrapper Era: Why Hybrid AI Architectures Are the Only Viable Path for Enterprise Brand Equity

Executive Summary: The "Coca-Cola Moment" and the Divergence of AI Value

The marketing landscape of late 2024 and 2025 has been defined by a single, polarizing inflection point: The Coca-Cola "Holidays Are Coming" campaign. This moment did not merely represent a technological stumble; it served as a definitive market signal separating superficial AI adoption from deep, architectural integration. When one of the world's most valuable brands released a fully AI-generated commercial that consumers immediately rejected as "soulless," "dystopian," and "uncanny," it exposed the fundamental fragility of relying on raw generative outputs for premium storytelling. 1

For enterprise leaders, this incident serves as a critical case study in what we at Veriprajna term "Aesthetic Hallucination" —the phenomenon where generative models produce visually plausible but emotionally hollow and physically incoherent content. While the advertisement featured technically impressive textures—snow that glistened, trucks that reflected light—it failed at the physics of human emotion and biological movement. The smiles didn't reach the eyes; the motion felt "floaty"; the reality was simulated, not captured. 2

This whitepaper posits that the era of the "LLM Wrapper"—simple interfaces that pass prompts to foundational models like OpenAI’s Sora or Runway Gen-3—is effectively over for high-stakes enterprise applications. The backlash against Coca-Cola, juxtaposed with the success of campaigns like Nike’s "Never Done Evolving," demonstrates that Hybrid AI Workflows are the only viable path forward. True enterprise value lies not in replacing human creativity with automated slop, but in architecting deep AI solutions where human intent directs machine velocity.

In this comprehensive analysis, we dissect the technical failures of current video diffusion models—specifically their inability to model Newtonian physics and human micro-expressions—and contrast them with robust, control-net-driven pipelines that ensure brand consistency. We argue that the future of brand communications belongs to those who use AI to accelerate the craft, not replace the humanity.

1. The Anatomy of Aesthetic Hallucination: Why Generative Video Fails the "Premium" Test

To understand why the Coca-Cola ad failed, one must look beyond the surface criticism of "bad CGI" and examine the underlying limitations of current generative video architectures. "Aesthetic Hallucination" is not just a glitch; it is a byproduct of how diffusion models perceive—and fail to perceive—reality.

1.1 The Simulation of Reality vs. The Hallucination of Meaning

Jean Baudrillard’s concept of the simulacrum —a copy without an original—has manifested literally in 2025’s generative media. The AI-generated polar bears and crowds in the Coca-Cola ad were not representations of real bears or real people; they were statistical averages of millions of images of bears and people. 5 This process creates a "hyperreality" that is visually dense but ontologically empty. Baudrillard warned of a stage where the image has "no relation to any reality whatsoever," becoming a "pure simulacrum". 7 In the context of the Coca-Cola campaign, this manifested as a visual product that mimicked the signs of a holiday commercial (snow, red trucks, smiling crowds) without containing the referent of actual human joy or physical presence.

The "uncanny valley" effect observed by critics is not merely a matter of graphical fidelity but of biological dissonance. While AI can render the geometry of a smile, it struggles to render the physics of a smile. A human smile involves a complex, involuntary interplay of micro-muscles, particularly the orbicularis oculi, which creates the "Duchenne marker" of genuine happiness. Diffusion models, which operate on pixel-level probability distributions rather than anatomical rules, frequently miss these subtle cues. The result is the "dead-eyed" look critics noted in the Coca-Cola ad—a smile that exists on the mouth but does not reach the eyes, signaling to the viewer's subconscious that the entity is not human. 1

Furthermore, critics noted a specific "AI quality" described as "part shiny, part plastic". 2 This aesthetic signature acts as a subconscious warning signal to viewers, instantly categorizing the content as synthetic. This "glossy sheen" is a byproduct of how models like Midjourney and Sora resolve texture; they tend to over-smooth surfaces and exaggerate specular highlights to achieve a "high-definition" look, inadvertently creating a world that looks like it is coated in varnish. When a brand like Coca-Cola, whose identity is rooted in "Real Magic" and tangible, sensory experiences, presents a world that looks hermetically sealed in plastic, the brand promise is broken before the first frame is finished.

1.2 The Physics of Failure: Why Models Cannot "Pour" Coke

A pivotal study by ByteDance Research in 2025 revealed that video generation models like Sora and Gen-3 lack a fundamental understanding of physical laws. They do not learn Newtonian physics; they memorize visual transitions. The study found that while these models can generate videos that closely match their training data, they fail to abstract general physical rules, instead relying on the mimicry of their closest training examples. 8

This "mimicry over abstraction" leads to "case-based" behavior. If the model has seen

thousands of videos of a truck driving, it can reproduce the appearance of driving. However, it does not understand the mechanics of driving—suspension, friction, weight transfer. This results in the "floaty" motion observed in the Coca-Cola ad, where the trucks appeared to glide over the snow rather than interact with it physically. The wheels turned, but the chassis did not react to the terrain, creating a disconnect between the object and its environment. 1

The ByteDance researchers identified a hierarchy of "Attribute Prioritization" in these models: Color > Size > Velocity > Shape. The models are most accurate at reproducing color (hence the perfect Coca-Cola red), but struggle progressively more with size consistency, velocity, and shape constancy. 8

●​ Volume Conservation Failure: In a beverage ad, this hierarchy is fatal. The liquid might look like caramel-colored soda (Color), but it flows like mercury or disappears into the glass because the model does not understand volume conservation (Shape/Physics).

●​ Schrodinger's Truck: The prioritization of color over shape explains why the Coca-Cola trucks exhibited "Schrodinger's Truck" behavior—changing length, wheel count, and cabin shape from shot to shot. 3 The model ensured the truck was red and shiny in every frame, but "forgot" how many wheels it had because it generated the video in latent chunks without a unified 3D representation of the truck object.

1.3 The "Soulless" Critique: A Crisis of Brand Resonance

The most damaging critique was not technical but emotional: the ad was labeled "soulless". 1 "Real Magic" is Coca-Cola's brand promise. By delegating the depiction of that magic to an algorithm incapable of experiencing it, the brand created a dissonance between its message (connection) and its medium (automation).

This reaction is supported by broader consumer sentiment data. A 2025 report found that while 57% of consumers are generally warming to digital advertising, a significant 44% are actively bothered by AI-generated content. 9 More critically, trust drops precipitously—from 48% to 13%—when ads are created entirely by AI versus being co-created with humans. 10 Consumers are developing a "sixth sense" for synthetic content, and the "AI slop" narrative—a term used to describe low-effort, high-volume synthetic content—is becoming a reputational hazard. 1

Brands that utilize raw AI output risk associating their premium identity with this digital detritus. The Coca-Cola ad, despite its high production value relative to average AI content, fell into this trap because it prioritized the efficiency of generation over the authenticity of expression. It signaled to consumers that the brand did not care enough to film the real thing, violating the tacit contract of effort and craft that underpins luxury and heritage branding. 3

Table 1: Technical Failures of Generative Video in Commercial Applications

| Failure Mode | Technical Cause | Visual Symptom | Consumer Reaction |
| --- | --- | --- | --- |
| Temporal Inconsistency | Frame-independent generation; lack of 3D object permanence 11 | Objects morphing shape; flickering textures; changing wheel counts on trucks 3 | "Glitchy," "Cheap," "Distracting" |
| Physics Hallucination | Lack of Newtonian physics model; "case-based" mimicry 8 | Liquids flowing unnaturally; vehicles "floating" over terrain; incorrect gravity 13 | "Uncanny," "Fake," "Video Game-like" |
| Biological Dissonance | Statistical averaging of facial landmarks; missing micro-expressions 14 | "Dead eyes"; smiles that don't engage eye muscles; smooth, plastic skin textures 2 | "Creepy," "Soulless," "Dystopian" |
| Mode Collapse | Overfitting to training data patterns; lack of diversity in latent sampling 15 | Generic, repetitive imagery; "AI sheen"; lack of specific brand identity | "Boring," "Generic," "Slop" |

2. Forensic Analysis: The Coca-Cola vs. Toys 'R' Us Case Studies

The failures of 2024 and 2025 provide a roadmap of what not to do. Both Coca-Cola and Toys 'R' Us attempted to use "LLM Wrapper" approaches—relying heavily on text-to-video generation with insufficient human intervention—and suffered significant reputational damage. These case studies illustrate the pitfalls of allowing the tool to become the creator.

2.1 Coca-Cola: Doubling Down on the Uncanny

Despite facing backlash for a similar AI experiment in 2024, Coca-Cola "doubled down" in 2025, commissioning studios Secret Level and Silverside AI to produce a fully synthetic "Holidays Are Coming" spot. 1 The project was ambitious, reportedly involving the generation of over 70,000 video clips to piece together a single 30-second spot. 3

The Workflow Failure: This "brute force" approach highlights the inefficiency of raw generation. Instead of directing a specific shot with intention, the team essentially rolled the dice 70,000 times, hoping for a coherent result that matched their vision. This is the antithesis of the "Director's Vision." It reduces the creative process to a curation task, sifting through thousands of "hallucinations" to find the one that looks least wrong.

The "Secret Level" Disconnect: Pratik Thakar, Coca-Cola's head of generative AI, insisted that the "craftsmanship is ten times better" than the previous year. 1 However, the public disagreed. Viewers spotted inconsistencies in the truck's mechanics and the animals' expressions that a human animator would never have allowed. 3 The critique was not just about the visuals, but about the labor . Comments like "Coca-Cola is red because it's made from the blood of out-of-work artists" highlight the reputational risk of appearing to automate creativity solely for cost savings. 1 The narrative shifted from "Coca-Cola is innovative" to "Coca-Cola is cheap."

The "Silverside AI" Perspective: In interviews, the creators at Silverside AI argued that the controversy was merely "resistance to new technology," comparing it to the early backlash against CGI in Toy Story . 19 However, this comparison is flawed. Toy Story used technology to tell a story that could not be told otherwise; the Coca-Cola ad used technology to retell a story that had already been told better with practical effects 30 years prior. The use of AI did not add value; it subtracted humanity.

2.2 Toys 'R' Us & Sora: The "Origin Story" Blunder

Toys 'R' Us attempted to use OpenAI's Sora to tell the origin story of Geoffrey the Giraffe. The reception was brutal, with sentiments plummeting as viewers called the ad "creepy" and "cynical". 20

The Child Actor Problem: The use of an AI-generated child actor was particularly jarring. The "uncanny valley" effect is magnified when applied to children, triggering a primal rejection response in viewers. 20 Humans are biologically programmed to protect and respond to children; when a "child" is presented that is clearly not human—with shifting features and dead eyes—it creates a revulsion response that is disastrous for a toy brand.

Inconsistency: Like the Coca-Cola ad, the Toys 'R' Us spot suffered from morphing backgrounds and inconsistent character models. 22 The disconnect between the warmth of a childhood toy store and the cold calculation of a generative algorithm created a "creepy" dissonance. It revealed the immaturity of pure text-to-video for narrative storytelling, where character identity must remain constant across multiple shots—a capability that standard diffusion models struggle with. 11

2.3 The Counter-Example: Nike's "Never Done Evolving"

In stark contrast, Nike’s 50th-anniversary campaign used AI to simulate a tennis match between a 1999 Serena Williams and a 2017 Serena Williams. This campaign won a Cannes Grand Prix and universal acclaim.

Data vs. Hallucination: Nike did not ask an AI to "imagine" Serena. They fed a machine learning model real archival footage of her gameplay to analyze her speed, shot selection, and reactivity. 23 The AI was used to calculate possibilities based on reality, not to fabricate a fake reality from statistical noise.

Purposeful Application: The AI acted as a "time machine" to visualize data, a feat impossible with traditional filming. It was not a cost-cutting measure to avoid filming Serena; it was a storytelling device to celebrate her evolution. 25 The "vid2player" technique used by the production team (developed at Stanford) utilized domain knowledge of tennis rallies to create behaviorally accurate sprites, rather than relying on pixel-level diffusion alone. 24

The Hybrid Difference: The workflow combined rigorous data science with high-end VFX compositing. The AI generated the movements and the gameplay logic, but human compositors and editors ensured the visual fidelity and narrative pacing. This "Hybrid" approach—Human Intent + AI Execution—is the model for success.

3. The Veriprajna Approach: Deep AI & Hybrid Workflows

The lesson from 2025 is clear: Do not let AI render the final pixel. Veriprajna advocates for a Hybrid AI Workflow where AI acts as the accelerator of craft, not the replacement of humanity. We position ourselves not as an interface to OpenAI, but as architects of bespoke production pipelines.

3.1 The "Human-in-the-Loop" Architecture

We reject the "prompt-and-pray" methodology. Our workflows are designed around the principle that human intent must govern machine execution at every layer. This aligns with findings that "Human-in-the-Loop" (HITL) systems achieve 97.8% recall accuracy in compliance tasks compared to significantly lower rates for fully automated systems. 26

Pre-Production (AI as Dreamer)

We use AI for rapid storyboarding and "photomatics". 27 Tools like Atlabs and Krea AI allow for real-time visualization of concepts, reducing pre-visualization costs by 60-80% without committing to the final look. 28

●​ The Benefit: This allows Directors to "shoot" the commercial virtually before a single camera rolls. We can iterate on lighting, composition, and pacing instantly.

●​ The Tooling: Krea AI's real-time generation capabilities allow creatives to sketch a layout and see it rendered photorealistically in milliseconds. 29 This moves the creative process from "text-based" to "visual-based," re-empowering the artist.

Production (Human as Capturer)

For elements requiring emotional resonance—human faces, crucial product interactions—we film real talent. As the ByteDance physics study proves, AI cannot yet reliably simulate the micro-expressions of joy or the fluid dynamics of a pouring drink. 8

●​ The "Sandwich" Method: We film the "hero" elements (the actor, the product) on green screen or LED volumes. These are the "human layers."

●​ Virtual Production: We use AI to generate high-fidelity backgrounds and environments that are projected onto LED walls, allowing the actor to interact with the light of the scene. 30

Post-Production (AI as Sculptor)

This is where Deep AI shines. We use Video-to-Video pipelines (not text-to-video) to transform, style, and enhance captured footage.

●​ Compositing: Using AI to seamlessly integrate real actors into synthetic environments.

●​ Style Transfer: Applying consistent brand aesthetics using custom-trained LoRA (Low-Rank Adaptation) models. 31

●​ Upscaling: Utilizing tools like Topaz Video AI to ensure broadcast-quality resolution (4K), eliminating the "fuzzy" artifacts of raw AI video. 33

3.2 The Technical Stack: Beyond the Wrapper

While "Wrappers" rely on the generic capabilities of public models (ChatGPT, Midjourney), Veriprajna builds Agentic AI Architectures . 35

ComfyUI Enterprise Pipelines

We utilize ComfyUI, a node-based workflow interface, rather than simple web prompts. This allows for granular control over every step of the generation process. 36

● Granular Control: In ComfyUI, we can control the denoising strength, the latent upscale method, and the specific U-Net layers affected by a prompt. This closes off the "Schrodinger's Truck" failure mode at the source: instead of regenerating the truck from scratch in every frame, we guide its evolution from a fixed structural reference.

●​ Enterprise Scale: Platforms like ComfyICU and InstaSD allow us to deploy these complex workflows as scalable APIs, enabling high-volume generation for enterprise clients without managing local GPU farms. 38
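
To make this concrete, the sketch below shows how a node graph can be queued programmatically against a self-hosted ComfyUI instance via its standard /prompt HTTP endpoint. The host address, node IDs, and parameter values are illustrative placeholders, and the graph fragment omits the loader and decode nodes a complete workflow would require.

```python
# Minimal sketch: submitting a ComfyUI workflow graph to a self-hosted instance.
# Assumes ComfyUI's standard HTTP API (POST /prompt); the workflow JSON below is
# illustrative -- real graphs are exported from ComfyUI in "API format".
import json
import urllib.request

COMFY_HOST = "http://127.0.0.1:8188"  # hypothetical private deployment

def queue_workflow(workflow: dict, client_id: str = "veriprajna-pipeline") -> dict:
    """Queue a node graph for execution and return the server's response."""
    payload = json.dumps({"prompt": workflow, "client_id": client_id}).encode("utf-8")
    req = urllib.request.Request(
        f"{COMFY_HOST}/prompt",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Illustrative fragment of an exported graph: a KSampler node whose denoise strength
# and seed are pinned per asset rather than left to a web UI default.
workflow = {
    "3": {
        "class_type": "KSampler",
        "inputs": {"seed": 42, "steps": 28, "cfg": 6.5, "denoise": 0.55,
                   "sampler_name": "euler", "scheduler": "normal",
                   "model": ["4", 0], "positive": ["6", 0], "negative": ["7", 0],
                   "latent_image": ["5", 0]},
    },
    # ...remaining nodes (checkpoint loader, CLIP text encode, VAE decode) omitted
}

if __name__ == "__main__":
    print(queue_workflow(workflow))
```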

ControlNet Integration: The Structural Anchor

Instead of hoping a prompt preserves a product's shape, we use ControlNet.

●​ Mechanism: ControlNet creates a "locked" copy of the diffusion model's weights and a "trainable" copy. We can feed a Canny Edge Map or Depth Map of the actual product into the network. 40

●​ Result: The AI is forced to generate the video around the exact geometry of the brand's asset. The lighting and background can be generative, but the product silhouette remains mathematically perfect. This achieves a structural integrity rate of 94.2% compared to the variable output of prompting alone. 41
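
As an illustration of the mechanism, the following sketch conditions a generation on a Canny edge map of a product photo using the open-source diffusers and OpenCV libraries. The model IDs are public community checkpoints standing in for a brand-tuned pipeline, and the file names are hypothetical.

```python
# Minimal sketch: locking generation to a product's silhouette with a Canny ControlNet.
# Model IDs are public checkpoints used as stand-ins for a brand-tuned pipeline.
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# 1. Extract the structural anchor: a Canny edge map of the real product photo.
product = cv2.imread("brand_bottle.png")          # hypothetical asset path
edges = cv2.Canny(product, 100, 200)
edge_map = Image.fromarray(np.stack([edges] * 3, axis=-1))  # 3-channel conditioning image

# 2. Load the locked base model plus the ControlNet branch.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# 3. Background and lighting are generative; the silhouette is constrained by the
#    edge map, so the product geometry cannot drift.
result = pipe(
    prompt="glass bottle on a snowy night market table, warm festive lighting",
    image=edge_map,
    num_inference_steps=30,
    controlnet_conditioning_scale=1.0,
    generator=torch.Generator("cuda").manual_seed(7),
).images[0]
result.save("controlled_frame.png")
```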

LoRA (Low-Rank Adaptation): The Brand DNA

To ensure the specific "look and feel" of a brand—the color grading, the specific grain of the film, the style of the illustration—we train custom LoRA adapters. 31

●​ Efficiency: Unlike fine-tuning an entire model (which is expensive and slow), LoRAs are lightweight files that can be swapped in and out.

●​ Application: For a client like Nike, we would train a LoRA on 20 years of their specific cinematography style. This ensures that even AI-generated footage "feels" like a Nike ad, preserving the Brand Codes that are essential for long-term equity. 42
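
A minimal sketch of the deployment side is shown below: a pre-trained brand LoRA is layered onto a frozen base pipeline at inference time. The adapter file name, trigger phrase, and strength value are hypothetical; training the adapter itself is a separate, offline step.

```python
# Minimal sketch: swapping a brand-specific LoRA "style adapter" into a base pipeline.
# The adapter file name and trigger phrase are hypothetical placeholders.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Lightweight adapter layered onto the frozen base weights; swappable per client.
pipe.load_lora_weights("./brand_adapters", weight_name="acme_film_grain_v2.safetensors")

image = pipe(
    prompt="delivery truck at dusk, acme house style, 35mm film grain, warm grade",
    num_inference_steps=30,
    cross_attention_kwargs={"scale": 0.8},  # how strongly the brand DNA is applied
    generator=torch.Generator("cuda").manual_seed(11),
).images[0]
image.save("on_brand_frame.png")
```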

4. Deep Technical Analysis: Solving the "Consistency Crisis"

The primary technical barrier to enterprise AI video is Temporal Consistency. In the Coca-Cola ad, the shifting details revealed the lack of a "World Model." Veriprajna solves this through rigorous technical interventions.

4.1 Solving Temporal Inconsistency with VCD

We implement advanced metrics like Video Consistency Distance (VCD) in our fine-tuning process. VCD measures the frequency domain distance between the conditioning image (the brand asset) and the generated frames. By penalizing high VCD values during training, we force the model to prioritize temporal coherence, ensuring objects don't morph or flicker. 11

●​ Mechanism: VCD operates in the frequency space of video frame features. It captures frame information effectively through frequency-domain analysis, allowing the model to distinguish between "natural motion" (which is allowed) and "unnatural distortion" (which is penalized).

●​ Result: Models fine-tuned with VCD show substantial improvements in subject consistency (95.22%) and background consistency (96.32%) on benchmark tests like VBench-I2V. 12
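
The sketch below is an illustrative frequency-domain consistency score in the spirit of VCD, not the exact formulation published in the cited work: it compares the magnitude spectra of generated-frame features against the conditioning asset and returns a penalty that could be folded into a fine-tuning objective.

```python
# Illustrative sketch only: a frequency-domain consistency score in the spirit of VCD.
# This is NOT the published metric; it shows how frame features can be compared to a
# conditioning image in frequency space and used as a training penalty.
import torch

def frequency_consistency_distance(cond_feat: torch.Tensor, frame_feats: torch.Tensor) -> torch.Tensor:
    """
    cond_feat:   (C, H, W) features of the conditioning image (the brand asset).
    frame_feats: (T, C, H, W) features of T generated frames.
    Returns a scalar distance; lower means the frames stay closer to the asset.
    """
    cond_spec = torch.fft.rfft2(cond_feat).abs()       # magnitude spectrum of the asset
    frame_spec = torch.fft.rfft2(frame_feats).abs()    # per-frame magnitude spectra
    return (frame_spec - cond_spec.unsqueeze(0)).abs().mean()

# During reward-based fine-tuning, this distance would be added to the loss so that
# identity details (logos, wheel counts) are penalized for drifting across frames.
cond = torch.randn(64, 32, 32)        # placeholder features
frames = torch.randn(16, 64, 32, 32)  # placeholder 16-frame clip
print(float(frequency_consistency_distance(cond, frames)))
```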

4.2 Combatting Mode Collapse

"Mode Collapse" occurs when a model outputs repetitive or generic content due to limited training diversity. 15 Public models often default to "safe," generic visuals (the "average" Christmas truck) because they are trained to minimize error against a massive, average dataset.

●​ Veriprajna Solution: We use diversity-driven regularization and dynamic noise scheduling during the sampling process to ensure creative variance while adhering to brand guardrails. We also employ Latent Space Manipulation to explore specific, on-brand regions of the model's creative potential, avoiding the generic "AI sheen". 43
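
As a complementary, sampling-side safeguard (distinct from the training-time regularization described above), the sketch below rejects near-duplicate candidates while filtering out off-brand ones. The embed function and both thresholds are placeholders; in practice a CLIP or DINO image encoder and tuned values would be used.

```python
# Illustrative sketch: enforcing creative variance at sampling time by rejecting
# near-duplicate candidates while keeping a brand guardrail. `embed` and the
# thresholds are placeholders, not production values.
import torch

def embed(image: torch.Tensor) -> torch.Tensor:
    """Placeholder perceptual embedding; swap in a real CLIP/DINO encoder."""
    return image.flatten().float()

def select_diverse(candidates, brand_ref, min_brand_sim=0.6, max_pair_sim=0.92, k=4):
    """Keep up to k candidates that stay on-brand but are not near-duplicates of each other."""
    cos = torch.nn.functional.cosine_similarity
    ref = embed(brand_ref)
    kept, kept_embs = [], []
    for img in candidates:
        e = embed(img)
        if cos(e, ref, dim=0) < min_brand_sim:   # guardrail: too far from the brand look
            continue
        if any(cos(e, prev, dim=0) > max_pair_sim for prev in kept_embs):
            continue                              # diversity: too close to an accepted sample
        kept.append(img)
        kept_embs.append(e)
        if len(kept) == k:
            break
    return kept

# Usage with placeholder tensors standing in for decoded frames:
pool = [torch.rand(3, 64, 64) for _ in range(16)]
picks = select_diverse(pool, brand_ref=torch.rand(3, 64, 64))
print(len(picks), "diverse, on-brand candidates selected")
```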

4.3 The "Object Permanence" Fix

Generative video models struggle with object permanence—remembering that a person walking behind a tree should re-emerge on the other side. 44

●​ Veriprajna Solution: We utilize 3D-aware video generation and NeRF (Neural Radiance Fields) integration. By anchoring AI generation to a 3D proxy scene (a "blockout"), we ensure that occlusion and perspective are handled by rigid 3D geometry, not probabilistic guessing. The AI effectively "skins" the 3D scene, combining the logic of CGI with the aesthetic flexibility of generative AI. 46 This hybrid approach—using physics simulations to drive the motion and AI to generate the texture—bridges the gap between simulation and hallucination.
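
The following sketch illustrates the "skinning" idea at its simplest: per-frame depth passes rendered from the 3D blockout condition a depth ControlNet, so perspective and occlusion are dictated by rigid geometry. The file paths, prompt, and frame-by-frame loop are illustrative; a production pipeline would feed the same conditioning into a video model rather than a per-frame image model.

```python
# Minimal sketch of "skinning" a 3D proxy: depth maps rendered from the blockout
# drive a depth ControlNet, so geometry and occlusion are not left to chance.
# Paths, prompt, and the per-frame loop are illustrative only.
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

frames = []
for i in range(48):  # depth passes exported from the blockout (e.g. a Blender render layer)
    depth = Image.open(f"blockout/depth_{i:04d}.png").convert("RGB")
    frame = pipe(
        prompt="red delivery truck on a snow-packed village road, cinematic night lighting",
        image=depth,
        num_inference_steps=25,
        # Re-seeding every frame keeps the starting noise identical, which reduces
        # texture flicker in this simplified per-frame setup.
        generator=torch.Generator("cuda").manual_seed(3),
    ).images[0]
    frames.append(frame)
# frames[] would then go to a temporal-consistency pass (Section 4.1) and compositing.
```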

5. Comparative Market Analysis: The ROI of Quality

The market data from 2025 creates a compelling business case for the High-Quality Hybrid approach over the Low-Cost Fully Generative approach.

5.1 The Trust Deficit and Authenticity Premium

Trust is a finite resource in the digital economy.

●​ Trust Gap: Only 13% of consumers trust ads created entirely by AI, whereas 48% trust ads co-created by humans and AI. 10 This statistic alone invalidates the "full automation" strategy for consumer-facing brands.

●​ Authenticity Premium: Brands like Dove ("The Code") have successfully campaigned against AI distortion, building massive brand equity by championing authenticity. 48 This suggests that for many sectors (beauty, food, wellness), "Real" is a premium differentiator.

●​ Negative Halo: NielsenIQ research found that even polished AI ads can cause a "negative halo effect," damaging brand perception beyond the individual campaign. 49 Viewers labeled AI ads as "annoying," "boring," and "confusing," even when the visual quality was high.

5.2 Success Stories: The Hybrid Winners

●​ Heinz (AI Ketchup): Rather than presenting AI as reality, Heinz used AI to prove a brand truth—that "ketchup" equals "Heinz." This meta-commentary was clever, transparent, and relied on the AI's limitations as the joke, rather than trying to fool the audience. 50 It turned the "hallucination" bug into a "brand dominance" feature.

●​ Under Armour (Anthony Joshua): While it faced some criticism, the campaign succeeded technically by using a mixed media approach. It combined AI motion graphics with CG and licensed footage, avoiding the "uncanny valley" of a fully AI-generated human face. The production team used AI to generate "surreal, high-concept footage" while relying on actual footage of Anthony Joshua for the facial performance. 52

●​ Volkswagen (Nostalgia): VW used AI to insert the new Tiguan into "everyday" scenarios, but the campaign relied on human actors and storytelling scripts. The AI was an invisible enabler for post-production efficiency, not the star of the show. 54

5.3 Efficiency Metrics

Veriprajna’s Hybrid Workflow delivers efficiency without sacrificing quality. The ROI is found in process acceleration, not creative replacement.

●​ Pre-Production: 60-80% cost reduction in storyboarding/animatics using AI tools like Atlabs and Storyboard Hero. 28

●​ Production: 30-40% reduction in shoot days by using AI for backgrounds and set extensions (Virtual Production), allowing the budget to be focused on high-quality talent and directors. 57

●​ Post-Production: 90% reduction in localization costs using AI dubbing, allowing global campaign rollouts in days rather than months. 58

Table 2: Comparative Analysis of AI Campaign Strategies

| Campaign | Strategy | AI Role | Outcome | Key Lesson |
| --- | --- | --- | --- | --- |
| Coca-Cola | Replacement | Generate entire video (crowds, trucks, animals) | Backlash ("Soulless") | Don't automate emotional connection. |
| Toys 'R' Us (Sora) | Replacement | Generate narrative and characters | Sentiment plummet | Avoid AI for human/child characters in emotional roles. |
| Nike | Augmentation | Analyze data to simulate scenarios | Cannes Grand Prix | Use AI to visualize data/possibilities, not to fake reality. |
| Heinz | Meta-Commentary | Highlight AI's bias towards the brand | Viral success | Transparency and humor build trust. |
| Under Armour | Mixed Media | Generate surreal environments/graphics | Technical success | Hybrid workflows (footage + AI) yield the best visual fidelity. |

6. Strategic Implementation: The Veriprajna Roadmap

For an enterprise to transition from "AI Experimentation" to "AI Maturity," it requires a structured roadmap that prioritizes governance and architectural soundness.

Phase 1: Governance & Infrastructure (Weeks 1-4)

●​ Establish "Air-Gapped" Environments: Secure, private instances of models (Stable Diffusion, Flux) to protect IP. Enterprise platforms like ComfyICU allow for private cluster deployment. 38

●​ Data Sovereignty: Ensure no brand assets are used to train public foundation models. Implement contracts that guarantee data isolation.

●​ Audit & Benchmarking: Assess current asset libraries for "LoRA-readiness"—identifying which assets (logos, mascots, product shots) can be used to train custom style models. 59

Phase 2: The Hybrid Pilot (Weeks 5-8)

●​ Low-Risk High-Volume: Start with social media variations or localization tasks, not the Super Bowl ad.

●​ The "Sandwich" Workflow: Implement the Human Ideation -> AI Generation -> Human Refinement pipeline.

●​ ControlNet Training: Develop custom ControlNets for key product SKUs. For example, training a ControlNet specifically on the geometry of a brand's bottle ensures 100% shape consistency in all future generations. 60

Phase 3: Agentic Scale (Months 3-6)

●​ Deploy Agentic Workflows: Implement AI agents that can autonomously generate variations of a master asset for different platforms (9:16 for TikTok, 16:9 for TV), checking their own work against brand guidelines using Vision Language Models (VLMs). 35

●​ Real-Time Optimization: Connect generation pipelines to performance data. If a specific "style" performs better in A/B testing, the Agentic system automatically weights future generations toward that style. 25
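
The control flow of such an agentic rollout is sketched below. Every helper (generate_variant, vlm_brand_check) is a hypothetical placeholder for a real generation pipeline or Vision Language Model call; the point is the loop structure: generate, self-check against guidelines, retry, and escalate to a human reviewer rather than ship an off-brand asset.

```python
# Hypothetical sketch of an agentic variation loop. generate_variant and
# vlm_brand_check are placeholders, not existing APIs; only the control flow
# (generate -> self-check -> retry or escalate) is the point.
from dataclasses import dataclass

PLATFORM_FORMATS = {"tiktok": (9, 16), "tv": (16, 9), "feed": (1, 1)}

@dataclass
class BrandVerdict:
    passed: bool
    notes: str

def generate_variant(master_asset: str, aspect: tuple) -> str:
    """Placeholder: re-frame / out-paint the master asset to the target aspect ratio."""
    return f"{master_asset}@{aspect[0]}x{aspect[1]}"

def vlm_brand_check(asset: str, guidelines: str) -> BrandVerdict:
    """Placeholder: a Vision Language Model scores the asset against written guidelines."""
    return BrandVerdict(passed=True, notes="logo clearance ok; palette within tolerance")

def run_agentic_rollout(master_asset: str, guidelines: str, max_retries: int = 2) -> dict:
    approved = {}
    for platform, aspect in PLATFORM_FORMATS.items():
        for _attempt in range(max_retries + 1):
            candidate = generate_variant(master_asset, aspect)
            if vlm_brand_check(candidate, guidelines).passed:
                approved[platform] = candidate
                break
        else:
            # Escalate instead of shipping an off-brand asset.
            approved[platform] = "NEEDS_HUMAN_REVIEW"
    return approved

print(run_agentic_rollout("holiday_master_v3.mov", "Brand guidelines v12"))
```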

7. Future Outlook: The Rise of "Physical Intelligence"

The next frontier, which Veriprajna is actively developing, is Physical Intelligence. Current models are "brains in jars"—they know what a glass looks like, but not how it feels to hold. The next generation of models (World Models) will simulate the physics of the world, not just the pixels. 8 Until these World Models mature (estimated 2026-2027), the Hybrid Workflow is the only safe bridge. It allows brands to harness the rendering power of 2025 AI while borrowing the physical and emotional intelligence of human creators. We are entering a phase where the novelty of "look what the AI made" has faded. The new standard is "look what we made with AI."

Conclusion: "Real Magic" Requires Real Humans

The failure of the Coca-Cola ad was not a failure of technology; it was a failure of strategy. The campaign attempted to substitute the output (the video file) for the outcome (human connection), forgetting that magic is a human experience, not a data point.

Veriprajna stands at the intersection of Algorithm and Artistry. We do not sell "AI Videos." We sell Brand Resilience in the age of synthetic media. We ensure that when your brand uses AI, it builds your legend rather than cheapening your legacy.

Actionable Takeaway: Stop asking "How much money can AI save us on production?" Start asking "How can AI enable us to visualize stories we couldn't afford to tell before, while maintaining the human soul of our brand?" The answer to the first question leads to the uncanny valley. The answer to the second leads to the future of advertising.

Report compiled by Veriprajna Strategy Team, December 2025.

Works cited

  1. Coca-Cola's new AI Christmas advert sparks backlash from angry ..., accessed December 10, 2025, https://www.foodbible.com/news/drinks/cocacolachristmasad2025controversy-733475-20251104

  2. Coca-Cola's AI Holiday Ad Is Everywhere. It's a Sign of a Much ..., accessed December 10, 2025, https://www.cnet.com/tech/services-and-software/the-worst-thing-about-coca-colas-holiday-ad-isnt-the-ai/

  3. Devastating graphic shows just how bad the Coca-Cola Christmas ..., accessed December 10, 2025, https://www.creativebloq.com/design/advertising/devastating-graphic-shows-just-how-bad-the-coca-cola-christmas-ad-really-is

  4. Coca-Cola's monstrous AI advert is the absolute opposite of Christmas - Voice Magazine, accessed December 10, 2025, https://www.voicemag.uk/blog/15852/coca-cola-ai-christmas-advert-2025

  5. Lecture 1 - Vanderbilt University, accessed December 10, 2025, https://cdn.vanderbilt.edu/t2-my/my-prd/wp-content/uploads/sites/470/2014/03/PM-Lectures-Vandy07.14.doc

  6. Lecture 1 - Vanderbilt University, accessed December 10, 2025, https://cdn.vanderbilt.edu/t2-my/my-prd/wp-content/uploads/sites/470/2014/03/PM-Lectures-Vanderbilt.doc

  7. Beyond the Physical Self: Understanding the Perversion of Reality and the Desire for Digital Transcendence via Digital Avatars in the Context of Baudrillard's Theory - Qeios, accessed December 10, 2025, https://www.qeios.com/read/F3Y8IG

  8. AI Video Models Fail to Learn Real-World Physics: New ByteDance Study Reveals Key Limitations - CTOL Digital Solutions, accessed December 10, 2025, https://www.ctol.digital/news/ai-video-generation-lacks-physics-bytedance-study/

  9. Kantar study reveals disconnect between consumers and marketers on ad platforms, accessed December 10, 2025, https://campaignme.com/kantar-study-reveals-disconnect-between-consumers-and-marketers-on-ad-platorms/f

  10. AI and Advertising in 2025: Consumer Expectations and Research Insights Smartly, accessed December 10, 2025, https://www.smartly.io/resources/ai-and-advertising-in-2025-what-consumers-really-expect

  11. Enhancing Temporal Consistency for Image-to-Video Generation via Reward-Based Fine-Tuning - arXiv, accessed December 10, 2025, https://arxiv.org/html/2510.19193v2

  12. Enhancing Temporal Consistency for Image-to-Video Generation via Reward-Based Fine-Tuning - ChatPaper, accessed December 10, 2025, https://chatpaper.com/paper/202606

  13. "Do generative video models learn physical principles from watching videos?", Motamed et al 2025 (no; undermined by fictional data & esthetic/tuning training?) : r/mlscaling - Reddit, accessed December 10, 2025, https://www.reddit.com/r/mlscaling/comments/1irwvb9/do_generative_video_models_learn_physical/

  14. The Main Limitation of Generative AI: Understanding Human Emotions, accessed December 10, 2025, https://fredypascal.medium.com/the-main-limitation-of-generative-ai-understanding-human-emotions-c2f5cd92f1b9

  15. How do you prevent mode collapse in diffusion models? - Milvus, accessed December 10, 2025, https://milvus.io/ai-quick-reference/how-do-you-prevent-mode-collapse-in-diffusion-models

  16. A Closer Look at Model Collapse: From a Generalization-to-Memorization Perspective, accessed December 10, 2025, https://arxiv.org/html/2509.16499v2

  17. Coca-Cola Refreshes Givers of the Season, Embraces AI-Powered Storytelling in Global Holiday Campaign, accessed December 10, 2025, https://www.coca-colacompany.com/media-center/coca-cola-refreshes-givers-of-the-season-embraces-ai-powered-storytelling-in-global-holiday-campaign

  18. AI or No AI, Coke Gets the Christmas Love | System1 Group | Coca ..., accessed December 10, 2025, https://system1group.com/ad-of-the-week/ai-or-no-ai-coke-gets-the-christmas-love

  19. A Q&A with the Makers of Coca-Cola's Controversial AI Holiday Ad - Futureweek, accessed December 10, 2025, https://futureweek.com/a-qa-with-the-makers-of-coca-colas-controversial-ai-holiday-ad/

  20. Toys "R" Us sees sentiments plummet over 'soulless' AI-generated ad | Marketing-Interactive, accessed December 10, 2025, https://www.marketing-interactive.com/toys-r-us-sora-ai-sentiments-plummet

  21. Does AI help or hinder creativity? "Nerds" and creatives sound off | IBM, accessed December 10, 2025, https://www.ibm.com/think/insights/ai-in-art

  22. 5 AI Marketing Failures and What We Can Learn From Them, accessed December 10, 2025, https://endash.ai/use_case/5-ai-marketing-failures-and-what-we-can-learn-from-them

  23. AI is a One Trick Pony. Analyzing Nike's 50th Anniversary —… | by Maulana Saputra | Towards Explainable AI | Medium, accessed December 10, 2025, https://medium.com/towards-explainable-ai/ai-is-a-one-trick-pony-24f539c573a2

  24. Never Done Evolving - AKQA, accessed December 10, 2025, https://www.akqa.com/work/nike/nike-50th-anniversary/never-done-evolving/

  25. Top 10 AI-Powered Marketing Campaigns of 2025 - Pixis, accessed December 10, 2025, https://pixis.ai/blog/10-breakthrough-ai-marketing-campaigns-we-loved-in-2025/

  26. Human in the Loop | Approveit, accessed December 10, 2025, https://approveit.today/human-in-the-loop

  27. AI Photomatics | 60% Less Cost Photorealistic Ad Testing | Animatic Media, accessed December 10, 2025, https://www.animaticmedia.com/ai-photomatics

  28. AI Video Production FAQ - Copyright Protected - Animatic Media, accessed December 10, 2025, https://www.animaticmedia.com/faq/

  29. Krea AI: The Ultimate All-in-One AI Tool Review (2025), accessed December 10, 2025, https://psychelicht.com/en/krea-ai-2/

  30. DIGITAL CATAPULT – AI tools for image asset creation and augmenting existing assets in advanced media production, accessed December 10, 2025, https://iuk-business-connect.org.uk/wp-content/uploads/2024/10/Digital-Catapult-AI-Tools-Report.pdf

  31. How Stable Diffusion Is Powering the Next Generation of AI-Driven Content, accessed December 10, 2025, https://www.datasciencesociety.net/how-stable-diffusion-is-powering-the-next-generation-of-ai-driven-content/

  32. Flux LoRA Training: Quick 30-Minute Guide for Character Design, Style, and Specific Outfit LoRA - YouTube, accessed December 10, 2025, https://www.youtube.com/watch?v=K7Q_bjmtre4

  33. Secret Level and Coca-Cola's AI-Driven Holiday Classic - Topaz Labs, accessed December 10, 2025, https://www.topazlabs.com/news/branded-content-reimagined-secret-level-and-coca-colas-ai-driven-holiday-classic

  34. Sora 2 vs. Runway Gen-3: Rendering Speed, Resolution & Physics Accuracy for 20-Second Clips (Q4 2025 Benchmarks) - Sima Labs, accessed December 10, 2025, https://www.simalabs.ai/resources/sora-2-vs-runway-gen-3-rendering-speed-resolution-physics-accuracy-20-second-clips-q4-2025-benchmarks

  35. Agentic AI Architecture: Blueprints for Autonomous Systems - Quantiphi, accessed December 10, 2025, https://quantiphi.com/blog/agentic-ai-architecture/

  36. Generative AI for VFX – Victor Perez | ComfyUI × NUKE Course, accessed December 10, 2025, https://victorperez.co.uk/

  37. ComfyUI | Generate video, images, 3D, audio with AI, accessed December 10, 2025, https://www.comfy.org/

  38. Enterprise ComfyUI - Private Cloud Deployment & Management - ComfyICU, accessed December 10, 2025, https://comfy.icu/for/enterprises

  39. InstaSD Case Studies | AI Workflow Hosting & ComfyUI API Deployment, accessed December 10, 2025, https://www.instasd.com/case-studies

  40. ComfyUI ControlNet Usage Example, accessed December 10, 2025, https://docs.comfy.org/tutorials/controlnet/controlnet

  41. Image to Image ControlNet: Master Precision AI Art in 2025 [72% Accuracy Boost], accessed December 10, 2025, https://www.cursor-ide.com/blog/image-to-image-controlnet-guide

  42. Tailored Generation | On-Brand Content Engine for Custom AI Models | Bri Bria.ai, accessed December 10, 2025, https://bria.ai/tailored-generation

  43. How to avoid mode collapse problem in large model video generation? - Tencent Cloud, accessed December 10, 2025, https://www.tencentcloud.com/techpedia/124559

  44. Runway's Gen-4.5 Claims 'Unprecedented' AI Video Accuracy | The Tech Buzz, accessed December 10, 2025, https://www.techbuzz.ai/articles/runway-s-gen-4-5-claims-unprecedented-ai-video-accuracy

  45. Learning Object Permanence from Video - European Computer Vision Association, accessed December 10, 2025, https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123610035.pdf

  46. DreamPhysics: Learning Physics-Based 3D Dynamics with Video Diffusion Priors arXiv, accessed December 10, 2025, https://arxiv.org/html/2406.01476v3

  47. Grounding Creativity in Physics: A Brief Survey of Physical Priors in AIGC - IJCAI, accessed December 10, 2025, https://www.ijcai.org/proceedings/2025/1176.pdf

  48. 27+ Recent Innovative Marketing Campaigns from 2025 - StoryChief, accessed December 10, 2025, https://storychief.io/blog/recent-innovative-marketing-campaigns

  49. Consumers call AI-generated video ads annoying, confusing, per NIQ | Marketing Dive, accessed December 10, 2025, https://www.marketingdive.com/news/consumer-perceptions-generative-ai-in-marketing-openai-sora/735761/

  50. AI Ketchup - D&AD, accessed December 10, 2025, https://www.dandad.org/work/d-ad-awards-archive/ai-ketchup

  51. When AI Knows Ketchup—Inside Heinz's 2022 'AI Ketchup' Campaign - The Greatscape, accessed December 10, 2025, https://thegreatscape.com/2025/03/21/when-ai-knows-ketchup-inside-heinzs-2022-ai-ketchup-campaign/

  52. UNDER ARMOUR x ANTHONY JOSHUA - Studio Vergult, accessed December 10, 2025, https://www.studiovergult.com/work/underarmour

  53. The Making of “Forever Is Made Now” - Tool of NA, accessed December 10, 2025, https://toolofna.com/news/p/the-making-of-forever-is-made-now/

  54. Volkswagen debuts creative campaign “Now anybody can drive like Somebody” for all-new 2025 Tiguan, accessed December 10, 2025, https://media.vw.com/releases/1871

  55. Boosting innovation, reshaping mobility: Volkswagen Group invests in AI, accessed December 10, 2025, https://www.volkswagen-group.com/en/press-releases/boosting-innovation-reshaping-mobility-volkswagen-group-invests-in-ai-19852

  56. The 2025 Guide to the Best AI Storyboard Generators for Creatives. - Atlabs, accessed December 10, 2025, https://www.atlabs.ai/blog/best-ai-storyboard-generators

  57. These fashion brands cut production costs with AI models | Claid.ai, accessed December 10, 2025, https://claid.ai/blog/article/brands-using-ai-fashion-models/

  58. AI dubbing in 2025: the complete guide for global business and content leaders RWS, accessed December 10, 2025, https://www.rws.com/blog/ai-dubbing-in-2025/

  59. dataset.md - lucataco/cog-hunyuanvideo-lora-trainer - GitHub, accessed December 10, 2025, https://github.com/lucataco/cog-hunyuanvideo-lora-trainer/blob/main/assets/dataset.md

  60. How to achieve precise composition with ControlNet? - Tencent Cloud, accessed December 10, 2025, https://www.tencentcloud.com/techpedia/125080

  61. A Comparative Analysis of Runway and Sora: The New Frontier of AI Video Generation, accessed December 10, 2025, https://skywork.ai/skypage/en/A-Comparative-Analysis-of-Runway-and-Sora:-The-New-Frontier-of-AI-Video-Generation/1948241392539652096
