YOUR TASTE IS THE ONLY THING AI CAN’T GENERATE

Open your Instagram feed right now. Count how many AI images you see before the scroll gets samey. The dreamy, diffused lighting. The hyper-detailed fantasy landscapes. The portraits with that specific softness around the eyes that looks almost real but not quite. The motion reels where everything flows like water through an invisible wind machine.

Now ask yourself: can you tell who made any of it?

This is the part of the AI conversation that doesn't get enough airtime. Not "will AI replace designers?" — that one's been done to death, and frankly it's the wrong question. The more urgent question is: in a world where AI can generate unlimited creative output, what happens when unlimited creative output all starts to look the same?

Juan Teppa has been asking this question since before most people had even opened a Midjourney account. And he built a whole studio around the answer.


The Ghiblification of everything

If you've been online in 2025 and 2026, you've lived through it. First it was the Ghibli wave — everyone's photos reimagined in the warm, hand-drawn aesthetic of Studio Ghibli. Then the hyper-realistic product renders that all share the same lighting rig. Then the AI fashion imagery with the same impossible fabrics catching the same impossible light.

Juan Teppa — Senior Creative, GenAI Strategist, and founder of AItelier — named this phenomenon better than anyone else has managed to. He called his ongoing critical series about it The Slop Machine Operator. Episodes include "The Ghiblification of Everything" and "ASMR Cutting Glass Fruits" — a forensic examination of the aesthetic loops that generative AI keeps producing at scale.

The series asks a pointed question: when AI can generate anything, why does so much of it look and feel like everything else?

The answer isn't that the tools are bad. The tools are extraordinary. The answer is that most people using them are letting the tools make the creative decisions. They're prompting. They're not directing.

And there's a difference. A big one.

AI makes average infinite. Your taste is what's scarce.

Here's the thing about generative AI that the hype cycle consistently buries: the tools have their own gravitational pull. Midjourney has an aesthetic. Runway has a motion signature. Every model trained on the internet inherits the visual biases of everything the internet has produced — which means it defaults, again and again, toward the most statistically common version of beautiful.

That's not a flaw. It's just what these tools are. They're extraordinarily good at generating the mean. The most likely. The aesthetically acceptable.

What they can't generate is you. Your references. Your friction. Your specific obsessions and cultural shorthand and the weird thing you find beautiful that doesn't have ten million training examples behind it.

Massimo Vignelli — the modernist titan behind the NYC subway map and some of the most enduring identity systems ever made — put it simply: "The life of a designer is a life of fight: fight against the ugliness." He was talking about bad typography and lazy layouts. But the principle scales perfectly to the generative era. The fight against aesthetic homogenisation is now something every creative professional has to consciously choose to show up for.

Your taste is not a soft skill. In 2026, it is your primary competitive advantage. The question is whether you're actively developing it, protecting it, and leading with it — or whether you're outsourcing it to a model that's been trained to give everyone the same answer.

What Juan Teppa built instead

Juan has 20+ years of creative leadership across Sony Pictures, Telemundo, and Warner Bros. Discovery. He's worked across broadcast, brand integrations, live events, and streaming — through every major industry disruption of the past two decades. He has seen what happens when a new tool arrives and half the industry rushes to use it the same way.

So when generative AI arrived, he didn't just start prompting. He built a studio with a philosophy baked in.

AItelier — his Mexico City-based AI Studio and Creative Consultancy — operates around a single guiding question: What happens when taste leads and tools follow?

The studio works across generative film, brand identity, IP development, computational fashion, and executive training. Its clients and collaborators include Runway and Dreamina.ai. But what makes AItelier distinctive isn't the tool stack — it's the creative stance. The studio's principles are explicit: originality first. No borrowed IP. No AI influencers. Human creativity drives every decision; AI provides velocity, scale, and new dimensions.

This is not a vibe. It's an operating model.

The proof is in the work. Juan's generative short film The Last Apple (La Última Manzana) explores loss, resistance, and Palestine — made with RunwayML, and sitting in sharp contrast to every piece of AI content that prioritises spectacle over stakes. He used a generative tool for something it wasn't designed for: moral weight. The result is work that couldn't have been made without the technology, and couldn't have been made by the technology without him.

That's the distinction worth holding onto.

So what does this actually mean for you?

If you're a designer, art director, or creative professional reading this while quietly wondering whether your skills still matter — this is the reframe you needed.

The problem was never AI. The problem is treating AI as a decision-maker rather than a collaborator. When you open a generative tool and accept the first output that looks good enough, you're not using AI as a creative tool — you're using it as a creative replacement. And yes, in that mode, the work starts to look like everyone else's. Because you've handed the decision-making to a system trained on everyone else's output.

The designers and creatives who are building practices that AI genuinely can't replicate are doing something different. They're bringing their aesthetic POV before they touch the tools. They're iterating with intent, not just generating options. They're directing, not just prompting.

The best AI-made work won't look AI-made. It'll look like yours.

That's not a reassuring platitude. That's a practical brief. The question it asks of you is: do you have a strong enough point of view that your AI-assisted work looks distinctly, unmistakably like yours? If the answer is yes, you're in the game. If the answer is "I'm not sure," that's the work to do — and it has nothing to do with learning a new tool.


Come hear it directly from Juan

On Tuesday 31 March at 3pm CST, Juan Teppa joins Anthony Wood for a live fireside chat inside the Creative Futures Hub. They'll be getting into all of it — the Slop Machine, what AItelier actually looks like in practice, where the line is between AI doing the work and you doing the work, and what 20 years of creative disruption looks like from the inside.

This is not a tools tutorial. This is a conversation about how to stay original when the tools are available to everyone.

Already a member of the Creative Futures Hub? Grab your spot here

Not a member yet? It's free to join


Questions Worth Asking

Q. Why does AI art all look the same?

Because most people are letting the tool make the creative decisions. Generative AI models are trained on billions of existing images — which means they default, again and again, toward the most statistically common version of beautiful. Technically competent. Aesthetically predictable. When you prompt without a strong point of view and accept the first result that looks good enough, you're not directing the tool — you're inheriting its biases. The output looks like everyone else's because it's drawing from the same well everyone else is drawing from.

Q. What is AI slop?

AI slop is generative content that prioritises volume over vision. It's technically competent work with no discernible creative fingerprint — heavily influenced by a model's default aesthetic tendencies rather than a human maker's intent. You know it when you see it: the dreamlike lighting, the impossible fabrics, the portraits that are almost real but somehow hollow. It's not that the tools made bad work. It's that nobody was directing them.

Q. How do designers stay original when using AI tools?

Lead with taste before you touch the tools. The creatives building practices that AI genuinely can't replicate aren't better at prompting — they're bringing a stronger point of view into the process. They iterate with intent. They direct rather than generate. The question to ask yourself isn't "what can this tool make?" It's "what do I want to make, and how can this tool help me get there faster?"

Q. What is AItelier?

AItelier is an AI Studio and Creative Consultancy founded by Juan Teppa in Mexico City, built on one deceptively simple premise: human taste first, AI second. The studio works across generative film, brand identity, IP development, computational fashion, and executive training — with collaborators including Runway and Dreamina.ai. Its guiding question: what happens when taste leads and tools follow?
