Recently, our team ran an experiment to see how quickly we could go from concept to finished work. The experiment started inside our own creative process. We were playing in Springboards, exploring nearly 20 different ways to talk about our brand, but we kept coming back to the elephant in the room – that these models are all giving people the same answers.
We wanted to lean into this and decided to dramatise the problem instead of explaining it.
We picked an ad – a recent spot from OpenAI – and flipped the ending to make a point about what happens when everyone uses the same tools: they end up being sent to the same destination, both literally and creatively.
Our original aim was simply to see how quickly we could mock up the concept. What we didn’t expect was how quickly the work would become uncomfortably close to the original.
The acceleration of LLM adoption across marketing and creative industries over the past couple of years has been remarkable. These tools are being woven into workflows everywhere – from concepting to copywriting to production.
When deployed thoughtfully, generative AI can push creative boundaries, helping teams explore territory they might never reach on their own, or short-circuit creative exploration that would otherwise take days or weeks.
Recent research from MIT and other institutions – published as the “Artificial Hivemind” study – documents something many of us have felt but struggled to quantify: these models are gravitating toward remarkably similar outputs, even in open-ended scenarios where countless valid answers should exist.
The simple test is to ask your LLM to generate a random number between one and 10. More than 95% of the time you will get a seven, regardless of the model, where you live or your chat history. And while there are parallels with humans, who also pick seven most often – about 28% of the time – LLMs are amplifying the average, from a 28% probability to 98%. Doesn’t that tell you everything you need to know?
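If you want to run the test yourself, here’s a minimal sketch using the OpenAI Python SDK. The model name, prompt wording and sample size are all illustrative assumptions, not the methodology behind the figures above:

```python
# A minimal sketch of the "pick a number" test, assuming the OpenAI
# Python SDK (pip install openai) and an OPENAI_API_KEY environment
# variable. The model name and sample size are illustrative.
from collections import Counter

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

counts = Counter()
for _ in range(50):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; try several models
        messages=[{
            "role": "user",
            "content": "Pick a random number between 1 and 10. "
                       "Reply with the number only.",
        }],
    )
    counts[response.choices[0].message.content.strip()] += 1

print(counts)  # expect something heavily skewed, e.g. Counter({'7': 47, ...})
```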
This isn’t about people using the technology incorrectly. It’s about how the models themselves are designed. They’re trained on patterns and they optimise for coherence and probability. They deliver what’s most likely, not what’s most interesting or unexpected.
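A toy illustration of that mechanism (my own sketch, not the study’s methodology): take an invented human-like preference distribution over the numbers one to 10 and sharpen it the way low-temperature sampling sharpens a model’s output distribution. The mode quickly swallows everything else, mirroring the 28% to 98% amplification above.

```python
# Toy sketch: low-temperature sampling re-weights probabilities as
# p^(1/T), renormalised, so the most likely answer crowds out the rest.
# The preference numbers below are invented for illustration, with 7 as
# the modest human favourite at ~28%.
import math

human_prefs = {1: 0.03, 2: 0.05, 3: 0.10, 4: 0.08, 5: 0.09,
               6: 0.10, 7: 0.28, 8: 0.12, 9: 0.08, 10: 0.07}

def sharpen(probs, temperature):
    """Re-weight a distribution as exp(log p / T), then renormalise."""
    weights = {k: math.exp(math.log(p) / temperature)
               for k, p in probs.items()}
    total = sum(weights.values())
    return {k: w / total for k, w in weights.items()}

for t in (1.0, 0.5, 0.25):
    print(f"T={t}: P(7) = {sharpen(human_prefs, t)[7]:.0%}")
# T=1.0:  P(7) = 28%  (the human baseline)
# T=0.5:  P(7) = 55%
# T=0.25: P(7) = 94%  (the mode has swallowed the distribution)
```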
Which brings me back to our experiment.
Firstly, credit should go where it’s due – our production partners absolutely nailed the brief. Frame composition, lighting, movement – all of it was eerily accurate. Too accurate.
The result raised immediate questions about likeness, intellectual property and how effortlessly these systems can blur ethical lines without anyone deliberately trying to.
We wanted to get close to the original, and the technology made it almost effortless to get there.
It crystallised the convergence problem in a way that felt impossible to ignore. If we could recreate a high-production advertisement this accurately, this quickly, with relatively little iteration – what does that mean for originality across the board? What does that mean for our craft? And how busy are copyright lawyers (or their bots) going to be in the years to come?
So we kept playing and pulled the work right back into a safer zone. We needed to make it less perfect. And as anyone in the industry knows, adjusting the brief halfway through a campaign means deadlines and costs often get blown out.
The final version we ended up with was a bit rougher around the edges, but it was necessary.
Another thing that became obvious through this process is how easy it’s become to mistake polish for purpose.
AI-generated content now rivals or exceeds human-created work in sheer volume across massive portions of the web. By late 2024, the balance had already tipped in many categories. That shift alone isn’t the problem – the problem is why.
Speed and frictionless production are replacing deliberation. Teams are shipping work that looks finished even if it took minutes to create instead of days. The question has shifted from “Is this the right direction?” to “Is this ready to publish?”
When something that looks finished takes minutes instead of days to create, we risk conflating output with outcome, volume with value and “good enough” with genuinely good.
Here’s where it gets tricky for our industry specifically.
Marketing has always been about standing out and saying something in a way that cuts through. That’s the craft.
But if the tools we’re using to generate ideas are all trained on the same corpus, optimised for similar outputs and rewarding safe, predictable thinking – how do we avoid becoming indistinguishable from each other?
The answer isn’t to abandon AI. That ship has sailed, and frankly, I think it’d be the wrong move anyway. The answer is to fundamentally change how we interact with what these systems give us.
Our experiment forced us to reckon with something uncomfortable: the first thing AI gives you is almost never the right thing to run with. It’s a spark but it’s not the answer.
Interrogate everything.
The moment something looks finished, that’s when you need to push harder. Ask what’s been smoothed away, what assumptions the model made and what directions got optimised out in favour of coherence. The rough edges are often where the truth lives.
Resist the path of least resistance.
Just because you can generate a hundred options in ten minutes doesn’t mean you should use the first one that’s good enough. Speed is valuable but only if it’s pointed in an interesting direction.
Make imperfection intentional.
We deliberately pulled our final version back from perfection because perfection wasn’t the goal – purpose was. Sometimes the most polished version is the least honest one.
The advertising industry has always been vulnerable to trends, templates and formulas. We’ve dealt with this before – when everything looked like an Apple ad, when every brand tried to sound like Dove, when “purpose-driven” became a checkbox instead of a commitment.
AI accelerates that tendency. It makes it easier to drift toward the middle. But when used with intent, it can help us generate unexpected combinations and surface connections we’d miss.
So yes, we used AI to recreate an LLM ad to criticise how LLMs create sameness. The irony isn’t lost on us. In fact, through play, it became the point. But, sometimes, to make people aware of the danger, you need to take them there. Because if we let convenience override craft, if we confuse ease with excellence, we won’t just end up with boring work – we’ll end up in a boring industry.
And none of us got into this business for that.
This article first appeared on Mumbrella, one of Australia’s leading media and marketing industry publications. Read the original piece by Pip here.
