A creative tool to inspire, not give you the answers.
We’ve built generative AI tools around a creative workflow to keep humans in the creative equation.
Where will a spark take you?
Springboards is your partner in ideation, a creative AI tool designed to inspire ad creatives and advertising teams, not replace them.
We believe great ideas come from unexpected connections. That’s why we’ve redesigned generative AI to fit seamlessly into an agency’s workflow, from Strategy → Execution. We’ve taught our tools to think like an ad person, unlocking creative leaps instead of predictable answers.
Start Faster
Break through creative blocks with quick bursts of inspiration so you can dive right in.
Go Further
Look under every rock, chase wild connections, and find creativity where no one’s looking.
Think Together
Bounce ideas, riff harder, and create work that only happens when minds collide.
We’re not coders who learned about creativity—we’re creatives who learned to code. Our mission is to keep people in the creative equation, and our promise is to help you get places you wouldn’t go alone by building tools that spark ideas, not dictate them.
We wanted to lean into this and decided to dramatise the problem instead of explaining it.
We picked an ad, a recent spot from OpenAI, and flipped the ending to make a point about what happens when everyone uses the same tools – they end up getting sent to the same destination both in real life and creatively.
Our original thought was simply to see how quickly we could conceptualise and mock up the approach. What we didn’t expect was how quickly the work would become uncomfortably close to the original.
The acceleration of LLM adoption across marketing and creative industries over the past couple of years has been remarkable. These tools are being woven into workflows everywhere – from concepting to copywriting to production.
When deployed thoughtfully, generative AI can push creative boundaries and help teams explore territory they may never reach by themselves, or short-circuit exploratory work that would otherwise take days or weeks, helping teams move more quickly.
But LLMs are converging – and not enough of us are paying attention.
Recent research from MIT and other institutions — published as the “Artificial Hivemind” study — documents something many of us have felt but struggled to quantify: these models are gravitating toward remarkably similar outputs, even in open-ended scenarios where countless valid answers should exist.
The simple test is to ask your LLM to generate a random number between one and 10. More than 95% of the time you will get a seven, regardless of the model, where you live or your chat history. There are parallels with humans, who also pick seven most often – about 28% of the time – but LLMs amplify the average, pushing that 28% probability up to 98%. Doesn’t that tell you everything you need to know?
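If you want to check this for yourself, here’s a minimal sketch of that test. It assumes the OpenAI Python SDK and an arbitrary model choice (“gpt-4o-mini”) – both assumptions on our part; any chat-capable model will do. It simply asks for a number 100 times and tallies how often each answer comes back.

```python
# A minimal sketch of the "pick a number" test, assuming the OpenAI Python SDK
# and the model "gpt-4o-mini" (swap in whichever model or provider you use).
from collections import Counter

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from your environment

def sample_number() -> str:
    """Ask the model for a random number between 1 and 10 and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": "Pick a random number between 1 and 10. Reply with the number only.",
        }],
        temperature=1.0,  # even at full temperature, the distribution stays lopsided
    )
    return response.choices[0].message.content.strip()

# Tally 100 independent samples and print the distribution, most common first.
counts = Counter(sample_number() for _ in range(100))
for number, count in counts.most_common():
    print(f"{number}: {count}/100")
```

Run it against a couple of different providers and you’ll likely see the same lopsided histogram, which is exactly the convergence the study describes.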
This isn’t about people using the technology incorrectly. It’s about how the models themselves are designed. They’re trained on patterns and they optimise for coherence and probability. They deliver what’s most likely, not what’s most interesting or unexpected.
And when everyone’s drawing from the same well, standing out becomes exponentially harder.
Which brings me back to our experiment.
Firstly, credit should go where it’s due — our production partners absolutely nailed the brief. Frame composition, lighting, movement – all of it was eerily accurate. Too accurate.
The result raised immediate questions about likeness, intellectual property and how effortlessly these systems can blur ethical lines without anyone deliberately trying to.
We wanted to get close to the original and the technology made it almost effortless to get there.
It crystallised the convergence problem in a way that felt impossible to ignore. If we could recreate a high-production advertisement this accurately, this quickly, with relatively little iteration – what does that mean for originality across the board? What does that mean for our craft? And how busy are copyright lawyers (or their bots) going to be in the years to come?
So we kept playing and pulled the work right back into a safer zone. We needed to make it less perfect. And as anyone in the industry knows, adjusting the brief halfway through a campaign means deadlines and costs often get blown out.
The final version we ended up with was a bit rougher around the edges, but it was necessary.
Another thing that became obvious through this process is how easy it’s become to mistake polish for purpose.
AI-generated content now rivals or exceeds human-created work in sheer volume across massive portions of the web. By late 2024, the balance had already tipped in many categories. That stat alone isn’t the problem – the problem is why.
Speed and frictionless production are replacing deliberation. Teams are shipping work that looks finished even if it took minutes to create instead of days. The question has now shifted from “Is this the right direction?” to “Is this ready to publish?”.
When something that looks finished takes minutes to create instead of days, we risk conflating output with outcome, volume with value and “good enough” with genuinely good.
Here’s where it gets tricky for our industry specifically.
Marketing has always been about standing out and saying something in a way that cuts through. That’s the craft.
But if the tools we’re using to generate ideas are all trained on the same corpus, optimised for similar outputs and rewarding safe, predictable thinking – how do we avoid becoming indistinguishable from each other?
The answer isn’t to abandon AI. That ship has sailed, and frankly, I think it’d be the wrong move anyway. The answer is to fundamentally change how we interact with what these systems give us.
Our experiment forced us to reckon with something uncomfortable: the first thing AI gives you is almost never the right thing to run with. It’s a spark but it’s not the answer.
Here’s what that means in practice:
Interrogate everything.
The moment something looks finished, that’s when you need to push harder. Ask what’s been smoothed away, what assumptions the model made and what directions got optimised out in favour of coherence. The rough edges are often where the truth lives;
Resist the path of least resistance.
Just because you can generate a hundred options in ten minutes doesn’t mean you should use the first one that’s good enough. Speed is valuable but only if it’s pointed in an interesting direction;
Make imperfection intentional.
We deliberately pulled our final version back from perfection because perfection wasn’t the goal – purpose was. Sometimes the most polished version is the least honest one.
The advertising industry has always been vulnerable to trends, templates and formulas. We’ve dealt with this before – when everything looked like an Apple ad, when every brand tried to sound like Dove, when “purpose-driven” became a checkbox instead of a commitment.
AI accelerates that tendency. It makes it easier to drift toward the middle. But when used with intent, it can help us generate unexpected combinations and surface connections we’d miss.
So yes, we used AI to recreate an LLM ad to criticise how LLMs create sameness. The irony isn’t lost on us. In fact, through play, it became the point. But, sometimes, to make people aware of the danger, you need to take them there. Because if we let convenience override craft, if we confuse ease with excellence, we won’t just end up with boring work – we’ll end up in a boring industry.
And none of us got into this business for that.
This article first appeared on Mumbrella, one of Australia’s leading media and marketing industry publications. Read the original piece by Pip here.
SYDNEY, Australia – 27 January 2026: Springboards, a creative tool built to inspire creativity in advertising, has released a new piece of AI-created work that puts the spotlight on large-scale generative models by revealing how quickly these systems can produce polished advertising output and how easily they drift into unsafe or unoriginal territory.
‘The Dangers of AI’, the latest experiment drop from Springboards, involved taking inspiration from an existing ad to create a new one, showing how quickly these models can generate work that appears finished but frequently crosses copyright lines and collapses into familiar patterns.
CEO and co-founder Pip Bingemann said the team wanted to put a spotlight on the dangers of AI when used in creative practices. “We’re very aware of the irony here. We’re dramatising the problem of large models sending everyone to the same place by deliberately using a technique that exposes how easily they drift into infringement. But sometimes the only way to show the danger is to step into it. This work is about making those risks visible, not pretending they don’t exist.”
The project makes clear the challenges agencies and advertisers now face as generative models become more widely adopted. These systems can produce near-finished creatives in minutes, yet they also drift into copyright-sensitive territory, replicate distinctive likenesses and collapse different directions into outputs that feel largely the same.
Springboards created the piece to highlight the gap between what generic large language models can generate and what agencies actually need, showing how the speed of these tools often comes at the cost of originality, safety and true creative variation.
This widening gap between speed and originality underscores the role of Springboards. Founded by Pip Bingemann, Amy Tucker and Kieran Browne, Springboards is used by more than 200 agencies and companies worldwide and was built to help creative teams explore a broader range of ideas without sacrificing the craft, judgement and originality essential to great work.
CMO and co-founder Amy Tucker said the project reaffirmed why dedicated creative tools matter: “This experiment really showed the dual reality. The models are powerful, but they narrow creative possibilities as much as they expand them. Creativity needs tools built for the craft, not systems that smooth every idea into the same outcome.”
“That’s why at Springboards, we aim to be an enabler, not the final answer. Springboards gives teams the variation and space they need to unlock new creative directions while keeping the taste, judgement and originality human.”
For agencies, this work serves as a wake-up call for 2026. Generative tools are accelerating, but the creative standard is not. The industry does not need faster shortcuts; it needs stronger ideas supported by tools built specifically to elevate the work.
CREDITS
Client: Springboards
Strategy & Concept Development: Springboards inhouse team
Production & Delivery: Vinne Schifferstein, Marie-Celine Merret
AI Artist: Bob Connelly
Sound Design & VO: Jaron Ransley
About Springboards
Springboards is an AI-powered platform built to inspire creativity in advertising. The platform empowers teams to explore more ideas, without sacrificing the craft of great work. Founded by industry veterans Pip Bingemann, Amy Tucker, and Kieran Browne, Springboards has already partnered with 200+ companies globally. For more information, visit Springboards or contact hello@springboards.ai.
Experiment Drops
Springboards creates new experiments designed to spark creativity and push the boundaries of advertising. Sign up for our newsletter to be the first to receive these monthly experiment drops and explore fresh ideas before anyone else. Sign up here to get started.
The act of noticing what others have learned not to see. It’s pattern recognition mixed with emotional intelligence and just enough mischief to rearrange the world. It’s not the spark so much as the reframing: the ability to pick something up – an object, a memory, a myth, a moment in culture – turn it in your hand and say: What if this means something else?
Is AI a friend or a foe?
AI is a mirror – and we don’t always like what we see. If you treat AI as a shortcut to “content,” you’ll get the flavourless soup you deserve. Rooms full of brand decks that sound like they were all written by the same middle manager in Slough. The cultural beige-ification of everything. But if you treat AI as a thinking partner – a provocation engine – you get something very different: velocity. Expansion. A chance to stretch beyond the first, obvious idea and get to the uncomfortable, interesting ones faster. AI doesn’t threaten originality. AI threatens laziness. And perhaps that’s overdue.
Name a piece of work that AI could never have come up with?
Ursula K. Le Guin’s The Carrier Bag Theory of Fiction. A model could replicate the sentences. It could summarise the thesis. It could imitate the rhythm, even. But it could never arrive at the idea. Because that essay wasn’t produced through analysis, it was produced through perception. Through noticing. Le Guin takes the oldest story humans tell – the hero, the weapon, the conquest – and quietly dismantles it with a single reframing move: the first human tool was not a spear. It was a carrier bag. A container. A holder. A vessel for sustaining life rather than taking it. From that one shift, the entire architecture of storytelling tilts. Narrative becomes collective, not competitive. Power becomes relational, not dominant. Survival becomes shared, not won. No machine does that. No dataset teaches you to subvert the underlying myth of civilisation itself.
What’s the weirdest place you’ve ever found a great idea?
Cleaning out cupboards is a favourite pastime. When the brain is bored and the hands are busy, the doors between the conscious and the sub-basement swing open. The good ideas live in the plumbing. They surface when the performative, clever, “I am ideating now” brain shuts up.
Favourite AI hack or use case? What do you think it is good for?
I use AI like a conceptual centrifuge. I throw in: a paragraph, a suspicion, the outline of a thought. I ask it to reshape it – longer, shorter, slower, mythic, corporate, angrier, whispering, bored. Not to pick one, but to see the shape of what it could be. Draft zero, rather than draft one. It keeps me from falling in love too early with my own cleverness.