In a world where AI is often used to replicate what's already been done, there's a growing opportunity to flip the script. Instead of settling for sameness, what if we used AI to break boundaries and fuel original thinking? In this post, we’ll explore how we can partner with AI to amplify imagination—not automate conformity.
AI can either be a creative thought starter and brainstorming tool or the world's most boring autopilot. The choice is ours. At Most Contagious New York, Springboards' co-founder and CEO, Pip Bingemann, took the stage to discuss the topic: “AI Won’t Steal Your Job, But It Might Steal Our Creativity”. Our take is that people shouldn’t be using AI as a tool for answers, but as a source of inspiration. When AI is left to its own devices, it does what it was built to do: serve up the safest, most predictable (boring) answers. That’s good for qualitative research and filling in a spreadsheet. But that’s bad for creativity.
AI Can Lift the Floor… But It Also Lowers the Ceiling
AI certainly makes a lot of things easier. But easy isn’t the same as good. In fact, Pip warns that the more we rely on AI to generate ideas, the more we risk flattening creativity into a neat, predictable algorithm. When AI is trained on the past, it just spits out more of the same. Great if you want “meh” ideas on repeat. Not so great if you want original, boundary-pushing work.
The real danger is not some dystopian, 1984 nightmare where creativity is outlawed. It’s more of a Brave New World scenario where we’re spoon-fed exactly what we think we want until everything turns into an optimised, algorithmic snoozefest.
Why We’re Breaking AI
At Springboards, we refuse to let AI turn creativity into a copy-paste machine. Instead of treating AI like an answer machine, we use it as a possibility machine. Our tools are built to generate unexpected ideas, not just obvious ones. They don’t narrow the options down to a single “best” answer, they explode the possibilities and hand the power back to creatives. Because creativity isn’t about efficiency, it’s about exploration.
AI Shouldn't Tell Us What's Good
AI doesn’t just influence how we create. It’s already shaping what we think is good. Popularity-based algorithms dominate everything—what we listen to, what we watch, and what we read. They push the safest, most crowd-approved content to the top, reinforcing sameness and stifling originality. Pip summed it up perfectly: "Van Gogh sucked in the moment."
Many of history’s greatest artists weren’t appreciated in their time. If AI had been curating their careers, they never would have made it past the algorithmic gatekeepers. We’d have missed out on some of the most groundbreaking work ever created.
Creativity Thrives on What’s Possible, Not What’s Probable
So what’s the move? We train AI differently. We teach it to spark weird, unexpected, and untamed ideas, not just remix the greatest hits of the past. We put the creative humans back in charge. Because AI should be a springboard for creativity, not a shortcut to mediocrity.
If we do this right, AI won’t replace creative people. It’ll supercharge them. It won’t flatten ideas, it’ll stretch them in wild, unexpected directions. And instead of a world full of average, we might just get something truly original.
If you'd like to watch the full video, check it out below:
Recently our team ran an experiment to see how quickly we could go from concept to finished work. The experiment started inside our own creative process. We were playing in Springboards and explored nearly 20 different ways to talk about our brand. But we kept coming back to the elephant in the room – that these models are all giving people the same answers.
We wanted to lean into this and decided to dramatise the problem instead of explaining it.
We picked an ad, a recent spot from OpenAI, and flipped the ending to make a point about what happens when everyone uses the same tools – they end up getting sent to the same destination both in real life and creatively.
Our original thought was to see how quickly we could conceptualise this approach and to mock up the concept. What we didn’t expect was how quickly the work would become uncomfortably close to the original.
The acceleration of LLM adoption across marketing and creative industries over the past couple of years has been remarkable. These tools are being woven into workflows everywhere – from concepting to copywriting to production.
When deployed thoughtfully, generative AI can push creative boundaries, helping teams explore territory they might never reach on their own – or compress creative exploration that would otherwise take days or weeks, so teams can move more quickly.
But LLMs are converging – and not enough of us are paying attention.
Recent research from MIT and other institutions — published as the “Artificial Hivemind” study — documents something many of us have felt but struggled to quantify: these models are gravitating toward remarkably similar outputs, even in open-ended scenarios where countless valid answers should exist.
The simple test is just to ask your LLM to generate a random number between one and 10. More than 95% of the time you will get a seven, regardless of the model, where you live or your chat history. Humans show the same bias – seven is also our most common pick, at 28% of the time – but LLMs amplify that average from 28% to over 95%. Doesn’t that tell you everything you need to know?
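This mode-amplification effect is easy to picture numerically. The sketch below is a toy illustration, not a description of how any particular model works: the "human" distribution is approximated from the figures above, and the temperature value is an assumption chosen to show the effect, not a measured model parameter.

```python
# Toy illustration of mode amplification: a hypothetical "human"
# distribution over 1..10 (seven at ~28%, per the figures above) is
# sharpened the way likelihood-trained models favour the most
# probable answer. The temperature value is an illustrative assumption.
human = {1: 0.03, 2: 0.05, 3: 0.10, 4: 0.07, 5: 0.09,
         6: 0.08, 7: 0.28, 8: 0.12, 9: 0.10, 10: 0.08}

def sharpen(dist, temperature):
    """Rescale probabilities as p_i^(1/T) / sum_j p_j^(1/T);
    low temperatures pile probability mass onto the mode."""
    powered = {k: v ** (1.0 / temperature) for k, v in dist.items()}
    total = sum(powered.values())
    return {k: v / total for k, v in powered.items()}

sharpened = sharpen(human, temperature=0.25)
print(round(sharpened[7], 3))  # seven's share climbs past 0.9
```

A modest 28% human preference becomes an overwhelming favourite once the distribution is sharpened toward its mode – which is essentially what "give me the most probable answer" does.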
This isn’t about people using the technology incorrectly. It’s about how the models themselves are designed. They’re trained on patterns and they optimise for coherence and probability. They deliver what’s most likely, not what’s most interesting or unexpected.
And when everyone’s drawing from the same well, standing out becomes exponentially harder.
Which brings me back to our experiment.
Firstly, credit should go where it’s due — our production partners absolutely nailed the brief. Frame composition, lighting, movement – all of it was eerily accurate. Too accurate.
The result raised immediate questions about likeness, intellectual property and how effortlessly these systems can blur ethical lines without anyone deliberately trying to.
We wanted to get close to the original and the technology made it almost effortless to get there.
It crystallised the convergence problem in a way that felt impossible to ignore. If we could recreate a high-production advertisement this accurately, this quickly, with relatively little iteration – what does that mean for originality across the board? What does that mean for our craft? And how busy are copyright lawyers (or their bots) going to be in the years to come?
So we kept playing and pulled the work right back into a safer zone. We needed to make it less perfect. And as anyone in the industry knows, adjusting the brief halfway through a campaign means deadlines and costs often get blown out.
The final version we ended up with was a bit rougher around the edges, but it was necessary.
Another thing that became obvious through this process is how easy it’s become to mistake polish for purpose.
By volume, AI-generated content now rivals or exceeds human-created work across massive portions of the web. By late 2024, the balance had already tipped in many categories. That stat alone isn’t the problem – the problem is why.
Speed and frictionless production are replacing deliberation. Teams are shipping work that looks finished even if it took minutes to create instead of days. The question has now shifted from “Is this the right direction?” to “Is this ready to publish?”
When something that looks finished takes minutes to create instead of days, we risk conflating output with outcome, volume with value and “good enough” with genuinely good.
Here’s where it gets tricky for our industry specifically.
Marketing has always been about standing out and saying something in a way that cuts through. That’s the craft.
But if the tools we’re using to generate ideas are all trained on the same corpus, optimised for similar outputs and rewarding safe, predictable thinking – how do we avoid becoming indistinguishable from each other?
The answer isn’t to abandon AI. That ship has sailed, and frankly, I think it’d be the wrong move anyway. The answer is to fundamentally change how we interact with what these systems give us.
Our experiment forced us to reckon with something uncomfortable: the first thing AI gives you is almost never the right thing to run with. It’s a spark but it’s not the answer.
Here’s what that means in practice:
Interrogate everything.
The moment something looks finished, that’s when you need to push harder. Ask what’s been smoothed away, what assumptions the model made and what directions got optimised out in favour of coherence. The rough edges are often where the truth lives.
Resist the path of least resistance.
Just because you can generate a hundred options in ten minutes doesn’t mean you should use the first one that’s good enough. Speed is valuable but only if it’s pointed in an interesting direction.
Make imperfection intentional.
We deliberately pulled our final version back from perfection because perfection wasn’t the goal – purpose was. Sometimes the most polished version is the least honest one.
The advertising industry has always been vulnerable to trends, templates and formulas. We’ve dealt with this before – when everything looked like an Apple ad, when every brand tried to sound like Dove, when “purpose-driven” became a checkbox instead of a commitment.
AI accelerates that tendency. It makes it easier to drift toward the middle. But when used with intent, it can help us generate unexpected combinations and surface connections we’d miss.
So yes, we used AI to recreate an LLM ad to criticise how LLMs create sameness. The irony isn’t lost on us. In fact, through play, it became the point. But, sometimes, to make people aware of the danger, you need to take them there. Because if we let convenience override craft, if we confuse ease with excellence, we won’t just end up with boring work – we’ll end up in a boring industry.
And none of us got into this business for that.
This article first appeared on Mumbrella, one of Australia’s leading media and marketing industry publications. Read the original piece by Pip here.
SYDNEY, Australia – 27 January 2026: Springboards, a creative tool built to inspire creativity in advertising, has released a new piece of AI-created work that puts the spotlight on large-scale generative models by revealing how quickly these systems can produce polished advertising output and how easily they drift into unsafe or unoriginal territory.
‘The Dangers of AI’, the latest experiment drop from Springboards, involved taking inspiration from an existing ad to create a new one, showing how quickly these models can generate work that appears finished but frequently crosses copyright lines and collapses into familiar patterns.
CEO and co-founder Pip Bingemann said the team wanted to put a spotlight on the dangers of AI when used in creative practices. “We’re very aware of the irony here. We’re dramatising the problem of large models sending everyone to the same place by deliberately using a technique that exposes how easily they drift into infringement. But sometimes the only way to show the danger is to step into it. This work is about making those risks visible, not pretending they don’t exist."
The project makes clear the challenges agencies and advertisers now face as generative models become more widely adopted. These systems can produce near-finished creatives in minutes, yet they also drift into copyright-sensitive territory, replicate distinctive likenesses and collapse different directions into outputs that feel largely the same.
Springboards created the piece to highlight the gap between what generic large language models can generate and what agencies actually need, showing how the speed of these tools often comes at the cost of originality, safety and true creative variation.
This widening gap between speed and originality underscores the role of Springboards. Founded by Pip Bingemann, Amy Tucker and Kieran Browne, Springboards is used by more than 200 agencies and companies worldwide and was built to help creative teams explore a broader range of ideas without sacrificing the craft, judgement and originality essential to great work.
CMO and co-founder Amy Tucker said the project reaffirmed why dedicated creative tools matter: “This experiment really showed the dual reality. The models are powerful, but they narrow creative possibilities as much as they expand them. Creativity needs tools built for the craft, not systems that smooth every idea into the same outcome.”
“That’s why at Springboards, we aim to be an enabler, not the final answer. Springboards gives teams the variation and space they need to unlock new creative directions while keeping the taste, judgement and originality human.”
For agencies, this work serves as a wake-up call for 2026. Generative tools are accelerating, but the creative standard is not. The industry does not need faster shortcuts; it needs stronger ideas supported by tools built specifically to elevate the work.
CREDITS
Client: Springboards
Strategy & Concept Development: Springboards in-house team
Production & Delivery: Vinne Schifferstein, Marie-Celine Merret
AI Artist: Bob Connelly
Sound Design & VO: Jaron Ransley
About Springboards
Springboards is an AI-powered platform built to inspire creativity in advertising. The platform empowers teams to explore more ideas, without sacrificing the craft of great work. Founded by industry veterans Pip Bingemann, Amy Tucker, and Kieran Browne, Springboards has already partnered with 200+ companies globally. For more information, visit Springboards or contact hello@springboards.ai.
Experiment Drops
Springboards creates new experiments designed to spark creativity and push the boundaries of advertising. Sign up for our newsletter to be the first to receive these monthly experiment drops and explore fresh ideas before anyone else. Sign up here to get started.
New York, NY – October 21, 2025 – A comprehensive new study by Springboards, an AI platform inspiring creativity in advertising, found that popular AI tools like ChatGPT, Gemini, Claude and others perform much more similarly on creative tasks than many people think. Creativity Benchmark, conducted in collaboration with the 4As, ACA, APG, D&AD, IAA, IPA, and The One Club for Creativity, challenges the idea that there's a single "best" AI tool for creative work and shows agencies need more efficient ways to test AI tools for their specific needs.
Sixteen different AI systems – from OpenAI, Google, Anthropic, Meta, DeepSeek, Alibaba and others – were tested on real marketing challenges across 100 notable brands. Over 600 creative professionals from ad agencies, marketing teams, and strategy firms made over 11,000 comparisons to see which ones worked best. The biggest surprise? There was no clear winner. The differences between the "best" and "worst" AI tools were much smaller than expected.
"Everyone assumes some AI tools are way better than others for creative work," said Pip Bingemann, CEO and co-founder of Springboards. "But our tests showed the results were pretty close. Why? Because these models are machines designed to recognize patterns and give you the most probable answer—and 'probable' has never been called 'creative.' Keeping humans in the loop and optimizing for a wider range of varied ideas is crucial.”
The study looked at three types of creative challenges: finding surprising insights about consumers, creating big campaign ideas, and coming up with bold, attention-grabbing concepts.
Key Findings:
Different AI Tools Win at Different Tasks: No single AI system was best at everything. Some were better at strategic thinking, others at wild, creative ideas. This means agencies might want to use different tools for different jobs.
Variety of Ideas Matters Most: Some AI tools generated lots of different creative options for the same brief. Others kept suggesting similar ideas over and over. For real creative work, having many different options is just as important as having good ones.
AI Can't Judge Creative Work Well: When researchers had AI systems evaluate creative ideas, they gave very different scores than human experts. This means agencies can't rely on AI to pick the best creative concepts – they still need human judgment.
Standard Creativity Tests Don't Work for Marketing: Traditional creativity tests used in psychology don't predict which AI will be better at marketing-specific creative tasks. Brand work requires its own way of measuring creativity.
Creative Preferences Vary by Location: Interestingly, creative professionals in different countries preferred different AI tools, suggesting that cultural differences affect what people consider good creative work.
“LLMs aren’t a one-size-fits-all solution—they're general purpose tools that require human creativity to unlock breakthrough outcomes," said Jeremy Lockhorn, SVP, Creative Technologies & Innovation, 4As. "These findings suggest agencies and brands should continue to evaluate which models are best suited for creative work – and that a multi-model approach may well be the best path forward."
“This study highlights that creativity isn’t about which AI you use, it’s about how you use it,” remarked Tony Hale, CEO, Advertising Council Australia. “The results reinforce what we see across the industry: the human spark remains essential to transforming good ideas into great ones. For agencies, the real opportunity is learning how to collaborate with these systems to expand, not replace, creative thinking.”
Methodology
The study involved 678 advertising professionals of diverse backgrounds, who participated in blind A/B idea judgments, likened to a "Tinder for Ideas." The data, collected over four weeks starting June 10, 2025, comprised 11,012 human comparisons across various brands, prompts, and models. This was analyzed using Bradley-Terry modeling and cosine distance for diversity scoring.
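For readers curious how pairwise "Tinder for Ideas" judgments become a ranking, the Bradley-Terry step can be sketched in a few lines. The win counts below are invented for illustration – they are not the study's data – and the update shown is the standard MM iteration for estimating Bradley-Terry strengths:

```python
# Minimal Bradley-Terry sketch: estimate a "strength" per system from
# pairwise preference counts. Win counts here are invented for
# illustration; they are not the study's actual data.
wins = {
    ("A", "B"): 6, ("B", "A"): 4,
    ("A", "C"): 7, ("C", "A"): 3,
    ("B", "C"): 5, ("C", "B"): 5,
}
models = ["A", "B", "C"]
strength = {m: 1.0 for m in models}

# Standard MM update: p_i <- W_i / sum_j [ n_ij / (p_i + p_j) ],
# then renormalise so strengths sum to one.
for _ in range(200):
    new = {}
    for i in models:
        total_wins = sum(w for (a, b), w in wins.items() if a == i)
        denom = sum(
            (wins.get((i, j), 0) + wins.get((j, i), 0)) / (strength[i] + strength[j])
            for j in models if j != i
        )
        new[i] = total_wins / denom
    total = sum(new.values())
    strength = {m: v / total for m, v in new.items()}

print(sorted(strength, key=strength.get, reverse=True))  # "A" wins most often, so it ranks first
```

With thousands of such comparisons, the fitted strengths give a ranking – and in the study, those strengths turned out to sit much closer together than expected.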
The research used four different ways to test AI creativity:
Real Creative Professionals Made the Calls: Nearly 700 people working in advertising, marketing, and strategy compared AI-generated ideas side-by-side. They didn't know which AI created which idea, so they couldn't play favorites. The study covered ideas for 100 major brands across 12 different business categories.
Tested How Many Different Ideas AI Can Create: Researchers asked each AI system to create 10 different responses to the same creative brief, then measured how different those responses were from each other. Some AI tools generated very similar ideas every time, while others came up with lots of variety.
Checked If AI Can Judge Its Own Work: The team had three leading AI systems evaluate the same creative ideas that humans had already scored, to see if AI judges agreed with human experts. They didn't.
Tried Standard Creativity Tests: The AI systems took adapted versions of creativity tests that psychologists use on humans, measuring things like how many ideas they generate and how original those ideas are.
All tests used the same settings and compared current AI systems from companies like OpenAI, Google, Anthropic, and Meta.
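The diversity measurement is also easy to picture in code. The study scored diversity with cosine distance; the toy version below substitutes simple bag-of-words vectors for whatever representations the researchers used, so the example slogans and scores are purely illustrative:

```python
import math
from collections import Counter

def cosine_distance(a, b):
    """1 - cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return 1.0 - dot / norm if norm else 1.0

def diversity(responses):
    """Mean pairwise cosine distance across a set of responses:
    0 means every answer is identical; higher means more variety."""
    vecs = [Counter(r.lower().split()) for r in responses]
    pairs = [(i, j) for i in range(len(vecs)) for j in range(i + 1, len(vecs))]
    return sum(cosine_distance(vecs[i], vecs[j]) for i, j in pairs) / len(pairs)

samey = ["just do it", "just do it", "just do it now"]
varied = ["just do it", "impossible is nothing", "think different"]
print(diversity(samey), diversity(varied))  # near-duplicates score far lower
```

A system that keeps suggesting variations on the same idea scores near zero; one that genuinely explores scores much higher – which is the gap the benchmark was built to expose.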
If you'd like to learn more about the results, visit this page. To access the original research, visit creativitybenchmark.ai
About Springboards
Springboards is an AI-powered platform built to inspire creativity in advertising. The platform empowers teams to explore more ideas, without sacrificing the craft of great work. Founded by industry veterans Pip Bingemann, Amy Tucker, and Kieran Browne, Springboards has already partnered with 150+ agencies globally and secured $3 million USD in seed funding from Blackbird Ventures. For more information, visit Springboards or contact hello@springboards.ai.