Insights from Ladies Who Strategize AI Summer Webinar
November 20, 2025
At the Ladies Who Strategize AI Summer kickoff, Springboards co-founder Amy Tucker broke down why most AI outputs aren’t wrong: they’re just boring.
Unleashing Human Creativity in an AI-Driven World
AI is moving at warp speed. ChatGPT hit 100 million users faster than TikTok, and now every second LinkedIn post is someone breathlessly declaring “the future of work.” But speed doesn’t equal creativity. In fact, if we’re not careful, AI could speed us straight into a world of beige.
Demystifying the magic trick
When we kicked off our Springboards + Ladies Who Strategize (LWS) AI Summer webinar series, our co-founder Amy Tucker shared her take with the group: most AI outputs aren’t bad; they’re just boring. LLMs like ChatGPT are built to give the “most likely” answer. They don’t “understand” your words; they break everything down into tokens and then spit back the most likely sequence. In other words, they’re probability machines, not possibility machines.
That’s why they’re fast and impressive, but also trained to follow a pattern. Amy demoed it live: ask an LLM for a random number from 1 to 10 and you’ll get the same “random” output almost every time. Another example: two different marathon runners received identical training plans from an LLM. Even creative asks like ideas for “band names” collapse into predictable, low-entropy answers.
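To make the point concrete, here’s a toy sketch in Python. The distribution below is invented for illustration (it isn’t measured from any real model), but the shape is familiar: greedy decoding over a skewed next-token distribution returns the same “random” number every single time, while genuine sampling would vary.

```python
import random

# Toy next-token distribution for the prompt "pick a number from 1 to 10".
# These probabilities are made up for illustration; real models are skewed
# in a similar way toward a few "likely" answers (famously 7).
token_probs = {
    "7": 0.45, "3": 0.18, "8": 0.12, "4": 0.08, "5": 0.06,
    "2": 0.04, "9": 0.03, "6": 0.02, "1": 0.01, "10": 0.01,
}

def greedy_pick(probs):
    """What a probability machine does: return the single most likely token."""
    return max(probs, key=probs.get)

def random_pick(probs):
    """What a truly random process would do: sample in proportion to weight."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# Ask ten times. Greedy decoding gives the same "random" number every time.
print([greedy_pick(token_probs) for _ in range(10)])  # ['7', '7', ..., '7']
print([random_pick(token_probs) for _ in range(10)])  # varied, but 7-heavy
```

Sampling temperature adds some wobble in real models, but the skew survives: the probable answers dominate.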
Machines don’t dream. They echo.
Breaking the gray with Springboards
Springboards was built to be the antithesis of LLMs. We don’t hand you the most probable answer. We hand you sparks: weird prompts, thought-starters, and curveballs designed to push your brain somewhere it wouldn’t go alone.
Our platform has modes literally called LSD and Asylum for a reason. They’re not about being safe or efficient; they’re about messing with the machine until something interesting, unexpected, and fun falls out.
Because we believe the real power of AI isn’t about removing humans from the process. It’s about supercharging them.
What AI is good for (and not)
Amy also talked about where AI does shine:
Getting you up to speed fast on new topics
Spotting patterns in mountains of data
Helping you flip perspectives or reframe a problem
Generating starter ideas you can riff off
But let’s not kid ourselves. AI isn’t good at originality. It isn’t good at taste. And it definitely isn’t good at giving you the kind of hard feedback that makes an idea go from okay to killer.
That’s still the human superpower.
Humans + AI > boring
So where does that leave us? Pretty optimistic, actually. New tools always bring new possibilities. Just like the camera didn’t kill painting, AI won’t kill creativity. It just changes the canvas.
The challenge — and the fun — is learning how to break these tools in ways that make space for more imagination, not less. That’s exactly what Springboards is here for: keeping creativity human, playful, and just the right amount of weird.
Springboards has launched Experiments, a new place to showcase unconventional and creative uses of AI. Every month the Springboards team will release a new experiment exploring AI, creativity, and human thought patterns. They’re free to use, open to all, and built to spark new ideas.
First up is Fork AI, a prototype designed to break the predictable flow of AI-generated outputs.
Fork AI lets users “fork” individual words in a large language model (LLM) response, exploring alternative versions at the token level. It’s a way to invite variation, spark unexpected thinking and remix machine-generated text in real time. The prototype made its debut at SXSW Sydney last week and is now live and free to use at experiments.springboards.ai. No login required (but sign up for email alerts for future drops!).
“We’re obsessed with the problem of variation; I don’t know whether people have realised just how repetitive these models are. What would a UX look like if you allowed a person to get in the middle of that generation process and knock it in a different direction?” said Kieran Browne, CTO and co-founder of Springboards.
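Fork AI’s internals aren’t public, so treat the sketch below as a hypothetical illustration of the general idea rather than the product’s implementation: generate greedily, then “fork” one position to a runner-up token and let generation continue down the new branch.

```python
# Toy ranked-continuation table standing in for a language model.
# Everything here is hypothetical; it is not Fork AI's actual code.
MODEL = {
    "<s>":     ["the", "a"],
    "the":     ["cat", "idea", "machine"],
    "a":       ["machine", "cat"],
    "cat":     ["sat", "dreamed"],
    "idea":    ["sparked", "echoed"],
    "machine": ["echoed", "dreamed"],
}

def generate(fork_at=None, rank=1, max_len=6):
    """Greedy generation, optionally 'forking' one position to the
    rank-th alternative and continuing greedily from there."""
    words, cur = [], "<s>"
    for pos in range(max_len):
        options = MODEL.get(cur, [])
        if not options:
            break  # no continuations: the sentence ends here
        choice = rank if pos == fork_at else 0
        cur = options[min(choice, len(options) - 1)]
        words.append(cur)
    return " ".join(words)

print(generate())                   # the cat sat      (the "most likely" path)
print(generate(fork_at=1, rank=1))  # the idea sparked (fork the second word)
print(generate(fork_at=0, rank=1))  # a machine echoed (fork the first word)
```

One forked token, and the whole downstream sentence changes; that’s the variation the quote above is chasing.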
Fork AI is part technical test, part creative provocation. It’s a hands-on example of how humans and machines can collaborate to push ideas in unexpected directions. Springboards will be releasing a new creative experiment each month, with the aim of inspiring more creative thinking and fresher work, and keeping creative humans firmly in the mix.
Want to hear when Springboards releases the next experiment? Sign up for the monthly drop at experiments.springboards.ai.
“The true sign of intelligence is not knowledge, but imagination.” —Albert Einstein
Is AI a friend or a foe?
It depends on who is in the driving seat and who is in the back seat. If it helps you detect early disease and shares a personalised treatment plan to help heal you, then it’s a saviour. If you’ve just lost your job because AI has taken it over, then it’s the devil.
Name a piece of work that AI could never have come up with?
John Lewis Christmas Advert 2011 - The Long Wait
One of my all-time English favourites, one that soothes the soul for all parents who worry about their kids and the damage of consumerism. Honestly, though, there isn’t a piece of creative work in history that AI couldn’t have lent a helping hand to, making the process shorter and smoother.
What’s the weirdest place you’ve ever found a great idea?
Pain. Pain is the place where you meet a new version of yourself, and it’s a catalyst for thinking and seeing the world in a different way.
Favourite AI hack or use case? What do you think it is good for?
Uploading medical reports on a damaged knee to find out absolutely everything I can do to get back on the black runs one day.
New York, NY – October 21, 2025 – A comprehensive new study by Springboards, an AI platform inspiring creativity in advertising, found that popular AI tools like ChatGPT, Gemini, Claude and others perform much more similarly on creative tasks than many people think. Creativity Benchmark, conducted in collaboration with the 4As, ACA, APG, D&AD, IAA, IPA, and The One Club for Creativity, challenges the idea that there's a single "best" AI tool for creative work and shows agencies need more efficient ways to test AI tools for their specific needs.
Sixteen different AI systems – from OpenAI, Google, Anthropic, Meta, DeepSeek, Alibaba and others – were tested on real marketing challenges across 100 notable brands. Over 600 creative professionals from ad agencies, marketing teams, and strategy firms made over 11,000 comparisons to see which ones worked best. The biggest surprise? There was no clear winner. The differences between the "best" and "worst" AI tools were much smaller than expected.
"Everyone assumes some AI tools are way better than others for creative work," said Pip Bingemann, CEO and co-founder of Springboards. "But our tests showed the results were pretty close. Why? Because these models are machines designed to recognize patterns and give you the most probable answer—and 'probable' has never been called 'creative.' Keeping humans in the loop and optimizing for a wider range of varied ideas is crucial.”
The study looked at three types of creative challenges: finding surprising insights about consumers, creating big campaign ideas, and coming up with bold, attention-grabbing concepts.
Key Findings:
Different AI Tools Win at Different Tasks: No single AI system was best at everything. Some were better at strategic thinking, others at wild, creative ideas. This means agencies might want to use different tools for different jobs.
Variety of Ideas Matters Most: Some AI tools generated lots of different creative options for the same brief. Others kept suggesting similar ideas over and over. For real creative work, having many different options is just as important as having good ones.
AI Can't Judge Creative Work Well: When researchers had AI systems evaluate creative ideas, they gave very different scores than human experts. This means agencies can't rely on AI to pick the best creative concepts – they still need human judgment.
Standard Creativity Tests Don't Work for Marketing: Traditional creativity tests used in psychology don't predict which AI will be better at marketing-specific creative tasks. Brand work requires its own way of measuring creativity.
Creative Preferences Vary by Location: Interestingly, creative professionals in different countries preferred different AI tools, suggesting that cultural differences affect what people consider good creative work.
“LLMs aren’t a one-size-fits-all solution—they’re general-purpose tools that require human creativity to unlock breakthrough outcomes,” said Jeremy Lockhorn, SVP, Creative Technologies & Innovation, 4As. “These findings suggest agencies and brands should continue to evaluate which models are best suited for creative work, and that a multi-model approach may well be the best path forward.”
“This study highlights that creativity isn’t about which AI you use, it’s about how you use it,” remarked Tony Hale, CEO, Advertising Council Australia. “The results reinforce what we see across the industry: the human spark remains essential to transforming good ideas into great ones. For agencies, the real opportunity is learning how to collaborate with these systems to expand, not replace, creative thinking.”
Methodology
The study involved 678 advertising professionals of diverse backgrounds, who participated in blind A/B idea judgments, likened to a "Tinder for Ideas." The data, collected over four weeks starting June 10, 2025, comprised 11,012 human comparisons across various brands, prompts, and models. This was analyzed using Bradley-Terry modeling and cosine distance for diversity scoring.
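For the statistically curious, here’s a minimal sketch of the Bradley-Terry step, using the standard MM update (Hunter, 2004) and made-up win counts rather than the study’s data. Each model gets a latent strength score, and the fitted strengths turn thousands of pairwise preferences into a single ranking.

```python
import numpy as np

def bradley_terry(wins, iters=200):
    """Fit Bradley-Terry strengths from a matrix where wins[i, j] counts
    how often judges preferred model i's idea over model j's."""
    n = wins.shape[0]
    p = np.ones(n)                 # strength parameter per model
    games = wins + wins.T          # total comparisons per pair
    for _ in range(iters):
        denom = games / (p[:, None] + p[None, :])
        np.fill_diagonal(denom, 0.0)
        p = wins.sum(axis=1) / denom.sum(axis=1)   # MM update (Hunter, 2004)
        p /= p.sum()               # normalize: only ratios are identifiable
    return p

# Hypothetical counts for three models; not the study's data.
wins = np.array([[0., 12., 15.],
                 [10., 0., 11.],
                 [9., 13., 0.]])
p = bradley_terry(wins)
print(p)                           # close strengths -> "no clear winner"
print(p[0] / (p[0] + p[1]))        # fitted P(model 0 beats model 1)
```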
The research used four different ways to test AI creativity:
Real Creative Professionals Made the Calls: Nearly 700 people working in advertising, marketing, and strategy compared AI-generated ideas side-by-side. They didn't know which AI created which idea, so they couldn't play favorites. The study covered ideas for 100 major brands across 12 different business categories.
Tested How Many Different Ideas AI Can Create: Researchers asked each AI system to create 10 different responses to the same creative brief, then measured how different those responses were from each other (a sketch of this measure follows this list). Some AI tools generated very similar ideas every time, while others came up with lots of variety.
Checked If AI Can Judge Its Own Work: The team had three leading AI systems evaluate the same creative ideas that humans had already scored, to see if AI judges agreed with human experts. They didn't.
Tried Standard Creativity Tests: The AI systems took adapted versions of creativity tests that psychologists use on humans, measuring things like how many ideas they generate and how original those ideas are.
All tests used the same settings and compared current AI systems from companies like OpenAI, Google, Anthropic, and Meta.
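And a sketch of the diversity measure from the second test: embed each of a model’s responses, then average the pairwise cosine distances. The embeddings below are random stand-ins, since this summary doesn’t say which embedding model the study used.

```python
import numpy as np

def mean_pairwise_cosine_distance(embeddings):
    """Average cosine distance over every unordered pair of response
    embeddings: near 0 = same idea repeated, near 1 = lots of variety."""
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = X @ X.T                        # cosine similarity matrix
    iu = np.triu_indices(len(X), k=1)     # each unordered pair once
    return float(np.mean(1.0 - sims[iu]))

rng = np.random.default_rng(0)
# Ten near-identical responses vs. ten unrelated ones (64-dim stand-ins).
repetitive = rng.normal(size=(1, 64)) + 0.05 * rng.normal(size=(10, 64))
varied = rng.normal(size=(10, 64))
print(mean_pairwise_cosine_distance(repetitive))  # close to 0: low diversity
print(mean_pairwise_cosine_distance(varied))      # close to 1: high diversity
```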
If you’d like to learn more about the results or access the original research, visit creativitybenchmark.ai.
About Springboards
Springboards is an AI-powered platform built to inspire creativity in advertising. The platform empowers teams to explore more ideas, without sacrificing the craft of great work. Founded by industry veterans Pip Bingemann, Amy Tucker, and Kieran Browne, Springboards has already partnered with 150+ agencies globally and secured $3 million USD in seed funding from Blackbird Ventures. For more information, visit Springboards or contact hello@springboards.ai.