Fast Five is our rapid-fire interview series, capturing quick takes from the industry on creativity and AI. 5 questions, 5 minutes, unfiltered.
Who are you?
Gabrielle Tenaglia, Head of Marketing, Lettuce
How do you define creativity?
When solving problems, it's common for people and businesses to ask the same questions and use similar processes...which leads them to similar answers. In marketing, it's a key reason why so much of the communication in each category looks and sounds the same. Creativity is about asking different questions and using a different process to open up new options--not just ones that are different from what your competitors are saying, but ones that your competitors could not say. In marketing, creativity is taking your strategy, your messaging and your communications assets to a place where your competitors can't follow you.
Is AI a friend or a foe?
It can be either. Like any other technology tool, it depends 100% on how you use it.
Name a piece of work that AI could never have come up with?
This was produced with AI, but developed and written by humans. AI is so much about pattern recognition and doing things that are similar to what you've done before. This is so different from what exists out there in the tax and accounting space that no AI could have come up with it.
What’s the weirdest place you’ve ever found a great idea?
All my great ideas happen when I'm not working. Lately some of my ideas have come from thinking about my grandmother. When she died and we cleaned out her house, we found all of these bundles of money hidden away in different places tied up with ribbons. I'm working on a financial services product. We think about money as a very rational thing but there is SO much emotion wrapped up in it. Like the little bundles my grandmother left us.
Favourite AI hack or use case? What do you think it is good for?
I'm not sure that I have a brilliant answer here--I use it as a partner in lots of thinking and work. It helps you get the obvious ideas out of the way quickly. I am able to be SO fast in doing background research, developing target insights, and understanding the competitive landscape that I can spend my HUMAN time working on the creative pieces that push the thinking to new and more interesting places.
My mother has been sick with a complex medical condition and I've been using it a ton to understand what to expect and figure out questions to ask the doctors. But I haven't been telling the doctors that I'm doing that because I think they'll be annoyed. I had someone tell me recently that they had AI write recommendations or questions in a format that looks like a referral letter from another doctor. They then hand that to their new doctor and it gets more attention. I thought that was super interesting...how AI is giving such good information but we need to "fake" the source so people will accept it.
If I had to choose one word to pinpoint what makes large language models (LLMs) amazing I would choose “flexibility.” Whether you’re after a solution to a logic problem, a recipe for brownies or a plot outline for a new space opera, the LLM will almost always be able to offer something.
Now, it should go without saying that not every response that comes from an LLM is good. LLMs are often said to “hallucinate” because they report falsehoods and inaccuracies with the same confidence as they report facts. But for many tasks for which we might consult an LLM, “correctness” is simply not a factor. What would it mean, for instance, to “hallucinate” a plot outline for a space opera? Sounds pretty good to me; that’s how Dune was written. Instead, what is much more significant for judging a space opera proposal, or really any creative task, is how novel it is.
How original, different, variable or random are LLM outputs, really? I think the answer will surprise you.
Pick a Random Number Between 1 and 10
Open a fresh thread in an AI assistant of your choosing, ask it to “Pick a random number between 1 and 10” and come back with the answer.
You got 7.
Surprised?
In the same thread, ask the model to give you another number between 1 and 10.
You got 3.
Okay, I’m less sure it was 3; you might have gotten 4 or 5.
How could I possibly know that? Well, in an experiment we conducted earlier this year, we asked several popular AI models that same question 100 times over and tallied the results.
All the models we tested showed a massive bias towards number 7 as the first “random” number offered.
OpenAI’s GPT-4o answered “7” 92/100 times, Anthropic’s Claude 3.5 Sonnet answered “7” 90/100 times and Google’s Gemini 2.0 Flash answered “7” a perfect 100/100 times.
When asked for a second “random” number, the models showed slightly more variability, but across all of them almost all of the answers were 3, 4 or 5.
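If you want to reproduce the tally yourself, here is a minimal sketch of the repeat-and-count approach. It assumes the openai Python SDK (v1 client interface) and uses “gpt-4o” as the model name; swap in whichever model and client you are actually testing.

```python
# Minimal sketch: ask the same "random number" question in 100 fresh
# conversations and tally the answers. Assumes the openai Python SDK
# (v1 client) and an OPENAI_API_KEY in the environment.
from collections import Counter
from openai import OpenAI

client = OpenAI()
PROMPT = "Pick a random number between 1 and 10"

counts = Counter()
for _ in range(100):
    resp = client.chat.completions.create(
        model="gpt-4o",  # swap in any model you want to test
        messages=[{"role": "user", "content": PROMPT}],
    )
    # Replies sometimes arrive with extra words ("Sure! 7"), so keep
    # only the digits before counting.
    answer = "".join(ch for ch in resp.choices[0].message.content if ch.isdigit())
    counts[answer] += 1

print(counts.most_common())  # e.g. [('7', 92), ...] for a heavily biased model
```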
Say a completely random word
An LLM, as we have seen, won’t behave “randomly” just because you ask it to. This is true for all open-ended questions, not just random number generation.
So what happens when we ask an LLM to "Say a completely random word"? For this experiment we used the OpenAI API to run the prompt against GPT-4o 100 times at each of 6 temperature levels and tallied the results.
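Building on the sketch above, here is roughly how such a sweep can be run; the six temperature values and the word normalisation below are illustrative assumptions rather than the exact settings from our experiment.

```python
# Sketch of the temperature sweep: the same prompt is run 100 times at
# each temperature and the single-word replies are tallied. The exact
# temperature values and normalisation here are illustrative assumptions.
from collections import Counter
from openai import OpenAI

client = OpenAI()
PROMPT = "Say a completely random word."
TEMPERATURES = [0.0, 0.4, 0.8, 1.2, 1.6, 2.0]  # six assumed levels in the API's 0-2 range

tally = Counter()
for temp in TEMPERATURES:
    for _ in range(100):
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": PROMPT}],
            temperature=temp,
        )
        # Normalise the reply to a bare lowercase word before counting.
        word = resp.choices[0].message.content.strip().strip('".!').lower()
        tally[word] += 1

print(tally.most_common(10))
```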
The standout performer was “quokka”, which GPT-4o offered up “randomly” 155 times; a full quarter of all replies to the prompt. Curiously, “platypus”, another Australian animal, also made it into the top positions, appearing in fourth place with 27 occurrences. This wasn’t an “Australian” version of GPT-4o, by the way; I suspect English-speaking internet culture, which makes up a large portion of LLM training data, just considers Australian animals to be “more random” than animals of other continents. All in all, the top 10 words returned by the model made up more than half of all words suggested.
One of the most surprising results to come out of our experiments was that prompting for more randomness can have the opposite effect. We ran the same experiment as above, but changed the prompt to read: "say a completely random word that I wouldn't be able to predict."
These results were even more repetitive than the previous experiment. “Quokka” now makes up more than half of all replies by itself (returned 355 times). It should go without saying that this makes the response much more predictable.
Predictably boring?
So what is happening? You may notice that the “unpredictable” words offered by the model have a certain quality to them. “Snollygoster”, “spelunking” and “flibbertigibbet” have the kind of randomness evocative of a Lewis Carroll poem. “Zephyr”, “ephemeral” and “serendipity” have a literary quality and tend not to show up in common speech. As an Australian and ex-resident of Perth, I don’t find “quokka” or “platypus” “random” at all, but that’s a rant for another day.
LLMs are, without getting into the technicalities, trained to say likely things. They are, in essence, machines designed to predict the most likely next word based on all the previous words provided. This is what an LLM will do regardless of how many words like “random”, “unpredictable” or “chaotic” you shove into the prompt. So much money, time and effort is currently pouring into solving AI’s reliability problem: say the right answer, write the right code, don’t mess up that recipe, don’t hallucinate. But the fundamental problem that LLMs pose for creative tasks is not reliability at all; it is repetition.
It is difficult to see just how repetitive AI assistants are from the vantage of a single user: 7 followed by 3 is a plausible pair of random numbers, and “quokka” is a plausible word chosen at random. What you don’t see from the chat thread is that hundreds of thousands of other people asking the same question are getting much the same answer. What effect does this have on creativity? Perhaps you can see where I’m going with this.
What would happen if ChatGPT’s 400 million weekly users all asked for creative and original ideas? To find out, we ran the following prompts through GPT-4o 100 times each via the OpenAI API and counted the most common responses.
“Give me a fun idea to get people dancing at a party. Describe the idea in one word.”
Model responded with “flashmob” 67 out of 100 times.
“Give me a creative idea for a performance artwork. Describe the idea in one word.”
Model responded with “metamorphosis” 80 out of 100 times.
“Give me an original theme for an ad campaign for Nike. Describe the theme in just one word.”
Model responded with either “unleash” or “unleashed” 73 out of 100 times.
None of these answers are wrong, or even bad (okay, maybe the flashmob). In general, the quality of the creative suggestions we get out of an LLM is perfectly fine. The problem is that LLMs seem to have an extremely limited range, even when asked open-ended questions. To a single user, this is almost invisible. Meanwhile, the world is slowly turning beige.
This is what we mean when we say Springboards is optimising for variation, and why we will keep optimising for it: creativity needs diversity and novelty to thrive.
It’s official: Springboards is heading to SXSW Sydney. We’re pitching, we’re presenting, and we’re bringing our take on AI + creativity.
Out of hundreds of startups across APAC, we’ve been chosen as one of just 23 finalists for the SXSW Sydney Pitch 2025, competing in the Enterprise, Big Data & AI category.
We’re proud to be repping creative humans in a sea of tech startups.
Our session will unveil results from the Creativity Benchmark, the world’s first industry research testing AI’s creative potential, not just its logic.
Launched with partners the APG, IAA, 4A’s, One Club, D&AD, ACA and IPA, the benchmark looks at: Which models are the best at inspiring creativity? And what does that mean for the work we make?
We’ll share the findings and what this means for the industry.
Read more about the other sessions in the Tech and Innovation track at SXSW Sydney here.
If you thought you knew everything about Pip and Amy's origin story, think again. We learned a ton from Vidit’s “meticulously researched biography” episode of High Flyers Podcast, featuring Springboards co-founders Pip and Amy.
In episode #218 of Vidit’s High Flyers Podcast, Pip and Amy got into the early details of their careers, from meeting at a media agency (where Pip literally interviewed Amy for her first job) to building a global AI startup with two small kids, a dark sense of humour, and a shared belief that work should feel like play.
So what do you get when you mix: two agency vets with a ton of experience in creative and media shops, a redundancy or two, two small kids, and a belief that creativity is going to win out no matter how fast AI moves? You get Springboards.
This episode dives into the full journey: agency life, the chaos and challenge of building a startup, and plenty of lessons learned along the way.
Pip is a big fan of being really good at what you’re doing right now and continuing to say yes so that doors keep opening for you. You’ve gotta keep evolving, learning and staying open, because you never know where that could lead you.
Amy’s view is that people often try to specialise too early. You don’t need to have one job for the rest of your life. And as Vidit added, he discovered his passion through trying different things. You can’t sit around waiting for your passion to find you.
Listen in for some laughs, some insights on taking the leap when building your product, and proof that creativity isn’t dying; its value is only going up. It’s on humans to evaluate what great creativity looks like and weed through the average content that AI so often spits out.