75 years ago, Alan Turing posed a provocative question:
“Can machines think?”
To explore it, he proposed the imitation game: a test to determine whether a machine could exhibit intelligence indistinguishable from a human. The setup was simple but clever: a judge holds short, text-only conversations with a person and a machine, then has to decide which is which.
The test — now known as the Turing Test — wasn’t just about language. It was about perception.
Could a machine appear human enough to be believed?
For decades, the answer was no.
Until now.
In March 2024, researchers from Stanford and UC San Diego published a peer-reviewed study showing that a version of GPT-4 fooled human judges into thinking it was a person 73% of the time.
Actual human participants?
Correctly identified as human just 63% of the time.
This wasn’t a PR stunt. No smoke and mirrors. Just short, text-based conversations — exactly in line with Turing’s original vision — and the AI outperformed the humans.
So What?
This isn’t just about machines getting smarter.
It’s about how we respond — emotionally, socially, and cognitively — to things that sound smart, fluent, and familiar.
Because our brains don’t just process facts.
We respond to tone, confidence, and emotional fluency.
That’s not new. What’s new is that language models are now generating those signals — at scale — in a way that feels human enough to trust.
What the Study Actually Showed
Let’s break it down:
The results:
| Conversational Partner | Judged as Human |
| --- | --- |
| GPT-4 | 73% |
| Human | 63% |
| GPT-3.5 | 50% |
So not only did GPT-4 pass the Turing Test — it beat the humans at sounding human.
We don’t evaluate ideas, people, or brands based on facts alone. We’re drawn to what’s:
✅ Easy to understand
✅ Emotionally resonant
✅ Socially familiar
In psychology, this is known as cognitive fluency, emotional mirroring, and social presence. It’s how we make decisions, build trust, and form impressions.
What the study showed is that LLMs are now capable of mimicking these patterns with startling precision — even if the output is generated, not lived.
For Brands, This Changes the Game
If a bot now sounds warmer than your customer service team…
More confident than your sales assistant…
Or more consistent than your brand tone across channels…
That’s not a small thing. That’s a shift in perceived trust and credibility.
The Risk Isn’t AI. It’s Lazy Application.
The danger here isn’t that AI is getting too smart.
It’s that we’ll use it to replace real empathy with performative tone, to generate content without thinking about meaning, or to automate everything that once signalled care or intent.
The illusion of humanness is now cheap and scalable.
But for brands, creating a real connection still takes work.
How We’re Using AI at BH&P
At BH&P, we’re a behaviour change agency — we’re strategy-led, brand-driven, and obsessed with how we can use creative thought to change what people think, feel and do.
We use AI where it helps us enhance clarity, craft, and consistency — but never as a substitute for critical thinking or creative judgment.
Here’s how:
We’re starting to build LLMs trained on individual client assets — not to create campaigns, but to act as knowledge repositories.
They include:
This lets us fact-check and refine creative briefs, interrogate tone, and ensure consistency — across teams, channels, and time.
It’s not automation. It’s augmentation — protecting and scaling strategic quality.
When we run first-party research — surveys, interviews, audits — AI helps us process the data fast, spot themes, and build a clear structure.
It reduces human error, accelerates time-to-insight, and helps us benchmark against competitor narratives.
Which means we spend more time on crafting the narrative — not drowning in data.
It also improves internal knowledge sharing — so insight isn’t stuck in someone’s head or lost in last quarter’s deck.
See our Moorhouse AI case study here.
We use AI to generate campaign visuals only when it supports an idea that would be hard to execute with photography or illustration alone.
Then our designers take over — refining, retouching, animating.
We don’t use AI as the subject matter expert.
That’s the client.
We don’t use it to understand behaviour.
That’s us.
We use it to bring ideas together in fresh, emotionally resonant ways — to enhance, not replace, the creative process.
So, Where Do We Go From Here?
Team BH&P is not anti-AI. Far from it: we love it.
But we are cautious, and we are very much pro-intent.
Brands have a responsibility to be transparent, thoughtful, and human — especially when they’re not.
It’s not about sounding human.
It’s about being worth trusting.
We’re entering a world where bots can sound like people. In that world, brands need to decide: what will your voice stand for?
Final thought: Back to Turing
Alan Turing was a man of enormous intellect. It was his thoughts, his creativity, his humanity that made him inimitable. And he never said the goal was to create machines that think.
He asked whether we’d know the difference.
Now we don’t.
And the burden of responsibility has shifted from the machines to the people designing and deploying them.
Because the real test isn’t whether AI can sound like us. We already know it can.
The real test is whether we can stay honest, intentional, and impactful in a world where fluency is cheap — and trust is earned.