GenAI Cold Call - Bot or Not?
So, this happened to me yesterday, and it points to a perfect storm brewing in GenAI-powered cold-call bots.
I got a cold call that wasn't just automated; it was powered by sophisticated generative AI. Unlike the clunky, robotic spam calls we're all used to rolling our eyes at, this conversation was designed to feel totally human.
The exchange went something like this:
Bot: Hello, is this Stacey Edmonds?
Me: Yes.
Bot: Great, um, I'm calling to ask if your business uses skilled remote staff from the Philippines.
Me: Yes, but not when it's sold by a bot.
Bot: Ah! You got me…
Now, I spotted that this was a bot immediately and started recording the call. Listen here.
I currently have no idea whether this is dodgy or not. The company behind the call could be https://ctinnovations.com/ - but who knows!
But here's the concerning part: the companies selling these AI tools are actually bragging that "customers often don't even realise they've been talking to a bot."
And that's creating a massive problem - not just ethically, but practically.
Welcome to what I'm calling "Bishing" (bot + phishing) - the era of bot-powered phishing.
Here's the real problem: while legitimate businesses are proudly training people NOT to notice they're talking to bots, they're simultaneously making scammers' jobs infinitely easier.
It's dodgy for businesses to hide the fact that you're talking to AI. But the bigger danger? If people can't tell the difference between human and bot conversations, scammers now have access to:
Unlimited "employees": Thousands of convincing scam calls running simultaneously, 24/7
Perfect personas: AI that can adapt its personality, accent, and approach in real-time
Infinite patience: Bots that can build rapport over multiple calls and conversations
Zero human limitations: No need for skilled scammers, training, or management
The same technology that legitimate companies use to fool customers into thinking they're talking to humans can be weaponised by scammers - and at massive scale.
Why People NEED to Spot Bots
This isn't just about transparency anymore - it's about survival in the new digital landscape. When people can't distinguish between human and AI conversations, they lose a critical defence mechanism.
Traditional scam indicators - poor grammar, robotic responses, pressure tactics - go out the window when AI can:
Sound genuinely conversational and empathetic
Remember previous interactions perfectly
Adapt its approach based on your responses
Be patient and build trust over time
Sound exactly like a legitimate customer service representative
The bottom line: if you can't tell you're talking to a bot, you can't assess whether that bot has good or malicious intentions.
People need to develop bot-detection skills not just for ethical reasons, but for their own protection. Every time a legitimate business trains people to ignore the signs of AI interaction, they're inadvertently training them to be vulnerable to AI-powered scams.
This Is EXACTLY Why "Dodgy or Not?" Exists
Look, we built Dodgy or Not? for precisely this moment. When the line between human and AI blurs, traditional "spot the scam" checklists become useless.
Our game teaches the one skill that matters: questioning every interaction.
Built by behavioural scientists, game designers, learning experts and cyber professionals, we help businesses build instinct through play.
The Bottom Line
Bishing isn't coming. It's here. And anyone who can't spot a bot is a potential entry point.