Inspire AI: Transforming RVA Through Technology and Automation

Ep 33 - Digital Deception: How Scammers are Using Technology to Target You

AI Ready RVA Season 1 Episode 33

Those three seconds of your voice on social media? That's all scammers need to clone it perfectly. That routine video call with your CFO requesting an urgent transfer? It might be entirely fabricated by AI.

Artificial intelligence has supercharged fraud, creating a frightening new reality where familiar voices and faces can no longer be trusted. In this eye-opening episode, we explore the alarming rise of AI-powered scams that exploit our most basic human instinct – trust.

Through chilling real-world examples, we reveal how these attacks unfold. You'll hear about Jennifer DeStefano, the mother who received a terrifying call from her "kidnapped" daughter begging for help, only to discover it was an AI voice clone. We examine the sophisticated $25 million corporate heist where an entire video meeting – participants and all – was completely fabricated. These aren't futuristic scenarios; they're happening right now.

The scope of these deceptions extends far beyond financial damage. They create what experts call the "liar's dividend" – a world where real evidence can be dismissed as fake while fabrications gain credibility. This erosion of trust threatens relationships, businesses, and even our grasp on shared reality.

But knowledge is power. We provide practical, actionable strategies to protect yourself and your organization: verification techniques, technological safeguards, and the critical importance of slowing down when faced with urgent requests. As we navigate this new landscape, remember that awareness and healthy skepticism aren't cynicism – they're your best defense in a world where seeing is no longer believing.

Want to join a community of AI learners and enthusiasts? AI Ready RVA is leading the conversation and is rapidly rising as a hub for AI in the Richmond Region. Become a member and support our AI literacy initiatives.

Speaker 1:

Welcome back to Inspire AI, where we explore how artificial intelligence is transforming our world, for better and sometimes for worse. I'm your host, Jason McGinty, and today we're tackling a serious but crucial topic: how AI is powering a new era of scams. Imagine getting a late-night call from a loved one, their voice trembling with fear, only to learn it wasn't them but an AI-generated clone crafted by scammers. Welcome to the new reality of fraud, where AI gives criminals shockingly convincing tools to deceive. In this episode, we'll reveal how AI scams have evolved far beyond old-school phishing emails. You'll hear stats showing the explosive rise of these crimes, expert insights into why they're so effective, and chilling case studies, from cloned voices demanding ransom to deepfake videos authorizing million-dollar transfers. But don't worry, we'll also share some practical tips to help you protect yourself and your organization. As we dive in, just remember: in the age of AI, even familiar voices deserve a second check. Here's a case study from 2023.

Speaker 1:

Arizona mom Jennifer DeStefano got a call that made her heart stop: her 15-year-old daughter's voice sobbing for help, claiming she'd been kidnapped. A man then demanded a $1 million ransom, threatening horrific harm if police were contacted. For several agonizing minutes, Jennifer believed it was real, until she reached her daughter, who was safe on a ski trip. It was all an AI-generated voice clone built from a short recording of her daughter. Jennifer later learned some software can create a realistic voice with just three seconds of audio. A McAfee survey found 70% of people can't reliably tell a fake voice from a real one, and law enforcement warns that similar scams have targeted grandparents with urgent calls from grandkids needing bail money. As AI expert Hany Farid explains, the bad guy can fail 99% of the time and still get rich, because just a few successes pay off. Even tech-savvy victims can be tricked when they hear a loved one's voice in crisis.

Speaker 1:

In early 2024, a finance employee at a multinational firm's Hong Kong office joined what looked like a normal video call with several colleagues, including the CFO, who urgently instructed a $25 million transfer for an acquisition. But the entire meeting was a deepfake: the CFO and all the participants were AI-generated videos. By the time the company realized the deception, the money was gone, in one of the most sophisticated scams reported to date. Other incidents show how common this tactic is becoming. In the UK, scammers used a deepfake of WPP CEO Mark Read to set up a fake video meeting and tried to steal funds. The plot failed only because an employee grew suspicious. Even insiders have used AI fakes. In 2021, an Ozy Media executive impersonated a YouTube rep with an AI-generated voice to help secure a $40 million investment, a scam that was eventually exposed. These real cases show how scammers exploit trust in voices and videos. They remind us that AI scams are happening now, to families and businesses alike.

Speaker 1:

AI gives scammers powerful tools for a variety of frauds. Here are a few that are making headlines. You already heard about voice cloning, where criminals clone the voices of loved ones or executives to demand money urgently. This includes fake hostage and grandparent scams, where victims hear a realistic voice begging for help. And of course, we have the video deepfakes, where AI-generated videos place someone's face in a clip saying or doing things they never did. Scammers use deepfake videos to impersonate CEOs or celebrities, and these fake videos can also authorize fraudulent wire transfers, exploiting our trust in video evidence.

Speaker 1:

Here's one you might not have come across as much: AI-enhanced phishing and chatbots. AI writes flawless phishing emails or chats that read like legitimate business messages, easily avoiding the old grammatical red flags. Reports show a more-than-1,200% surge in malicious phishing emails since generative AI tools went mainstream. AI chatbots can also convincingly pose as customer service agents or romantic interests, adapting responses in real time to keep victims hooked. Speaking of romantic interests, here are some scams from AI "lovers." Romance scams cost Americans $1.3 billion in 2022, and they now leverage AI-generated profiles, deepfake photos or chatbots to build fake relationships.

Speaker 1:

Some scammers use deepfake videos on calls to prove they're real, making it easier to ask for emergency funds, as I alluded to a minute ago. There's also fake customer service and tech support. This is where AI-powered imposter hotlines or website chatbots mimic real company reps, tricking victims into giving up sensitive info. For instance, an AI voice answering a fake support line can capture customers' numbers directly, so always confirm you're on official websites or calling official numbers. And then, finally, there's AI-generated misinformation. Beyond direct theft, AI-created fake documents, videos or announcements can manipulate stock prices, ruin reputations or spread political disinformation. A deepfake of a public figure can cause chaos, and criminals can use AI-made fake IDs or bank statements for fraud or money laundering. Each of these scams attacks our ability to trust what we see, hear or read, showing how AI is transforming fraud into something faster, more scalable and alarmingly believable. Knowing these tactics is the first step in spotting something off and staying ahead.

Speaker 1:

AI scams don't just harm individual victims. They threaten trust across society. We've long relied on voices and videos to confirm reality, but with AI-generated fakes, even familiar calls and videos can be deceptive. This fuels what experts call the liar's dividend: real evidence can be dismissed as fake, and fakes can be mistaken for truth. Just take a look at X. Imagine a politician caught on video claiming "that's a deepfake," or doubting your boss's urgent call because you can't tell if it's real. Even when victims don't lose money, the trauma can leave lasting scars and make people more guarded or paranoid, straining personal and professional relationships. For businesses, AI scams bring serious reputational and financial risks. A deepfake of a CEO making outrageous statements could tank a company's stock overnight. Even attempted scams can force costly investigations. Worse, success stories inspire copycats, and law enforcement warns that AI tools let scammers scale globally, overwhelming their resources.

Speaker 1:

Beyond fraud, AI fakes can spread disinformation and defamation, undermining public trust in information itself. Some call this the "infocalypse," an information apocalypse where we can't tell real from fake. This confusion helps scammers thrive. The good news? Awareness is growing, and researchers are developing detection tools and authentication systems, but technology and legislation are still catching up. As regulators push for stronger penalties and anti-scam tech, we all need to stay alert because, for now, the bad actors have a head start. All right.

Speaker 1:

So how can we stay ahead of AI-powered fraud? First, verify. Verify through multiple channels. Don't just trust urgent requests from a single call, email or chat. Always confirm unexpected money or info requests by calling back on a known number.

Speaker 1:

Scammers rely on panic and secrecy. Break that by double-checking. You can also set up code words with your family or your workplace that only they know. A quick code check can expose imposters and prevent rushed mistakes. And be wary of unusual payments. If someone demands wire transfers, crypto or gift cards under pressure, it's almost certainly a scam. Legitimate businesses don't ask for secret, untraceable payments.

Speaker 1:

Slow down and think. Scammers want you to act before thinking, so take a moment to ask: does this make sense? Odd requests late at night or out of character should raise red flags. Protect your data and your voice. Limit what you share online. Public voice recordings? Uh-oh, that's mine, I'm in trouble. Oversharing personal details can feed scammers' AI tools.

Speaker 1:

Adjust privacy settings and avoid giving voice samples to unknown callers. Educate others: share what you learn with family and coworkers. Training sessions or conversations about AI scams can prepare others and reduce risks. And use the tech defenses that are there. Turn on spam call filters, keep software updated and use multi-factor authentication for transactions. These extra layers make it much harder for scammers to succeed.

Speaker 1:

Remember: awareness, caution and verification are your best defenses in the age of AI scams. Last piece of advice here: trust your instincts. If something feels off, whether it's the timing, the tone or the request, don't ignore that feeling. It's better to take an extra minute to verify than to rush into a costly mistake. Law enforcement agencies encourage people to report attempted scams, even if you didn't fall for them. This helps them track trends and warn others. So stay alert and remember.

Speaker 1:

In an age of AI magic, a healthy dose of skepticism is not cynicism, it's just savvy. All right, today we uncovered how AI has transformed fraud into something faster, more convincing and harder to detect. From cloned voices to deepfake meetings, these scams exploit our trust in what we see and hear. But knowing the signs gives you power. Remember: stay alert, question the unexpected, and verify before you act. And share what you've learned with others so they don't fall victim; informed communities are the strongest defense. One last thought: as the old saying goes, on the internet, nobody knows you're a dog. In 2025, nobody knows if you're a deepfake. So treat every unexpected call and message with healthy skepticism. Awareness is your superpower. Thanks for listening to Inspire AI. Until next time, stay curious, stay cautious and stay inspired.
