CyberSmart(er): Outsmarting Artificial Intelligence Scams

If you answered the phone and heard the voice of a loved one in despair, what would you do? Panic? Barrage them with questions? Offer help? Unless you thought to ask yourself, “Is this really them?” you could be falling victim to an imposter scam—powered by AI.

“AI,” or artificial intelligence, has advanced to the point where AI-powered machines can now drive cars, detect cancer and even generate proficient-sounding prose, as the popular generative AI tool ChatGPT does. However, recent reports indicate that AI is also increasingly being used for criminal purposes.

A report from The Washington Post found that imposter scams were the second most commonly reported type of fraud in America in 2022, with phone-based imposter scams alone accounting for more than $11 million in losses, according to Federal Trade Commission officials. Much of this criminal success can be attributed to the topic of this installment of the CyberSmart(er) series: AI scams.

Stealing Your Passwords: Old Scams Get Supercharged

Scammers are using AI technology to inject new life into old scams.

Phishing attacks, for example—those emails and text messages that trick you into clicking fraudulent links or falling for other cons—can now draw on AI to pull vast amounts of information from multiple sources, making their fraudulent claims appear more accurate, legitimate and personalized. According to cybersecurity experts, this helps explain why so many cybercriminals are now focusing their energy specifically on spear-phishing. These attacks do not come from an eager sender baiting you to click a link; instead, they contain believable language tailored to you directly, designed to get you to reveal passwords or other sensitive information that can be used to access your finances, hijack your company’s computer network or inflict other damage. Spear-phishing was once labor-intensive, but AI now makes it easy to scour the internet for data about you, including from social media, and to write a polished, error-free message aimed specifically at you.

AI can even be used to post fake product reviews that include malicious links or attachments convincingly masquerading as coupons or photos. Simply put, AI is allowing scammers to increase the means and likelihood of phishing success.

Hacking is another area where AI is transforming cybercrime. AI lends speed and accuracy to even the most inexperienced hackers, helping them bypass trusted security tools and strong passwords. Cybercriminals are using AI to create code, software and links that can quickly infect other computers with ransomware, spyware, keystroke loggers, viruses and other malware. Some AI tools claim the ability to crack 51% of common passwords in under a minute, making complex passwords and two-factor authentication more critical than ever. Taking these extra precautions could help protect you from the kind of breach Twitter suffered when attackers phished its staff in order to hack celebrity accounts. The hijacked accounts were then used to promote a Bitcoin investment scheme linked to the hackers themselves, netting them more than $100,000.

Stealing Your Identity: Newer Scams Get Personal

Supercharged phishing and hacking are not the only scams to be wary of, as AI technology is also being used in more inventive ways.

In imposter scams, for example, AI has been used to turn a short audio clip of a person’s voice into an accurate duplicate of that voice that can be manipulated to say anything. These AI-generated voice clones are then used in fake ransom and emergency calls to extort money or create chaos. The technology has already evolved to the point where AI-generated voices no longer sound reliably robotic as they did in the past. AI can now mimic tone, inflection and even empathy, so that many victims genuinely believe they are talking to someone they know on the other end of the line.

And it doesn’t stop with audio. AI-based video technology can use photographs or brief footage of a person to create video “deepfakes” that can be made to say or do anything onscreen. In romance scams, deepfakes have been used over video calls—even in real time. Elsewhere, deepfakes have been used to spread misinformation and damage reputations.

What You Can Do

Scammers are perfecting how to use AI-generated content to separate victims from their money and even alter their perceptions of reality. And as the use and capabilities of AI expand, so will the number of victims. Take the CyberSmart(er): Beat the AI quiz here to test your savvy and review the practices and mindsets that can help you avoid becoming one of them.

Additional Cybersecurity Tips:

  1. Create code words and use them to verify friends and family whenever they request money.
  2. Create unique and complex passwords and store them in safe places.
  3. Be skeptical of any unsolicited calls that include an urgent request for sensitive information or money.
  4. Listen for inconsistencies in the voice of anyone requesting money or sensitive information over the phone, and ask questions that only the person the caller claims to be would know the answer to.
  5. Avoid posting audio or video recordings of yourself publicly.