AI Scams: How to Stay Safe

AI is here to help, whether you’re writing an email, creating concept art, or tricking vulnerable people into believing you’re a friend or relative in distress. AI is so versatile! But since some people would rather not be scammed, let’s discuss what to look out for.

The last several years have seen a significant jump in the quality of generated media, from text to audio to photos and video, along with how cheaply and easily that media can be produced. The same technology that lets a concept artist create fanciful creatures or spacecraft, or helps a non-native speaker polish their business English, can just as readily be put to malicious use.

Don’t expect the Terminator to knock on your door and pitch you a Ponzi scheme; these are the same old scams that have been around for years, except now they use generative AI to make them more convincing, easier to run, or cheaper.

These are only a handful of the most obvious ways that AI can supercharge scams; by no means is this an exhaustive list. As new ones surface in the wild, we’ll add them here, along with any further precautions you can take.

Cloning the Voices of Relatives and Friends

Synthetic voices have been around for decades, but only in the last year or two have advances made it possible to clone a new voice from as little as a few seconds of audio. That means anyone whose voice has ever been broadcast publicly, such as in a news report, a YouTube video, or on social media, is at risk of having it cloned.

Scammers can and do use this technology to produce convincing fake versions of loved ones or friends. These can be made to say anything, but in a scam, they will most likely be used to record a voice clip asking for help.

For example, a parent might receive a voicemail from an unknown number that sounds like their child, saying their belongings were stolen while traveling, that a stranger lent them a phone, and could Mom or Dad please send money to this address, Venmo account, business, and so on. Variations include car trouble (“they won’t release my car until someone pays them”), medical issues (“this treatment isn’t covered by insurance”), and more.

This kind of scam has already been attempted using President Biden’s voice. The perpetrators were caught, but future fraudsters will be more careful.

How Can You Fight Voice Cloning?

First, don’t bother trying to identify a fake voice by ear. The clones are getting better every day, and there are plenty of ways to mask any quality flaws. Even experts are fooled.

Treat anything that comes from an unfamiliar phone number, email address, or account with suspicion. If someone claims to be a friend or loved one, reach out to that person the way you normally would. They’ll probably tell you they’re fine and that the message was, as you suspected, a scam.

Scammers are unlikely to follow up if they’re ignored, whereas a family member probably will. It’s fine to leave a suspicious message on read while you think it over.

Personalized Phishing and Spam Using Email and Messaging

We all get spam now and then, but text-generating AI lets scammers send mass emails tailored to each individual. And with data breaches occurring regularly, a lot of your personal data is already out there.

So what used to be “Dear Customer, please find your invoice attached” becomes “Hi Doris! I’m with Etsy’s promotions team. An item you were looking at recently is now 50% off! And shipping to your Bellingham address is free if you use this link to claim the discount.” Even that is a simple example. With a real name, shopping habits (easy to find out), general location (ditto), and so on, the message becomes far less obviously spam.

At the end of the day, this is still just spam. But this kind of customized spam used to be done by low-wage workers at content farms overseas. Now it can be done at scale by an LLM with better writing skills than many professionals.

How Can You Fight Email Spam?

As with traditional spam, vigilance is your most effective weapon. But don’t expect to reliably tell generated text from human-written text in the wild. Few people can, and certainly no AI model can.

As much as the wording has improved, this kind of fraud still hinges on the same core trick: getting you to open suspicious attachments or links. As always, unless you are absolutely sure of the sender’s authenticity and identity, don’t click or open anything. If you are even slightly unsure (and this is a good habit to cultivate), don’t click, and if you have someone knowledgeable to forward it to for a second opinion, do that.

Fraudulent Identification and Verification

Thanks to the steady stream of data breaches in recent years (thanks, Equifax), it’s safe to assume that nearly everyone has some personal data floating around on the dark web. If you follow good online security practices, such as changing your passwords and using multi-factor authentication, you can reduce the risk. But generative AI poses a new and serious danger in this area.

With so much information about someone available online, including a sample or two of their voice, it’s becoming increasingly easier to develop an AI persona that sounds like the target person and has access to many of the details required to authenticate identity.

Consider it this way: if you’re having trouble logging in, can’t configure your authenticator app, or have lost your phone, what do you do? You call customer service, and they “verify” your identity using trivial details like your date of birth, phone number, or Social Security number. Even more sophisticated methods, like “take a selfie,” are becoming easier to game.

The customer service agent (who may themselves be an AI) may well comply with this fake you and grant it all the privileges you would have if you had actually called in. What scammers can do from that position varies widely, but none of it is good.

As with the others on this list, the danger is not so much how realistic the fake you is, but how cheap and easy it has become for scammers to mount this kind of attack broadly and repeatedly. Not long ago, this type of impersonation attack was expensive and time-consuming, so it was limited to high-value targets like wealthy individuals and CEOs. Nowadays, you could build a workflow that spins up thousands of impersonation agents with minimal oversight, and those agents could call the customer service numbers for all of a person’s known accounts, or even open new ones. Only a handful need to succeed to justify the cost of the attack.

How Can You Combat Identity Fraud?

“Cybersecurity 101” is still your best bet, just as it was before AIs joined the scammers’ ranks. Your data is already out there; you can’t put the toothpaste back in the tube. But you can make sure your accounts are adequately protected against the most obvious attacks.

Multi-factor authentication is easily the single most important step anyone can take here. Any serious account activity goes straight to your phone, and suspicious logins or attempted password changes appear in your email. Don’t ignore these warnings or mark them as spam, even (especially) if you get a lot of them.
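The codes those authenticator apps produce are typically time-based one-time passwords (TOTP, standardized in RFC 6238): your phone and the service share a secret, and both derive the same short-lived six-digit code from that secret and the current time. A scammer armed only with your leaked birth date or SSN cannot reproduce it. Here is a minimal illustrative sketch in Python, using the RFC’s published test secret rather than any real credential:

```python
# Sketch of the TOTP algorithm (RFC 6238) behind authenticator-app codes.
# The key below is the RFC's published test secret, not a real credential.
import hashlib
import hmac
import struct

def totp(secret: bytes, timestamp: int, digits: int = 6, step: int = 30) -> str:
    """Derive a short-lived numeric code from a shared secret and the clock."""
    # The moving factor is the number of 30-second windows since the epoch.
    counter = struct.pack(">Q", timestamp // step)
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    # Dynamic truncation: pick 4 bytes at an offset given by the last nibble.
    offset = mac[-1] & 0x0F
    number = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(number % 10**digits).zfill(digits)

# RFC 6238 test vector: at Unix time 59, the SHA-1 code ends in 287082.
print(totp(b"12345678901234567890", 59))  # → 287082
```

Because the code changes every 30 seconds and depends on a secret that never leaves your device, it can’t be reconstructed from breached personal data the way a date of birth or Social Security number can.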

Artificial Intelligence-Generated Deep Fakes and Blackmail

Perhaps the scariest form of nascent AI scam is blackmail using deepfake images of you or a loved one. You can thank the fast-moving world of open image models for this futuristic and terrifying prospect. People interested in certain aspects of cutting-edge image generation have developed workflows for creating nude bodies and attaching them to any face they can get a photo of. I won’t go into how it’s already being used.

But one unintended consequence is an extension of the scam commonly called “revenge porn,” more accurately described as nonconsensual sharing of intimate images (though, like “deepfake,” the original term may be hard to replace). When someone’s private images are released, whether through hacking or a vengeful ex, a third party can use them for blackmail, threatening to publish them widely unless a payment is made.

How Can You Combat Deepfakes Created by Artificial Intelligence?

Unfortunately, the world we are building will make it possible to generate fake nude images of almost anyone on demand. It’s creepy, strange, and ugly, but the genie is out of the bottle.

No one is happy with this situation except the bad actors. But there is a little comfort for potential victims. These image models can produce realistic bodies in some respects, but like other generative AI, they only know what they’ve been trained on. So the fake images will lack distinguishing marks, for instance, and are likely to be obviously wrong in other ways.

While the threat is unlikely to go away completely, victims now have more legal options, such as suing image servers to remove images or banning fraudsters from sites where they post. As the problem spreads, so will the legal and private ways of combating it.

Source: TechCrunch
