Artificial Intelligence Leads to Drastic Increase in Phishing Scams

A group of three cybersecurity researchers says that Artificial Intelligence (AI) will make online email scams even harder to detect and easier to fall for.

Fredrik Heiding, Bruce Schneier, and Arun Vishwanath contend that large language models (LLMs), a type of AI that “learns” from human speech and writing, will make exploiting email and Internet users even easier and cheaper.

One form of email scam most are familiar with is called “phishing.” This is when a malevolent actor sends an email that appears to come from a person or organization that the recipient knows. These emails may claim that the recipient owes an outstanding bill for a product or service, or they may appeal to the recipient’s emotions by claiming to be an urgent request from a loved one.

The researchers say these scams are going to get more sophisticated because, “worryingly,” LLM-based AI programs will be able to automate the entire phishing process. Worse, they say the cost of running the scams will drop “by more than 95 percent.”

The authors outline five steps in the phishing process: scammers first “collect targets,” then gather information about the people they intend to exploit, create plausible emails, and finally test and refine how well those scam emails work. Because AI now so closely mimics human speech and writing, scammers can use popular AI programs such as ChatGPT to “automate each phase.”

The ChatGPT service is now almost entirely free for anyone to use, for better or worse.

It’s almost impossible to open a news site without seeing alerts about citizens, companies, and governments being extorted or otherwise taken advantage of by such scams. For example, the state of Connecticut is warning residents of a realistic phishing scam used to steal from people who think they’re doing business with the state. The Department of Consumer Protection says the thieves are going after any resident who holds any kind of permit, license, or official credential with the state. This could include anything from a driver’s license to a hunting license or fishing permit.

The fake emails are so convincing that the state has published images of them to show people how to distinguish legitimate communications from phonies.

The trio of researchers behind the report recommend that businesses and organizations get ahead of the risk. Entities need to understand the capabilities of “AI-enhanced phishing,” assess how vulnerable they are, and develop “awareness routines” around phishing.

What exactly ordinary people are supposed to do, however, seems unclear even to those who study these electronic threats.

Copyright 2024