Fake support chats, spam support calls, and caller ID spoofing: the online world has long been full of social engineering schemes. Now attackers are getting even more advanced, combining multiple attack vectors and using ChatGPT alongside traditional social engineering methods to gain access to your information. ChatGPT gives social engineers a boost, generating convincing backstories for fake identities and making their attacks seem more legitimate, polished, and genuine. In this post, we dive into the state of ChatGPT-driven social engineering and how you can avoid falling prey to it.
To baseline: social engineering is a form of hacking in which the attacker exploits human trust and relationships, rather than technical vulnerabilities, to gain access to another person's systems or information.
What does this look like? Social engineering can take many forms and has many nuances, which is why it can be so deceptive. The hacker might use digital communication channels, including social media and email, to gain unauthorized access.
Another example is phishing, in which the perpetrator tries to obtain personal information by masquerading as someone with whom you have an existing relationship (such as your bank). Phishing is particularly hard to spot because the perpetrator often uses official-looking email addresses and logos, or sends you a link that appears to be from your bank but actually takes you to a site where they can gather information about you.
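One common phishing trick is a link whose visible text shows a trusted URL while the underlying `href` points somewhere else entirely. As a minimal, defensive sketch of how to spot this mismatch in an HTML email body, here is a small checker using only the Python standard library (the `LinkChecker` class and the `evil.example` domain are illustrative, not part of any real campaign):

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkChecker(HTMLParser):
    """Flags <a> tags whose visible text looks like a URL
    but points at a different domain than the real href."""
    def __init__(self):
        super().__init__()
        self.href = None
        self.text = ""
        self.mismatches = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.href = dict(attrs).get("href", "")
            self.text = ""

    def handle_data(self, data):
        if self.href is not None:
            self.text += data

    def handle_endtag(self, tag):
        if tag == "a" and self.href is not None:
            shown = self.text.strip()
            # Only compare when the visible text itself looks like a URL
            if shown.startswith(("http://", "https://")):
                if urlparse(shown).hostname != urlparse(self.href).hostname:
                    self.mismatches.append((shown, self.href))
            self.href = None

checker = LinkChecker()
checker.feed('<a href="http://evil.example/login">https://www.yourbank.com</a>')
print(checker.mismatches)
# [('https://www.yourbank.com', 'http://evil.example/login')]
```

Real mail clients do a more thorough version of this check, but the principle is the same: what the link says and where it goes are two separate things, and only the latter matters.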
Social engineering scammers are savvy
If you think falling for a social engineering scam could never happen to you or your intelligent employees, think again. Social engineering scammers are savvy, and cybercriminals are getting more sophisticated every day. The latest tool in fraudsters' belts is ChatGPT.
What is ChatGPT?
ChatGPT is a chatbot developed by OpenAI, based on the GPT (Generative Pretrained Transformer) language model. It uses deep learning techniques to answer questions in a human-like way.
That said, anything that can create realistic, conversational responses is powerful, and dangerous, in the wrong hands.
While ChatGPT is an exciting new technology, social engineers have been co-opting it for nefarious purposes. Fraudsters have been using ChatGPT in the following ways:
Creating fake customer service chatbots. These trick people into typing their credit card information and Social Security numbers into the chatbot, which fraudsters then steal and use for fraudulent purchases.
Creating fake online banking chatbots. These trick people into thinking they are logging into their bank's website when in fact they are talking to a bot customized by fraudsters.
Creating realistic-looking emails designed to trick the recipient into believing they come from a legitimate source. These emails are often used in phishing campaigns or as part of a broader social engineering attack.
Creating polished, professional-looking web landing pages that convince visitors they are on an authentic company website. These can serve as a component of an information security breach or data theft campaign.
Prevent ChatGPT-driven social engineering
How can you protect yourself against ChatGPT-driven social engineering schemes? Vigilance. The best way to protect yourself from any social engineering scam is to stay aware, observant, and continually educated on current social engineering trends. Know that even if you're tech-confident, social engineering hackers can find ways to outsmart even the most savvy.
If you are a business owner, put the right security measures in place. Train your employees to spot social engineering attempts and warn them about the tactics hackers may use. Continually share information on the latest social engineering trends and phishing schemes. Don't let ChatGPT's artificial intelligence win out over the human intelligence of your employees and your organization.