New technology inevitably means new methods for cybercriminals to operate. Cybercrime is booming, with 3 in 4 security professionals saying their organisation’s cyber risk has increased due to geopolitics, AI and remote work.
Here are a few of the ways AI is being leveraged to launch cyber attacks:
Password cracking
Password cracking involves using programs to try large numbers of candidate passwords, such as common passwords or dictionary words, until the correct one is found. This approach is known as a brute force attack.
Recent research highlights how easily AI can crack commonly used passwords. One report used an AI-powered tool called PassGAN to test over 15 million frequently used passwords, finding that 51% could be cracked within a minute, 65% within an hour, 71% within a day, and 81% within a month.
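To make the brute force idea concrete, here is a minimal illustrative sketch in Python of a dictionary attack against a single hashed password. The target hash, the tiny wordlist and the digit-suffix rule are all made up for demonstration; tools like PassGAN go further by using a neural network to generate likely password candidates rather than working from a fixed wordlist.

```python
import hashlib

# Illustrative target: the SHA-256 hash of a weak password (made up for this example).
target_hash = hashlib.sha256(b"dragon1").hexdigest()

# A tiny stand-in wordlist; real attacks use leaked lists with millions of entries.
wordlist = ["password", "123456", "qwerty", "dragon", "letmein"]

def dictionary_attack(target, words):
    """Try each word, plus simple digit suffixes, against the target hash."""
    for word in words:
        for suffix in [""] + [str(d) for d in range(10)]:
            candidate = word + suffix
            if hashlib.sha256(candidate.encode()).hexdigest() == target:
                return candidate
    return None

match = dictionary_attack(target_hash, wordlist)
print(f"Cracked: {match}" if match else "No match in wordlist")
```

Each guess is cheap to test, which is why short or common passwords fall so quickly once an attacker can prioritise the most likely candidates.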
AI-generated phishing emails
Cybercriminals now have an easy way to create content for their phishing emails. Just as content creators utilise ChatGPT to write articles and captions, cybercriminals can use the same tools to simplify their work.
Phishing, a social engineering attack, aims to deceive victims into revealing sensitive information. Traditionally, phishing emails were easy to identify due to frequent grammatical errors and spelling mistakes. However, AI now enables cybercriminals to craft convincing phishing content by mimicking the tone, language, and style of legitimate emails.
By leveraging AI, cybercriminals can personalise emails based on internet data or provided information, making the scams more believable and challenging to detect.
Impersonation
AI impersonation has become increasingly common in vishing (voice phishing) scams. Using voice synthesis techniques, AI can mimic a person’s voice from audio and video recordings, making it difficult for targets to verify the authenticity of the caller.
Deepfakes
We’ve all seen that Black Mirror episode. Deepfakes, created using AI, manipulate images or videos to depict individuals falsely. As technology advances, deepfakes have become increasingly difficult to detect, even by law enforcement.
These malicious alterations are often used to spread false information, posing significant challenges in discerning authentic media from manipulated content.