5 Worrying Ways AI Could Shape The Future of Cybercrime

15th June 2023

We’ve all seen the scary talk doing the rounds about AI.

Yet we believe that forewarned is forearmed when it comes to cybersecurity. That’s why, instead of adding to the usual media wailing and gnashing of teeth around artificial intelligence, we’re going to explore some real attacks that have taken place using AI, before diving into 5 ways that we foresee AI playing a big part in cybercrime.

But don’t worry – we’ll share our advice about how to fight back too. Let’s get started.


Real-Life Examples of AI-Powered Cybercrime

With all of the media hype around AI, you’d think that AI-driven cybercrime was a recent, cutting-edge concern. However, the age of these examples may surprise you:

A Convincing Cyber-Mimic

“Deepfake” technology is a branch of AI that can be used to create a convincing digital mimic of another person’s likeness over video or audio. We’ll explore more of the implications of deepfake technology later in this article.

In 2019, the CEO of an unnamed British energy firm received a call from his boss – the chief executive of the firm’s German parent company. During the call, the boss requested an immediate payment to a supplier, which the CEO duly made. Except the voice on the other end of the line wasn’t the CEO’s boss at all. The boss was being convincingly impersonated using deepfake audio, and the attack defrauded the company out of €220,000.

The CEO reported that the mimicry was shockingly convincing – the boss’s “slight German accent and the melody of his voice on the phone” were a highly believable dupe of the real thing.

Piracy Tutorial Videos with a Sting in their Tail

The lure of software piracy is still strong, and criminals commonly use it as a powerful social engineering attack vector – except now they have a spot of AI at their disposal.

The “legacy” version of the crime goes as follows. Threat actors create a screen-recorded video showing users how to download a cracked version of a desirable piece of software like Adobe Photoshop, with a link to the supposed pirated software in the video’s description. However, far from being the software as promised, the link in the description actually leads to malware.

Yet since late 2022, there has reportedly been a massive influx of YouTube videos running this scam with a chilling generative AI component. Instead of the corny old screen recordings, the latest trend is to create videos starring an AI-generated “presenter” – often specifically designed to look trustworthy – who walks you through the process and adds credibility to the video’s claims.

This modus operandi is currently being used to spread “stealer” malware which can compromise things like passwords, cookies, crypto wallet details, sensitive files, and more.

Personal Data Goes Down the Rabbit Hole

In April 2018, gig-economy site TaskRabbit was reportedly hit by a DDoS attack carried out by an AI-powered botnet (a network of infected “zombie” computers).

According to reports, the attack allowed hackers to compromise highly sensitive financial information belonging to around 3.75 million users. On top of that, the entire site had to be disabled until security was restored, affecting all of its users – estimated to be around 141 million individuals.


5 Ways AI Could Shake Cyber Security to its Core: Our Predictions

It would be practically impossible to list all of the possibilities that AI puts at cybercriminals’ fingertips, but at the time of writing, we can identify 5 themes to keep an eye out for. Sometimes we will refer to real cases, though some of what follows is a tad hypothetical for now. One thing we can tell you, though: it will be fascinating to see what the future holds.

Deepfakes

Deepfake technology uses AI to create what can often be a strikingly believable mimic of a real person’s voice and/or likeness. One particularly impressive (or scary) example is the well-known deepfake video impersonating actor Morgan Freeman.

Understandably, this technology can be used for alarming levels of social engineering. In an audio-only attack like the energy firm case above, the AI may only need to “listen to” 10-20 seconds of a person’s voice in order to believably duplicate it. The criminals can then effectively put their own words into that real person’s mouth.

If you have a team member or leader who is particularly present on video sharing platforms or podcasts, those recordings alone will give an AI more than enough material to impersonate that individual.

Some of our more eagle-eyed readers may have noticed something off with the Morgan Freeman example above. In terms of believability, video deepfakes are a little harder to perfect, yet creating a somewhat believable replica of a person is far from impossible at the time of writing. A deepfake video of Google CEO Sundar Pichai has already been part of an attack that hijacks YouTube channels. Deepfake videos of media personalities have been taking over TikTok – some are humorous, some are more insidious.

How to Stay Safe from Deepfake Attacks

When you receive an email, a voicemail, or a video message from someone making an unusual request, always verify that request with the person over another channel of communication.

Had a voicemail from your boss asking you to make a random payment? Grab them on a video call to double-check their request. Received an email from an IT provider asking you to log in using a specific link to receive an update? Log in using the link you normally use and check for relevant notifications or messages. Verify, verify, verify!

Advanced Social Engineering Capabilities

Video and audio impersonation aside, AI could also be used to improve other kinds of social engineering attacks.

If you’ve already played with the likes of ChatGPT, you’ll know that generative AI tools can be used to create quite natural sounding written text. Another branch of AI, Natural Language Processing (NLP), is concerned with getting computers to understand human communication – textually or verbally – in a similar way to how humans understand it.

NLP could therefore hypothetically be used to analyse the communication patterns of a target individual whom criminals wish to mimic, in order to create highly convincing spear phishing emails, messages, or indeed deepfake scripts that convincingly impersonate that individual.
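To make the idea concrete from the defender’s side, here is a minimal, hypothetical sketch of the kind of stylometric check an email security tool could run: it compares a few crude writing-style features of an incoming message against a sender’s historical baseline. The features and the tolerance threshold are purely illustrative assumptions, not a production-grade detector.

```python
import re
from statistics import mean

def style_features(text: str) -> dict:
    """Extract a few crude writing-style features from an email body."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    return {
        "avg_sentence_len": mean(len(s.split()) for s in sentences) if sentences else 0.0,
        "avg_word_len": mean(len(w) for w in words) if words else 0.0,
        "exclamations_per_sentence": text.count("!") / max(len(sentences), 1),
    }

def looks_out_of_character(message: str, known_messages: list[str],
                           tolerance: float = 0.5) -> bool:
    """Flag a message whose style drifts far from the sender's usual emails.

    `tolerance` (50% deviation) is an illustrative guess, not a tuned value.
    """
    baseline = [style_features(m) for m in known_messages]
    incoming = style_features(message)
    for key, value in incoming.items():
        typical = mean(b[key] for b in baseline)
        if typical and abs(value - typical) / typical > tolerance:
            return True
    return False
```

The unsettling flipside is that anything a defender can measure and baseline, an attacker’s NLP model can learn to reproduce.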

How to Stay Safe from Social Engineering Attacks

Cyber awareness training is an essential part of keeping social engineering attacks at bay. The key point here is to train everyone about the risks of phishing, social engineering, and deepfakes – from your entry-level teams to your C-suite execs, and even operational teams that don’t use standard IT that much.

As well as training, Human Risk Management solutions can help you keep tabs on your team’s cyber awareness levels and deliver specific training resources where needed.

Data Gathering, Analysis & Recon at Scale

Cyber reconnaissance is an essential information-gathering step in many cyber attacks and can involve scooping up massive amounts of data. AI tools can gather and analyse that data at a speed that would be practically impossible for even a whole team of human analysts. There are three ways that we can see this benefiting cybercriminals.

The first is that by automating their illicit information gathering, tasking an AI with analysing that data, and using the analysis to identify potential approaches or weaknesses in a victim’s defences, cybercriminals would hypothetically be able to deploy highly targeted attacks far more quickly and efficiently.

Our second point follows on from the natural language processing capabilities above. Given that AI tools can analyse massive amounts of data at breakneck speed, a criminal could task their AI tool with analysing large volumes of communication data (email comms from a data breach, for example). The criminals could use AI to identify possible targets, uncover potential approaches, and home in on communication patterns that may improve a phishing attack’s chances of success.

And thirdly, artificial intelligence could be used to analyse target systems and networks in order to observe security measures, adapt attack vectors, and analyse a victim’s normal network traffic patterns. This could help the criminals to spread malware without detection or exfiltrate data in a way that would ring the fewest alarm bells.

How to Stay Safe from Cyber Recon and Analysis

The best way to stay safe from snooping is to “harden” your infrastructure in such a way that any would-be snoopers are turned away at the door – whether they’re powered by AI or not.

In our article about cyber recon, we touch on hardening a network by closing unused ports, encrypting data, using gateway antivirus tools, and investing in penetration testing. To harden individual devices, endpoint protection and managed detection and response (MDR) solutions come highly recommended. Strengthening password/login policies and implementing cyber training across the board are also useful here.
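As a small, hedged illustration of the “closing unused ports” step, the Python sketch below audits which common TCP ports are accepting connections on a host – anything open that you don’t recognise is a candidate for closing. The port list and timeout are illustrative assumptions, and you should only ever run this against infrastructure you are authorised to test.

```python
import socket

# Common TCP ports worth auditing; extend the list to suit your own estate.
PORTS_TO_CHECK = [21, 22, 23, 25, 80, 110, 143, 443, 3306, 3389, 8080]

def audit_open_ports(host: str, ports: list[int], timeout: float = 0.5) -> list[int]:
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # Only scan hosts you own or are explicitly authorised to test.
    for port in audit_open_ports("127.0.0.1", PORTS_TO_CHECK):
        print(f"Port {port} is open - close it if it isn't needed.")
```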

Increased Criminal Efficiency

AI isn’t some world-ending threat to our existence. Neither is it the harbinger of a utopian paradise. It is merely a tool to help us get more done. Sadly, this convenience extends to cybercriminals too.

Rather than personally spending time and energy on creating text for a scam email, observing a victim’s communication patterns, cracking a password, or coding new malware strains, criminals can now delegate these tasks to a robot.

Alas, this efficiency only stands to further line the pockets of cybercriminals by increasing their profit margins. It benefits both the lone wolf “hacker in their mum’s spare room” criminals and the highly sophisticated cybercrime-as-a-service organisations – potentially bolstering their efforts and keeping them going for longer.

However, we also predict that this criminal-racket-on-autopilot approach is bound to blow up in some attackers’ faces, especially the lazier and less knowledgeable ones. If even experienced hackers can slip up on their operational security (OpSec), then an inexperienced, would-be hacker who puts all of their trust in AI tools is likely to drop their own OpSec ball somewhere too!

How to Stay Safe from Increasingly Efficient Criminals

Vigilance is essential here. There’s the possibility that all online entities may be barraged with attempted attacks at an unprecedented scale, so good cyber tools, training, and habit-forming will potentially be more important than ever.

However, when researching for this article, we were surprised at how few AI-related attacks actually tickled the headlines – not to mention how old two of our above examples are. Does this mean that AI-powered attacks are flying under the radar? Or are criminals sticking with tried and tested attack methods for now? It certainly has us pondering.

AI-Powered Malware & Adaptive Threats

As far back as 2018, IBM created a proof-of-concept piece of malware called DeepLocker, which was described as “a new breed of highly targeted and evasive attack tools powered by AI”. It was designed to harm one specific human target, with a heavy emphasis on stealth.

In a demonstration, it lay dormant in a video conferencing app until its facial recognition AI identified the specific “victim’s” face, whereupon it deployed the WannaCry ransomware hidden within. It used a highly complex neural network, making it “almost impossible to reverse engineer”. So not only can AI make criminals’ jobs easier, but it could also make malware so complex – and so highly targeted – that it defies the security community’s efforts to fix it.

Actively evasive malware has been around for a good many years now. Yet we foresee the possibility that bad actors may develop ways to make malware more independent – perhaps even with the ability to analyse the systems it’s attacking and adapt its approach for best results, with no input from a human.

Hackers may even develop malware with machine learning algorithms so it can change its own code on the fly for minimum detection and maximum impact. But what would happen if/when malware is able to change itself in this way? Would we be able to contain it? Or would it start to evolve and branch off just like real viruses do? Time will tell – if “smart” malware even gets that far.

How to Stay Safe from AI-Powered Threats

This is the most hypothetical threat on this list, so it’s tough to say how organisations and individuals will defend themselves from these risks with the information we have at the moment. However, when it comes to AI, there is such a thing as fighting fire with fire.

Thankfully, just as criminals’ efforts will be bolstered by AI tools, so will the security community’s. Whether it’s harnessing AI to detect anomalies in real time, to automate speedy incident response, or simply to reduce alert fatigue, we foresee the cyber good guys making lots of positive strides in the coming years.
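To give a flavour of what “harnessing AI to detect anomalies” can look like under the hood, here is a minimal sketch using scikit-learn’s IsolationForest to flag unusual network flows. The toy features, training data, and contamination rate are all placeholder assumptions – real products train on vastly richer telemetry.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy training data: one row per network flow, with illustrative features
# [bytes_sent, bytes_received, duration_seconds, distinct_ports_contacted].
normal_flows = np.array([
    [1_200,  8_000, 2.1, 1],
    [900,    5_500, 1.8, 1],
    [1_500, 10_200, 3.0, 2],
    [1_100,  7_400, 2.4, 1],
    [1_300,  9_100, 2.7, 2],
])

# Train on traffic assumed to be normal; `contamination` is a guess at how
# much future traffic will be anomalous, not a carefully tuned value.
model = IsolationForest(contamination=0.1, random_state=42)
model.fit(normal_flows)

# A flow that sends a lot, receives little, and touches many ports - the
# rough shape of a data exfiltration attempt.
suspicious_flow = np.array([[250_000, 400, 45.0, 30]])
if model.predict(suspicious_flow)[0] == -1:  # -1 means "anomaly"
    print("Flow flagged as anomalous - worth an analyst's attention.")
```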

But this isn’t just a far-off hope for the future. Companies like Check Point, Darktrace, SonicWall, Sophos, and Tessian are already harnessing AI to bring the fight to the cyber bad guys.


So if you are concerned about the AI-powered future of cybercrime or you’re looking for tools to strengthen your security posture in light of the possible cybersecurity “rise of the machines”, the Just Cyber Security team is here to help. Book a call with one of our guaranteed human technicians today!