Welcome to part one of the third edition of the Cyber Compass, our monthly thought leadership series, featuring articles expertly written by our Technical Director, William Poole. In this month’s edition, Will explores how phishing techniques have evolved and how artificial intelligence (AI) is taking these scams to a new level. 

The Evolution of Phishing Attacks 

In the early days, phishing was often a “spray and pray” operation. Attackers would send out masses of impersonal emails – essentially casting a wide net – and hope that even a small percentage of recipients would take the bait. These classic phishing emails usually opened with a generic greeting (“Dear Customer”) and were riddled with red flags, like poor grammar and strange requests. Some might claim you’d won a prize or needed to verify an account, with a link leading to a fake website eager to steal your credentials. The approach was volume-driven: even if 99 out of 100 people ignored the email, that one click could make it worthwhile for the attacker. In short, early phishers relied on quantity over quality, betting that someone would fall for the scam. 

Over time, cyber criminals realised they could improve their odds by personalising their attacks. Phishing evolved from the one-size-fits-all spam blast into more refined tactics, such as spear phishing, where specific individuals or companies are targeted with information tailored just for them. Instead of obvious misspellings and generic pleas, these emails might mention your actual name, your job title, or a seemingly legitimate customer request. In recent years, attackers have taken this a step further by engaging in multiple-turn conversations before delivering a malicious link or payload. This technique is sometimes called barrel phishing or conversational phishing. The attacker doesn’t drop the malicious link in the first message. Instead, they start an innocuous dialogue to build trust, then spring the trap. For example, as CYFOR saw many times in 2024/25, threat actors may email conveyancing firms with legitimate-appearing enquiries, only transitioning to malicious documents or links after a week or more. After a few back-and-forth replies they send something dangerous (like a malware-laced attachment or a link to a fake login page). This low-volume, highly personalised approach means each target gets a lot of attention from the attacker, but the payoff and success rates for the threat actor are often much higher. 

Such multi-email conversational attacks have proven harder for traditional security filters to catch. Security tools that scan one email at a time often miss the context – a single message might not contain anything obviously malicious, or might even seem perfectly routine. It’s the combination of messages (the normal chit-chat followed by an out-of-character request) that signals a phishing attack in progress. For instance, an attacker might email you from what appears to be HR, chat a bit about your recent holiday or a project, and only later ask you to click a link to view a “document”. By then, your guard is down. In essence, phishing has shifted from obvious one-shot attempts to patient cons that feel more like genuine interactions. 

AI and the Future of Phishing

Whilst businesses figure out how to combat these multi-turn phishing attacks, AI is poised to supercharge them even further. Today’s artificial intelligence can combine the wide reach of old-school phishing with the personal touch of spear phishing – effectively giving attackers the best of both worlds. AI doesn’t get tired or overwhelmed by research, and is capable of holding many conversations at once. It can automate the grunt work involved in crafting messages that would normally require lots of time and effort from a human scammer. In practical terms, this means a criminal could let an AI system gather detailed info on thousands of potential targets and then fire off uniquely tailored messages to each of them at scale. What used to be a painstaking manual task can now be done in minutes by a machine. 

Let’s break down the game-changing roles AI can play in phishing: 

  • Automated Reconnaissance: AI tools scour public sources, like LinkedIn and company websites, to compile personal details about targets. For example, an AI might discover that you recently attended a specific conference, got a promotion, or work closely with a certain colleague. This information becomes the bait. Attackers no longer need to spend hours Googling their victims. 
  • Generating Convincing Messages: Modern AI language models, the technology behind chatbots, are remarkably adept at crafting convincing messages. They produce fluent and friendly-sounding emails that mimic the tone and style of coworkers or friends, eliminating awkward phrasing and typos. AI also powers chatbots that engage in realistic, real-time conversations over email or messaging apps. They can even use humour or emojis to fit the organisation’s culture, making conversations feel natural and unscripted. This capability allows phishers to execute a multi-turn “trust-building” strategy automatically, handling entire conversations until the perfect moment to introduce a malicious link. 
  • Adapting to Victim Responses: A big giveaway of a phishing attempt is when the sender ignores your questions or gives odd responses. AI eliminates that problem. If a target gets sceptical or asks for more info, an AI-driven scam can adapt on the fly – adjusting its approach based on the victim’s tone or level of engagement, making it incredibly persuasive and hard to trip up. 
  • Scaling Up Personalisation: Perhaps the most frightening aspect is scale. AI can personalise and manage thousands of phishing conversations simultaneously. It enables “spear phishing on steroids” – highly personalised attacks, done wholesale. 
  • Multi-Language and 24/7 Operation: It’s worth noting AI doesn’t care about language barriers or time zones. It can easily draft phishing messages in whatever language the target speaks, with native-level fluency. Further, it can engage targets at any time, day or night, with instantaneous responses. The result is a tireless, globally effective phishing operation that greatly expands the pool of potential victims available to each scammer. 

All these factors combined mean AI can blend the broad reach of traditional phishing with the credibility of a personal touch. It’s the equivalent of having an army of skilled con artists at your disposal, courtesy of AI. Further, there’s chatter about custom AI “cyber assistants” that can write malware or phishing emails without the usual errors, or that can guide a less-skilled hacker through the steps of compromising a system. Malicious language models such as WormGPT and FraudGPT, which have no ethical restrictions, are already being utilised. More on these in next week’s episode ‘Beyond email & beyond phishing – the future of AI in threats’. 

Conversational Phishing in the Wild  

CYFOR Secure have recently handled multiple cases where organisations (often conveyancing firms) have been targeted by conversational phishing. The firms receive legitimate-seeming enquiries and, in some cases, even well-crafted documents and images of real houses over many days before the sender delivers a malicious link. 

These emails have, at times, been so convincing that we even considered whether they were originally legitimate – sent before the prospective customer was compromised and the email chain hijacked. However, common property details, images and documents seen across multiple cases investigated by CYFOR showed this not to be the case. 

How to Protect Against AI-Driven Phishing 

As phishing becomes more advanced, it’s important to ensure your cyber security defences are up to date. Organisations can take the following steps to mitigate the risks of AI-driven phishing: 

Security Awareness Training: Employees should be trained to recognise conversational phishing tactics, such as overly personalised emails or extended small talk leading up to a request. Simulated phishing attacks can test and reinforce employees’ understanding of their training. 

Email Authentication Protocols: Implementing SPF (Sender Policy Framework), DKIM (DomainKeys Identified Mail), and DMARC (Domain-based Message Authentication, Reporting & Conformance) can prevent email spoofing, reducing the risk that scammers can impersonate your legitimate email accounts to conduct phishing attacks. 
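To illustrate, all three protocols are published as DNS TXT records on your sending domain. The sketch below uses the reserved placeholder domain example.com, a placeholder selector name and IP range, and a placeholder reporting address – your own values will differ, and a DMARC policy is usually rolled out gradually (p=none, then quarantine, then reject):

```
; SPF: declares which servers may send mail as example.com; "-all" fails everything else
example.com.                        IN TXT "v=spf1 ip4:203.0.113.0/24 -all"

; DKIM: public key receivers use to verify signatures ("selector1" is a placeholder)
selector1._domainkey.example.com.   IN TXT "v=DKIM1; k=rsa; p=<base64-encoded public key>"

; DMARC: reject mail failing SPF/DKIM alignment and send aggregate reports
_dmarc.example.com.                 IN TXT "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
```

Together these let receiving servers check that a message claiming to be from your domain really was authorised and signed by it.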

AI-Powered Threat Detection: Organisations should invest in AI-driven security tools that analyse patterns across multiple messages, rather than scanning individual emails in isolation. This builds a much fuller picture of a phishing attack in progress. 
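To make the idea concrete, here is a minimal illustrative sketch of scoring a whole email thread rather than a single message. This is not a product implementation – real tools use trained models, and every name, phrase list, and weight below is invented for the example:

```python
from dataclasses import dataclass

# Hypothetical keyword list; a real system would use a trained classifier
SUSPICIOUS_PHRASES = ("click the link", "verify your account", "urgent", "password")

@dataclass
class Message:
    sender: str
    body: str
    has_link: bool = False
    has_attachment: bool = False

def thread_risk(messages: list[Message]) -> float:
    """Score a whole conversation: a payload arriving late, after several
    benign turns, is weighted higher than the same payload sent cold –
    the conversational-phishing pattern of trust built first, trap sprung later."""
    score = 0.0
    benign_turns = 0
    for msg in messages:
        payload = msg.has_link or msg.has_attachment
        keyword_hit = any(p in msg.body.lower() for p in SUSPICIOUS_PHRASES)
        if not payload and not keyword_hit:
            benign_turns += 1      # innocuous small talk builds the pattern
            continue
        if payload and benign_turns >= 2:
            score += 0.6           # out-of-character request after chit-chat
        elif payload:
            score += 0.3           # payload with no prior conversation
        if keyword_hit:
            score += 0.2
    return min(score, 1.0)
```

Scanned in isolation, the final email of a conversational attack looks no worse than any other message with a link; scored as a thread, the benign-then-payload shape stands out.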

Zero Trust Policies: Always verify requests through a method other than email if they seem suspicious or involve sensitive information, even if they appear to come from a trusted source. 

Multi-Factor Authentication (MFA): Using MFA reduces the risk of account compromise, even if credentials are stolen. Read more about multi-factor authentication and ways to strengthen yours here: The Cyber Compass – January 2025 – CYFOR Secure 
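As an aside, the time-based one-time codes generated by most authenticator apps follow an open standard (TOTP, RFC 6238, built on RFC 4226 HOTP) and can be computed with nothing but the standard library. A minimal sketch, not a production implementation:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the counter, dynamically truncated."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, at_time=None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HOTP keyed to the current 30-second time window."""
    t = int(time.time()) if at_time is None else at_time
    return hotp(secret, t // step, digits)

# RFC 6238 test vector: at T=59s the 8-digit SHA-1 code for this secret is 94287082
print(totp(b"12345678901234567890", at_time=59, digits=8))  # → 94287082
```

Note that because each code expires within seconds, a stolen code is far less useful than a stolen password – though sophisticated phishing kits can relay codes in real time, which is why out-of-band verification of suspicious requests still matters alongside MFA.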

Incident Response Plans: Have a clear plan in place for handling phishing incidents, including reporting and isolating compromised accounts. Check out our Managed Cyber Security Service offerings too, where we help manage your cyber security infrastructure and can quickly respond to any risks or threats. 

Conclusion 

The evolution of phishing from the crude “Nigerian prince” scams to AI-driven con artistry is a microcosm of the broader changes we’ll see in cyber threats. Attackers are moving from low-effort mass emails to sophisticated, highly targeted campaigns powered by AI. By understanding how phishing is changing and anticipating how AI might be misused in the future, organisations can better prepare themselves. Next week, we’ll go beyond email and phishing to explore how AI is being used in cyber threats across the board – from deepfake scams to AI-generated malware. Stay tuned!