Welcome to the second part of the third edition of the Cyber Compass. In this article, Will discusses the future of phishing: the AI-assisted move beyond email and into the apps and cloud services we use every day. If you didn’t catch part one, ‘Scaling Social Engineering: Phishing with AI’, which discusses how phishing is scaling up and the methods cyber criminals are using, you can find it here.

While email remains the number one channel for phishing, today’s attackers aren’t stopping there. Modern phishing tactics often extend into the very apps and cloud services employees use daily. Why? Because as organisations get better at filtering out suspicious emails, attackers are finding creative ways to blend into normal business workflows.

Let’s look at a few examples of how phishing has moved beyond the inbox:

Enterprise Messaging Platforms:

If your company uses Microsoft Teams or Slack for internal communication, attackers see an opportunity. They may try to infiltrate these platforms by either stealing credentials or tricking a user into inviting a malicious bot or app. Once inside, the attacker can pose as a fellow employee or a system integration. For instance, there have been cases where hackers used Microsoft Teams chats to impersonate IT support, messaging employees with something like “Hello, this is IT. We’re doing an urgent security update, please approve this login attempt.” Because the request comes through an official company chat, employees might be less suspicious than they would be of an email. In attacks observed by CYFOR, criminals used Microsoft Teams to send messages and even initiate calls to employees while impersonating help desk staff. This is all part of a scheme to trick users into revealing their passwords or installing remote access software.

Cloud Document Shares and Collaboration Links:

We’re accustomed to receiving links to documents via Google Drive, OneDrive, Dropbox, and so on. Attackers take advantage of this trust. CYFOR are seeing multi-turn phishing attacks where a business is compromised, and the threat actor hosts malicious documents and other files on the business’s OneDrive/SharePoint. They then include links to these files in emails to the business’s customers. The result – a phishing email where the malicious link appears, to both the recipient and email filtering systems, to come from a trusted source.

These steps are not new and have been a component of social engineering for a long time. But they present new opportunities to threat actors – opportunities that employee awareness training, monitoring systems, and other security defences need to consider.

The expansion of phishing into cloud apps and services is a reminder that security can’t focus on email alone. Any platform where people communicate or store data can be targeted. Attackers will follow the trail of our digital lives – and AI will help them jump platforms seamlessly, from email to chat to whatever comes next.

Defensive Measures Against AI-Driven Phishing

Facing this new era of AI-augmented phishing might sound daunting, but there are concrete steps organisations and individuals can take to defend themselves. A layered security approach is essential – combining human awareness with advanced technology. At CYFOR, we believe that an employee clicking a link or interacting with a document – making a human mistake – should never be sufficient for your organisation to become compromised. Measures should always be in place to detect and mitigate malicious activity before it causes harm to your business or your customers.

Here are key defensive measures to consider:

  • Continuous Employee Awareness and Phishing Training: People are always the first line of defence, though they should never be the only one. Regular training and phishing simulations can dramatically improve employees’ ability to spot suspicious messages. Training should emphasise new trends – such as multi-email conversations that turn out to be phishing – not just one-off suspicious emails.

  • Advanced Email Filtering: Traditional spam filters alone are not enough for the new phishing tricks. Organisations should consider modern email security gateways or cloud email security services that use AI themselves to detect threats. These systems go beyond simple rule-based detection; they might use natural language processing to flag when an email conversation that was benign suddenly takes a turn asking for sensitive info. Some advanced filters evaluate the context of email threads, not just individual messages, to catch those “bait then hook” sequences. Others deploy ‘sandboxing’ – opening attachments in a safe, isolated environment to see if they do anything malicious before letting them through.
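To make the “bait then hook” idea concrete, here is a deliberately simplified sketch of thread-level scoring: early messages look benign, then a later message suddenly asks for sensitive information. The phrase list, threshold, and function names are illustrative assumptions – real email security products use far richer language models than keyword matching.

```python
# Toy illustration (not a production filter): flag when a previously benign
# email thread suddenly introduces requests for credentials or payment.
# The phrase list and threshold below are illustrative assumptions.

SENSITIVE_PHRASES = [
    "password", "verify your account", "bank details", "gift card",
    "wire transfer", "invoice attached", "approve this login",
]

def message_risk(text: str) -> int:
    """Count sensitive phrases appearing in a single message."""
    lowered = text.lower()
    return sum(phrase in lowered for phrase in SENSITIVE_PHRASES)

def thread_takes_a_turn(thread: list[str], threshold: int = 1) -> bool:
    """Return True when early messages look benign but a later one
    escalates to requests for sensitive information ("bait then hook")."""
    scores = [message_risk(msg) for msg in thread]
    return scores[0] == 0 and max(scores[1:], default=0) > threshold

thread = [
    "Hi, following up on the project timeline we discussed.",
    "Thanks! Draft attached, let me know your thoughts.",
    "Quick favour: please verify your account password so I can share the invoice attached.",
]
print(thread_takes_a_turn(thread))  # True: benign opening, risky escalation
```

The point is the unit of analysis: scoring the whole conversation catches an escalation that a per-message filter, seeing each email in isolation, might pass.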

  • Monitoring Tools: While the ways threat actors gain access to systems are advancing, the telltale signs of a compromise still exist. Unexpected IP addresses, “impossible travel”, or unusual email sending patterns can alert your business that an attack is underway – and give you time to mitigate the threat. It is not feasible for organisations to manually sift through all the logs generated by a platform such as Microsoft 365 – but advanced monitoring systems can help.
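The “impossible travel” signal mentioned above can be sketched in a few lines: if two sign-ins from the same account imply a travel speed no traveller could achieve, something is wrong. This is a minimal illustration, not CYFOR’s tooling; the coordinates, timestamps, and speed threshold are made-up assumptions.

```python
# Minimal "impossible travel" check (illustrative only).
# Two sign-ins are flagged if the distance between their locations
# implies a travel speed faster than an airliner.
import math
from datetime import datetime

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0  # Earth's mean radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def impossible_travel(login_a, login_b, max_speed_kmh=900):
    """Flag two logins whose implied travel speed exceeds max_speed_kmh
    (roughly airliner cruising speed). Each login is (time, lat, lon)."""
    t_a, lat_a, lon_a = login_a
    t_b, lat_b, lon_b = login_b
    hours = abs((t_b - t_a).total_seconds()) / 3600
    if hours == 0:
        return True  # simultaneous logins from two places
    return haversine_km(lat_a, lon_a, lat_b, lon_b) / hours > max_speed_kmh

# A sign-in from Manchester, then one from Sydney 30 minutes later.
manchester = (datetime(2024, 5, 1, 9, 0), 53.48, -2.24)
sydney = (datetime(2024, 5, 1, 9, 30), -33.87, 151.21)
print(impossible_travel(manchester, sydney))  # True
```

Real platforms layer this signal with IP reputation, VPN awareness, and per-user travel baselines to cut false positives, but the underlying arithmetic is this simple.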

  • Expert Monitoring and Incident Response: Even with all the best tools and training, some sophisticated phishing attempts might slip through. That’s where having a strong incident response plan and expert support comes in. It can be very effective to partner with cyber security experts who specialise in threat monitoring and can keep an eye on your environment 24/7. For example, consulting with a security firm like CYFOR Secure can provide advanced email protection and monitoring services that augment your in-house capabilities. These experts can help fine-tune phishing detection systems, swiftly investigate any suspected breach, and guide your team in containing and recovering from incidents if they occur. The value of such a partnership is in not having to face these AI-enhanced threats alone – you’ll have seasoned professionals and cutting-edge threat intel on your side.

In summary, defending against AI-driven phishing is about staying proactive and layered in your defences. Train your people, equip them with smart technology, lock down those login doors, and have backup from security specialists. Phishing may be getting smarter, but so are our defences.

The Future of AI in Cyber Threats

Phishing is just one arena where AI is making an impact. Looking forward, we can expect AI to feature in other cyber-criminal tools and tactics – sometimes in unsettling ways. Security researchers and hackers alike are exploring the concept of autonomous or semi-autonomous hacking agents. These are AI programs that could perform tasks like vulnerability scanning, exploitation, and pivoting through networks without needing constant human guidance. It sounds like science fiction, but studies have already shown promising (or worrying) results. For instance, one study found that, when given basic instructions, an AI system was able to autonomously exploit 87% of systems where a known vulnerability existed. In other words, the AI could effectively act like a junior hacker, finding and exploiting published security holes in test environments on its own. Though this was done in a controlled and ethical setting, it stands as a proof of concept that AI can potentially handle complex hacking tasks. One can imagine that in the wrong hands, such an AI agent could be directed at real targets, dramatically lowering the skill barrier for launching cyber-attacks.

There’s also the risk that AI-driven tools originally intended for good get co-opted by attackers, just as legitimate security tools have been. A prime example from the last few years is Cobalt Strike, a professional penetration testing suite used by security teams to simulate attacks and find weaknesses. Unfortunately, it’s also become a favourite tool of ransomware gangs and nation-state hackers. In fact, malicious use of Cobalt Strike has exploded in recent years – one report noted a 161% increase in its use by cyber-criminals in just a one-year period. Why do attackers love it? Because it’s a ready-made toolkit with all the features a hacker needs (stealthy malware deployment, controlling infected machines, etc.). Since it’s used by legitimate testers, its behaviour can blend in with “normal” admin activity. Attackers essentially took a tool designed to help companies and repurposed it to hurt them, turning Cobalt Strike into a cornerstone of many ransomware operations.

Now consider the next generation of tools: AI-driven penetration testing and defence tools. These might be platforms that automatically probe networks, or AI systems that learn how to bypass certain defences. They could be incredibly useful for security teams – think of an AI that finds vulnerabilities before the bad guys do, or an AI that can automatically respond to an intruder in real-time. But in the cat-and-mouse game of cyber security, we must assume that whatever tools the good guys have, the bad guys will try to twist to their advantage. If an AI tool can autonomously test your defences, an attacker could use that same tool to attack those defences. It’s the Cobalt Strike scenario all over again, but with AI.

We’re already seeing hints of this on underground forums. There’s chatter about custom AI “cyber assistants” that can write malware or phishing emails without the usual errors, or that can guide a less-skilled hacker through the steps of compromising a system. In some cases, cyber criminals have advertised AI-based services – for example, malicious versions of language models (with names like “WormGPT” or “FraudGPT”) that have no ethical restrictions and are tuned to assist in crafting attacks. These are early and often crude, but they signal a trend: AI-as-a-service for bad actors. Essentially, someone with money but not hacking expertise could rent an AI that does the dirty work for them, from creating phishing campaigns to finding vulnerabilities, much like hiring a mercenary.

So, what does this future look like?

We might see a kind of arms race where companies deploy AI cyber defences and attackers respond with AI cyber offenses. AI might battle AI behind the scenes – one trying to breach, the other trying to block. Scenarios that were once theoretical, like a network worm that intelligently adapts its strategy as it encounters different environments, could become reality with AI logic built in. The threats might also become more autonomous. A traditional hacker must make a lot of decisions along the way and often can be slowed down by uncertainty or lack of knowledge about a target. An AI agent, given a broad goal like “find and exfiltrate sensitive data”, might iteratively figure out the steps, all without a person manually steering every move.

This all sounds intimidating, but it’s important to note that AI will also greatly aid defenders (and is already doing so in many security products).

Many advanced threat detection systems are underpinned by machine-learning models that sift through billions of events to find the one anomaly. AI can help predict attacks by analysing patterns across data that no human could manually digest. It can also take on mundane tasks, freeing up human analysts to focus on strategy and creative problem-solving. In the same way that attackers might use AI to write malware, defenders use AI to write better detection signatures or to automatically quarantine infected machines at the first sign of trouble.
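At its simplest, the anomaly-hunting described above amounts to asking whether an observation sits far outside its historical baseline. The sketch below uses a basic z-score over hourly event counts; the numbers are made up, and real products use far richer models than a single statistical test.

```python
# Toy sketch of baseline anomaly detection: flag any hour whose event
# count deviates from the mean by more than z_threshold standard
# deviations. Illustrative only; thresholds and data are assumptions.
import statistics

def anomalous_hours(hourly_counts, z_threshold=3.0):
    """Return indices of hours whose event count is a statistical outlier."""
    mean = statistics.mean(hourly_counts)
    stdev = statistics.pstdev(hourly_counts)
    if stdev == 0:
        return []  # perfectly flat traffic: nothing stands out
    return [i for i, c in enumerate(hourly_counts)
            if abs(c - mean) / stdev > z_threshold]

# 23 quiet hours, then a sudden spike in outbound email events.
counts = [40, 42, 38, 41, 39, 40, 43, 37, 41, 40, 39, 42,
          38, 40, 41, 39, 43, 40, 38, 42, 41, 39, 40, 600]
print(anomalous_hours(counts))  # [23]
```

The gap between this toy and a production system is learning what “normal” looks like per user, per device, and per time of day – which is exactly where machine-learning models earn their keep.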

The presence of AI in both attack and defence doesn’t change the fundamental need for vigilance, but it does raise the stakes in terms of speed and sophistication. Companies will need to be nimble and perhaps consider adopting a “fight fire with fire” approach, deploying AI to counter AI. At CYFOR, we are always adapting to new technology in both our proactive security monitoring services, and our reactive incident response offerings – providing businesses with the right defences against threats both new and old.


Conclusion

AI is amplifying both the attack and defence sides of the equation. Awareness, education, and cutting-edge defences will be key to staying one step ahead. Cyber security has always been about adapting to new tactics – AI is just the latest twist, and with the right approach, it can be managed and mitigated. Through a mix of technology and information, we can prevent today’s AI-supercharged phishing attempts from becoming tomorrow’s successful breaches. Stay vigilant, stay informed, and don’t hesitate to seek expert help in this fast-evolving landscape of cyber threats.

Written by William Poole, Technical Director, CYFOR Secure