
Storm-0324 Threat Group Switches Phishing Tactics to Teams

Written by Jeremy Fuchs | September 13, 2023

Microsoft has announced a stunning new development in the world of Teams phishing.

An initial access broker that has previously worked with ransomware groups is switching from email to Teams as its way into corporate networks.

The group, known as Storm-0324, is a financially motivated organization that has worked with the FIN7 group, which is known for deploying Clop ransomware.

Starting in July, according to Microsoft researchers, Storm-0324 used Teams to send phishing attacks. 

According to Microsoft, the group used the publicly available tool TeamsPhisher.

We wrote about TeamsPhisher recently. The tool first finds a target Teams user and verifies that they can receive external messages. It then starts a thread with the target, containing a message and a link to a SharePoint-hosted attachment.
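
To make that flow concrete, here's a rough, self-contained Python sketch of the sequence described above. It's purely illustrative: the function names and the simulated steps are our own stand-ins, not TeamsPhisher's actual code or any real Microsoft API.

```python
# Illustrative sketch of the TeamsPhisher-style flow described above.
# Every function here is a hypothetical stand-in that only simulates the
# step it names; none of this is TeamsPhisher's code or a Microsoft API.

def check_external_messaging(target_email: str) -> bool:
    """Simulate verifying that the target Teams user accepts external messages."""
    print(f"[1] Verifying {target_email} accepts messages from external tenants")
    return True  # assume the tenant allows external (federated) chat

def upload_lure_to_sharepoint(lure_file: str) -> str:
    """Simulate staging the payload and returning a SharePoint-style link."""
    print(f"[2] Uploading {lure_file} to attacker-controlled SharePoint storage")
    return f"https://example.sharepoint.com/shared/{lure_file}"

def start_teams_thread(target_email: str, message: str, attachment_url: str) -> None:
    """Simulate opening a new chat thread that carries the lure text and the link."""
    print(f"[3] New thread to {target_email}: {message!r} -> {attachment_url}")

def run_flow(target_email: str, lure_file: str, message: str) -> None:
    # The three steps in order: verify the target, stage the file, send the lure.
    if check_external_messaging(target_email):
        link = upload_lure_to_sharepoint(lure_file)
        start_teams_thread(target_email, message, attachment_url=link)

if __name__ == "__main__":
    run_flow("user@victim.example", "invoice.zip", "Please review the attached invoice")
```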

This type of tactic has now been used a few times, including by the group behind SolarWinds. And it's related to the attack we wrote about that spreads a DarkGate loader via Teams.

What's most interesting about this is that Storm-0324 has changed its tactics.

They have been relying on email as the first step in their campaigns since at least 2019, typically sending emails that spoof invoices or payments. The link in the email goes to a SharePoint site.

That's generally the same here, except the attack starts over Teams. Why the switch?

We've found, in our three years of delivering Teams protection, that there are a few things going on.

One, many organizations don't have Teams protection beyond what Microsoft provides. Forrester notes that "protections developed for the email inbox must extend to these environments."

And yet, we haven't seen that across the ecosystem of vendors. At most, we'll see configuration visibility and understanding of anomalous logins or behavior. 

That's all very valuable, but organizations need to go deeper, and that gap is where groups like Storm-0324 can come in.

Second, we've found that users are generally more trusting on Teams than they are on email. Think about it: have you ever received security awareness training about Teams? If you have, you're one of the few. 

This attack takes advantage of the lack of URL scanning and sandboxing from most providers. And it takes advantage of the fact that most end-users aren't scrutinizing external users on Teams. Organizations have countless chats going on with external partners and clients. Receiving a message from an external user isn't unusual--it's standard practice.

We've written about a number of unique Teams attacks in the past. We've seen hackers attach malicious .exe files. In one case, the file was called "User Centric," but it was in fact a Trojan that installed DLL files and created shortcut links to administer itself, eventually allowing the hacker to take over the computer.

We've also uncovered partner compromise attacks, where a partner organization had an account that was compromised for almost a year, unbeknownst to either company. This hacker acted differently on Teams. Instead of running a traditional spray-and-pray campaign, the hacker bided their time, waiting nearly a year before contributing to a channel. When an opportunity arrived and sharing a file was part of a natural conversation, the hacker shared a zip file containing a version of a malware kit designed for desktop monitoring and configured to install silently once the file was clicked. This Trojan would have given the attacker full control of the victim's desktop.

Finally, in an analysis of hospitals that use Teams, we found that doctors share patient medical information with practically no limits on the platform. Though medical staff generally know the security rules and the risk of sharing sensitive information via email, they tend to ignore those rules on Teams. In one extreme case, we identified a Teams channel with roughly 250 end-users, many of whom had email addresses external to the hospital domain. In this channel, sensitive medical information was shared freely. In one instance, a minor's medical information, procedures, and family circumstances were shared together with their name and Social Security information.

How We Protect Teams

With us, every file is scanned in a sandbox for malicious content, as are links within files and messages. When we detect sensitive information, such as social security numbers, it's blocked and the sender is notified. 
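
As a simplified illustration of that kind of check, and not our actual detection engine, a basic pattern match for US Social Security numbers in an outbound message might look like this:

```python
import re

# Simplified illustration of a DLP-style check, not the real scanning engine.
# Matches common US Social Security number formats such as 123-45-6789.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def contains_ssn(message_text: str) -> bool:
    """Return True if the message appears to contain an SSN-like pattern."""
    return bool(SSN_PATTERN.search(message_text))

def handle_outbound_message(sender: str, message_text: str) -> str:
    # Block the message and notify the sender when sensitive data is detected.
    if contains_ssn(message_text):
        print(f"Blocked message from {sender}: possible SSN detected")
        return "blocked"
    return "delivered"

if __name__ == "__main__":
    print(handle_outbound_message("dr.smith@hospital.example",
                                  "Patient DOB 01/02/2010, SSN 123-45-6789"))
```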

Additionally, we have a user behavior anomaly engine that identifies suspicious logins and compromised accounts, and then cross-correlates that with other protected SaaS apps to detect compromised accounts, insider threats and insecure configurations.

Beyond that, we have a compliance bot for education, which helps reduce the amount of freely shared sensitive data.

We first launched Teams protection in 2020, envisioning a shift of phishing and malware attacks onto the collaboration platform. We were a little early, but we'd rather be early than late. In the last few months, we've seen more publicly reported Teams attacks than in the last few years.

The tide is changing. Teams protection is no longer nice to have.

It's a need to have.