By Alon Klayman and Tomer Kachlon, Team AXON
Phishing is no longer just an email problem. Microsoft Teams, the collaboration tool trusted by millions, is rapidly becoming a new entry point for attackers. Adversaries are abusing Teams’ default external collaboration features to impersonate IT staff, launch vishing calls, share malicious files, and even bypass Microsoft’s built-in warnings.
In this blog, we break down how attackers are exploiting Teams for initial access, the forensic artifacts left behind in Microsoft 365 audit logs, and the detection logic SOC teams can use to uncover these threats. Whether you’re a threat hunter or security leader, understanding Teams phishing is no longer optional – it’s essential.
Microsoft Teams usage for initial access is not just a theoretical vector; it has been observed in the wild as part of various attacks and campaigns, with a significant spike noted over the past year.
In November 2024, Team AXON published VEILDrive, research into an unconventional campaign targeting a U.S. critical infrastructure company. The operation featured atypical tactics, techniques, and procedures, with one standout method being the abuse of Microsoft Teams for initial access.
Following the VEILDrive discovery, our team observed a growing pattern of threat actors using Microsoft Teams as an initial access vector.
One of the follow-up patterns observed by our team was the use of fake IT/help desk communication from external senders via Microsoft Teams. This pattern was sometimes seen as a follow-up to spam flooding initiated by the threat actor, designed to make it appear that the targeted user was facing a technical issue requiring assistance from the IT department.
Given the growing frequency of these attacks, our team conducted a focused research effort into how Microsoft Teams is being abused. Our goal was to understand the mechanics of these attacks, why Teams is emerging as a preferred vector for criminal groups, and how defenders can detect and mitigate these threats before damage occurs.
External communication capabilities are the primary enabler of initial access through Microsoft Teams. The fact that this capability is enabled by default in Microsoft 365 tenants makes it particularly attractive to attackers.
Based on the in-the-wild examples we've spotted so far, threat actors first use a previously compromised Teams user account or create a new Entra ID tenant of their own. In many cases, we observed the use of .onmicrosoft.com domains, the fallback domains Microsoft provides for Microsoft 365 business and school accounts when an organization doesn’t configure its own custom domain.
One of the questions we asked ourselves when digging deeper was, why would threat actors create their own tenants and purchase M365 licenses? Can’t they use trial licenses or even personal Microsoft accounts?
Based on our simulations, the main insight was that a free Microsoft account (outlook[.]com) with free Teams, a trial Teams license, and a paid M365 tenant are not equivalent, both in terms of logging and in terms of potential functionality and attack surface:
Offensive POV
Threat actors and red teamers can easily initiate one-on-one chats and send messages to one or more users. As a collaboration platform, Microsoft Teams simplifies the identification of external users through email address searches within the application (GUI or web). This allows an attacker to confirm both that a target user exists and that the user can receive messages from external tenants.
If the email address can’t be found, it means that it either doesn’t exist or that the target organization blocks this type of external communication.
Another important aspect to be aware of is the additional security measures Microsoft has implemented to notify the end user of potential impersonation in an incoming message from an external identity.
There are two types of messages we identified as part of our research:
1. The “classic” external communication warning: Shown every time the specific external sender communicates with the specific target user.
2. Potential phishing warning message: This warning banner is displayed only in specific cases, differing from the standard version. Microsoft likely applies particular logical tests to identify messages with a higher probability of being phishing attempts.
Defensive POV
In the case of a written one-on-one chat, which is one of the main attack methods used by threat actors, the main log entries created in the M365 audit logs are ChatCreated and MessageSent.
In addition to these main log types, complementary log types are created in specific scenarios. A minimal query for reviewing the core events is sketched below.
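As a minimal sketch, assuming the same RAW.O365_AUDIT_LOGS table and column layout used in the hunting query shared later in this post (adjust names to your own pipeline), the core one-on-one chat events can be reviewed like this:

-- Review recent Teams one-on-one chat events; ChatCreated and MessageSent
-- are the two core operations discussed throughout this post.
SELECT
    EVENT_TIME,
    OPERATION,
    USER_ID,
    RECORD_SPECIFIC_DETAILS:chat_thread_id AS CHAT_THREAD_ID,
    RECORD_SPECIFIC_DETAILS:communication_type AS COMMUNICATION_TYPE,
    -- Marks chats that include users from an external (foreign) tenant
    RECORD_SPECIFIC_DETAILS:participant_info:has_foreign_tenant_users AS HAS_FOREIGN_TENANT_USERS
FROM RAW.O365_AUDIT_LOGS
WHERE WORKLOAD = 'MicrosoftTeams'
  AND OPERATION IN ('ChatCreated', 'MessageSent')
  AND RECORD_SPECIFIC_DETAILS:communication_type = 'OneOnOne'
  AND EVENT_TIME > CURRENT_TIMESTAMP - interval '7d'
ORDER BY EVENT_TIME DESC;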
Offensive POV
Threat actors have adapted their tactics over time, moving beyond message-based phishing to incorporate voice chat phishing (vishing). This shift may be attributed to Microsoft's enhanced security measures, which make it more challenging to deceive victims with malicious textual messages, as previously discussed in the One-On-One Chat section.
When using voice-based phishing, an external sender can, by default, call an organizational user without first sending a message, although the call does create a new chat.
This makes vishing a very attractive option for threat actors, and it does appear to be used in the wild, mainly because no warning pop-up appears on the victim's side.
Defensive POV
Surprisingly, M365 audit logs, which include relatively in-depth logging of Microsoft Teams activities, didn’t include any indication of an incoming Teams voice call. The only log entries created were ChatCreated and MessageSent, the exact two event types we observed when an external text-based message is sent. According to our research, there is no definitive way to distinguish between the two scenarios.
This gap becomes even more important when taking the next part (Screen Sharing) into consideration.
Offensive POV
Attackers commonly exploit Microsoft Teams for initial access, often alongside Remote Monitoring and Management (RMM) tools like QuickAssist, AnyDesk, and DWAgent. This led us to investigate whether attackers could instead utilize Microsoft Teams' integrated screen sharing and remote control features. While we found this to be only partially feasible, it remains a notable concern, and future discoveries may reveal further potential vulnerabilities.
No dedicated M365 audit log entries were created when screen sharing was used. This, of course, prevents us from distinguishing between a classic text-based/voice conversation and one that includes screen sharing.
Offensive POV
While it’s not the focus of this short blog post, we want to mention that during our simulations we validated that sending files via one-on-one chat communication is possible, even though it is not an option in the GUI. Here’s how:
Section 3 of the following blog post can be used as a reference - https://posts.inthecyber.com/leveraging-microsoft-teams-for-initial-access-42beb07f12c4
Note that the “file attachment” is actually uploaded to the sender’s SharePoint, and even though it is rendered as a file icon in the one-on-one chat, it points to the SharePoint URL.
Food for thought: This file can be modified later by the threat actor by simply modifying the SharePoint file itself.
This means that even though incoming one-on-one chats shouldn’t include attached files by design, defenders should remember that this option is available to attackers and not ignore it.
Defensive POV
Embedded files (in practice, SharePoint links) can be identified in the relevant “MessageSent” entries in the M365 audit logs, in the “MessageURLs” field, as shown in the sketch below.
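A minimal sketch of that idea, again assuming the RAW.O365_AUDIT_LOGS layout used in the hunting query later in this post (where the field is normalized as message_ur_ls):

-- Surface one-on-one Teams messages that carry embedded URLs; file "attachments"
-- sent this way resolve to the sender's SharePoint.
SELECT
    EVENT_TIME,
    USER_ID AS SENDER,
    RECORD_SPECIFIC_DETAILS:chat_thread_id AS CHAT_THREAD_ID,
    RECORD_SPECIFIC_DETAILS:message_ur_ls AS MESSAGE_URLS
FROM RAW.O365_AUDIT_LOGS
WHERE WORKLOAD = 'MicrosoftTeams'
  AND OPERATION = 'MessageSent'
  AND RECORD_SPECIFIC_DETAILS:communication_type = 'OneOnOne'
  AND RECORD_SPECIFIC_DETAILS:message_ur_ls IS NOT NULL
  AND EVENT_TIME > CURRENT_TIMESTAMP - interval '30d'
ORDER BY EVENT_TIME DESC;

Filtering further on URLs containing sharepoint.com, or on external sender domains, can narrow the results to the file-sharing pattern described above.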
Microsoft has implemented security pop-ups to warn users about potential threats, posing a significant hurdle for attackers. While previous methods to bypass these warnings have been documented, our recent tests indicate that these methods are no longer effective.
During our simulations, we asked ourselves: is there a way to send text messages that the victim can read without first being shown those “external user” pop-ups? We found that some of the security measures enforced on voice chats and one-on-one communications are not consistently enforced in Teams meetings.
Teams Meetings vs Voice Calls
In Microsoft Teams, meetings and voice calls serve different purposes and offer distinct features. Meetings are designed for larger, collaborative sessions, potentially involving multiple participants and featuring tools such as screen sharing and recording. Calls are typically one-on-one or small-group interactions, more akin to a phone call, with a focus on quick and direct communication.
For instance, while the meeting GUI does allow file sharing, this capability isn’t the most significant concern: as previously demonstrated, file sharing can already be achieved in one-on-one chats using tools like Burp, regardless of what the GUI exposes.
A crucial point to note is that the warning banner is not consistently enforced during meetings.
Given the increasing prevalence and evolution of Microsoft Teams phishing as an initial access vector, our team has developed comprehensive detection logic to address this threat. This logic identifies suspicious external communications as threat-hunting signals and incorporates a dedicated enrichment and scoring layer. This ensures that the most significant hits are prioritized and classified as alerts.
Key Detection Logic
We employed a UEBA approach, focusing on identifying new Microsoft Teams chats where the domain of the sender is both external and not typically used in communications with organizational Teams users. This emphasis on "ChatCreated" events stems from their prevalence across various chat types, including those used in vishing and text-based phishing attempts by threat actors.
The hits are initially considered threat-hunting signals (a.k.a. leads). On top of those hunting signals, we created robust enrichment and scoring layers to ensure that SOC analysts who use Hunters can prioritize the most important and interesting leads and quickly investigate whether each one is a true positive.
Contextual Enrichments & Scoring Layer
To give the analyst the option to conduct quick triage and analyze the lead, we created a dedicated drill-down that fetches relevant M365 unified audit log entries based on the relevant entity (sender or receiver) and the relevant chat ID. The following screenshot shows an example of this drill-down, which includes a significant number of TIMailData log entries right before the malicious incoming Teams message:
This summarized view of events further enhances efficiency, allowing for quick determination of significant event types within associated log entries.
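The drill-down itself is a Hunters feature, but the underlying idea can be approximated with a query along these lines (a sketch, not the production logic, with placeholder values taken from the lead):

-- Fetch all unified audit log activity for the targeted user, the external sender,
-- and the suspicious chat thread in a window around the lead. A burst of TIMailData
-- entries shortly before the incoming message typically points to a preceding spam flood.
SELECT
    EVENT_TIME,
    WORKLOAD,
    OPERATION,
    USER_ID,
    RECORD_SPECIFIC_DETAILS:chat_thread_id AS CHAT_THREAD_ID
FROM RAW.O365_AUDIT_LOGS
WHERE (
        USER_ID IN ('targeted.user@victim.example', 'sender@attacker.onmicrosoft.com')  -- placeholders
     OR RECORD_SPECIFIC_DETAILS:chat_thread_id = '<chat_thread_id_from_the_lead>'       -- placeholder
      )
  AND EVENT_TIME BETWEEN '2025-01-01 10:00:00'::timestamp - interval '6h'               -- placeholder lead time
                     AND '2025-01-01 10:00:00'::timestamp + interval '6h'
ORDER BY EVENT_TIME;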
Based on the information available in the lead itself and also in the drill-down/enrichment, we created multiple scoring layers to increase the severity or the confidence of the leads based on different characteristics:
Confidence
The following logic can be applied to enhance the confidence level of a suspicious new chat initiated by an incoming Microsoft Teams message from an external user. This method can effectively narrow down the alerts to those more likely to be true positives.
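As an illustrative, deliberately simplified example of the kind of conditions involved, drawing on the patterns described earlier in this post (fallback .onmicrosoft.com sender domains, fake IT/help desk personas, and a preceding spam flood), confidence could be raised roughly as follows. The table and column names here (TEAMS_PHISHING_LEADS, SENDER_DISPLAY_NAME, PRECEDED_BY_MAIL_FLOOD) are hypothetical, and the exact production logic differs:

-- Hypothetical confidence boosts layered on top of materialized leads.
SELECT
    USER_ID,
    CHAT_THREAD_ID,
      CASE WHEN USER_DOMAIN LIKE '%.onmicrosoft.com' THEN 20 ELSE 0 END   -- fallback tenant domain
    + CASE WHEN SENDER_DISPLAY_NAME ILIKE ANY ('%help desk%', '%helpdesk%', '%it support%')
           THEN 20 ELSE 0 END                                             -- IT/help desk persona
    + CASE WHEN PRECEDED_BY_MAIL_FLOOD THEN 30 ELSE 0 END                 -- spam flood before the chat
      AS CONFIDENCE_BOOST
FROM TEAMS_PHISHING_LEADS
ORDER BY CONFIDENCE_BOOST DESC;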
Severity
The following logic can be used to identify suspicious chats with a higher potential of escalating into a significant incident, based on follow-up activities logged in the M365 audit logs:
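The exact enrichment is part of the product, but as a rough sketch of the idea (hypothetical table and column names; the operation list is an example only, adjust it to the events available in your environment), follow-up audit activity by the targeted user shortly after the suspicious chat can be joined against the leads:

-- Hypothetical severity sketch: risky follow-up operations by the targeted user
-- within a few hours of the suspicious chat.
SELECT
    l.CHAT_THREAD_ID,
    l.TARGET_USER_ID,
    ARRAY_AGG(DISTINCT a.OPERATION) AS FOLLOW_UP_OPERATIONS
FROM TEAMS_PHISHING_LEADS l          -- hypothetical materialization of the hunting query below
JOIN RAW.O365_AUDIT_LOGS a
  ON a.USER_ID = l.TARGET_USER_ID    -- the internal (targeted) member's UPN from the lead
 AND a.EVENT_TIME BETWEEN l.FIRST_SEEN AND l.FIRST_SEEN + interval '4h'
WHERE a.OPERATION IN ('FileDownloaded', 'Add service principal.', 'Consent to application.')  -- examples only
GROUP BY l.CHAT_THREAD_ID, l.TARGET_USER_ID;

The base threat-hunting query that produces the leads themselves is shared below.

-- Hunting query: new one-on-one Teams chats created by external senders whose
-- domain is not commonly seen communicating with the organization.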
WITH COMMONLY_USED_DOMAINS AS (
SELECT LOWER(SPLIT_PART(USER_ID , '@', 2)) AS DOMAIN_COMMONLY_USED,
MIN(EVENT_TIME) AS MIN_EVENT_TIME,
MAX(EVENT_TIME) AS MAX_EVENT_TIME,
ARRAY_AGG(DISTINCT OPERATION) AS OPERATIONS,
COUNT(*) AS COUNTER
FROM RAW.O365_AUDIT_LOGS
WHERE 1=1
AND WORKLOAD = 'MicrosoftTeams'
-- This is the learning period, adjust per your needs
AND EVENT_TIME BETWEEN CURRENT_TIMESTAMP - interval '60d' AND CURRENT_TIMESTAMP - interval '30d'
AND USER_ID ILIKE '%@%'
GROUP BY DOMAIN_COMMONLY_USED
-- Threshold that determines how many appearances for a domain to be considered common
HAVING COUNTER > 50
)
SELECT
MIN(EVENT_TIME) AS FIRST_SEEN,
MAX(EVENT_TIME) AS LAST_SEEN,
USER_ID AS USER_ID,
LOWER(SPLIT_PART(USER_ID , '@', 2)) AS USER_DOMAIN,
RECORD_SPECIFIC_DETAILS:chat_thread_id AS CHAT_THREAD_ID,
ARRAY_AGG(DISTINCT RECORD_SPECIFIC_DETAILS:members[0].DisplayName) AS MEMBER_DISPLAY_NAME_1,
ARRAY_AGG(DISTINCT RECORD_SPECIFIC_DETAILS:members[0].UPN) AS MEMBER_UPN_1,
ARRAY_AGG(DISTINCT RECORD_SPECIFIC_DETAILS:members[1].DisplayName) AS MEMBER_DISPLAY_NAME_2,
ARRAY_AGG(DISTINCT RECORD_SPECIFIC_DETAILS:members[1].UPN) AS MEMBER_UPN_2,
ARRAY_AGG(DISTINCT RECORD_SPECIFIC_DETAILS:message_ur_ls) AS MESSAGE_URLS,
ARRAY_AGG(DISTINCT OPERATION) AS OPERATIONS,
ARRAY_AGG(DISTINCT RECORD_SPECIFIC_DETAILS:resource_tenant_id) AS RESOURCE_TENANT_ID,
ARRAY_AGG(DISTINCT RECORD_SPECIFIC_DETAILS:communication_type) AS COMMUNICATION_TYPE,
ARRAY_AGG(DISTINCT RAW:ClientIP) AS CLIENT_IP,
ARRAY_AGG(DISTINCT RAW:ParticipantInfo.HasForeignTenantUsers) AS INCLUDE_EXTERNAL_ENTITY
FROM RAW.O365_AUDIT_LOGS
WHERE 1=1
AND WORKLOAD = 'MicrosoftTeams'
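-- Exclude the well-known SharePoint app identity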
AND NOT USER_ID IN ('app@sharepoint')
AND USER_ID ILIKE '%@%'
AND OPERATION IN ('ChatCreated')
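-- One-on-one chats that include users from an external (foreign) tenant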
AND RECORD_SPECIFIC_DETAILS:participant_info:has_foreign_tenant_users = true
AND RECORD_SPECIFIC_DETAILS:communication_type = 'OneOnOne'
-- remove commonly used domains
AND LOWER(SPLIT_PART(USER_ID , '@', 2)) NOT IN (SELECT DOMAIN_COMMONLY_USED FROM COMMONLY_USED_DOMAINS)
AND EVENT_TIME > CURRENT_TIMESTAMP - interval '30d'
GROUP BY USER_ID, CHAT_THREAD_ID,USER_DOMAIN
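-- Threshold to filter out unusually high-volume (likely noisy) sender/chat combinations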
HAVING COUNT(*) < 20
To make it even easier, here is a list of guiding questions to answer when investigating a Threat-Hunting hit:
Microsoft Teams phishing isn’t a fringe technique anymore — it’s an active, evolving threat that bypasses traditional email defenses and exploits trust in collaboration tools. By monitoring audit logs like ChatCreated and MessageSent, enriching signals with contextual data, and training users to spot IT/help desk impersonations, SOC teams can close this new gap before it’s exploited.
Attackers are already moving into Teams. The question is whether defenders will adapt quickly enough. Start by applying the hunting queries and detection logic we’ve shared here — and make Teams phishing detection a first-class priority in your SOC.