Threat actors are embedding AI into cybercrime operations to accelerate speed across the attack lifecycle—compressing timelines, lowering technical barriers, and increasing operational resilience without fundamentally changing attacker objectives. https://msft.it/6044QLNdk Drawing on insights from RSAC 2026, Sherrod DeGrippo shares how AI is evolving from a tool into an attack surface, and what this shift means for defenders. Most activity today is not fully autonomous or using agentic AI to run campaigns end to end. Instead, AI is being operationalized within scalable cybercrime ecosystems, where services are modular, repeatable, and optimized for efficiency. This model lowers the barrier to entry while increasing precision, persistence, and the difficulty of disruption. As AI accelerates tempo and iteration across the attack lifecycle, effective defense depends on disrupting these ecosystems and closing the intelligence loop: disruption generates signal, signal feeds intelligence, intelligence strengthens detection, and detection drives response. These dynamics illustrate how established tradecraft becomes more efficient—and more resilient—when AI is embedded at scale. Microsoft Threat Intelligence has observed AI-driven workflows that enable faster recon., rapid infra. and malware iteration, and sustained misuse of legitimate access, allowing threat actors to adapt more quickly and operate at greater scale. Learn more: https://msft.it/6045QLNdZ
Microsoft Threat Intelligence
Computer and Network Security
Redmond, Washington 112,247 followers
We are Microsoft's global network of security experts. Follow for security research and threat intelligence.
About us
The Microsoft Threat Intelligence community is made up of more than 10,000 world-class experts, security researchers, analysts, and threat hunters analyzing 78 trillion signals daily to discover threats and deliver timely and hyper-relevant insight to protect customers. Our research covers a broad spectrum of threats, including threat actors and the infrastructure that enables them, as well as the tools and techniques they use in their attacks.
- Website
-
https://aka.ms/threatintelblog
External link for Microsoft Threat Intelligence
- Industry
- Computer and Network Security
- Company size
- 10,001+ employees
- Headquarters
- Redmond, Washington
- Specialties
- Computer & network security, Information technology & services, Cybersecurity, Threat intelligence, Threat protection, and Security
Updates
-
Microsoft Threat Intelligence has attributed the Axios npm supply chain attack to North Korean state actor Sapphire Sleet. Malicious npm packages for updated versions of Axios (1.14.1 and 0.30.4) downloaded payloads from command and control attributed to Sapphire Sleet. Organizations affected by this attack are urged to roll back to safe versions (1.14.0 or 0.30.3 or earlier), rotate secrets and credentials that are exposed to compromised systems, and disable auto-updates. Our latest blog has our analysis of the attack, additional mitigation recommendations, and Microsoft Defender detection and hunting guidance: https://msft.it/6043QLPxS
-
Prompt abuse is a critical security concern, with threat actors increasingly manipulating AI systems through carefully crafted inputs that push models beyond their intended boundaries. Incident response investigations highlight how hidden instructions embedded in content such as URLs, documents, or messages can bias outputs, alter summaries, or expose sensitive context—often without the user doing anything unsafe. Real world incidents show how tactics like direct prompt overrides, extractive prompt abuse against sensitive inputs, and indirect prompt injection can influence AI behavior in subtle ways that are difficult to detect through traditional security signals. Because prompt abuse leverages natural language, malicious activity can blend into legitimate interactions, leaving little trace without the right visibility and telemetry. Effective defense moves threat modeling into practice by monitoring prompt activity, investigating anomalous AI behavior, and applying governance and access controls to reduce impact and prevent recurrence. Learn more and get guidance from this Microsoft Incident Response blog post. https://msft.it/6044Qv9ga Threat actors operationalize AI across the cyberattack lifecycle—understand how prompt abuse fits into a wider pattern of AI-enabled tradecraft: https://msft.it/6048Qv9g0
-
In the latest Microsoft Threat Intelligence Podcast episode, Microsoft’s Sherrod DeGrippo and the FBI Cyber Division’s Jarrod Forgues Schlenker discuss what actually reduces breaches: consistent execution of foundational controls. https://msft.it/6041QQxYL. They talk in depth about Operation Winter Shield, which aims to turn law enforcement visibility from real investigations into simple, actionable defensive steps that organizations can take to create barriers for adversaries. “We are uniquely situated, given the optics and the information that we have through our investigations, to empower the public to protect themselves and to be that catalyst for positive change,” Jarrod said. Sherrod and Jarrod discuss the themes tackled in Operation Winter Shield, including phishing-resistant MFA, highlighting that credential theft shows up consistently in investigations as initial access path. They also emphasize the importance of securing and retaining high-quality logs, noting that in cyber investigations, the crime scene is a network, and missing logs means the crime scene disappears. At its core, Operation Winter Shield reinforces a strong call to focus on prevention. Small, consistent improvements in foundational controls compound into real resilience. Listen to the episode here: https://msft.it/6041QQxYL. To learn more about Operation Winter Shield, visit https://msft.it/6042QQxY0. To learn how Microsoft is supporting Operation Winter Shield, read: https://msft.it/6043QQxYF.
-
The Microsoft Defender Research team has published guidance on detecting, investigating, and defending against the sophisticated CI/CD-focused supply chain compromise involving the widely used open-source vulnerability scanner Trivy: https://msft.it/6049QQ6Xz
-
Microsoft Threat Intelligence has observed threat actors actively experimenting with techniques to bypass or “jailbreak” AI safety controls. By reframing malicious requests, chaining instructions across multiple interactions, and misusing system‑ or developer‑style prompts, threat actors can coerce models into generating restricted content that bypasses built‑in safeguards. These techniques demonstrate how generative AI models are probed, shaped, and redirected to support reconnaissance, malware development, and social engineering while minimizing friction from moderation. AI guardrails have become dynamic surfaces that attackers test and manipulate to sustain operational advantage. As AI becomes more deeply embedded in enterprise workflows, understanding how attackers test and manipulate these guardrails is critical for defenders. Learn more about securing generative AI models on Azure AI Foundry: https://msft.it/6048Qseva Understand how threat actors are operationalizing AI and get mitigation guidance from this Microsoft Threat Intelligence blog post: https://msft.it/6040QsevI
-
During tax season, threat actors exploit the urgency and familiarity of time-sensitive emails like refund notices, filing reminders, and requests from tax professionals to push malicious attachments, QR codes, and multi-step link chains. Microsoft Threat Intelligence has observed campaigns themed around W-2 and other tax documents that impersonate government agencies, tax services firms, and financial institutions. These campaigns aim to steal personal and financial data, harvest credentials through phishing-as-a-service (PhaaS) platforms, or deliver malware. Many campaigns target individuals but others specifically target accountants and other professionals who handle sensitive documents, have access to financial data, and are accustomed to receiving tax-related emails during this period. Our latest blog has details from our analysis of several campaigns leveraging the tax season for social engineering, as well as Microsoft Defender protection, detection, and hunting guidance: https://msft.it/6046QUflq
-
Microsoft Defender Experts is sharing an investigation into the sophisticated social engineering operation known as Contagious Interview, which targets software developers and continues to be prevalent. https://msft.it/6042QmHbg Threat actors target developers to attempt to compromise developer endpoints with access to source code, CI/CD pipelines, and production infrastructure. They pose as recruiters from cryptocurrency trading firms or AI-based solution providers and achieve initial access through a convincingly staged recruitment process that mirrors legitimate interviews but leads to a backdoor. The modular backdoor then enables theft of sensitive information like API tokens, cloud credentials, signing keys, cryptocurrency wallets, and password manager artifacts, and also leads to follow-on malicious activity and other payloads. Organizations can defend against this threat by monitoring developer endpoints and build tools, and by hunting for suspicious repository activity and dependency execution patterns. Read the latest Microsoft Defender Experts blog to get the full attack chain analysis, as well as protection, detection, and hunting guidance:
-
The cybercriminal threat actor tracked by Microsoft Threat Intelligence as Storm-2561 is running an SEO-poisoning campaign that redirects people searching for enterprise VPN software to spoofed sites and malicious ZIP downloads leading to credential theft. https://msft.it/6045QlyZF The ZIP file contains a malicious, digitally signed installer that masquerade as a trusted VPN client. The attack chain ultimately loads a variant of Hyrax infostealer that captures VPN sign-in credentials and configuration data, and exfiltrates it to attacker infrastructure. Read the full Microsoft Defender Experts analysis of the tactics, techniques, and procedures (TTPs) and indicators of compromise of this Storm-2561 campaign, and get protection, detection, and hunting guidance:
-
Threat actors are rapidly integrating AI as a core component of their tradecraft, using it across the attack lifecycle to move faster, scale more easily, and experiment with new tactics at unprecedented speed. https://msft.it/6048QYVBo AI is being used to operationalize reconnaissance, social engineering, malware development, and infrastructure setup, enabling actors to quickly test ideas, abandon what fails, and expand what works. This shift is especially visible among North Korean threat actors, where AI lowers the barrier to entry and enables less sophisticated operators to demonstrate greater agility. Actors affiliated with Democratic People's Republic of Korea (DPRK) activity have been observed using AI to generate end-to-end malware and refresh tooling in ways that remove traditional indicators used for attribution. AI-assisted social engineering has also reduced telltale language errors, making phishing and impersonation campaigns more convincing. At scale, AI enables threat actors to create believable online personas and sustain long-running operations without previous growth bottlenecks. Learn how defenders must think about detection and response from Greg Schloemer and Vlad H. on this episode of the Microsoft Threat Intelligence Podcast, hosted by Sherrod DeGrippo. For more information on how threat actors are operationalizing AI, read: https://msft.it/6040QYVBq