What’s new in Microsoft Security Copilot



Security and IT teams move fast – and so does Security Copilot. This month, we’re delivering powerful new capabilities that help security and IT professionals investigate threats, manage identities, and automate protection with greater speed and precision. From AI-powered triage and policy optimization to smarter data exploration and expanded language support, these updates are designed to help you stay ahead of threats, reduce manual effort, and unlock new levels of efficiency.

Let’s dive into what’s new.

 

Improve IT efficiency with Copilot in Microsoft Intune – now generally available

IT admins can now use Security Copilot in Intune, which includes a dedicated data exploration experience, allowing them to ask questions, extract insights, and take action – all from within the Intune admin center. Whether it’s identifying non-compliant devices, managing updates, or automating remediation, Copilot simplifies complex workflows and brings data and actions together in one place.
Learn more: Copilot in Microsoft Intune announcement

 

Streamline identity security with Copilot in Microsoft Entra – now generally available

Security Copilot in Microsoft Entra now brings AI-assisted investigation and identity management directly into the Entra admin center. Admins can ask natural language questions to troubleshoot sign-ins, review access, monitor tenant health, and analyze role assignments – without writing queries or switching tools. With expanded coverage and improved performance, Copilot helps teams move faster, close gaps, and stay ahead of threats.
Learn more: Copilot in Microsoft Entra announcement

 

Close gaps quickly with the Conditional Access Optimization Agent – now generally available

The Conditional Access Optimization Agent in Microsoft Entra brings AI-powered automation to identity workflows. The agent runs autonomously to detect gaps, overlaps, and outdated policy assignments – then recommends precise, one-click remediations to close them fast.

Key benefits include:

  • Autonomous protection: Automatically identifies users and apps not covered by policies
  • Explainable decisions: Plain-language summaries and visual activity maps
  • Custom adaptability: Learns from natural-language feedback and supports business rules
  • Full auditability: All actions logged for compliance and transparency

As one security leader put it:

“The Conditional Access Optimization Agent is like having a security analyst on call 24/7. It proactively identifies gaps in our Conditional Access policies and ensures every user is protected from day one… It’s a secure path to innovation that every chief information security officer can trust.”
Julian Rasmussen, Senior Consultant and Partner, Point Taken, Microsoft MVP
Learn more: Conditional Access Optimization Agent in Microsoft Entra GA announcement

 

Investigate phishing alerts faster with the new Phishing Triage Agent in Microsoft Defender

The Phishing Triage Agent in Microsoft Defender is now in public preview, bringing autonomous, AI-powered threat detection to your SOC workflows. Powered by large language models, the agent performs deep semantic analysis of emails, URLs, and files to determine whether a submission is a phishing threat or a false alarm – without relying on static rules.

It learns from analyst feedback, adapts to your organization’s patterns, and provides clear, natural language explanations for every verdict. A visual decision map shows exactly how the agent reached its conclusion, making the process fully transparent and reviewable.

Learn more: Announcing public preview Phishing Triage Agent in Microsoft Defender

 

Build organization-specific briefings in minutes with the Threat Intelligence Briefing Agent – now in public preview

The Threat Intelligence Briefing Agent has entered public preview in the Security Copilot standalone experience, transforming how security teams stay ahead of emerging threats. With this powerful agent, creating highly relevant, organization-specific threat intelligence briefings now takes minutes rather than hours or days, empowering teams to act with speed and confidence. Through real-time dynamic reasoning, the agent surfaces the most relevant threat intelligence based on attributes such as the organization’s industry, geographic location, and unique attack surface to deliver critical context and invaluable situational awareness.

Learn more: aka.ms/ti-briefing-agent

 

Streamline operations with workspace-level management

Security Copilot now supports workspaces, giving organizations a flexible way to segment environments by team, region, or business unit. With workspaces now in public preview, admins can align access, data boundaries, and SCU capacity with operational and compliance needs. Each workspace supports role-based access control, localized prompt history, and independent capacity planning – making it easier to manage complex, distributed security and IT operations.

As part of this model, workspace-level plugin management is now generally available, allowing admins to configure plugin settings at the workspace or organization level. This eliminates the need for per-user setup and improves efficiency across large environments.

Learn more: New tools for Security Copilot management and capacity planning

 

Plan smarter with the new Security Copilot Capacity Calculator

The Security Copilot Capacity Calculator is now available in the standalone experience (Azure account required), helping teams estimate how many SCUs they may need.
Security Copilot supports:

  • Provisioned SCUs for predictable workloads
  • Overage SCUs to scale with variable workloads

Teams can estimate initial capacity using the capacity calculator, monitor usage in the in-product usage dashboard, and adjust their SCU allocation as needed. Learn more about Security Copilot pricing here.

Learn more: New tools for Security Copilot management and capacity planning

 

Automate Entra workflows with embedded NL2API skill

Security Copilot can now reason over Microsoft Graph APIs to answer complex, multi-stage questions across Entra resources. This embedded experience in Entra, powered by the NL2API skill, is now generally available – bringing advanced automation and intelligence directly into your Entra workflows.

 

Get faster suggestions with dynamic suggested prompts for Entra skills

Dynamic suggested prompts are now generally available for Entra skills, offering faster and more deterministic follow-up suggestions using direct skill invocation – bypassing the orchestrator for improved performance.

 

Meet compliance needs with FedRAMP High authorization for Security Copilot

Security Copilot is now included within the Federal Risk and Authorization Management Program (FedRAMP) High Authorization for Azure Commercial. This Provisional Authorization to Operate (P-ATO) within the existing FedRAMP High Azure Commercial environment was approved by the FedRAMP Joint Authorization Board (JAB). This milestone marks a significant step forward in our mission to bring Microsoft Security Copilot’s cutting-edge AI-powered security capabilities to our Government Community Cloud (GCC) customers. Stay tuned for updates on when Security Copilot will be fully available for GCC customers.

 

Expand global reach with Korean language and Swiss data residency

Security Copilot now supports Korean in both standalone and embedded experiences. For a full list of supported languages, visit Supported languages in Microsoft Security Copilot.

Additionally, customers in Switzerland can now benefit from Swiss region data residency, ensuring Security Copilot data is stored within Swiss boundaries to meet local compliance requirements.

Learn more: Availability and recovery of Security Copilot

 

Improve accuracy and scale with GPT-4.1 and large output support

We’ve upgraded Security Copilot to support GPT-4.1 across all experiences at the evaluation level, offering larger context windows, improved interactions, and up to 50% accuracy improvements in some scenarios.

Also now generally available is large output support, which removes the previous 2MB limit for data used in LLMs – giving teams more flexibility when working with large datasets.

 

Audit agent changes with Purview UAL integration

Agent administration auditing is now generally available in Microsoft Purview Unified Audit Log, allowing teams to trace agent creation, updates, and deletions with detailed metadata for improved visibility and compliance.

Learn more: Access the Security Copilot audit log

 

Stay tuned and explore more!

Security Copilot is transforming how security and IT teams operate – bringing AI-powered insights, automation, and decision support into everyday workflows. With new capabilities landing every month, the pace of innovation is accelerating.

We’ll be back in September with more updates. Until then, explore these resources to get hands-on, deepen your understanding, and see what’s possible:

 

Don’t miss Microsoft Secure digital event on September 30th – we’ll be announcing exciting new capabilities for Security Copilot and sharing what’s next in AI-powered security. Register now to be the first to hear the announcements and see what’s coming.


New tools for Security Copilot management and capacity planning



Last year, we launched Microsoft Security Copilot with a bold goal: to help organizations protect at the speed of AI. Since then, Security Copilot has been transforming how IT and security operations teams respond to threats and manage their environments. In fact, research from live operations indicates that Security Copilot users have seen measurable impact, including a 30% reduction in mean time to resolution for SOC teams and a 54% decrease in time to resolve a device policy conflict for IT teams.

As adoption has grown, so has the complexity of customer needs. In many organizations, different teams, business units, and regions require distinct approaches to data access, capacity planning, and tooling. At the same time, customers want the flexibility to start small, test scenarios, and scale usage over time, without committing to long-term contracts. 

To meet these needs, Security Copilot is offered as a consumptive solution, allowing organizations to provision Security Compute Units (SCUs) as needed. This flexible model lowers the barrier to entry and encourages experimentation. And now, with workspaces and the Security Copilot capacity calculator to help manage capacity, customers can adopt Security Copilot with even more confidence and control. 

Workspaces 

Security operations don’t happen in a vacuum – different teams, business units, and regions have unique operational needs. This is why we’re excited to launch workspaces in public preview – a major enhancement to how teams can manage access, resources, and collaboration within Security Copilot. Workspaces provide a flexible way to segment environments, making it easier to align access and capacity with organizational needs, legal structures, or compliance requirements. 

 

Let’s take the example of a multinational organization with separate security and IT teams in North America, Europe, and Asia. With workspaces, this company can realize benefits in: 

  • Data boundaries: Each regional team operates within its own dedicated workspace, keeping data like prompt history local and accessible only to that team. This isolation ensures information stays relevant to the team and supports compliance with regional data residency requirements and internal policies. 
  • Role-based access control: Only authorized users specified by the admin have access to each workspace, and workspace management is restricted to users with administrator roles. 
  • Capacity planning: SCUs can be provisioned per workspace, giving admins the ability to right-size capacity based on each team’s workload. APAC can scale up during a surge while the US conserves usage during a quiet period. 

Note: Multi-workspace support is now available in Security Copilot, enabling users to manage prompt sessions across multiple workspaces. However, agents that run autonomously are currently limited to a single workspace, and embedded experiences continue to route traffic exclusively through the tenant-level default workspace. Refer to the documentation for full details.

Security Copilot capacity calculator 

One of the most common questions we hear from customers is: “How many SCUs do I need to get started with Security Copilot?” Given the dynamic nature of AI-powered security workflows, forecasting compute needs can be a challenge, especially for teams just starting their journey. To make planning easier, we’re excited to announce the launch of the Security Copilot capacity calculator, now available in the Security Copilot standalone experience (Azure account required). 

This tool offers a practical starting point to help estimate how many SCUs your organization may require. With a few clicks, customers can get an estimate of SCU usage based on inputs such as the number of users in an embedded Security Copilot experience. While actual consumption may vary with real-time prompt activity, the calculator serves as a helpful guide for initial provisioning and budget planning.

Once you’ve estimated your baseline needs, you can get started in Security Copilot or in the Azure portal. Security Copilot offers two flexible models to support both predictable workloads and unplanned spikes in usage: 

  • Provisioned SCUs: Ideal for predictable, ongoing operations. A minimum of one provisioned SCU is required. 
  • Overage SCUs: Designed for variable demand. Overage SCUs allow usage to scale seamlessly, and customers only pay for what they use, up to their chosen optional overage limit. 
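The provisioned-plus-overage model boils down to simple arithmetic. The sketch below illustrates it in Python; the hourly rate is a placeholder assumption, not official pricing – check the current Azure pricing page for real figures.

```python
# Hypothetical monthly-cost estimate for a provisioned + overage SCU mix.
# PRICE_PER_SCU_HOUR is an assumed placeholder, not official pricing.
PRICE_PER_SCU_HOUR = 4.00
HOURS_PER_MONTH = 730  # average hours in a month

def estimate_monthly_cost(provisioned_scus: int, overage_scu_hours: float) -> float:
    """Provisioned SCUs bill for every hour of the month; overage bills only for use."""
    provisioned = provisioned_scus * HOURS_PER_MONTH * PRICE_PER_SCU_HOUR
    overage = overage_scu_hours * PRICE_PER_SCU_HOUR
    return provisioned + overage

# 2 provisioned SCUs running all month, plus 50 SCU-hours of overage:
print(estimate_monthly_cost(2, 50))  # 2*730*4 + 50*4 = 6040.0
```

The takeaway: provisioned capacity is a fixed baseline cost, while overage only adds cost in the hours it is actually consumed, which is why the blend suits spiky workloads.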

With the capacity calculator, organizations can confidently begin their Security Copilot journey and better manage usage to align with their business needs. After getting started, teams can monitor consumption through the in-product usage dashboard and adjust capacity as demand fluctuates. Learn more about Security Copilot pricing here. 

Get Started with Security Copilot today 

Together, workspaces and the capacity calculator provide organizations with deeper insight, flexibility, and control over their Security Copilot usage. These features address the real-world challenges of managing diverse teams, complex environments, and evolving workloads. Whether you’re just starting your Security Copilot journey or looking to optimize your existing usage, these tools help you right-size capacity, maintain compliance, and deliver actionable AI assistance for your security and IT teams. 

Discover Security Copilot use cases, best practices, and customer success stories in the Security Copilot adoption hub. Learn more about our most recent Security Copilot innovations for IT teams here. If you have questions or need support, don’t hesitate to contact us or reach out to your account manager. 


Smarter Prompts for Smarter Investigations: Dynamic Prompt Suggestions in Security Copilot



When a security analyst turns to an AI system for help—whether to hunt threats, investigate alerts, or triage incidents—the first step is usually a natural language prompt. But if that prompt is too vague, too general, or not aligned with the system’s capabilities, the response won’t be helpful. In high-stakes environments like cybersecurity, that’s not just a missed opportunity, it’s a risk.

That’s exactly the problem we tackled in our recent paper, Dynamic Context-Aware Prompt Recommendations for Domain-Specific Applications, now published and deployed as a new skill in Security Copilot.

Why Prompting Is a Bigger Problem in Security Than It Seems

LLMs have made impressive progress in general-purpose settings—helping users write emails, summarize documents, or answer trivia. These systems often include smart prompt recommendations based on the flow of conversation. But when you shift into domain-specific systems like Microsoft Security Copilot, the game changes.

Security analysts don’t ask open-ended questions. They ask task-specific ones:

  • “List devices that ran a malicious file in the last 24 hours.”
  • “Correlate failed login attempts across services.”
  • “Visualize outbound traffic from compromised machines.”

These questions map directly to skills—domain-specific functions that query data, connect APIs, or launch workflows. And that means prompt recommendations need to be tightly aligned with the available skills, underlying datasets, and current investigation context. General-purpose prompt systems don’t know how to do that.

What Makes Domain-Specific Prompting Hard

Designing prompt recommendations for systems like Security Copilot comes with unique constraints:

  1. Constrained Skill Set: The AI can only take actions it’s configured to support. Prompts must align with those skills—no hallucinations allowed.
  2. Evolving Context: A single investigation might involve multiple rounds of prompts, results, follow-ups, and pivots. Prompt suggestions must adapt dynamically.
  3. Deep Domain Knowledge: It’s not enough to suggest “Check network logs.” A useful prompt needs to reflect how real analysts work—across Defender, Sentinel, and more.
  4. Scalability: As new skills are added, prompt systems must scale without requiring constant manual curation or rewriting.

Our Approach: Dynamic, Context-Aware, and Skill-Constrained

 

We introduce a dynamic prompt recommendation system for Security Copilot. The key innovations include:

  • Contextual understanding of the session: We track the user’s investigation path and surface prompts that are relevant to what they’re doing now, not just generic starters.
  • Skill-awareness: The system knows what internal capabilities exist (e.g., “list devices,” “query login events”) and only recommends prompts that can be executed via those skills.
  • Domain knowledge injection: By encoding metadata about products, datasets, and typical workflows (e.g., MITRE attack stages), the system produces prompts that make sense in security analyst workflows.
  • Scalable prompt generation: Rather than relying on hardcoded lists, our system dynamically generates and ranks prompt suggestions.
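The skill-awareness idea above can be sketched in a few lines of Python: candidate suggestions are filtered so that only prompts backed by an actual skill survive. The skill and prompt names here are invented for illustration and are not the system's real internals.

```python
# Minimal sketch of skill-constrained prompt filtering: only suggest
# prompts the system can actually execute. Skill names are invented.
AVAILABLE_SKILLS = {"list_devices", "query_login_events"}

CANDIDATES = [
    {"prompt": "List devices that ran a malicious file", "skill": "list_devices"},
    {"prompt": "Correlate failed login attempts", "skill": "query_login_events"},
    {"prompt": "Visualize outbound traffic", "skill": "plot_network_flows"},
]

def executable_suggestions(candidates, skills):
    """Keep only suggestions whose backing skill exists – no hallucinated actions."""
    return [c["prompt"] for c in candidates if c["skill"] in skills]

print(executable_suggestions(CANDIDATES, AVAILABLE_SKILLS))
```

In the real system the ranking is dynamic and context-aware rather than a static filter, but the invariant is the same: a suggestion is only shown if it maps to an executable skill.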

What It Looks Like in Action

The dynamic prompt suggestion system is now live in Microsoft Entra, available in both Embedded and Immersive experiences. When a user enters a natural language prompt, the system automatically suggests several context-aware follow-up prompts, based on the user’s prior interactions and the system’s understanding of the current task.

 

These suggestions are generated in real time—users can simply click on a suggestion, and it’s executed immediately, allowing for quick and seamless follow-up queries without needing to rephrase or retype.

Let’s walk through two examples:

Embedded Experience

We begin with the prompt: “How does Microsoft determine Risky Users?”

 

The system returns the response and generates 3 follow-up suggestions, such as: “List dismissed risky detections.”

We click on that suggestion, which executes the query and shows the results.

New suggestions continue to appear after each prompt execution, making it easy to explore related insights.

Immersive Experience

We start with a prompt: “Who am I?”

 

Among the 5 suggested prompts, we select: “List the groups user nase74@woodgrove.ms is a member of.”

The user clicks, the query runs, and more follow-up suggestions appear, enabling a natural, guided flow throughout the session.

 

Why This Matters for the Future of Security AI

Prompting isn’t just an interface detail—it’s the entry point to intelligence. And in cybersecurity, where time, accuracy, and reliability matter, we need AI systems that are not just capable, but cooperative. Our research contributes to a future where security analysts don’t have to be prompt engineers to get the most out of AI.

By making prompt recommendations dynamic, contextual, and grounded in real domain knowledge, we help close the gap between LLM potential and security reality.

 

Interested in learning more?
Check out the full paper: Dynamic Context-Aware Prompt Recommendations for Domain-Specific Applications

If you’re using or building upon this work in your own research, we’d appreciate you citing our paper:

@article{tang2025dynamic,
  title={Dynamic Context-Aware Prompt Recommendation for Domain-Specific AI Applications},
  author={Tang, Xinye and Zhai, Haijun and Belwal, Chaitanya and Thayanithi, Vineeth and Baumann, Philip and Roy, Yogesh K},
  journal={arXiv preprint arXiv:2506.20815},
  year={2025}
}

 


Automating Phishing Email Triage with Microsoft Security Copilot



This blog details automating phishing email triage using Azure Logic Apps, Azure Function Apps, and Microsoft Security Copilot. Deployable in under 10 minutes, this solution primarily analyzes email intent without relying on traditional indicators of compromise, accurately classifying benign/junk, suspicious, and phishing emails. Benefits include a reduced manual workload, improved threat detection, and optional seamless integration with Microsoft Sentinel – enabling analysts to see Security Copilot analysis within the incident itself.

Designed for flexibility and control, this Logic App is a customizable solution that can be self-deployed from GitHub. It helps automate phishing response at scale without requiring deep coding expertise, making it ideal for teams that prefer a more configurable approach and want to tailor workflows to their environment. The solution streamlines response and significantly reduces manual effort.

Access the full solution on the Security Copilot GitHub:
GitHub – UserReportedPhishing Solution.

For teams looking for a more sophisticated, fully integrated experience, the Security Copilot Phishing Triage Agent represents the next generation of phishing response. Natively embedded in Microsoft Defender, the agent autonomously triages phishing incidents with minimal setup. It uses advanced LLM-based reasoning to resolve false alarms, enabling analysts to stay focused on real threats. The agent offers step-by-step decision transparency and continuously learns from user feedback. Read the official announcement here.

Introduction: Phishing Challenges Continue to Evolve

Phishing continues to evolve in both scale and sophistication, but a growing challenge for defenders isn’t just stopping phishing, it’s scaling response. Thanks to tools like Outlook’s “Report Phishing” button and increased user awareness, organizations are now flooded with user-reported emails, many of which are ambiguous or benign. This has created a paradox: better detection by users has overwhelmed SOC teams, turning email triage into a manual, rotational task dreaded for its repetitiveness and time cost, often taking over 25 minutes per email to review.

Our solution addresses that problem by automating the triage of user-reported phishing through AI-driven intent analysis. It’s not built to replace your secure email gateways or Microsoft Defender for Office 365; those tools have already done their job. This system assumes the email:

  • Slipped past existing filters,
  • Was suspicious enough for a user to escalate,
  • Lacks typical IOCs like malicious domains or attachments.

As a former attacker, I spent years crafting high-quality phishing emails to penetrate the defenses of major banks. Effective phishing doesn’t rely on obvious IOCs like malicious domains, URLs, or attachments… the infrastructure often appears clean. The danger lies in the intent. This is where Security Copilot’s LLM-based reasoning is critical, analyzing structure, context, tone, and seasonal pretexts to determine whether an email is phishing, suspicious, spam, or legitimate.

What makes this novel is that it’s the first solution built specifically for the “last mile” of phishing defense, where human suspicion meets automation, and intent is the only signal left to analyze. It transforms noisy inboxes into structured intelligence and empowers analysts to focus only on what truly matters.

Solution Overview: How the Logic App Solution Works (and Why It’s Different)

Core Components:

  • Azure Logic Apps: Orchestrates the entire workflow, from ingestion to analysis, and 100% customizable.
  • Azure Function Apps: Parses and normalizes email data for efficient AI consumption.
  • Microsoft Security Copilot: Performs sophisticated AI-based phishing analysis by understanding email intent and tactics, rather than relying exclusively on predefined malicious indicators.

Key Benefits:

  • Rapid Analysis: Processes phishing alerts and, in minutes, delivers comprehensive reports that empower analysts to make faster, more informed triage decisions – compared to manual reviews that can take up to 30 minutes. And, unlike analysts, Security Copilot requires zero sleep! 
  • AI-driven Insights: LLM-based analysis is leveraged to generate clear explanations of classifications by assessing behavioral and contextual signals like urgency, seasonal threats, Business Email Compromise (BEC), subtle language clues, and otherwise sophisticated techniques. Most importantly, it identifies benign emails, which are often the bulk of reported emails.
  • Detailed, Actionable Reports: Generates clear, human-readable HTML reports summarizing threats and recommendations for analyst review.
  • Robust Attachment Parsing: Automatically examines attachments like PDFs and Excel documents for malicious content or contextual inconsistencies.
  • Integrated with Microsoft Sentinel: Optional integration with Sentinel ensures central incident tracking and comprehensive threat management. Analysis is attached directly to the incident, saving analysts more time.
  • Customization: Add, move, or replace any element of the Logic App or prompt to fit your specific workflows.

Deployment Guide: Quick, Secure, and Reliable Setup

The solution provides Azure Resource Manager (ARM) templates for rapid deployment:

Prerequisites:

  • Azure Subscription with Contributor access to a resource group.
  • Microsoft Security Copilot enabled.
  • Dedicated Office 365 shared mailbox (e.g., phishing@yourdomain.com) with Mailbox.Read.Shared permissions.
  • (Optional) Microsoft Sentinel workspace.

Refer to the up-to-date deployment instructions on the Security Copilot GitHub page.

Technical Architecture & Workflow:

The automated workflow operates as follows:

Email Ingestion:

  • Monitors the shared mailbox via Office 365 connector.
  • Triggers on new email arrivals every 3 minutes.
  • Assumes that the reported email has arrived as an attachment to a “carrier” email.

Determine if the Email Came from Defender/Sentinel:

If the email came from Defender, it will have a prepended subject of “Phishing”; if not, the workflow takes the “False” branch. Change as necessary.

Initial Email Processing:

  • Exports raw email content from the shared mailbox.
  • Determines if .msg or .eml attachments are in binary format and converts if necessary.

Email Parsing via Azure Function App:

  • Extracts data from email content and attachments (URLs, sender info, email body, etc.) and returns a JSON structure.
  • Prepares clean JSON data for AI analysis.
  • This step is required to “prep” the data for LLM analysis due to token limits.
  • Click on the “Parse Email” block to see the output of the Function App for any troubleshooting. You’ll also notice a number of JSON keys that are not used but provided for flexibility.
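To make the parsing step concrete, here is a minimal Python sketch of what such an Azure Function might do using the standard-library email module. The field names in the returned structure are illustrative, not the solution's actual JSON contract.

```python
import email
import json
import re
from email import policy

def parse_email(raw_eml: str) -> dict:
    """Parse a raw .eml string into a flat, JSON-ready structure for LLM analysis."""
    msg = email.message_from_string(raw_eml, policy=policy.default)
    body = msg.get_body(preferencelist=("plain", "html"))
    body_text = body.get_content() if body else ""
    return {
        "sender": msg.get("From", ""),
        "reply_to": msg.get("Reply-To", ""),
        "subject": msg.get("Subject", ""),
        # Extract URLs so the LLM sees them explicitly, not buried in prose.
        "urls": re.findall(r"https?://[^\s\"'<>]+", body_text),
        "body": body_text,
        "attachment_names": [p.get_filename() for p in msg.iter_attachments()],
    }

raw = """\
From: attacker@example.com
Reply-To: payments@example.net
Subject: Urgent: invoice overdue
Content-Type: text/plain

Please pay now at https://example.com/invoice before 5pm.
"""
print(json.dumps(parse_email(raw), indent=2))
```

Pre-flattening the message like this keeps the payload sent to the LLM small and predictable, which matters given token limits.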

Security Copilot Advanced AI Reasoning:

  • Analyzes email content using a comprehensive prompt that evaluates behavioral and seasonal patterns, BEC indicators, attachment context, and social engineering signals.
  • Scores cumulative risk based on structured heuristics without relying solely on known malicious indicators.
  • Returns validated JSON output (some customers are parsing this JSON and performing other action).
  • This is where you would customize the prompt if the Logic App needs to be tuned for your own organizational scenarios.

JSON Normalization & Error Handling:

  • A “normalization” Azure Function ensures output matches the expected JSON schema.
  • LLMs sometimes stray from a strict output structure; this step aims to solve that problem.
  • If you add or remove anything from the Parse Email code that alters the structure of the JSON, this and the next block will need to be updated to match your new structure.
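A normalization function of this kind can be sketched in a few lines of Python. The schema keys below are illustrative placeholders, not the solution's actual contract.

```python
import json

# Expected output schema with defaults; field names are illustrative.
SCHEMA = {
    "classification": "unknown",  # phishing | suspicious | spam | legitimate
    "risk_score": 0,
    "explanation": "",
    "indicators": [],
}

def normalize(llm_output: str) -> dict:
    """Coerce possibly messy LLM JSON into the expected schema."""
    text = llm_output.strip()
    # LLMs sometimes wrap JSON in markdown fences; strip them first.
    if text.startswith("```"):
        text = text[text.find("{"):text.rfind("}") + 1]
    try:
        data = json.loads(text)
    except json.JSONDecodeError:
        data = {}
    # Keep only known keys, fill in any the model omitted,
    # and fall back to the default on a type mismatch.
    return {
        key: data[key] if isinstance(data.get(key), type(default)) else default
        for key, default in SCHEMA.items()
    }

messy = '```json\n{"classification": "phishing", "risk_score": 87}\n```'
print(normalize(messy))
```

Downstream blocks can then rely on every key being present with the right type, even when the model's raw output drifts.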

Detailed HTML Reporting:

  • Generates a detailed HTML report summarizing AI findings, indicators, and recommended actions.
  • Reports are emailed directly to SOC team distribution lists or ticketing systems.

Optional Sentinel Integration:

Adds the reasoning and output from Security Copilot directly to the incident comments. This is the ideal location for output, since the analyst is already in the security.microsoft.com portal. The workflow waits up to 15 minutes for logs to appear in situations where the user reports before an incident is created.

The solution works well out of the box but may require some tuning – give it a test. Here are some examples of Security Copilot’s reasoning.

Benign email detection: 

 

Example of phishing email detection:

 

 

More sophisticated phishing with subtle clues:

 

 

 

Enhanced Technical Details & Clarifications

Attachment Processing:

  • When multiple email attachments are detected, the Logic App processes each binary-format email sequentially.
  • If PDF or Excel attachments are detected, they are parsed and evaluated for content and intent.

Security Copilot Reliability:

  • The Security Copilot Logic App API call uses an extensive retry policy (10 retries at 10-minute intervals) to ensure reliable AI analysis despite intermittent service latency.
  • If you run out of SCUs in an hour, it will pause until they are refreshed and continue.

Sentinel Integration Reliability:

  • Acknowledges inherent Sentinel logging delays (up to 15 minutes).
  • Implements retry logic and explicit manual alerting for unmatched incidents, if the analysis runs before the incident is created.

Security Best Practices:

  • Compare the Function & Logic App to your company security policies to ensure compliance.
  • Credentials, API keys, and sensitive details utilize Azure Managed Identities or secure API connections. No secrets are stored in plaintext.
  • Azure Function Apps perform only safe parsing operations; attachments and content are never executed or opened insecurely.

Be sure to check out how the Microsoft Defender for Office team is improving detection capabilities as well: Microsoft Defender for Office 365’s Language AI for Phish: Enhancing Email Security | Microsoft Community Hub.


Using parameterized functions with KQL-based custom plugins in Microsoft Security Copilot


In this blog, I will walk through how you can build functions based on a Microsoft Sentinel Log Analytics workspace for use in custom KQL-based plugins for Security Copilot. The same approach can be used for Azure Data Explorer and Defender XDR, so long as you follow the specific guidance for either platform. A link to those steps is provided in the Additional Resources section at the end of this blog.

But first, it’s helpful to clarify what parameterized functions are and why they are important in the context of Security Copilot KQL-based plugins. Parameterized functions accept input details (variables) such as lookback periods or entities, allowing you to dynamically alter parts of a query without rewriting the entire logic.

Parameterized functions are important in the context of Security Copilot plugins because of:

  1. Dynamic prompt completion:
    Security Copilot plugins often accept user input (e.g., usernames, time ranges, IPs). Parameterized functions allow these inputs to be consistently injected into KQL queries without rebuilding query logic.
  2. Plugin reusability:
    By using parameters, a single function can serve multiple investigation scenarios (e.g., checking sign-ins, data access, or alerts for any user or timeframe) instead of hardcoding different versions.
  3. Maintainability and modularity:
    Parameterized functions centralize query logic, making it easier to update or enhance without modifying every instance across the plugin spec. To modify the logic, just edit the function in Log Analytics, test it, then save it, without changing the plugin or re-uploading it into Security Copilot. This also greatly reduces the YAML formatting burden: the OpenAPI specification requires the query part of the spec to be perfectly indented and tabbed, and with a function you only need to format a single line instead of several, potentially hundreds.
  4. Validation:
    Separating query logic from input parameters improves query reliability by avoiding the possibility of malformed queries. No matter what the input is, it’s treated as a value, not as part of the query logic.
  5. Plugin Spec mapping:
    OpenAPI-based Security Copilot plugins can map user-provided inputs directly to function parameters, making the interaction between user intent and query execution seamless.
Practical example

In this case, we have a 139-line KQL query that we will reduce to exactly one line in the KQL plugin. In other cases, this number could be even higher. Without using functions, this entire query would have to form part of the plugin.

Note: The rest of this blog assumes you are familiar with KQL custom plugins-how they work and how to upload them into Security Copilot.

 

CloudAppEvents
| where RawEventData.TargetDomain has_any (
    'grok.com', 'x.ai', 'mistral.ai', 'cohere.ai', 'perplexity.ai', 'huggingface.co', 'adventureai.gg', 'ai.google/discover/palm2', 'ai.meta.com/llama', 'ai2006.io', 'aibuddy.chat', 'aidungeon.io', 'aigcdeep.com', 'ai-ghostwriter.com', 'aiisajoke.com', 'ailessonplan.com', 'aipoemgenerator.org', 'aissistify.com', 'ai-writer.com', 'aiwritingpal.com', 'akeeva.co', 'aleph-alpha.com/luminous', 'alphacode.deepmind.com', 'analogenie.com', 'anthropic.com/index/claude-2', 'anthropic.com/index/introducing-claude', 'anyword.com', 'app.getmerlin.in', 'app.inferkit.com', 'app.longshot.ai', 'app.neuro-flash.com', 'applaime.com', 'articlefiesta.com', 'articleforge.com', 'askbrian.ai', 'aws.amazon.com/bedrock/titan', 'azure.microsoft.com/en-us/products/ai-services/openai-service', 'bard.google.com', 'beacons.ai/linea_builds', 'bearly.ai', 'beatoven.ai', 'beautiful.ai', 'beewriter.com', 'bettersynonyms.com', 'blenderbot.ai', 'bomml.ai', 'bots.miku.gg', 'browsegpt.ai', 'bulkgpt.ai', 'buster.ai', 'censusgpt.com', 'chai-research.com', 'character.ai', 'charley.ai', 'charshift.com', 'chat.lmsys.org', 'chat.mymap.ai', 'chatbase.co', 'chatbotgen.com', 'chatgpt.com', 'chatgptdemo.net', 'chatgptduo.com', 'chatgptspanish.org', 'chatpdf.com', 'chattab.app', 'claid.ai', 'claralabs.com', 'claude.ai/login', 'clipdrop.co/stable-diffusion', 'cmdj.app', 'codesnippets.ai', 'cohere.com', 'cohesive.so', 'compose.ai', 'contentbot.ai', 'contentvillain.com', 'copy.ai', 'copymatic.ai', 'copymonkey.ai', 'copysmith.ai', 'copyter.com', 'coursebox.ai', 'coverler.com', 'craftly.ai', 'crammer.app', 'creaitor.ai', 'dante-ai.com', 'databricks.com', 'deepai.org', 'deep-image.ai', 'deepreview.eu', 'descrii.tech', 'designs.ai', 'docgpt.ai', 'dreamily.ai', 'editgpt.app', 'edwardbot.com', 'eilla.ai', 'elai.io', 'elephas.app', 'eleuther.ai', 'essayailab.com', 'essay-builder.ai', 'essaygrader.ai', 'essaypal.ai', 'falconllm.tii.ae', 'finechat.ai', 'finito.ai',
    'fireflies.ai', 'firefly.adobe.com', 'firetexts.co', 'flowgpt.com', 'flowrite.com', 'forethought.ai', 'formwise.ai', 'frase.io', 'freedomgpt.com', 'gajix.com', 'gemini.google.com', 'genei.io', 'generatorxyz.com', 'getchunky.io', 'getgptapi.com', 'getliner.com', 'getsmartgpt.com', 'getvoila.ai', 'gista.co', 'github.com/features/copilot', 'giti.ai', 'gizzmo.ai', 'glasp.co', 'gliglish.com', 'godinabox.co', 'gozen.io', 'gpt.h2o.ai', 'gpt3demo.com', 'gpt4all.io', 'gpt-4chan+)', 'gpt6.ai', 'gptassistant.app', 'gptfy.co', 'gptgame.app', 'gptgo.ai', 'gptkit.ai', 'gpt-persona.com', 'gpt-ppt.neftup.app', 'gptzero.me', 'grammarly.com', 'hal9.com', 'headlime.com', 'heimdallapp.org', 'helperai.info', 'heygen.com', 'heygpt.chat', 'hippocraticai.com', 'huggingface.co/spaces/tiiuae/falcon-180b-demo', 'humanpal.io', 'hypotenuse.ai', 'ichatwithgpt.com', 'ideasai.com', 'ingestai.io', 'inkforall.com', 'inputai.com/chat/gpt-4', 'instantanswers.xyz', 'instatext.io', 'iris.ai', 'jasper.ai', 'jigso.io', 'kafkai.com', 'kibo.vercel.app', 'kloud.chat', 'koala.sh', 'krater.ai', 'lamini.ai', 'langchain.com', 'laragpt.com', 'learn.xyz', 'learnitive.com', 'learnt.ai', 'letsenhance.io', 'letsrevive.app', 'lexalytics.com', 'lgresearch.ai', 'linke.ai', 'localbot.ai', 'luis.ai', 'lumen5.com', 'machinetranslation.com', 'magicstudio.com', 'magisto.com', 'mailshake.com/ai-email-writer', 'markcopy.ai', 'meetmaya.world', 'merlin.foyer.work', 'mieux.ai', 'mightygpt.com', 'mosaicml.com', 'murf.ai', 'myaiteam.com', 'mygptwizard.com', 'narakeet.com', 'nat.dev', 'nbox.ai', 'netus.ai', 'neural.love', 'neuraltext.com', 'newswriter.ai', 'nextbrain.ai', 'noluai.com', 'notion.so', 'novelai.net', 'numind.ai', 'ocoya.com', 'ollama.ai', 'openai.com', 'ora.ai', 'otterwriter.com', 'outwrite.com', 'pagelines.com', 'parallelgpt.ai', 'peppercontent.io', 'perplexity.ai', 'personal.ai', 'phind.com', 'phrasee.co', 'play.ht', 'poe.com', 'predis.ai', 'premai.io', 'preppally.com', 'presentationgpt.com', 'privatellm.app',
    'projectdecember.net', 'promptclub.ai', 'promptfolder.com', 'promptitude.io', 'qopywriter.ai', 'quickchat.ai/emerson', 'quillbot.com', 'rawshorts.com', 'read.ai', 'rebecc.ai', 'refraction.dev', 'regem.in/ai-writer', 'regie.ai', 'regisai.com', 'relevanceai.com', 'replika.com', 'replit.com', 'resemble.ai', 'resumerevival.xyz', 'riku.ai', 'rizzai.com', 'roamaround.app', 'rovioai.com', 'rytr.me', 'saga.so', 'sapling.ai', 'scribbyo.com', 'seowriting.ai', 'shakespearetoolbar.com', 'shortlyai.com', 'simpleshow.com', 'sitegpt.ai', 'smartwriter.ai', 'sonantic.io', 'soofy.io', 'soundful.com', 'speechify.com', 'splice.com', 'stability.ai', 'stableaudio.com', 'starryai.com', 'stealthgpt.ai', 'steve.ai', 'stork.ai', 'storyd.ai', 'storyscapeai.app', 'storytailor.ai', 'streamlit.io/generative-ai', 'summari.com', 'synesthesia.io', 'tabnine.com', 'talkai.info', 'talkpal.ai', 'talktowalle.com', 'team-gpt.com', 'tethered.dev', 'texta.ai', 'textcortex.com', 'textsynth.com', 'thirdai.com/pocketllm', 'threadcreator.com', 'thundercontent.com', 'tldrthis.com', 'tome.app', 'toolsaday.com/writing/text-genie', 'to-teach.ai', 'tutorai.me', 'tweetyai.com', 'twoslash.ai', 'typeright.com', 'typli.ai', 'uminal.com', 'unbounce.com/product/smart-copy', 'uniglobalcareers.com/cv-generator', 'usechat.ai', 'usemano.com', 'videomuse.app', 'vidext.app', 'virtualghostwriter.com', 'voicemod.net', 'warmer.ai', 'webllm.mlc.ai', 'wellsaidlabs.com', 'wepik.com', 'we-spots.com', 'wordplay.ai', 'wordtune.com', 'workflos.ai', 'woxo.tech', 'wpaibot.com', 'writecream.com', 'writefull.com', 'writegpt.ai', 'writeholo.com', 'writeme.ai', 'writer.com', 'writersbrew.app', 'writerx.co', 'writesonic.com', 'writesparkle.ai', 'writier.io', 'yarnit.app', 'zevbot.com', 'zomani.ai' )
| extend sit = parse_json(tostring(RawEventData.SensitiveInfoTypeData))
| mv-expand sit
| summarize Event_Count = count() by tostring(sit.SensitiveInfoTypeName), CountryCode, City,
    UserId = tostring(RawEventData.UserId), TargetDomain = tostring(RawEventData.TargetDomain),
    ActionType = tostring(RawEventData.ActionType), IPAddress = tostring(RawEventData.IPAddress),
    DeviceType = tostring(RawEventData.DeviceType), FileName = tostring(RawEventData.FileName),
    TimeBin = bin(TimeGenerated, 1h)
| extend SensitivityScore = case(
    tostring(sit_SensitiveInfoTypeName) in~ ("U.S. Social Security Number (SSN)", "Credit Card Number", "EU Tax Identification Number (TIN)", "Amazon S3 Client Secret Access Key", "All Credential Types"), 90,
    tostring(sit_SensitiveInfoTypeName) in~ ("All Full names"), 40,
    tostring(sit_SensitiveInfoTypeName) in~ ("Project Obsidian", "Phone Number"), 70,
    tostring(sit_SensitiveInfoTypeName) in~ ("IP"), 50,
    10)
| join kind=leftouter (
    IdentityInfo
    | where TimeGenerated > ago(lookback)
    | extend AccountUpn = tolower(AccountUPN)
    ) on $left.UserId == $right.AccountUpn
| join kind=leftouter (
    BehaviorAnalytics
    | where TimeGenerated > ago(lookback)
    | extend AccountUpn = tolower(UserPrincipalName)
    ) on $left.UserId == $right.AccountUpn
//| where BlastRadius == "High"
//| where RiskLevel == "High"
| where Department == User_Dept
| summarize arg_max(TimeGenerated, *) by sit_SensitiveInfoTypeName, CountryCode, City, UserId, TargetDomain, ActionType, IPAddress, DeviceType, FileName, TimeBin, Department, SensitivityScore
| summarize sum(Event_Count) by sit_SensitiveInfoTypeName, CountryCode, City, UserId, Department, TargetDomain, ActionType, IPAddress, DeviceType, FileName, TimeBin, BlastRadius, RiskLevel, SourceDevice, SourceIPAddress, SensitivityScore

With parameterized functions, follow these steps to simplify the plugin that will be built based on the query above:

  1. Define the variables/parameters upfront in the query (before creating the parameters in the UI). This puts the query into a temporarily unusable state, because the undefined parameters cause syntax errors. However, since the plan is to run the query as a function, this is fine.

Fig. 1: Image showing partial query with the parameters to be defined, lookback and User_Dept, highlighted in red

  2. Create the parameters in the Log Analytics UI

Fig. 2: Screenshot showing the function menu in the Log Analytics UI

Give the function a name and define the parameters exactly as they appear in the query in step 1 above. In this example, we define two parameters: lookback, to store the lookback period passed to the time filter, and User_Dept, to hold the user’s department.

Fig. 3. Function menu showing the two parameters defined in the function creation menu of Log Analytics

  3. Test the query. Note the order in which the parameters were defined in the UI: first User_Dept, then the lookback period. You can interchange them if you like, but this order determines how you invoke the function. If the User_Dept parameter was defined first, it needs to come first when executing the function; see the screenshot below. Switching them will pass the wrong value to each parameter, and consequently 0 results will be returned.

Fig. 4: Sample run of the function with the parameters specified in the correct order

Effect of switched parameters:

Fig. 5: Sample function run with the parameters switched to show the effect of this situation
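
In KQL terms, the behaviour shown in Figs. 4 and 5 comes down to positional argument binding. Assuming a hypothetical function saved with its parameters defined in the order (User_Dept, lookback):

```kusto
// Values are bound to parameters by position, not by name.
SensitiveDataByDept('Sales', 30d)   // correct: 'Sales' -> User_Dept, 30d -> lookback
// Switching the arguments binds each value to the wrong parameter,
// so the query filters on the wrong values and returns no results:
// SensitiveDataByDept(30d, 'Sales')
```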

To edit the function, follow the steps below:

Navigate to the Logs menu for your Log Analytics workspace, then select the function icon.

 

Fig. 6: Partial view of the function being edited within the Log Analytics UI

Fig. 7: Image showing how to select the code button in the function menu to edit the function code

Once satisfied with the query and function, build your spec file for the Security Copilot plugin. Note the parameter definition and usage in the sections highlighted in red below.

Fig. 8: Partial view of the YAML plugin showing the encapsulation of the 139 lines of KQL into a single one

And that’s it: from 139 unwieldy KQL lines to one very manageable line! You’re welcome 😊

Let’s now put it through its paces once uploaded into Security Copilot. We start by executing the plugin using its default settings via the direct skill invocation method. We see indeed that the prompt returns results based on the default values passed as parameters to the function:

Fig. 9: View of Security Copilot landing page showing an example of direct skill execution of the created plugin

Fig. 10: Sample output showing records of users from the Sales department

Next, we still use direct skill invocation, but this time specify our own parameters:

Fig. 11: Direct skill invocation example but with specified parameters: Department and lookback period

Fig. 12: Prompt run showing the output corresponding to the selections of the previous direct skill invocation prompt

Lastly, we test it out with a natural language prompt:

Fig. 13: Security Copilot prompt bar showing example of a natural language prompt seeking events related to users in the Human Resources department

Fig. 14: Output from the previous natural language prompt focused on users from the HR department

Tip: The function does not execute successfully if the default summarize output is used without assigning a variable. If the summarize count() command is used in your query, it produces a system-defined output column named count_. To avoid this issue, be sure to use a user-defined variable such as Event_Count, as shown in line 77 below:

Fig. 15: Highlighting the creation of a variable to store results from the summarize count() command
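
The tip above can be sketched as follows (the table name is illustrative):

```kusto
// Problematic inside a function: summarize count() emits a system-named
// output column (count_), which prevents the function from executing:
// CloudAppEvents | summarize count() by UserId
//
// Works: assign the aggregate to a user-defined variable instead.
CloudAppEvents
| summarize Event_Count = count() by UserId
```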

Conclusion

Leveraging parameterized functions within KQL-based custom plugins in Microsoft Security Copilot can significantly streamline your data querying and analysis. By encapsulating reusable logic, improving query efficiency, and ensuring maintainability, these functions provide an efficient approach to tapping into data stored across Microsoft Sentinel, Defender XDR, and Azure Data Explorer clusters. Start integrating parameterized functions into your KQL-based Security Copilot plugins today, and share your feedback with us.

Additional Resources

Using parameterized functions in Microsoft Defender XDR

Using parameterized functions with Azure Data Explorer

Functions in Azure Monitor log queries – Azure Monitor | Microsoft Learn

Kusto Query Language (KQL) plugins in Microsoft Security Copilot | Microsoft Learn

Harnessing the power of KQL Plugins for enhanced security insights with Copilot for Security | Microsoft Community Hub


Busting myths on Microsoft Security Copilot



Microsoft’s Security Copilot is an AI-powered security assistant (launched in April 2024) that integrates with Microsoft Defender, Sentinel, Intune, Entra and Purview to help analysts protect and defend at the speed and scale of AI. As a cutting-edge generative AI tool, Security Copilot has naturally sparked interest and close attention from users and experts, resulting in various articles and blogs sharing experiences, perspectives, and feedback about the product. As a Microsoft Certified Trainer and a Microsoft ‘Consultant’, I happen to both teach and implement Security Copilot for professionals and organizations respectively. Lucky me! But one thing I encounter frequently in both my roles is a list of common myths (or concerns) that people have about Security Copilot, especially given that it is a relatively new product.

Today we are going to talk about these myths (or concerns) and see how they are either complete hokum or have another aspect you may or may not know about. In other words, we will try to dot all the i’s and cross all the t’s. I’ll do this in sections, each covering one or more myths, so let’s get started.

I sincerely appreciate the efforts of all authors and publishers who have shared their insights on Security Copilot. This article is intended to address common concerns and encourage professionals to explore the product with confidence, rather than to challenge or dismiss any shared opinions.

Cost and Licensing

Myth #1: High Consumption Cost:

  • Validity: The perception of high cost is relative and often lacks full context. While the consumption-based pricing of Security Copilot may appear higher when compared to certain other tools, it delivers significantly greater value through its advanced capabilities, seamless integration with the Microsoft Security ecosystem, and ability to accelerate threat detection and response. When evaluated alongside comparable AI-driven security solutions—both Microsoft and non-Microsoft—Security Copilot stands out for its category-defining use cases and operational efficiency, helping security teams do more with less.
  • Reasoning: While cost considerations are valid, they should be viewed through the lens of operational impact rather than raw consumption. Security Copilot functions as an intelligent assistant operating around the clock—enhancing threat detection, accelerating incident response, and enabling deeper, more proactive threat hunting. Many organizations have reported significant improvements in reducing mean time to respond (MTTR), increasing automation in routine investigations such as phishing, and expanding their overall security coverage without scaling headcount. By augmenting human expertise with AI, Security Copilot empowers teams to focus on high value tasks and strengthens organizational resilience against evolving threats.

Myth #2: Unpredictable billing:

  • Validity: This is a complete myth, not only for Security Copilot but for any other Microsoft solution.
  • Reasoning: You get a dedicated usage dashboard in the Security Copilot portal and a link to the billing view in Microsoft Azure, where you can not only see incurred costs but also get a reliable forecast of future costs. Whether you are a large organization with multiple instances of Security Copilot or an SMB with limited usage, these dashboards and views will equally help you ensure you are neither under- nor overspending on Security Copilot.

Myth #3: It’s free or covered by an existing license:

  • Validity: This misconception likely arises from confusion with other Copilot offerings – and it is indeed a myth!
  • Reasoning: The overall pricing model of Security Copilot is completely different from other Microsoft Security solutions. While other solutions operate on a licensing model, Security Copilot works on a consumption-based model, meaning there are no per-user or per-device charges here! Hence, no existing license, whether Entra or Office 365 based, can give you access to Security Copilot. Also, please note that Microsoft 365 Copilot (available in Teams, Word, PowerPoint or the Azure portal) is not the same as Security Copilot.

Performance and Reliability

Myth #4: Slow responses and high latency:

  • Validity: This is completely anecdotal and definitely a myth. A variety of factors affect the response latency of Security Copilot.
  • Reasoning: You need to consider important factors such as the number of SCUs provisioned, the number of concurrent Security Copilot users, the number of plugins and/or skills being invoked, and the length and complexity of the prompt in order to understand why you may have gotten a slower response than usual. Moreover, Security Copilot can show its responses in streaming mode. This approach significantly improves perceived latency, enabling users to begin reading responses as they are generated, as in the image below. Reference: What’s new in Microsoft Security Copilot?

Source: Security Copilot Portal

Myth #5: Poor Quality or Unreliable responses:

  • Validity: All I am going to say here is ‘Your Copilot is as good as the quality of your prompts’!
  • Reasoning: AI is here to augment our intelligence, but it can only do that when it gets sufficient, clear and well-thought-out prompts. There is a reason it is called a ‘Co’-‘Pilot’: you are driving/flying/learning along with it. BTW, I prefer flying almost any time! The point is, we need to understand that the quality of AI output is heavily influenced by the tone, context and specificity of prompts. Numerous users agree that refined prompts yield better results, if not the best! I am not suggesting in-depth prompt engineering classes here, but just including the following elements when writing a prompt should give you a considerable improvement in the quality of responses. More information on effective prompting practices here: Prompting in Microsoft Security Copilot
    1. Goal – specific, security-related information that you need
    2. Context – why you need this information or how you plan to use it
    3. Expectations – format or target audience you want the response tailored to
    4. Source – known information, data sources, or plugins Security Copilot should use
  • Moreover, I also suggest leveraging the OOTB (out-of-the-box) prompts and promptbooks to understand how you should structure your prompts. Security Copilot has a dedicated ‘Promptbook Library’ where you can see all the custom and OOTB promptbooks, and you have the option of duplicating an OOTB promptbook to create a custom promptbook of your own. This way you can ensure you are leveraging the available resources to make your own use case work more efficiently.

Myth #6: Service Interruptions:

  • Validity: This is a fact portrayed as a myth. If provisioned Security Copilot Units (SCUs) are fully consumed without additional configuration, service may pause until capacity is restored. This behaviour aligns with standard consumption-based service models.
  • Reasoning: To maintain continuous service, Security Copilot now supports Overage Units, which automatically activate when the initially provisioned SCUs are exhausted. This helps ensure uninterrupted functionality without requiring manual intervention. Additionally, the platform provides clear usage notifications and warnings in advance, allowing teams to proactively monitor and manage consumption. Combined with its role as a 24/7 AI-powered assistant, Security Copilot continues to deliver high availability and operational efficiency, even under dynamic workloads. For details on how to configure and manage overage units, refer to this blog: Overage Units in Security Copilot.

Near Limit notification in Security Copilot standalone portal

Above Limit notification in Security Copilot standalone portal

Privacy and Data Security

Myth #7: Data sharing with Microsoft:

  • Validity: This is one of the most common myths that still exists amongst users and makes them hesitant to adopt the product.
  • Reasoning: Microsoft has been very transparent and vocal in stating that ‘customer data’ is never used to train the underlying LLM, nor is it accessible to any human, including non-relevant Microsoft employees. All Security Copilot data is handled according to Microsoft’s commitments to privacy, security, compliance, and responsible AI practices. Access to the systems that house your data is governed by Microsoft’s certified processes. Even when data sharing is enabled by default, your data is:
    • Not shared with OpenAI
    • Not used for sales
    • Not shared with third parties
    • Not used to train the Azure OpenAI foundational model

Security Copilot provides options to enable/disable user data collection

Myth #8: Data Privacy Compromises:

  • Validity: Concerns about data privacy are common with AI tools, but this is another completely ironic myth for a security product.
  • Reasoning: One important thing to know when using Microsoft products and solutions is that Microsoft provides contractual commitments giving you control over your own data! Microsoft takes data security so seriously that even if a law enforcement agency or the government requests your data, you will be notified and provided with a copy of the request! Microsoft defends your data through clearly defined and well-established response policies and processes, such as:
    • Microsoft uses and enables the use of industry-standard encrypted transport protocols, such as Transport Layer Security (TLS) and Internet Protocol Security (IPsec) for any customer data in transit.
    • The Microsoft Cloud employs a wide range of encryption capabilities up to AES-256 for data at rest.
    • Your control over your data is reinforced by Microsoft compliance with broadly applicable privacy laws, such as GDPR and privacy standards. These include the world’s first international code of practice for cloud privacy, ISO/IEC 27018.

Uncategorized Myths

“Security Copilot will replace our SOC team”:

No! Security Copilot is an assistant, not an infallible sensor. It was created to “assist security professionals”, and Microsoft acknowledges it may make mistakes (false positives/negatives). The very conception of Security Copilot is to take over the manual, tiresome analysis of raw logs and events, giving security professionals time to do what they do best: discovering vulnerabilities and securing organizations! Have you ever wondered why there is not a single capability in Security Copilot to take an action on its own, without your approval? What? You didn’t know that?! This is by design, to ensure that you and I are always in the driving seat while our “Co”-pilot augments our capabilities, automates repetitive tasks and provides actionable insights. But users must always validate its advice.

“Copilot only works well with Microsoft products”:

Another anecdotal myth. While Security Copilot is deeply integrated with Microsoft’s own security tools, it is also designed to work effectively with a variety of third-party solutions. In fact, Microsoft provides more than 35 non-Microsoft plugins out of the box, including popular tools like Splunk, ServiceNow, Cyware, Shodan, etc. And that’s not all: you can create your own custom plugin using one of three methods: API, GPT, or KQL.

“You cannot track Copilot’s activities”:

The notion that “you cannot track Copilot’s activities” is definitively a myth. Security Copilot’s integration with Microsoft Purview and the Office 365 Management API provides full visibility into every interaction—prompt inputs, AI responses, plugin calls, and admin configurations. Administrators can enable, search, export, and retain these logs for compliance, forensics, or integration into broader SIEM and SOAR workflows, ensuring that Copilot becomes a transparent, auditable extension of your security operations rather than an untraceable “black box.”

Conclusion

As with any transformative technology, Microsoft Security Copilot has naturally invited speculations. However, many of the concerns—ranging from cost and licensing, to performance, reliability, and data privacy—are either based on misconceptions or lack full context. Through this article, we’ve examined these myths objectively and highlighted how Security Copilot’s design, operational model, and deep integration with Microsoft’s security ecosystem work together to empower, not replace, human defenders. It is built to scale security operations with intelligence and agility, not disrupt them with unpredictability. For organizations navigating increasingly complex threat landscapes, Security Copilot offers a way to enhance response, reduce fatigue, and operationalize AI securely and responsibly. The key is not to view it as just another product, but as a strategic co-pilot—working alongside your team to defend at the speed and scale that modern security demands.

Want to have a much deeper understanding of Security Copilot? Check out these awesome resources:


RSA Conference 2025: Security Copilot Agents now in preview



 

In a time of escalating cyber threats, security teams face relentless pressure to do more with less – more threats, more data, more tools, fewer resources. Microsoft Security Copilot was built to bridge that gap, delivering an AI-driven assistant that enhances detection, investigation, and response across the entire Microsoft Security stack. Since it was launched in April 2024, Copilot has been integrated into customer environments to assist security professionals at every level – amplifying human expertise, streamlining complex workflows, and helping teams stay ahead of evolving threats. 

New research from Microsoft live operations highlights Security Copilot’s tangible impact, showing measurable productivity gains across security and IT for organizations using Security Copilot.

 

At this year’s RSA Conference, we are excited to share updates that make Security Copilot even more powerful, flexible, and accessible to customers and partners. 

Security Copilot agents are now in preview 

Last month at Microsoft Secure, we introduced Security Copilot agents – autonomous AI designed to tackle high-volume security tasks. Built on Security Copilot and seamlessly integrated with Microsoft Security solutions and the partner ecosystem, these agents are tailored to security-specific use cases, adapt to your workflows, and learn from feedback, all while keeping your team fully in control. Every agent launched is built on the Security Copilot platform, ensuring a consistent, secure, and unified experience across capabilities. 

Starting today, we’re beginning a phased public preview rollout which will gradually expand to more customers to ensure a smooth and scalable experience.  The following agents are now available in preview to select customers: 

And there’s more to come. Over the next few weeks, additional agents will become available to customers: 

  • Phishing Triage Agent in Microsoft Defender triages phishing alerts with accuracy to identify real cyberthreats and false alarms. It provides easy-to-understand explanations for its decisions and improves detection based on admin feedback. 
  • Partner agents from OneTrust, Tanium, BlueVoyant, Fletch, and Aviatrix that automate tasks like privacy breach response, SOC assessment, alert triage, task optimization, and root cause analysis.  

We’re also thrilled to announce two new partner agents that have joined our growing ecosystem since our Secure event last month, now in private preview:

  • Email Threat Analyst Agent by Performanta conducts investigations into email-based threats and compromised user activity and provides an impact and recommended mitigation assessment.  
  • IAM Supervisor Agent by Performanta uncovers and triages identity and access threats and provides an impact and recommended mitigation assessment. 

With these additions, our growing ecosystem of Security Copilot agents – now in preview – offers broader insights and powerful automation to help security teams respond faster and more effectively. We are excited to continue advancing agentic capabilities both at Microsoft and through collaboration with our third-party partners. Please visit the new Security Copilot video hub for demos or deep dives of Security Copilot agents.

Partner ecosystem updates 

Azure Lighthouse support for Sentinel use cases 

Security Copilot support for Azure Lighthouse Sentinel use cases for managed security service provider (MSSP) tenants is now generally available. With this support, MSSPs can purchase SCUs and attach them to the managing tenant in Azure Lighthouse and use those SCUs to run Security Copilot skills related to Microsoft Sentinel on their customer tenants via Azure Lighthouse. All the Sentinel skills available in Security Copilot will be invokable from the Azure Lighthouse tenant without the customer needing to have Security Copilot, thereby making Security Copilot available to MSSPs who manage multiple customers. 

Supported scenarios include querying customer Sentinel incidents, incident entities and details, querying Sentinel workspaces, and fetching Sentinel incident queries. These skills can be invoked per customer Sentinel workspace. Managing tenants using Azure Lighthouse can now do the following, without their customers needing to provision SCUs: 

  • Use the same natural language-based prompts using Sentinel skills on customer data 
  • Create custom promptbooks using Sentinel skills to automate their investigations 
  • Use Logic Apps to trigger these promptbooks 

Learn more about how to get started with Azure Lighthouse Support for Sentinel use cases here. 

New Security Copilot plugins 

As part of our effort to provide customers with truly end-to-end security protection, we continue to prioritize expanding our Security Copilot partner ecosystem. We have worked with partners to develop plugins to enhance and extend the information and data brought into Security Copilot.  

The following plugins are now in preview:  

  • Censys plugin enables users to enrich investigations using threat intelligence from the Censys platform to scan a URL or domain and scan an IP address.  
  • HP Workforce Experience Platform (WXP) plugin for Security Copilot allows users to gain insight into device warranties, application crashes, data about their fleet, and more.  
  • Splunk plugin allows Security Copilot users to make calls to Splunk to perform queries to create, retrieve, and dispatch saved Splunk searches, and retrieve and view information about fired alerts.  
  • Quest Security Guardian plugin reduces alert fatigue by prioritizing your most exploitable vulnerabilities and Active Directory configurations that demand attention. 
The following plugins are now generally available:  

  • CheckPhish plugin allows users to utilize the CheckPhish AI to analyze URLs for potential phishing threats, tech support scams, cryptojacking, and other security risks.   

Integration spotlight: ServiceNow SIR plugin 

The integration of ServiceNow AI and Microsoft Security Copilot brings joint capabilities that empower our customers and enhance their security posture. The integration enriches incident insights within Security Incident Response (SIR) and improves security incident resolution status and threat prioritization across Microsoft Security products, driving continuous improvement in security posture and awareness. As a result, security teams benefit from faster, more accurate incident resolution – reinforcing our commitment to delivering cutting-edge, AI-driven solutions that elevate the entire security ecosystem.  

Flexibility, scalability, and security for AI 

Microsoft Purview for Security Copilot 

As organizations adopt AI, implementing data controls and a Zero Trust approach is crucial to mitigate risks such as data oversharing, data leakage, and potentially non-compliant usage in AI. We are excited to announce Microsoft Purview capabilities in preview for Security Copilot. By combining Microsoft Purview and Security Copilot, users can: 

  • Discover data risks such as sensitive data in user prompts and responses and receive recommended actions in their Microsoft Purview Data Security Posture Management (DSPM) for AI dashboard to reduce these risks.  
  • Identify risky AI usage with Microsoft Purview Insider Risk Management – for example, an inadvertent user who has neglected security best practices and shared sensitive data in AI, or a departing employee using AI to find sensitive data and exfiltrate it through a USB device. 
  • Govern AI usage with Microsoft Purview Audit, Microsoft Purview eDiscovery, retention policies, and non-compliant usage detection. 

Learn more about Purview for Security Copilot here. 

Copilot in Microsoft Defender for Cloud 

Copilot in Defender for Cloud helps security teams accelerate risk remediation. It makes it faster and easier for security admins to remediate cloud risks by providing AI-generated summaries, remediation actions, and delegation emails that guide users through each step of the risk reduction process. Security admins can use AI to quickly summarize a specific recommendation, generate remediation scripts, and delegate tasks via email to resource owners. These capabilities reduce investigation time, enabling security teams to understand risk in context and identify the resources to remediate quickly. The capabilities are now generally available. Learn more about Copilot in Defender for Cloud here. 

Enriched Incident Summaries in the Microsoft Sentinel Azure portal 

We’re excited to announce Security Copilot Incident Summaries in the Microsoft Sentinel Azure portal are now in public preview. This capability provides enriched, easy-to-digest insights into security incidents – streamlining triage and helping analysts quickly understand scope, impact, and next steps. Read the blog post here. 

Enhanced Consumption Flexibility for Security Copilot 

This month we introduced enhancements that give Security Copilot customers greater flexibility and scalability by supplementing the existing provisioned pricing structure with an overage Security Compute Unit (SCU). This capability ensures that users can scale their Copilot workloads beyond their provisioned capacity, for uninterrupted protection. Read the blog post here. 

Learn more about Security Copilot at RSA Conference 2025

To learn more about Security Copilot and explore how it can elevate your organization’s security strategy, we invite you to connect with us at booth #5744. This is a great opportunity to engage with Microsoft security experts, dive deeper into the latest innovations, and experience how Security Copilot can simplify and strengthen your security operations. Join us for our Security Copilot sessions below, stop by our booth for a live demo, or schedule a one-on-one meeting with our team. 

Using Security Copilot to Proactively Identify and Prioritize Vulnerabilities


Introduction 

There are many different approaches to prioritizing the vulnerabilities that need addressing with urgency. Any information or guidance that helps you make better-informed decisions can be critical – but how can you stay informed? Leveraging all the information sources available to you can make the difference and allow you to be proactive in protecting your organization. 

One useful feed is offered by CISA (Cybersecurity & Infrastructure Security Agency), which works with partners to defend against today’s threats and collaborates to build a more secure and resilient infrastructure for the future. The Known Exploited Vulnerabilities (KEV) Catalog is a curated list maintained by CISA. It identifies vulnerabilities that have been actively exploited in the wild, posing significant risks to organizations and individuals. The catalog aims to enhance cybersecurity by providing timely information on these vulnerabilities, enabling proactive mitigation efforts. 

Key features of the KEV Catalog include: 

  • Identification: Lists vulnerabilities that are confirmed to be exploited. 
  • Details: Provides technical details, including affected products and versions. 
  • Mitigation: Offers guidance on how to address and remediate the vulnerabilities. 
  • Updates: Regularly updated to reflect new threats and exploited vulnerabilities. 

The KEV Catalog serves as a critical resource for cybersecurity professionals, helping them prioritize patching and defense strategies to protect against known threats.

The feed is designed to help organizations stay informed about vulnerabilities that have been exploited in the wild, and is part of CISA’s efforts to defend against current threats and build a more secure and resilient infrastructure for the future. 
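As a sketch of how the catalog can be consumed programmatically: CISA publishes the KEV Catalog as a JSON feed (known_exploited_vulnerabilities.json on cisa.gov), where each entry carries fields such as cveID, vendorProject, dateAdded, requiredAction, and dueDate. The snippet below filters a KEV-shaped document for recently added entries; the sample data is made up for illustration, and in practice you would download the live feed instead of using an inline dictionary.

```python
from datetime import datetime

# Illustrative sample shaped like the KEV JSON feed; the entries are made up.
SAMPLE_FEED = {
    "vulnerabilities": [
        {"cveID": "CVE-2024-0001", "vendorProject": "ExampleVendor",
         "product": "ExampleProduct", "dateAdded": "2024-05-01",
         "requiredAction": "Apply vendor patch.", "dueDate": "2024-05-22"},
        {"cveID": "CVE-2023-9999", "vendorProject": "OtherVendor",
         "product": "OtherProduct", "dateAdded": "2023-01-15",
         "requiredAction": "Apply vendor patch.", "dueDate": "2023-02-05"},
    ]
}

def entries_added_since(feed, cutoff):
    """Return the CVE IDs of KEV entries whose dateAdded is on or after cutoff."""
    out = []
    for vuln in feed.get("vulnerabilities", []):
        added = datetime.strptime(vuln["dateAdded"], "%Y-%m-%d").date()
        if added >= cutoff:
            out.append(vuln["cveID"])
    return out

recent = entries_added_since(SAMPLE_FEED, datetime(2024, 1, 1).date())
print(recent)  # only the entry added in 2024
```

Filtering on dateAdded is what lets a scheduled job report only what is new since its last run, rather than re-alerting on the entire catalog each day.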

Workflow overview 

The automated CISA feed solution addresses prioritization challenges by streamlining the vulnerability management process. It checks the latest CISA feed every 24 hours and queries the CVE findings against devices within Microsoft Defender for Endpoint. Security Copilot then checks for remediation actions and enriches the description, providing a comprehensive overview of each vulnerability. 

 

Figure 1: Example of the email output from the Logic App
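The daily cycle described above can be approximated in a few lines. This is an illustrative sketch, not the Logic App itself: query_defender_for_cve and send_report are hypothetical placeholders standing in for the Microsoft Defender for Endpoint lookup and the email notification step, and the local state file stands in for however the workflow tracks previously seen CVEs.

```python
import json
from pathlib import Path

# Hypothetical local state; a Logic App would persist this differently.
STATE_FILE = Path("seen_cves.json")

def load_seen():
    """Load the set of CVE IDs reported in earlier runs."""
    return set(json.loads(STATE_FILE.read_text())) if STATE_FILE.exists() else set()

def run_daily_check(feed, query_defender_for_cve, send_report):
    """One 24-hour cycle: find newly listed CVEs, check exposure, report."""
    seen = load_seen()
    new_cves = [v["cveID"] for v in feed["vulnerabilities"] if v["cveID"] not in seen]

    # Only CVEs with at least one affected device make it into the report.
    findings = {}
    for cve in new_cves:
        devices = query_defender_for_cve(cve)  # hypothetical MDE lookup
        if devices:
            findings[cve] = devices

    if findings:
        send_report(findings)  # hypothetical email step

    # Remember everything we've processed so the next run only sees new entries.
    STATE_FILE.write_text(json.dumps(sorted(seen | set(new_cves))))
    return findings
```

Because the state file records every processed CVE, a second run over an unchanged feed produces an empty report, which is the behavior you want from a recurring 24-hour check.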

Key benefits of the Logic App include: 

  • Automated Updates: The Logic App automatically retrieves the latest CISA feed, ensuring that analysts have up-to-date information without manual intervention. This eliminates the need for manual checks and reduces the risk of missing critical updates. 
  • Device Vulnerability Assessment: It queries the CVE findings against devices within the organization, identifying which devices are vulnerable to the reported CVEs. This targeted approach allows analysts to focus on the most critical vulnerabilities affecting their specific environment, enhancing the efficiency of the remediation process. 
  • Remediation Insights: Security Copilot provides detailed remediation actions, helping analysts understand the steps needed to mitigate the vulnerabilities. By enriching the description with actionable insights, it simplifies the decision-making process and accelerates the implementation of security measures. 
  • Email Notifications: An email with the findings is sent to a designated mailbox, allowing for easy review and follow-up. This ensures that all relevant stakeholders are informed promptly, facilitating coordinated responses and continuous monitoring of the organization’s security posture. 

Figure 2: Screenshot of the CISA Logic App

Click here to get started and install the Logic App today. 

Conclusion 

To prioritize effectively, gather all necessary information for informed decisions. While the Logic App CISA workflow is one approach, other methods may better suit your organization. Function Apps can enhance decision making by automating and streamlining security operations with integrated tools and processes. The Security Copilot GitHub repository offers AI-powered solutions using machine learning and natural language processing to improve security. These tools help identify vulnerabilities, predict risks, and implement protective measures. Check it out!