What’s new in Microsoft Security Copilot

This post was originally published on this site.


Security and IT teams move fast – and so does Security Copilot. This month, we’re delivering powerful new capabilities that help security and IT professionals investigate threats, manage identities, and automate protection with greater speed and precision. From AI-powered triage and policy optimization to smarter data exploration and expanded language support, these updates are designed to help you stay ahead of threats, reduce manual effort, and unlock new levels of efficiency.

Let’s dive into what’s new.

 

Improve IT efficiency with Copilot in Microsoft Intune – now generally available

IT admins can now use Security Copilot in Intune, which includes a dedicated data exploration experience, allowing them to ask questions, extract insights, and take action – all from within the Intune admin center. Whether it’s identifying non-compliant devices, managing updates, or automating remediation, Copilot simplifies complex workflows and brings data and actions together in one place.
Learn more: Copilot in Microsoft Intune announcement

 

Streamline identity security with Copilot in Microsoft Entra – now generally available

Security Copilot in Microsoft Entra now brings AI-assisted investigation and identity management directly into the Entra admin center. Admins can ask natural language questions to troubleshoot sign-ins, review access, monitor tenant health, and analyze role assignments – without writing queries or switching tools. With expanded coverage and improved performance, Copilot helps teams move faster, close gaps, and stay ahead of threats.
Learn more: Copilot in Microsoft Entra announcement

 

Close gaps quickly with the Conditional Access Optimization Agent – now generally available

The Conditional Access Optimization Agent in Microsoft Entra brings AI-powered automation to identity workflows. The agent runs autonomously to detect gaps, overlaps, and outdated policy assignments – then recommends precise, one-click remediations to close them fast.

Key benefits include:

  • Autonomous protection: Automatically identifies users and apps not covered by policies
  • Explainable decisions: Plain-language summaries and visual activity maps
  • Custom adaptability: Learns from natural-language feedback and supports business rules
  • Full auditability: All actions logged for compliance and transparency

As one security leader put it:

“The Conditional Access Optimization Agent is like having a security analyst on call 24/7. It proactively identifies gaps in our Conditional Access policies and ensures every user is protected from day one… It’s a secure path to innovation that every chief information security officer can trust.”
Julian Rasmussen, Senior Consultant and Partner, Point Taken, Microsoft MVP
Learn more: Conditional Access Optimization Agent in Microsoft Entra GA announcement

 

Investigate phishing alerts faster with the new Phishing Triage Agent in Microsoft Defender

The Phishing Triage Agent in Microsoft Defender is now in public preview, bringing autonomous, AI-powered threat detection to your SOC workflows. Powered by large language models, the agent performs deep semantic analysis of emails, URLs, and files to determine whether a submission is a phishing threat or a false alarm – without relying on static rules.

It learns from analyst feedback, adapts to your organization’s patterns, and provides clear, natural language explanations for every verdict. A visual decision map shows exactly how the agent reached its conclusion, making the process fully transparent and reviewable.

Learn more: Announcing public preview Phishing Triage Agent in Microsoft Defender

 

The Threat Intelligence Briefing Agent is now in public preview: Build organization-specific briefings in just minutes

The Threat Intelligence Briefing Agent has entered public preview in the Security Copilot standalone experience, transforming how security teams stay ahead of emerging threats. With this powerful agent, creating highly relevant, organization-specific threat intelligence briefings now takes minutes rather than hours or days, empowering teams to act with speed and confidence. Through real-time dynamic reasoning, the agent surfaces the most relevant threat intelligence based on attributes such as the organization’s industry, geographic location, and unique attack surface to deliver critical context and invaluable situational awareness.

Learn more: aka.ms/ti-briefing-agent

 

Streamline operations with workspace-level management

Security Copilot now supports workspaces, giving organizations a flexible way to segment environments by team, region, or business unit. With workspaces now in public preview, admins can align access, data boundaries, and SCU capacity with operational and compliance needs. Each workspace supports role-based access control, localized prompt history, and independent capacity planning – making it easier to manage complex, distributed security and IT operations.

As part of this model, workspace-level plugin management is now generally available, allowing admins to configure plugin settings at the workspace or organization level. This eliminates the need for per-user setup and improves efficiency across large environments.

Learn more: New tools for Security Copilot management and capacity planning

 

Plan smarter with the new Security Copilot Capacity Calculator

The Security Copilot Capacity Calculator is now available in the standalone experience (Azure account required), helping teams estimate how many SCUs they may need.
Security Copilot supports:

  • Provisioned SCUs for predictable workloads
  • Overage SCUs to scale with variable workloads

Teams can estimate initial capacity using the capacity calculator, monitor usage in the in-product usage dashboard, and adjust their SCU allocation as needed. Learn more about Security Copilot pricing here.

Learn more: New tools for Security Copilot management and capacity planning

 

Automate Entra workflows with embedded NL2API skill

Security Copilot can now reason over Microsoft Graph APIs to answer complex, multi-stage questions across Entra resources. This embedded experience in Entra, powered by the NL2API skill, is now generally available – bringing advanced automation and intelligence directly into your Entra workflows.

 

Get faster suggestions with dynamic suggested prompts for Entra skills

Dynamic suggested prompts are now generally available for Entra skills, offering faster and more deterministic follow-up suggestions using direct skill invocation – bypassing the orchestrator for improved performance.

 

Meet compliance needs with FedRAMP High authorization for Security Copilot

Security Copilot is now included within the Federal Risk and Authorization Management Program (FedRAMP) High Authorization for Azure Commercial. This Provisional Authorization to Operate (P-ATO) within the existing FedRAMP High Azure Commercial environment was approved by the FedRAMP Joint Authorization Board (JAB). This milestone marks a significant step forward in our mission to bring Microsoft Security Copilot’s cutting-edge AI-powered security capabilities to our Government Community Cloud (GCC) customers. Stay tuned for updates on when Security Copilot will be fully available for GCC customers.

 

Expand global reach with Korean language and Swiss data residency

Security Copilot now supports Korean in both standalone and embedded experiences. For a full list of supported languages, visit Supported languages in Microsoft Security Copilot.

Additionally, customers in Switzerland can now benefit from Swiss region data residency, ensuring Security Copilot data is stored within Swiss boundaries to meet local compliance requirements.

Learn more: Availability and recovery of Security Copilot

 

Improve accuracy and scale with GPT-4.1 and large output support

We’ve upgraded Security Copilot to support GPT-4.1 across all experiences at the evaluation level, offering larger context windows, improved interactions, and up to 50% accuracy improvements in some scenarios.

Also now generally available is large output support, which removes the previous 2MB limit for data used in LLMs – giving teams more flexibility when working with large datasets.

 

Audit agent changes with Purview UAL integration

Agent administration auditing is now generally available in Microsoft Purview Unified Audit Log, allowing teams to trace agent creation, updates, and deletions with detailed metadata for improved visibility and compliance.

Learn more: Access the Security Copilot audit log

 

Stay tuned and explore more!

Security Copilot is transforming how security and IT teams operate – bringing AI-powered insights, automation, and decision support into everyday workflows. With new capabilities landing every month, the pace of innovation is accelerating.

We’ll be back in September with more updates. Until then, explore these resources to get hands-on, deepen your understanding, and see what’s possible:

 

Don’t miss Microsoft Secure digital event on September 30th – we’ll be announcing exciting new capabilities for Security Copilot and sharing what’s next in AI-powered security. Register now to be the first to hear the announcements and see what’s coming.

New tools for Security Copilot management and capacity planning


Last year, we launched Microsoft Security Copilot with a bold goal: to help organizations protect at the speed of AI. Since then, Security Copilot has been transforming how IT and security operations teams respond to threats and manage their environments. In fact, research from live operations indicates that Security Copilot users have seen results such as a 30% reduction in mean time to resolution for SOC teams and a 54% decrease in time to resolve a device policy conflict for IT teams.

As adoption has grown, so has the complexity of customer needs. In many organizations, different teams, business units, and regions require distinct approaches to data access, capacity planning, and tooling. At the same time, customers want the flexibility to start small, test scenarios, and scale usage over time, without committing to long-term contracts. 

To meet these needs, Security Copilot is offered as a consumptive solution, allowing organizations to provision Security Compute Units (SCUs) as needed. This flexible model lowers the barrier to entry and encourages experimentation. And now, with workspaces and the Security Copilot capacity calculator to help manage capacity, customers can adopt Security Copilot with even more confidence and control. 

Workspaces 

Security operations don’t happen in a vacuum – different teams, business units, and regions have unique operational needs. This is why we’re excited to launch workspaces in public preview – a major enhancement to how teams can manage access, resources, and collaboration within Security Copilot. Workspaces provide a flexible way to segment environments, making it easier to align access and capacity with organizational needs, legal structures, or compliance requirements. 

 

Let’s take the example of a multinational organization with separate security and IT teams in North America, Europe, and Asia. With workspaces, this company can realize benefits in: 

  • Data boundaries: Each regional team operates within its own dedicated workspace, keeping data like prompt history local and accessible only to that team. This isolation ensures information stays relevant to the team and supports compliance with regional data residency requirements and internal policies. 
  • Role-based access control: Only authorized users specified by the admin have access to each workspace, and workspace management is restricted to users with administrator roles. 
  • Capacity planning: SCUs can be provisioned per workspace, giving admins the ability to right-size capacity based on each team’s workload. APAC can scale up during a surge while the US conserves usage during a quiet period. 

 Note: multi-workspace support is now available in Security Copilot, enabling users to manage prompt sessions across multiple workspaces. However, available agents that run autonomously are currently limited to a single workspace, and embedded experiences continue to route traffic exclusively through the tenant-level default workspace. Please refer to the documentation for full details. 

Security Copilot capacity calculator 

One of the most common questions we hear from customers is: “How many SCUs do I need to get started with Security Copilot?” Given the dynamic nature of AI-powered security workflows, forecasting compute needs can be a challenge, especially for teams just starting their journey. To make planning easier, we’re excited to announce the launch of the Security Copilot capacity calculator, now available in the Security Copilot standalone experience (Azure account required). 

This tool offers a practical starting point to help estimate how many SCUs your organization may require. With a few clicks, customers can get an estimate of SCU usage based on inputs like the number of users in an embedded Security Copilot experience. While actual consumption varies with real-time prompt activity, the calculator serves as a helpful guide for initial provisioning and budget planning.

Once you’ve estimated your baseline needs, you can get started in Security Copilot or in the Azure portal. Security Copilot offers two flexible models to support both predictable workloads and unplanned spikes in usage: 

  • Provisioned SCUs: Ideal for predictable, ongoing operations. A minimum of one provisioned SCU is required. 
  • Overage SCUs: Designed for variable demand. Overage SCUs allow usage to scale seamlessly, and customers only pay for what they use, up to their chosen optional overage limit. 
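As a purely illustrative sketch of how a team might split capacity between the two models (this is not Microsoft’s billing or calculator logic – the percentile threshold and the function below are assumptions for the sketch), a provisioned baseline could cover typical hours while overage absorbs spikes:

```python
# Illustrative only: choose a provisioned SCU baseline from observed hourly
# usage, letting overage SCUs serve spikes above it. The 75th-percentile
# cutoff is an arbitrary assumption, not product guidance.
def plan_capacity(hourly_scu_usage, baseline_percentile=0.75):
    if not hourly_scu_usage:
        return {"provisioned": 1, "peak_overage": 0}  # minimum of 1 provisioned SCU
    ranked = sorted(hourly_scu_usage)
    idx = min(int(len(ranked) * baseline_percentile), len(ranked) - 1)
    provisioned = max(1, ranked[idx])  # at least one provisioned SCU is required
    peak_overage = max(0, max(ranked) - provisioned)
    return {"provisioned": provisioned, "peak_overage": peak_overage}

# Example: a quiet week with one surge hour
print(plan_capacity([2, 2, 3, 3, 3, 4, 9]))
```

Real provisioning decisions should of course come from the capacity calculator and the in-product usage dashboard.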

With the capacity calculator, organizations can confidently begin their Security Copilot journey and better manage usage to align with their business needs. After getting started, teams can monitor consumption through the in-product usage dashboard and adjust capacity as demand fluctuates. Learn more about Security Copilot pricing here. 

Get Started with Security Copilot today 

Together, workspaces and the capacity calculator provide organizations with deeper insight, flexibility, and control over their Security Copilot usage. These features address the real-world challenges of managing diverse teams, complex environments, and evolving workloads. Whether you’re just starting your Security Copilot journey or looking to optimize your existing usage, these tools help you right-size capacity, maintain compliance, and deliver actionable AI assistance for your security and IT teams. 

Discover Security Copilot use cases, best practices, and customer success stories in the Security Copilot adoption hub. Learn more about our most recent Security Copilot innovations for IT teams here. If you have questions or need support, don’t hesitate to contact us or reach out to your account manager. 

Smarter Prompts for Smarter Investigations: Dynamic Prompt Suggestions in Security Copilot


When a security analyst turns to an AI system for help—whether to hunt threats, investigate alerts, or triage incidents—the first step is usually a natural language prompt. But if that prompt is too vague, too general, or not aligned with the system’s capabilities, the response won’t be helpful. In high-stakes environments like cybersecurity, that’s not just a missed opportunity, it’s a risk.

That’s exactly the problem we tackled in our recent paper, Dynamic Context-Aware Prompt Recommendations for Domain-Specific Applications, now published and deployed as a new skill in Security Copilot.

Why Prompting Is a Bigger Problem in Security Than It Seems

LLMs have made impressive progress in general-purpose settings—helping users write emails, summarize documents, or answer trivia. These systems often include smart prompt recommendations based on the flow of conversation. But when you shift into domain-specific systems like Microsoft Security Copilot, the game changes.

Security analysts don’t ask open-ended questions. They ask task-specific ones:

  • “List devices that ran a malicious file in the last 24 hours.”
  • “Correlate failed login attempts across services.”
  • “Visualize outbound traffic from compromised machines.”

These questions map directly to skills—domain-specific functions that query data, connect APIs, or launch workflows. And that means prompt recommendations need to be tightly aligned with the available skills, underlying datasets, and current investigation context. General-purpose prompt systems don’t know how to do that.

What Makes Domain-Specific Prompting Hard

Designing prompt recommendations for systems like Security Copilot comes with unique constraints:

  1. Constrained Skill Set: The AI can only take actions it’s configured to support. Prompts must align with those skills—no hallucinations allowed.
  2. Evolving Context: A single investigation might involve multiple rounds of prompts, results, follow-ups, and pivots. Prompt suggestions must adapt dynamically.
  3. Deep Domain Knowledge: It’s not enough to suggest “Check network logs.” A useful prompt needs to reflect how real analysts work—across Defender, Sentinel, and more.
  4. Scalability: As new skills are added, prompt systems must scale without requiring constant manual curation or rewriting.

Our Approach: Dynamic, Context-Aware, and Skill-Constrained

 

We introduce a dynamic prompt recommendation system for Security Copilot. The key innovations include:

  • Contextual understanding of the session: We track the user’s investigation path and surface prompts that are relevant to what they’re doing now, not just generic starters.
  • Skill-awareness: The system knows what internal capabilities exist (e.g., “list devices,” “query login events”) and only recommends prompts that can be executed via those skills.
  • Domain knowledge injection: By encoding metadata about products, datasets, and typical workflows (e.g., MITRE attack stages), the system produces prompts that make sense in security analyst workflows.
  • Scalable prompt generation: Rather than relying on hardcoded lists, our system dynamically generates and ranks prompt suggestions.
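As a toy illustration of the skill-constrained idea (this is not the paper’s algorithm – the skill names, candidate prompts, and ranking heuristic are invented for the sketch), candidate prompts can be filtered to those an existing skill can execute and then ranked by overlap with the session context:

```python
# Hypothetical skill registry and candidate pool; real systems generate
# candidates dynamically rather than from a hardcoded list.
AVAILABLE_SKILLS = {"list_devices", "query_login_events"}

CANDIDATES = [
    ("List devices that ran a malicious file", "list_devices"),
    ("Correlate failed login attempts across services", "query_login_events"),
    ("Visualize outbound traffic from compromised machines", "plot_traffic"),
]

def suggest(session_terms, k=2):
    # Keep only prompts an available skill can execute (no hallucinated actions),
    # then rank by word overlap with terms seen in the current session.
    executable = [(p, s) for p, s in CANDIDATES if s in AVAILABLE_SKILLS]
    scored = sorted(
        executable,
        key=lambda ps: -len(session_terms & set(ps[0].lower().split())),
    )
    return [p for p, _ in scored[:k]]

print(suggest({"failed", "login"}))
```

The "plot_traffic" prompt is never suggested because no configured skill can run it, mirroring the constrained-skill-set requirement above.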

What It Looks Like in Action

The dynamic prompt suggestion system is now live in Microsoft Entra, available in both Embedded and Immersive experiences. When a user enters a natural language prompt, the system automatically suggests several context-aware follow-up prompts, based on the user’s prior interactions and the system’s understanding of the current task.

 

These suggestions are generated in real time—users can simply click on a suggestion, and it’s executed immediately, allowing for quick and seamless follow-up queries without needing to rephrase or retype.

Let’s walk through two examples:

Embedded Experience

We begin with the prompt: “How does Microsoft determine Risky Users?”

 

The system returns the response and generates 3 follow-up suggestions, such as: “List dismissed risky detections.”

We click on that suggestion, which executes the query and shows the results.

New suggestions continue to appear after each prompt execution, making it easy to explore related insights.

Immersive Experience

We start with a prompt: “Who am I?”

 

Among the 5 suggested prompts, we select: “List the groups user nase74@woodgrove.ms is a member of.”

The user clicks, the query runs, and more follow-up suggestions appear, enabling a natural, guided flow throughout the session.

 

Why This Matters for the Future of Security AI

Prompting isn’t just an interface detail—it’s the entry point to intelligence. And in cybersecurity, where time, accuracy, and reliability matter, we need AI systems that are not just capable, but cooperative. Our research contributes to a future where security analysts don’t have to be prompt engineers to get the most out of AI.

By making prompt recommendations dynamic, contextual, and grounded in real domain knowledge, we help close the gap between LLM potential and security reality.

 

Interested in learning more?
Check out the full paper: Dynamic Context-Aware Prompt Recommendations for Domain-Specific Applications

If you’re using or building upon this work in your own research, we’d appreciate you citing our paper:

@article{tang2025dynamic,
title={Dynamic Context-Aware Prompt Recommendation for Domain-Specific AI Applications},
author={Tang, Xinye and Zhai, Haijun and Belwal, Chaitanya and Thayanithi, Vineeth and Baumann, Philip and Roy, Yogesh K},
journal={arXiv preprint arXiv:2506.20815},
year={2025}
}

 

Automating Phishing Email Triage with Microsoft Security Copilot


This blog details automating phishing email triage using Azure Logic Apps, Azure Function Apps, and Microsoft Security Copilot. Deployable in under 10 minutes, this solution primarily analyzes email intent without relying on traditional indicators of compromise, accurately classifying benign/junk, suspicious, and phishing emails. Benefits include a reduced manual workload, improved threat detection, and optional seamless integration with Microsoft Sentinel – enabling analysts to see Security Copilot analysis within the incident itself.

Designed for flexibility and control, this Logic App is a customizable solution that can be self-deployed from GitHub. It helps automate phishing response at scale without requiring deep coding expertise, making it ideal for teams that prefer a more configurable approach and want to tailor workflows to their environment. The solution streamlines response and significantly reduces manual effort.

Access the full solution on the Security Copilot Github:
GitHub – UserReportedPhishing Solution.

For teams looking for a more sophisticated, fully integrated experience, the Security Copilot Phishing Triage Agent represents the next generation of phishing response. Natively embedded in Microsoft Defender, the agent autonomously triages phishing incidents with minimal setup. It uses advanced LLM-based reasoning to resolve false alarms, enabling analysts to stay focused on real threats. The agent offers step-by-step decision transparency and continuously learns from user feedback. Read the official announcement here.

Introduction: Phishing Challenges Continue to Evolve

Phishing continues to evolve in both scale and sophistication, but a growing challenge for defenders isn’t just stopping phishing – it’s scaling response. Thanks to tools like Outlook’s “Report Phishing” button and increased user awareness, organizations are now flooded with user-reported emails, many of which are ambiguous or benign. This has created a paradox: better detection by users has overwhelmed SOC teams, turning email triage into a manual, rotational task dreaded for its repetitiveness and time cost – often more than 25 minutes per email.

Our solution addresses that problem by automating the triage of user-reported phishing through AI-driven intent analysis. It’s not built to replace your secure email gateways or Microsoft Defender for Office 365; those tools have already done their job. This system assumes the email:

  • Slipped past existing filters,
  • Was suspicious enough for a user to escalate,
  • Lacks typical IOCs like malicious domains or attachments.

As a former attacker, I spent years crafting high-quality phishing emails to penetrate the defenses of major banks. Effective phishing doesn’t rely on obvious IOCs like malicious domains, URLs, or attachments… the infrastructure often appears clean. The danger lies in the intent. This is where Security Copilot’s LLM-based reasoning is critical, analyzing structure, context, tone, and seasonal pretexts to determine whether an email is phishing, suspicious, spam, or legitimate.

What makes this novel is that it’s the first solution built specifically for the “last mile” of phishing defense, where human suspicion meets automation, and intent is the only signal left to analyze. It transforms noisy inboxes into structured intelligence and empowers analysts to focus only on what truly matters.

Solution Overview: How the Logic App Solution Works (and Why It’s Different)

Core Components:

  • Azure Logic Apps: Orchestrates the entire workflow, from ingestion to analysis, and 100% customizable.
  • Azure Function Apps: Parses and normalizes email data for efficient AI consumption.
  • Microsoft Security Copilot: Performs sophisticated AI-based phishing analysis by understanding email intent and tactics, rather than relying exclusively on predefined malicious indicators.

Key Benefits:

  • Rapid Analysis: Processes phishing alerts and, in minutes, delivers comprehensive reports that empower analysts to make faster, more informed triage decisions – compared to manual reviews that can take up to 30 minutes. And, unlike analysts, Security Copilot requires zero sleep! 
  • AI-driven Insights: LLM-based analysis is leveraged to generate clear explanations of classifications by assessing behavioral and contextual signals like urgency, seasonal threats, Business Email Compromise (BEC), subtle language clues, and otherwise sophisticated techniques. Most importantly, it identifies benign emails, which are often the bulk of reported emails.
  • Detailed, Actionable Reports: Generates clear, human-readable HTML reports summarizing threats and recommendations for analyst review.
  • Robust Attachment Parsing: Automatically examines attachments like PDFs and Excel documents for malicious content or contextual inconsistencies.
  • Integrated with Microsoft Sentinel: Optional integration with Sentinel ensures central incident tracking and comprehensive threat management. Analysis is attached directly to the incident, saving analysts more time.
  • Customization: Add, move, or replace any element of the Logic App or prompt to fit your specific workflows.

Deployment Guide: Quick, Secure, and Reliable Setup

The solution provides Azure Resource Manager (ARM) templates for rapid deployment:

Prerequisites:

  • Azure Subscription with Contributor access to a resource group.
  • Microsoft Security Copilot enabled.
  • Dedicated Office 365 shared mailbox (e.g., phishing@yourdomain.com) with Mailbox.Read.Shared permissions.
  • (Optional) Microsoft Sentinel workspace.

Refer to the up-to-date deployment instructions on the Security Copilot GitHub page.

Technical Architecture & Workflow:

The automated workflow operates as follows:

Email Ingestion:

  • Monitors the shared mailbox via Office 365 connector.
  • Triggers on new email arrivals every 3 minutes.
  • Assumes that the reported email has arrived as an attachment to a “carrier” email.

Determine if the Email Came from Defender/Sentinel:

If the email came from Defender, it will have “Phishing” prepended to the subject; if not, the workflow takes the “False” branch. Change as necessary.

Initial Email Processing:

  • Exports raw email content from the shared mailbox.
  • Determines if .msg or .eml attachments are in binary format and converts if necessary.

Email Parsing via Azure Function App:

  • Extracts data from email content and attachments (URLs, sender info, email body, etc.) and returns a JSON structure.
  • Prepares clean JSON data for AI analysis.
  • This step is required to “prep” the data for LLM analysis due to token limits.
  • Click on the “Parse Email” block to see the output of the Function App for any troubleshooting. You’ll also notice a number of JSON keys that are not used but provided for flexibility.
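As a rough illustration of this parsing step (this is not the actual Function App code – the field names and JSON shape here are invented; the real schema ships with the GitHub solution), Python’s standard email library can reduce a raw .eml to compact JSON:

```python
# Hedged sketch: extract a few fields from a raw RFC 5322 email and emit
# compact JSON, the kind of "prep" that keeps the LLM prompt within token limits.
import email
import json
import re
from email import policy

def parse_eml(raw_bytes: bytes) -> str:
    msg = email.message_from_bytes(raw_bytes, policy=policy.default)
    body = msg.get_body(preferencelist=("plain", "html"))
    text = body.get_content() if body else ""
    return json.dumps({
        "sender": msg.get("From", ""),
        "subject": msg.get("Subject", ""),
        "body": text,
        # pull URLs out separately so the analysis prompt can reference them
        "urls": sorted(set(re.findall(r"https?://[^\s\"'<>]+", text))),
    })

raw = (b"From: attacker@example.com\r\nSubject: Invoice due\r\n"
       b"Content-Type: text/plain\r\n\r\nPay now at http://evil.example/pay\r\n")
print(parse_eml(raw))
```

The deployed Function also handles attachments and returns extra keys that the Logic App leaves unused for flexibility, as noted above.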

Security Copilot Advanced AI Reasoning:

  • Analyzes email content using a comprehensive prompt that evaluates behavioral and seasonal patterns, BEC indicators, attachment context, and social engineering signals.
  • Scores cumulative risk based on structured heuristics without relying solely on known malicious indicators.
  • Returns validated JSON output (some customers parse this JSON and perform further actions).
  • This is where you would customize the prompt, should you need to tune the Logic App for your own organizational scenarios:

JSON Normalization & Error Handling:

  • A “normalization” Azure Function ensures output matches the expected JSON schema.
  • Sometimes LLMs stray from a strict output structure; this step aims to solve that problem.
  • If you add or remove anything from the Parse Email code that alters the structure of the JSON, this and the next block will need to be updated to match your new structure.
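A minimal sketch of what such a normalization step might look like, assuming a hypothetical three-key schema (the deployed Azure Function has its own schema; the keys and fence-stripping below are illustrative assumptions):

```python
# Illustrative normalization: coerce LLM output to an expected JSON schema,
# tolerating markdown fences, extra keys, missing keys, and invalid JSON.
import json

EXPECTED = {"classification": "unknown", "confidence": 0, "reasoning": ""}

def normalize(llm_output: str) -> dict:
    text = llm_output.strip()
    if text.startswith("```"):
        # drop a ```json ... ``` wrapper if the model added one
        text = text.strip("`")
        text = text.split("\n", 1)[1] if "\n" in text else text
    try:
        data = json.loads(text)
    except json.JSONDecodeError:
        data = {}
    # keep only expected keys, filling gaps with defaults
    return {k: data.get(k, default) for k, default in EXPECTED.items()}

messy = '```json\n{"classification": "phishing", "confidence": 92, "extra": true}\n```'
print(normalize(messy))
```

If you change the Parse Email output structure, this step and the report block would need matching updates, as the bullet above notes.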

Detailed HTML Reporting:

  • Generates a detailed HTML report summarizing AI findings, indicators, and recommended actions.
  • Reports are emailed directly to SOC team distribution lists or ticketing systems.

Optional Sentinel Integration:

Adds the reasoning and output from Security Copilot directly to the incident comments. This is the ideal location for output, since the analyst is already in the security.microsoft.com portal. The workflow waits up to 15 minutes for logs to appear in situations where the user reports before an incident is created.

The solution works well out of the box but may require some tuning, so give it a test. Here are some examples of Security Copilot’s reasoning.

Benign email detection: 

 

Example of phishing email detection:

 

 

More sophisticated phishing with subtle clues:

 

 

 

Enhanced Technical Details & Clarifications

Attachment Processing:

  • When multiple email attachments are detected, the Logic App processes each binary-format email sequentially.
  • If PDF or Excel attachments are detected, they are parsed and evaluated for content and intent.

Security Copilot Reliability:

  • The Security Copilot Logic App API call uses an extensive retry policy (10 retries at 10-minute intervals) to ensure reliable AI analysis despite intermittent service latency.
  • If you run out of SCUs in an hour, it will pause until they are refreshed and continue.
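The retry behavior described above is configured on the Logic App’s HTTP action rather than written as code, but the logic can be sketched as follows (the helper below is an illustration, with an injectable sleep so the pattern can be exercised without waiting):

```python
# Sketch of a fixed-interval retry policy: 10 retries at 10-minute intervals,
# re-raising the last error once retries are exhausted.
import time

def call_with_retry(call, retries=10, interval_s=600, sleep=time.sleep):
    last_err = None
    for attempt in range(retries + 1):
        try:
            return call()
        except Exception as err:  # e.g., transient latency or SCU exhaustion
            last_err = err
            if attempt < retries:
                sleep(interval_s)
    raise last_err
```

This mirrors how the Logic App simply waits out intermittent service latency or an exhausted hourly SCU allowance and then continues.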

Sentinel Integration Reliability:

  • Acknowledges inherent Sentinel logging delays (up to 15 minutes).
  • Implements retry logic and explicit manual alerting for unmatched incidents, if the analysis runs before the incident is created.

Security Best Practices:

  • Compare the Function & Logic App to your company security policies to ensure compliance.
  • Credentials, API keys, and sensitive details utilize Azure Managed Identities or secure API connections. No secrets are stored in plaintext.
  • Azure Function Apps perform only safe parsing operations; attachments and content are never executed or opened insecurely.

Be sure to check out how the Microsoft Defender for Office team is improving detection capabilities as well: Microsoft Defender for Office 365’s Language AI for Phish: Enhancing Email Security | Microsoft Community Hub.

Using parameterized functions with KQL-based custom plugins in Microsoft Security Copilot

In this blog, I will walk through how you can build functions based on a Microsoft Sentinel Log Analytics workspace for use in custom KQL-based plugins for Security Copilot. The same approach can be used for Azure Data Explorer and Defender XDR, so long as you follow the specific guidance for either platform. A link to those steps is provided in the Additional Resources section at the end of this blog.

But first, it’s helpful to clarify what parameterized functions are and why they are important in the context of Security Copilot KQL-based plugins. Parameterized functions accept inputs (variables) such as lookback periods or entities, allowing you to dynamically alter parts of a query without rewriting the entire logic.

Parameterized functions are important in the context of Security Copilot plugins for the following reasons:

  1. Dynamic prompt completion:
    Security Copilot plugins often accept user input (e.g., usernames, time ranges, IPs). Parameterized functions allow these inputs to be consistently injected into KQL queries without rebuilding query logic.
  2. Plugin reusability:
    By using parameters, a single function can serve multiple investigation scenarios (e.g., checking sign-ins, data access, or alerts for any user or timeframe) instead of hardcoding different versions.
  3. Maintainability and modularity:
    Parameterized functions centralize query logic, making it easier to update or enhance without modifying every instance across the plugin spec. To modify the logic, just edit the function in Log Analytics, test it, and save it, without needing to change the plugin or re-upload it into Security Copilot. This also largely eliminates the need to keep the query portion of the YAML perfectly indented and tabbed, as the OpenAPI specification requires: you only need to format a single line instead of several, potentially hundreds.
  4. Validation:
    Separating query logic from input parameters improves query reliability by avoiding the possibility of malformed queries. No matter what the input is, it’s treated as a value, not as part of the query logic.
  5. Plugin Spec mapping:
    OpenAPI-based Security Copilot plugins can map user-provided inputs directly to function parameters, making the interaction between user intent and query execution seamless.
Practical example

In this case, we have a 139-line KQL query that we will reduce to exactly one line in the KQL plugin. In other cases, the line count could be even higher. Without functions, the entire query would have to form part of the plugin.

Note: The rest of this blog assumes you are familiar with KQL custom plugins: how they work and how to upload them into Security Copilot.

 

CloudAppEvents
| where RawEventData.TargetDomain has_any (
    'grok.com', 'x.ai', 'mistral.ai', 'cohere.ai', 'perplexity.ai', 'huggingface.co', 'adventureai.gg', 'ai.google/discover/palm2', 'ai.meta.com/llama', 'ai2006.io', 'aibuddy.chat', 'aidungeon.io', 'aigcdeep.com', 'ai-ghostwriter.com', 'aiisajoke.com', 'ailessonplan.com', 'aipoemgenerator.org', 'aissistify.com', 'ai-writer.com', 'aiwritingpal.com', 'akeeva.co', 'aleph-alpha.com/luminous', 'alphacode.deepmind.com', 'analogenie.com', 'anthropic.com/index/claude-2', 'anthropic.com/index/introducing-claude', 'anyword.com', 'app.getmerlin.in', 'app.inferkit.com', 'app.longshot.ai', 'app.neuro-flash.com', 'applaime.com', 'articlefiesta.com', 'articleforge.com', 'askbrian.ai', 'aws.amazon.com/bedrock/titan', 'azure.microsoft.com/en-us/products/ai-services/openai-service', 'bard.google.com', 'beacons.ai/linea_builds', 'bearly.ai', 'beatoven.ai', 'beautiful.ai', 'beewriter.com', 'bettersynonyms.com', 'blenderbot.ai', 'bomml.ai', 'bots.miku.gg', 'browsegpt.ai', 'bulkgpt.ai', 'buster.ai', 'censusgpt.com', 'chai-research.com', 'character.ai', 'charley.ai', 'charshift.com', 'chat.lmsys.org', 'chat.mymap.ai', 'chatbase.co', 'chatbotgen.com', 'chatgpt.com', 'chatgptdemo.net', 'chatgptduo.com', 'chatgptspanish.org', 'chatpdf.com', 'chattab.app', 'claid.ai', 'claralabs.com', 'claude.ai/login', 'clipdrop.co/stable-diffusion', 'cmdj.app', 'codesnippets.ai', 'cohere.com', 'cohesive.so', 'compose.ai', 'contentbot.ai', 'contentvillain.com', 'copy.ai', 'copymatic.ai', 'copymonkey.ai', 'copysmith.ai', 'copyter.com', 'coursebox.ai', 'coverler.com', 'craftly.ai', 'crammer.app', 'creaitor.ai', 'dante-ai.com', 'databricks.com', 'deepai.org', 'deep-image.ai', 'deepreview.eu', 'descrii.tech', 'designs.ai', 'docgpt.ai', 'dreamily.ai', 'editgpt.app', 'edwardbot.com', 'eilla.ai', 'elai.io', 'elephas.app', 'eleuther.ai', 'essayailab.com', 'essay-builder.ai', 'essaygrader.ai', 'essaypal.ai', 'falconllm.tii.ae', 'finechat.ai', 'finito.ai',
    'fireflies.ai', 'firefly.adobe.com', 'firetexts.co', 'flowgpt.com', 'flowrite.com', 'forethought.ai', 'formwise.ai', 'frase.io', 'freedomgpt.com', 'gajix.com', 'gemini.google.com', 'genei.io', 'generatorxyz.com', 'getchunky.io', 'getgptapi.com', 'getliner.com', 'getsmartgpt.com', 'getvoila.ai', 'gista.co', 'github.com/features/copilot', 'giti.ai', 'gizzmo.ai', 'glasp.co', 'gliglish.com', 'godinabox.co', 'gozen.io', 'gpt.h2o.ai', 'gpt3demo.com', 'gpt4all.io', 'gpt-4chan+)', 'gpt6.ai', 'gptassistant.app', 'gptfy.co', 'gptgame.app', 'gptgo.ai', 'gptkit.ai', 'gpt-persona.com', 'gpt-ppt.neftup.app', 'gptzero.me', 'grammarly.com', 'hal9.com', 'headlime.com', 'heimdallapp.org', 'helperai.info', 'heygen.com', 'heygpt.chat', 'hippocraticai.com', 'huggingface.co/spaces/tiiuae/falcon-180b-demo', 'humanpal.io', 'hypotenuse.ai', 'ichatwithgpt.com', 'ideasai.com', 'ingestai.io', 'inkforall.com', 'inputai.com/chat/gpt-4', 'instantanswers.xyz', 'instatext.io', 'iris.ai', 'jasper.ai', 'jigso.io', 'kafkai.com', 'kibo.vercel.app', 'kloud.chat', 'koala.sh', 'krater.ai', 'lamini.ai', 'langchain.com', 'laragpt.com', 'learn.xyz', 'learnitive.com', 'learnt.ai', 'letsenhance.io', 'letsrevive.app', 'lexalytics.com', 'lgresearch.ai', 'linke.ai', 'localbot.ai', 'luis.ai', 'lumen5.com', 'machinetranslation.com', 'magicstudio.com', 'magisto.com', 'mailshake.com/ai-email-writer', 'markcopy.ai', 'meetmaya.world', 'merlin.foyer.work', 'mieux.ai', 'mightygpt.com', 'mosaicml.com', 'murf.ai', 'myaiteam.com', 'mygptwizard.com', 'narakeet.com', 'nat.dev', 'nbox.ai', 'netus.ai', 'neural.love', 'neuraltext.com', 'newswriter.ai', 'nextbrain.ai', 'noluai.com', 'notion.so', 'novelai.net', 'numind.ai', 'ocoya.com', 'ollama.ai', 'openai.com', 'ora.ai', 'otterwriter.com', 'outwrite.com', 'pagelines.com', 'parallelgpt.ai', 'peppercontent.io', 'perplexity.ai', 'personal.ai', 'phind.com', 'phrasee.co', 'play.ht', 'poe.com', 'predis.ai', 'premai.io', 'preppally.com', 'presentationgpt.com', 'privatellm.app',
    'projectdecember.net', 'promptclub.ai', 'promptfolder.com', 'promptitude.io', 'qopywriter.ai', 'quickchat.ai/emerson', 'quillbot.com', 'rawshorts.com', 'read.ai', 'rebecc.ai', 'refraction.dev', 'regem.in/ai-writer', 'regie.ai', 'regisai.com', 'relevanceai.com', 'replika.com', 'replit.com', 'resemble.ai', 'resumerevival.xyz', 'riku.ai', 'rizzai.com', 'roamaround.app', 'rovioai.com', 'rytr.me', 'saga.so', 'sapling.ai', 'scribbyo.com', 'seowriting.ai', 'shakespearetoolbar.com', 'shortlyai.com', 'simpleshow.com', 'sitegpt.ai', 'smartwriter.ai', 'sonantic.io', 'soofy.io', 'soundful.com', 'speechify.com', 'splice.com', 'stability.ai', 'stableaudio.com', 'starryai.com', 'stealthgpt.ai', 'steve.ai', 'stork.ai', 'storyd.ai', 'storyscapeai.app', 'storytailor.ai', 'streamlit.io/generative-ai', 'summari.com', 'synesthesia.io', 'tabnine.com', 'talkai.info', 'talkpal.ai', 'talktowalle.com', 'team-gpt.com', 'tethered.dev', 'texta.ai', 'textcortex.com', 'textsynth.com', 'thirdai.com/pocketllm', 'threadcreator.com', 'thundercontent.com', 'tldrthis.com', 'tome.app', 'toolsaday.com/writing/text-genie', 'to-teach.ai', 'tutorai.me', 'tweetyai.com', 'twoslash.ai', 'typeright.com', 'typli.ai', 'uminal.com', 'unbounce.com/product/smart-copy', 'uniglobalcareers.com/cv-generator', 'usechat.ai', 'usemano.com', 'videomuse.app', 'vidext.app', 'virtualghostwriter.com', 'voicemod.net', 'warmer.ai', 'webllm.mlc.ai', 'wellsaidlabs.com', 'wepik.com', 'we-spots.com', 'wordplay.ai', 'wordtune.com', 'workflos.ai', 'woxo.tech', 'wpaibot.com', 'writecream.com', 'writefull.com', 'writegpt.ai', 'writeholo.com', 'writeme.ai', 'writer.com', 'writersbrew.app', 'writerx.co', 'writesonic.com', 'writesparkle.ai', 'writier.io', 'yarnit.app', 'zevbot.com', 'zomani.ai' )
| extend sit = parse_json(tostring(RawEventData.SensitiveInfoTypeData))
| mv-expand sit
| summarize Event_Count = count() by tostring(sit.SensitiveInfoTypeName), CountryCode, City, UserId = tostring(RawEventData.UserId), TargetDomain = tostring(RawEventData.TargetDomain), ActionType = tostring(RawEventData.ActionType), IPAddress = tostring(RawEventData.IPAddress), DeviceType = tostring(RawEventData.DeviceType), FileName = tostring(RawEventData.FileName), TimeBin = bin(TimeGenerated, 1h)
| extend SensitivityScore = case(
    tostring(sit_SensitiveInfoTypeName) in~ ("U.S. Social Security Number (SSN)", "Credit Card Number", "EU Tax Identification Number (TIN)", "Amazon S3 Client Secret Access Key", "All Credential Types"), 90,
    tostring(sit_SensitiveInfoTypeName) in~ ("All Full names"), 40,
    tostring(sit_SensitiveInfoTypeName) in~ ("Project Obsidian", "Phone Number"), 70,
    tostring(sit_SensitiveInfoTypeName) in~ ("IP"), 50,
    10)
| join kind=leftouter (
    IdentityInfo
    | where TimeGenerated > ago(lookback)
    | extend AccountUpn = tolower(AccountUPN)
) on $left.UserId == $right.AccountUpn
| join kind=leftouter (
    BehaviorAnalytics
    | where TimeGenerated > ago(lookback)
    | extend AccountUpn = tolower(UserPrincipalName)
) on $left.UserId == $right.AccountUpn
//| where BlastRadius == "High"
//| where RiskLevel == "High"
| where Department == User_Dept
| summarize arg_max(TimeGenerated, *) by sit_SensitiveInfoTypeName, CountryCode, City, UserId, TargetDomain, ActionType, IPAddress, DeviceType, FileName, TimeBin, Department, SensitivityScore
| summarize sum(Event_Count) by sit_SensitiveInfoTypeName, CountryCode, City, UserId, Department, TargetDomain, ActionType, IPAddress, DeviceType, FileName, TimeBin, BlastRadius, RiskLevel, SourceDevice, SourceIPAddress, SensitivityScore

With parameterized functions, follow these steps to simplify the plugin built on the query above:

  1. Define the variables/parameters upfront in the query (before creating the parameters in the UI). This puts the query in a temporarily unusable state, because the as-yet-undefined parameters cause syntax errors. That is fine, since the plan is to run the query as a function.

Fig. 1: Image showing partial query with the parameters to be defined (lookback and User_Dept) highlighted in red
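Stripped down to just those lines, the parameter references in the query look like this (excerpted from the full query above):

```kql
// lookback (timespan) drives the time filters in the joins:
| where TimeGenerated > ago(lookback)
// User_Dept (string) drives the department filter:
| where Department == User_Dept
```

Until the parameters are declared in the function definition, the editor flags these identifiers as unknown, which is the expected "temporary" breakage.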

  2. Create the parameters in the Log Analytics UI

Fig. 2: Screenshot showing the function menu in the Log Analytics UI

Give the function a name and define the parameters exactly as they appear in the query from step 1. In this example, we define two parameters: lookback, to store the lookback period passed to the time filter, and User_Dept, to store the user’s department.

Fig. 3. Function menu showing the two parameters defined in the function creation menu of Log Analytics

3. Test the query. Note the order of parameter definition in the UI: first User_Dept, then the lookback period. You can swap them if you like, but the order determines how you invoke the function. If the User_Dept parameter was defined first, its value must come first when executing the function; see the screenshot below. Switching them causes the wrong value to be passed to each parameter, and consequently 0 results are returned.

Fig. 4: Sample run of the function with the parameters specified in the correct order
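Assuming the function was saved as, say, SensitiveAIUsage with User_Dept declared first and lookback second (the function name is illustrative), invocation order matters:

```kql
// Correct: arguments follow the declared parameter order
SensitiveAIUsage('Sales', 30d)

// Switched: each value is bound to the wrong parameter,
// so the query returns no results
SensitiveAIUsage(30d, 'Sales')
```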

Effect of switched parameters:

Fig. 5: Sample function run with the parameters switched, showing the effect of this mistake

To edit the function, follow the steps below:

Navigate to the Logs menu for your Log Analytics workspace then select the function icon

 

Fig. 6: Partial view of the function being edited within the Log Analytics UI

Fig. 7: Image showing how to select the code button in the function menu to edit the function code

Once satisfied with the query and function, build your spec file for the Security Copilot plugin. Note the parameter definition and usage in the sections highlighted in red below

Fig. 8: Partial view of the YAML plugin showing the encapsulation of the 139 lines of KQL into a single one
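For reference, a skeletal spec along those lines might look like the following. This is a sketch: the plugin, skill, and function names are illustrative, and the workspace identifiers are placeholders to fill in with your own values.

```yaml
Descriptor:
  Name: SensitiveAIUsagePlugin
  DisplayName: Sensitive AI usage by department
  Description: Summarizes sensitive-data events on AI domains for a department.
SkillGroups:
  - Format: KQL
    Skills:
      - Name: GetSensitiveAIUsage
        Description: Returns sensitive info type events for a department over a lookback period.
        Inputs:
          - Name: User_Dept
            Description: Department to filter on, e.g. Sales
            Required: true
          - Name: lookback
            Description: Lookback period, e.g. 30d
            Required: true
        Settings:
          Target: Sentinel
          TenantId: <tenant-id>
          SubscriptionId: <subscription-id>
          ResourceGroupName: <resource-group>
          WorkspaceName: <workspace-name>
          Template: |-
            SensitiveAIUsage('{{User_Dept}}', {{lookback}})
```

The Template line is the single line that replaces the 139-line query.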

And that’s it, from 139 unwieldy KQL lines to one very manageable one! You are welcome 😊

Let’s now put it through its paces once uploaded into Security Copilot. We start by executing the plugin using its default settings via the direct skill invocation method. We see indeed that the prompt returns results based on the default values passed as parameters to the function:

Fig. 9: View of the Security Copilot landing page showing an example of direct skill execution of the created plugin

Fig. 10: Sample output showing records of users from the Sales department

Next, we still use direct skill invocation, but this time specify our own parameters:

Fig. 11: Direct skill invocation example with explicitly specified parameters: department and lookback period

Fig. 12: Prompt run showing the output corresponding to the selections in the previous direct skill invocation prompt

Lastly, we test it out with a natural language prompt:

Fig. 13: Security Copilot prompt bar showing an example of a natural language prompt seeking events related to users in the Human Resources department

Fig. 14: Output from the previous natural language prompt, focused on users from the HR department

Tip: The function does not execute successfully if the default summarize output is used without naming a variable. For example, a bare summarize count() produces a system-defined output column named count_. To avoid this issue, assign a user-defined name such as Event_Count, as shown in line 77 below:

Fig. 15: Highlighting the creation of a variable to store results from the summarize count() command
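In other words, using the Event_Count column name from this example:

```kql
// Problematic: the output column gets the system-defined name count_
| summarize count() by UserId

// Works: user-defined output variable
| summarize Event_Count = count() by UserId
```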

Conclusion

In conclusion, leveraging parameterized functions within KQL-based custom plugins in Microsoft Security Copilot can significantly streamline your data querying and analysis. By encapsulating reusable logic, improving query efficiency, and ensuring maintainability, these functions provide an efficient way to tap into data stored across Microsoft Sentinel, Defender XDR, and Azure Data Explorer clusters. Start integrating parameterized functions into your KQL-based Security Copilot plugins today, and share your feedback with us.

Additional Resources

Using parameterized functions in Microsoft Defender XDR

Using parameterized functions with Azure Data Explorer

Functions in Azure Monitor log queries – Azure Monitor | Microsoft Learn

Kusto Query Language (KQL) plugins in Microsoft Security Copilot | Microsoft Learn

Harnessing the power of KQL Plugins for enhanced security insights with Copilot for Security | Microsoft Community Hub