New tools for Security Copilot management and capacity planning

This post was originally published on this site.


Last year, we launched Microsoft Security Copilot with a bold goal: to help organizations protect at the speed of AI. Since then, Security Copilot has been transforming how IT and security operations teams respond to threats and manage their environments. In fact, research from live operations shows measurable impact: a 30% reduction in mean time to resolution for SOC teams, and a 54% decrease in time to resolve a device policy conflict for IT teams.

As adoption has grown, so has the complexity of customer needs. In many organizations, different teams, business units, and regions require distinct approaches to data access, capacity planning, and tooling. At the same time, customers want the flexibility to start small, test scenarios, and scale usage over time, without committing to long-term contracts. 

To meet these needs, Security Copilot is offered as a consumptive solution, allowing organizations to provision Security Compute Units (SCUs) as needed. This flexible model lowers the barrier to entry and encourages experimentation. And now, with workspaces and the Security Copilot capacity calculator to help manage capacity, customers can adopt Security Copilot with even more confidence and control. 

Workspaces 

Security operations don’t happen in a vacuum – different teams, business units, and regions have unique operational needs. This is why we’re excited to launch workspaces in public preview – a major enhancement to how teams can manage access, resources, and collaboration within Security Copilot. Workspaces provide a flexible way to segment environments, making it easier to align access and capacity with organizational needs, legal structures, or compliance requirements. 

 

Let’s take the example of a multinational organization with separate security and IT teams in North America, Europe, and Asia. With workspaces, this company can realize benefits in: 

  • Data boundaries: Each regional team operates within its own dedicated workspace, keeping data like prompt history local and accessible only to that team. This isolation ensures information stays relevant to the team and supports compliance with regional data residency requirements and internal policies. 
  • Role-based access control: Only authorized users specified by the admin have access to each workspace, and workspace management is restricted to users with administrator roles. 
  • Capacity planning: SCUs can be provisioned per workspace, giving admins the ability to right-size capacity based on each team’s workload. APAC can scale up during a surge while the US conserves usage during a quiet period. 

 Note: multi-workspace support is now available in Security Copilot, enabling users to manage prompt sessions across multiple workspaces. However, available agents that run autonomously are currently limited to a single workspace, and embedded experiences continue to route traffic exclusively through the tenant-level default workspace. Please refer to the documentation for full details. 

Security Copilot capacity calculator 

One of the most common questions we hear from customers is: “How many SCUs do I need to get started with Security Copilot?” Given the dynamic nature of AI-powered security workflows, forecasting compute needs can be a challenge, especially for teams just starting their journey. To make planning easier, we’re excited to announce the launch of the Security Copilot capacity calculator, now available in the Security Copilot standalone experience (Azure account required). 

This tool offers a practical starting point to help estimate how many SCUs your organization may require. With a few clicks, customers can get an idea of estimated SCU usage based on inputs like number of users in an embedded Security Copilot experience. While actual consumption may vary as it depends on real-time prompt activity, the calculator serves as a helpful guide for initial provisioning and budget planning.  

Once you’ve estimated your baseline needs, you can get started in Security Copilot or in the Azure portal. Security Copilot offers two flexible models to support both predictable workloads and unplanned spikes in usage: 

  • Provisioned SCUs: Ideal for predictable, ongoing operations. A minimum of one provisioned SCU is required. 
  • Overage SCUs: Designed for variable demand. Overage SCUs allow usage to scale seamlessly, and customers only pay for what they use, up to their chosen optional overage limit. 
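
To make the two models concrete, here is a rough back-of-the-envelope sketch in Python for estimating monthly spend. The unit price, hours, and overage figures are illustrative placeholders, not published pricing; use the capacity calculator and the pricing page for real numbers.

```python
# Back-of-the-envelope estimator for the provisioned + overage SCU model.
# The price and all inputs are illustrative placeholders, not published
# Microsoft pricing.

HOURS_PER_MONTH = 730  # average hours in a month

def estimate_monthly_cost(provisioned_scus, overage_scu_hours,
                          price_per_scu_hour=4.0, overage_limit_hours=None):
    """Return (estimated_cost, billed_overage_hours) for one month.

    provisioned_scus    -- SCUs reserved around the clock (minimum 1)
    overage_scu_hours   -- SCU-hours consumed beyond provisioned capacity
    overage_limit_hours -- optional cap on billable overage SCU-hours
    """
    if provisioned_scus < 1:
        raise ValueError("A minimum of one provisioned SCU is required")
    billed_overage = overage_scu_hours
    if overage_limit_hours is not None:
        billed_overage = min(billed_overage, overage_limit_hours)
    total_scu_hours = provisioned_scus * HOURS_PER_MONTH + billed_overage
    return total_scu_hours * price_per_scu_hour, billed_overage
```

For example, two provisioned SCUs plus 100 overage SCU-hours capped at an optional limit of 50 would bill 1,510 SCU-hours at the placeholder rate.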

With the capacity calculator, organizations can confidently begin their Security Copilot journey and better manage usage to align with their business needs. After getting started, teams can monitor consumption through the in-product usage dashboard and adjust capacity as demand fluctuates. Learn more about Security Copilot pricing here. 

Get started with Security Copilot today

Together, workspaces and the capacity calculator provide organizations with deeper insight, flexibility, and control over their Security Copilot usage. These features address the real-world challenges of managing diverse teams, complex environments, and evolving workloads. Whether you’re just starting your Security Copilot journey or looking to optimize your existing usage, these tools help you right-size capacity, maintain compliance, and deliver actionable AI assistance for your security and IT teams. 

Discover Security Copilot use cases, best practices, and customer success stories in the Security Copilot adoption hub. Learn more about our most recent Security Copilot innovations for IT teams here. If you have questions or need support, don’t hesitate to contact us or reach out to your account manager. 

Smarter Prompts for Smarter Investigations: Dynamic Prompt Suggestions in Security Copilot


When a security analyst turns to an AI system for help—whether to hunt threats, investigate alerts, or triage incidents—the first step is usually a natural language prompt. But if that prompt is too vague, too general, or not aligned with the system’s capabilities, the response won’t be helpful. In high-stakes environments like cybersecurity, that’s not just a missed opportunity, it’s a risk.

That’s exactly the problem we tackled in our recent paper, Dynamic Context-Aware Prompt Recommendations for Domain-Specific Applications, now published and deployed as a new skill in Security Copilot.

Why Prompting Is a Bigger Problem in Security Than It Seems

LLMs have made impressive progress in general-purpose settings—helping users write emails, summarize documents, or answer trivia. These systems often include smart prompt recommendations based on the flow of conversation. But when you shift into domain-specific systems like Microsoft Security Copilot, the game changes.

Security analysts don’t ask open-ended questions. They ask task-specific ones:

  • “List devices that ran a malicious file in the last 24 hours.”
  • “Correlate failed login attempts across services.”
  • “Visualize outbound traffic from compromised machines.”

These questions map directly to skills—domain-specific functions that query data, connect APIs, or launch workflows. And that means prompt recommendations need to be tightly aligned with the available skills, underlying datasets, and current investigation context. General-purpose prompt systems don’t know how to do that.

What Makes Domain-Specific Prompting Hard

Designing prompt recommendations for systems like Security Copilot comes with unique constraints:

  1. Constrained Skill Set: The AI can only take actions it’s configured to support. Prompts must align with those skills—no hallucinations allowed.
  2. Evolving Context: A single investigation might involve multiple rounds of prompts, results, follow-ups, and pivots. Prompt suggestions must adapt dynamically.
  3. Deep Domain Knowledge: It’s not enough to suggest “Check network logs.” A useful prompt needs to reflect how real analysts work—across Defender, Sentinel, and more.
  4. Scalability: As new skills are added, prompt systems must scale without requiring constant manual curation or rewriting.

Our Approach: Dynamic, Context-Aware, and Skill-Constrained

 

We introduce a dynamic prompt recommendation system for Security Copilot. The key innovations include:

  • Contextual understanding of the session: We track the user’s investigation path and surface prompts that are relevant to what they’re doing now, not just generic starters.
  • Skill-awareness: The system knows what internal capabilities exist (e.g., “list devices,” “query login events”) and only recommends prompts that can be executed via those skills.
  • Domain knowledge injection: By encoding metadata about products, datasets, and typical workflows (e.g., MITRE attack stages), the system produces prompts that make sense in security analyst workflows.
  • Scalable prompt generation: Rather than relying on hardcoded lists, our system dynamically generates and ranks prompt suggestions.
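
As a rough illustration of the skill-constrained idea (not the paper's actual implementation), a recommender might filter candidate prompts to those backed by a registered skill and rank the survivors by relevance to the current session:

```python
# Illustrative sketch of skill-constrained, context-aware prompt ranking.
# Candidates are kept only if they map to a registered skill, then ranked
# by keyword overlap with the session context. Real systems use far richer
# relevance models; names here are made up for the example.

def recommend_prompts(candidates, skills, session_context, top_k=3):
    """candidates: list of (prompt_text, skill_name) pairs."""
    context_words = set(session_context.lower().split())
    scored = []
    for text, skill in candidates:
        if skill not in skills:      # skill-awareness: never suggest a
            continue                 # prompt the system cannot execute
        overlap = len(context_words & set(text.lower().split()))
        scored.append((overlap, text))
    scored.sort(key=lambda s: -s[0])  # most context-relevant first
    return [text for _, text in scored[:top_k]]
```

A prompt tied to an unregistered skill is dropped outright, which is the "no hallucinations allowed" constraint from the list above.
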

What It Looks Like in Action

The dynamic prompt suggestion system is now live in Microsoft Entra, available in both Embedded and Immersive experiences. When a user enters a natural language prompt, the system automatically suggests several context-aware follow-up prompts, based on the user’s prior interactions and the system’s understanding of the current task.

 

These suggestions are generated in real time—users can simply click on a suggestion, and it’s executed immediately, allowing for quick and seamless follow-up queries without needing to rephrase or retype.

Let’s walk through two examples:

Embedded Experience

We begin with the prompt: “How does Microsoft determine Risky Users?”

 

The system returns the response and generates 3 follow-up suggestions, such as: “List dismissed risky detections.”

We click on that suggestion, which executes the query and shows the results.

New suggestions continue to appear after each prompt execution, making it easy to explore related insights.

Immersive Experience

We start with a prompt: “Who am I?”

 

Among the 5 suggested prompts, we select: “List the groups user nase74@woodgrove.ms is a member of.”

The user clicks, the query runs, and more follow-up suggestions appear, enabling a natural, guided flow throughout the session.

 

Why This Matters for the Future of Security AI

Prompting isn’t just an interface detail—it’s the entry point to intelligence. And in cybersecurity, where time, accuracy, and reliability matter, we need AI systems that are not just capable, but cooperative. Our research contributes to a future where security analysts don’t have to be prompt engineers to get the most out of AI.

By making prompt recommendations dynamic, contextual, and grounded in real domain knowledge, we help close the gap between LLM potential and security reality.

 

Interested in learning more?
Check out the full paper: Dynamic Context-Aware Prompt Recommendations for Domain-Specific Applications

If you’re using or building upon this work in your own research, we’d appreciate you citing our paper:

@article{tang2025dynamic,
title={Dynamic Context-Aware Prompt Recommendation for Domain-Specific AI Applications},
author={Tang, Xinye and Zhai, Haijun and Belwal, Chaitanya and Thayanithi, Vineeth and Baumann, Philip and Roy, Yogesh K},
journal={arXiv preprint arXiv:2506.20815},
year={2025}
}

 

Automating Phishing Email Triage with Microsoft Security Copilot


This blog details automating phishing email triage using Azure Logic Apps, Azure Function Apps, and Microsoft Security Copilot. Deployable in under 10 minutes, this solution primarily analyzes email intent without relying on traditional indicators of compromise, accurately classifying benign/junk, suspicious, and phishing emails. Benefits include a reduced manual workload, improved threat detection, and optional seamless integration with Microsoft Sentinel – enabling analysts to see Security Copilot analysis within the incident itself.

Designed for flexibility and control, this Logic App is a customizable solution that can be self-deployed from GitHub. It helps automate phishing response at scale without requiring deep coding expertise, making it ideal for teams that prefer a more configurable approach and want to tailor workflows to their environment. The solution streamlines response and significantly reduces manual effort.

Access the full solution on the Security Copilot Github:
GitHub – UserReportedPhishing Solution.

For teams looking for a more sophisticated, fully integrated experience, the Security Copilot Phishing Triage Agent represents the next generation of phishing response. Natively embedded in Microsoft Defender, the agent autonomously triages phishing incidents with minimal setup. It uses advanced LLM-based reasoning to resolve false alarms, enabling analysts to stay focused on real threats. The agent offers step-by-step decision transparency and continuously learns from user feedback. Read the official announcement here.

Introduction: Phishing Challenges Continue to Evolve

Phishing continues to evolve in both scale and sophistication, but a growing challenge for defenders isn’t just stopping phishing, it’s scaling response. Thanks to tools like Outlook’s “Report Phishing” button and increased user awareness, organizations are now flooded with user-reported emails, many of which are ambiguous or benign. This has created a paradox: better detection by users has overwhelmed SOC teams, turning email triage into a manual, rotational task dreaded for its repetitiveness and time cost, often taking over 25 minutes per email to review.

Our solution addresses that problem by automating the triage of user-reported phishing through AI-driven intent analysis. It’s not built to replace your secure email gateways or Microsoft Defender for Office 365; those tools have already done their job. This system assumes the email:

  • Slipped past existing filters,
  • Was suspicious enough for a user to escalate,
  • Lacks typical IOCs like malicious domains or attachments.

As a former attacker, I spent years crafting high-quality phishing emails to penetrate the defenses of major banks. Effective phishing doesn’t rely on obvious IOCs like malicious domains, URLs, or attachments… the infrastructure often appears clean. The danger lies in the intent. This is where Security Copilot’s LLM-based reasoning is critical, analyzing structure, context, tone, and seasonal pretexts to determine whether an email is phishing, suspicious, spam, or legitimate.

What makes this novel is that it’s the first solution built specifically for the “last mile” of phishing defense, where human suspicion meets automation, and intent is the only signal left to analyze. It transforms noisy inboxes into structured intelligence and empowers analysts to focus only on what truly matters.

Solution Overview: How the Logic App Solution Works (and Why It’s Different)

Core Components:

  • Azure Logic Apps: Orchestrates the entire workflow, from ingestion to analysis, and is 100% customizable.
  • Azure Function Apps: Parses and normalizes email data for efficient AI consumption.
  • Microsoft Security Copilot: Performs sophisticated AI-based phishing analysis by understanding email intent and tactics, rather than relying exclusively on predefined malicious indicators.

Key Benefits:

  • Rapid Analysis: Processes phishing alerts and, in minutes, delivers comprehensive reports that empower analysts to make faster, more informed triage decisions – compared to manual reviews that can take up to 30 minutes. And, unlike analysts, Security Copilot requires zero sleep! 
  • AI-driven Insights: LLM-based analysis is leveraged to generate clear explanations of classifications by assessing behavioral and contextual signals like urgency, seasonal threats, Business Email Compromise (BEC), subtle language clues, and otherwise sophisticated techniques. Most importantly, it identifies benign emails, which are often the bulk of reported emails.
  • Detailed, Actionable Reports: Generates clear, human-readable HTML reports summarizing threats and recommendations for analyst review.
  • Robust Attachment Parsing: Automatically examines attachments like PDFs and Excel documents for malicious content or contextual inconsistencies.
  • Integrated with Microsoft Sentinel: Optional integration with Sentinel ensures central incident tracking and comprehensive threat management. Analysis is attached directly to the incident, saving analysts more time.
  • Customization: Add, move, or replace any element of the Logic App or prompt to fit your specific workflows.

Deployment Guide: Quick, Secure, and Reliable Setup

The solution provides Azure Resource Manager (ARM) templates for rapid deployment:

Prerequisites:

  • Azure Subscription with Contributor access to a resource group.
  • Microsoft Security Copilot enabled.
  • Dedicated Office 365 shared mailbox (e.g., phishing@yourdomain.com) with Mailbox.Read.Shared permissions.
  • (Optional) Microsoft Sentinel workspace.

Refer to the up-to-date deployment instructions on the Security Copilot GitHub page.

Technical Architecture & Workflow:

The automated workflow operates as follows:

Email Ingestion:

  • Monitors the shared mailbox via Office 365 connector.
  • Triggers on new email arrivals every 3 minutes.
  • Assumes that the reported email has arrived as an attachment to a “carrier” email.

Determine if the Email Came from Defender/Sentinel:

If the email came from Defender, it will have a subject prefixed with “Phishing”; if not, the workflow takes the “False” branch. Change this condition as necessary.

Initial Email Processing:

  • Exports raw email content from the shared mailbox.
  • Determines if .msg or .eml attachments are in binary format and converts if necessary.

Email Parsing via Azure Function App:

  • Extracts data from email content and attachments (URLs, sender info, email body, etc.) and returns a JSON structure.
  • Prepares clean JSON data for AI analysis.
  • This step is required to “prep” the data for LLM analysis due to token limits.
  • Click on the “Parse Email” block to see the output of the Function App for any troubleshooting. You’ll also notice a number of JSON keys that are not used but provided for flexibility.
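
As a rough sketch of what such a parsing step does (the actual Function App's fields and logic will differ), Python's standard email library can reduce a raw .eml message to a compact JSON structure ready for LLM analysis:

```python
# Illustrative sketch of email parsing for LLM consumption: extract sender,
# subject, body, and URLs from a raw .eml message and return compact JSON.
# Field names are assumptions for this example, not the shipped schema.

import email
import json
import re
from email import policy

def parse_eml(raw_bytes):
    msg = email.message_from_bytes(raw_bytes, policy=policy.default)
    body_part = msg.get_body(preferencelist=("plain", "html"))
    body = (body_part.get_content() if body_part else "").strip()
    return json.dumps({
        "sender": msg.get("From", ""),
        "subject": msg.get("Subject", ""),
        "body": body,
        "urls": re.findall(r"https?://[^\s\"'>]+", body),
    })
```

Trimming the message down to the fields that matter is also what keeps the payload inside the LLM token limits mentioned above.
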

Security Copilot Advanced AI Reasoning:

  • Analyzes email content using a comprehensive prompt that evaluates behavioral and seasonal patterns, BEC indicators, attachment context, and social engineering signals.
  • Scores cumulative risk based on structured heuristics without relying solely on known malicious indicators.
  • Returns validated JSON output (some customers parse this JSON and perform additional actions).
  • This is where you would customize the prompt, should you need to account for your own organizational situations when tuning the Logic App:

JSON Normalization & Error Handling:

  • A “normalization” Azure Function ensures output matches the expected JSON schema.
  • LLMs will sometimes stray from a strict output structure; this step aims to solve that problem.
  • If you add or remove anything from the Parse Email code that alters the structure of the JSON, this and the next block will need to be updated to match your new structure.
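
A minimal sketch of such a normalization step might look like the following; the schema keys and defaults here are illustrative, not the shipped ones:

```python
# Illustrative normalization: coerce whatever JSON the LLM returned into the
# schema downstream blocks expect, filling defaults for missing keys and
# dropping extras. The schema below is an assumption for this example.

import json

EXPECTED_SCHEMA = {            # key -> default value
    "classification": "Unknown",
    "confidence": 0,
    "reasoning": "",
    "indicators": [],
}

def normalize(llm_output: str) -> dict:
    try:
        raw = json.loads(llm_output)
    except json.JSONDecodeError:
        raw = {}               # fall back to all-defaults on malformed JSON
    if not isinstance(raw, dict):
        raw = {}
    return {key: raw.get(key, default) for key, default in EXPECTED_SCHEMA.items()}
```

Because every key is either copied or defaulted, downstream report blocks never see a missing field even when the LLM strays from the requested structure.
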

Detailed HTML Reporting:

  • Generates a detailed HTML report summarizing AI findings, indicators, and recommended actions.
  • Reports are emailed directly to SOC team distribution lists or ticketing systems.

Optional Sentinel Integration:

Adds the reasoning & output from Security Copilot directly to the incident comments. This is the ideal location for output since the analyst is already in the security.microsoft.com portal. It waits up to 15 minutes for logs to appear, in situations where the user reports before an incident is created.

The solution works well out of the box but may require some tuning, so give it a test. Here are some examples of the type of reasoning Security Copilot provides.

Benign email detection: 

 

Example of phishing email detection:

 

 

More sophisticated phishing with subtle clues:

 

 

 

Enhanced Technical Details & Clarifications

Attachment Processing:

  • When multiple email attachments are detected, the Logic App processes each binary-format email sequentially.
  • If PDF or Excel attachments are detected, they are parsed for content and are evaluated appropriately for content and intent.

Security Copilot Reliability:

  • The Security Copilot Logic App API call uses an extensive retry policy (10 retries at 10-minute intervals) to ensure reliable AI analysis despite intermittent service latency.
  • If you run out of SCUs in an hour, it will pause until they are refreshed and continue.

Sentinel Integration Reliability:

  • Acknowledges inherent Sentinel logging delays (up to 15 minutes).
  • Implements retry logic and explicit manual alerting for unmatched incidents, if the analysis runs before the incident is created.
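
The wait-and-retry behavior can be sketched as a simple polling loop; this illustrates the pattern, not the Logic App's actual implementation, and `find_incident` is a stand-in for the real Sentinel lookup:

```python
# Illustrative polling loop mirroring the described behavior: wait up to a
# deadline for a matching Sentinel incident to appear, then fall back to the
# manual-alert path (returning None) if ingestion is slower than expected.

import time

def wait_for_incident(find_incident, reported_id,
                      timeout_s=900, interval_s=60, sleep=time.sleep):
    """Poll find_incident(reported_id) until it returns an incident or
    timeout_s elapses; return the incident, or None for manual alerting."""
    waited = 0
    while waited <= timeout_s:
        incident = find_incident(reported_id)
        if incident is not None:
            return incident
        sleep(interval_s)
        waited += interval_s
    return None
```

The injectable `sleep` parameter is just there so the loop can be exercised without real delays.
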

Security Best Practices:

  • Compare the Function & Logic App to your company security policies to ensure compliance.
  • Credentials, API keys, and sensitive details utilize Azure Managed Identities or secure API connections. No secrets are stored in plaintext.
  • Azure Function Apps perform only safe parsing operations; attachments and content are never executed or opened insecurely.

Be sure to check out how the Microsoft Defender for Office team is improving detection capabilities as well: Microsoft Defender for Office 365’s Language AI for Phish: Enhancing Email Security | Microsoft Community Hub.

Using parameterized functions with KQL-based custom plugins in Microsoft Security Copilot


In this blog, I will walk through how you can build functions based on a Microsoft Sentinel Log Analytics workspace for use in custom KQL-based plugins for Security Copilot. The same approach can be used for Azure Data Explorer and Defender XDR, so long as you follow the specific guidance for either platform. A link to those steps is provided in the Additional Resources section at the end of this blog.

But first, it’s helpful to clarify what parameterized functions are and why they are important in the context of Security Copilot KQL-based plugins. Parameterized functions accept input values (parameters) such as lookback periods or entities, allowing you to dynamically alter parts of a query without rewriting the entire query logic.

Parameterized functions are important in the context of Security Copilot plugins because of:

  1. Dynamic prompt completion:
    Security Copilot plugins often accept user input (e.g., usernames, time ranges, IPs). Parameterized functions allow these inputs to be consistently injected into KQL queries without rebuilding query logic.
  2. Plugin reusability:
    By using parameters, a single function can serve multiple investigation scenarios (e.g., checking sign-ins, data access, or alerts for any user or timeframe) instead of hardcoding different versions.
  3. Maintainability and modularity:
    Parameterized functions centralize query logic, making it easier to update or enhance without modifying every instance across the plugin spec. To modify the logic, just edit the function in Log Analytics, test it, then save it – without needing to change the plugin at all or re-upload it into Security Copilot. This also significantly reduces the effort of keeping the query portion of the YAML perfectly indented and tabbed, as required by the OpenAPI specification; you only need to worry about formatting a single line instead of several – potentially hundreds.
  4. Validation:
    Separating query logic from input parameters improves query reliability by avoiding the possibility of malformed queries. No matter what the input is, it’s treated as a value, not as part of the query logic.
  5. Plugin Spec mapping:
    OpenAPI-based Security Copilot plugins can map user-provided inputs directly to function parameters, making the interaction between user intent and query execution seamless.

Practical example

In this case, we have a 139-line KQL query that we will reduce to exactly one line in the KQL plugin. In other cases, this number could be even higher. Without using functions, this entire query would have to form part of the plugin.

Note: The rest of this blog assumes you are familiar with KQL custom plugins – how they work and how to upload them into Security Copilot.

 

CloudAppEvents | where RawEventData.TargetDomain has_any ( ‘grok.com’, ‘x.ai’, ‘mistral.ai’, ‘cohere.ai’, ‘perplexity.ai’, ‘huggingface.co’, ‘adventureai.gg’, ‘ai.google/discover/palm2’, ‘ai.meta.com/llama’, ‘ai2006.io’, ‘aibuddy.chat’, ‘aidungeon.io’, ‘aigcdeep.com’, ‘ai-ghostwriter.com’, ‘aiisajoke.com’, ‘ailessonplan.com’, ‘aipoemgenerator.org’, ‘aissistify.com’, ‘ai-writer.com’, ‘aiwritingpal.com’, ‘akeeva.co’, ‘aleph-alpha.com/luminous’, ‘alphacode.deepmind.com’, ‘analogenie.com’, ‘anthropic.com/index/claude-2’, ‘anthropic.com/index/introducing-claude’, ‘anyword.com’, ‘app.getmerlin.in’, ‘app.inferkit.com’, ‘app.longshot.ai’, ‘app.neuro-flash.com’, ‘applaime.com’, ‘articlefiesta.com’, ‘articleforge.com’, ‘askbrian.ai’, ‘aws.amazon.com/bedrock/titan’, ‘azure.microsoft.com/en-us/products/ai-services/openai-service’, ‘bard.google.com’, ‘beacons.ai/linea_builds’, ‘bearly.ai’, ‘beatoven.ai’, ‘beautiful.ai’, ‘beewriter.com’, ‘bettersynonyms.com’, ‘blenderbot.ai’, ‘bomml.ai’, ‘bots.miku.gg’, ‘browsegpt.ai’, ‘bulkgpt.ai’, ‘buster.ai’, ‘censusgpt.com’, ‘chai-research.com’, ‘character.ai’, ‘charley.ai’, ‘charshift.com’, ‘chat.lmsys.org’, ‘chat.mymap.ai’, ‘chatbase.co’, ‘chatbotgen.com’, ‘chatgpt.com’, ‘chatgptdemo.net’, ‘chatgptduo.com’, ‘chatgptspanish.org’, ‘chatpdf.com’, ‘chattab.app’, ‘claid.ai’, ‘claralabs.com’, ‘claude.ai/login’, ‘clipdrop.co/stable-diffusion’, ‘cmdj.app’, ‘codesnippets.ai’, ‘cohere.com’, ‘cohesive.so’, ‘compose.ai’, ‘contentbot.ai’, ‘contentvillain.com’, ‘copy.ai’, ‘copymatic.ai’, ‘copymonkey.ai’, ‘copysmith.ai’, ‘copyter.com’, ‘coursebox.ai’, ‘coverler.com’, ‘craftly.ai’, ‘crammer.app’, ‘creaitor.ai’, ‘dante-ai.com’, ‘databricks.com’, ‘deepai.org’, ‘deep-image.ai’, ‘deepreview.eu’, ‘descrii.tech’, ‘designs.ai’, ‘docgpt.ai’, ‘dreamily.ai’, ‘editgpt.app’, ‘edwardbot.com’, ‘eilla.ai’, ‘elai.io’, ‘elephas.app’, ‘eleuther.ai’, ‘essayailab.com’, ‘essay-builder.ai’, ‘essaygrader.ai’, ‘essaypal.ai’, ‘falconllm.tii.ae’, ‘finechat.ai’, ‘finito.ai’, 
‘fireflies.ai’, ‘firefly.adobe.com’, ‘firetexts.co’, ‘flowgpt.com’, ‘flowrite.com’, ‘forethought.ai’, ‘formwise.ai’, ‘frase.io’, ‘freedomgpt.com’, ‘gajix.com’, ‘gemini.google.com’, ‘genei.io’, ‘generatorxyz.com’, ‘getchunky.io’, ‘getgptapi.com’, ‘getliner.com’, ‘getsmartgpt.com’, ‘getvoila.ai’, ‘gista.co’, ‘github.com/features/copilot’, ‘giti.ai’, ‘gizzmo.ai’, ‘glasp.co’, ‘gliglish.com’, ‘godinabox.co’, ‘gozen.io’, ‘gpt.h2o.ai’, ‘gpt3demo.com’, ‘gpt4all.io’, ‘gpt-4chan+)’, ‘gpt6.ai’, ‘gptassistant.app’, ‘gptfy.co’, ‘gptgame.app’, ‘gptgo.ai’, ‘gptkit.ai’, ‘gpt-persona.com’, ‘gpt-ppt.neftup.app’, ‘gptzero.me’, ‘grammarly.com’, ‘hal9.com’, ‘headlime.com’, ‘heimdallapp.org’, ‘helperai.info’, ‘heygen.com’, ‘heygpt.chat’, ‘hippocraticai.com’, ‘huggingface.co/spaces/tiiuae/falcon-180b-demo’, ‘humanpal.io’, ‘hypotenuse.ai’, ‘ichatwithgpt.com’, ‘ideasai.com’, ‘ingestai.io’, ‘inkforall.com’, ‘inputai.com/chat/gpt-4’, ‘instantanswers.xyz’, ‘instatext.io’, ‘iris.ai’, ‘jasper.ai’, ‘jigso.io’, ‘kafkai.com’, ‘kibo.vercel.app’, ‘kloud.chat’, ‘koala.sh’, ‘krater.ai’, ‘lamini.ai’, ‘langchain.com’, ‘laragpt.com’, ‘learn.xyz’, ‘learnitive.com’, ‘learnt.ai’, ‘letsenhance.io’, ‘letsrevive.app’, ‘lexalytics.com’, ‘lgresearch.ai’, ‘linke.ai’, ‘localbot.ai’, ‘luis.ai’, ‘lumen5.com’, ‘machinetranslation.com’, ‘magicstudio.com’, ‘magisto.com’, ‘mailshake.com/ai-email-writer’, ‘markcopy.ai’, ‘meetmaya.world’, ‘merlin.foyer.work’, ‘mieux.ai’, ‘mightygpt.com’, ‘mosaicml.com’, ‘murf.ai’, ‘myaiteam.com’, ‘mygptwizard.com’, ‘narakeet.com’, ‘nat.dev’, ‘nbox.ai’, ‘netus.ai’, ‘neural.love’, ‘neuraltext.com’, ‘newswriter.ai’, ‘nextbrain.ai’, ‘noluai.com’, ‘notion.so’, ‘novelai.net’, ‘numind.ai’, ‘ocoya.com’, ‘ollama.ai’, ‘openai.com’, ‘ora.ai’, ‘otterwriter.com’, ‘outwrite.com’, ‘pagelines.com’, ‘parallelgpt.ai’, ‘peppercontent.io’, ‘perplexity.ai’, ‘personal.ai’, ‘phind.com’, ‘phrasee.co’, ‘play.ht’, ‘poe.com’, ‘predis.ai’, ‘premai.io’, ‘preppally.com’, ‘presentationgpt.com’, ‘privatellm.app’, 
‘projectdecember.net’, ‘promptclub.ai’, ‘promptfolder.com’, ‘promptitude.io’, ‘qopywriter.ai’, ‘quickchat.ai/emerson’, ‘quillbot.com’, ‘rawshorts.com’, ‘read.ai’, ‘rebecc.ai’, ‘refraction.dev’, ‘regem.in/ai-writer’, ‘regie.ai’, ‘regisai.com’, ‘relevanceai.com’, ‘replika.com’, ‘replit.com’, ‘resemble.ai’, ‘resumerevival.xyz’, ‘riku.ai’, ‘rizzai.com’, ‘roamaround.app’, ‘rovioai.com’, ‘rytr.me’, ‘saga.so’, ‘sapling.ai’, ‘scribbyo.com’, ‘seowriting.ai’, ‘shakespearetoolbar.com’, ‘shortlyai.com’, ‘simpleshow.com’, ‘sitegpt.ai’, ‘smartwriter.ai’, ‘sonantic.io’, ‘soofy.io’, ‘soundful.com’, ‘speechify.com’, ‘splice.com’, ‘stability.ai’, ‘stableaudio.com’, ‘starryai.com’, ‘stealthgpt.ai’, ‘steve.ai’, ‘stork.ai’, ‘storyd.ai’, ‘storyscapeai.app’, ‘storytailor.ai’, ‘streamlit.io/generative-ai’, ‘summari.com’, ‘synesthesia.io’, ‘tabnine.com’, ‘talkai.info’, ‘talkpal.ai’, ‘talktowalle.com’, ‘team-gpt.com’, ‘tethered.dev’, ‘texta.ai’, ‘textcortex.com’, ‘textsynth.com’, ‘thirdai.com/pocketllm’, ‘threadcreator.com’, ‘thundercontent.com’, ‘tldrthis.com’, ‘tome.app’, ‘toolsaday.com/writing/text-genie’, ‘to-teach.ai’, ‘tutorai.me’, ‘tweetyai.com’, ‘twoslash.ai’, ‘typeright.com’, ‘typli.ai’, ‘uminal.com’, ‘unbounce.com/product/smart-copy’, ‘uniglobalcareers.com/cv-generator’, ‘usechat.ai’, ‘usemano.com’, ‘videomuse.app’, ‘vidext.app’, ‘virtualghostwriter.com’, ‘voicemod.net’, ‘warmer.ai’, ‘webllm.mlc.ai’, ‘wellsaidlabs.com’, ‘wepik.com’, ‘we-spots.com’, ‘wordplay.ai’, ‘wordtune.com’, ‘workflos.ai’, ‘woxo.tech’, ‘wpaibot.com’, ‘writecream.com’, ‘writefull.com’, ‘writegpt.ai’, ‘writeholo.com’, ‘writeme.ai’, ‘writer.com’, ‘writersbrew.app’, ‘writerx.co’, ‘writesonic.com’, ‘writesparkle.ai’, ‘writier.io’, ‘yarnit.app’, ‘zevbot.com’, ‘zomani.ai’ ) | extend sit = parse_json(tostring(RawEventData.SensitiveInfoTypeData)) | mv-expand sit | summarize Event_Count = count() by tostring(sit.SensitiveInfoTypeName), CountryCode, City, UserId = tostring(RawEventData.UserId), TargetDomain = 
tostring(RawEventData.TargetDomain), ActionType = tostring(RawEventData.ActionType), IPAddress = tostring(RawEventData.IPAddress), DeviceType = tostring(RawEventData.DeviceType), FileName = tostring(RawEventData.FileName), TimeBin = bin(TimeGenerated, 1h) | extend SensitivityScore = case(tostring(sit_SensitiveInfoTypeName) in~ (“U.S. Social Security Number (SSN)”, “Credit Card Number”, “EU Tax Identification Number (TIN)”,”Amazon S3 Client Secret Access Key”,”All Credential Types”), 90, tostring(sit_SensitiveInfoTypeName) in~ (“All Full names”), 40, tostring(sit_SensitiveInfoTypeName) in~ (“Project Obsidian”, “Phone Number”), 70, tostring(sit_SensitiveInfoTypeName) in~ (“IP”), 50,10 ) | join kind=leftouter ( IdentityInfo | where TimeGenerated > ago(lookback) | extend AccountUpn = tolower(AccountUPN) ) on $left.UserId == $right.AccountUpn | join kind=leftouter ( BehaviorAnalytics | where TimeGenerated > ago(lookback) | extend AccountUpn = tolower(UserPrincipalName) ) on $left.UserId == $right.AccountUpn //| where BlastRadius == “High” //| where RiskLevel == “High” | where Department == User_Dept | summarize arg_max(TimeGenerated, *) by sit_SensitiveInfoTypeName, CountryCode, City, UserId, TargetDomain, ActionType, IPAddress, DeviceType, FileName, TimeBin, Department, SensitivityScore | summarize sum(Event_Count) by sit_SensitiveInfoTypeName, CountryCode, City, UserId, Department, TargetDomain, ActionType, IPAddress, DeviceType, FileName, TimeBin, BlastRadius, RiskLevel, SourceDevice, SourceIPAddress, SensitivityScore

To simplify the plugin that will be built from the query above using parameterized functions, follow these steps:

  1. Define the variables/parameters upfront in the query (BEFORE creating the parameters in the UI). This will put the query in a “temporary” unusable state because the undeclared parameters will cause syntax errors at this stage. However, since the plan is to run the query as a function, this is OK.

Fig. 1: Image showing a partial query with the parameters to be defined highlighted in red, i.e. lookback and User_Dept
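For reference, the relevant lines in the query are just bare identifiers at this point, which is why the editor flags them; a trimmed fragment (the full query is shown above):

```kusto
// Fragment only — 'lookback' and 'User_Dept' are deliberately left
// undeclared here; the function parameters created in the next step
// will supply their values.
| where TimeGenerated > ago(lookback)    // 'lookback': a timespan, e.g. 30d
| where Department == User_Dept          // 'User_Dept': a string, e.g. 'Sales'
```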

  2. Create the parameters in the Log Analytics UI

Fig 2. Screenshot showing the function menu in the Log Analytics UI

Give the function a name and define the parameters exactly as they appear in the query in step 1 above. In this example, we are defining two parameters: lookback, which stores the lookback period passed to the time filter, and User_Dept, which stores the user’s department.

Fig. 3. Function menu showing the two parameters defined in the function creation menu of Log Analytics

3. Test the query. Note the order in which the parameters were defined in the UI, i.e. first User_Dept, THEN the lookback period. You can interchange them if you like, but the order determines how you submit the query when calling the function: if the User_Dept parameter was defined first, it must come first when executing the function. See the below screenshot. Switching them will result in the wrong value being passed to each parameter and, consequently, 0 results being returned.

Fig. 4: Sample run of the function with the parameters specified in the correct order

Effect of switched parameters:

Fig. 5: Sample function run with the functions switched to show effect of this situation
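Assuming the query was saved as a function named SensitiveDataEvents (a hypothetical name; use whatever you named yours) with User_Dept defined first, the two runs above correspond to something like:

```kusto
// Correct order: the department string first, then the lookback timespan
SensitiveDataEvents('Sales', 30d)

// Switching the arguments feeds each value to the wrong parameter, so the
// Department filter matches nothing and the run returns 0 results
```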

To edit the function, follow the steps below:

Navigate to the Logs menu for your Log Analytics workspace, then select the function icon

 

Fig. 6: Partial view of the function being edited within the Log Analytics UI

Fig. 7: Image showing how to select the code button in the function menu to edit the function code

Once satisfied with the query and function, build your spec file for the Security Copilot plugin. Note the parameter definition and usage in the sections highlighted in red below

Fig. 8: Partial view of the YAML plugin spec showing the encapsulation of the 139 lines of KQL into a single one

And that’s it, from 139 unwieldy KQL lines to one very manageable one! You are welcome 😊
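As a rough sketch of what such a spec can look like (every name, description, and ID below is a placeholder, and the exact schema is described in the KQL plugin documentation linked under Additional Resources), the spec essentially wraps the one-line function call in a skill template:

```yaml
Descriptor:
  Name: SensitiveDataPlugin
  DisplayName: Sensitive Data Events
  Description: Summarizes sensitive-data events for a department over a lookback period.
SkillGroups:
  - Format: KQL
    Skills:
      - Name: GetSensitiveDataEvents
        DisplayName: Get sensitive data events by department
        Description: Returns sensitive-data events for users in a given department.
        Inputs:
          - Name: User_Dept
            Description: Department to filter on, e.g. Sales
            Required: true
          - Name: lookback
            Description: Lookback period, e.g. 30d
            Required: false
        Settings:
          Target: Sentinel
          TenantId: <your-tenant-id>
          SubscriptionId: <your-subscription-id>
          ResourceGroupName: <your-resource-group>
          WorkspaceName: <your-workspace>
          # The entire 139-line query collapses to one function call:
          Template: |-
            SensitiveDataEvents('{{User_Dept}}', {{lookback}})
```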

Let’s now put it through its paces once uploaded into Security Copilot. We start by executing the plugin using its default settings via the direct skill invocation method. We see indeed that the prompt returns results based on the default values passed as parameters to the function:

Fig. 9: View of the Security Copilot landing page showing an example of direct skill execution of the created plugin

Fig. 10: Sample output showing records of users from the Sales department

Next, we still use direct skill invocation, but this time specify our own parameters:

Fig. 11: Direct skill invocation example, but with specified parameters: Department and lookback period

Fig 12: Prompt run showing the output corresponding to the selections of the previous direct skill invocation prompt

Lastly, we test it out with a natural language prompt:

Fig 13: Security Copilot prompt bar showing an example of a natural language prompt seeking events related to users in the Human Resources department

Fig 14: Output from the previous natural language prompt focused on users from the HR department

Tip: The function does not execute successfully if the default summarize output is used without creating a variable, i.e. if the summarize count() command is used in your query, it produces a system-defined output column named count_. To avoid this issue, be sure to use a user-defined variable such as Event_Count, as shown in line 77 below:

Fig. 15: Highlighting the creation of a variable to store results from the summarize count() command
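In other words (a minimal fragment, illustrated against a hypothetical table T):

```kusto
// Avoid: the output column is system-named 'count_', which prevents the
// function from executing successfully when called from the plugin
T | summarize count() by UserId

// Prefer: a user-defined name such as Event_Count
T | summarize Event_Count = count() by UserId
```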

Conclusion

Leveraging parameterized functions within KQL-based custom plugins in Microsoft Security Copilot can significantly streamline your data querying and analysis capabilities. By encapsulating reusable logic, improving query efficiency, and ensuring maintainability, these functions provide an efficient approach to tapping into data stored across Microsoft Sentinel, Defender XDR, and Azure Data Explorer clusters. Start integrating parameterized functions into your KQL-based Security Copilot plugins today, and let us have your feedback.

Additional Resources

Using parameterized functions in Microsoft Defender XDR

Using parameterized functions with Azure Data Explorer

Functions in Azure Monitor log queries – Azure Monitor | Microsoft Learn

Kusto Query Language (KQL) plugins in Microsoft Security Copilot | Microsoft Learn

Harnessing the power of KQL Plugins for enhanced security insights with Copilot for Security | Microsoft Community Hub

New tools for Security Copilot management and capacity planning

Busting myths on Microsoft Security Copilot



Microsoft’s Security Copilot is an AI-powered security assistant (generally available since April 2024) that integrates with Microsoft Defender, Sentinel, Intune, Entra, and Purview to help analysts protect and defend at the speed and scale of AI. As a cutting-edge generative AI tool, Security Copilot has naturally sparked interest and close attention from users and experts, resulting in various articles and blogs sharing experiences, perspectives, and feedback about the product. As a Microsoft Certified Trainer and a Microsoft consultant, I happen to both teach and implement Security Copilot for professionals and organizations, respectively. Lucky me! But one thing I encounter frequently in both roles is a list of common myths (or concerns) that people have about Security Copilot, especially given that it is a relatively new product.

Today we are going to talk about these myths (or concerns) and see whether they are complete hokum or have another aspect you may or may not know about. In other words, we will try to dot all the i’s and cross all the t’s. I’ll do it in sections, each of which may include one or more myths, so let’s get started.

I sincerely appreciate the efforts of all authors and publishers who have shared their insights on Security Copilot. This article is intended to address common concerns and encourage professionals to explore the product with confidence, rather than to challenge or dismiss any shared opinions.

Cost and Licensing

Myth #1: High Consumption Cost:

  • Validity: The perception of high cost is relative and often lacks full context. While the consumption-based pricing of Security Copilot may appear higher when compared to certain other tools, it delivers significantly greater value through its advanced capabilities, seamless integration with the Microsoft Security ecosystem, and ability to accelerate threat detection and response. When evaluated alongside comparable AI-driven security solutions—both Microsoft and non-Microsoft—Security Copilot stands out for its category-defining use cases and operational efficiency, helping security teams do more with less.
  • Reasoning: While cost considerations are valid, they should be viewed through the lens of operational impact rather than raw consumption. Security Copilot functions as an intelligent assistant operating around the clock—enhancing threat detection, accelerating incident response, and enabling deeper, more proactive threat hunting. Many organizations have reported significant improvements in reducing mean time to respond (MTTR), increasing automation in routine investigations such as phishing, and expanding their overall security coverage without scaling headcount. By augmenting human expertise with AI, Security Copilot empowers teams to focus on high value tasks and strengthens organizational resilience against evolving threats.

Myth #2: Unpredictable billing:

  • Validity: This is a complete myth, not only for Security Copilot but for any other Microsoft solution.
  • Reasoning: You get a dedicated usage dashboard in the Security Copilot portal and a link to the billing view in Microsoft Azure, where you can not only see the incurred costs but also get a reliable forecast of future costs. Whether you are a large organization with multiple instances of Security Copilot or an SMB with limited usage, these dashboards and views will help you ensure you are neither under- nor overspending on Security Copilot.

Myth #3: It’s free or covered by an existing license:

  • Validity: This misconception likely arises from confusion with other Copilot offerings and becomes a myth!
  • Reasoning: The overall pricing model of Security Copilot is completely different from other Microsoft Security solutions. While other solutions operate on a licensing model, Security Copilot works on a consumption-based model, meaning there are no per-user or per-device charges here! Hence, no existing license, whether Entra- or Office 365-based, can give you access to ‘Security Copilot’. Also, please note that Microsoft 365 Copilot (available in Teams, Word, PowerPoint, or the Azure portal) is not the same as Security Copilot.

Performance and Reliability

Myth #4: Slow responses and high latency:

  • Validity: This is completely anecdotal and definitely a myth. A variety of factors affect the response latency of Security Copilot.
  • Reasoning: You need to consider some important factors, such as the number of SCUs provisioned, the number of concurrent Security Copilot users, the number of plugins and/or skills being invoked, and the length and complexity of the prompt, in order to understand why you may have gotten a response slower than usual. Moreover, Security Copilot shows its responses in streaming mode. This approach significantly improves perceived responsiveness, enabling users to begin reading responses as they are generated, like the below image. Reference: What’s new in Microsoft Security Copilot?

Source: Security Copilot Portal

Myth #5: Poor Quality or Unreliable responses:

  • Validity: All I am going to say here is ‘Your Copilot is as good as the quality of your prompts’!
  • Reasoning: AI is here to augment our intelligence, but it can only do that when it gets sufficient, clear, and well-thought-out prompts. There is a reason it is called a ‘Co’-‘Pilot’: you are driving/flying/learning along with it. BTW, I prefer flying almost any time! The point is, we need to understand that the quality of AI output is heavily influenced by the tone, context, and specificity of prompts. Numerous users agree that refined prompts can yield better results, if not the best! I am not suggesting in-depth prompt engineering classes here, but just including the following elements when writing a prompt should give you a considerable improvement in the quality of responses. More information on effective prompting practices here: Prompting in Microsoft Security Copilot
    1. Goal – specific, security-related information that you need
    2. Context – why you need this information or how you plan to use it
    3. Expectations – format or target audience you want the response tailored to
    4. Source – known information, data sources, or plugins Security Copilot should use
  • Moreover, I also suggest leveraging the OOTB (Out-Of-The-Box) prompts and promptbooks in order to understand how you should structure your prompts. Security Copilot has a dedicated ‘Promptbook Library’ where you can see all the custom and OOTB promptbooks. You have the option of duplicating an OOTB promptbook and creating a custom promptbook of your own from it. This way you can ensure you are leveraging the available resources to make your own use case work more efficiently.

Myth #6: Service Interruptions:

  • Validity: This is a fact portrayed as a myth. If provisioned Security Copilot Units (SCUs) are fully consumed without additional configuration, service may pause until capacity is restored. This behaviour aligns with standard consumption-based service models.
  • Reasoning: To maintain continuous service, Security Copilot now supports Overage Units, which automatically activate when the initially provisioned SCUs are exhausted. This helps ensure uninterrupted functionality without requiring manual intervention. Additionally, the platform provides clear usage notifications and warnings in advance, allowing teams to proactively monitor and manage consumption. Combined with its role as a 24/7 AI-powered assistant, Security Copilot continues to deliver high availability and operational efficiency—even under dynamic workloads. For details on how to configure and manage overage units, refer to this blog: Overage Units in Security Copilot.

Near Limit notification in Security Copilot standalone portal

Above Limit notification in Security Copilot standalone portal

Privacy and Data Security

Myth #7: Data sharing with Microsoft:

  • Validity: This is one of the most common myths that still exists amongst users and make them hesitant to adopt the product.
  • Reasoning: Microsoft has been very transparent and vocal in stating that ‘customer data’ is never used to train the underlying LLM, nor is it accessible by any human, including non-relevant Microsoft employees. All Security Copilot data is handled according to Microsoft’s commitments to privacy, security, compliance, and responsible AI practices. Access to the systems that house your data is governed by Microsoft’s certified processes. Even when data sharing is enabled by default, your data is:
    • Not shared with OpenAI
    • Not used for sales
    • Not shared with third parties
    • Not used to train the Azure OpenAI foundational model

Security Copilot provides options to enable/disable user data collection

Myth #8: Data Privacy Compromises:

  • Validity: Concerns about data privacy are common with AI tools, but this is another completely ironic myth for a security product.
  • Reasoning: One important thing to know when using Microsoft products and solutions is that Microsoft provides you with contractual commitments on giving you control over your own data! Microsoft takes data security so seriously that even if a law enforcement agency or the government requests your data, you will be notified and provided with a copy of the request! And hence Microsoft defends your data through clearly defined and well-established response policies and processes like:
    • Microsoft uses and enables the use of industry-standard encrypted transport protocols, such as Transport Layer Security (TLS) and Internet Protocol Security (IPsec) for any customer data in transit.
    • The Microsoft Cloud employs a wide range of encryption capabilities up to AES-256 for data at rest.
    • Your control over your data is reinforced by Microsoft compliance with broadly applicable privacy laws, such as GDPR and privacy standards. These include the world’s first international code of practice for cloud privacy, ISO/IEC 27018.

Uncategorized Myths

“Security Copilot will replace our SOC team”:

No! Security Copilot is an assistant, not an infallible sensor. It is created to “assist security professionals”, and Microsoft acknowledges it may make mistakes (false positives/negatives). The very conception of Security Copilot is to take over the manual and tiresome analysis of raw logs and events, giving security professionals time to do what they do best: discovering vulnerabilities and securing organizations! Have you ever wondered why there is not a single capability in Security Copilot to take an action on its own, without your approval? What? You didn’t know that?! This is by design, to ensure that you and I are always in the driver’s seat while our “Co”-pilot augments our capabilities, automates repetitive tasks, and provides actionable insights. But users must always validate its advice.

“Copilot only works well with Microsoft products”:

Another anecdotal myth. While Security Copilot is deeply integrated with Microsoft’s own security tools, it is also designed to work effectively with a variety of third-party solutions. In fact, Microsoft provides more than 35 non-Microsoft plugins out of the box, including popular tools like Splunk, ServiceNow, Cyware, and Shodan. And that’s not all: you can create your own custom plugin using one of three methods: API, GPT, or KQL.

“You cannot track Copilot’s activities”:

The notion that “you cannot track Copilot’s activities” is definitively a myth. Security Copilot’s integration with Microsoft Purview and the Office 365 Management API provides full visibility into every interaction—prompt inputs, AI responses, plugin calls, and admin configurations. Administrators can enable, search, export, and retain these logs for compliance, forensics, or integration into broader SIEM and SOAR workflows, ensuring that Copilot becomes a transparent, auditable extension of your security operations rather than an untraceable “black box.”

Conclusion

As with any transformative technology, Microsoft Security Copilot has naturally invited speculation. However, many of the concerns—ranging from cost and licensing to performance, reliability, and data privacy—are either based on misconceptions or lack full context. Through this article, we’ve examined these myths objectively and highlighted how Security Copilot’s design, operational model, and deep integration with Microsoft’s security ecosystem work together to empower, not replace, human defenders. It is built to scale security operations with intelligence and agility, not disrupt them with unpredictability. For organizations navigating increasingly complex threat landscapes, Security Copilot offers a way to enhance response, reduce fatigue, and operationalize AI securely and responsibly. The key is not to view it as just another product, but as a strategic co-pilot—working alongside your team to defend at the speed and scale that modern security demands.

Want to have a much deeper understanding of Security Copilot? Check out these awesome resources: