Identify which MFA methods your users actually use.
A simple KQL query against the Sign-in logs gives you visibility into the MFA methods users are actually using.
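For instance, a minimal sketch of such a query (assuming the default `SigninLogs` schema, where `AuthenticationDetails` is a JSON string) might look like:

```kql
// Count sign-ins and distinct users per MFA method over the last 30 days.
// Assumes the default SigninLogs schema; AuthenticationDetails is a JSON string.
SigninLogs
| where TimeGenerated > ago(30d)
| mv-expand AuthDetail = todynamic(AuthenticationDetails)
| extend Method = tostring(AuthDetail.authenticationMethod)
| where isnotempty(Method) and Method != "Previously satisfied"
| summarize SignIns = count(), Users = dcount(UserPrincipalName) by Method
| order by Users desc
```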
A Microsoft Sentinel custom data connector that ingests Microsoft Defender XDR portal-only telemetry — configuration, compliance, drift, exposure, governance — that public Microsoft APIs (Graph Security, Microsoft 365 Defender, MDE) don't expose.
| Platform | Azure Functions (PowerShell 7.4), Log Analytics, Sentinel |
|---|---|
| Auth | Two unattended auto-refreshing methods: Credentials+TOTP, Software Passkey. DirectCookies for diagnostic / one-shot use. |
| Scope | Microsoft Defender XDR portal (security.microsoft.com) — telemetry streams across 10 functional categories (Endpoint Device Management, Endpoint Configuration, Vulnerability Management, Identity Protection, Configuration & Settings, Exposure Management, Threat Analytics, Action Center, Multi-Tenant Operations, Streaming API). Every stream documented + live-captured. Some streams activate only when the tenant provisions the underlying feature (MDI / TVM / MCAS / Intune / MDO / Custom Collection). |
| Prerequisite | Existing Sentinel-enabled Log Analytics workspace (any RG / subscription in the same tenant). This template does NOT create a workspace. |
| Deployment | One-click Deploy to Azure + one ./tools/Initialize-XdrLogRaiderAuth.ps1 run post-deploy. Cross-RG / cross-region workspace supported. |
| Content | 8 workbooks · 20 analytic rules (14 detection + 6 XdrOps incl. RowVolumeSpike cost-budget gate) · 9 hunting queries · 4 KQL drift parsers + 11 consolidated LA tables (10 Defender_<Category>_CL + 1 XdrConnectorHealth_CL) · 390 sample queries (5 per active stream) — all auto-deployed via nested ARM. Every parser / rule / query / workbook column reference verified against live fix |
Happy Hunting 🥳 🎉
The AADGraphActivityLogs are available! For years, defenders have been left in the dark when it comes to attackers abusing the Azure Active Directory Graph.
The wait is finally over, and defenders can now use these logs to detect the use of AADInternals, ROADtools, and other tooling.
Schema reference: https://learn.microsoft.com/en-us/azure/azure-monitor/reference/tables/aadgraphactivitylogs
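As a starting point, here is a hedged sketch for hunting known offensive tooling in these logs. The user-agent strings are illustrative (and spoofable), and column names should be verified against the schema reference above:

```kql
// Hunt for suspicious Azure AD Graph clients by user agent.
// User-agent values are illustrative, not exhaustive; attackers can spoof them.
AADGraphActivityLogs
| where TimeGenerated > ago(7d)
| where UserAgent has_any ("AADInternals", "ROADtools", "python-requests")
| project TimeGenerated, UserId, IpAddress, UserAgent,
          RequestMethod, RequestUri, ResponseStatusCode
```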
I tested out the ThreatIntel features with the TAXII and MS Defender Threat Intelligence connectors. Feature-wise it's fine for the most part, but I noticed that expired indicators still get refreshed every week and therefore never age out. Am I missing something? Ingestion rules don't affect refreshes either, so I can't use those to handle them.
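Not an answer, but to quantify the problem, a quick sketch that surfaces indicators past their expiry yet still marked active (assuming the classic `ThreatIntelligenceIndicator` table):

```kql
// Indicators past their expiration date that are still flagged Active.
ThreatIntelligenceIndicator
| summarize arg_max(TimeGenerated, *) by IndicatorId
| where Active == true and ExpirationDateTime < now()
| summarize StaleIndicators = count(), NewestRefresh = max(TimeGenerated) by SourceSystem
```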
Hi all. Our Sentinel bill is getting harder to defend, and I'm trying to be smart about the Analytics, Basic, and Auxiliary tiers, or... just dropping data (for me that's not a real option, but others keep suggesting it).
Right now everything goes into Analytics: SigninLogs, AADNonInteractiveUserSignInLogs, OfficeActivity, SecurityEvent, the MDE tables, plus network and firewall logs. The non-interactive sign-in logs are almost half of the volume, and I don't know how much real detection value we actually get from them.
I'm thinking of moving AADNonInteractiveUserSignInLogs to the Auxiliary tier. If you've done this, what detections did you lose? Was it worth it? Is anyone using summary rules at scale, and are they reliable or buggy? How aggressive are you with DCR transformations? Do you use ADX for retention only, or do you actually run detections on it?
Please, I'm not looking for "turn it off" advice. Thanks.
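Before moving anything, it can help to measure exactly where the volume goes. A simple sketch against the built-in Usage table:

```kql
// Billable ingestion per table over the last 30 days, in GB.
// Quantity is reported in MB.
Usage
| where TimeGenerated > ago(30d)
| where IsBillable == true
| summarize IngestedGB = round(sum(Quantity) / 1024, 2) by DataType
| top 15 by IngestedGB
```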
Has anyone here explored the Observability Agent capability sitting in the Logs blade inside Sentinel?
I went through the Microsoft docs but didn't find them very useful. The real deal is different: what we have here is an AI agent sitting on top of our SIEM logs. We can ask it anything, and it gives us the kind of answer we'd get from a model like ChatGPT or Claude if they had this data to analyse.
Some use cases I tried:
- Asking the agent whether we can reduce log ingestion by removing unwanted logs, and other cost-optimisation scenarios.
- Pulling a watchlist (via the Get watchlist function) and asking it to evaluate the current logs for any old IOCs from the watchlist.
- Analysing existing rules with the agent to see whether they fit our security posture.
But I also found some downsides: for certain customers of mine, the agent was slow to respond, and the longer it took, the less accurate the answers felt. There is also the limitation that a single prompt can't be more than 500 words...
So, similarly, if you have tried it out with interesting use cases, please share. And if you have any docs or materials on the Observability Agent that would help me drill down and really understand it, please share those as well.
All comments are welcome. Thanks!
Hi all, we have a problem with too much noise in our sign-in risk rules, and the SOC team is very tired of false positives. What is the best way to tune the scheduled analytics rules? Is it better to use entity mapping with alert grouping, or to make a watchlist of the service accounts we know are good and exclude them? Also, is anyone using NRT rules for high-fidelity detections without making the ingestion cost explode? Thanks.
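The watchlist route can be sketched roughly like this. The watchlist name `ServiceAccounts` and its `SearchKey` contents are assumptions; use whatever name and alias you configured:

```kql
// Exclude known-good service accounts via a watchlist before alerting.
// 'ServiceAccounts' and its SearchKey (UPN) column are hypothetical names.
let KnownSvcAccounts = _GetWatchlist('ServiceAccounts') | project UPN = tostring(SearchKey);
SigninLogs
| where TimeGenerated > ago(1h)
| where RiskLevelDuringSignIn in ("medium", "high")
| where UserPrincipalName !in (KnownSvcAccounts)
```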
I'm still fairly new to Sentinel, but I'm wondering what everyone is doing to monitor Sentinel's overall health. Are you using the MS workbooks? Monitoring other components? Etc.
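One low-effort starting point is the SentinelHealth table (it needs health monitoring enabled via the workspace's diagnostic settings first). A minimal sketch:

```kql
// Recent non-success health events for connectors, automation rules, etc.
SentinelHealth
| where TimeGenerated > ago(24h)
| where Status != "Success"
| summarize Failures = count(), LastSeen = max(TimeGenerated)
    by SentinelResourceName, SentinelResourceType, OperationName
| order by Failures desc
```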
With detection, coverage monitoring is crucial. This Microsoft Sentinel Workbook provides visibility into Microsoft Defender for Endpoint (MDE)–managed devices and their telemetry coverage within Sentinel. It helps security and operations teams verify that devices are properly configured for comprehensive monitoring by checking:
- Azure Monitor Agent (AMA) installation status
- SecurityEvent log ingestion into Sentinel (Windows)
- Syslog log ingestion into Sentinel (Linux)
- Last heartbeat and log timestamps for freshness

By correlating data from the DeviceInfo, Heartbeat, and SecurityEvent/Syslog tables, the workbook identifies configuration gaps and supports remediation efforts.
Note: This workbook assumes Microsoft Defender XDR data is ingested into Sentinel. Without that ingestion, device name normalization and correlation may be inconsistent. To work around this, copy the KQL query from the GitHub page and run it in Advanced Hunting in the Defender portal.
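The core correlation can be sketched roughly like this. The short-name normalization is an assumption; the workbook's actual query may differ:

```kql
// Flag Windows MDE devices with no AMA heartbeat (i.e., no agent reporting).
DeviceInfo
| where TimeGenerated > ago(7d)
| where OSPlatform startswith "Windows"
| summarize arg_max(TimeGenerated, *) by DeviceName
| extend ShortName = tolower(tostring(split(DeviceName, ".")[0]))
| join kind=leftouter (
    Heartbeat
    | summarize LastHeartbeat = max(TimeGenerated) by Computer
    | extend ShortName = tolower(tostring(split(Computer, ".")[0]))
) on ShortName
| extend AmaMissing = isempty(LastHeartbeat)
| project DeviceName, OSPlatform, LastHeartbeat, AmaMissing
```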
There's life before Sentinel MCP + GitHub Copilot, and there's life after. There's no going back.

Yes, AI helped write this project. No, this isn't AI slop. This is ~4 months and hundreds of hours of building, testing, breaking, fixing, and tuning agentic investigation skills against live Sentinel/Defender XDR environments. Every one of those 900+ KQL queries has been executed, schema-verified, and battle-tested against real tables with real pitfalls (if you've ever wasted 20 minutes debugging `Timestamp` vs `TimeGenerated`, you know).

What it is: A GitHub Copilot Agent Mode framework that turns natural language into full security investigations using Microsoft's own MCP servers. Clone the repo, add your tenant and workspace ID, API keys for TI providers (optional), and go.

Zero supply chain risk for the core framework. 5 of 6 MCP servers are Microsoft-hosted HTTP endpoints (Azure, Sentinel Data Lake, Graph API, Defender XDR, Sentinel Graph, Microsoft Learn) — no npm install, no pip install, nothing to compromise. All 5 use native Entra ID authentication — your existing MFA, Conditional Access policies, and RBAC apply automatically. The only npm dependency is `kql-search-mcp` for KQL schema intelligence and GitHub query discovery — shout out to noodlemctwoodle; the version is pinned with a sha512 integrity hash, and it's fully optional. IP enrichment (ipinfo, AbuseIPDB, Shodan, vpnapi) and local visualization MCP Apps are also optional add-ons.

Don't have Sentinel or Data Lake? No problem. The Sentinel MCP server has Triage Tools available to all E5 customers. The RunAdvancedHunting MCP tool can query both XDR-native tables AND connected Sentinel tables — at zero query cost. The framework defaults to AH for everything ≤30 days and falls back to Sentinel Data Lake only when you need 30-90+ day lookback. If you're E5 with no Sentinel workspace, the majority of skills still work; I tried to prioritize native XDR tables whenever possible.

Worried about MCP adoption governance? There's a dedicated MCP Usage Monitoring skill that audits who's using which MCP servers, what endpoints they're hitting, usage trends, and behavioral anomalies — so you can track adoption and catch misuse across your team.

Key features:

- Threat Pulse skill — One prompt queries 7 security domains in ~5 min. Prioritized dashboard (🔴 Escalate / 🟠 Investigate / 🟡 Monitor / ✅ Clear) with drill-down links that load the right skill, target the entity, and execute. An entry point that finds the leads FOR you.
- 25 investigation skills — User, computer, incident, IoC, authentication tracing, CA policy forensics, scope drift (user/SPN/device behavioral baselines), exposure management, app registration posture, AI agent posture, identity posture, email threat posture, data security analysis, honeypot analysis, and more. Each one is a full guided workflow, not a single query.
- 36 query library files — Organized by domain (identity, endpoint, email, cloud, network, incidents), used for ad-hoc threat hunts targeting specific TTPs. Threat intel hunting campaigns you can just point at: "hunt for Storm-1175 last 30 days" and it runs verified queries against your environment.
- Author hunts from threat intel articles — Read any threat intel article (Microsoft, vendor blog, wherever), and the framework maps TTPs to KQL, tunes against your environment, and optionally pushes to the Defender XDR Custom Detection API. Full lifecycle from article → queries → deployed detection, with weekly updates from me.
- Deterministic PowerShell pipelines — The Sentinel Ingestion Report and MITRE ATT&CK Coverage Report use PowerShell to gather all data via `az rest`/`az monitor`/Graph API first, then the LLM renders the report. No hallucinated metrics.
- SVG dashboard framework — Generate consistent, portable data visualizations (KPI cards, bar/donut/line charts, tables, score cards) directly from investigation data or skill reports. No browser, no external tools — pure SVG rendered inline.

Getting started:

Note on models: This framework was designed and tested on Claude Opus 4.6 via GitHub Copilot. Mileage with other models may vary — the skill files and query library are model-agnostic markdown, but the instruction-following complexity benefits from a frontier model.

Video walkthrough: https://youtu.be/3UFqWA4cmoE?t=1470

I'm actively developing this and adding new skills/queries regularly. Follow me on LinkedIn (https://www.linkedin.com/in/scstelz/) to keep up with new features. Feedback, contributions, and skill ideas welcome. AMA!

Dynamically link Threat Pulse findings to associated Queries or Skills
The attacker doesn't steal a password. They trick the user into granting permissions to a malicious application. "Sign in with Microsoft" — the user clicks approve, and now the attacker's app has a refresh token with persistent access to their mail, files, and calendar until revoked.
No password compromised. MFA was satisfied by the legitimate user. Conditional Access passed because the user authenticated normally. The malicious action happens at the consent layer — above authentication — where none of these controls apply.
The app now reads mail via Graph API. No interactive sign-in anomalies. No anomalous location. The non-interactive and service principal sign-in logs show token activity, but most SOCs never scrutinise them — and even when they do, the API calls are structurally identical to legitimate application behaviour.
Default M365 detections don't catch this reliably. Microsoft has added some — Defender for Cloud Apps flags unusual OAuth credential additions and suspicious mail access — but they're inconsistent, often delayed, and miss consent grants to newly registered external apps without a risk profile.
You need to monitor application consent grants in Entra ID audit logs ("Consent to application" under ApplicationManagement) and alert on any app requesting Mail.Read, Files.ReadWrite, User.Read.All, or offline_access from a non-approved publisher. Better still, disable user consent entirely in Entra ID and enforce an admin consent workflow — shifting the attack surface from "any user can be phished" to "only admins can approve apps."
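A sketch of such a detection against AuditLogs. The `modifiedProperties` parsing follows the common "ConsentAction.Permissions" pattern; validate the field shapes against your own consent events before relying on it:

```kql
// Alert on user consent grants that include high-impact delegated permissions.
AuditLogs
| where TimeGenerated > ago(1d)
| where Category == "ApplicationManagement" and OperationName == "Consent to application"
| extend AppName = tostring(TargetResources[0].displayName),
         Initiator = tostring(InitiatedBy.user.userPrincipalName)
| mv-expand Prop = TargetResources[0].modifiedProperties
| where tostring(Prop.displayName) == "ConsentAction.Permissions"
| extend Permissions = tostring(Prop.newValue)
| where Permissions has_any ("Mail.Read", "Files.ReadWrite", "User.Read.All", "offline_access")
| project TimeGenerated, Initiator, AppName, Permissions
```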
This is the gap between "we have MFA" and "we have security."
I have created a query that finds inbox rules created on non-managed devices; any feedback on it? I want to revoke the user's MFA sessions if this happens, as it's probably a compromised user.
let lookback = ago(1130d);
OfficeActivity
| where TimeGenerated > lookback
| where OfficeWorkload =~ "Exchange"
| where Operation =~ "New-InboxRule" and (ResultStatus =~ "True" or ResultStatus =~ "Succeeded")
| where Parameters has "Deleted Items" or Parameters has "Junk Email" or Parameters has "DeleteMessage" or Parameters has "RSS"
| extend AADSessionId = tostring(parse_json(tostring(AppAccessContext)).AADSessionId)
| join kind=leftouter (
SigninLogs
| where TimeGenerated > lookback
| where AppDisplayName == "OfficeHome"
| extend isManaged_ = tostring(DeviceDetail.isManaged)
| extend isCompliant_ = tostring(DeviceDetail.isCompliant)
| project SessionId, SignInTime = TimeGenerated, UserPrincipalName, AppDisplayName,
IPAddress, Location, DeviceDetail, ConditionalAccessStatus, RiskLevelDuringSignIn,
ClientAppUsed, ResourceDisplayName, isManaged_, isCompliant_
) on $left.AADSessionId == $right.SessionId
| where isManaged_ == "false"
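Two suggestions, sketched below: the leftouter join plus the isManaged_ filter behaves like an inner join anyway, and OfficeHome sign-ins are often non-interactive, so including AADNonInteractiveUserSignInLogs may catch more sessions. The DeviceDetail re-parse is there because that column is a JSON string (not dynamic) in the non-interactive table:

```kql
// Variant: inner join, plus non-interactive sign-ins.
let lookback = ago(30d);
let SignIns = union
    (SigninLogs | extend DD = DeviceDetail),
    (AADNonInteractiveUserSignInLogs | extend DD = todynamic(DeviceDetail))
    | where TimeGenerated > lookback
    | extend isManaged_ = tostring(DD.isManaged)
    | project SessionId, SignInTime = TimeGenerated, UserPrincipalName, IPAddress, isManaged_;
OfficeActivity
| where TimeGenerated > lookback
| where OfficeWorkload =~ "Exchange" and Operation =~ "New-InboxRule"
| where ResultStatus in~ ("True", "Succeeded")
| extend AADSessionId = tostring(parse_json(tostring(AppAccessContext)).AADSessionId)
| join kind=inner SignIns on $left.AADSessionId == $right.SessionId
| where isManaged_ == "false"
```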
Hi all, I am working on a task to integrate Delinea Secret Server logs into Sentinel and create rules such as: someone reads many secrets from Secret Server in a short span of time, or someone deletes a large number of secrets or important ones. I'm trying to find docs on this, but the material seems pretty sparse. I'm new to Sentinel and have a background in AWS. Thanks so much.
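Once the logs land in a custom table (e.g., via the Logs Ingestion API or a Syslog/CEF path), the detection itself is straightforward KQL. Everything below is hypothetical: the table name `DelineaSecretServer_CL` and its columns depend entirely on how your DCR maps the events:

```kql
// Hypothetical schema: adjust table/column names to your actual ingestion mapping.
// Flags users reading an unusual number of distinct secrets in a 10-minute window.
DelineaSecretServer_CL
| where TimeGenerated > ago(1h)
| where Action_s == "SECRET_VIEW"
| summarize Reads = count(), DistinctSecrets = dcount(SecretName_s)
    by UserName_s, bin(TimeGenerated, 10m)
| where DistinctSecrets > 20
```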
One thing that has always driven me nuts with Sentinel is the workflow for storing incidents long term, along with the artifacts surrounding them. For example, I know one person in our org who has been compromised 4 different times, and when I bring this up, the older incidents have already hit retention, so all of the data, including comments on an incident, has been wiped out. It undercuts your argument when you want something done about this user but don't have the black & white data to back it up. Instead, you are left with a barren incident that lacks entities.
So, I tried "Cases" in Defender, which stores the comments you put in it as well as what you attach to the case. However, linked incidents still fall victim to retention, comments on a Sentinel incident don't sync to the case, and worst of all, there isn't a good way to export Cases in a nice viewable format to give to legal or other teams.
So, I am just curious what others do for this. Do you use something like Notion to store the data and artifacts so you can pull them up later if need be?
I feel like there should be a better way to do this. I was hoping the data lake would help here, but it doesn't seem like it will cover all scenarios, like when I want to store a file or screenshot quickly, as opposed to uploading it to a blob and adding the links to the incident.
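One partial workaround is to snapshot incidents (including their comments) out of the SecurityIncident table on a schedule, before retention bites, and push the result to cheap storage via a Logic App or data export rule. A sketch of the snapshot query:

```kql
// Latest state of each incident from the last 14 days, including comments.
SecurityIncident
| where TimeGenerated > ago(14d)
| summarize arg_max(TimeGenerated, *) by IncidentNumber
| project IncidentNumber, Title, Severity, Status, Owner, Comments,
          AlertIds, CreatedTime, ClosedTime
```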