For mid-market security teams that cannot throw twelve analysts at an alert queue, Kindling turns related findings into case-level answers, with the evidence, reasoning, and next action attached.
Probable session hijack on j.smith
Next action: force re-auth, audit sent items, revoke inbox rule.
Known maintenance activity
No alert sent. Evidence remains inspectable.
Pressure
Lean teams inherit an enterprise-sized queue.
Context
Detection history, tenant baseline, finding history, and analyst knowledge.
Confidence
Consensus accuracy in internal case-quality testing.
Output
Case alerts include evidence, reasoning, and next action.
The shift
A finding alert says: "Something happened. Check it out." The analyst inherits a question.
A case alert says: "These things happened. We checked. Here's what they mean and how they relate. Act now." The analyst inherits an answer.
That's the difference. It is also the reason this page exists.
For eight years, Blumira's detection rules have looked for the patterns that matter. They still do, and they still fire when a pattern is present. They still produce findings the analyst can inspect in the admin panel.
What's new is what we send to your team.
Before
Each finding is technically valid, but the analyst still has to decide what connects, what matters, and what to do next.
Kindling work
The engine scores the finding against history, tenant baseline, related events, and organizational context.
After
Before: raw finding alert
Suspicious sign-in detected for j.smith@acme.com. Review.
Triage manually →
The analyst inherits a question. Nothing about this alert tells the team what to do, who else is involved, or whether it matters.
After: Kindling case alert
Probable session hijack on j.smith@acme.com
→ Next action: force re-auth · audit sent items · revoke inbox rule
The analyst inherits an answer. Verdict, evidence chain, and the next step, already done.
The moment
Mid-market security teams face the same alert volume Fortune 500 SOCs face, with a fraction of the staff. Other AI-SOC tools focus on organizations with enterprise-scale budgets and staff to throw at the problem. They solve the alert problem at scale. They don't solve it for you.
Pain point: staffing gap
Mid-market teams carry the alert volume without enterprise-level analyst headcount. Every finding that does not resolve automatically becomes work for the same small group.
Source: IANS Research, 2025
Pain point: false positives
Teams rank false positives as their top detection challenge. The issue is not whether detections fire. The issue is whether the queue earns the analyst's attention.
Source: SANS, 2025
Kindling is built for the team that doesn't have twelve analysts on staff. It carries eight years of detection-engineering history that newer entrants cannot match without time travel.
How it works
There is no cheat code for nearly a decade of detection engineering, rule tuning, incident review, and platform-scale learning. It cannot be acquired quickly.
Kindling starts with a 14-day behavioral baseline of your specific environment, then adds Blumira's eight years of detection rules, documented incidents, internal reviews, known-benign patterns, and best-practice workflows. For existing Blumira customers, historical findings and logs on-platform add more context.
Raw queue
Most are routine. A few are connected. The old workflow asks the analyst to discover that by hand.
Kindling engine
Case answer
High · Probable session hijack on j.smith
Next action: force re-auth, audit sent items, revoke inbox rule.
The pipeline
Step 01
A detection rule fires. A finding is created with full context.
Step 02
Correlation, enrichment, severity, and baseline checks. Clear cases resolve here.
Step 03
Ambiguous cases reason against telemetry, history, and baseline.
Step 04
A verdict, a reasoning chain, and a next action. Or no alert.
Two events from the same account, two minutes apart. On their own, each looks routine. Together, they don't.
Microsoft 365 audit log
Routine. Most of the time it's exactly what it looks like.
Microsoft 365 audit log
Looks like inbox cleanup. It is also exactly what an attacker does to hide replies after a session hijack.
With the combined score above threshold, Kindling escalates this as a case alert with the inbox-rule context already attached. The same individual finding. Different surrounding context.
That's compounding context: the analysis the analyst would do, done first. Attackers who hijack a session create inbox rules so that bounce-backs and replies don't surface to the legitimate user. The two-step pattern is the signal. Each finding alone is not.
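The two-step pattern described above can be pictured as a simple temporal correlation rule. The sketch below is illustrative only: the names (`Finding`, `is_session_hijack_pattern`) and the five-minute window are our assumptions, not Kindling's implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Finding:
    account: str   # e.g. "j.smith@acme.com"
    kind: str      # e.g. "suspicious_signin", "new_inbox_rule"
    at: datetime

def is_session_hijack_pattern(findings, window=timedelta(minutes=5)):
    """Escalate when a suspicious sign-in is followed by a new inbox
    rule on the same account inside the window. Either finding alone
    stays routine; the two-step sequence is the signal."""
    signins = [f for f in findings if f.kind == "suspicious_signin"]
    rules = [f for f in findings if f.kind == "new_inbox_rule"]
    return any(
        s.account == r.account and timedelta(0) <= r.at - s.at <= window
        for s in signins
        for r in rules
    )
```

On the j.smith example, a sign-in finding followed two minutes later by an inbox-rule finding on the same account trips the rule; either finding on its own does not.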
Case-quality testing
Two validation runs. The first used 1,580 cases, followed by additional analysis of the edge-case findings hardest to categorize accurately. A new set of 2,000 cases validated the work at 98.5% accuracy. Each training intervention is measured before it becomes part of the engine.
When Kindling is uncertain, it has been optimized to fail secure. 99% of the time it errs toward human review instead of quiet dismissal. The default is always to surface.
Most security AI products ship a model and hope it keeps up. Kindling runs a continuous evaluation loop against production data.
The most recent batch found eight detection types that needed further training, and resolved all eight in the same cycle. Six of the eight now run above 94% accuracy. The other two run above 98%.
This is the work that keeps the 98.5% number honest. Kindling's accuracy is a moving target that keeps moving in the right direction.
Receipts
The first validation run used 1,580 cases, followed by 2,000 additional cases to avoid over-training against the original set. The bars below are metrics from that validation work.
Kill chains detected. Up to 6 MITRE phases each. The cases that matter are the multi-finding ones, and Kindling correlates them first.
Two phases of validation run before any input reaches the LLM. Most AI SIEM products don't publish this layer. We do.
Most findings resolve at the deterministic layer. Only the cases where context actually matters reach the LLM.
Resolved at the deterministic layer
Routine activity can stay quiet. High-confidence signals move forward with structured evidence. Fast, reproducible, auditable.
Ambiguous to LLM analysis
The cases where context matters. AI does AI work.
That separation is what holds the autonomous resolution rate without over-triggering false positives.
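One way to picture that separation is a two-threshold router: deterministic scoring first, with only the ambiguous middle band handed to the LLM stage. The function name and threshold values here are illustrative assumptions, not Kindling's actual tuning.

```python
def route_finding(score, benign_max=0.15, malicious_min=0.85):
    """Route a deterministic confidence score (0 = clearly benign,
    1 = clearly malicious). Clear cases resolve at this layer;
    only the ambiguous middle reaches LLM analysis."""
    if score <= benign_max:
        return "resolve_benign"    # stays quiet, evidence retained
    if score >= malicious_min:
        return "escalate_case"     # structured evidence attached
    return "llm_analysis"          # the cases where context matters
```

The design point is that the expensive, probabilistic stage never sees the findings a deterministic check can settle, which is what keeps the routine bulk of the queue fast, reproducible, and auditable.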
Context is the moat
Four sources feed every Kindling decision. Each one helps the engine understand whether an isolated finding is routine activity or part of a case.
A 14-day behavioral baseline for users, devices, resources, and tenant activity.
Eight years of Blumira rules, reviews, known-benign patterns, and detection logic.
For existing customers, related findings and case history already retained in Blumira.
Analyst notes, travel context, maintenance windows, and organizational memory.
Convergence
Source 01
A 14-day behavioral baseline of your organization. When users log in, where, from what devices, against what resources. Your data stays inside your tenant. Your data trains your baseline.
Source 02
Detection rules, documented incidents, internal reviews, known-benign patterns, best-practice workflows, and organizational signatures from Blumira's history. A new finding is scored against the expertise the platform has built since 2018.
Source 03
For existing Blumira customers, Kindling can review related findings and case history already retained in the platform, up to a full year depending on retention. It does not retroactively ingest historical logs from outside Blumira.
Source 04
Your analysts can add notes to findings or cases ("Ruth is in London this week"). Kindling reads them as context. If a related finding shows up the next day, Kindling already knows. If it shows up after she's home, Kindling flags it.
Convergence
Kindling reasons from Blumira's detection-engineering history, your current baseline, your historical findings, and the organizational context your team already knows. Shared threat intelligence improves detection across the platform without using one customer's private data to train another customer's baseline.
That's also why Kindling can show you how your team is performing relative to similar organizations: same size, same industry, comparable complexity profile. "How are we doing compared to other healthcare companies our size?" is a question Kindling can answer. Most platforms can't.
Where it fits
Kindling sits above the detection layer that's powered Blumira since 2018. Detections still fire. Findings still surface in the platform and can be reviewed at any time. What's new is the layer that reads them: the one that asks the second and third question before sending an alert and escalates only the cases that need human judgment.
For existing Blumira customers: Kindling ships to all customer tiers. No new tier, no separate billing, no upgrade conversation required.
For everyone else: this is what Blumira does now. It's also what the first Blumira detection rule in 2018 was always pointed toward.
The reasoning layer inside the Blumira platform.
Kindling does not replace detection. It sits above Blumira's SIEM, log retention, identity correlation, and eight years of detection engineering, then turns related findings into case-level answers.
Telemetry sources
Level set
Kindling uses better context to send fewer, higher-confidence alerts by focusing on validated cases instead of raw events. The work happens inside the engine: deterministic scoring, deterministic investigation and enrichment, LLM analysis, multi-judge consensus, and signature matching against eight years of detection-engineering history. The output your team sees is all of that work compressed into a verdict and a next action. Threats are not missed: findings that score clearly benign or clearly malicious resolve in the engine, and findings that need human judgment escalate as cases, with reasoning, evidence, and next action attached.
Kindling does not autonomously close findings. A human always confirms.
When Kindling's confidence is below threshold, it surfaces "I need a human" rather than guessing.
In testing, when Kindling is wrong, 99% of the time it errs toward surfacing the case for review, not clearing it as harmless.
Kindling will not silently dismiss a finding it is unsure about. The default is always to surface.
Fail-secure threshold
The model is tuned so errors fall toward human review, not quiet dismissal.
99% of errors fall toward the human-review side, not the auto-resolve side. That asymmetry is deliberate.
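That asymmetry can be made concrete as a deliberately skewed decision rule: a finding resolves quietly only when the engine is near-certain it is benign, and everything else surfaces. The function name and the floor value are illustrative assumptions, not the actual tuning.

```python
def fail_secure_verdict(benign_confidence, auto_resolve_floor=0.99):
    """Fail-secure: quiet dismissal requires near-certainty that the
    finding is benign. Anything less surfaces to a human, so when the
    engine is wrong, the error lands on the review side rather than
    the silent-dismissal side."""
    if benign_confidence >= auto_resolve_floor:
        return "resolve_quietly"
    return "surface_for_review"
```

Under a rule shaped like this, uncertainty can only ever produce extra review work, never a silently cleared threat.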
Customer story
A Blumira customer received two concurrent detections on the same Microsoft 365 account: an impossible-travel alert (Microsoft's own geolocation-based detection) and a separate auth-pattern detection from Blumira.
The first detection was a known IP-geolocation false positive. The user was logging into the company VPN, which routed their session through a server in another region. The second was the actual attack.
The customer's analyst saw the same account flagged twice in succession and dismissed both: same alert as the last one, same account. The trap was straightforward, and it is exactly the kind of trap that case-level correlation is built to defeat.
What happened next, in 30 minutes:
Kindling correlation
One coherent attack chain, not four separate queue items.
Kindling caught it on the batch review. To a quick scan the detections shared nothing obvious, but to the engine they shared everything: the same user identity, the same IP cluster, the same auth-pattern timeline. Case-correlation logic linked impossible-travel, auth-from-new-country, external-forwarding-rule, and suspicious-inbox-rule into a single coherent attack chain. The investigation queries the analyst would have built manually came pre-embedded in the case alert.
The detection rules fired correctly the entire time. They told the customer something happened. The customer dismissed them, exactly the failure mode that finding-level alerts produce when analysts are stretched thin.
Case-level alerts assume nobody has time to triage. They wait until the analysis says triage is necessary, and they arrive with actionable answers you can verify.
Get hands-on
For existing Blumira customers, we can show what Kindling would have surfaced from historical findings and logs already on-platform. For new environments, we can walk through the same workflow on live telemetry going forward or set you up with a Free NFR license. No synthetic environment. No staged scenario. Your data stays inside Blumira.
We'll reach out within one business day. No marketing automation drip. No 14-touch sequence. One email, one human.
Questions
Get started
Existing Blumira customers can review historical findings and logs already on-platform. New teams can see Kindling on their own telemetry going forward with a Free NFR license.
Eight years of detection led here. One look at your environment will show you why.