Small team · Enterprise alert volume · No extra headcount

    Answers instead of alerts.

    For mid-market security teams that cannot throw twelve analysts at an alert queue, Kindling turns related findings into case-level answers, with the evidence, reasoning, and next action attached.

    100 events that would normally ask for human attention
    2 case alerts with the surrounding context already attached
    98.5% consensus accuracy in internal case-quality testing

    Pressure

    100 events

    Lean teams inherit an enterprise-sized queue.

    Context

    4 sources

    Detection history, tenant baseline, finding history, and analyst knowledge.

    Confidence

    98.5%

    Consensus accuracy in internal case-quality testing.

    Output

    2 cases

    Case alerts include evidence, reasoning, and next action.

    Probable session hijack. Next action: force re-auth, audit sent items, revoke inbox rule.

    The shift

    From finding alerts to case alerts.

    A finding alert says: "Something happened. Check it out." The analyst inherits a question.

    A case alert says: "These things happened. We checked. Here's what they mean and how they relate. Act now." The analyst inherits an answer.

    That's the difference. It is also the reason this page exists.

    For eight years, Blumira's detection rules have looked for the patterns that matter. They still do, and they still fire when a pattern is present. They still produce findings the analyst can inspect in the admin panel.

    What's new is what we send to your team.

    Before: raw finding alert

    ALERT · 14:25:47 UTC · Unread

    Suspicious sign-in detected for j.smith@acme.com. Review.

    No context. No correlation. No next action.

    Triage manually →

    The analyst inherits a question. Nothing about this alert tells the team what to do, who else is involved, or whether it matters.

    After: Kindling case alert

    CASE 0247 · HIGH CONTEXT · Reviewed

    Probable session hijack on j.smith@acme.com

    • 01 impossible_travel: login from new country
    • 02 + inbox_rule_created: auto-delete from senders
    • 03 + external_send: 32 outbound, fraud-pattern

    → Next action: force re-auth · audit sent items · revoke inbox rule

    The analyst inherits an answer. Verdict, evidence chain, and the next step, already done.

    The moment

    The alert problem at its breaking point.

    Mid-market security teams face the same alert volume Fortune 500 SOCs face, with a fraction of the staff. Other AI-SOC tools focus on organizations with enterprise-scale budgets and staff to throw at the problem. They solve the alert problem at scale. They don't solve it for you.

    Pain point: staffing gap

    1.5 analysts

    Mid-market teams carry the alert volume without enterprise-level analyst headcount. Every finding that does not resolve automatically becomes work for the same small group.

    Source: IANS Research, 2025

    Pain point: false positives

    73%

    Teams rank false positives as their top detection challenge. The issue is not whether detections fire. The issue is whether the queue earns the analyst's attention.

    Source: SANS, 2025

    Kindling is built for the team that doesn't have twelve analysts on staff. It draws on eight years of detection-engineering history that newer entrants cannot match without time travel.

    How it works

    Eight years of detection. Now reasoning against it.

    Nearly a decade of detection engineering, rule tuning, incident review, and platform-scale learning does not have a cheat code. It cannot be acquired quickly.

    Kindling starts with a 14-day behavioral baseline of your specific environment, then adds Blumira's eight years of detection rules, documented incidents, internal reviews, known-benign patterns, and best-practice workflows. For existing Blumira customers, historical findings and logs on-platform add more context.

    8 yrs
    Detection engineering
    14d
    Behavioral baseline
    100:2
    Events in, cases out
    98.5%
    Consensus accuracy

    The pipeline

    Step 01

    Signal

    A detection rule fires. A finding is created with full context.

    A detection rule fires in your environment. Kindling receives the created finding with full context: the rule that triggered, the entity involved, the time, and the upstream telemetry that produced the signal. Nothing is dropped. Everything remains inspectable.

    Step 02

    Deterministic investigation

    Correlation, enrichment, severity, and baseline checks. Clear cases resolve here.

    Investigation and enrichment first. Correlation, severity weighting, signature lookups, and the 14-day per-customer behavioral baseline run against every finding. Clearly benign resolves here. Clearly malicious promotes to case analysis. The math is fast, reproducible, auditable.

    Step 03

    LLM analysis

    Ambiguous cases reason against telemetry, history, and baseline.

    Ambiguous cases reach the LLM layer, which reasons across related events, endpoint telemetry, identity activity, and Blumira's eight years of detection-engineering history. The output is an explainable verdict: reasoning, next action, escalation notes. Three-judge consensus before anything ships.

    Step 04

    Case alert

    A verdict, a reasoning chain, and a next action. Or no alert.

    When the case is clear, Kindling resolves it without surfacing anything. When it isn't, your team gets a case alert that names the verdict, the supporting evidence, and the next action. Every reasoning step is inspectable in the platform.
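The four steps above amount to a routing decision: the deterministic layer rules first, and only ambiguity reaches the LLM. As a sketch only, with hypothetical names and thresholds (Kindling's actual implementation is not public):

```python
from dataclasses import dataclass

@dataclass
class Finding:
    rule: str
    entity: str
    score: float  # deterministic score from correlation, enrichment, baseline

# Hypothetical cutoffs, for illustration only
BENIGN_CUTOFF = 0.15
MALICIOUS_CUTOFF = 0.95

def route(finding: Finding) -> str:
    """Deterministic layer decides first; only ambiguity reaches the LLM."""
    if finding.score <= BENIGN_CUTOFF:
        return "resolve_quietly"   # Step 02: clearly benign, nothing surfaces
    if finding.score >= MALICIOUS_CUTOFF:
        return "case_alert"        # Step 04: clearly malicious, alert ships
    return "llm_analysis"          # Step 03: ambiguous, reason with context
```

The point of the shape is the ordering: the fast, reproducible math handles the clear ends of the spectrum, and the expensive reasoning layer sees only the middle.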

    A worked example

    Two events from the same account, two minutes apart. On their own, each looks routine. Together, they don't.

    Microsoft 365 audit log

    Consent to application granted

    User
    j.smith@example.com
    Application
    Slack (productivity)
    Permissions
    Read calendars, send mail
    Time
    14:23:11 UTC

    Routine. Most of the time it's exactly what it looks like.

    Microsoft 365 audit log

    Inbox rule created

    User
    j.smith@example.com
    Rule
    From:* → Move to Deleted Items
    Apply to
    All inbound, no exceptions
    Time
    14:25:47 UTC

    Looks like inbox cleanup. It is also exactly what an attacker does to hide replies after a session hijack.

    Just the finding
    Consent granted, in isolation
    Low probability of being connected to malicious activity
    + Inbox rule on the same account
    Compounding context, minutes later
    High probability of being connected to malicious activity

    Above the threshold, Kindling escalates this as a case alert with the inbox-rule context already attached. The same individual finding. Different surrounding context.

    That's compounding context: the analysis the analyst would do, done first. Attackers who hijack a session create inbox rules so that bounce-backs and replies don't surface to the legitimate user. The two-step pattern is the signal. Each finding alone is not.
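Compounding context can be sketched in a few lines. The event shapes, weights, and ten-minute window below are hypothetical illustrations, not Kindling's production scoring:

```python
from datetime import datetime, timedelta

# Hypothetical per-event weights; each event alone stays below threshold
BASE_WEIGHT = {"consent_granted": 0.2, "inbox_rule_created": 0.3}
ESCALATE_AT = 0.8

def compound_score(events: list[dict]) -> float:
    """Score events together; the same-account two-step pattern compounds."""
    score = max((BASE_WEIGHT.get(e["type"], 0.0) for e in events), default=0.0)
    users = {e["user"] for e in events}
    types = {e["type"] for e in events}
    # Same account, both steps of the hijack pattern, minutes apart
    if len(users) == 1 and {"consent_granted", "inbox_rule_created"} <= types:
        times = sorted(e["time"] for e in events)
        if times[-1] - times[0] <= timedelta(minutes=10):
            score = 0.9  # the pair within minutes is the signal
    return score
```

With the two audit-log events above (14:23:11 and 14:25:47, same user), either event alone scores 0.3 or less; the pair crosses the escalation threshold.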

    Case-quality testing

    How we got to 98.5%

    1,580 + 2,000 validation cases reviewed

    Two validation runs. The first used 1,580 cases, followed by additional analysis of the edge-case findings hardest to categorize accurately. A new set of 2,000 cases validated the work at 98.5% accuracy. Each training intervention is measured before it becomes part of the engine.

    Initial responder baseline
    Customer-team accuracy in the dashboard. Common repeated miscategorizations were identified from this baseline.
    53%
    + First-round Kindling analysis
    Baseline Kindling accuracy after first-round correlation and analysis, accounting for the common errors identified in the baseline.
    89.5%
    + Ambiguous-finding refinement
    Final Kindling accuracy after dedicated analysis and refinement for the most ambiguous individual findings.
    98.5%

    When Kindling is uncertain, it has been optimized to fail secure. 99% of the time it errs toward human review instead of quiet dismissal. The default is always to surface.
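The fail-secure default reduces to a one-line policy. Names and threshold below are hypothetical, for illustration:

```python
CONFIDENCE_THRESHOLD = 0.9  # hypothetical; below this, a human decides

def disposition(verdict: str, confidence: float) -> str:
    """Err toward review: an uncertain 'benign' still surfaces."""
    if confidence < CONFIDENCE_THRESHOLD:
        return "surface_for_review"
    return verdict
```

The asymmetry matters: an uncertain dismissal is the expensive mistake, so uncertainty always lands in the review queue.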

    Built to improve over time.

    Most security AI products ship a model and hope it keeps up. Kindling runs a continuous evaluation loop against production data.

    Phase 01 Validate Every prompt change tested before deployment.
    Phase 02 Track Every detection-type fix is logged and measured.
    Phase 03 Find regressions Continuous evaluation flags regressions automatically.
    Phase 04 Ship fix Validated against the same data set, then deployed.

    The most recent batch found eight detection types that needed further training, and resolved all eight in the same cycle. Six of the eight now run above 94% accuracy. The other two run above 98%.

    This is the work that keeps the 98.5% number honest. Kindling's accuracy is a moving target that keeps moving in the right direction.

    Receipts

    Case-quality metrics from validation review.

    The first validation run used 1,580 cases, followed by 2,000 additional cases to avoid over-training against the original set. The bars below are metrics from that validation work.

    237

    Kill chains detected. Up to 6 MITRE phases each. The cases that matter are the multi-finding ones, and Kindling correlates them first.

    Narrative accuracy
    97.4%
    Actionability
    96.7%
    Severity alignment
    96.0%
    Cases combining multiple detection types
    46.7%
    Hallucination rate (low is good)
    3.3%

    The guardrail framework.

    Two phases of validation run before any input reaches the LLM. Most AI SIEM products don't publish this layer. We do.

    Input Evidence + user-controlled fields
    Gate 01 Sanitize & structure Strip prompt-injection vectors. Structure evidence cleanly.
    Gate 02 Boundary enforcement Hard limits on suppression authoring. Analyst approval required.
    LLM Reasoning layer, sanitized input only
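A minimal sketch of the two gates, assuming a simple pattern-based sanitizer and a boolean approval flag (the real gate logic is not published at this level of detail):

```python
import re

# Gate 01 (sketch): strip obvious prompt-injection phrasing from
# user-controlled evidence fields before they reach the LLM.
_INJECTION = re.compile(
    r"ignore (all )?previous instructions|disregard the system prompt",
    re.IGNORECASE,
)

def sanitize(field: str) -> str:
    return _INJECTION.sub("[removed]", field)

# Gate 02 (sketch): the engine can draft a suppression rule, but it
# never takes effect without explicit analyst approval.
def apply_suppression(rule: str, analyst_approved: bool) -> bool:
    return analyst_approved
```

Real sanitizers are more than one regex, but the ordering is the claim: both gates run before any input reaches the reasoning layer.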

    Why deterministic-first matters.

    Most findings resolve at the deterministic layer. Only the cases where context actually matters reach the LLM.

    Most

    Resolved at the deterministic layer

    Routine activity can stay quiet. High-confidence signals move forward with structured evidence. Fast, reproducible, auditable.

    Few

    Ambiguous to LLM analysis

    The cases where context matters. AI does AI work.

    That separation is what holds the autonomous resolution rate without over-triggering false positives.

    Context is the moat

    Kindling reasons against more than Blumira's own detection rules.

    Four sources feed every Kindling decision. Each one helps the engine understand whether an isolated finding is routine activity or part of a case.

    Source 01

    Your environment, specifically.

    A 14-day behavioral baseline of your organization. When users log in, where, from what devices, against what resources. Your data stays inside your tenant. Your data trains your baseline.

    Source 02

    Eight years of Blumira detection engineering.

    Detection rules, documented incidents, internal reviews, known-benign patterns, best-practice workflows, and organizational signatures from Blumira's history. A new finding is scored against the expertise the platform has built since 2018.

    Source 03

    Retained findings and case history.

    For existing Blumira customers, Kindling can review related findings and case history already retained in the platform, up to a full year depending on retention. It does not retroactively ingest historical logs from outside Blumira.

    Source 04

    Your team's organizational knowledge.

    Your analysts can add notes to findings or cases ("Ruth is in London this week"). Kindling reads them as context. If a related finding shows up the next day, Kindling already knows. If it shows up after she's home, Kindling flags it.

    Convergence

    Kindling reasons from Blumira's detection-engineering history, your current baseline, your historical findings, and the organizational context your team already knows. Shared threat intelligence improves detection across the platform without using one customer's private data to train another customer's baseline.

    That's also why Kindling can show you how your team is performing relative to similar organizations: same size, same industry, comparable complexity profile. "How are we doing compared to other healthcare companies our size?" is a question Kindling can answer. Most platforms can't.

    Where it fits

    The next layer of Blumira.

    Kindling sits above the detection layer that's powered Blumira since 2018. Detections still fire. Findings still surface in the platform and can be reviewed at any time. What's new is the layer that reads them, the one that asks the second and third question before sending an alert, and only escalates the cases that need human judgment.

    For existing Blumira customers: Kindling ships to all customer tiers. No new tier, no separate billing, no upgrade conversation required.

    For everyone else: this is what Blumira does now. It's also what the first Blumira detection rule in 2018 was always pointed toward.

    Level set

    What Kindling is not.

    Kindling uses better context to send fewer, higher-confidence alerts by focusing on validated cases instead of raw events. The work happens inside the engine: deterministic scoring, deterministic investigation and enrichment, LLM analysis, multi-judge consensus, signature matching against eight years of detection-engineering history. The output your team sees is the result of all that work compressed into a verdict and a next action. Nothing is dropped in the process: findings that score clearly benign or clearly malicious resolve in the engine, and findings that need human judgment escalate as cases, with reasoning, evidence, and next action attached.

    Kindling does not autonomously close findings. A human always confirms.

    When Kindling's confidence is below threshold, it surfaces "I need a human" rather than guessing.

    In testing, when Kindling is wrong, 99% of the time it errs toward surfacing the case for review, not clearing it as harmless.

    Kindling will not silently dismiss a finding it is unsure about. The default is always to surface.

    8 yrs
    Detection engineering reasoning
    98.5%
    Consensus accuracy across 3 judge AI models
    ~99%
    Of errors surface for review
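The three-judge consensus behind that accuracy number can be sketched as a majority vote. Hypothetical, assuming string verdicts from three judge models:

```python
from collections import Counter

def consensus(verdicts: list[str]) -> str:
    """Ship a verdict only when a majority of judges agree;
    otherwise the case goes to a human."""
    top, count = Counter(verdicts).most_common(1)[0]
    return top if count >= 2 else "surface_for_review"
```

A split jury is treated the same way as low confidence: it surfaces rather than ships.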

    Customer story

    Thirty minutes. In, staged, out.

    A Blumira customer received two concurrent detections on the same Microsoft 365 account: an impossible-travel alert (Microsoft's own geolocation-based detection) and a separate auth-pattern detection from Blumira.

    The first detection was a known IP-geolocation false positive. The user was logging into the company VPN, which routed their session through a server in another region. The second was the actual attack.

    The customer's analyst saw the same account flagged twice in quick succession and dismissed both: same account, same alert as last time. The trap was straightforward, and exactly the kind of trap that case-level correlation is built to defeat.

    What happened next, in 30 minutes:

    T+00:00
    Login Successful login flagged by geo-behavior detection.
    T+00:04
    Recon Inbox rules created to hide replies and move messages to Deleted Items.
    T+00:10
    Attack 32 outbound emails sent with fraud-pattern content, posing as the legitimate user to that user's clients.
    T+00:30
    Cleanup Session ended. The attacker deleted the sent emails, deleted the inbox rule, logged out, and never touched it again.

    Kindling correlation

    Caught on batch review. Two detections with no entity overlap at a glance. Same user. Same IP cluster. Same auth-pattern timeline. One coherent attack chain.

    Kindling caught it on the batch review. Two detections with no entity overlap to the naked eye, but everything in common to the engine: the same user identity, the same IP cluster, the same auth-pattern timeline, and the case correlation logic that links impossible-travel, auth-from-new-country, external-forwarding-rule, and suspicious-inbox-rule into a single coherent attack chain. The investigation queries the analyst would have built by hand came pre-embedded in the case alert.

    The detection rules fired correctly the entire time. They told the customer something happened. The customer dismissed them, exactly the failure mode that finding-level alerts produce when analysts are stretched thin.

    Case-level alerts assume nobody has time to triage. They wait until the analysis says triage is necessary, and they bring actionable answers you can verify with them.

    Get hands-on

    See Kindling on your Blumira data.

    For existing Blumira customers, we can show what Kindling would have surfaced from historical findings and logs already on-platform. For new environments, we can walk through the same workflow on live telemetry going forward or set you up with a Free NFR license. No synthetic environment. No staged scenario. Your data stays inside Blumira.


    We'll reach out within one business day. No marketing automation drip. No 14-touch sequence. One email, one human.

    Questions

    Kindling, in detail.

    Get started

    See Kindling on your Blumira data.

    Existing Blumira customers can review historical findings and logs already on-platform. New teams can see Kindling on their own telemetry going forward with a Free NFR license.

    Eight years of detection led here. One look at your environment will show you why.