# Anomaly Detection
Behavioral baseline analysis that detects when an agent deviates from its normal patterns — catching threats that rule-based policies miss.
## How It Works
The anomaly detector runs on a per-agent, per-tenant basis. It maintains a rolling behavioral baseline for each agent and compares recent activity against that baseline using two complementary analysis paths:
- **LLM analysis (primary):** The agent's recent tool-call sequence and baseline profile are sent to a configurable LLM (Ollama by default; OpenAI, Anthropic, and OpenRouter are also supported). The LLM produces an anomaly score, type, and plain-English explanation.
- **Statistical fallback:** When the LLM is unavailable, a statistical frequency analysis takes over. Tools used significantly above or below their baseline rate are flagged.
- **Event-based checks (always on):** Five fast-path pattern detectors run in parallel with LLM analysis, catching burst scenarios that require low latency (see below).
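The statistical fallback can be pictured as a z-score test against the stored baseline. A minimal sketch, assuming the baseline exposes a per-tool mean and standard deviation of hourly call counts (the function name and the z-score cutoff here are illustrative; real thresholds are configurable):

```python
def frequency_anomaly(observed_per_hour, baseline_mean, baseline_std, z_threshold=3.0):
    """Flag a tool used significantly above or below its baseline rate.

    Illustrative sketch only: the actual detector and its threshold
    values are configured per tenant.
    """
    if baseline_std == 0:
        # No observed variance in the baseline: any deviation is anomalous.
        return observed_per_hour != baseline_mean
    z = (observed_per_hour - baseline_mean) / baseline_std
    return abs(z) >= z_threshold
```

For example, 50 calls/hour against a baseline of 10 ± 5 yields a z-score of 8 and is flagged, while 11 calls/hour is within normal variation.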
Anomaly events above the configured threshold are persisted to the database and can trigger alert rules. During the learning phase, events are recorded but alerts are suppressed so new agents can build a reliable baseline. The learning duration is configurable per agent.
## Behavioral Baseline
The baseline is computed automatically from the agent's audit event history and records the following metrics:
| Metric | Description |
|---|---|
| Call frequency | Mean and standard deviation of calls per hour, per tool, over the baseline window |
| Hour distribution | 24-bucket UTC hour histogram — which hours the agent is normally active |
| Tool list | Set of tools historically used by this agent |
### Learning Phase
New agents enter a learning phase. During this period, the baseline accumulates data but anomaly alerts are suppressed. After the learning window expires, the baseline is automatically locked and alerts begin firing. The learning duration is configurable per agent via the API.
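The alert-suppression rule amounts to a simple date check. A minimal sketch, assuming the learning window is anchored to the agent's creation time (the function and parameter names are hypothetical):

```python
from datetime import datetime, timedelta, timezone

def alerts_enabled(agent_created_at, learning_phase_days):
    """Anomaly alerts fire only after the per-agent learning window elapses.

    Events detected before the cutoff are still recorded, just not alerted on.
    Illustrative sketch; the real check lives in the alerting pipeline.
    """
    cutoff = agent_created_at + timedelta(days=learning_phase_days)
    return datetime.now(timezone.utc) >= cutoff
```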
## Event-Based Detection Patterns
Five fast-path detectors compare the live event stream against statistical baselines and known attack patterns. Thresholds and detection windows are configurable per tenant.
- `dlp_burst`: A sudden spike of DLP data-loss events suggests the agent is actively exfiltrating data or scanning for sensitive information at an unusual rate.
- `injection_clustering`: Multiple injection attempts in a short window indicate a coordinated attack or a compromised agent running injection payloads.
- `policy_denial_spike`: A sudden increase in policy denials relative to the agent's historical baseline suggests the agent is probing restricted tools or its configuration has drifted.
- `unusual_tool_access`: The agent is calling tools it has never used before. This may indicate a new task, a misconfiguration, or a supply-chain compromise.
- `off_hours_activity`: Significant activity during hours when the agent has never previously run. Can be tuned to an explicit off-hours window via per-agent config.
## Configuration
All detection thresholds, baseline windows, learning phase duration, and off-hours schedules are configurable per agent and per tenant via the API. System-wide defaults can be overridden at any level.
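The override order resolves with the most specific level winning: system defaults, then tenant, then agent. A minimal sketch of that merge (the default values shown are illustrative, not the product's actual defaults):

```python
def resolve_config(system_defaults, tenant_overrides, agent_overrides):
    """Merge anomaly config: system defaults < tenant overrides < agent overrides.

    Unset (None) values at a level fall through to the level below.
    Illustrative sketch only.
    """
    cfg = dict(system_defaults)
    for overrides in (tenant_overrides, agent_overrides):
        cfg.update({k: v for k, v in (overrides or {}).items() if v is not None})
    return cfg
```

For example, a tenant-wide `anomalyThreshold` of 70 combined with an agent-level `learningPhaseDays` of 14 yields both overrides, with everything else taken from the system defaults.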
## API Reference
### Example: override thresholds for a specific agent

```bash
curl -s -X PATCH https://api.shieldagent.io/tenants/:tenantId/agents/:agentId/anomaly-config \
  -H 'Authorization: Bearer <token>' \
  -H 'Content-Type: application/json' \
  -d '{
    "anomalyThreshold": 70,
    "offHoursStart": 22,
    "offHoursEnd": 6,
    "learningPhaseDays": 14
  }'
```

### Anomaly event shape
```json
{
  "id": "anev_...",
  "agentId": "...",
  "anomalyScore": 78,
  "anomalyType": "injection_clustering",
  "confidence": 80,
  "source": "statistical",
  "toolName": null,
  "details": {
    "explanation": "Injection clustering detected above configured threshold."
  },
  "detectedAt": "2026-04-24T14:05:00.000Z"
}
```