FeatureSignals

SOC 2 Incident Response

Last updated: April 2026

SOC 2 CC7.4 requires documented incident response procedures. FeatureSignals maintains a comprehensive incident response program covering the full lifecycle — from detection through post-mortem — with defined severity levels, response SLAs, and communication protocols.

Report a Security Incident

If you believe you've discovered a security vulnerability or are experiencing a security incident, contact security@featuresignals.com immediately. Do not file a public issue.

Incident Severity Levels

  • P0: Complete service outage, data breach, or critical vulnerability under active exploit. Acknowledgment: 15 minutes; resolution target: 4 hours.
  • P1: Major feature outage, significant degradation, or confirmed vulnerability with a known exploit. Acknowledgment: 1 hour; resolution target: 24 hours.
  • P2: Partial feature degradation, or a non-critical bug affecting multiple users. Acknowledgment: 4 hours; resolution target: 5 business days.
  • P3: Minor issue affecting a single user, cosmetic bug, or feature request. Acknowledgment: 1 business day; resolution target: next release.

Incident Response Lifecycle

Every incident follows a five-phase response lifecycle aligned with NIST SP 800-61 and SOC 2 CC7.3–CC7.5:

Detection & Triage

Incidents are detected through automated monitoring (SigNoz alerts, health-check failures, error-rate spikes), customer reports, or security researcher disclosures. The on-call engineer triages within the acknowledgment SLA, assessing scope, impact, and severity.

  • Acknowledge alert within SLA timeframe
  • Assess whether this is a security incident or operational issue
  • Assign severity level (P0–P3)
  • Create incident channel (Slack) and incident document
  • Notify on-call commander for P0/P1
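The triage checklist above can be sketched as a small incident record. The dataclass, field names, and channel-naming scheme below are hypothetical illustrations, not FeatureSignals' real tooling; the key behaviors it encodes are "one channel per incident" and "page the commander only for P0/P1."

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Incident:
    """Illustrative triage record mirroring the checklist above."""
    summary: str
    severity: str       # "P0".."P3"
    is_security: bool   # security incident vs. operational issue
    detected_at: datetime
    channel: str = field(init=False)

    def __post_init__(self):
        # One Slack channel per incident, named for discoverability
        # (naming convention is an assumption for this sketch).
        stamp = self.detected_at.strftime("%Y%m%d")
        self.channel = f"#inc-{stamp}-{self.severity.lower()}"

    @property
    def page_commander(self) -> bool:
        # The incident commander is notified only for P0/P1.
        return self.severity in ("P0", "P1")
```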

Containment

The immediate priority is to stop further damage. For security incidents, this may mean revoking compromised credentials, isolating affected systems, or blocking the attack vector. For operational incidents, it means halting the cascading failure.

  • Revoke compromised credentials or API keys immediately
  • Isolate affected systems if necessary
  • Apply WAF rules or rate limits to block attack traffic
  • Fail over to standby if primary is compromised
  • Preserve forensic evidence before making changes
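One ordering constraint in the list above is easy to get wrong under pressure: evidence must be preserved before any state-changing action. A minimal sketch of a runbook runner that enforces that ordering (all names are illustrative, not real FeatureSignals tooling):

```python
def contain(incident_id: str, actions: list[str]) -> list[str]:
    """Run containment actions, always capturing forensic evidence first."""
    log: list[str] = []
    # Snapshot state for forensics before mutating anything, so later
    # investigation is not working from post-containment system state.
    log.append(f"{incident_id}: preserve forensic evidence (logs, disk, memory)")
    for action in actions:
        log.append(f"{incident_id}: {action}")
    return log

steps = contain("INC-42", [
    "revoke compromised API keys",
    "apply WAF rule blocking attack source",
    "fail over to standby",
])
```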

Investigation

Root cause analysis begins in parallel with containment. The investigation team analyzes logs, audit trails, and system state to determine: what happened, when it started, what data was affected, and whether the attack vector is still open.

  • Review audit logs, access logs, and system metrics
  • Determine timeline — when did the incident begin?
  • Identify affected data, systems, and customers
  • Document findings in the incident document
  • Preserve evidence with chain of custody
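Establishing the timeline typically means merging entries from several log sources and finding the earliest anomalous event. A toy sketch under that assumption (timestamps and messages are invented for illustration):

```python
from datetime import datetime

def build_timeline(*sources: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Merge (ISO-8601 timestamp, message) entries from multiple log
    sources into one chronologically sorted timeline."""
    merged = [entry for src in sources for entry in src]
    return sorted(merged, key=lambda e: datetime.fromisoformat(e[0]))

audit = [("2026-04-01T03:12:00", "token used from unfamiliar network")]
access = [
    ("2026-04-01T02:58:41", "failed-login spike begins"),
    ("2026-04-01T03:40:10", "bulk export endpoint called"),
]
timeline = build_timeline(audit, access)
incident_start = timeline[0][0]  # earliest suspicious event
```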

Notification & Communication

Affected parties are notified according to regulatory requirements and contractual obligations. Internal stakeholders receive regular status updates. External communication is coordinated through a designated incident commander.

  • Notify affected customers within regulatory timeframe (GDPR: 72 hours)
  • Update status page within 30 minutes of confirmation
  • Send internal status updates every hour for P0, every 4 hours for P1
  • Coordinate external messaging with legal/comms
  • File regulatory notifications if required (data breach, DORA)
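The deadlines above form a small notification clock: status page within 30 minutes, internal updates hourly (P0) or every 4 hours (P1), and the GDPR Article 33 72-hour window for breach notification. A sketch with hypothetical helper names:

```python
from datetime import datetime, timedelta

GDPR_WINDOW = timedelta(hours=72)  # GDPR Art. 33 breach-notification window
UPDATE_CADENCE = {"P0": timedelta(hours=1), "P1": timedelta(hours=4)}

def notification_schedule(severity: str, confirmed_at: datetime) -> dict:
    """Compute communication deadlines from the moment of confirmation."""
    sched = {
        "status_page_by": confirmed_at + timedelta(minutes=30),
        "gdpr_notify_by": confirmed_at + GDPR_WINDOW,
    }
    cadence = UPDATE_CADENCE.get(severity)
    if cadence is not None:  # internal update cadence applies to P0/P1 only
        sched["next_internal_update"] = confirmed_at + cadence
    return sched
```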

Remediation & Recovery

The root cause is fixed, systems are restored to normal operation, and verification confirms the fix is effective. For security incidents, additional hardening is applied to prevent recurrence.

  • Deploy fix with verified effectiveness
  • Rotate all secrets and credentials that may have been exposed
  • Restore services and verify with health checks
  • Update WAF rules and monitoring to detect similar attacks
  • Close incident after 24 hours of stable operation
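The final closing condition can be expressed as a simple recovery gate: the incident closes only after 24 hours of failure-free operation since the fix was deployed. A minimal sketch (function and parameter names are ours):

```python
from datetime import datetime, timedelta

STABILITY_WINDOW = timedelta(hours=24)

def can_close(fix_deployed_at: datetime, now: datetime,
              health_failures_since_fix: int) -> bool:
    """An incident may close only after a full stability window with
    zero health-check failures since the fix went out."""
    stable_long_enough = now - fix_deployed_at >= STABILITY_WINDOW
    return stable_long_enough and health_failures_since_fix == 0
```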

Post-Mortem Process

Every P0 and P1 incident produces a blameless post-mortem within 48 hours of resolution. The post-mortem document covers:

  • Timeline: Minute-by-minute account from detection to resolution
  • Root cause: The underlying cause, not just the trigger
  • Impact assessment: Customers affected, data exposed, duration of impact
  • What went well: Processes that helped contain or resolve quickly
  • What went poorly: Gaps in detection, response, or communication
  • Action items: Specific, assigned, time-bound remediation items with tracking
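The "specific, assigned, time-bound" requirement for action items can be checked mechanically. A sketch validator, with hypothetical field names and an invented example item:

```python
from datetime import date

def valid_action_item(item: dict) -> bool:
    """True only if the item is specific (non-empty description),
    assigned (has an owner), and time-bound (has a due date)."""
    return (bool(item.get("description"))
            and bool(item.get("owner"))
            and isinstance(item.get("due"), date))

item = {
    "description": "Add alerting on bulk-export request rate",
    "owner": "alice",
    "due": date(2026, 5, 1),
}
```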

Post-mortems are shared internally with the engineering team and, when appropriate, published as public incident reports to build customer trust.

SOC 2 Criteria Alignment

  • CC7.3 (evaluate security events): automated alerting, on-call triage, severity classification.
  • CC7.4 (respond to incidents): documented five-phase lifecycle, defined SLAs, incident commander role.
  • CC7.5 (recover from incidents): automated backup/restore, DR runbook, quarterly DR testing.
  • CC9.2 (assess vendor risks): sub-processor incident notification requirements in DPAs.

Security Contact

For incident reporting, vulnerability disclosure, or security inquiries: security@featuresignals.com