Workflow Automation · Cybersecurity · AI/ML UX · Process Design

ProACT — SOC Workflow Automation Platform

Designing a workflow automation platform that enables SOC analysts to build, test, and deploy automated incident response playbooks — without writing code.

Company

SISA Information Security

Year

2022–2023

Role

Lead UX Designer

Impact Metrics
−35%
Response Time
−70%
Manual Steps
89%
Playbook Adoption
12/wk
Analyst Hours Saved


Discover
Define
Ideate
Low Fidelity
High Fidelity
Prototype
Test
Ship

Overview

ProACT was a SISA Information Security workflow automation product for Security Operations Center teams. It enabled Tier 2 and Tier 3 analysts to build no-code automation playbooks that trigger on alert conditions, run enrichment queries, and execute remediation actions.

The challenge: our users, security analysts, are highly technical but not software engineers. They understand logic and workflows but reject anything that "feels like coding."

SISA security tool login experience and visual direction

Alongside workflow-heavy security product work at SISA, I also contributed to security-tool entry experiences such as login and access screens. The visual direction needed to communicate trust, technical depth, and clear access separation while keeping the interaction simple for analysts, administrators, and consultants.


The Problem

SOC teams were spending 60–70% of analyst time on repetitive, procedural tasks: enrichment lookups, asset queries, ticket creation, Slack notifications. For these tasks the decision logic was already known; execution was just labor.

Existing automation tools (SOAR platforms, custom scripts) required Python or YAML knowledge. Most SOC analysts don't have this background, and those who do resent spending it on rote tasks.

The design challenge: Build a workflow editor that feels like designing a flowchart, not writing a program — while handling the full complexity of conditional branching, API integrations, and error states.


Research

Mental Model Interviews

Interviewed 16 SOC analysts about how they think about incident response. Asked them to describe their process verbally, then had them draw it.

Key insight: Analysts think in terms of decisions and actions, not data transformations. The unit of thinking is "if this, then do that" — not "transform input A to output B."

This directly shaped the visual language of the builder.
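That decision-and-action framing can be made concrete with a small sketch: a playbook as an ordered list of (condition, action) rules rather than a data-transformation pipeline. This is purely illustrative; the function and field names are hypothetical, not ProACT internals.

```python
# Illustrative only: "if this, then do that" as ordered (condition, action)
# pairs. An alert is a plain dict; every matching rule fires its action.
def evaluate(rules, alert):
    fired = []
    for condition, action in rules:
        if condition(alert):
            fired.append(action(alert))
    return fired

rules = [
    (lambda a: a["severity"] == "high", lambda a: f"page on-call for {a['id']}"),
    (lambda a: a.get("ip"),             lambda a: f"enrich {a['ip']}"),
]

actions = evaluate(rules, {"id": "A-1", "severity": "high", "ip": "198.51.100.4"})
# Both rules match this alert, so both actions fire.
```

The contrast with a transform-oriented model (input A to output B) is what the interviews surfaced: analysts reason over the whole alert and branch on it, rather than reshaping data step by step.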

Competitive Analysis

Analyzed 6 existing workflow automation tools (Splunk SOAR, Palo Alto XSOAR, Tines, Zapier, n8n, Retool). Key finding: most tools designed for engineers optimize for power over clarity. Analyst-targeted tools sacrificed power for simplicity. Neither hit the right balance.

Task Analysis

Mapped the 20 most common analyst workflows by frequency and manual effort. Identified the 5 highest-leverage automation targets:

  1. Alert enrichment (IP/domain/hash lookup) — 28 min/day saved
  2. Asset context gathering — 18 min/day saved
  3. Ticket creation and routing — 12 min/day saved
  4. Stakeholder notification — 9 min/day saved
  5. Evidence collection and packaging — 22 min/day saved

Design Principles

Based on research, I defined 4 design principles for ProACT:

  1. Think in workflows, not code. Visual canvas, not text editor.
  2. Reveal complexity progressively. Simple use cases should feel simple; complex ones should be possible.
  3. Make the machine's decisions visible. Every automated action should be auditable and explainable.
  4. Fail gracefully, recover easily. Error states must surface clearly with specific remediation guidance.

Design Process

Canvas Architecture

The central challenge was the workflow canvas. I explored 3 approaches:

  • Linear list — Simple but couldn't represent branching logic
  • Node-graph — Powerful but intimidating; "felt like engineering"
  • Swimlane flowchart — Familiar to analysts from process mapping; supported branches while remaining readable

The swimlane approach won in testing. Analysts immediately recognized the visual grammar.

Node Design System

Developed a library of 40+ workflow nodes grouped into 5 categories:

  • Triggers (alert conditions, schedules, manual)
  • Conditions (if/else, switch, rate limits)
  • Actions (API calls, notifications, ticket creation)
  • Enrichment (threat intel, asset lookup, geolocation)
  • Control (delays, loops, error handlers)

Each node type had a consistent layout: title, input fields, output connections, status indicator, and contextual help.
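A minimal sketch of what that consistent node contract might look like as data, assuming a simple spec object per node type. The names (`NodeCategory`, `NodeSpec`) and fields are assumptions for illustration, not the product's actual schema.

```python
# Hypothetical node metadata for a builder like ProACT: every node type
# carries the same fields (title, inputs, output connections, help text).
from dataclasses import dataclass, field
from enum import Enum

class NodeCategory(Enum):
    TRIGGER = "trigger"
    CONDITION = "condition"
    ACTION = "action"
    ENRICHMENT = "enrichment"
    CONTROL = "control"

@dataclass
class NodeSpec:
    title: str
    category: NodeCategory
    inputs: dict = field(default_factory=dict)    # field name -> type label
    outputs: list = field(default_factory=list)   # named output connections
    help_text: str = ""                           # contextual help shown inline

# Example: an enrichment node with three labeled output branches.
ip_lookup = NodeSpec(
    title="IP Reputation Lookup",
    category=NodeCategory.ENRICHMENT,
    inputs={"ip_address": "string"},
    outputs=["clean", "suspicious", "error"],
    help_text="Queries threat intelligence for the given IP address.",
)
```

Keeping the contract uniform across all 40+ nodes is what lets the canvas render any node the same way and lets analysts transfer what they learn on one node to every other.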

Testing & Iteration

Ran 3 rounds of task-based testing with 12 participants, progressively increasing workflow complexity.

Critical insight from Round 1: Analysts were confused by connection lines between nodes. They expected the flow to be top-to-bottom, like a checklist — not a graph. Redesigned the layout to enforce top-to-bottom flow by default, with branches displayed as indented sub-paths.


The Solution

A visual workflow editor built around analyst mental models:

Canvas Builder — A swimlane canvas where analysts drag, connect, and configure workflow nodes. Top-to-bottom flow with clear branching. No code required.

Inline Testing — Analysts can test workflows against real (or anonymized) alert data without deploying. A step-by-step execution trace shows exactly what each node received, processed, and output — making the machine's logic fully transparent.
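A trace like that could be represented as one record per node, capturing what the node received, what it produced, and its status. The record shape below is an assumption for illustration, not ProACT's actual format.

```python
# Illustrative per-node execution trace record for an inline test run.
from dataclasses import dataclass

@dataclass
class TraceStep:
    node: str
    received: dict   # inputs the node was given
    output: dict     # what the node produced
    status: str      # e.g. "ok", "skipped", "error"

def render_trace(steps):
    """Render a step-by-step, human-readable trace of a test run."""
    return [f"{s.node} [{s.status}]: {s.received} -> {s.output}" for s in steps]

trace = [
    TraceStep("Alert Trigger", {}, {"ip": "203.0.113.7"}, "ok"),
    TraceStep("IP Reputation Lookup", {"ip": "203.0.113.7"},
              {"verdict": "suspicious"}, "ok"),
]
```

The point of the structure is transparency: because every step records its inputs and outputs, an analyst can replay the machine's reasoning without deploying anything.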

Playbook Library — Pre-built templates for the 20 most common workflows, installable in one click and fully customizable. Reduces time-to-first-automation from days to minutes.

Audit Log — Every automated action is logged with actor (system or human), timestamp, inputs, and outputs. Supports forensic review and compliance requirements.
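An audit entry with those four fields (actor, timestamp, inputs, outputs) could be serialized to an append-only JSON-lines log. The function and field names below are illustrative assumptions, not the shipped schema.

```python
# Hedged sketch of an audit log entry with the fields named in the case
# study: actor, timestamp, inputs, outputs.
import json
from datetime import datetime, timezone

def audit_entry(actor, action, inputs, outputs):
    return {
        "actor": actor,                                    # "system" or a username
        "action": action,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inputs": inputs,
        "outputs": outputs,
    }

entry = audit_entry("system", "create_ticket",
                    {"alert_id": "A-123"}, {"ticket_id": "T-9"})
line = json.dumps(entry)  # one JSON object per line suits forensic review
```

Timezone-aware UTC timestamps and a flat, append-only format keep entries easy to correlate during forensic review and simple to retain for compliance.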


Impact

  • 35% reduction in average incident response time
  • 70% reduction in manual enrichment and notification steps
  • 89% playbook adoption rate across active SOC teams
  • 12 analyst hours saved per week per team on average
  • Zero production incidents caused by automation errors in the first 6 months post-launch

Learnings

1. Mental model matching is non-negotiable for power users. Experienced professionals won't adapt to a foreign mental model. The design has to meet them where they think.

2. Transparency is a feature, not a constraint. Building the audit log and execution trace made the platform more adoptable, not less. Analysts trusted it more because they could see inside it.

3. Templates lower activation energy. The playbook library drove more adoption than any training session. Seeing a working automation and modifying it is dramatically more accessible than building from scratch.