PRACTITIONER'S GUIDE

How WinDAGs thinks, selects skills, executes in waves, and evaluates quality. A preview of what you'll be working with.

01. HOW WINDAGS THINKS

When you give WinDAGs a task, it progressively decomposes your natural language into an executable DAG.

Pass 1: Structure

Break the task into logical subtasks. Identify which pieces depend on each other and which can run in parallel. The output is a rough dependency graph — nodes with edges, not yet assigned to skills.

Pass 2: Capability

For each subtask, find the best-matching skill from the library. If no skill matches well, mark it as a placeholder step — to be refined once earlier nodes produce context.

Pass 3: Topology

Arrange the graph into parallel waves. Wave 1 contains all nodes with no dependencies. Wave 2 contains nodes that only depend on Wave 1 outputs. And so on. This maximizes concurrency.
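A minimal sketch of the Pass 3 wave-leveling step, using Kahn's algorithm. The function and node names here are illustrative, not the actual WinDAGs API:

```python
from collections import defaultdict, deque

def level_into_waves(nodes, edges):
    """Group DAG nodes into parallel waves: a node joins the first wave
    after all of its dependencies have completed."""
    indegree = {n: 0 for n in nodes}
    dependents = defaultdict(list)
    for upstream, downstream in edges:
        indegree[downstream] += 1
        dependents[upstream].append(downstream)

    waves = []
    ready = deque(n for n in nodes if indegree[n] == 0)  # Wave 1: no deps
    while ready:
        wave = list(ready)          # everything ready now runs concurrently
        waves.append(wave)
        ready = deque()
        for n in wave:
            for d in dependents[n]:
                indegree[d] -= 1
                if indegree[d] == 0:
                    ready.append(d)
    return waves
```

For example, `level_into_waves(["scan", "config", "deps", "security"], [("scan", "security"), ("config", "security")])` yields two waves: the three independent nodes first, then the security analysis that depends on two of them.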

02. SKILL SELECTION

WinDAGs matches the right skill to each node using a 5-step selection cascade:

1. Semantic match — Compare the subtask description to skill descriptions using embedding similarity.
2. Capability filter — Does the skill have the right tools? If a subtask needs file system access, only skills with Bash or Edit qualify.
3. History boost — Skills that performed well on similar tasks in the past get ranked higher. The system balances exploring new skills with exploiting proven ones.
4. Cost check — If two skills are equally capable, prefer the one that uses a cheaper model tier.
5. Human override — You can always pin a specific skill to a node. The system respects manual choices.

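The cascade above can be sketched in a few lines. This is a hypothetical shape, not the real WinDAGs interface; the skill fields, the toy token-overlap similarity, and the 0.1 boost weight are all assumptions for illustration:

```python
def similarity(a, b):
    """Toy stand-in for embedding similarity: Jaccard overlap of tokens."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def select_skill(subtask, skills, pinned=None):
    """Five-step cascade sketch. Each skill is a dict with hypothetical
    fields: name, description, tools (set), history_score, cost_tier."""
    if pinned is not None:                              # 5. human override wins outright
        return pinned
    candidates = [s for s in skills
                  if subtask["required_tools"] <= s["tools"]]  # 2. capability filter
    def score(s):
        semantic = similarity(subtask["description"], s["description"])  # 1. semantic match
        boosted = semantic + 0.1 * s["history_score"]   # 3. history boost
        return (boosted, -s["cost_tier"])               # 4. cheaper tier breaks ties
    return max(candidates, key=score, default=None)
```

Returning a tuple from `score` makes the cost check a pure tiebreaker: cost tier only matters when the boosted semantic scores are equal.
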
03. WAVE EXECUTION

Once the DAG is built, WinDAGs executes it in parallel waves.

Wave 1: Scan Codebase · Read Config · Check Deps
Wave 2: Analyze Security · Lint Rules
Wave 3: Generate Report
Wave 4: Human Review (checkpoint)
Wave 5: Apply Fixes · Update Tests

Each wave waits for the previous wave to complete. Within a wave, all nodes run concurrently. If a node fails, its downstream dependents are paused while the failure handler decides whether to retry, reroute, or escalate.
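A minimal sketch of that loop, assuming a simplified failure handler that only pauses the failed node's downstream dependents (a real handler would also retry, reroute, or escalate):

```python
from concurrent.futures import ThreadPoolExecutor

def run_waves(waves, run_node, dependents):
    """Execute waves in order; nodes within a wave run concurrently.
    When a node raises, every transitive dependent is paused and never
    becomes runnable in later waves."""
    paused = set()
    results = {}
    for wave in waves:
        runnable = [n for n in wave if n not in paused]
        with ThreadPoolExecutor() as pool:     # wave barrier: pool exit waits
            futures = {n: pool.submit(run_node, n) for n in runnable}
        for n, fut in futures.items():
            try:
                results[n] = fut.result()
            except Exception:
                stack = [n]                    # pause everything downstream
                while stack:
                    for d in dependents.get(stack.pop(), []):
                        if d not in paused:
                            paused.add(d)
                            stack.append(d)
    return results, paused
```

The `with` block doubles as the wave barrier: the pool does not exit until every node in the wave has finished, so the next wave never starts early.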

04. QUALITY EVALUATION

Every node output passes through four quality layers:

Floor — Does it parse? Does it compile?
Wall — Does it solve the subtask correctly?
Ceiling — Is it high quality? Follows conventions?
Envelope — Does it conflict with other nodes?

The Floor check is fast and cheap — it catches garbage. The Envelope check is the most expensive — it requires cross-referencing outputs from multiple nodes. WinDAGs runs them in order and stops early if a lower layer fails.
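The early-stop logic is simple to express. A sketch, with the actual check functions left as caller-supplied callables since their internals are WinDAGs-specific:

```python
def evaluate(checks):
    """checks: ordered (layer_name, check_fn) pairs, cheapest first.
    Returns the first failing layer name, or 'pass'. Later, more
    expensive layers never run once a lower one fails."""
    for name, check in checks:
        if not check():
            return name
    return "pass"
```

With the four layers wired in as `[("floor", ...), ("wall", ...), ("ceiling", ...), ("envelope", ...)]`, a Floor failure means the costly Envelope cross-referencing is never attempted.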

05. SEED TEMPLATES

WinDAGs ships with pre-built DAG templates for common workflows. Start from a template, then customize.

Security Audit — Scan dependencies, analyze code paths, check for OWASP Top 10, generate report with remediation priorities.
Code Review — Lint, type-check, test coverage, style guide compliance, architecture review, PR summary.
Data Pipeline — Schema validation, ETL logic, data quality checks, performance benchmarks, monitoring setup.
Feature Implementation — Requirements analysis, architecture design, implementation, test writing, documentation, changelog.
Incident Response — Log analysis, root cause identification, fix implementation, regression test, postmortem draft.
Design System Update — Token audit, component inventory, accessibility check, visual regression, documentation refresh.

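One way to picture "start from a template, then customize": a template is just nodes plus dependency edges, and customizing means extending both. The template shape and names below are assumptions for illustration, not the shipped format:

```python
# Hypothetical data shape for a seed template: nodes plus dependency edges.
SECURITY_AUDIT = {
    "nodes": ["scan_deps", "analyze_paths", "owasp_check", "report"],
    "edges": [("scan_deps", "report"),
              ("analyze_paths", "owasp_check"),
              ("owasp_check", "report")],
}

def customize(template, add_nodes=(), add_edges=()):
    """Return a new DAG that extends a seed template with
    project-specific steps, leaving the template unchanged."""
    return {
        "nodes": template["nodes"] + list(add_nodes),
        "edges": template["edges"] + list(add_edges),
    }
```

For example, a team could bolt a license check onto the security audit with `customize(SECURITY_AUDIT, add_nodes=["license_check"], add_edges=[("license_check", "report")])`.
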
06. WHAT YOU'LL BE ABLE TO BUILD

WinDAGs is general-purpose. If you can describe it as a set of subtasks with dependencies, WinDAGs can orchestrate it.

Full-stack feature builds
Multi-repo refactoring
Compliance audits
Test suite generation
Documentation overhauls
Performance optimization
Design system creation
Incident response automation

WANT TO TRY THIS?

WinDAGs enters beta testing in Spring 2026. Request early access to start building with these patterns.

Request Early Access