Real-World Playbooks

Complete skill and agent setups for four common business domains. Each playbook includes the scenario, the skills and agents involved, and the expected output — ready for you to adapt to your own work.

This is the capstone of the "From Process to Agent" module. Everything you have learned — skill files, the Document-Structure-Codify-Test framework, agent architecture — comes together here in practical, end-to-end examples. These are not theoretical. They are distilled from real workflows that teams run daily.

How to Use These Playbooks
You do not need to implement all four. Read through them, pick the one closest to your daily work, and adapt it. Each playbook is designed to be a starting point. Modify the skills, adjust the workflow, and tailor the outputs to your specific needs.

Playbook 1: Research and Analysis

Market research, competitive intelligence, trend analysis, and due diligence — the core information-gathering workflows that inform decisions.

Scenario: Quarterly Market Landscape Report

Your team publishes a quarterly market landscape report covering competitors, market trends, and emerging players. The report takes 2-3 days of manual research and writing. You need to cover 8-10 competitors, identify 3-5 market trends, and profile 2-3 emerging companies. The final output is a 2,000-word report with an executive summary.

Skills Required

.claude/skills/
├── research-company.md        — Deep-dive on a single company
├── identify-trends.md         — Analyze a topic for emerging trends
├── competitive-comparison.md  — Side-by-side feature/positioning comparison
└── write-report-section.md    — Write a section with consistent formatting
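
Each entry is a plain markdown file. As a reference point, research-company.md might look like the sketch below — the section headings are illustrative, not a required format; use whatever skill template you built in the earlier lessons:

```markdown
# Skill: Research Company

Produce a one-page research brief on a single company.

## Inputs
- **company**: Company name
- **focus**: Optional focus areas (e.g., funding, product, hiring)

## Steps
1. Gather recent news, funding announcements, and product updates.
2. Summarize positioning: who they sell to, and against whom.
3. Note momentum signals: hiring, partnerships, press volume.

## Output
A brief at output/research/{company}.md with sections:
Overview, Recent Activity, Positioning, Signals.

## Rules
- Cite a source for every factual claim.
- Mark anything unverified as "signal, not confirmed."
```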

Agent Setup

# Agent: Market Landscape Report

## Objective
Produce a quarterly market landscape report covering competitors,
trends, and emerging players.

## Inputs
- **competitors**: List of companies to analyze (8-10)
- **industry**: Market vertical to focus on
- **quarter**: Which quarter this report covers

## Workflow
1. **Competitor Research Phase** (per company):
   - Run /research-company for each competitor
   - Focus areas: product updates, funding, strategic moves, market positioning
   - Save each brief to output/research/{company-name}.md

2. **Trend Analysis Phase:**
   - Run /identify-trends for the {industry} over the past quarter
   - Look for: technology shifts, buyer behavior changes, regulatory updates,
     pricing model evolution, consolidation signals
   - Save to output/trends.md

3. **Comparison Phase:**
   - Run /competitive-comparison using the research from phase 1
   - Produce a comparison matrix: features, pricing, positioning, momentum
   - Save to output/comparison-matrix.md

4. **Emerging Players Phase:**
   - From the trend analysis, identify 2-3 companies not in the main competitor list
     that are gaining traction
   - Run /research-company on each
   - Save to output/emerging/

5. **Report Assembly Phase:**
   - Run /write-report-section for each section:
     a. Executive Summary (200 words) — written last, summarizing everything
     b. Market Overview (300 words) — state of the industry this quarter
     c. Competitor Updates (800 words) — organized by most significant changes
     d. Trend Analysis (400 words) — 3-5 trends with supporting evidence
     e. Emerging Players (200 words) — brief profiles of new entrants
     f. Outlook and Recommendations (200 words) — what to watch next quarter
   - Compile into output/market-landscape-Q{quarter}.md

## Output
- Individual research files in output/research/
- Trend analysis in output/trends.md
- Comparison matrix in output/comparison-matrix.md
- Final report in output/market-landscape-Q{quarter}.md

## Guardrails
- Cite sources for all claims about competitors
- Distinguish between confirmed facts and signals/speculation
- If a competitor has no notable activity, say so in one line
- Keep the full report under 2,500 words
- The executive summary must be standalone — readable without the full report

Expected Output

A complete, formatted market landscape report with individual research files that can be referenced later. The modular output means you can share just the competitor updates with the product team, the trend analysis with the strategy team, and the full report with leadership. What used to take 2-3 days of focused work now takes under an hour of supervised agent runs plus your review and editorial pass.

Playbook 2: Content and Writing

Proposals, case studies, email sequences, and content production — the writing workflows that drive business development and marketing.

Scenario: Case Study Production Pipeline

After each successful customer engagement, your team writes a case study. The process involves interviewing the client (you already have the transcript), extracting the narrative arc, writing the case study in your standard format, and producing a one-pager and social content to promote it. Currently, each case study takes a week from transcript to published content.

Skills Required

.claude/skills/
├── extract-narrative.md     — Pull the story arc from a transcript
├── write-case-study.md      — Write in the standard case study format
├── create-one-pager.md      — Condense to a single-page summary
├── social-post.md           — Write platform-specific social content
└── email-sequence.md        — Draft a 3-email nurture sequence

Agent Setup

# Agent: Case Study Pipeline

## Objective
Transform a client interview transcript into a full case study
package: long-form case study, one-pager, social posts, and
email sequence.

## Inputs
- **transcript**: Path to the interview transcript file
- **client_name**: Client company name
- **project_type**: What we did for them (e.g., "platform migration")

## Workflow
1. **Narrative Extraction:**
   - Run /extract-narrative on the transcript
   - Identify: the challenge, the solution approach, the results, key quotes
   - Determine the emotional arc (where was the client frustrated?
     where were they relieved? where were they excited?)
   - Save to output/narrative-brief.md

2. **Case Study Writing:**
   - Run /write-case-study using the narrative brief
   - Follow the standard format:
     a. Headline (benefit-focused, not feature-focused)
     b. Client snapshot (industry, size, key stats)
     c. The Challenge (2-3 paragraphs)
     d. The Solution (2-3 paragraphs, specific about what we did)
     e. The Results (metrics-heavy, with client quotes)
     f. Client Quote (pull-quote for visual emphasis)
   - Target length: 1,000-1,200 words
   - Save to output/case-study.md

3. **One-Pager Creation:**
   - Run /create-one-pager from the full case study
   - Format for a single printed page:
     a. Headline and client logo placeholder
     b. Challenge (3 bullet points)
     c. Solution (3 bullet points)
     d. Results (3 metrics with numbers)
     e. One client quote
   - Save to output/one-pager.md

4. **Social Content:**
   - Run /social-post for LinkedIn (150-200 words):
     - Lead with the result, not the process
     - Include a client quote
     - End with a CTA to read the full case study
   - Run /social-post for Twitter/X (thread of 4-5 tweets):
     - Tweet 1: The headline result
     - Tweet 2-3: The story in brief
     - Tweet 4: Client quote
     - Tweet 5: Link to full case study
   - Save to output/social/

5. **Email Sequence:**
   - Run /email-sequence to create 3 emails for prospects in a similar
     industry or with a similar challenge:
     - Email 1: Share the case study as a "thought you'd find this relevant"
     - Email 2: Follow up with the specific metric most relevant to them
     - Email 3: Soft ask for a conversation
   - Save to output/email-sequence.md

## Output
- output/narrative-brief.md — Raw narrative extraction
- output/case-study.md — Full case study
- output/one-pager.md — Condensed one-pager
- output/social/linkedin.md — LinkedIn post
- output/social/twitter-thread.md — Twitter thread
- output/email-sequence.md — 3-email nurture sequence

## Guardrails
- Never include internal details not mentioned in the transcript
- All metrics and quotes must come from the transcript — do not fabricate
- Maintain the client's voice in all quoted material
- If the transcript lacks clear metrics, flag this and use qualitative results
- The case study must be reviewed by the client before publishing —
  add a "CLIENT REVIEW REQUIRED" header

Expected Output

A complete content package from a single transcript. The case study, one-pager, social posts, and email sequence are all tonally consistent because they derive from the same narrative extraction step. Your editing pass focuses on accuracy and nuance rather than writing from scratch. A one-week production cycle drops to a single afternoon.

Playbook 3: Operations

Reporting, process documentation, data cleanup, and operational workflows that keep the business running smoothly.

Scenario: Monthly Operations Review Package

Every month, your ops team produces a review package: key metrics summary, process health check, team capacity report, and risk register update. Data comes from multiple sources — spreadsheet exports, project management tool exports, and manual notes. The assembly takes a full day each month, and the format is never quite consistent.

Skills Required

.claude/skills/
├── analyze-metrics.md       — Parse data exports and compute KPIs
├── process-health-check.md  — Assess process adherence and bottlenecks
├── capacity-report.md       — Calculate team utilization and availability
├── risk-assessment.md       — Update risk register with current status
└── write-ops-section.md     — Write a section of the ops review

Agent Setup

# Agent: Monthly Ops Review

## Objective
Produce the monthly operations review package from raw data exports
and notes. Consistent format, every month, with minimal manual assembly.

## Inputs
- **month**: Which month this review covers (e.g., "March 2026")
- **metrics_file**: Path to the metrics spreadsheet export (CSV)
- **project_export**: Path to the project management tool export (CSV/JSON)
- **notes**: Any manual observations or context from the ops lead

## Workflow
1. **Metrics Analysis:**
   - Run /analyze-metrics on {metrics_file}
   - Compute key KPIs:
     a. Revenue metrics (MRR, churn, expansion)
     b. Customer metrics (new, churned, NPS if available)
     c. Operational metrics (support tickets, response time, uptime)
   - Compare to previous month (reference output/previous-month.md if exists)
   - Flag anything > 10% deviation from the prior month
   - Save to output/metrics-analysis.md

2. **Process Health Check:**
   - Run /process-health-check on {project_export}
   - Assess:
     a. On-time delivery rate for projects
     b. Blocked or stalled items (no update in > 7 days)
     c. Process adherence (are tasks following the standard workflow?)
     d. Bottleneck identification (where are items piling up?)
   - Save to output/process-health.md

3. **Capacity Report:**
   - Run /capacity-report using {project_export} and team size from CLAUDE.md
   - Calculate:
     a. Utilization rate per team/function
     b. Available capacity for next month
     c. Overloaded individuals or teams
     d. Upcoming PTO or leaves (if noted in data)
   - Save to output/capacity.md

4. **Risk Register Update:**
   - Run /risk-assessment using outputs from steps 1-3 plus {notes}
   - Update existing risks:
     a. Has the risk increased or decreased?
     b. Are mitigations working?
   - Add new risks identified from the data
   - Remove risks that are no longer relevant
   - Save to output/risk-register.md

5. **Report Assembly:**
   - Compile all sections into the final report:
     a. Executive Summary (250 words) — key numbers and one flag
     b. Metrics Dashboard (table format)
     c. Process Health (narrative with specific issues)
     d. Team Capacity (table + narrative)
     e. Risk Register (table with status, owner, mitigation)
     f. Recommendations (3-5 action items for next month)
   - Save to output/ops-review-{month}.md

## Output
- Individual analysis files in output/
- Final report in output/ops-review-{month}.md
- Save a copy as output/previous-month.md for next month's comparison

## Guardrails
- All numbers must come from the provided data — never estimate metrics
- If data is missing or incomplete, flag the gap rather than filling it
- The executive summary must highlight the single most important issue
- Recommendations must be specific and assignable (who should do what)
- Format all currency as USD, all percentages to one decimal place

Expected Output

A complete monthly operations review with consistent formatting. The modular approach means any section can be regenerated independently if the input data is updated. The previous-month file enables automatic trend comparison. Your monthly review day becomes a two-hour review session instead of a full day of assembly.
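The >10% deviation flag in the metrics phase is mechanical enough to sketch directly. Assuming each month's analysis reduces to a flat metric-to-value mapping (the exact shape of your export will differ), the comparison against the previous month's numbers looks like:

```python
def flag_deviations(current, previous, threshold=0.10):
    """Compare this month's metrics to last month's and flag big moves.

    current, previous: dicts mapping metric name -> numeric value.
    Returns a list of (metric, pct_change) for deviations beyond threshold.
    """
    flags = []
    for name, value in current.items():
        prior = previous.get(name)
        if prior in (None, 0):
            continue  # no baseline to compare against -- skip, don't guess
        change = (value - prior) / abs(prior)
        if abs(change) > threshold:
            # one decimal place, matching the formatting guardrail
            flags.append((name, round(change * 100, 1)))
    return flags
```

For example, `flag_deviations({"mrr": 120000.0}, {"mrr": 100000.0})` flags MRR at +20.0%, while a 5% move passes silently. A metric with no prior-month value is skipped rather than estimated, in line with the "flag the gap" guardrail.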

Playbook 4: Code and Tooling

Internal tools, automations, API integrations, and development workflows. This playbook is for building functional tools and automations, not just documents.

Scenario: Internal Dashboard and Automation Suite

Your team needs internal tools — a data transformation script, an API integration that syncs data between services, and a simple dashboard to visualize key metrics. None of these justify hiring a developer, but they would save hours of manual work each week.

Skills Required

.claude/skills/
├── scaffold-project.md       — Set up a new project with boilerplate
├── build-data-transform.md   — Create a data transformation script
├── build-api-integration.md  — Build an integration between two services
├── build-dashboard.md        — Create a simple data dashboard
└── write-documentation.md    — Document the tool for team use

Agent: Data Transformation Tool

# Agent: Build Data Transform

## Objective
Create a reusable script that transforms data from one format to another,
with validation, error handling, and logging.

## Inputs
- **input_format**: Description of the source data (e.g., "CSV export from HubSpot")
- **output_format**: Description of the target format (e.g., "JSON for our internal API")
- **sample_input**: Path to a sample input file
- **transformations**: List of specific transformations needed
- **language**: "python" or "typescript". Default: "python"

## Workflow
1. **Analyze the input:**
   - Read the sample input file
   - Map the schema: field names, data types, nesting structure
   - Identify edge cases: missing fields, inconsistent formatting, special characters

2. **Design the transformation:**
   - Map source fields to target fields
   - Define transformation rules for each field:
     a. Direct mapping (field A → field X)
     b. Computed fields (field A + field B → field Y)
     c. Format conversions (date string → ISO format)
     d. Conditional mappings (if field A = "active", set field X = true)
   - Save the mapping to output/field-mapping.md

3. **Build the script:**
   - Run /scaffold-project for a {language} CLI tool
   - Implement the transformation logic:
     a. Input parsing with validation
     b. Field-by-field transformation
     c. Output serialization
     d. Error handling (skip bad rows, log errors)
     e. Progress reporting for large files
   - Save to output/transform/

4. **Add testing:**
   - Write tests using the sample input file
   - Test cases:
     a. Happy path — full valid input
     b. Missing fields — verify graceful handling
     c. Malformed data — verify error logging
     d. Empty input — verify clean exit
   - Save tests alongside the script

5. **Document:**
   - Run /write-documentation for the tool
   - Include: installation, usage, field mapping reference, troubleshooting
   - Save to output/transform/README.md

## Output
- output/field-mapping.md — Transformation mapping reference
- output/transform/ — Complete project with script, tests, and docs

## Guardrails
- The script must handle files up to 100MB without memory issues
- All errors must be logged, not silently swallowed
- The script must be runnable from the command line with no manual setup
- Include a --dry-run flag that shows the transformation without writing output
- Never hardcode file paths — use command-line arguments
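
To make the guardrails concrete, here is a minimal Python skeleton of the transform script — the column names and mapping rules are placeholders for whatever output/field-mapping.md specifies, not a definitive implementation:

```python
import argparse
import csv
import json
import logging
import sys

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")

def transform_row(row):
    """Apply the field mapping to one row. Placeholder rules -- replace
    with the mapping from field-mapping.md."""
    return {
        "name": row["Company Name"].strip(),          # direct mapping
        "active": row.get("Status", "") == "active",  # conditional mapping
    }

def main():
    parser = argparse.ArgumentParser(description="CSV -> JSON transform")
    parser.add_argument("input_path")   # paths come from the command line,
    parser.add_argument("output_path")  # never hardcoded
    parser.add_argument("--dry-run", action="store_true",
                        help="show the transformation without writing output")
    args = parser.parse_args()

    records, errors = [], 0
    with open(args.input_path, newline="") as f:
        for i, row in enumerate(csv.DictReader(f), start=2):  # row 1 = header
            try:
                records.append(transform_row(row))
            except (KeyError, ValueError) as exc:
                errors += 1
                logging.error("row %d skipped: %r", i, exc)  # log, never swallow

    logging.info("%d rows transformed, %d skipped", len(records), errors)
    if args.dry_run:
        json.dump(records[:5], sys.stdout, indent=2)  # preview only
    else:
        with open(args.output_path, "w") as f:
            json.dump(records, f, indent=2)

if __name__ == "__main__" and len(sys.argv) > 1:  # skip CLI when imported
    main()
```

For the 100MB guardrail you would stream rows out as they are transformed instead of accumulating them in memory; the overall structure stays the same.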

Agent: API Integration Builder

# Agent: Build API Integration

## Objective
Create a script that syncs data between two services via their APIs,
with scheduling, error handling, and change detection.

## Inputs
- **source_service**: The service to pull data from (e.g., "HubSpot CRM")
- **target_service**: The service to push data to (e.g., "Notion database")
- **sync_fields**: Which fields to sync
- **sync_frequency**: How often to run (e.g., "every 6 hours")
- **api_docs**: Links or paths to relevant API documentation

## Workflow
1. **API Research:**
   - Read the API documentation for both services
   - Identify the relevant endpoints:
     a. Source: GET/list endpoint, pagination method, auth method
     b. Target: POST/update endpoint, required fields, auth method
   - Document rate limits for both APIs
   - Save to output/api-notes.md

2. **Build the Integration:**
   - Run /scaffold-project for a TypeScript project
   - Implement:
     a. Auth handling for both services (env vars, not hardcoded)
     b. Data fetching from source with pagination
     c. Change detection (only sync what changed)
     d. Data transformation (source format → target format)
     e. Upsert to target service
     f. Error handling with retry logic (exponential backoff)
     g. Logging with timestamps

3. **Add Scheduling:**
   - Create a configuration for running at the specified frequency
   - Options: cron expression, or a simple setInterval wrapper
   - Include a manual trigger mode for testing

4. **Add Monitoring:**
   - Log each sync run: timestamp, records processed, errors
   - Save sync history to a local file
   - On failure, log the error with enough context to debug

5. **Document:**
   - Run /write-documentation
   - Include: setup (env vars needed), running, monitoring, troubleshooting
   - Document the field mapping and any data transformations

## Output
- output/integration/ — Complete project
- output/api-notes.md — API research notes
- output/integration/README.md — Setup and usage docs

## Guardrails
- API keys must never be hardcoded — use environment variables only
- Respect rate limits — implement throttling
- Never delete data in the target service — only create and update
- Include a --dry-run mode that logs what would be synced without making changes
- All API calls must have timeouts (30 seconds default)
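
The retry guardrail is worth seeing in isolation. The agent scaffolds a TypeScript project, but the backoff pattern is language-agnostic; a Python sketch of the same logic:

```python
import logging
import random
import time

def with_retry(call, max_attempts=5, base_delay=1.0):
    """Run call(); on exception, retry with exponential backoff plus jitter.

    `call` is any zero-argument function that raises on failure -- e.g. a
    lambda wrapping an HTTP request (put the 30-second timeout on that request).
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return call()
        except Exception as exc:
            if attempt == max_attempts:
                raise  # out of retries: surface the error with full context
            delay = base_delay * 2 ** (attempt - 1)
            delay += random.uniform(0, delay)  # jitter avoids thundering herd
            logging.warning("attempt %d/%d failed (%s); retrying in %.2fs",
                            attempt, max_attempts, exc, delay)
            time.sleep(delay)
```

Wrapping every source fetch and target upsert in `with_retry` keeps transient API failures from killing a sync run, while a persistent failure still raises and gets logged by the monitoring step.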

Expected Output

Functional, tested, documented tools that your team can run immediately. The key advantage is not just that the code gets written faster — it is that the code includes error handling, testing, and documentation that often gets skipped when building internal tools under time pressure. The agent enforces good practices because they are baked into the workflow.
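
One workflow step worth pausing on is change detection (step 2c of the integration build), since it is what keeps each sync run cheap. A minimal approach, assuming every source record carries a stable id field (an assumption — adapt to your source's schema): hash only the synced fields and compare against the hashes saved from the previous run.

```python
import hashlib
import json

def record_fingerprint(record, sync_fields):
    """Stable hash of just the synced fields, so unrelated edits don't trigger."""
    subset = {field: record.get(field) for field in sync_fields}
    payload = json.dumps(subset, sort_keys=True, default=str)
    return hashlib.sha256(payload.encode()).hexdigest()

def detect_changes(records, previous_hashes, sync_fields):
    """Return (changed_records, new_hashes). previous_hashes maps id -> hash."""
    changed, new_hashes = [], {}
    for record in records:
        fp = record_fingerprint(record, sync_fields)
        new_hashes[record["id"]] = fp
        if previous_hashes.get(record["id"]) != fp:
            changed.append(record)  # new or modified since the last sync
    return changed, new_hashes
```

Persist `new_hashes` alongside the sync history file so the next run only upserts records that actually changed — which also helps you stay under both services' rate limits.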

Choosing and Customizing a Playbook

These playbooks are starting points. Here is how to adapt them to your specific situation:

1. Pick the closest match: Which playbook is closest to your daily work? You do not need an exact match. If you do a lot of client reporting but it is not exactly like the ops review, start with the ops playbook and modify the sections.

2. Audit your current process: Before adapting the playbook, document your actual current process (using the framework from the "Workflows to Skills" lesson). Compare it to the playbook. Note what is the same, what is different, and what is missing.

3. Start with the skills, not the agent: Build and test each individual skill before wiring them into an agent. A common mistake is trying to build the full agent immediately. Get each skill working reliably first. The agent layer should be the last thing you add.

4. Run the agent supervised: For the first several runs, watch the agent work and review intermediate outputs. Do not trust it to run fully unsupervised until you have seen consistent results across at least 5 different inputs. Build trust incrementally.

5. Iterate based on gaps: After each run, note what the agent missed, what it did unnecessarily, and where the output fell short. Feed these observations back into the skill files and agent definition. The first version is never the final version.

Common Patterns Across Playbooks

Looking across all four playbooks, several patterns appear consistently. These are the building blocks of reliable agent workflows, regardless of domain:

Phased Execution

Every playbook separates gathering from analysis from production. The research phase produces raw material. The analysis phase extracts insights. The production phase creates the deliverable. This separation makes each phase independently testable and debuggable.

Modular Output

Every playbook saves intermediate outputs to separate files. This serves three purposes: you can review and correct intermediate work before the final output is produced, you can reuse components independently, and you have an audit trail of what the agent did.

Explicit Guardrails

Every playbook includes specific constraints about what the agent should NOT do. This is not being cautious — it is being precise. Agents that know their boundaries produce better work within those boundaries. Unconstrained agents tend to over-produce, hallucinate, or drift from the objective.

Human Checkpoints

No playbook runs fully autonomously end-to-end without review. Each one is designed with natural review points — after the research phase, after the analysis, before the final output is used. Effective agents augment your judgment. They do not replace it.

The 80/20 Rule for Agents
The goal is not to automate 100% of a workflow. It is to automate the 80% that is repetitive so you can focus your time on the 20% that requires judgment, creativity, or relationship context. If an agent handles the research, analysis, and first draft, you add the strategic insight, personal touch, and quality assurance. That is the right division of labor.

Where to Go From Here

You have now completed the "From Process to Agent" module. Here is a practical path forward:

  1. This week: Write your first skill for a task you do every week. Follow the framework. Test it 3 times.
  2. Next week: Write 2 more skills. Start noticing which skills could chain together.
  3. Week 3: Build your first agent connecting 2-3 skills. Run it supervised 5 times.
  4. Month 2: Adapt one of the playbooks above to your core workflow. Share the skills with your team.
  5. Ongoing: Every time you catch yourself typing a long, detailed prompt, ask: "Should this be a skill?"

The supplementary materials include a quick reference card with all commands and patterns, and a guide to cowork mode for delegation-style workflows. These are optional but valuable as your practice matures.

Module Complete
You have finished Module 2: From Process to Agent. You understand skills, the workflow-to-skill framework, agent architecture, and have seen real-world playbooks across four domains. Check out the Supplementary section for the cowork guide and quick reference card.

Host Your Playbooks on Keyset
These playbooks become significantly more powerful when they run on their own. With Keyset, you can schedule agents to run automatically — a weekly market landscape report every Monday morning, a daily data cleanup at midnight, a case study pipeline that triggers when a new transcript lands. Add human-in-the-loop routing so an agent pauses for your approval before sending outreach emails or publishing content. And when your agents are reliable enough, offer them as a service to clients or your team. That is the full arc: from manual process, to local agent, to hosted production workflow.