Turning Workflows into Skills
Every manual process you repeat is a skill waiting to be written. Learn the systematic framework for identifying, structuring, and codifying your workflows into reusable skill files.
The Framework: Document, Structure, Codify, Test
Converting a workflow into a skill is not about writing a prompt. It is about decomposing a process into its essential parts. We use a four-step framework that works for any domain and any complexity level:
Document — Write down what you actually do
Next time you perform the task, narrate each step to yourself and write it down. Do not try to optimize yet. Include everything: where you get your inputs, what tools you use, what decisions you make, and what the final output looks like. Be honest about edge cases and judgment calls.
This step is about capturing reality, not the idealized version. If you sometimes skip a step when you are in a hurry, note that too. The goal is a complete picture of the process as it actually happens.
Structure — Identify inputs, steps, and outputs
Take your raw documentation and organize it into three buckets:
- Inputs: What information do you need before starting? What varies each time?
- Steps: What is the sequence of actions? Which steps are always the same vs. conditional?
- Outputs: What does the finished result look like? What format does it need to be in?
Also identify the constraints — word limits, tone requirements, source restrictions, quality bars. These are the guardrails that keep the skill producing consistent results.
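To make this concrete, here is how the buckets might look for a hypothetical workflow of turning merged pull requests into release notes (the task and details are illustrative):
- Inputs: the list of merged pull requests, the release version number
- Steps: group changes by feature area, summarize each group in plain language, flag breaking changes
- Outputs: a short changelog with "New", "Improved", and "Fixed" sections
- Constraints: no internal ticket numbers, one line per change, under 300 words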
Codify — Write the skill file
Translate your structured notes into the skill file format. Map your inputs to parameters, your steps to instructions, your output to a template, and your constraints to rules. Use the format covered in the previous lesson.
The key here is being explicit. Instructions like "make it good" do not work. Instead, specify what "good" means: "include at least 3 data points," "use active voice," "stay under 200 words."
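As a sketch, the hypothetical changelog workflow from the Structure step might codify like this. The inputs became the Inputs section, the steps became numbered Instructions, the output became the Output Format template, and the constraints became Constraints:
# Skill: Release Notes
## Description
Turn a list of merged pull requests into a user-facing changelog.
## Inputs
- **pull_requests** (required): The merged PRs to summarize
- **version** (required): The release version number
## Instructions
1. Group the changes by feature area
2. Summarize each group in plain, user-facing language
3. Flag any breaking changes prominently
## Output Format
# Release Notes {version}
## New
## Improved
## Fixed
## Constraints
- One line per change, no internal ticket numbers
- Total length under 300 words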
Test — Run it, review, iterate
Run the skill with 3-5 different inputs. Compare the output to what you would have produced manually. Look for:
- Missing information you expected to see
- Sections that are too verbose or too thin
- Format inconsistencies between runs
- Edge cases that produce poor results
Edit the skill file and re-run. Most skills need 2-3 rounds of refinement before they are reliably good. This is normal and expected.
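A refinement pass is usually a small edit to the Constraints or Instructions, not a rewrite. For example, if test runs come back too long and inconsistently structured, you might replace a vague constraint with measurable ones (illustrative edit):
Before:
- Keep it concise
After:
- Each section: 3 bullets max
- Total length under 300 words
- Use the same heading order on every run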
Identifying Good Skill Candidates
Not every task makes a good skill. The best candidates share these characteristics:
- Repetitive: You do it at least weekly, often with slight variations.
- Structured: The output follows a consistent format or template.
- Text-heavy: The work involves reading, writing, analyzing, or summarizing text.
- Time-consuming but not creative: It takes 30+ minutes but doesn't require genuine creative breakthroughs.
- Teachable: You could explain the process to a smart colleague in under 10 minutes.
Tasks that involve highly subjective judgment, visual design, or rapidly changing domain expertise are harder to codify. Start with the straightforward ones and work your way up.
Example A: Weekly Competitive Research
Before: The Manual Process
Every Friday, Sarah on the strategy team spends 90 minutes researching three competitors. She opens each competitor's blog, checks their social media, searches Google News, and looks at their job postings for signals. She compiles her notes into a Google Doc, formats it with headers for each competitor, and shares it in the team Slack. The format drifts slightly each week. Sometimes she forgets to check job postings. The research is solid but inconsistent.
Documenting Sarah's Process
When Sarah writes down what she actually does, the list looks like this:
- Open each competitor's blog, look for posts from this week
- Check their LinkedIn and Twitter for announcements
- Search Google News for "[competitor name]" filtered to past 7 days
- Look at their careers page for new job postings (especially engineering and sales roles)
- Write 3-5 bullet points per competitor summarizing what she found
- Flag anything that seems like a direct response to something her company did
- Add a "So What" section with her recommendations
- Paste into Slack with @mentions for relevant team members
After: The Skill File
# Skill: Weekly Competitor Roundup
## Description
Produce a weekly competitive intelligence summary for up to 3 competitors.
## Inputs
- **competitors** (required): Comma-separated list of competitor names (max 3)
- **our_company** (optional): Our company name for threat analysis. Default: from CLAUDE.md
- **week_of** (optional): The week to research. Default: "current week"
## Instructions
For each competitor in {competitors}:
1. Search for blog posts published in the past 7 days
2. Check LinkedIn and X/Twitter for official announcements
3. Search news coverage from the past 7 days
4. Review careers/jobs page for new postings, especially:
- Engineering roles (signals product investment)
- Sales/marketing roles (signals go-to-market push)
- Executive hires (signals strategic shift)
5. Compile 3-5 bullet points of the most significant findings
6. Assess each finding:
- Is it a direct response to something {our_company} did?
- Does it represent a new competitive threat?
- Is it a signal of strategic direction change?
7. After covering all competitors, write a "So What" section:
- Top 1-2 things the team should pay attention to
- Any recommended actions
- Relevant team members to loop in (by role, not name)
## Output Format
# Competitive Roundup — Week of {week_of}
## {competitor_1}
- Finding 1 (🔴 threat / 🟡 monitor / 🟢 neutral)
- Finding 2 (threat level)
- ...
## {competitor_2}
- ...
## {competitor_3}
- ...
## So What
- Key takeaway and recommendation
- Who should care: [role]
## Constraints
- Keep each competitor section to 5 bullets max
- Total length under 500 words
- Use Slack-compatible formatting
- If no notable activity found, say "Quiet week" rather than padding
- Never speculate about internal competitor strategy without evidence

Sarah's 90-minute Friday ritual is now a single command that produces consistent, well-formatted output in under a minute. She reviews the output, adds any personal commentary, and posts it. Total time: 10 minutes.
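Assuming the skill file is saved as a slash command named /competitor-roundup (the command name and competitor names here are hypothetical), Friday now starts with one line:
/competitor-roundup competitors="Acme, Globex, Initech"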
Example B: Client Email Drafting
Before: The Manual Process
Marcus sends 15-20 outreach emails per week to prospective clients. Each email needs to reference the prospect's company, mention a relevant pain point, connect it to his company's solution, and include a soft CTA. He has a mental template but the quality varies — especially on Friday afternoons when he's tired. His best emails get 35% reply rates. His worst get 5%.
Documenting Marcus's Process
Marcus's high-performing emails follow this pattern:
- Research the prospect's company (what they do, recent news, company size)
- Identify a specific pain point relevant to their role and industry
- Write a subject line that references something specific (not generic)
- Open with a personalized observation (not "I hope this finds you well")
- Connect the pain point to his company's solution in 1-2 sentences
- Include one piece of social proof (case study, metric, client name)
- Close with a low-friction CTA ("Would a 15-minute call make sense?")
- Keep the whole email under 150 words
After: The Skill File
# Skill: Outreach Email
## Description
Draft a personalized outreach email to a prospective client.
## Inputs
- **prospect_name** (required): Full name of the prospect
- **prospect_company** (required): Their company name
- **prospect_role** (optional): Their job title
- **pain_point** (optional): Known pain point. If not provided, research and infer one.
- **tone** (optional): One of "formal", "conversational", "bold". Default: "conversational"
## Instructions
1. If no pain_point is provided:
- Research {prospect_company} briefly
- Identify their industry and likely challenges
- Select the most relevant pain point our product addresses
2. Write the email following this structure:
a. **Subject line:** Reference something specific to {prospect_company}
- Good: "Quick thought on {prospect_company}'s expansion"
- Bad: "Great opportunity for you"
b. **Opening line:** A personalized observation. Options:
- Reference a recent company achievement or news
- Comment on their role and a common challenge
- Mention a mutual connection or shared context
Never use: "I hope this finds you well", "I'm reaching out because"
c. **Pain → Solution bridge:** 1-2 sentences connecting their pain to our solution
- Use our product details from CLAUDE.md
- Be specific about the benefit, not vague about features
d. **Social proof:** One brief reference (client name, metric, or case study)
- Pull from CLAUDE.md if available
e. **CTA:** Low-friction ask
- "Would a 15-minute call next week make sense?"
- "Happy to share how [similar company] handled this"
- Never: "Let me know if you're interested"
## Output Format
**Subject:** {subject_line}
Hi {prospect_name},
{email_body}
Best,
{sender_name from CLAUDE.md}
---
**Why this works:** Brief explanation of the personalization choices made.
## Constraints
- Total email body: 100-150 words (strictly enforced)
- Never use buzzwords: "synergy", "leverage", "ecosystem", "paradigm"
- Never mention competitors by name
- Tone must match the selected style
- If you cannot find real information about the prospect, say so — do not fabricate

Marcus now runs /outreach-email prospect_name="Jane Smith" prospect_company="Acme Corp" and gets a draft that matches his best work — every time, even on Friday afternoons. He reviews and personalizes for 2 minutes instead of drafting from scratch for 15.
Example C: Meeting Notes to Action Items
Before: The Manual Process
After every client call, Priya spends 20 minutes turning her raw meeting notes into a structured summary with action items. She formats them into a Notion page with sections for discussion points, decisions made, and next steps. She assigns owners and deadlines to each action item, then shares the page with attendees. When she's busy, this step gets skipped and action items fall through the cracks.
Documenting Priya's Process
- Review raw notes (messy, abbreviated, sometimes just keywords)
- Identify the main topics discussed
- For each topic, note what was decided (if anything)
- Extract action items: what needs to happen, who owns it, by when
- Add context for anyone who wasn't on the call
- Flag any blockers or risks mentioned
- Format into the standard template and share
After: The Skill File
# Skill: Meeting Debrief
## Description
Turn raw meeting notes into a structured summary with action items.
## Inputs
- **notes** (required): Raw meeting notes (paste directly or reference a file)
- **meeting_type** (optional): "client-call", "internal", "standup". Default: "client-call"
- **attendees** (optional): List of attendees for assigning action items
## Instructions
1. Read the raw notes carefully. They may be messy, abbreviated, or incomplete.
2. Identify all distinct topics or agenda items discussed.
3. For each topic:
- Summarize what was discussed in 1-2 clear sentences
- Note any decisions made (explicitly mark as "Decision:")
- Note any open questions (mark as "Open:")
4. Extract every action item mentioned or implied:
- What needs to be done (specific and actionable)
- Who owns it (from {attendees} list, or "TBD" if unclear)
- Deadline (if mentioned) or suggest "by next meeting"
5. Identify any risks or blockers mentioned during the meeting
6. If meeting_type is "client-call":
- Add a "Client Sentiment" note (positive, neutral, concerned, frustrated)
- Flag any commitments made to the client
7. If meeting_type is "standup":
- Keep it brief — just blockers and action items
- Skip the detailed summary
## Output Format
# Meeting Notes — {date}
**Type:** {meeting_type} | **Attendees:** {attendees}
## Discussion Summary
### {Topic 1}
Summary of discussion. **Decision:** What was decided. **Open:** Unresolved question.
### {Topic 2}
...
## Action Items
| # | Action | Owner | Deadline | Priority |
|---|--------|-------|----------|----------|
| 1 | Specific task | Person | Date | High/Med/Low |
| 2 | ... | ... | ... | ... |
## Risks & Blockers
- Risk or blocker with brief context
## Client Sentiment (client calls only)
Overall sentiment and brief explanation.
## Constraints
- Action items must be specific enough to act on without re-reading the notes
- If an action item owner is ambiguous, mark as TBD rather than guessing
- Do not add topics or action items not mentioned in the original notes
- Keep the total summary under 400 words

Priya now pastes her raw notes into Claude Code and runs /meeting-debrief. Two minutes later, she has a clean, formatted summary. No action items fall through the cracks. The consistent format means her team always knows where to look for their tasks.
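Her invocation might look like this (the attendee names and flag values are illustrative; notes can be pasted inline or referenced as a file, per the Inputs section):
/meeting-debrief meeting_type="client-call" attendees="Priya, Jordan, Lee" notes="(pasted raw notes)"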
Patterns for Better Skills
After converting several workflows, you will start to notice patterns that make skills more reliable. Here are the most important ones:
Pattern: Explicit Output Templates
The single biggest improvement you can make to any skill is defining exactly what the output looks like. Compare these two approaches:
# Vague (inconsistent results):
"Summarize the findings in a clear format"

# Explicit (consistent results):
"Use this exact template:
## Summary
**Key Finding:** One sentence
**Supporting Data:** 2-3 bullet points
**Recommendation:** One actionable sentence"
Pattern: Negative Instructions
Telling Claude what NOT to do is often as important as telling it what to do. If you notice the output including something you do not want, add a constraint:
## Constraints
- Do not include a greeting or sign-off (this will be added manually)
- Never use placeholder data — if information is unavailable, say so
- Do not repeat the input back as confirmation
- Skip the "let me know if you need changes" closing
Pattern: Context References
Instead of duplicating company information, product details, or team names in every skill, reference your project's CLAUDE.md file:
## Instructions
1. Use our product name and description from CLAUDE.md
2. Reference the competitive landscape section for positioning
3. Use the brand voice guidelines when writing copy
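This pattern assumes those sections actually exist in your CLAUDE.md. A minimal sketch of such a file (the section names and content are illustrative; use whatever structure your project already has):
# CLAUDE.md
## Product
Acme Analytics: a reporting dashboard for mid-market finance teams.
## Competitive Landscape
We win on setup speed; we lose on enterprise integrations.
## Brand Voice
Plain language, active voice, no buzzwords.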
Pattern: Graceful Failure
Good skills handle edge cases explicitly rather than producing bad output:
## Constraints
- If the URL is inaccessible, report the error immediately
- If fewer than 3 data points are found, flag the research as "incomplete"
- If the competitor has no recent activity, say "No activity found" rather than speculating about why
Skill Audit Checklist
Before considering a skill "done," run through this checklist:
- Does the skill have a clear, one-line description?
- Are all required inputs genuinely required? Could any be optional with a default?
- Are the instructions numbered and specific (not vague)?
- Is there an explicit output template?
- Are there constraints covering common failure modes?
- Have you tested with at least 3 different inputs?
- Is the output consistent across runs?
- Is the skill under 60 lines? (If not, consider splitting it.)