# Custom Sub-agents
Build specialized agents that handle specific tasks—brand voice checking, code review, data validation, and more. Sub-agents let you codify expertise into reusable, automated workflows.
## What Are Sub-agents?
A sub-agent is a specialized version of Claude Code that you configure for a specific purpose. While the main Claude Code agent is a generalist—it can do anything from writing code to analyzing data—sub-agents are focused. They have specific instructions, specific tools they should use, and specific criteria for success.
Think of it like the difference between a general practitioner and a specialist. Your main Claude Code session is the GP who handles everything. Sub-agents are specialists you call in for specific jobs: a "brand voice checker" that reviews all content, a "security auditor" that checks for vulnerabilities, or a "test writer" that generates comprehensive test suites.
## Why Use Sub-agents?
- **Consistency**: A sub-agent follows the same checklist every time, catching the issues a human reviewer might skip on a busy day.
- **Expertise encoding**: Capture your team's domain knowledge in a reusable format that anyone can invoke.
- **Delegation**: Let Claude handle routine checks while you focus on creative and strategic work.
- **Parallelism**: Run multiple specialized sub-agents simultaneously, each handling a different concern.
## Creating a Sub-agent via CLAUDE.md
The simplest way to create a sub-agent is by defining it in your CLAUDE.md file. You describe the agent's role, its instructions, and how to invoke it. Then, during any Claude Code session, you can activate the sub-agent by referencing it in your prompt.
```markdown
# CLAUDE.md

## Sub-agents

### Brand Voice Checker

When I ask you to "check brand voice" or "review tone," act as our brand
voice specialist. Review the specified content against these rules:

- Tone: professional but warm, never corporate or stiff
- Audience: startup founders, 28-45 years old
- Avoid: jargon, passive voice, sentences over 25 words
- Required: at least one concrete example per section
- Flag any instance of banned words: leverage, synergy, utilize, paradigm

Output a structured report with: pass/fail per rule, specific violations,
and suggested rewrites for any failures.

### Security Auditor

When I ask you to "security audit" or "check for vulnerabilities," act as
a security specialist. Check the specified code for:

- SQL injection vulnerabilities
- XSS attack vectors
- Exposed secrets or credentials
- Missing input validation
- Insecure authentication patterns

Rate each finding as Critical, High, Medium, or Low severity. Include a
fix recommendation for each finding.
```
Now you can invoke these sub-agents naturally in conversation:
> Check brand voice on the new landing page copy in content/landing.mdx

> Security audit the authentication module in src/lib/auth.ts
## Example: Brand Voice Checker Agent
Let's build a complete brand voice checker sub-agent step by step. This is a real-world example that marketing teams use to maintain consistent messaging across all content.
### The Sub-agent Definition
Add this to your project's CLAUDE.md:
````markdown
### Brand Voice Checker

Trigger: "check brand voice" or "voice review"

Role: You are our brand voice quality assurance specialist. Your job is
to ensure all customer-facing content matches our brand guidelines.

Brand Guidelines:

1. **Tone**: Confident but not arrogant. Helpful but not patronizing.
   We sound like a smart friend who happens to be an expert.
2. **Vocabulary**:
   - Use "you" more than "we"
   - Use active voice (aim for >90% of sentences)
   - Keep sentences under 20 words on average
   - Keep paragraphs under 4 sentences
3. **Banned phrases**: "industry-leading," "best-in-class," "leverage,"
   "utilize," "in order to," "at the end of the day"
4. **Required elements**:
   - Every H2 section needs a concrete example or statistic
   - CTAs should use action verbs: "Start," "Build," "Try"
   - Never end a section without a clear next step

Process:

1. Read the content file(s) specified
2. Check each section against all guidelines above
3. Output a report in this format:

```
## Voice Review: [filename]

### Overall Score: [A/B/C/D/F]

### Findings:
- [PASS/FAIL] Tone: [details]
- [PASS/FAIL] Active voice: [X%] active sentences
- [PASS/FAIL] Sentence length: [avg X words]
- [PASS/FAIL] Banned phrases: [list any found]
- [PASS/FAIL] Examples per section: [details]
- [PASS/FAIL] CTAs: [details]

### Specific Issues:
1. Line X: "[quoted text]" → Suggestion: "[rewrite]"
2. ...

### Rewritten Sections (if score < B):
[Provide rewritten versions of failing sections]
```
````
### Using the Brand Voice Checker
> Check brand voice on content/blog/announcing-v2.mdx

```
## Voice Review: announcing-v2.mdx

### Overall Score: B

### Findings:
- [PASS] Tone: Confident and friendly, reads well
- [PASS] Active voice: 88% active sentences
- [FAIL] Sentence length: avg 24 words (target: under 20)
- [FAIL] Banned phrases: Found "in order to" on line 34, "utilize" on line 67
- [PASS] Examples per section: Each section has at least one example
- [FAIL] CTAs: Section 3 ends without a clear next step

### Specific Issues:
1. Line 34: "In order to get started..." → "To get started..."
2. Line 67: "...utilize our API..." → "...use our API..."
3. Line 45: 38-word sentence → Split into two sentences
...
```
## Example: Code Review Agent
A code review sub-agent can enforce your team's standards automatically, catching issues before they reach a human reviewer.
```markdown
### Code Review Agent

Trigger: "review code" or "code review"

Role: You are a senior engineer conducting a thorough code review. You
check for correctness, maintainability, performance, and adherence to
our project conventions.

Review Checklist:

1. **Correctness**: Does the code do what it claims? Edge cases handled?
2. **Types**: Are TypeScript types specific (no `any`)? Proper null handling?
3. **Error handling**: Are errors caught and handled meaningfully?
4. **Naming**: Are variable/function names descriptive and consistent?
5. **Performance**: Any N+1 queries? Unnecessary re-renders? Missing memoization?
6. **Security**: Input validation? SQL injection? XSS?
7. **Testing**: Are changes covered by tests? Any untested paths?
8. **Conventions**: Does it follow our CLAUDE.md coding conventions?

Output Format:

- Start with a one-sentence summary of the change
- List each finding with severity: 🔴 Must Fix, 🟡 Should Fix, 🟢 Suggestion
- End with an overall assessment: Approve, Request Changes, or Discuss

Process:

1. Read the changed files (use git diff if reviewing a branch)
2. Understand the intent of the changes
3. Apply each checklist item
4. Write the review
```
Invoke it during your workflow:
> Review code — check the changes on this branch vs main

> Code review the new payment module in src/lib/payments/
## Best Practices for Sub-agent Design
### Be Specific About the Role
Vague instructions produce vague results. Compare these two approaches:
```markdown
# Bad: Too vague

### Reviewer

Review the code and give feedback.

# Good: Specific and actionable

### Accessibility Reviewer

Review UI components for WCAG 2.1 AA compliance. Check for:

- Proper aria labels on interactive elements
- Color contrast ratios (minimum 4.5:1 for normal text)
- Keyboard navigation support (tab order, focus indicators)
- Screen reader compatibility (semantic HTML, alt text)
- Focus trap handling in modals and dialogs

Rate each component: Compliant, Needs Work, or Non-compliant.
```
### Define the Output Format
Always specify what the output should look like. This ensures consistent, predictable results you can rely on.
- Provide a template or example of the expected output
- Specify the format (Markdown, JSON, table, etc.)
- Define scoring or rating criteria if applicable
- State what should be included and what should be left out
### Include the Process
Tell the sub-agent how to work, not just what to check. A defined process ensures thoroughness and makes the sub-agent more reliable.
```
Process:
1. First, read the target files to understand the full context
2. Then, read any related files (imports, types, tests)
3. Apply each checklist item systematically
4. Verify findings by re-reading the relevant code
5. Prioritize findings by severity
6. Write the report in the specified format
```
### Set Boundaries
Be clear about what the sub-agent should and should not do. Should it only report issues, or should it also fix them? Should it modify files, or only read them?
```
Boundaries:
- READ ONLY: Do not modify any files. Only report findings.
- Scope: Only review files in the specified path, not the entire project.
- Do not suggest architectural changes — focus on the code as written.
- If you find a critical security issue, flag it immediately before
  continuing the rest of the review.
```
## Walkthrough: Creating Your First Sub-agent
Let's create a "Documentation Checker" sub-agent from scratch. This agent verifies that every exported function in your codebase has proper JSDoc documentation.
### Step 1: Define the purpose
Start by clearly stating what problem this sub-agent solves. In our case: "We want to ensure all exported functions have JSDoc comments with a description, parameter docs, and return type docs."
### Step 2: Write the checklist
List every specific thing the sub-agent should check:
- Every exported function has a JSDoc comment
- The JSDoc includes a description (first line)
- Each parameter has a @param tag with type and description
- The return value has a @returns tag
- Examples are included for complex functions
### Step 3: Define the output format
Decide what the report should look like. A clear, structured format makes results actionable:
```
## Documentation Report: [path]

Coverage: X/Y functions documented (Z%)

### Missing Documentation:
- src/utils/format.ts: formatCurrency() — no JSDoc
- src/utils/format.ts: formatDate() — missing @param tags

### Incomplete Documentation:
- src/lib/auth.ts: createSession() — missing @returns tag
- src/lib/api.ts: fetchUser() — missing @param for "options"

### Well Documented (examples to follow):
- src/lib/payments.ts: processPayment() ✓
```
### Step 4: Add it to CLAUDE.md
Combine the purpose, checklist, process, and output format into a sub-agent definition in your CLAUDE.md:
```markdown
### Documentation Checker

Trigger: "check docs" or "documentation audit"

Role: You are a documentation quality specialist. You verify that all
exported functions have complete JSDoc documentation.

Checklist:
- Every exported function/class/type has a JSDoc comment
- JSDoc includes a one-line description
- All parameters have @param with type and description
- Return values have @returns with description
- Complex functions (>10 lines) include an @example

Process:
1. Find all TypeScript/JavaScript files in the specified path
2. Identify all exported functions, classes, and types
3. Check each export against the checklist
4. Generate the coverage report

Output: Markdown report with coverage percentage, missing docs,
incomplete docs, and examples of well-documented functions.

Boundaries: Read-only. Do not modify files. Only report findings.
```
### Step 5: Test it
Run the sub-agent and review the output. Refine the instructions based on the results. If it misses something, add it to the checklist. If the output is unclear, adjust the format.
> Check docs on the src/lib/ directory
### Step 6: Iterate and improve
After a few uses, you will notice patterns. Maybe the sub-agent should also check for outdated docs (params that no longer exist), or maybe it should ignore test files. Update the definition in CLAUDE.md as you learn.
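For instance, if those two refinements prove useful, the checklist and boundaries in the definition might grow to something like this (hypothetical additions — adapt the patterns and file globs to your project):

```markdown
Checklist:
- Every exported function/class/type has a JSDoc comment
- JSDoc includes a one-line description
- All parameters have @param with type and description
- No stale @param tags for parameters that no longer exist in the signature
- Return values have @returns with description
- Complex functions (>10 lines) include an @example

Boundaries: Read-only. Skip test files (*.test.ts, *.spec.ts).
```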
## Advanced Patterns
### Chaining Sub-agents
You can run multiple sub-agents in sequence, where the output of one feeds into the next. For example, after writing new content, run the brand voice checker, then the SEO auditor, then the accessibility reviewer.
> Write a new blog post about our v2 release in content/blog/v2.mdx. Then check brand voice on it. Then check SEO. Fix any issues from both reviews.
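The same chain can also run non-interactively, one step per invocation. A minimal sketch, assuming the `claude` CLI is on your PATH and the sub-agents above are defined in your project's CLAUDE.md — the file on disk carries state between steps:

```shell
#!/usr/bin/env bash
# Sketch: run two sub-agents in sequence over the same file.
# Assumes the `claude` CLI and the sub-agent definitions in CLAUDE.md.
set -euo pipefail

post="content/blog/v2.mdx"

# Step 1: brand voice review, fixing issues it finds
claude -p "Check brand voice on $post and fix any issues from the review"

# Step 2: SEO review of the (now revised) file
claude -p "Check SEO on $post and fix any issues from the review"
```

Because each `claude -p` call is a separate session, the prompt must name the file explicitly; the second step sees whatever the first step wrote.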
### Sub-agents in CI/CD
Because Claude Code can run non-interactively with the `-p` flag, you can integrate sub-agents into your CI/CD pipeline. For example, run the code review agent on every pull request:

```shell
# In your CI pipeline
git diff main...HEAD | claude -p "Run code review on these changes using the Code Review Agent defined in CLAUDE.md"
```
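To actually gate the pull request on the result, you can capture the review and fail the build on blocking findings. A sketch, assuming the `claude` CLI is available in CI and the review uses the "Must Fix" severity label from the agent definition above:

```shell
#!/usr/bin/env bash
# Sketch of a CI gate: exit nonzero when the review contains any
# "Must Fix" finding, so the pipeline blocks the merge.
# Assumes the `claude` CLI and the Code Review Agent in CLAUDE.md.
set -euo pipefail

review=$(git diff main...HEAD | claude -p \
  "Run code review on these changes using the Code Review Agent defined in CLAUDE.md")

# Surface the full review in the CI log for the human reviewer
echo "$review"

if grep -q "Must Fix" <<<"$review"; then
  echo "Code review found blocking issues." >&2
  exit 1
fi
```

Matching on a label string is brittle; if you rely on this, consider instructing the agent to end its report with a machine-readable verdict line and matching on that instead.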
### Team-Shared Sub-agents
Since sub-agent definitions live in CLAUDE.md and CLAUDE.md is committed to your repository, your entire team benefits from every sub-agent you create. A senior engineer can encode their review expertise into a sub-agent that junior engineers invoke. A marketing lead can define the brand voice agent that the whole content team uses.