Let's build a Claude Code skill for documentation generation called /doc-it that can generate a complete documentation layer for your project.
When you think documentation, you might picture READMEs, API references, or maybe a CONTRIBUTING guide on a larger project. But with many teams now using AI agents, there's another layer to consider: projects that use AI coding tools have been accumulating a second documentation surface. Files like CLAUDE.md and AGENTS.md explain your codebase to your agents, covering where the docs live, which commands work, and what "good output" means in a particular repo. When those docs go stale, the failure is silent. Agents don't alert you to a dead directory reference; they either trust the path or work around the issues they encounter, and either way it costs you time and tokens.
At Dosu, we focus on building knowledge infrastructure. So we built a Claude Code skill called /doc-it that bootstraps that whole layer in one command, letting your repo explain itself to both people and agents. We'll show you how we ran it against a real project and map where single-repo skills hit their ceiling.
We tested this skill against Overdue, the same retro pixel-art game we used in our companion post on documentation drift. Overdue is a FastAPI project with six API modules, which makes it a useful test case for a documentation generation skill. You can browse the skill file to see the final result.
Note: This walkthrough is for educational purposes. It's a great way to learn how Claude Code skills, project context files, and documentation generation fit together, but it is not a production-grade solution. The skill needs ongoing tuning, generated output requires human review, and the approach doesn't scale beyond a handful of repos.
## What you'll build
- Skill file that tells Claude how to audit and generate docs
- Context file that tells Claude what your project considers good documentation
- Run the skill against your codebase and review what it finds
- Review and merge the generated pages into your project
When you run /doc-it all, Claude inventories your existing documentation, audits files like CLAUDE.md and AGENTS.md for stale references, scaffolds missing pages (CONTRIBUTING.md, TOOLS.md), generates API docs from your route handlers, and prints a summary of what it created, updated, and recommends. If you target a specific file like /doc-it src/api/volumes.py, it scopes down to just that module.
Running the /doc-it skill against Overdue generated five documentation files for our project. The skill caught a stale path in AGENTS.md that had been misdirecting the AI tools working in the repo, and the API docs quoted error strings straight from our source code. In short, the skill surfaced knowledge that had been trapped in source code and pulled it into files we could review.
## Prerequisites
We recommend you budget about 30 minutes to build the skill and context file, plus another 15 minutes per module to review the generated output. Here's what you'll need to start:
- Claude Code installed locally (the skill runs inside Claude Code, not in CI)
- A project with source code that has route handlers, CLI commands, or public APIs to document
- An Anthropic API key configured for your Claude Code installation
- Basic familiarity with Markdown (the skill file and its output are both Markdown)
- Optionally, a `docs/` directory with existing documentation (the skill adapts to whatever structure you have)
## The three doc layers most projects have
The skill targets three distinct documentation layers that almost every repo has, even when nobody admits it.
- User-facing docs. README pages, API references, guides, architecture docs.
- Contributor-facing docs. CONTRIBUTING.md, local tooling notes, test commands, release workflows.
- Agent-facing docs. CLAUDE.md, AGENTS.md, and any other files that tell AI tools how this repo works.
Building a skill that targets multiple documentation files (for example, README.md, CONTRIBUTING.md, TOOLS.md, CLAUDE.md, AGENTS.md, and our API endpoint pages) forces you to ask a question most teams dodge. What knowledge exists in our repo that's important enough that both people and agents should be able to reference immediately? That question shapes everything the skill generates.
## How Claude Code skills work
A Claude Code skill is a reusable Markdown file that encodes a repeatable workflow Claude can execute on command. It lives at .claude/skills/{skill-name}/SKILL.md. When you type /doc-it, Claude reads that file and runs the workflow from it, so you're not copying and pasting a large prompt every single time.
Three frontmatter fields control a skill's behavior:

- `allowed-tools` controls which tools the skill can access (Read, Write, Grep, Glob, Bash)
- `argument-hint` tells a person what to pass when invoking the skill, like a path or `all`
- `user-invocable` makes it callable as a slash command from the Claude Code prompt
Skills follow the Agent Skills open standard, so the same file runs in Cursor and VS Code Copilot too. For Claude Code's specific mechanics, Anthropic's skills documentation provides more information.
The real win over direct prompting is that the workflow lives in version control, so we can see changes over time. /doc-it all or /doc-it src/api/volumes.py invokes the correct, reviewed, shared skill file while you're working on your codebase.
## Step 1: Create the skill file
Create a new file at .claude/skills/doc-it/SKILL.md. This file defines what /doc-it does when you invoke it.
Here's the complete skill file:
---
name: doc-it
description: >
Generate, update, and audit project documentation from source code. Scans the
repo for missing docs, stale references, and gaps in existing files. Produces
new pages, patches existing ones, and recommends docs the project should have.
allowed-tools: [Read, Grep, Glob, Bash, Write]
argument-hint: "<path> to target a file or directory, or 'all' for a full audit"
user-invocable: true
---
# Generate and Maintain Project Documentation
## When to Use
Run this skill when:
- A new contributor asks "where are the docs?"
- Endpoints, CLI commands, or config options have changed since the last doc update
- You want to find out what documentation is missing, stale, or incomplete
- You're preparing a release and need the CHANGELOG or README refreshed
Invoke with `/doc-it all` for a full repo audit, or target a specific
area: `/doc-it src/api/volumes.py`, `/doc-it README.md`,
`/doc-it docs/guides/`.
## Workflow
### Step 1: Discover what exists
Scan the repo root and common locations for documentation files.
- Root files: `README.md`, `CONTRIBUTING.md`, `CHANGELOG.md`, `TOOLS.md`,
`CLAUDE.md`, `AGENTS.md`, `CLAUDE.local.md`, `LICENSE`
- Doc directories: `docs/`, `documentation/`, `doc/`, `wiki/`
- Inline docs: docstrings, JSDoc, rustdoc, godoc annotations in source files
- Config manifests: `package.json`, `pyproject.toml`, `Cargo.toml`, `go.mod`,
`Makefile`, `Dockerfile`, `docker-compose.yml`
**Resolve symlinks before doing anything else.** Many repos symlink AI
instruction files to a single canonical source. Common patterns:
- `CLAUDE.md` -> `AGENTS.md` (or the reverse)
- `.cursorrules` -> `AGENTS.md`
- `CODEX.md` -> `AGENTS.md`
Run `ls -la` on every root-level markdown file to detect symlinks. When a
symlink is found, record which file is the real file (the symlink target)
and which are aliases. For the rest of the workflow:
- Only read and edit the **canonical (real) file**, never the symlink
- If the user targets a symlink by name (e.g., `/doc-it CLAUDE.md`
and CLAUDE.md is a symlink to AGENTS.md), resolve it and edit AGENTS.md
- Report the symlink relationship in the summary so the user knows which
file was edited and why
If both `CLAUDE.md` and `AGENTS.md` exist as separate (non-symlinked) files,
treat them as independent files but flag content divergence between them in
the audit step.
Build an inventory: file path (noting symlinks), last modified date,
approximate line count, and a one-line summary of what each file covers.
### Step 2: Audit existing documentation
For every documentation file found in Step 1 (using canonical paths only,
never symlink aliases):
1. **Check for stale references.** Flag any mention of files, directories,
commands, environment variables, or config keys that no longer exist in
the codebase.
2. **Check for missing coverage.** Compare what the doc describes against
what the code contains. If the code has endpoints, CLI commands, config
options, models, or public APIs that the doc doesn't mention, list them.
3. **Check internal consistency.** If the project has both CLAUDE.md and
AGENTS.md as separate files, flag divergence between them. If README.md
describes a setup process that conflicts with CONTRIBUTING.md, flag that
too.
Report findings as a list: file, issue type (stale, missing, inconsistent),
specific detail, and suggested fix.
### Step 3: Update existing docs
If $ARGUMENTS targets a specific file (e.g., `README.md`, `CHANGELOG.md`,
`docs/api/volumes.md`), update that file directly:
- Preserve the existing structure, voice, and formatting
- Add missing sections where the code has outgrown the docs
- Remove or flag references to deleted code
- Update examples, commands, and config values to match current state
- Leave a brief inline comment (`<!-- updated by /doc-it -->`)
at each changed section so reviewers can find the edits
If $ARGUMENTS is 'all', apply updates to every file that has audit findings
from Step 2.
### Step 4: Generate missing documentation
Based on the audit, generate new files the project should have but doesn't.
Detect which files to create from the repo contents:
**README.md** (if missing or stub): Project name, one-paragraph description,
install/setup commands (from manifest files), usage example, link to docs
directory if it exists.
**CONTRIBUTING.md** (if missing): Dev environment setup, branch and PR
conventions (inferred from git history and any PR templates), test commands,
commit message format, code review expectations.
**CHANGELOG.md** (if missing): Scaffold from git tags and release history.
Group entries by version with dates.
**TOOLS.md** (if missing): Language version, package manager, dev server
command, build command, lint/format commands, any MCP or tooling config.
**CLAUDE.md or AGENTS.md** (if missing): Project overview, directory layout,
key conventions, build/test/lint commands, and existing docs to reference
for style.
**API documentation** (if the project has route handlers or RPC definitions):
Scan for HTTP handlers, gRPC services, GraphQL resolvers, or CLI command
definitions. For each public interface, generate a doc page with:
- Name and signature
- "When to Use" section explaining the use case, not restating the signature
- Request/input details with types
- Response/output examples (success and common errors)
- Runnable code example
- Caveats: auth requirements, rate limits, side effects
Write API docs to the project's existing docs directory structure. If none
exists, create `docs/api/`.
### Step 5: Recommend additional docs
After generating and updating, look for gaps that aren't covered by the
standard files above. Recommend (but don't generate without confirmation)
docs the project would benefit from:
- Architecture overview (if the project has multiple services or layers)
- Deployment guide (if Dockerfile, CI config, or infra files exist)
- Troubleshooting page (if error handling code is substantial)
- Migration guide (if the schema or API has versioned breaking changes)
- Security policy (if auth, encryption, or secrets management code exists)
Present recommendations as a bulleted list with one sentence explaining
why each one would help.
### Step 6: Summary report
Print a structured summary:
- **Updated:** files that were modified, with a one-line description of
each change
- **Created:** new files that were generated
- **Recommended:** additional docs the project should consider
- **Gaps:** information the skill couldn't determine from source alone
(e.g., deployment targets, team conventions not visible in code)
## Quality Rules
- Never invent request/response fields, config keys, CLI flags, or
environment variables that don't exist in the source code
- Every code example must be syntactically valid and runnable
- Flag gaps rather than filling them with assumptions
- Match the voice, formatting, and heading style of existing docs in
the project. If the project has no docs yet, use plain markdown with
ATX headings
- Include only status codes, error messages, and return values that the
code produces. Do not guess
- If unsure about a parameter's purpose, write
"See source at [file:line]" rather than guessing
- When updating an existing file, make the smallest edit that fixes the
gap. Do not rewrite sections that are already correct
- Preserve manual edits. If a section has been hand-written and is still
accurate, leave it alone
Our skill prompt has three important sections worth calling out:
The six-step workflow (discover, audit, update, generate, recommend, report) is what makes this more than a generic, "just write me docs" type of prompt. Step 1's symlink resolution matters because many repos symlink CLAUDE.md to AGENTS.md for cross-tool compatibility. Without that check, our skill would edit the alias and the canonical file would stay stale, or worse, both files diverge.
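That resolution step can be sketched in a few lines of Python. This is our illustration of the idea, not code from the skill itself (the skill does the equivalent with `ls -la` and file reads):

```python
from pathlib import Path

def canonical_doc_files(repo_root="."):
    """Map each root-level markdown file to the file edits should actually target."""
    mapping = {}
    for md in Path(repo_root).glob("*.md"):
        # resolve() follows symlinks, so a CLAUDE.md -> AGENTS.md alias
        # maps to AGENTS.md; real files map to themselves
        mapping[md.name] = md.resolve().name if md.is_symlink() else md.name
    return mapping
```

With CLAUDE.md symlinked to AGENTS.md, the map routes any edit targeted at CLAUDE.md to AGENTS.md, which is exactly the behavior Step 1 of the workflow asks for.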
The Quality Rules do a lot of the heavy lifting for output quality. During our testing, Claude added a review_count field to Overdue's volume response even though the Pydantic model didn't define one. Once you instruct Claude to flag gaps instead of filling them, the output gets much more useful.
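A toy version of that check makes the rule concrete. Overdue's models are Pydantic, but the same idea works with stdlib dataclasses; this VolumeResponse is a made-up stand-in, not Overdue's actual model:

```python
from dataclasses import dataclass, fields

@dataclass
class VolumeResponse:
    id: int
    title: str
    dewey_score: int

def invented_fields(documented, model):
    """Return documented field names the model doesn't actually declare."""
    declared = {f.name for f in fields(model)}
    return sorted(set(documented) - declared)

# review_count is the kind of plausible field Claude invented during testing
invented_fields(["id", "title", "dewey_score", "review_count"], VolumeResponse)
# -> ["review_count"]
```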
The argument-hint field (<path> to target a file or directory, or 'all' for a full audit) is what makes /doc-it flexible for daily use. Running /doc-it all once gives you a great starting place. Then, targeting a specific module keeps token costs bounded and review time manageable.
## Step 2: Add your project's context
A skill tells Claude what job to do. Your shared CLAUDE.md or AGENTS.md tells it what counts as good documentation in this repository, and that context file shapes output quality more than you might expect.
Claude Code loads context from three layers:
- `~/.claude/CLAUDE.md` for personal defaults like test framework and commit format
- `./CLAUDE.md` for shared repo conventions like directory layout and doc format
- `./CLAUDE.local.md` for local overrides you don't want to commit
For cross-tool compatibility, many teams use AGENTS.md instead of CLAUDE.md. Overdue maintains both as separate files, with AGENTS.md as the canonical reference. If you want a single file for all tools, you can symlink it with ln -s AGENTS.md CLAUDE.md. Benjamin Crozat's guide walks through that pattern, and Anthropic's memory docs cover the format in more depth.
Add the following sections to your project's CLAUDE.md (or AGENTS.md) to give Claude the context it needs for documentation generation. Adapt the example to match your project's structure.
## Project Documentation Context
### Code Structure
- Route handlers are in `src/api/` organized by domain
(e.g., `src/api/volumes.py`, `src/api/shelves.py`)
- Each handler file exports route definitions using
[your framework, e.g., Express, FastAPI, Hono]
- Middleware is defined in `src/middleware/`
(auth, rate-limiting, validation)
- Request/response types are in `src/models/` or `src/types/api/`
- Test files mirror the source structure in `tests/`
### Documentation Format
- Project docs live in `docs/` as Markdown files
- API docs go in `docs/api/` with one file per domain
(e.g., `docs/api/volumes.md` covers all `/api/v1/volumes/*` endpoints)
- Use this frontmatter:
```yaml
---
title: "Volumes API"
description: "Create, review, and manage knowledge volumes"
api_version: "v1"
---
```

### Documentation Style

- Start each endpoint section with the HTTP method and path as a heading
- Always include a "When to Use" subsection that explains when and why, not just what
- Code examples use `curl` for universality
- Show both success and the most common error response
- Use realistic example data (not "foo", "bar", "test123")
- Parameter tables include columns for name, type, whether it's required, and a description

### Build and Test

- Document the commands developers need to run locally
- `uv run pytest` (or your test command) runs the full suite
- `uv run fastapi dev src/main.py` (or your dev command) starts the server
- Include linting commands (e.g., `uv run ruff check .`)

### Authentication Patterns

- Routes in `src/api/*/protected.ts` require Bearer token authentication
- Routes in `src/api/*/public.ts` are unauthenticated
- Admin routes require the `admin` role claim in the JWT
- Document auth requirements for every endpoint

### Existing Docs to Reference

When generating new docs, read these existing pages for style:

- `docs/api/authentication.md` — our gold-standard API doc page
- `AGENTS.md` — project conventions and AI guidance
- `docs/guides/quickstart.md` — how we introduce concepts to new users
In practice, the context file needs to answer four questions:

1. Where do the docs live?
2. What does every endpoint page need to contain?
3. Which commands are real?
4. Which existing pages define the house style?
For Overdue, that boiled down to a few concrete directives. Docs live in `docs/`. Endpoint pages always include a `When to Use` section and `curl` examples. The real commands are `uv run pytest`, `uv run fastapi dev src/main.py`, and `uv run ruff check .`. The style references are `docs/api/authentication.md` and `AGENTS.md`.
Without that file, our skill still finds endpoints, but it falls back on generic headings and plausible-looking guesses at commands instead of the ones that actually work.
## Step 3: Run `/doc-it` and review the output
With both files in place, invoke the skill from your Claude Code prompt:
```sh
/doc-it all
```

For a scoped run targeting a single module:

```sh
/doc-it src/api/volumes.py
```
Here's what /doc-it surfaced when we ran it against Overdue.
### A missing CONTRIBUTING.md
Our skill pulled branch naming from existing PR templates, the test command from pyproject.toml, and the commit format from git history. It also assumed a code review requirement we don't enforce on solo PRs. The first draft was imperfect, but the imperfections were easy to spot because the shape of the file was right.
### A bad path in AGENTS.md
Our file still pointed to a src/middleware/ directory that no longer existed. It might sound like a minor issue until you consider what an agent does in such a situation. A dead path in a human-facing guide gets ignored or flagged in an issue. In an agent-facing guide, it becomes repeated low-grade confusion. Each AI tool that reads the file trusts that reference and spends cycles looking for code that isn't there.
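A minimal version of that audit is just path extraction plus an existence check. The backtick heuristic below is our simplification; the real skill leans on Grep and Glob:

```python
import re
from pathlib import Path

def stale_path_refs(doc_text, repo_root="."):
    """Flag backtick-quoted path references that don't exist on disk."""
    refs = re.findall(r"`([\w.-]+(?:/[\w.-]+)+/?)`", doc_text)
    root = Path(repo_root)
    return [ref for ref in refs if not (root / ref).exists()]
```

Run against an AGENTS.md that still mentions a deleted directory, the dead reference comes back in the list instead of silently misdirecting every agent that reads the file.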
### A forgotten token expiry update
The authentication docs said tokens expire in 24 hours, and our repo said 60 minutes. Nobody had noticed because the docs were written when the expiry was longer, and the change never made it upstream. The skill caught it by comparing the expires_in value in the handler against the number in authentication.md.
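The comparison is mechanical once both values are extracted. An illustrative sketch, with the variable name and doc sentence as stand-ins for Overdue's actual handler and docs:

```python
import re

def expiry_drift(handler_src, doc_text):
    """True when the code's expiry (minutes) disagrees with the docs (hours)."""
    code_minutes = int(re.search(r"EXPIRE_MINUTES\s*=\s*(\d+)", handler_src).group(1))
    doc_hours = int(re.search(r"(\d+)\s*hours", doc_text).group(1))
    return code_minutes != doc_hours * 60

expiry_drift("ACCESS_TOKEN_EXPIRE_MINUTES = 60",
             "Tokens expire after 24 hours.")  # -> True: 60 minutes vs 1440
```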
### A missing TOOLS.md

The TOOLS.md page mentioned in "What you'll build" came from this finding. None of that operational detail had been documented before, even though it's exactly the kind of knowledge new contributors and coding agents need.
### Example: generated API endpoint docs
The review endpoint page shows whether the skill describes behavior or stops at scaffolding:
## Review a Volume
`POST /api/volumes/{volume_id}/review`
Reviews a volume, resetting its Dewey Score to 100. This triggers the game
engine to award XP, update the librarian's review streak, and check for badge
unlocks.
### When to Use
Use this when a librarian confirms that a volume is still accurate. In a
review queue UI, this is the endpoint behind the "Review" button.
**404 Not Found**

```json
{
  "detail": "That volume isn't on any of our shelves. Check the catalog and try again."
}
```
/doc-it lifted a real error string from the handler, noticed the side effects in on_volume_reviewed, and wrote a When to Use section that accurately reflected how our interface uses the endpoint.
In one run, our skill's own <!-- updated by /doc-it --> comment landed inside a bash code block, which was easy to spot in review but something you'd miss without looking for it. We recommend budgeting 10 to 15 minutes of review per module.
## Step 4: Fit /doc-it into your dev workflow
/doc-it is a great start, but it's not a scalable solution. This skill can help as you're working through new code or auditing other codebases.
Run /doc-it all once when you're bootstrapping your documentation layer. Let it inventory what exists, scaffold what's missing, and fill the most obvious gaps in your project documentation.
After that, you can switch to module-by-module runs:
`/doc-it src/api/volumes.py`
By following this workflow, your token costs stay bounded, the review surface stays a reasonable size, and you can tell whether the skill is adding helpful docs or just doing cleanup. A lot of AI documentation automation fails because the review queue grows faster than anyone can clear it.
- Manual first. Run `/doc-it` before you open a PR. This can work for smaller projects or teams.
- Use a soft reminder next. A pre-commit hook that notices `src/` changed and reminds you to run the skill is better than a brittle hard gate.
- Automate last. If the output is consistently reviewable, move it into GitHub Actions. Our companion post on documentation drift walks through that workflow.
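The soft-reminder hook can be as small as this sketch. It prints a note when staged changes touch src/ and always exits 0, so it never blocks a commit:

```shell
#!/bin/sh
# .git/hooks/pre-commit (sketch): remind, don't gate
if git diff --cached --name-only | grep -q '^src/'; then
  echo "reminder: src/ changed; consider running /doc-it on the touched modules"
fi
# falling through leaves exit status 0, so the commit always proceeds
```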
## Where /doc-it falls short
We iterated on this skill over several runs before the output was consistently reviewable.
### Claude can invent fields at times
The Quality Rules from Step 1 reduced fabrication, but they didn't quite eliminate it. Even with the rules in place, Claude occasionally generated plausible-looking endpoint parameters that didn't exist in the code. Instead of confidently inventing fields, our skill now flags the gap and moves on.
### Docs drift when you forget to run it
The skill generates when you run it, not when your code changes. Modify an endpoint without running /doc-it and your docs drift quietly. This is the limitation of a local skill versus a CI-integrated solution.
### Generated and hand-written docs diverge over time
The skill updates whatever you point it at, but when a repo mixes hand-written and generated docs, the two drift apart over time unless someone on your team stays vigilant about keeping them aligned.
### Token costs add up on large projects
Running /doc-it all on a large project can use significant API tokens in a single run. Module-by-module invocation (/doc-it src/api/volumes.py) is a great cost control mechanism, keeping each run focused and the token spend predictable.
### The audit catches stale paths, but not stale intent
The skill can catch a dead directory reference in CLAUDE.md, but it can't tell you whether your instruction file is producing better docs over time. It can compare code to docs but not docs quality to intent. If you add a new game rank to defaults.py, the skill dutifully documents it but won't notice that bots.py has its own mirrored ranks table that might also need updating.
### Symlinks can send the skill to the wrong file
Many teams symlink CLAUDE.md to AGENTS.md for cross-tool compatibility. Without the symlink check in Step 1 of the skill's workflow, the skill can edit the alias while the canonical file stays stale, and both files lose important context. We added the ls -la check after discovering that Overdue's CLAUDE.md was a symlink and the skill was writing to the wrong file.
## Or...just use Dosu!
The /doc-it skill you built targets documentation that lives inside the codebase. Dosu also reads sources your skill can't see: GitHub Issues, PRs, and the discussions around code changes.
- Docs drift when you forget to run it. Dosu triggers automatically on every PR with no manual invocation, and helps guard against stale documentation as time goes on.
- Claude invents fields. Dosu offers citations that trace claims back to specific commits and discussions, rather than fabricated results.
- Generated and hand-written docs diverge. Dosu's review workflow catches divergence before anything ships, moving docs through draft, review, and published stages.
- The skill doesn't scale past one repo. Dosu works across repositories without asking you to hand-tune a fresh SKILL.md every time your team spins up a new service.
Dosu has features that help close these gaps at scale:
Self-Documenting PRs. When a PR opens or merges, Dosu scans for documentation affected by the code changes. It analyzes each document against the PR diff and posts suggested updates as PR comments. Then, engineers can accept, decline, or edit directly from GitHub.
Doc Generation. Give Dosu a topic (and optionally a template), and it researches across connected data sources. Whether code, Slack threads, GitHub issues, Confluence pages, or PRs, Dosu gathers all the context you need to produce a thorough document. Automated Topic Discovery identifies gaps by analyzing recent PR activity against existing docs and suggests new pages worth writing. All generated docs start as drafts until you explicitly publish them.
MCP integration. Dosu's MCP server connects to tools like Claude Code, Cursor, and Codex for working with your organization's documentation and data sources. Coding agents query the knowledge base for task-relevant context before they start work, and can propose new documentation topics back through our save_topic tool, which creates a feedback loop where agents consume and contribute to organizational knowledge.
For a deeper look at why documentation context matters as teams scale, see Dosu's pieces on AI-generated documentation and knowledge management in the AI era.
The per-repo approach works, and it's a great starting point for a team with a few repos and time to iterate. But if you find yourself copying skill files across five repositories and re-tuning each one, you're spending your time on maintenance rather than your product.
If that sounds familiar, give Dosu a try and see how it compares to building your own.
## Related in this series
- How to Catch Documentation Drift with Claude Code and GitHub Actions — Build a CI workflow that detects when docs fall out of sync with your codebase.


