Write Your First Claude Skill for OpenClaw: A 2026 Step-by-Step Guide
Claude Skills became an open standard in December 2025 and now power 13,729 community skills in the OpenClaw registry. Here is the definition, the SKILL.md anatomy, a full walkthrough for writing and shipping your first skill, and the comparison to MCP and sub-agents.
Last updated: April 19, 2026
A Claude Skill is a folder of markdown and optional scripts that teaches an AI agent one specific, repeatable workflow and loads itself into context only when the agent detects a matching request. Anthropic announced Skills on October 16, 2025, made Agent Skills an open standard on December 18, 2025, and published the 32-page Complete Guide to Building Skills for Claude on January 29, 2026. The same format now works across Claude Code, Claude Desktop, Cursor, and OpenClaw. By February 28, 2026, the public OpenClaw registry (ClawHub) was already carrying 13,729 community-built skills, and the format had become the default way agent builders package reusable capability.
Key takeaways
- A Skill is a directory with a `SKILL.md` file. The file has YAML frontmatter (name, description, optional allowed-tools) and markdown instructions below it.
- The `description` field decides whether the agent ever loads the skill. Write it as a clear "what it does + when to use it" sentence.
- Skills use progressive disclosure: only the frontmatter metadata sits in context until a matching request triggers a full load. Fifty skills cost roughly the same idle tokens as one.
- OpenClaw loads skills from three roots, in precedence order: bundled, local (`~/.openclaw/skills`), and per-workspace. Later roots override earlier ones by name.
- Agent Skills became an open standard in December 2025. A skill written for Claude Code drops into OpenClaw, Cursor, or any compatible runtime with no rewrite.
- Skills are the right answer for repeatable procedural knowledge. Tools, MCP servers, and sub-agents solve different problems and are explained below.
What a Claude Skill actually is
Before Skills, agent builders had three choices. You could stuff long instructions into the system prompt and pay the token cost on every turn. You could write a custom tool for each workflow and maintain the glue code forever. Or you could train a sub-agent per task and juggle routing logic by hand.
A Skill replaces all three for the case that covers most real work: the agent already knows how to do the thing in principle, but it needs the house-specific recipe. How does our agency format a client onboarding report? Which fields go into a GHL appointment webhook? What is the exact SQL migration pattern this codebase uses? Those answers are short, procedural, and worth reusing. That is a Skill.
The physical artifact is almost embarrassingly simple. A folder. Inside the folder, a file named `SKILL.md`. Optional subfolders named `scripts/`, `references/`, and `assets/` for bundled code, docs, and templates. That is the whole spec.
The agent indexes every skill's frontmatter at startup. When a user request matches the description, the agent pulls in the full markdown and any referenced files, runs the procedure, and unloads it again. Progressive disclosure keeps idle context cheap and the skill library big.
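The indexing half of that loop can be sketched in a few lines of Python. This is an illustration of the mechanism, not OpenClaw's actual implementation: it scans a skills root, parses only each file's YAML-style frontmatter into an in-memory index, and leaves the body on disk until a request matches.

```python
import os
import tempfile

def parse_frontmatter(text):
    """Return the key/value pairs between the first two '---' markers."""
    parts = text.split("---")
    if len(parts) < 3:
        return {}
    meta = {}
    for line in parts[1].strip().splitlines():
        key, sep, value = line.partition(":")
        if sep:
            meta[key.strip()] = value.strip()
    return meta

def index_skills(root):
    """Index frontmatter only; bodies stay on disk until a request matches."""
    index = {}
    for entry in sorted(os.listdir(root)):
        path = os.path.join(root, entry, "SKILL.md")
        if os.path.isfile(path):
            with open(path) as f:
                index[entry] = parse_frontmatter(f.read())
    return index

# Demo against a throwaway skills root.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "summarize-client-call"))
with open(os.path.join(root, "summarize-client-call", "SKILL.md"), "w") as f:
    f.write(
        "---\n"
        "name: summarize-client-call\n"
        "description: Summarize call transcripts.\n"
        "---\n"
        "# Body\n"
        "Long instructions live here but never reach idle context.\n"
    )

idx = index_skills(root)
print(idx["summarize-client-call"]["description"])
```

Only the small dictionary of frontmatter values ever sits in memory; the instructions below the second `---` are read later, and only if the description matches.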
Skills vs tools vs MCP servers vs sub-agents
Agent builders new to the ecosystem routinely confuse these four primitives. They solve overlapping but distinct problems, and picking the wrong one creates architecture pain that is hard to undo later.
| Primitive | What it encodes | When to reach for it | Cost model |
|---|---|---|---|
| Skill | Procedural knowledge and templates | A repeatable workflow your agent does often, with house-specific steps | Metadata-only when idle, full markdown on trigger |
| Tool | A deterministic function the agent can call | Reading a file, sending a message, running a shell command | Schema in context always, invoked on demand |
| MCP server | A remote bundle of tools and resources from one data source | Any external integration you want to share across agents | Subprocess or HTTP service, schema injected into context |
| Sub-agent | A separate agent loop with its own context window | Long research, parallel exploration, isolated failure domains | A fresh full conversation per invocation |
The mental model that keeps teams unstuck: a Skill tells the agent how to do something. A Tool or MCP server lets the agent do something. A Sub-agent delegates the whole job to a new context. Most production OpenClaw workspaces end up using all four, but they start with Skills because Skills are cheap, versionable, and readable in plain markdown.
For a deeper look at the MCP half of that picture, see the companion post on MCP connectors in OpenClaw.
The SKILL.md anatomy
Every Skill file has two parts. YAML frontmatter on top, markdown body below. The frontmatter tells the agent when to load the skill. The body tells the agent what to do once loaded.
A minimal example for a fictitious "summarize-client-call" skill:
```markdown
---
name: summarize-client-call
description: Turns a raw call transcript into a structured client summary with action items. Use this whenever the user pastes a call transcript, attaches an audio transcript file, or asks to summarize a client conversation.
allowed-tools: [read_file, write_file]
---

# Summarize Client Call

## When to use

The user has a call transcript (text, VTT, or pasted dialogue) and wants a structured summary for their CRM.

## Steps

1. Read the transcript. Identify the client name, the agency owner's name, and the call date.
2. Extract the top three outcomes the client wanted from the call.
3. Extract every action item, with the owner and a due date if mentioned.
4. Write the result to `./out/<client>-<YYYY-MM-DD>.md` using the template in `./references/template.md`.

## Output contract

The summary file must contain five H2 sections: Attendees, Context, Outcomes, Action items, Next call.
```
Three details matter more than they appear to. First, `description` is the only string the agent sees before loading the skill, so it has to contain both what the skill does and the exact trigger phrases a user might say. Anthropic's own skill-creator plugin writes the description last for exactly this reason. Second, `allowed-tools` is a safety rail: even if the agent has twenty tools available, only the listed ones can fire while this skill is active. Third, the body uses short numbered steps, not prose. Agents follow checklists reliably. They rewrite prose.
Step-by-step: write and install your first OpenClaw skill
This walkthrough produces a working skill in about fifteen minutes on a fresh OpenClaw install. Any Linux, macOS, or WSL box with Node.js 22 or newer will do.
1. Install OpenClaw and start the gateway if you have not already:
```bash
npm install -g @openclaw/cli
openclaw init
openclaw gateway start
```
2. Create the skill directory inside your local skills root:
```bash
mkdir -p ~/.openclaw/skills/summarize-client-call/references
cd ~/.openclaw/skills/summarize-client-call
touch SKILL.md references/template.md
```
3. Write the SKILL.md file. Paste the example from the anatomy section above, or use the scaffold the CLI ships with:
```bash
openclaw skills new summarize-client-call \
  --description "Turns a raw call transcript into a structured client summary." \
  --allowed-tools read_file,write_file
```
The CLI writes a templated SKILL.md, an empty references/ directory, and an assets/ folder for any images or sample files.
4. Add a template to references. Drop a markdown file at references/template.md that holds the exact output shape you want. The agent will read it at runtime, so you avoid duplicating the template inside the main SKILL.md:
```markdown
## Attendees

- <name> (role)

## Context

One paragraph.

## Outcomes

1. ...
2. ...
3. ...

## Action items

| Owner | Action | Due |
| --- | --- | --- |
| ... | ... | ... |

## Next call

Date, channel, goal.
```
5. Validate the skill locally:
```bash
openclaw skills validate summarize-client-call
```
The validator checks frontmatter syntax, warns on missing description triggers, and runs a dry-load against the current gateway.
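The frontmatter checks are the easiest part of validation to picture. The sketch below is a hypothetical validator, not the real CLI's logic: it flags a frontmatter name that disagrees with the directory name and a description too short to work as a trigger.

```python
def validate_skill(dir_name, frontmatter):
    """Illustrative frontmatter checks; a real validator does more."""
    problems = []
    if frontmatter.get("name") != dir_name:
        problems.append("frontmatter name must match the directory name")
    description = frontmatter.get("description", "")
    if not description:
        problems.append("description is required")
    elif len(description.split()) < 8:
        problems.append("description is too short to act as a trigger")
    return problems

meta = {"name": "summarize-client-call",
        "description": "Turns a transcript into a summary."}
print(validate_skill("summarize-client-call", meta))
# → ['description is too short to act as a trigger']
```

A fuller description that spells out the trigger phrasing, like the one in the anatomy section, passes the length check.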
6. Reload the daemon and list loaded skills:
```bash
openclaw gateway reload
openclaw skills list --agent my-first-agent
```
You should see summarize-client-call in the output, tagged with its source path. The gateway only loads frontmatter at this point, so startup time is unaffected.
7. Fire it. Send the agent a message that matches the trigger in the description:
```bash
openclaw chat my-first-agent \
  "Here's a call transcript from today with Acme Dental. Summarize it."
```
The agent matches the description, loads the full SKILL.md plus the template, produces the summary, and writes it to ./out/acme-dental-2026-04-19.md. If you watch the gateway logs you will see a single "skill:load" event and a "skill:unload" right after the response is returned.
That is the full loop: write markdown, validate, reload, call. No compile step, no deployment, no redeploys across clients. The same skill dropped into a teammate's workspace works identically because the contract is files on disk.
Progressive disclosure and why the token math works
Progressive disclosure is the reason skills scale. At agent startup, OpenClaw reads every SKILL.md and indexes only the YAML frontmatter. Typical frontmatter is under 200 tokens. Fifty skills cost around 10,000 idle tokens in context, which is tolerable on any modern model.
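The arithmetic is worth making explicit. A minimal sketch, where the per-skill token figures are rough assumptions rather than measured values: idle cost scales with frontmatter size only, while eagerly loading every body would scale with the much larger body size.

```python
def idle_cost(num_skills, tokens_per_frontmatter=200):
    """Idle context cost: only frontmatter metadata is indexed."""
    return num_skills * tokens_per_frontmatter

def eager_cost(num_skills, tokens_per_body=2000):
    """What it would cost to keep every full skill body in context."""
    return num_skills * tokens_per_body

print(idle_cost(50))   # 10000 tokens with progressive disclosure
print(eager_cost(50))  # 100000 tokens if every body loaded eagerly
```

The ratio is what matters: the idle library stays an order of magnitude cheaper than the loaded one, which is why fifty skills cost roughly what one does.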
When the agent decides to invoke a skill, it pulls the full markdown, any files referenced from the body, and any scripts the body explicitly calls out. Once the turn ends, that material is dropped from the working context. The next unrelated turn starts clean.
This is the same pattern that makes big codebases tractable for agents: keep the index in memory, load the file only when the query matches. It is also why Anthropic's engineering team has said Skills pair cleanly with MCP rather than replacing it. MCP handles discovery and tool invocation for live systems; Skills handle the procedural knowledge the agent applies once a tool is available.
How OpenClaw resolves bundled, local, and workspace skills
OpenClaw loads skills from three roots. Understanding the precedence rules prevents a whole class of "why is my skill not firing" support tickets.
- Bundled skills ship inside the
@openclaw/clipackage. You can point the gateway at a pinned bundled directory usingOPENCLAW_BUNDLED_PLUGINS_DIR. These are the defaults everyone gets on install. - Local skills live at
~/.openclaw/skills. Anything here applies to every agent on the machine and overrides a bundled skill of the same name. Use this for your own reusable workflows. - Workspace skills live at
~/.openclaw/agents/<agentId>/skills. Anything here applies only to that agent and overrides both bundled and local skills with the same name. Use this for client-specific or project-specific customizations.
The agent logs the source path next to each loaded skill so you can tell at a glance which copy won. For white-label agencies running thirty clients on one gateway, the common pattern is: bundled for the baseline, local for the agency house style, workspace for per-client variants. The file layout itself enforces isolation.
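The precedence rule above is just a name-keyed merge in which later roots win. A small sketch, using hypothetical skill names, to make the override behaviour concrete:

```python
def resolve_skills(bundled, local, workspace):
    """Merge three roots by skill name; later roots override earlier ones."""
    winners = {}
    for root_name, names in (("bundled", bundled),
                             ("local", local),
                             ("workspace", workspace)):
        for name in names:
            winners[name] = root_name
    return winners

bundled = ["summarize-client-call", "pdf-report"]  # ships with the CLI
local = ["summarize-client-call"]                  # agency house style
workspace = ["pdf-report"]                         # per-client variant

for name, source in sorted(resolve_skills(bundled, local, workspace).items()):
    print(f"{name}: {source}")
# → pdf-report: workspace
# → summarize-client-call: local
```

Each skill name resolves to exactly one winning root, which is the copy whose source path you should see in the agent's logs.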
To see how this slots into the larger gateway picture, read what OpenClaw actually is.
Testing, versioning, and shipping a Skill
Skills are plain files in a git repository. Treat them as code. A team shipping skills to clients usually ends up with a shape like this:
- A `skills/` directory at the root of the agency repo, one subdirectory per skill.
- A `tests/` directory next to each skill holding sample inputs and expected outputs. The CLI can run these against a lightweight agent loop: `openclaw skills test summarize-client-call`.
- Pull requests that change a skill's behaviour bump the `version` field in frontmatter. The registry surfaces this version in the UI, so clients can see when a skill changes underneath them.
- A CI job that validates every skill on every commit. The validator is fast (under a second per skill), so it stays in the pre-commit hook too.
For publishing, the open OpenClaw skills documentation describes packaging for ClawHub. For private teams, the simplest approach is to keep skills in a shared git repo and install them into `~/.openclaw/skills` via `openclaw skills add <git-url>`. The CLI handles the clone, checkout, and symlink steps so that updates are a single `git pull`.
When a Skill is not the right tool
Skills are great at one thing and bad at the opposite of that thing. They encode reusable procedural knowledge. They are the wrong answer when:
- The work is one-off. If you only need the agent to do it once, put the instructions in the chat. A Skill adds friction when it will never be reused.
- The work is mostly a live system call. If the core of the job is "hit this API and summarize the response", write an MCP server or a tool. Skills should orchestrate tools, not replace them.
- The work branches deeply based on state. If your procedure has more than two or three decision points that require loading different playbooks, promote each branch to its own skill and route between them with a top-level skill, or reach for a sub-agent.
- The instructions are longer than the task. If a skill's SKILL.md is 3,000 words long and the output is a two-line confirmation, the skill is doing too much. Break it up or convert the instructions to a real tool.
The honest test: if you would not write a Notion doc to describe the procedure, you probably should not write a skill either. Skills reward clarity, not volume.
Frequently asked questions
Do Claude Skills work in OpenClaw without modification?
Yes. Skills follow the Agent Skills open standard that Anthropic published in December 2025. The SKILL.md contract is identical across Claude Code, Claude Desktop, Cursor, and OpenClaw. A skill you wrote against Claude Code drops into `~/.openclaw/skills` and loads without changes. Skills that declare `allowed-tools` only work if the host agent actually has those tools, but that is a runtime concern, not a format one.
How many skills can one agent have before context costs blow up?
In practice, several hundred. Frontmatter is usually 100 to 200 tokens per skill. Progressive disclosure means the full body never sits in context unless the skill fires. Anthropic's own skills repo at the time of writing ships dozens of skills and runs on every Claude plan. The practical ceiling is discoverability: past 30 or 40 skills, the agent starts to have a harder time picking the right one, so invest in tighter description fields rather than a bigger library.
Is it safe to install community skills from ClawHub or agentskills.io?
Treat them like npm packages. Read the SKILL.md before installing. Check what scripts the skill ships and what allowed-tools it requests. The gateway isolates tool execution inside per-agent workspaces, but a skill that runs a shell script still runs with your user's permissions on the host. For anything touching a client workspace, fork the skill into your own repo, audit it, and pin the commit.
What is the difference between a Skill and a prompt template?
A prompt template is a string you paste into a conversation. The agent has no awareness of it beyond that one turn. A Skill is a persistent artifact the agent decides to load based on request intent, can reference files from, and can scope tool access inside. Skills are to prompts what functions are to snippets.
Can a Skill call another Skill?
Indirectly. A skill's markdown can instruct the agent to invoke another skill by name, and the agent will match the second skill's description and load it for the next turn. There is no direct programmatic import. This is intentional: the agent stays in charge of which skills are active, which keeps context use predictable and matches how progressive disclosure is supposed to work.
How do I debug a skill that is not firing?
Three checks. First, run `openclaw skills list --agent <id> --verbose` and confirm the skill loaded at startup. Second, check that the user message contains at least one phrase from the `description` field. Third, rerun the turn with `OPENCLAW_LOG=debug` to see the skill-match decision trace. Most "not firing" issues trace back to a description that is too vague or too narrow.
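A toy scorer makes the second failure mode concrete. This is not how OpenClaw actually matches skills; it is a naive word-overlap illustration of why a specific description out-competes a vague one for the same user message.

```python
def match_score(message, description):
    """Fraction of message words that also appear in the description."""
    msg_words = set(message.lower().split())
    desc_words = set(description.lower().split())
    return len(msg_words & desc_words) / max(len(msg_words), 1)

vague = "Helps with calls."
specific = ("Turns a raw call transcript into a structured client summary. "
            "Use when the user pastes a call transcript or asks to "
            "summarize a client conversation.")
message = "Here's a call transcript from today with Acme Dental. Summarize it."

print(match_score(message, vague) < match_score(message, specific))  # True
```

The specific description shares "call", "transcript", and "summarize" with the message; the vague one shares almost nothing, so it never wins the match.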
Ship the skill, then pick your next one
Skills are the lowest-friction way to teach an agent a repeatable job that lives in your head today. Write the markdown, validate it, reload the gateway, call it once, correct the description if the match was weak. That loop takes an afternoon. The compounding effect is that a year of those afternoons produces a skill library that becomes your agency's actual operating system.
If you want the OpenClaw gateway, the bundled skill library, and the ClawHub integration wired up for you rather than built by hand, Kyra deploys the whole stack on your own domain in about ten minutes. For industry-specific starter skills, see the dental practice template. For the broader integration picture, Anthropic's own Claude Code skills documentation, the open-source anthropics/skills repository, and the OpenClaw skills reference are the three sources worth bookmarking first. Skills are a format, not a feature, and formats outlast the companies that invent them.
The Kyra Team
Conversion System
We build white-label AI workforce infrastructure for digital agencies on top of OpenClaw. We publish practical guides on deploying AI agents, self-hosted AI, and multi-channel workforce design.