OpenClaw Session Guide

How to Build Custom Skills for OpenClaw Bots

A step-by-step walkthrough of Keith Motte's layered research-to-skill workflow — from brain dump to deployed agents inside OpenClaw.

Presented by Keith Motte · Community: OpenClaw Builders · Duration: ~30 min · Level: Intermediate
01 — Overview

Session Overview & Why This Matters

This session guide distills Keith Motte's live walkthrough on building custom skills and agent teams for OpenClaw bots. Whether you are creating a single-purpose skill (like finding new businesses in your city) or a complex multi-agent system (like an expert X/Twitter posting engine), the principles covered here will save you tokens, improve output quality, and give you a repeatable workflow you can use every time.

Why does this matter to you?
Building skills the right way means your OpenClaw bot produces higher-quality, more trustworthy outputs — while costing you fewer API tokens. The layered approach Keith demonstrates turns a one-shot prompt into a refined, research-backed agent system that OpenClaw can optimize further on its own.

What You Will Learn

Research-First Workflow

How to do a "brain dump" of research before ever touching OpenClaw, using tools you already pay for.

Two Skill-Building Methods

The layered approach (recommended) vs. the quick-prompt approach — and when to use each.

Import & Deploy

How to take a skill built in an external LLM and bring it into OpenClaw via zip file.

Quality Assurance

Using the Google E-E-A-T framework and QA agents to ensure your outputs are trustworthy.

Topics: Claw Skills · Agent Teams · Token Optimization · E-E-A-T
02 — Two Methods

The Two Methods for Building Skills

Keith outlines two distinct approaches for creating OpenClaw skills. Both are valid, but they serve different purposes and produce different levels of quality. Understanding when to use each is the first decision you need to make.

| Aspect | Method 1: Layered Research | Method 2: Quick Prompt |
|---|---|---|
| Effort Level | High (1+ hour of prep research) | Low (single prompt, minutes) |
| Token Cost | Lower on OpenClaw (uses external LLM tokens for research) | Higher on OpenClaw (all processing happens in-bot) |
| Output Quality | Higher — multi-source, validated data | Good for simple, well-defined tasks |
| Best For | Complex multi-agent skills, production use | Quick utilities, prototyping, simple automations |
| Example | Expert X/Twitter posting engine with 8 agents | "Find all new businesses that opened yesterday in San Diego" |
Keith's Recommendation
"I like to do it this way because I want to hit it from every angle. Saves money and I think it brings about a better end solution." — Use the layered approach for anything you plan to run in production.
03 — Layered Research Approach

Method 1: The Layered Research Approach

[Image: Research workflow visualization]

This is Keith's preferred method. The core idea is to use external LLMs and research tools you already pay for (Manus, Genspark, ChatGPT, Perplexity, Cowork, etc.) to do the heavy lifting — then hand a polished, research-backed package to OpenClaw for final optimization.

Step-by-Step Workflow

Step 1: Define Your Goal

Think about what you are trying to achieve. Write a clear statement of the skill or agent team you want to build. Example: "Create an expert X/Twitter posting skill that analyzes top creators and generates high-engagement content."

Step 2: Deep Research & Brain Dump

Spend time (Keith spent ~1 hour) gathering data from multiple sources. Look at best practices, articles, competitor tools, feature lists, code examples, and product pages. Use tools like Manus, Genspark, Skyworks, Atlas, Claude Browser, or Comet to scrape and compile information.

Step 3: Feed Research into a Thinking LLM

Take your brain dump and feed it into an LLM with strong reasoning capabilities. Keith recommends Cowork (Claude-based) or Google Gemini 2.5 Pro, which he praises for deep thinking on complex problems. Tell the LLM: "Create an OpenClaw skill or agent" and let it ask clarifying questions.

Step 4: Let the LLM Build the Skill Package

The LLM will process your research and generate a complete skill package — including an orchestrator, individual agents (Keith's example produced 8 agents), schemas, hook libraries, rubrics, and a README.

Step 5: Review & QA the Output

Before importing into OpenClaw, review what was generated. Check the agent structure, schemas, and logic. Optionally add a QA agent that evaluates the overall deliverable against a separate set of rules.

Step 6: Download as Zip & Import to OpenClaw

Download the skill package as a .zip file, the standard format OpenClaw expects. Copy the zip into OpenClaw and let it extract, read, and optimize the skill using its own memory, rules, and data.
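The packaging step can be scripted. A minimal sketch in Python, assuming the generated package sits in a local directory (the `x-posting-skill` name and README content are invented for illustration):

```python
# Package a generated skill directory into the .zip that OpenClaw expects.
# The directory name and file contents here are placeholders.
import pathlib
import shutil

pkg = pathlib.Path("x-posting-skill")
pkg.mkdir(exist_ok=True)
(pkg / "README.md").write_text("# Expert X/Twitter Posting Skill\n")

# make_archive appends ".zip" to the base name and archives the
# directory contents, ready to copy into OpenClaw.
archive = shutil.make_archive(str(pkg), "zip", root_dir=pkg)
print(archive)
```

In practice the directory would also hold the agents, schemas, and configs the LLM generated; the zip just has to contain the whole package.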

The Two-Step Advantage

External LLM handles the expensive research and generation → OpenClaw optimizes the skill using its own knowledge. You save tokens and get a better result than either tool alone.

What Keith's Example Produced

[Image: Agent orchestration architecture]

For the Expert X/Twitter Posting skill, the Cowork/Claude LLM generated a complete system with the following architecture:

1 Orchestrator

Central coordinator that routes tasks to the right agent and manages the pipeline.

2 Brain Agents

Brain Engine and Inspiration Engine for strategy and creative direction.

2 Research Agents

Account Researcher and Metrics Analyzer for data gathering and performance analysis.

4 Execution Agents

Content Coach, Reply Guy, Execution Engine, and Ideal Bank for content creation and posting.

The package also included 2,000+ lines of documentation, a 50-hook library, profile schemas, a scoring rubric, and a command reference guide.
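The agent names above come from the session, but how an orchestrator wires them together was not shown. A minimal sketch, assuming a simple intent-to-agent routing table (the intent keys and handler signatures are invented):

```python
# Illustrative orchestrator: route a task to one of the eight agents
# from Keith's example. A real agent would call an LLM; here each is a
# stand-in function so the routing logic is visible.
from typing import Callable, Dict

def make_agent(name: str) -> Callable[[str], str]:
    # Placeholder agent: echoes which agent handled the task.
    return lambda task: f"{name} handled: {task}"

AGENTS: Dict[str, Callable[[str], str]] = {
    "strategy": make_agent("Brain Engine"),
    "ideation": make_agent("Inspiration Engine"),
    "research": make_agent("Account Researcher"),
    "metrics": make_agent("Metrics Analyzer"),
    "coaching": make_agent("Content Coach"),
    "replies": make_agent("Reply Guy"),
    "posting": make_agent("Execution Engine"),
    "ideas": make_agent("Ideal Bank"),
}

def orchestrate(intent: str, task: str) -> str:
    # The orchestrator's only job: pick the right agent and forward the task.
    agent = AGENTS.get(intent)
    if agent is None:
        raise ValueError(f"no agent registered for intent {intent!r}")
    return agent(task)

print(orchestrate("replies", "draft a reply to @example"))
# -> Reply Guy handled: draft a reply to @example
```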

04 — Quick Prompt Approach

Method 2: The Quick-Prompt Approach

For simpler skills or rapid prototyping, you can skip the external research phase entirely and just tell OpenClaw what you want in plain language. Keith demonstrated this by creating a business-finder skill on the fly.

Example: New Business Finder

The Prompt (verbatim)
"Create a skill to find all the new businesses that opened yesterday in San Diego. Use resources. Run every day at 8:35 AM."

OpenClaw processed this single prompt and produced a working skill that:

  • Fetches data from public sources (Secretary of State filings, OpenCorporates)
  • Categorizes businesses by type (food, retail, etc.) and city
  • Identifies potential leads with contact information
  • Scores leads based on configurable tiers
  • Sets up a cron job for daily automated execution
  • Delivers results via message (Telegram in Keith's setup)
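The "configurable tiers" scoring can be pictured as a small rule set. A hedged sketch, where the signals, weights, and thresholds are all invented for illustration (the real skill derives them from OpenClaw's generated config):

```python
# Toy lead scorer with configurable tiers, in the spirit of the
# business-finder skill. All weights and thresholds are illustrative.
TIERS = [("hot", 80), ("warm", 50), ("cold", 0)]  # (name, minimum score)

def score_lead(biz: dict) -> int:
    score = 0
    if biz.get("phone"):
        score += 40          # reachable leads are worth more
    if biz.get("category") in {"food", "retail"}:
        score += 30          # target categories from the skill prompt
    if biz.get("city") == "San Diego":
        score += 30
    return score

def tier_for(score: int) -> str:
    # TIERS is sorted highest-first, so the first match wins.
    for name, minimum in TIERS:
        if score >= minimum:
            return name
    return "cold"

lead = {"name": "New Cafe", "phone": "555-0100",
        "category": "food", "city": "San Diego"}
print(tier_for(score_lead(lead)))  # -> hot  (40 + 30 + 30 = 100)
```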
Watch Out
Keith notes that running a quick-prompt skill right after another session can cause "contamination" — the bot may mix context from the previous session. Always start a new session for a new skill when using this method.
05 — Importing Skills

Importing Skills into OpenClaw

[Image: Import workflow visualization]

Once you have built your skill package externally, the import process into OpenClaw follows a standard pattern. Here is the workflow Keith demonstrated:

Step 1: Prepare the Zip File

Ensure your skill is packaged as a .zip file. This is the standard format that OpenClaw (and most agent frameworks, such as Claude and Manus) expects. The zip should contain all agent files, schemas, configs, and documentation.

Step 2: Paste the Zip into OpenClaw

Copy the zip file into your OpenClaw instance. Add a brief description: "This zip file contains the content to create an OpenClaw set of agents and skills to create a product called [your product name]."

Step 3: Let OpenClaw Extract & Optimize

OpenClaw will extract the zip, read the schemas, and optimize the skill using its own memory, rules, and knowledge base. Keith's import took approximately 2 minutes to complete.

Step 4: Verify the Package

Check that OpenClaw created the expected files: skill definitions, agent configs, asset files (databases/schemas), a README, and a command reference.
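The verification step can be automated with a quick check. A minimal sketch, where the expected file and directory names are assumptions based on Keith's description (skill definition, agent configs, assets, README, command reference), not OpenClaw's actual layout:

```python
# Check an extracted skill package for the files the import should have
# produced. EXPECTED names are illustrative, not a real OpenClaw contract.
import pathlib

EXPECTED = ["SKILL.md", "agents", "assets", "README.md", "COMMANDS.md"]

def verify_package(root: str) -> list:
    # Return the expected entries that are missing from the package.
    base = pathlib.Path(root)
    return [name for name in EXPECTED if not (base / name).exists()]

# Demo: a package that is missing its command reference.
demo = pathlib.Path("demo-skill")
demo.mkdir(exist_ok=True)
for name in ("SKILL.md", "README.md"):
    (demo / name).write_text("placeholder")
for name in ("agents", "assets"):
    (demo / name).mkdir(exist_ok=True)

print(verify_package("demo-skill"))  # -> ['COMMANDS.md']
```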

Pro Tip
Always ask OpenClaw to generate a reference guide for your skill. Keith emphasizes: "I always create a reference guide because oftentimes there's going to be so much functionality you're not going to remember." This becomes your cheat sheet for using the skill day-to-day.
06 — Testing

Testing & Validation Strategies

Keith demonstrated several approaches for testing skills before and after importing them into OpenClaw. Testing is critical because it helps you baseline performance and catch issues early.

Testing Environments

| Environment | Best For | Notes |
|---|---|---|
| Cowork / Claude | Pre-import testing of skill logic | Deploy as a Claude skill and test before importing to OpenClaw |
| Manus | Agent creation and testing | Manus has built-in agent capabilities; good for parallel testing |
| KimiKlaw | Quick testing sandbox | Pre-hosted OpenClaw instance (~$40/mo). Good for testing, NOT for production. |
| OpenClaw (local) | Production deployment | Your actual instance. Keith runs his on a Mac Mini. |
Security Note on KimiKlaw
Keith ran an OWASP security scan on KimiKlaw and found some concerns. He was able to mitigate them by changing file permissions (chmod), but recommends against using it for production until it becomes "more trustworthy."

Testing Workflow

  1. Test the skill in the external LLM first (Cowork, Claude, Manus)
  2. Time how long tasks take to establish a baseline
  3. Import into OpenClaw and run the same test
  4. Compare results and execution time
  5. Take screenshots of outputs and feed them back to the bot as feedback
  6. Iterate: "Hey, you ran this, you gave me this banger post — here's the result" + screenshot
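Steps 2 through 4 boil down to timing the same task in each environment. A minimal sketch, where `run_task` is a placeholder for actually triggering the skill (everything here is illustrative):

```python
# Build a timing baseline by running the same task in each environment.
# run_task is a stand-in for invoking the external LLM or OpenClaw skill.
import time

def run_task(environment: str) -> str:
    # Placeholder: a real call would trigger the skill and wait for output.
    time.sleep(0.01)
    return f"output from {environment}"

baseline = {}
for env in ("cowork", "openclaw"):
    start = time.perf_counter()
    run_task(env)
    baseline[env] = time.perf_counter() - start

# Compare execution times side by side.
for env, seconds in baseline.items():
    print(f"{env}: {seconds:.3f}s")
```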
07 — Quality Assurance

Quality Assurance: The E-E-A-T Framework

One of Keith's "secret sauce" techniques is applying Google's E-E-A-T (Experience, Expertise, Authoritativeness, Trustworthiness) framework to validate skill outputs. If it is good enough for Google rankings, it is good enough to ensure your agents produce trustworthy content.

Experience

Does the output reflect real, first-hand experience with the topic? Include personal knowledge in your brain dump.

Expertise

Is the content produced by (or informed by) someone with genuine expertise? Your research phase ensures this.

Authoritativeness

Are claims backed by credible sources? Always ensure traceability — you should be able to trace back every assumption.

Trustworthiness

Is the output reliable and honest? Add a QA/Bar Raiser agent that evaluates the final deliverable against your standards.

Keith's QA Process
"I always like to have sources. I want to make sure that I have sources and there is some traceability where I could trace back all my assumptions, all the data that I have — enough data to support the article." He also recommends adding a dedicated QA agent (a "Bar Raiser") to your agent team that reviews outputs before they are delivered.
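Keith's traceability rule lends itself to an automated check inside a QA agent. A minimal "Bar Raiser" sketch, where the claim structure and the sample claims are assumptions made for illustration:

```python
# Trustworthiness check in the spirit of Keith's QA process: every claim
# in a deliverable must carry at least one source. The claim format and
# sample data below are invented for illustration.
def bar_raiser(claims: list) -> list:
    # Return the claims that fail the traceability rule (no sources).
    return [c["text"] for c in claims if not c.get("sources")]

draft = [
    {"text": "Threads posted at 9 AM get 2x engagement",
     "sources": ["internal metrics"]},
    {"text": "Hooks under 8 words perform best",
     "sources": []},
]

failures = bar_raiser(draft)
print("unsourced claims:", failures)
# -> unsourced claims: ['Hooks under 8 words perform best']
```

A real QA agent would run a check like this before the orchestrator delivers the final output, rejecting or flagging drafts with unsourced claims.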
08 — Tools

Tools & LLM Recommendations

Keith mentioned several tools throughout the session. Here is a consolidated reference of what he uses and recommends for different stages of the skill-building process.

| Tool | Stage | Why Keith Recommends It |
|---|---|---|
| Manus | Research, Agent Testing | Built-in agent capabilities; can create and test agents directly |
| Genspark / Skyworks | Research | Good for initial data gathering and exploration |
| Cowork (Claude-based) | Skill Generation | "Claude does a really good job creating skills." Excellent for building the skill package. |
| Google Gemini 2.5 Pro | Deep Thinking | "It does deep thinking and it thinks about hard problems." Best for synthesizing large datasets. |
| Claude Opus | Skill Generation | "Opus does a really good job as well." Frontier model quality. |
| ChatGPT / Perplexity | Research | Use whatever subscription you already have to save API costs. |
| Atlas / Comet | Web Scraping | Good for grabbing data from product pages and information-rich sites. |
| OpenGravity | IDE / Text Editor | Keith's preferred editor for reviewing and editing skill packages. |
09 — Key Takeaways

Key Takeaways & Action Items

1. Research First, Build Second

Spend time gathering data from multiple sources before asking any LLM to build your skill. The quality of your input determines the quality of your output.

2. Use External LLMs to Save Tokens

Leverage subscriptions you already pay for (ChatGPT, Perplexity, Manus, Cowork) to do the heavy research and initial skill generation.

3. Let OpenClaw Optimize

When you import a skill, OpenClaw applies its own memory, rules, and data to optimize it. This two-step process produces better results than either tool alone.

4. Always Add QA

Apply the E-E-A-T framework. Add a QA/Bar Raiser agent. Ensure traceability of sources. If it's good enough for Google, it's good enough for your agents.

5. Create Reference Guides

Always generate a reference guide for complex skills. You will not remember all the functionality, and a guide prevents you from missing capabilities.

6. Iterate with Feedback

Take screenshots of outputs, feed them back to your agents with context ("here's the result"), and let the system learn from real-world performance.

Your Action Checklist

  • Pick a skill you want to build for your OpenClaw bot
  • Spend 30–60 minutes researching: articles, tools, code examples, product pages
  • Compile everything into a single "brain dump" document
  • Feed the brain dump into Cowork, Claude, or Gemini 2.5 Pro
  • Tell the LLM to create an OpenClaw skill/agent package
  • Review the output and add QA rules if needed
  • Download as zip and import into OpenClaw
  • Test the skill and baseline its performance
  • Share your results in the OpenClaw Builders community!
10 — Community

Community Resources & Next Steps

Join the Conversation

Share your skill builds, ask questions, and get feedback from fellow Claw Builders.

skool.com/openclawbuilders

Where to Post Your Work

| Category | What to Post |
|---|---|
| Claw Skills | Share your completed skill packages, agent architectures, and reference guides |
| Solutions | Post workarounds, fixes, and creative approaches you discovered during skill building |
| Wins | Celebrate your successful deployments and share performance metrics |
| General Discussion | Ask questions, share ideas, and discuss approaches with the community |
Earn Access to Skill Files
Keith gamifies skill file access through the Skool leaderboard. Engage authentically with the community — share knowledge, help others, and provide real value — to level up and unlock downloadable skill packages. Currently, Level 5 members get access to the files demonstrated in this session.

Recommended Next Sessions

OpenClaw Security Hardening

Learn how to secure your OpenClaw instance, run OWASP scans, and manage file permissions.

Advanced Agent Orchestration

Deep dive into multi-agent architectures, Kanban boards, and automated pipelines.