Session Overview & Why This Matters
This session guide distills Keith Motte's live walkthrough on building custom skills and agent teams for OpenClaw bots. Whether you are creating a single-purpose skill (like finding new businesses in your city) or a complex multi-agent system (like an expert X/Twitter posting engine), the principles covered here will save you tokens, improve output quality, and give you a repeatable workflow you can use every time.
What You Will Learn
Research-First Workflow
How to do a "brain dump" of research before ever touching OpenClaw, using tools you already pay for.
Two Skill-Building Methods
The layered approach (recommended) vs. the quick-prompt approach — and when to use each.
Import & Deploy
How to take a skill built in an external LLM and bring it into OpenClaw via zip file.
Quality Assurance
Using the Google E-E-A-T framework and QA agents to ensure your outputs are trustworthy.
The Two Methods for Building Skills
Keith outlines two distinct approaches for creating OpenClaw skills. Both are valid, but they serve different purposes and produce different levels of quality. Understanding when to use each is the first decision you need to make.
| Aspect | Method 1: Layered Research | Method 2: Quick Prompt |
|---|---|---|
| Effort Level | High (1+ hour of prep research) | Low (single prompt, minutes) |
| Token Cost | Lower on OpenClaw (uses external LLM tokens for research) | Higher on OpenClaw (all processing happens in-bot) |
| Output Quality | Higher — multi-source, validated data | Good for simple, well-defined tasks |
| Best For | Complex multi-agent skills, production use | Quick utilities, prototyping, simple automations |
| Example | Expert X/Twitter posting engine with 8 agents | "Find all new businesses that opened yesterday in San Diego" |
Method 1: The Layered Research Approach

This is Keith's preferred method. The core idea is to use external LLMs and research tools you already pay for (Manus, Genspark, ChatGPT, Perplexity, Cowork, etc.) to do the heavy lifting — then hand a polished, research-backed package to OpenClaw for final optimization.
Step-by-Step Workflow
Define Your Goal
Think about what you are trying to achieve. Write a clear statement of the skill or agent team you want to build. Example: "Create an expert X/Twitter posting skill that analyzes top creators and generates high-engagement content."
Deep Research & Brain Dump
Spend time (Keith spent ~1 hour) gathering data from multiple sources. Look at best practices, articles, competitor tools, feature lists, code examples, and product pages. Use tools like Manus, Genspark, Skyworks, Atlas, Claude Browser, or Comet to scrape and compile information.
Feed Research into a Thinking LLM
Take your brain dump and feed it into an LLM with strong reasoning capabilities. Keith recommends Cowork (Claude-based) or Google Gemini 2.5 Pro for its deep thinking on complex problems. Tell the LLM: "Create an OpenClaw skill or agent" and let it ask clarifying questions.
Let the LLM Build the Skill Package
The LLM will process your research and generate a complete skill package — including an orchestrator, individual agents (Keith's example produced 8 agents), schemas, hook libraries, rubrics, and a README.
Review & QA the Output
Before importing into OpenClaw, review what was generated. Check the agent structure, schemas, and logic. Optionally add a QA agent that evaluates the overall deliverable against a separate set of rules.
Download as Zip & Import to OpenClaw
Download the skill package as a .zip file. This is the standard format OpenClaw expects. Copy the zip into OpenClaw and let it extract, read, and optimize the skill using its own memory, rules, and data.
The Two-Step Advantage
External LLM handles the expensive research and generation → OpenClaw optimizes the skill using its own knowledge. You save tokens and get a better result than either tool alone.
What Keith's Example Produced

For the Expert X/Twitter Posting skill, the Cowork/Claude LLM generated a complete system with the following architecture:
1 Orchestrator
Central coordinator that routes tasks to the right agent and manages the pipeline.
2 Brain Agents
Brain Engine and Inspiration Engine for strategy and creative direction.
2 Research Agents
Account Researcher and Metrics Analyzer for data gathering and performance analysis.
4 Execution Agents
Content Coach, Reply Guy, Execution Engine, and Ideal Bank for content creation and posting.
The package also included 2,000+ lines of documentation, a 50-hook library, profile schemas, a scoring rubric, and a command reference guide.
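The orchestrator-plus-agents architecture above can be sketched as a simple routing table. The agent names come from Keith's example; the task-type keys, handler signatures, and routing logic are illustrative assumptions, not the generated package's actual API:

```python
# Minimal sketch of an orchestrator routing tasks to named agents.
# Handlers are stubs; a real package would wire each to an LLM agent.

def make_agent(name):
    """Return a stub handler that records which agent handled the task."""
    def handle(task):
        return f"{name} handled: {task}"
    return handle

AGENTS = {
    "strategy":    make_agent("Brain Engine"),
    "inspiration": make_agent("Inspiration Engine"),
    "research":    make_agent("Account Researcher"),
    "metrics":     make_agent("Metrics Analyzer"),
    "coach":       make_agent("Content Coach"),
    "reply":       make_agent("Reply Guy"),
    "execute":     make_agent("Execution Engine"),
    "ideas":       make_agent("Ideal Bank"),
}

def orchestrate(task_type, task):
    """Route a task to the right agent, failing loudly on unknown types."""
    if task_type not in AGENTS:
        raise ValueError(f"No agent registered for task type: {task_type}")
    return AGENTS[task_type](task)
```

The central dictionary makes the pipeline easy to inspect: every capability the orchestrator can route to is visible in one place, which is exactly what a reference guide should enumerate.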
Method 2: The Quick-Prompt Approach
For simpler skills or rapid prototyping, you can skip the external research phase entirely and just tell OpenClaw what you want in plain language. Keith demonstrated this by creating a business-finder skill on the fly.
Example: New Business Finder
Keith gave OpenClaw a single plain-language prompt along the lines of "Find all new businesses that opened yesterday in San Diego." OpenClaw processed that one prompt and produced a working skill that:
- Fetches data from public sources (Secretary of State filings, OpenCorporates)
- Categorizes businesses by type (food, retail, etc.) and city
- Identifies potential leads with contact information
- Scores leads based on configurable tiers
- Sets up a cron job for daily automated execution
- Delivers results via message (Telegram in Keith's setup)
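The "scores leads based on configurable tiers" step can be sketched as follows. The field names, weights, and tier thresholds here are assumptions for illustration; the actual skill defines its own schema in its generated config:

```python
# Illustrative lead-scoring sketch with configurable tiers.
# Weights and thresholds are hypothetical, not from Keith's skill.

TIERS = [(80, "hot"), (50, "warm"), (0, "cold")]  # minimum score -> tier label

def score_lead(lead):
    """Score a business lead on simple, weighted signals."""
    score = 0
    if lead.get("phone"):
        score += 30          # reachable by phone: contactable lead
    if lead.get("website"):
        score += 20          # has an online presence
    if lead.get("category") in {"food", "retail"}:
        score += 30          # categories the skill targets
    if lead.get("opened_days_ago", 999) <= 1:
        score += 20          # opened yesterday: freshest leads
    return score

def tier_for(score):
    """Map a numeric score onto the first tier whose minimum it meets."""
    for minimum, label in TIERS:
        if score >= minimum:
            return label
```

Keeping `TIERS` as data rather than hard-coded branches is what makes the tiers "configurable": changing lead priorities means editing one list, not the scoring logic.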
Importing Skills into OpenClaw

Once you have built your skill package externally, the import process into OpenClaw follows a standard pattern. Here is the workflow Keith demonstrated:
Prepare the Zip File
Ensure your skill is packaged as a .zip file, the standard format that OpenClaw (and most agent platforms, such as Claude and Manus) expects. The zip should contain all agent files, schemas, configs, and documentation.
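If your generating LLM hands you a folder rather than a ready-made archive, packaging it yourself is a few lines. The directory layout implied here (agent files, schemas, README at the root) is a hypothetical example; zip whatever structure your LLM actually produced:

```python
# Sketch: zip a skill folder into the archive format OpenClaw expects,
# preserving relative paths so agents/, schemas/, etc. survive extraction.

import zipfile
from pathlib import Path

def package_skill(skill_dir, zip_path):
    """Zip every file under skill_dir, keeping paths relative to its root."""
    skill_dir = Path(skill_dir)
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in sorted(skill_dir.rglob("*")):
            if path.is_file():
                zf.write(path, path.relative_to(skill_dir))
    return zip_path
```

Writing archive members with paths relative to the skill root matters: OpenClaw should see `agents/orchestrator.md`, not your machine's absolute path.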
Paste the Zip into OpenClaw
Copy the zip file into your OpenClaw instance. Add a brief description: "This zip file contains the content to create an OpenClaw set of agents and skills to create a product called [your product name]."
Let OpenClaw Extract & Optimize
OpenClaw will extract the zip, read the schemas, and optimize the skill using its own memory, rules, and knowledge base. Keith's import took approximately 2 minutes to complete.
Verify the Package
Check that OpenClaw created the expected files: skill definitions, agent configs, asset files (databases/schemas), a README, and a command reference.
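A quick script can automate this verification step. The expected file names below are illustrative stand-ins based on the kinds of files Keith's import produced (skill definition, README, command reference); substitute whatever your package actually contains:

```python
# Sketch: check that an imported skill package contains the expected files.
# EXPECTED uses hypothetical names; edit to match your package's manifest.

from pathlib import Path

EXPECTED = ["SKILL.md", "README.md", "COMMANDS.md"]

def verify_package(package_dir):
    """Return the expected files that are missing from the package, if any."""
    root = Path(package_dir)
    return [name for name in EXPECTED if not (root / name).exists()]
```

An empty return value means the package passes; anything else names exactly what the extraction dropped.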
Testing & Validation Strategies
Keith demonstrated several approaches for testing skills before and after importing them into OpenClaw. Testing is critical because it helps you baseline performance and catch issues early.
Testing Environments
| Environment | Best For | Notes |
|---|---|---|
| Cowork / Claude | Pre-import testing of skill logic | Deploy as a Claude skill and test before importing to OpenClaw |
| Manus | Agent creation and testing | Manus has built-in agent capabilities; good for parallel testing |
| KimiKlaw | Quick testing sandbox | Pre-hosted OpenClaw instance (~$40/mo). Good for testing, NOT for production. |
| OpenClaw (local) | Production deployment | Your actual instance. Keith runs his on a Mac Mini. |
Testing Workflow
1. Test the skill in the external LLM first (Cowork, Claude, Manus)
2. Time how long tasks take to establish a baseline
3. Import into OpenClaw and run the same test
4. Compare results and execution time
5. Take screenshots of outputs and feed them back to the system as context
6. Iterate: "Hey, you ran this, you gave me this banger post — here's the result" + screenshot
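Steps 2–4 above amount to a timing harness. This sketch uses placeholder callables for the two environments; swap in however you actually invoke the external LLM and your OpenClaw instance:

```python
# Sketch: baseline a task's wall-clock time in two environments so the
# external-LLM run and the OpenClaw run can be compared directly.

import time

def baseline(task, runs=3):
    """Run a task several times and return the average elapsed seconds."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        task()
        timings.append(time.perf_counter() - start)
    return sum(timings) / len(timings)

def compare(external_task, openclaw_task, runs=3):
    """Report both baselines side by side for steps 2-4 of the workflow."""
    return {
        "external_llm": baseline(external_task, runs),
        "openclaw": baseline(openclaw_task, runs),
    }
```

Averaging over a few runs smooths out network and model-latency noise, which matters when the two environments differ by seconds rather than minutes.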
Quality Assurance: The E-E-A-T Framework
One of Keith's "secret sauce" techniques is applying Google's E-E-A-T (Experience, Expertise, Authoritativeness, Trustworthiness) framework to validate skill outputs. If it is good enough for Google rankings, it is good enough to ensure your agents produce trustworthy content.
Experience
Does the output reflect real, first-hand experience with the topic? Include personal knowledge in your brain dump.
Expertise
Is the content produced by (or informed by) someone with genuine expertise? Your research phase ensures this.
Authoritativeness
Are claims backed by credible sources? Always ensure traceability — you should be able to trace back every assumption.
Trustworthiness
Is the output reliable and honest? Add a QA/Bar Raiser agent that evaluates the final deliverable against your standards.
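A QA/Bar Raiser agent's rubric can be expressed as a checklist over the four criteria. In this sketch the per-criterion verdicts are simple yes/no answers supplied by a reviewer or upstream agent; a real QA agent would evaluate the deliverable itself against your rules:

```python
# Minimal E-E-A-T rubric sketch for a QA / Bar Raiser agent.
# The pass threshold and boolean verdicts are illustrative assumptions.

EEAT_CRITERIA = ["experience", "expertise", "authoritativeness", "trustworthiness"]

def eeat_review(answers, passing=4):
    """Score a deliverable against E-E-A-T; every criterion must be answered."""
    missing = [c for c in EEAT_CRITERIA if c not in answers]
    if missing:
        raise ValueError(f"Unreviewed criteria: {missing}")
    score = sum(1 for c in EEAT_CRITERIA if answers[c])
    return {"score": score, "passed": score >= passing}
```

Requiring every criterion to be answered (rather than defaulting missing ones to a pass) is the traceability point from the Authoritativeness section: no assumption goes unreviewed.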
Tools & LLM Recommendations
Keith mentioned several tools throughout the session. Here is a consolidated reference of what he uses and recommends for different stages of the skill-building process.
| Tool | Stage | Why Keith Recommends It |
|---|---|---|
| Manus | Research, Agent Testing | Built-in agent capabilities; can create and test agents directly |
| Genspark / Skyworks | Research | Good for initial data gathering and exploration |
| Cowork (Claude-based) | Skill Generation | "Claude does a really good job creating skills." Excellent for building the skill package. |
| Google Gemini 2.5 Pro | Deep Thinking | "It does deep thinking and it thinks about hard problems." Best for synthesizing large datasets. |
| Claude Opus | Skill Generation | "Opus does a really good job as well." Frontier model quality. |
| ChatGPT / Perplexity | Research | Use whatever subscription you already have to save API costs. |
| Atlas / Comet | Web Scraping | Good for grabbing data from product pages and information-rich sites. |
| OpenGravity | IDE / Text Editor | Keith's preferred editor for reviewing and editing skill packages. |
Key Takeaways & Action Items
Research First, Build Second
Spend time gathering data from multiple sources before asking any LLM to build your skill. The quality of your input determines the quality of your output.
Use External LLMs to Save Tokens
Leverage subscriptions you already pay for (ChatGPT, Perplexity, Manus, Cowork) to do the heavy research and initial skill generation.
Let OpenClaw Optimize
When you import a skill, OpenClaw applies its own memory, rules, and data to optimize it. This two-step process produces better results than either tool alone.
Always Add QA
Apply the E-E-A-T framework. Add a QA/Bar Raiser agent. Ensure traceability of sources. If it's good enough for Google, it's good enough for your agents.
Create Reference Guides
Always generate a reference guide for complex skills. You will not remember all the functionality, and a guide prevents you from missing capabilities.
Iterate with Feedback
Take screenshots of outputs, feed them back to your agents with context ("here's the result"), and let the system learn from real-world performance.
Your Action Checklist
- Pick a skill you want to build for your OpenClaw bot
- Spend 30–60 minutes researching: articles, tools, code examples, product pages
- Compile everything into a single "brain dump" document
- Feed the brain dump into Cowork, Claude, or Gemini 2.5 Pro
- Tell the LLM to create an OpenClaw skill/agent package
- Review the output and add QA rules if needed
- Download as zip and import into OpenClaw
- Test the skill and baseline its performance
- Share your results in the OpenClaw Builders community!
Community Resources & Next Steps
Join the Conversation
Share your skill builds, ask questions, and get feedback from fellow Claw Builders.
skool.com/openclawbuilders
Where to Post Your Work
| Category | What to Post |
|---|---|
| Claw Skills | Share your completed skill packages, agent architectures, and reference guides |
| Solutions | Post workarounds, fixes, and creative approaches you discovered during skill building |
| Wins | Celebrate your successful deployments and share performance metrics |
| General Discussion | Ask questions, share ideas, and discuss approaches with the community |
Recommended Next Sessions
OpenClaw Security Hardening
Learn how to secure your OpenClaw instance, run OWASP scans, and manage file permissions.
Advanced Agent Orchestration
Deep dive into multi-agent architectures, Kanban boards, and automated pipelines.
