first commit

douboer
2026-03-21 18:57:10 +08:00
commit c49aa1a5e9
570 changed files with 107167 additions and 0 deletions

View File

@@ -0,0 +1,85 @@
---
name: ai-ppt-generator
description: Generate PPT with Baidu AI. Smart template selection based on content.
metadata: { "openclaw": { "emoji": "📑", "requires": { "bins": ["python3"], "env":["BAIDU_API_KEY"]},"primaryEnv":"BAIDU_API_KEY" } }
---
# AI PPT Generator
Generate PPT using Baidu AI with intelligent template selection.
## Smart Workflow
1. **User provides PPT topic**
2. **Agent asks**: "Want to choose a template style?"
3. **If yes** → Show styles from `ppt_theme_list.py` → User picks → Use `generate_ppt.py` with chosen `tpl_id` and real `style_id`
4. **If no** → Use `random_ppt_theme.py` (auto-selects appropriate template based on topic content)
## Intelligent Template Selection
`random_ppt_theme.py` analyzes the topic and suggests an appropriate template:
- **Business topics** → 企业商务 style
- **Technology topics** → 未来科技 style
- **Education topics** → 卡通手绘 style
- **Creative topics** → 创意趣味 style
- **Cultural topics** → 中国风 or 文化艺术 style
- **Year-end reports** → 年终总结 style
- **Minimalist design** → 扁平简约 style
- **Artistic content** → 文艺清新 style
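The mapping above can be sketched as a simple first-match lookup. This is illustrative only: the keywords below are a small subset, and the full list lives in `scripts/random_ppt_theme.py`.

```python
# Illustrative subset of the topic -> style mapping (not the full list).
CATEGORY_KEYWORDS = {
    "企业商务": ["企业", "商务", "报告", "总结"],
    "未来科技": ["科技", "人工智能", "ai", "技术"],
    "卡通手绘": ["儿童", "教育", "课件"],
}

def suggest_category(query: str, default: str = "默认") -> str:
    """Return the first category whose keyword appears in the query."""
    q = query.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(kw in q for kw in keywords):
            return category
    return default
```

Categories are checked in order, so more formal styles can be given priority simply by listing them first.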
## Scripts
- `scripts/ppt_theme_list.py` - List all available templates with style_id and tpl_id
- `scripts/random_ppt_theme.py` - Smart template selection + generate PPT
- `scripts/generate_ppt.py` - Generate PPT with specific template (uses real style_id and tpl_id from API)
## Key Features
- **Smart categorization**: Analyzes topic content to suggest appropriate style
- **Fallback logic**: If template not found, automatically uses random selection
- **Complete parameters**: Properly passes both style_id and tpl_id to API
## Usage Examples
```bash
# List all templates with IDs
python3 scripts/ppt_theme_list.py
# Smart automatic selection (recommended for most users)
python3 scripts/random_ppt_theme.py --query "人工智能发展趋势报告"
# Specific template with proper style_id
python3 scripts/generate_ppt.py --query "儿童英语课件" --tpl_id 106
# Specific template with auto-suggested category
python3 scripts/random_ppt_theme.py --query "企业年度总结" --category "企业商务"
```
## Agent Steps
1. Get PPT topic from user
2. Ask: "Want to choose a template style?"
3. **If user says YES**:
   - Run `ppt_theme_list.py` to show available templates
   - User selects a template (note the tpl_id)
   - Run `generate_ppt.py --query "TOPIC" --tpl_id ID`
4. **If user says NO**:
   - Run `random_ppt_theme.py --query "TOPIC"`
   - Script will auto-select an appropriate template based on the topic
5. Set timeout to 300 seconds (PPT generation takes 2-5 minutes)
6. Monitor output, wait for `is_end: true` to get final PPT URL
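The steps above can be sketched as an agent-side wrapper. This is a sketch, not part of the skill: the script path follows this repo's layout, the JSON shape matches the output examples below, and `extract_ppt_url` is a helper name invented here.

```python
import json
import subprocess
import sys

def extract_ppt_url(line: str):
    """Return the final PPT URL if this JSON line marks completion, else None."""
    try:
        data = json.loads(line)
    except json.JSONDecodeError:
        return None
    if data.get("is_end"):
        return data.get("data", {}).get("ppt_url")
    return None

def run_generation(topic: str, timeout: int = 300):
    """Run generate_ppt.py with the recommended 300s timeout; return URL or None."""
    proc = subprocess.Popen(
        [sys.executable, "scripts/generate_ppt.py", "--query", topic],
        stdout=subprocess.PIPE, text=True,
    )
    try:
        out, _ = proc.communicate(timeout=timeout)
    except subprocess.TimeoutExpired:
        proc.kill()
        raise
    for line in out.splitlines():
        url = extract_ppt_url(line)
        if url:
            return url
    return None
```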
## Output Examples
**During generation:**
```json
{"status": "PPT生成中", "run_time": 45}
```
**Final result:**
```json
{
  "status": "PPT导出结束",
  "is_end": true,
  "data": {"ppt_url": "https://image0.bj.bcebos.com/...ppt"}
}
```
## Technical Notes
- **API integration**: Fetches real style_id from Baidu API for each template
- **Error handling**: If template not found, falls back to random selection
- **Timeout**: Generation takes 2-5 minutes, set sufficient timeout
- **Streaming**: Uses streaming API, wait for `is_end: true` before considering complete
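A minimal sketch of the streaming consumption described above. The `data:` prefix matches what the bundled scripts parse; `parse_sse_events` is a name invented here, and you would feed it `response.iter_lines()` from a `requests.post(..., stream=True)` call.

```python
import json

def parse_sse_events(lines):
    """Yield JSON payloads from 'data:'-prefixed SSE lines."""
    for raw in lines:
        if isinstance(raw, bytes):
            raw = raw.decode("utf-8")
        if raw.startswith("data:"):
            yield json.loads(raw[5:].strip())
```

A consumer would iterate these events and stop once one carries `is_end: true`.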

View File

@@ -0,0 +1,6 @@
{
  "ownerId": "kn7akgt520t01vgs2tzx7yk6m180kt26",
  "slug": "ai-ppt-generator",
  "version": "1.1.4",
  "publishedAt": 1773656502997
}

View File

@@ -0,0 +1,148 @@
import os
import random
import sys
import time
import requests
import json
import argparse

URL_PREFIX = "https://qianfan.baidubce.com/v2/tools/ai_ppt/"

class Style:
    def __init__(self, style_id, tpl_id):
        self.style_id = style_id
        self.tpl_id = tpl_id

class Outline:
    def __init__(self, chat_id, query_id, title, outline):
        self.chat_id = chat_id
        self.query_id = query_id
        self.title = title
        self.outline = outline
def get_ppt_theme(api_key: str):
    """Get a random PPT theme"""
    headers = {
        "Authorization": "Bearer %s" % api_key,
    }
    response = requests.post(URL_PREFIX + "get_ppt_theme", headers=headers)
    result = response.json()
    if "errno" in result and result["errno"] != 0:
        raise RuntimeError(result["errmsg"])
    theme = random.choice(result["data"]["ppt_themes"])
    return Style(style_id=theme["style_id"], tpl_id=theme["tpl_id"])
def ppt_outline_generate(api_key: str, query: str):
    """Generate PPT outline"""
    headers = {
        "Authorization": "Bearer %s" % api_key,
        "X-Appbuilder-From": "openclaw",
        "Content-Type": "application/json",
    }
    headers.setdefault('Accept', 'text/event-stream')
    headers.setdefault('Cache-Control', 'no-cache')
    headers.setdefault('Connection', 'keep-alive')
    params = {
        "query": query,
    }
    title = ""
    outline = ""
    chat_id = ""
    query_id = ""
    with requests.post(URL_PREFIX + "generate_outline", headers=headers, json=params, stream=True) as response:
        for line in response.iter_lines():
            line = line.decode('utf-8')
            if line and line.startswith("data:"):
                data_str = line[5:].strip()
                delta = json.loads(data_str)
                if not title:
                    title = delta["title"]
                    chat_id = delta["chat_id"]
                    query_id = delta["query_id"]
                outline += delta["outline"]
    return Outline(chat_id=chat_id, query_id=query_id, title=title, outline=outline)
def ppt_generate(api_key: str, query: str, style_id: int = 0, tpl_id: int = None, web_content: str = None):
    """Generate PPT - simple version"""
    headers = {
        "Authorization": "Bearer %s" % api_key,
        "Content-Type": "application/json",
        "X-Appbuilder-From": "openclaw",
    }
    # Get theme
    if tpl_id is None:
        # Random theme
        style = get_ppt_theme(api_key)
        style_id = style.style_id
        tpl_id = style.tpl_id
        print(f"Using random template (tpl_id: {tpl_id})", file=sys.stderr)
    else:
        # Specific theme - use provided style_id (default 0)
        print(f"Using template tpl_id: {tpl_id}, style_id: {style_id}", file=sys.stderr)
    # Generate outline
    outline = ppt_outline_generate(api_key, query)
    # Generate PPT
    headers.setdefault('Accept', 'text/event-stream')
    headers.setdefault('Cache-Control', 'no-cache')
    headers.setdefault('Connection', 'keep-alive')
    params = {
        "query_id": int(outline.query_id),
        "chat_id": int(outline.chat_id),
        "query": query,
        "outline": outline.outline,
        "title": outline.title,
        "style_id": style_id,
        "tpl_id": tpl_id,
        "web_content": web_content,
        "enable_save_bos": True,
    }
    with requests.post(URL_PREFIX + "generate_ppt_by_outline", headers=headers, json=params, stream=True) as response:
        if response.status_code != 200:
            print(f"request failed, status code is {response.status_code}, error message is {response.text}", file=sys.stderr)
            # This is a generator, so a bare return stops iteration cleanly.
            return
        for line in response.iter_lines():
            line = line.decode('utf-8')
            if line and line.startswith("data:"):
                data_str = line[5:].strip()
                yield json.loads(data_str)
if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Generate PPT")
    parser.add_argument("--query", "-q", type=str, required=True, help="PPT topic")
    parser.add_argument("--style_id", "-si", type=int, default=0, help="Style ID (default: 0)")
    parser.add_argument("--tpl_id", "-tp", type=int, help="Template ID (optional)")
    parser.add_argument("--web_content", "-wc", type=str, default=None, help="Web content")
    args = parser.parse_args()
    api_key = os.getenv("BAIDU_API_KEY")
    if not api_key:
        print("Error: BAIDU_API_KEY must be set in environment.", file=sys.stderr)
        sys.exit(1)
    try:
        start_time = int(time.time())
        results = ppt_generate(api_key, args.query, args.style_id, args.tpl_id, args.web_content)
        for result in results:
            if "is_end" in result and result["is_end"]:
                print(json.dumps(result, ensure_ascii=False, indent=2))
            else:
                end_time = int(time.time())
                print(json.dumps({"status": result["status"], "run_time": end_time - start_time}))
    except Exception as e:
        print(f"Error: {e}", file=sys.stderr)
        sys.exit(1)

View File

@@ -0,0 +1,43 @@
import os
import sys
import requests
import json

def ppt_theme_list(api_key: str):
    url = "https://qianfan.baidubce.com/v2/tools/ai_ppt/get_ppt_theme"
    headers = {
        "Authorization": "Bearer %s" % api_key,
        "X-Appbuilder-From": "openclaw",
    }
    response = requests.post(url, headers=headers)
    result = response.json()
    if "errno" in result and result["errno"] != 0:
        raise RuntimeError(result["errmsg"])
    themes = []
    count = 0
    for theme in result["data"]["ppt_themes"]:
        count += 1
        if count > 100:
            break
        themes.append({
            "style_name_list": theme["style_name_list"],
            "style_id": theme["style_id"],
            "tpl_id": theme["tpl_id"],
        })
    return themes
if __name__ == "__main__":
    api_key = os.getenv("BAIDU_API_KEY")
    if not api_key:
        print("Error: BAIDU_API_KEY must be set in environment.", file=sys.stderr)
        sys.exit(1)
    try:
        results = ppt_theme_list(api_key)
        print(json.dumps(results, ensure_ascii=False, indent=2))
    except Exception as e:
        print(f"Error: {type(e).__name__}: {e}", file=sys.stderr)
        sys.exit(1)

View File

@@ -0,0 +1,321 @@
#!/usr/bin/env python3
"""
Random PPT Theme Selector
If user doesn't select a PPT template, this script will randomly select one
from the available templates and generate PPT.
"""
import os
import sys
import json
import random
import argparse
import subprocess
import time
def get_available_themes():
    """Get available PPT themes"""
    try:
        api_key = os.getenv("BAIDU_API_KEY")
        if not api_key:
            print("Error: BAIDU_API_KEY environment variable not set", file=sys.stderr)
            return []
        # Import the function from ppt_theme_list.py
        script_dir = os.path.dirname(os.path.abspath(__file__))
        sys.path.insert(0, script_dir)
        from ppt_theme_list import ppt_theme_list as get_themes
        themes = get_themes(api_key)
        return themes
    except Exception as e:
        print(f"Error getting themes: {e}", file=sys.stderr)
        return []
def categorize_themes(themes):
    """Categorize themes by style for better random selection"""
    categorized = {
        "企业商务": [],
        "文艺清新": [],
        "卡通手绘": [],
        "扁平简约": [],
        "中国风": [],
        "年终总结": [],
        "创意趣味": [],
        "文化艺术": [],
        "未来科技": [],
        "默认": []
    }
    for theme in themes:
        style_names = theme.get("style_name_list", [])
        if not style_names:
            categorized["默认"].append(theme)
            continue
        added = False
        for style_name in style_names:
            if style_name in categorized:
                categorized[style_name].append(theme)
                added = True
                break
        if not added:
            categorized["默认"].append(theme)
    return categorized
def select_random_theme_by_category(categorized_themes, preferred_category=None):
    """Select a random theme, optionally preferring a specific category"""
    # If preferred category specified and has themes, use it
    if preferred_category and preferred_category in categorized_themes:
        if categorized_themes[preferred_category]:
            return random.choice(categorized_themes[preferred_category])
    # Otherwise, select from all non-empty categories
    available_categories = []
    for category, themes in categorized_themes.items():
        if themes:
            available_categories.append(category)
    if not available_categories:
        return None
    # Weighted random selection: prefer non-default categories
    weights = []
    for category in available_categories:
        if category == "默认":
            weights.append(0.5)  # Lower weight for default
        else:
            weights.append(2.0)  # Higher weight for specific styles
    # Normalize weights (random.choices accepts relative weights either way)
    total_weight = sum(weights)
    weights = [w / total_weight for w in weights]
    selected_category = random.choices(available_categories, weights=weights, k=1)[0]
    return random.choice(categorized_themes[selected_category])
def suggest_category_by_query(query):
    """Suggest template category based on query keywords - enhanced version"""
    query_lower = query.lower()
    # Comprehensive keyword mapping with priority order
    keyword_mapping = [
        # Business & Corporate (highest priority for formal content)
        ("企业商务", [
            "企业", "公司", "商务", "商业", "商业计划", "商业报告",
            "营销", "市场", "销售", "财务", "会计", "审计", "投资", "融资",
            "战略", "管理", "运营", "人力资源", "hr", "董事会", "股东",
            "年报", "季报", "财报", "业绩", "kpi", "okr", "商业计划书",
            "提案", "策划", "方案", "报告", "总结", "规划", "计划"
        ]),
        # Technology & Future Tech
        ("未来科技", [
            "未来", "科技", "人工智能", "ai", "机器学习", "深度学习",
            "大数据", "云计算", "区块链", "物联网", "iot", "5g", "6g",
            "量子计算", "机器人", "自动化", "智能制造", "智慧城市",
            "虚拟现实", "vr", "增强现实", "ar", "元宇宙", "数字孪生",
            "芯片", "半导体", "集成电路", "电子", "通信", "网络",
            "网络安全", "信息安全", "数字化", "数字化转型",
            "科幻", "高科技", "前沿科技", "科技创新", "技术"
        ]),
        # Education & Children
        ("卡通手绘", [
            "卡通", "动画", "动漫", "儿童", "幼儿", "小学生", "中学生",
            "教育", "教学", "课件", "教案", "学习", "培训", "教程",
            "趣味", "有趣", "可爱", "活泼", "生动", "绘本", "漫画",
            "手绘", "插画", "图画", "图形", "游戏", "玩乐", "娱乐"
        ]),
        # Year-end & Summary
        ("年终总结", [
            "年终", "年度", "季度", "月度", "周报", "日报",
            "总结", "回顾", "汇报", "述职", "考核", "评估",
            "成果", "成绩", "业绩", "绩效", "目标", "完成",
            "工作汇报", "工作总结", "年度报告", "季度报告"
        ]),
        # Minimalist & Modern Design
        ("扁平简约", [
            "简约", "简洁", "简单", "极简", "现代", "当代",
            "设计", "视觉", "ui", "ux", "用户体验", "用户界面",
            "科技感", "数字感", "数据", "图表", "图形", "信息图",
            "分析", "统计", "报表", "dashboard", "仪表板",
            "互联网", "web", "移动", "app", "应用", "软件"
        ]),
        # Chinese Traditional
        ("中国风", [
            "中国", "中华", "传统", "古典", "古风", "古代",
            "文化", "文明", "历史", "国学", "东方", "水墨",
            "书法", "国画", "诗词", "古文", "经典", "传统节日",
            "春节", "中秋", "端午", "节气", "风水", "易经",
            "茶道", "瓷器", "丝绸"
        ]),
        # Cultural & Artistic
        ("文化艺术", [
            "文化", "艺术", "文艺", "美学", "审美", "创意",
            "创作", "作品", "展览", "博物馆", "美术馆", "画廊",
            "音乐", "舞蹈", "戏剧", "戏曲", "电影", "影视",
            "摄影", "绘画", "雕塑", "建筑", "设计", "时尚",
            "文学", "诗歌", "小说", "散文", "哲学", "思想"
        ]),
        # Artistic & Fresh
        ("文艺清新", [
            "文艺", "清新", "小清新", "治愈", "温暖", "温柔",
            "浪漫", "唯美", "优雅", "精致", "细腻", "柔和",
            "自然", "生态", "环保", "绿色", "植物", "花卉",
            "风景", "旅行", "游记", "生活", "日常", "情感"
        ]),
        # Creative & Fun
        ("创意趣味", [
            "创意", "创新", "创造", "发明", "新奇", "新颖",
            "独特", "个性", "特色", "趣味", "有趣", "好玩",
            "幽默", "搞笑", "笑话", "娱乐", "休闲", "放松",
            "脑洞", "想象力", "灵感", "点子", "想法", "概念"
        ]),
        # Academic & Research
        ("默认", [
            "研究", "学术", "科学", "论文", "课题", "项目",
            "实验", "调查", "分析", "理论", "方法", "技术",
            "医学", "健康", "医疗", "生物", "化学", "物理",
            "数学", "工程", "建筑", "法律", "政治", "经济",
            "社会", "心理", "教育", "学习", "知识", "信息"
        ])
    ]
    # Check each category with its keywords
    for category, keywords in keyword_mapping:
        for keyword in keywords:
            if keyword in query_lower:
                return category
    # If no match found, analyze query length and content
    words = query_lower.split()
    if len(words) <= 3:
        # Short query, likely specific - use "默认" or tech-related
        if any(word in query_lower for word in ["ai", "vr", "ar", "iot", "5g", "tech"]):
            return "未来科技"
        return "默认"
    else:
        # Longer query, analyze word frequency
        word_counts = {}
        for word in words:
            if len(word) > 1:  # Ignore single characters
                word_counts[word] = word_counts.get(word, 0) + 1
        # Check for business indicators
        business_words = ["报告", "总结", "计划", "方案", "业绩", "销售", "市场"]
        if any(word in word_counts for word in business_words):
            return "企业商务"
        # Check for tech indicators
        tech_words = ["技术", "科技", "数据", "数字", "智能", "系统"]
        if any(word in word_counts for word in tech_words):
            return "未来科技"
        # Default fallback
        return "默认"
def generate_ppt_with_random_theme(query, preferred_category=None):
    """Generate PPT with randomly selected theme"""
    # Get available themes
    themes = get_available_themes()
    if not themes:
        print("Error: No available themes found", file=sys.stderr)
        return False
    # Categorize themes
    categorized = categorize_themes(themes)
    # Select random theme
    selected_theme = select_random_theme_by_category(categorized, preferred_category)
    if not selected_theme:
        print("Error: Could not select a theme", file=sys.stderr)
        return False
    style_id = selected_theme.get("style_id", 0)
    tpl_id = selected_theme.get("tpl_id")
    style_names = selected_theme.get("style_name_list", ["默认"])
    print(f"Selected template: {style_names[0]} (tpl_id: {tpl_id})", file=sys.stderr)
    # Generate PPT
    script_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), "generate_ppt.py")
    try:
        # Run generate_ppt.py with the selected theme
        cmd = [
            sys.executable, script_path,
            "--query", query,
            "--tpl_id", str(tpl_id),
            "--style_id", str(style_id)
        ]
        start_time = int(time.time())
        process = subprocess.Popen(
            cmd,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
            text=True,
            bufsize=1
        )
        # Stream output
        for line in process.stdout:
            line = line.strip()
            if line:
                try:
                    data = json.loads(line)
                    if "is_end" in data and data["is_end"]:
                        print(json.dumps(data, ensure_ascii=False))
                    else:
                        end_time = int(time.time())
                        print(json.dumps({"status": data.get("status", "生成中"), "run_time": end_time - start_time}, ensure_ascii=False))
                except json.JSONDecodeError:
                    # Just print non-JSON output
                    print(line)
        process.wait()
        return process.returncode == 0
    except Exception as e:
        print(f"Error generating PPT: {e}", file=sys.stderr)
        return False
def main():
    parser = argparse.ArgumentParser(description="Generate PPT with random theme selection")
    parser.add_argument("--query", "-q", type=str, required=True, help="PPT topic/content")
    parser.add_argument("--category", "-c", type=str, help="Preferred category (企业商务/文艺清新/卡通手绘/扁平简约/中国风/年终总结/创意趣味/文化艺术/未来科技)")
    args = parser.parse_args()
    # Determine preferred category
    preferred_category = args.category
    if not preferred_category:
        preferred_category = suggest_category_by_query(args.query)
        if preferred_category:
            print(f"Auto-suggested category: {preferred_category}", file=sys.stderr)
    # Generate PPT
    success = generate_ppt_with_random_theme(args.query, preferred_category)
    if not success:
        sys.exit(1)

if __name__ == "__main__":
    main()

View File

@@ -0,0 +1,5 @@
## Self-Improving Check
- Read `./skills/self-improving/heartbeat-rules.md`
- Use `~/self-improving/heartbeat-state.md` for last-run markers and action notes
- If no file inside `~/self-improving/` changed since the last reviewed change, return `HEARTBEAT_OK`

View File

@@ -0,0 +1,250 @@
---
name: Self-Improving + Proactive Agent
slug: self-improving
version: 1.2.16
homepage: https://clawic.com/skills/self-improving
description: "Self-reflection + Self-criticism + Self-learning + Self-organizing memory. Agent evaluates its own work, catches mistakes, and improves permanently. Use when (1) a command, tool, API, or operation fails; (2) the user corrects you or rejects your work; (3) you realize your knowledge is outdated or incorrect; (4) you discover a better approach; (5) the user explicitly installs or references the skill for the current task."
changelog: "Clarifies the setup flow for proactive follow-through and safer installation behavior."
metadata: {"clawdbot":{"emoji":"🧠","requires":{"bins":[]},"os":["linux","darwin","win32"],"configPaths":["~/self-improving/"],"configPaths.optional":["./AGENTS.md","./SOUL.md","./HEARTBEAT.md"]}}
---
## When to Use
User corrects you or points out mistakes. You complete significant work and want to evaluate the outcome. You notice something in your own output that could be better. Knowledge should compound over time without manual maintenance.
## Architecture
Memory lives in `~/self-improving/` with tiered structure. If `~/self-improving/` does not exist, run `setup.md`.
Workspace setup should add the standard self-improving steering to the workspace `AGENTS.md`, `SOUL.md`, and `HEARTBEAT.md` files, with recurring maintenance routed through `heartbeat-rules.md`.
```
~/self-improving/
├── memory.md # HOT: ≤100 lines, always loaded
├── index.md # Topic index with line counts
├── heartbeat-state.md # Heartbeat state: last run, reviewed change, action notes
├── projects/ # Per-project learnings
├── domains/ # Domain-specific (code, writing, comms)
├── archive/ # COLD: decayed patterns
└── corrections.md # Last 50 corrections log
```
## Quick Reference
| Topic | File |
|-------|------|
| Setup guide | `setup.md` |
| Heartbeat state template | `heartbeat-state.md` |
| Memory template | `memory-template.md` |
| Workspace heartbeat snippet | `HEARTBEAT.md` |
| Heartbeat rules | `heartbeat-rules.md` |
| Learning mechanics | `learning.md` |
| Security boundaries | `boundaries.md` |
| Scaling rules | `scaling.md` |
| Memory operations | `operations.md` |
| Self-reflection log | `reflections.md` |
| OpenClaw HEARTBEAT seed | `openclaw-heartbeat.md` |
## Requirements
- No credentials required
- No extra binaries required
- Optional installation of the `Proactivity` skill may require network access
## Learning Signals
Log automatically when you notice these patterns:
**Corrections** → add to `corrections.md`, evaluate for `memory.md`:
- "No, that's not right..."
- "Actually, it should be..."
- "You're wrong about..."
- "I prefer X, not Y"
- "Remember that I always..."
- "I told you before..."
- "Stop doing X"
- "Why do you keep..."
**Preference signals** → add to `memory.md` if explicit:
- "I like when you..."
- "Always do X for me"
- "Never do Y"
- "My style is..."
- "For [project], use..."
**Pattern candidates** → track, promote after 3x:
- Same instruction repeated 3+ times
- Workflow that works well repeatedly
- User praises specific approach
**Ignore** (don't log):
- One-time instructions ("do X now")
- Context-specific ("in this file...")
- Hypotheticals ("what if...")
## Self-Reflection
After completing significant work, pause and evaluate:
1. **Did it meet expectations?** — Compare outcome vs intent
2. **What could be better?** — Identify improvements for next time
3. **Is this a pattern?** — If yes, log to `corrections.md`
**When to self-reflect:**
- After completing a multi-step task
- After receiving feedback (positive or negative)
- After fixing a bug or mistake
- When you notice your output could be better
**Log format:**
```
CONTEXT: [type of task]
REFLECTION: [what I noticed]
LESSON: [what to do differently]
```
**Example:**
```
CONTEXT: Building Flutter UI
REFLECTION: Spacing looked off, had to redo
LESSON: Check visual spacing before showing user
```
Self-reflection entries follow the same promotion rules: 3x applied successfully → promote to HOT.
## Quick Queries
| User says | Action |
|-----------|--------|
| "What do you know about X?" | Search all tiers for X |
| "What have you learned?" | Show last 10 from `corrections.md` |
| "Show my patterns" | List `memory.md` (HOT) |
| "Show [project] patterns" | Load `projects/{name}.md` |
| "What's in warm storage?" | List files in `projects/` + `domains/` |
| "Memory stats" | Show counts per tier |
| "Forget X" | Remove from all tiers (confirm first) |
| "Export memory" | ZIP all files |
## Memory Stats
On "memory stats" request, report:
```
📊 Self-Improving Memory

HOT (always loaded):
  memory.md: X entries

WARM (load on demand):
  projects/: X files
  domains/: X files

COLD (archived):
  archive/: X files

Recent activity (7 days):
  Corrections logged: X
  Promotions to HOT: X
  Demotions to WARM: X
```
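A sketch of how these counts could be gathered, assuming the tiered layout described under Architecture. Treating lines that start with `- ` as entries is an assumption for illustration, not part of the spec.

```python
import os

def memory_stats(root=None):
    """Report entry/file counts per tier ('- ' lines count as entries)."""
    root = root or os.path.expanduser("~/self-improving")

    def count_entries(path):
        try:
            with open(path, encoding="utf-8") as f:
                return sum(1 for line in f if line.lstrip().startswith("- "))
        except FileNotFoundError:
            return 0

    def count_files(subdir):
        path = os.path.join(root, subdir)
        return len(os.listdir(path)) if os.path.isdir(path) else 0

    return {
        "hot_entries": count_entries(os.path.join(root, "memory.md")),
        "warm_projects": count_files("projects"),
        "warm_domains": count_files("domains"),
        "cold_archive": count_files("archive"),
    }
```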
## Common Traps
| Trap | Why It Fails | Better Move |
|------|--------------|-------------|
| Learning from silence | Creates false rules | Wait for explicit correction or repeated evidence |
| Promoting too fast | Pollutes HOT memory | Keep new lessons tentative until repeated |
| Reading every namespace | Wastes context | Load only HOT plus the smallest matching files |
| Compaction by deletion | Loses trust and history | Merge, summarize, or demote instead |
## Core Rules
### 1. Learn from Corrections and Self-Reflection
- Log when user explicitly corrects you
- Log when you identify improvements in your own work
- Never infer from silence alone
- After 3 identical lessons → ask to confirm as rule
### 2. Tiered Storage
| Tier | Location | Size Limit | Behavior |
|------|----------|------------|----------|
| HOT | memory.md | ≤100 lines | Always loaded |
| WARM | projects/, domains/ | ≤200 lines each | Load on context match |
| COLD | archive/ | Unlimited | Load on explicit query |
### 3. Automatic Promotion/Demotion
- Pattern used 3x in 7 days → promote to HOT
- Pattern unused 30 days → demote to WARM
- Pattern unused 90 days → archive to COLD
- Never delete without asking
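The promotion/demotion rules above can be sketched as a pure function; the parameter names are illustrative, not part of the skill.

```python
from datetime import datetime, timedelta

def next_tier(current, uses_last_7_days, last_used, now=None):
    """Apply the promotion/demotion rules to a single pattern."""
    now = now or datetime.now()
    idle = now - last_used
    if uses_last_7_days >= 3:
        return "HOT"    # used 3x in 7 days -> promote
    if idle >= timedelta(days=90):
        return "COLD"   # unused 90 days -> archive
    if idle >= timedelta(days=30) and current == "HOT":
        return "WARM"   # unused 30 days -> demote
    return current
```

Deletion is deliberately absent: per the rules, nothing is deleted without asking.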
### 4. Namespace Isolation
- Project patterns stay in `projects/{name}.md`
- Global preferences in HOT tier (memory.md)
- Domain patterns (code, writing) in `domains/`
- Cross-namespace inheritance: global → domain → project
### 5. Conflict Resolution
When patterns contradict:
1. Most specific wins (project > domain > global)
2. Most recent wins (same level)
3. If ambiguous → ask user
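A minimal sketch of the resolution order, assuming each pattern carries a `scope` and a sortable `updated` field (both names are illustrative):

```python
def resolve(patterns):
    """Most specific scope wins; within the same scope, most recent wins."""
    rank = {"project": 0, "domain": 1, "global": 2}  # lower = more specific
    if not patterns:
        return None
    return max(patterns, key=lambda p: (-rank[p["scope"]], p["updated"]))
```

The ambiguous case (same scope, same timestamp) is the one the rules route back to the user.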
### 6. Compaction
When file exceeds limit:
1. Merge similar corrections into single rule
2. Archive unused patterns
3. Summarize verbose entries
4. Never lose confirmed preferences
### 7. Transparency
- Every action from memory → cite source: "Using X (from projects/foo.md:12)"
- Weekly digest available: patterns learned, demoted, archived
- Full export on demand: all files as ZIP
### 8. Security Boundaries
See `boundaries.md` — never store credentials, health data, third-party info.
### 9. Graceful Degradation
If context limit hit:
1. Load only memory.md (HOT)
2. Load relevant namespace on demand
3. Never fail silently — tell user what's not loaded
## Scope
This skill ONLY:
- Learns from user corrections and self-reflection
- Stores preferences in local files (`~/self-improving/`)
- Maintains heartbeat state in `~/self-improving/heartbeat-state.md` when the workspace integrates heartbeat
- Reads its own memory files on activation
This skill NEVER:
- Accesses calendar, email, or contacts
- Makes network requests
- Reads files outside `~/self-improving/`
- Infers preferences from silence or observation
- Deletes or blindly rewrites self-improving memory during heartbeat cleanup
- Modifies its own SKILL.md
## Data Storage
Local state lives in `~/self-improving/`:
- `memory.md` for HOT rules and confirmed preferences
- `corrections.md` for explicit corrections and reusable lessons
- `projects/` and `domains/` for scoped patterns
- `archive/` for decayed or inactive patterns
- `heartbeat-state.md` for recurring maintenance markers
## Related Skills
Install with `clawhub install <slug>` if user confirms:
- `memory` — Long-term memory patterns for agents
- `learning` — Adaptive teaching and explanation
- `decide` — Auto-learn decision patterns
- `escalate` — Know when to ask vs act autonomously
## Feedback
- If useful: `clawhub star self-improving`
- Stay updated: `clawhub sync`

View File

@@ -0,0 +1,6 @@
{
  "ownerId": "kn73vp5rarc3b14rc7wjcw8f8580t5d1",
  "slug": "self-improving",
  "version": "1.2.16",
  "publishedAt": 1773329327755
}

View File

@@ -0,0 +1,59 @@
# Security Boundaries
## Never Store
| Category | Examples | Why |
|----------|----------|-----|
| Credentials | Passwords, API keys, tokens, SSH keys | Security breach risk |
| Financial | Card numbers, bank accounts, crypto seeds | Fraud risk |
| Medical | Diagnoses, medications, conditions | Privacy, HIPAA |
| Biometric | Voice patterns, behavioral fingerprints | Identity theft |
| Third parties | Info about other people | No consent obtained |
| Location patterns | Home/work addresses, routines | Physical safety |
| Access patterns | What systems user has access to | Privilege escalation |
## Store with Caution
| Category | Rules |
|----------|-------|
| Work context | Decay after project ends, never share cross-project |
| Emotional states | Only if user explicitly shares, never infer |
| Relationships | Roles only ("manager", "client"), no personal details |
| Schedules | General patterns OK ("busy mornings"), not specific times |
## Transparency Requirements
1. **Audit on demand** — User asks "what do you know about me?" → full export
2. **Source tracking** — Every item tagged with when/how learned
3. **Explain actions** — "I did X because you said Y on [date]"
4. **No hidden state** — If it affects behavior, it must be visible
5. **Deletion verification** — Confirm item removed, show updated state
## Red Flags to Catch
If you find yourself doing any of these, STOP:
- Storing something "just in case it's useful later"
- Inferring sensitive info from non-sensitive data
- Keeping data after user asked to forget
- Applying personal context to work (or vice versa)
- Learning what makes user comply faster
- Building psychological profile
- Retaining third-party information
## Kill Switch
User says "forget everything":
1. Export current memory to file (so they can review)
2. Wipe all learned data
3. Confirm: "Memory cleared. Starting fresh."
4. Do not retain "ghost patterns" in behavior
## Consent Model
| Data Type | Consent Level |
|-----------|---------------|
| Explicit corrections | Implied by correction itself |
| Inferred preferences | Ask after 3 observations |
| Context/project data | Ask when first detected |
| Cross-session patterns | Explicit opt-in required |

View File

@@ -0,0 +1,36 @@
# Corrections Log — Template
> This file is created in `~/self-improving/corrections.md` when you first use the skill.
> Keeps the last 50 corrections. Older entries are evaluated for promotion or archived.
## Example Entries
```markdown
## 2026-02-19
### 14:32 — Code style
- **Correction:** "Use 2-space indentation, not 4"
- **Context:** Editing TypeScript file
- **Count:** 1 (first occurrence)
### 16:15 — Communication
- **Correction:** "Don't start responses with 'Great question!'"
- **Context:** Chat response
- **Count:** 3 → **PROMOTED to memory.md**
## 2026-02-18
### 09:00 — Project: website
- **Correction:** "For this project, always use Tailwind"
- **Context:** CSS discussion
- **Action:** Added to projects/website.md
```
## Log Format
Each entry includes:
- **Timestamp** — When the correction happened
- **Correction** — What the user said
- **Context** — What triggered it
- **Count** — How many times (for promotion tracking)
- **Action** — Where it was stored (if promoted)

View File

@@ -0,0 +1,54 @@
# Heartbeat Rules
Use heartbeat to keep `~/self-improving/` organized without creating churn or losing data.
## Source of Truth
Keep the workspace `HEARTBEAT.md` snippet minimal.
Treat this file as the stable contract for self-improving heartbeat behavior.
Store mutable run state only in `~/self-improving/heartbeat-state.md`.
## Start of Every Heartbeat
1. Ensure `~/self-improving/heartbeat-state.md` exists.
2. Write `last_heartbeat_started_at` immediately in ISO 8601.
3. Read the previous `last_reviewed_change_at`.
4. Scan `~/self-improving/` for files changed after that moment, excluding `heartbeat-state.md` itself.
## If Nothing Changed
- Set `last_heartbeat_result: HEARTBEAT_OK`
- Append a short "no material change" note if you keep an action log
- Return `HEARTBEAT_OK`
## If Something Changed
Only do conservative organization:
- refresh `index.md` if counts or file references drift
- compact oversized files by merging duplicates or summarizing repetitive entries
- move clearly misplaced notes to the right namespace only when the target is unambiguous
- preserve confirmed rules and explicit corrections exactly
- update `last_reviewed_change_at` only after the review finishes cleanly
## Safety Rules
- Most heartbeat runs should do nothing
- Prefer append, summarize, or index fixes over large rewrites
- Never delete data, empty files, or overwrite uncertain text
- Never reorganize files outside `~/self-improving/`
- If scope is ambiguous, leave files untouched and record a suggested follow-up instead
## State Fields
Keep `~/self-improving/heartbeat-state.md` simple:
- `last_heartbeat_started_at`
- `last_reviewed_change_at`
- `last_heartbeat_result`
- `last_actions`
## Behavior Standard
Heartbeat exists to keep the memory system tidy and trustworthy.
If no rule is clearly violated, do nothing.
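The start-of-heartbeat scan (steps 3-4 above) could look like this sketch; the `"never"` sentinel matches the state template, and `heartbeat-state.md` is excluded as the rules require.

```python
import os
from datetime import datetime

def changed_since(root, last_reviewed_iso, exclude=("heartbeat-state.md",)):
    """List files under root modified after the last reviewed timestamp."""
    cutoff = 0.0
    if last_reviewed_iso != "never":
        cutoff = datetime.fromisoformat(last_reviewed_iso).timestamp()
    changed = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if name in exclude:
                continue
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) > cutoff:
                changed.append(path)
    return changed
```

An empty result means the heartbeat should set `HEARTBEAT_OK` and do nothing else.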

View File

@@ -0,0 +1,22 @@
# Heartbeat State Template
Use this file as the baseline for `~/self-improving/heartbeat-state.md`.
It stores only lightweight run markers and maintenance notes.
```markdown
# Self-Improving Heartbeat State
last_heartbeat_started_at: never
last_reviewed_change_at: never
last_heartbeat_result: never
## Last actions
- none yet
```
## Rules
- update `last_heartbeat_started_at` at the beginning of every heartbeat
- update `last_reviewed_change_at` only after a clean review of changed files
- keep `last_actions` short and factual
- never turn this file into another memory log

View File

@@ -0,0 +1,106 @@
# Learning Mechanics
## What Triggers Learning
| Trigger | Confidence | Action |
|---------|------------|--------|
| "No, do X instead" | High | Log correction immediately |
| "I told you before..." | High | Flag as repeated, bump priority |
| "Always/Never do X" | Confirmed | Promote to preference |
| User edits your output | Medium | Log as tentative pattern |
| Same correction 3x | Confirmed | Ask to make permanent |
| "For this project..." | Scoped | Write to project namespace |
## What Does NOT Trigger Learning
- Silence (not confirmation)
- Single instance of anything
- Hypothetical discussions
- Third-party preferences ("John likes...")
- Group chat patterns (unless user confirms)
- Implied preferences (never infer)
## Correction Classification
### By Type
| Type | Example | Namespace |
|------|---------|-----------|
| Format | "Use bullets not prose" | global |
| Technical | "SQLite not Postgres" | domain/code |
| Communication | "Shorter messages" | global |
| Project-specific | "This repo uses Tailwind" | projects/{name} |
| Person-specific | "Marcus wants BLUF" | domains/comms |
### By Scope
```
Global: applies everywhere
└── Domain: applies to category (code, writing, comms)
└── Project: applies to specific context
└── Temporary: applies to this session only
```
## Confirmation Flow
After 3 similar corrections:
```
Agent: "I've noticed you prefer X over Y (corrected 3 times).
Should I always do this?
- Yes, always
- Only in [context]
- No, case by case"
User: "Yes, always"
Agent: → Moves to Confirmed Preferences
→ Removes from correction counter
→ Cites source on future use
```
## Pattern Evolution
### Stages
1. **Tentative** — Single correction, watch for repetition
2. **Emerging** — 2 corrections, likely pattern
3. **Pending** — 3 corrections, ask for confirmation
4. **Confirmed** — User approved, permanent unless reversed
5. **Archived** — Unused 90+ days, preserved but inactive
### Reversal
User can always reverse:
```
User: "Actually, I changed my mind about X"
Agent:
1. Archive old pattern (keep history)
2. Log reversal with timestamp
3. Add new preference as tentative
4. "Got it. I'll do Y now. (Previous: X, archived)"
```
## Anti-Patterns
### Never Learn
- What makes user comply faster (manipulation)
- Emotional triggers or vulnerabilities
- Patterns from other users (even if shared device)
- Anything that feels "creepy" to surface
### Avoid
- Over-generalizing from single instance
- Learning style over substance
- Assuming preference stability
- Ignoring context shifts
## Quality Signals
### Good Learning
- User explicitly states preference
- Pattern consistent across contexts
- Correction improves outcomes
- User confirms when asked
### Bad Learning
- Inferred from silence
- Contradicts recent behavior
- Only works in narrow context
- User never confirmed

# Memory Template
Copy this structure to `~/self-improving/memory.md` on first use.
```markdown
# Self-Improving Memory
## Confirmed Preferences
<!-- Patterns confirmed by user, never decay -->
## Active Patterns
<!-- Patterns observed 3+ times, subject to decay -->
## Recent (last 7 days)
<!-- New corrections pending confirmation -->
```
## Initial Directory Structure
Create on first activation:
```bash
mkdir -p ~/self-improving/{projects,domains,archive}
touch ~/self-improving/{memory.md,index.md,corrections.md,heartbeat-state.md}
```
## Index Template
For `~/self-improving/index.md`:
```markdown
# Memory Index
## HOT
- memory.md: 0 lines
## WARM
- (no namespaces yet)
## COLD
- (no archives yet)
Last compaction: never
```
## Corrections Log Template
For `~/self-improving/corrections.md`:
```markdown
# Corrections Log
<!-- Format:
## YYYY-MM-DD
- [HH:MM] Changed X → Y
Type: format|technical|communication|project
Context: where correction happened
Confirmed: pending (N/3) | yes | no
-->
```
## Heartbeat State Template
For `~/self-improving/heartbeat-state.md`:
```markdown
# Self-Improving Heartbeat State
last_heartbeat_started_at: never
last_reviewed_change_at: never
last_heartbeat_result: never
## Last actions
- none yet
```

# HOT Memory — Template
> This file is created in `~/self-improving/memory.md` when you first use the skill.
> Keep it ≤100 lines. Most-used patterns live here.
## Example Entries
```markdown
## Preferences
- Code style: Prefer explicit over implicit
- Communication: Direct, no fluff
- Time zone: Europe/Madrid
## Patterns (promoted from corrections)
- Always use TypeScript strict mode
- Prefer pnpm over npm
- Format: ISO 8601 for dates
## Project defaults
- Tests: Jest with coverage >80%
- Commits: Conventional commits format
```
## Usage
The agent will:
1. Load this file on every session
2. Add entries when patterns are used 3x in 7 days
3. Demote unused entries to WARM after 30 days
4. Never exceed 100 lines (compacts automatically)

## Self-Improving Check
- Read `./skills/self-improving/heartbeat-rules.md`
- Use `~/self-improving/heartbeat-state.md` for last-run markers and action notes
- If no file inside `~/self-improving/` changed since the last reviewed change, return `HEARTBEAT_OK`

# Memory Operations
## User Commands
| Command | Action |
|---------|--------|
| "What do you know about X?" | Search all tiers, return matches with sources |
| "Show my memory" | Display memory.md contents |
| "Show [project] patterns" | Load and display specific namespace |
| "Forget X" | Remove from all tiers, confirm deletion |
| "Forget everything" | Full wipe with export option |
| "What changed recently?" | Show last 20 corrections |
| "Export memory" | Generate downloadable archive |
| "Memory status" | Show tier sizes, last compaction, health |
## Automatic Operations
### On Session Start
1. Load memory.md (HOT tier)
2. Check index.md for context hints
3. If project detected → preload relevant namespace
### On Correction Received
```
1. Parse correction type (preference, pattern, override)
2. Check if duplicate (exists in any tier)
3. If new:
- Add to corrections.md with timestamp
- Increment correction counter
4. If duplicate:
- Bump counter, update timestamp
- If counter >= 3: ask to confirm as rule
5. Determine namespace (global, domain, project)
6. Write to appropriate file
7. Update index.md line counts
```
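Steps 2-4 above (dedupe plus the 3-strike counter) can be sketched as a tiny shell helper. The one-file-per-key counter directory is an assumption made for the sketch; the skill itself keeps counts inline in `corrections.md`.

```shell
# Sketch of the duplicate counter: bump_correction DIR KEY keeps one
# counter file per correction key and asks for confirmation once the
# same correction has been seen 3 times.
bump_correction() {
  dir=$1
  key=$2
  mkdir -p "$dir"
  f="$dir/$key"
  n=0
  [ -f "$f" ] && n=$(cat "$f")
  n=$((n + 1))
  printf '%s\n' "$n" > "$f"
  if [ "$n" -ge 3 ]; then
    echo "CONFIRM: '$key' corrected ${n}x, ask to make it permanent"
  else
    echo "LOGGED: '$key' ($n/3)"
  fi
}
```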
### On Pattern Match
When applying learned pattern:
```
1. Find pattern source (file:line)
2. Apply pattern
3. Cite source: "Using X (from memory.md:15)"
4. Log usage for decay tracking
```
### Weekly Maintenance (Cron)
```
1. Scan all files for decay candidates
2. Move unused >30 days to WARM
3. Archive unused >90 days to COLD
4. Run compaction if any file >limit
5. Update index.md
6. Generate weekly digest (optional)
```
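The decay scan in steps 1-3 can be sketched with `find` on modification times. Using mtime as the "last used" signal is an assumption; a real implementation would track usage explicitly.

```shell
# Sketch: list decay candidates older than DAYS under ROOT,
# skipping anything already archived. Nothing is moved or deleted.
decay_candidates() {
  root=$1
  days=$2
  find "$root" -name '*.md' -mtime +"$days" -not -path '*/archive/*'
}
# Usage: decay_candidates ~/self-improving 30   # WARM candidates
#        decay_candidates ~/self-improving 90   # COLD candidates
```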
## File Formats
### memory.md (HOT)
```markdown
# Self-Improving Memory
## Confirmed Preferences
- format: bullet points over prose (confirmed 2026-01)
- tone: direct, no hedging (confirmed 2026-01)
## Active Patterns
- "looks good" = approval to proceed (used 15x)
- single emoji = acknowledged (used 8x)
## Recent (last 7 days)
- prefer SQLite for MVPs (corrected 02-14)
```
### corrections.md
```markdown
# Corrections Log
## 2026-02-15
- [14:32] Changed verbose explanation → bullet summary
Type: communication
Context: Telegram response
Confirmed: pending (1/3)
## 2026-02-14
- [09:15] Use SQLite not Postgres for MVP
Type: technical
Context: database discussion
Confirmed: yes (said "always")
```
### projects/{name}.md
```markdown
# Project: my-app
Inherits: global, domains/code
## Patterns
- Use Tailwind (project standard)
- No Prettier (eslint only)
- Deploy via GitLab CI
## Overrides
- semicolons: yes (overrides global no-semi)
## History
- Created: 2026-01-15
- Last active: 2026-02-15
- Corrections: 12
```
## Edge Case Handling
### Contradiction Detected
```
Pattern A: "Use tabs" (global, confirmed)
Pattern B: "Use spaces" (project, corrected today)
Resolution:
1. Project overrides global → use spaces for this project
2. Log conflict in corrections.md
3. Ask: "Should spaces apply only to this project or everywhere?"
```
### User Changes Mind
```
Old: "Always use formal tone"
New: "Actually, casual is fine"
Action:
1. Archive old pattern with timestamp
2. Add new pattern as tentative
3. Keep archived for reference ("You previously preferred formal")
```
### Context Ambiguity
```
User says: "Remember I like X"
But which namespace?
1. Check current context (project? domain?)
2. If unclear, ask: "Should this apply globally or just here?"
3. Default to most specific active context
```

# Self-Reflections Log
Track self-reflections from completed work. Each entry captures what the agent learned from evaluating its own output.
## Format
```
## [Date] — [Task Type]
**What I did:** Brief description
**Outcome:** What happened (success, partial, failed)
**Reflection:** What I noticed about my work
**Lesson:** What to do differently next time
**Status:** ⏳ candidate | ✅ promoted | 📦 archived
```
## Example Entry
```
## 2026-02-25 — Flutter UI Build
**What I did:** Built a settings screen with toggle switches
**Outcome:** User said "spacing looks off"
**Reflection:** I focused on functionality, didn't visually check the result
**Lesson:** Always take a screenshot and evaluate visual balance before showing user
**Status:** ✅ promoted to domains/flutter.md
```
## Entries
(New entries appear here)

# Scaling Patterns
## Volume Thresholds
| Scale | Entries | Strategy |
|-------|---------|----------|
| Small | <100 | Single memory.md, no namespacing |
| Medium | 100-500 | Split into domains/, basic indexing |
| Large | 500-2000 | Full namespace hierarchy, aggressive compaction |
| Massive | >2000 | Archive yearly, summary-only HOT tier |
## When to Split
Create new namespace file when:
- Single file exceeds 200 lines
- Topic has 10+ distinct corrections
- User explicitly separates contexts ("for work...", "in this project...")
## Compaction Rules
### Merge Similar Corrections
```
BEFORE (3 entries):
- [02-01] Use tabs not spaces
- [02-03] Indent with tabs
- [02-05] Tab indentation please
AFTER (1 entry):
- Indentation: tabs (confirmed 3x, 02-01 to 02-05)
```
### Summarize Verbose Patterns
```
BEFORE:
- When writing emails to Marcus, use bullet points, keep under 5 items,
no jargon, bottom-line first, he prefers morning sends
AFTER:
- Marcus emails: bullets ≤5, no jargon, BLUF, AM preferred
```
### Archive with Context
When moving to COLD:
```
## Archived 2026-02
### Project: old-app (inactive since 2025-08)
- Used Vue 2 patterns
- Preferred Vuex over Pinia
- CI on Jenkins (deprecated)
Reason: Project completed, patterns unlikely to apply
```
## Index Maintenance
`index.md` tracks all namespaces:
```markdown
# Memory Index
## HOT (always loaded)
- memory.md: 87 lines, updated 2026-02-15
## WARM (load on match)
- projects/current-app.md: 45 lines
- projects/side-project.md: 23 lines
- domains/code.md: 112 lines
- domains/writing.md: 34 lines
## COLD (archive)
- archive/2025.md: 234 lines
- archive/2024.md: 189 lines
Last compaction: 2026-02-01
Next scheduled: 2026-03-01
```
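The per-file line counts in the index can be regenerated mechanically. A minimal sketch, assuming every tracked file is markdown under the memory root (the function name is an assumption):

```shell
# Sketch: print "- path: N lines" for every memory file except the index itself.
index_lines() {
  root=$1
  find "$root" -name '*.md' ! -name 'index.md' | sort | while read -r f; do
    printf -- '- %s: %s lines\n' "${f#"$root"/}" "$(wc -l < "$f" | tr -d ' ')"
  done
}
```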
## Multi-Project Patterns
### Inheritance Chain
```
global (memory.md)
└── domain (domains/code.md)
└── project (projects/app.md)
```
### Override Syntax
In project file:
```markdown
## Overrides
- indentation: spaces (overrides global tabs)
- Reason: Project eslint config requires spaces
```
### Conflict Detection
When loading, check for conflicts:
1. Build inheritance chain
2. Detect contradictions
3. Most specific wins
4. Log conflict for later review
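A minimal sketch of "most specific wins": look a key up across the files of the inheritance chain, passed most-specific first. The `- key: value` line format matches the memory templates above; the function name is an assumption.

```shell
# Sketch: resolve KEY FILE... returns the first value found, so callers
# pass project, then domain, then global files (most specific wins).
resolve() {
  key=$1
  shift
  for f in "$@"; do
    [ -f "$f" ] || continue
    v=$(grep "^- $key:" "$f" | head -n 1 | sed "s/^- $key: //")
    if [ -n "$v" ]; then
      printf '%s\n' "$v"
      return 0
    fi
  done
  echo "unset"
}
```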
## User Type Adaptations
| User Type | Memory Strategy |
|-----------|-----------------|
| Power user | Aggressive learning, minimal confirmation |
| Casual | Conservative learning, frequent confirmation |
| Team shared | Per-user namespaces, shared project space |
| Privacy-focused | Local-only, explicit consent per category |
## Recovery Patterns
### Context Lost
If agent loses context mid-session:
1. Re-read memory.md
2. Check index.md for relevant namespaces
3. Load active project namespace
4. Continue with restored patterns
### Corruption Recovery
If memory file corrupted:
1. Check archive/ for recent backup
2. Rebuild from corrections.md
3. Ask user to re-confirm critical preferences
4. Log incident for debugging

# Setup — Self-Improving Agent
## First-Time Setup
### 1. Create Memory Structure
```bash
mkdir -p ~/self-improving/{projects,domains,archive}
```
### 2. Initialize Core Files
Create `~/self-improving/memory.md` by copying the structure from `memory-template.md`. Baseline:
```markdown
# Memory (HOT Tier)
## Preferences
## Patterns
## Rules
```
Create `~/self-improving/corrections.md`:
```markdown
# Corrections Log
| Date | What I Got Wrong | Correct Answer | Status |
|------|-----------------|----------------|--------|
```
Create `~/self-improving/index.md`:
```markdown
# Memory Index
| File | Lines | Last Updated |
|------|-------|--------------|
| memory.md | 0 | — |
| corrections.md | 0 | — |
```
Create `~/self-improving/heartbeat-state.md`:
```markdown
# Self-Improving Heartbeat State
last_heartbeat_started_at: never
last_reviewed_change_at: never
last_heartbeat_result: never
## Last actions
- none yet
```
### 3. Choose Operating Mode
Add to your AGENTS.md or workspace config:
```markdown
## Self-Improving Mode
Current mode: Passive
Available modes:
- Passive: Only learn from explicit corrections
- Active: Suggest patterns after 3x repetition
- Strict: Require confirmation for every entry
```
### 4. Add SOUL.md Steering
Add this section to your `SOUL.md`:
```markdown
**Self-Improving**
Compounding execution quality is part of the job.
Before non-trivial work, load `~/self-improving/memory.md` and only the smallest relevant domain or project files.
After corrections, failed attempts, or reusable lessons, write one concise entry to the correct self-improving file immediately.
Prefer learned rules when relevant, but keep self-inferred rules revisable.
Do not skip retrieval just because the task feels familiar.
```
### 5. Add the Proactivity Companion as Part of Setup
At the end of setup, briefly tell the user that you can add capabilities that make the agent more proactive:
- noticing missing next steps
- verifying outcomes instead of assuming they landed
- recovering context better after long or interrupted threads
- keeping the right level of initiative
Then explain that this requires installing the `Proactivity` skill.
Only install it after the user explicitly agrees.
If the user agrees:
1. Run `clawhub install proactivity`
2. Read the installed `proactivity` skill
3. Continue into its setup flow immediately so the skill is active for this workspace
If the user says no, skip it cleanly and continue with self-improving only.
### 6. Refine AGENTS.md Memory Section (Non-Destructive)
Update `AGENTS.md` by extending the existing `## Memory` section: do not replace the whole section and do not remove existing lines.
If your `## Memory` block differs from the default template, insert the same additions in equivalent places so existing information is preserved.
Add this line in the continuity list (next to Daily notes and Long-term):
```markdown
- **Self-improving:** `~/self-improving/` (via `self-improving` skill) — execution-improvement memory (preferences, workflows, style patterns, what improved/worsened outcomes)
```
Right after the sentence "Capture what matters...", add:
```markdown
Use `memory/YYYY-MM-DD.md` and `MEMORY.md` for factual continuity (events, context, decisions).
Use `~/self-improving/` for compounding execution quality across tasks.
For compounding quality, read `~/self-improving/memory.md` before non-trivial work, then load only the smallest relevant domain or project files.
If in doubt, store factual history in `memory/YYYY-MM-DD.md` / `MEMORY.md`, and store reusable performance lessons in `~/self-improving/` (tentative until human validation).
```
Before the "Write It Down" subsection, add:
````markdown
Before any non-trivial task:
- Read `~/self-improving/memory.md`
- List available files first:
```bash
for d in ~/self-improving/domains ~/self-improving/projects; do
  [ -d "$d" ] && find "$d" -maxdepth 1 -type f -name "*.md"
done | sort
```
- Read up to 3 matching files from `~/self-improving/domains/`
- If a project is clearly active, also read `~/self-improving/projects/<project>.md`
- Do not read unrelated domains "just in case"
If inferring a new rule, keep it tentative until human validation.
````
Inside the "Write It Down" bullets, refine the behavior (non-destructive):
- Keep existing intent, but route execution-improvement content to `~/self-improving/`.
- If the exact bullets exist, replace only these lines; if wording differs, apply equivalent edits without removing unrelated guidance.
Use this target wording:
```markdown
- When someone says "remember this" → if it's factual context/event, update `memory/YYYY-MM-DD.md`; if it's a correction, preference, workflow/style choice, or performance lesson, log it in `~/self-improving/`
- Explicit user correction → append to `~/self-improving/corrections.md` immediately
- Reusable global rule or preference → append to `~/self-improving/memory.md`
- Domain-specific lesson → append to `~/self-improving/domains/<domain>.md`
- Project-only override → append to `~/self-improving/projects/<project>.md`
- Keep entries short, concrete, and one lesson per bullet; if scope is ambiguous, default to domain rather than global
- After a correction or strong reusable lesson, write it before the final response
```
## Verification
Run "memory status" to confirm setup:
```
📊 Self-Improving Memory
🔥 HOT (always loaded):
memory.md: 0 entries
🌡️ WARM (load on demand):
projects/: 0 files
domains/: 0 files
❄️ COLD (archived):
archive/: 0 files
⚙️ Mode: Passive
```
### 7. Add HEARTBEAT.md Steering
Add this section to your `HEARTBEAT.md`:
```markdown
## Self-Improving Check
- Read `./skills/self-improving/heartbeat-rules.md`
- Use `~/self-improving/heartbeat-state.md` for last-run markers and action notes
- If no file inside `~/self-improving/` changed since the last reviewed change, return `HEARTBEAT_OK`
```
Keep this in the same default setup flow as the AGENTS and SOUL additions so recurring maintenance is installed consistently.
If your installed skills path differs, keep the same three lines but point the first line at the installed copy of `heartbeat-rules.md`.

# Errors Log
Command failures, exceptions, and unexpected behaviors.
---

# Feature Requests
Capabilities requested by user that don't currently exist.
---

# Learnings Log
Captured learnings, corrections, and discoveries. Review before major tasks.
---

---
name: self-improvement
description: "Log learnings, errors, and corrections to support continuous improvement. Use when: (1) a command or operation fails unexpectedly, (2) the user corrects Claude ('No, that's wrong...', 'Actually...'), (3) the user requests a capability that does not currently exist, (4) an external API or tool fails, (5) Claude realizes its own knowledge is outdated or incorrect, (6) a better approach is discovered for a recurring task. Also review learnings before starting any major task."
metadata:
---
# Self-Improvement Skill
Log learnings and errors to markdown files to support continuous improvement. Follow-up coding agents can act on these records, and important learnings can be promoted to project-level memory.
## Quick Reference
| Situation | Action |
|-----------|--------|
| Command/operation fails | Log to `.learnings/ERRORS.md` |
| User corrects you | Log to `.learnings/LEARNINGS.md` with the `correction` category |
| User wants a missing feature | Log to `.learnings/FEATURE_REQUESTS.md` |
| API/external tool fails | Log integration details to `.learnings/ERRORS.md` |
| Knowledge was outdated | Log to `.learnings/LEARNINGS.md` with the `knowledge_gap` category |
| Found a better approach | Log to `.learnings/LEARNINGS.md` with the `best_practice` category |
| Simplify/Harden recurring patterns | Log or update `.learnings/LEARNINGS.md` with `Source: simplify-and-harden` and a stable `Pattern-Key` |
| Similar to an existing entry | Link via `**See Also**` and consider bumping priority |
| Broadly applicable learning | Promote to `CLAUDE.md`, `AGENTS.md`, and/or `.github/copilot-instructions.md` |
| Workflow improvements | Promote to `AGENTS.md` (OpenClaw workspace) |
| Tool gotchas | Promote to `TOOLS.md` (OpenClaw workspace) |
| Behavioral patterns | Promote to `SOUL.md` (OpenClaw workspace) |
## OpenClaw Setup (Recommended)
OpenClaw is the primary platform for this skill. It works through workspace-based prompt injection and automatic skill loading.
### Installation
**Via ClawdHub (recommended):**
```bash
clawdhub install self-improving-agent
```
**Manual:**
```bash
git clone https://github.com/peterskoett/self-improving-agent.git ~/.openclaw/skills/self-improving-agent
```
Adapted for OpenClaw from the original repo: https://github.com/pskoett/pskoett-ai-skills - https://github.com/pskoett/pskoett-ai-skills/tree/main/skills/self-improvement
### Workspace Structure
OpenClaw injects these files into every session:
```
~/.openclaw/workspace/
├── AGENTS.md          # Multi-agent workflows and delegation patterns
├── SOUL.md            # Behavioral guidance, personality, and principles
├── TOOLS.md           # Tool capabilities, integration gotchas
├── MEMORY.md          # Long-term memory (main session only)
├── memory/            # Daily memory files
│   └── YYYY-MM-DD.md
└── .learnings/        # This skill's log files
    ├── LEARNINGS.md
    ├── ERRORS.md
    └── FEATURE_REQUESTS.md
```
### Create Learning Files
```bash
mkdir -p ~/.openclaw/workspace/.learnings
```
Then create these log files (or copy them from `assets/`):
- `LEARNINGS.md` — corrections, knowledge gaps, best practices
- `ERRORS.md` — command failures, exceptions
- `FEATURE_REQUESTS.md` — capabilities requested by the user
### Promotion Targets
When learnings prove broadly applicable, promote them to workspace files:
| Learning Type | Promote To | Example |
|---------------|------------|---------|
| Behavioral patterns | `SOUL.md` | "Be concise, avoid disclaimers" |
| Workflow improvements | `AGENTS.md` | "Spawn sub-agents for long tasks" |
| Tool gotchas | `TOOLS.md` | "Git push needs auth configured first" |
### Inter-Session Communication
OpenClaw provides these tools for sharing learnings across sessions:
- **sessions_list** — view active/recent sessions
- **sessions_history** — read another session's transcript
- **sessions_send** — send a learning to another session
- **sessions_spawn** — spawn a sub-agent for background work
### Optional: Enable Hook
To get an automatic reminder at session start:
```bash
# Copy hook to OpenClaw hooks directory
cp -r hooks/openclaw ~/.openclaw/hooks/self-improvement
# Enable it
openclaw hooks enable self-improvement
```
See `references/openclaw-integration.md` for full details.
---
## Generic Setup (Other Agents)
For Claude Code, Codex, Copilot, or other agents, create `.learnings/` in your project:
```bash
mkdir -p .learnings
```
Copy the templates from `assets/`, or create the files with headers yourself.
### Add a reminder to your agent file (`AGENTS.md`, `CLAUDE.md`, or `.github/copilot-instructions.md`) so you remember to log learnings (an alternative to hook-based reminders)
#### Self-Improvement Workflow
When errors or corrections occur:
1. Log to `.learnings/ERRORS.md`, `LEARNINGS.md`, or `FEATURE_REQUESTS.md`
2. Review and promote broadly applicable learnings to:
   - `CLAUDE.md` - project facts and conventions
   - `AGENTS.md` - workflows and automation
   - `.github/copilot-instructions.md` - Copilot context
## Logging Format
### Learning Entry
Append to `.learnings/LEARNINGS.md`:
```markdown
## [LRN-YYYYMMDD-XXX] category
**Logged**: ISO-8601 timestamp
**Priority**: low | medium | high | critical
**Status**: pending
**Area**: frontend | backend | infra | tests | docs | config
### Summary
One line describing what was learned
### Details
Full context: what happened, what was wrong, and what the correct approach is
### Suggested Action
The concrete fix or improvement to make
### Metadata
- Source: conversation | error | user_feedback
- Related Files: path/to/file.ext
- Tags: tag1, tag2
- See Also: LRN-20250110-001 (if related to existing entry)
- Pattern-Key: simplify.dead_code | harden.input_validation (optional, for recurring-pattern tracking)
- Recurrence-Count: 1 (optional)
- First-Seen: 2025-01-15 (optional)
- Last-Seen: 2025-01-15 (optional)
---
```
### Error Entry
Append to `.learnings/ERRORS.md`:
````markdown
## [ERR-YYYYMMDD-XXX] skill_or_command_name
**Logged**: ISO-8601 timestamp
**Priority**: high
**Status**: pending
**Area**: frontend | backend | infra | tests | docs | config
### Summary
Brief description of what failed
### Error
```
Actual error message or output
```
### Context
- The command/operation that was attempted
- Inputs or parameters used
- Environment details if relevant
### Suggested Fix
The likely fix, if one can be identified
### Metadata
- Reproducible: yes | no | unknown
- Related Files: path/to/file.ext
- See Also: ERR-20250110-001 (if recurring)
---
````
### Feature Request Entry
Append to `.learnings/FEATURE_REQUESTS.md`:
```markdown
## [FEAT-YYYYMMDD-XXX] capability_name
**Logged**: ISO-8601 timestamp
**Priority**: medium
**Status**: pending
**Area**: frontend | backend | infra | tests | docs | config
### Requested Capability
What the user wants to accomplish
### User Context
Why they need it and what problem it solves
### Complexity Estimate
simple | medium | complex
### Suggested Implementation
How it could be implemented and which capability it might extend
### Metadata
- Frequency: first_time | recurring
- Related Features: existing_feature_name
---
```
## ID Generation
Format: `TYPE-YYYYMMDD-XXX`
- TYPE: `LRN` (learning), `ERR` (error), `FEAT` (feature)
- YYYYMMDD: the current date
- XXX: a sequence number or three random characters (e.g. `001`, `A7B`)
Examples: `LRN-20250115-001`, `ERR-20250115-A3F`, `FEAT-20250115-002`
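The sequential variant can be generated by counting today's existing IDs in the target log. A sketch (the function name is an assumption):

```shell
# Sketch: new_id TYPE FILE prints the next TYPE-YYYYMMDD-XXX id,
# numbering sequentially within the current day.
new_id() {
  type=$1
  file=$2
  day=$(date +%Y%m%d)
  n=0
  [ -f "$file" ] && n=$(grep -c -- "$type-$day-" "$file")
  printf '%s-%s-%03d\n' "$type" "$day" $((n + 1))
}
```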
## Resolving Entries
When an issue is fixed, update the entry:
1. Change `**Status**: pending` to `**Status**: resolved`
2. Append a resolution block after the Metadata:
```markdown
### Resolution
- **Resolved**: 2025-01-16T09:00:00Z
- **Commit/PR**: abc123 or #42
- **Notes**: Brief description of what was done
```
Other status values:
- `in_progress` - being worked on
- `wont_fix` - decided not to address (explain why in the Resolution notes)
- `promoted` - promoted to CLAUDE.md, AGENTS.md, or .github/copilot-instructions.md
## Promoting to Project Memory
When a learning is broadly applicable (not a one-off fix), promote it to permanent project memory.
### When to Promote
- The learning applies across multiple files/features
- It is something every contributor (human or AI) should know
- It prevents repeated mistakes
- It documents a project-specific convention
### Promotion Targets
| Target | What Belongs There |
|--------|-------------------|
| `CLAUDE.md` | Project facts, conventions, and gotchas every Claude interaction should know |
| `AGENTS.md` | Agent-specific workflows, tool usage patterns, automation rules |
| `.github/copilot-instructions.md` | Project context and conventions for GitHub Copilot |
| `SOUL.md` | Behavioral guidance, communication style, principles (OpenClaw workspace) |
| `TOOLS.md` | Tool capabilities, usage patterns, integration gotchas (OpenClaw workspace) |
### How to Promote
1. **Distill**: condense the learning into a concise rule or fact
2. **Add**: insert it into the appropriate section of the target file (create the section if needed)
3. **Update** the original entry:
   - Change `**Status**: pending` to `**Status**: promoted`
   - Add `**Promoted**: CLAUDE.md`, `AGENTS.md`, or `.github/copilot-instructions.md`
### Promotion Examples
**Learning** (verbose):
> Project uses pnpm workspaces. Attempted `npm install` but failed.
> Lock file is `pnpm-lock.yaml`. Must use `pnpm install`.
**In CLAUDE.md** (concise):
```markdown
## Build & Dependencies
- Package manager: pnpm (not npm) - use `pnpm install`
```
**Learning** (verbose):
> When modifying API endpoints, must regenerate TypeScript client.
> Forgetting this causes type mismatches at runtime.
**In AGENTS.md** (actionable):
```markdown
## After API Changes
1. Regenerate client: `pnpm run generate:api`
2. Check for type errors: `pnpm tsc --noEmit`
```
## Recurring Pattern Detection
If what you are about to log resembles an existing entry:
1. **Search first**: `grep -r "keyword" .learnings/`
2. **Link entries**: add `**See Also**: ERR-20250110-001` to the Metadata
3. **Bump priority** if the issue keeps recurring
4. **Consider a systemic fix** - recurring issues usually mean:
   - Missing documentation (promote to CLAUDE.md or .github/copilot-instructions.md)
   - Missing automation (add to AGENTS.md)
   - An architectural problem (create a tech debt ticket)
## Simplify & Harden Feed
Use this workflow to absorb recurring patterns surfaced by the `simplify-and-harden` skill and turn them into durable prompt guidance.
### Ingestion Workflow
1. Read `simplify_and_harden.learning_loop.candidates` from the task summary.
2. For each candidate, use `pattern_key` as the stable dedupe key.
3. Search `.learnings/LEARNINGS.md` for an existing entry with that key:
   - `grep -n "Pattern-Key: <pattern_key>" .learnings/LEARNINGS.md`
4. If it exists:
   - Increment `Recurrence-Count`
   - Update `Last-Seen`
   - Add `See Also` links to related entries/tasks
5. If it does not exist:
   - Create a new `LRN-...` entry
   - Set `Source: simplify-and-harden`
   - Set `Pattern-Key`, `Recurrence-Count: 1`, and `First-Seen`/`Last-Seen`
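The dedupe lookup in step 3 reduces to a `grep` guard. A sketch with a hypothetical helper name:

```shell
# Sketch: has_pattern KEY FILE succeeds when an entry with that
# Pattern-Key already exists, so the caller updates it instead of
# creating a duplicate LRN entry.
has_pattern() {
  key=$1
  file=$2
  [ -f "$file" ] && grep -q -- "Pattern-Key: $key" "$file"
}
```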
### Promotion Rule (System Prompt Feedback)
Promote recurring patterns to agent context/system prompt files when all of the following hold:
- `Recurrence-Count >= 3`
- Seen in at least 2 different tasks
- All occurrences within a 30-day window
Promotion targets:
- `CLAUDE.md`
- `AGENTS.md`
- `.github/copilot-instructions.md`
- `SOUL.md` / `TOOLS.md` for OpenClaw workspace-level guidance, where applicable
Write promoted rules as short prevention rules (what to do before and while coding), not as long incident write-ups.
## Periodic Review
Review `.learnings/` at natural breakpoints:
### When to Review
- Before starting a major task
- After completing a feature
- When entering an area with prior learnings
- Weekly during active development
### Quick Status Check
```bash
# Count pending items
grep -h "Status\*\*: pending" .learnings/*.md | wc -l
# List pending high-priority items
grep -h -B5 "Priority\*\*: high" .learnings/*.md | grep "^## \["
# Find learnings for a specific area
grep -l "Area\*\*: backend" .learnings/*.md
```
### Review Actions
- Resolve items that have been fixed
- Promote learnings worth keeping
- Link related entries
- Escalate recurring issues
## Detection Triggers
Log automatically when you notice any of these:
**Corrections** (→ learning with the `correction` category):
- "No, that's not right..."
- "Actually, it should be..."
- "You're wrong about..."
- "That's outdated..."
**Feature Requests** (→ feature request):
- "Can you also..."
- "I wish you could..."
- "Is there a way to..."
- "Why can't you..."
**Knowledge Gaps** (→ learning with the `knowledge_gap` category):
- The user provides information you did not know
- Documentation you cited is outdated
- API behavior differs from your understanding
**Errors** (→ error entry):
- A command returns a non-zero exit code
- An exception or stack trace
- Unexpected output or behavior
- A timeout or connection failure
## Priority Guidelines
| Priority | When to Use |
|----------|-------------|
| `critical` | Blocks core functionality, risks data loss, security issues |
| `high` | Significant impact, affects common workflows, keeps recurring |
| `medium` | Moderate impact, a workaround exists |
| `low` | Minor inconvenience, edge cases, nice-to-have |
## Area Tags
Use these to filter learnings by codebase area:
| Area | Scope |
|------|-------|
| `frontend` | UI, components, client-side code |
| `backend` | API, services, server-side code |
| `infra` | CI/CD, deployment, Docker, cloud |
| `tests` | Test files, testing utilities, coverage |
| `docs` | Documentation, comments, READMEs |
| `config` | Configuration files, environment, settings |
## Best Practices
1. **Log immediately** - context is richest the moment an issue appears
2. **Be specific** - future agents need to understand quickly
3. **Include reproduction steps** - especially important for errors
4. **Link related files** - makes fixes easier
5. **Suggest concrete fixes** - do not just write "investigate"
6. **Use consistent categories** - makes filtering easy
7. **Promote aggressively** - when in doubt, add it to CLAUDE.md or .github/copilot-instructions.md
8. **Review regularly** - stale learnings lose value fast
## Gitignore Options
**Keep learnings local** (per developer):
```gitignore
.learnings/
```
**Track learnings in repo** (shared with the team):
Do not add it to `.gitignore`, so the learnings become shared knowledge.
**Hybrid** (track templates, ignore individual entries):
```gitignore
.learnings/*.md
!.learnings/.gitkeep
```
## Hook Integration
Enable automatic reminders via agent hooks. This is **opt-in**: you must configure the hooks explicitly.
### Quick Setup (Claude Code / Codex)
Create `.claude/settings.json` in your project:
```json
{
"hooks": {
"UserPromptSubmit": [{
"matcher": "",
"hooks": [{
"type": "command",
"command": "./skills/self-improvement/scripts/activator.sh"
}]
}]
}
}
```
This injects a learning-evaluation reminder after every prompt (adds roughly 50-100 tokens of overhead).
### Full Setup (with Error Detection)
```json
{
"hooks": {
"UserPromptSubmit": [{
"matcher": "",
"hooks": [{
"type": "command",
"command": "./skills/self-improvement/scripts/activator.sh"
}]
}],
"PostToolUse": [{
"matcher": "Bash",
"hooks": [{
"type": "command",
"command": "./skills/self-improvement/scripts/error-detector.sh"
}]
}]
}
}
```
### Available Hook Scripts
| Script | Hook Type | Purpose |
|--------|-----------|---------|
| `scripts/activator.sh` | UserPromptSubmit | Reminds you to review learnings after finishing a task |
| `scripts/error-detector.sh` | PostToolUse (Bash) | Fires when a command errors |
See `references/hooks-setup.md` for detailed configuration and troubleshooting.
## Automatic Skill Extraction
When a learning is valuable enough to evolve into a reusable skill, extract it with the provided helper.
### Skill Extraction Criteria
A learning is a good candidate for skill extraction when any of these hold:
| Criterion | Description |
|-----------|-------------|
| **Recurring** | Has 2+ `See Also` links to similar issues |
| **Verified** | `Status` is `resolved` and the fix is proven to work |
| **Non-obvious** | Required real debugging/investigation to discover |
| **Broadly applicable** | Not project-specific; useful across multiple codebases |
| **User-flagged** | The user explicitly says "save this as a skill" or similar |
### Extraction Workflow
1. **Identify candidate**: the learning meets the extraction criteria
2. **Run helper** (or create it manually):
```bash
./skills/self-improvement/scripts/extract-skill.sh skill-name --dry-run
./skills/self-improvement/scripts/extract-skill.sh skill-name
```
3. **Customize SKILL.md**: fill the template with the learning's content
4. **Update learning**: set the status to `promoted_to_skill` and add `Skill-Path`
5. **Verify**: read the skill in a fresh session to confirm it is self-contained
### Manual Extraction
If you prefer to create the skill by hand:
1. Create `skills/<skill-name>/SKILL.md`
2. Use the template in `assets/SKILL-TEMPLATE.md`
3. Follow the [Agent Skills spec](https://agentskills.io/specification):
   - Use YAML frontmatter with `name` and `description`
   - The `name` must match the folder name
   - Do not put a README.md inside the skill folder
### Extraction Detection Triggers
Watch for these signals; they usually mean a learning should be promoted to a skill:
**In conversation**
- "Save this as a skill"
- "I keep running into this"
- "This would be useful for other projects"
- "Remember this pattern"
**In learning entries**
- Multiple `See Also` links, indicating a recurring issue
- High priority and already resolved
- Category is `best_practice` with broad applicability
- The user gave clearly positive feedback on the solution
### Skill Quality Gates
Before extracting, confirm:
- [ ] The solution has been tested and works
- [ ] The description is clear without the original context
- [ ] Code examples are self-contained
- [ ] No project-specific hardcoded values remain
- [ ] The name follows skill naming conventions (lowercase, hyphens)
## Multi-Agent Support
This skill works across different AI coding agents, but the activation mechanism varies by agent.
### Claude Code
**Activation**: Hooks (UserPromptSubmit, PostToolUse)
**Setup**: Configure hooks in `.claude/settings.json`
**Detection**: Automatic via hook scripts
### Codex CLI
**Activation**: Hooks (same pattern as Claude Code)
**Setup**: Configure hooks in `.codex/settings.json`
**Detection**: Automatic via hook scripts
### GitHub Copilot
**Activation**: Manual (hooks not supported)
**Setup**: Add to `.github/copilot-instructions.md`:
```markdown
## Self-Improvement
After solving non-obvious issues, consider logging them to `.learnings/`:
1. Use the self-improvement skill's entry format
2. Link related entries with See Also
3. Promote high-value learnings to skills
You can also ask in chat: "Should I log this as a learning?"
```
**Detection**: Manual review at the end of a session
### OpenClaw
**Activation**: Workspace injection + inter-agent messaging
**Setup**: See the "OpenClaw Setup" section above
**Detection**: Via session tools and workspace files
### Agent-Agnostic Guidance
Regardless of the agent, apply self-improvement whenever you:
1. **Discover something non-obvious** - the solution was not immediately apparent
2. **Correct yourself** - your initial approach was wrong
3. **Learn project conventions** - you found undocumented patterns
4. **Hit unexpected errors** - especially when diagnosis took real effort
5. **Find better approaches** - you found a better path than the original plan
### Copilot Chat Integration
For Copilot users, this line can be added to prompts in relevant scenarios:
> After completing this task, evaluate if any learnings should be logged to `.learnings/` using the self-improvement skill format.
These quick prompts also work:
- "Log this to learnings"
- "Create a skill from this solution"
- "Check .learnings/ for related issues"

---
{
"ownerId": "kn70cjr952qdec1nx70zs6wefn7ynq2t",
"slug": "self-improving-agent",
"version": "3.0.5",
"publishedAt": 1773760428300
}

---
# Learnings
Records corrections, insights, and knowledge gaps encountered during development.
**Categories**: correction | insight | knowledge_gap | best_practice
**Areas**: frontend | backend | infra | tests | docs | config
**Statuses**: pending | in_progress | resolved | wont_fix | promoted | promoted_to_skill
## Status Definitions
| Status | Meaning |
|--------|---------|
| `pending` | Not yet addressed |
| `in_progress` | Being worked on |
| `resolved` | Issue fixed or knowledge absorbed |
| `wont_fix` | Decided not to address (reason noted in Resolution) |
| `promoted` | Promoted to CLAUDE.md, AGENTS.md, or copilot-instructions.md |
| `promoted_to_skill` | Extracted into a reusable skill |
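As the file grows, a quick tally by status helps triage. A sketch assuming the `**Status**:` field format used by entries in this file; it demos on a temporary sample (point the pipeline at `.learnings/LEARNINGS.md` in real use):

```shell
# Sample status lines; real entries carry many more fields.
cat > /tmp/learnings-tally.md << 'EOF'
**Status**: pending
**Status**: resolved
**Status**: pending
EOF

# Count entries per status, most frequent first.
grep '^\*\*Status\*\*:' /tmp/learnings-tally.md \
  | awk '{print $2}' \
  | sort | uniq -c | sort -rn
```

For the sample above this prints `2 pending` ahead of `1 resolved`.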
## Skill Extraction Fields
When a learning is promoted to a skill, add these fields:
```markdown
**Status**: promoted_to_skill
**Skill-Path**: skills/skill-name
```
Example:
```markdown
## [LRN-20250115-001] best_practice
**Logged**: 2025-01-15T10:00:00Z
**Priority**: high
**Status**: promoted_to_skill
**Skill-Path**: skills/docker-m1-fixes
**Area**: infra
### Summary
Docker build fails on Apple Silicon due to platform mismatch
...
```
---

# Skill Template
Templates for extracting skills from learnings. Copy and customize as needed.
---
## SKILL.md Template
```markdown
---
name: skill-name-here
description: "Concise statement of when and why to use this skill. Include trigger conditions."
---
# Skill Name
Brief introduction to the problem this skill solves and where it came from.
## Quick Reference
| Situation | Action |
|-----------|--------|
| [Trigger 1] | [Action 1] |
| [Trigger 2] | [Action 2] |
## Background
Explain why this knowledge matters, what problems it prevents, and the context from the original learning.
## Solution
### Step-by-Step
1. First step, with code or a command
2. Second step
3. Verification step
### Code Example
\`\`\`language
// Example code demonstrating the solution
\`\`\`
## Common Variations
- **Variation A**: Describe the variation and how to handle it
- **Variation B**: Describe the variation and how to handle it
## Gotchas
- Common caveat or pitfall #1
- Common caveat or pitfall #2
## Related
- Link to related documentation
- Link to related skills
## Source
Extracted from learning entry.
- **Learning ID**: LRN-YYYYMMDD-XXX
- **Original Category**: correction | insight | knowledge_gap | best_practice
- **Extraction Date**: YYYY-MM-DD
```
---
## Minimal Template
For simple skills that do not need many sections:
```markdown
---
name: skill-name-here
description: "What this skill does and when to use it."
---
# Skill Name
[One-sentence statement of the problem]
## Solution
[The solution, with code/commands]
## Source
- Learning ID: LRN-YYYYMMDD-XXX
```
---
## Template with Scripts
For skills that include executable helpers:
```markdown
---
name: skill-name-here
description: "What this skill does and when to use it."
---
# Skill Name
[Introduction]
## Quick Reference
| Command | Purpose |
|---------|---------|
| `./scripts/helper.sh` | [What it does] |
| `./scripts/validate.sh` | [What it does] |
## Usage
### Automated (Recommended)
\`\`\`bash
./skills/skill-name/scripts/helper.sh [args]
\`\`\`
### Manual Steps
1. First step
2. Second step
## Scripts
| Script | Description |
|--------|-------------|
| `scripts/helper.sh` | Main tool |
| `scripts/validate.sh` | Validation tool |
## Source
- Learning ID: LRN-YYYYMMDD-XXX
```
---
## Naming Conventions
- **Skill name**: use lowercase, with hyphens instead of spaces
- Good: `docker-m1-fixes`, `api-timeout-patterns`
- Bad: `Docker_M1_Fixes`, `APITimeoutPatterns`
- **Description**: start with an action and state the trigger
- Good: "Handles Docker build failures on Apple Silicon. Use when builds fail with platform mismatch."
- Bad: "Docker stuff"
- **Files**:
  - `SKILL.md` - required; the main documentation
  - `scripts/` - optional; executable code
  - `references/` - optional; detailed documentation
  - `assets/` - optional; templates
---
## Extraction Checklist
Before creating a skill from a learning:
- [ ] The learning is verified (status: resolved)
- [ ] The solution is broadly applicable (not a one-off)
- [ ] The content is complete (includes necessary context)
- [ ] The name follows conventions
- [ ] The description is concise but informative
- [ ] The Quick Reference table is directly actionable
- [ ] Code examples have been tested
- [ ] The source learning ID is recorded
After creating it:
- [ ] Update the original learning to `promoted_to_skill`
- [ ] Add `Skill-Path: skills/skill-name` to the learning metadata
- [ ] Read the skill in a fresh session to confirm it works

---
name: self-improvement
description: "Injects a self-improvement reminder during agent bootstrap"
metadata: {"openclaw":{"emoji":"🧠","events":["agent:bootstrap"]}}
---
# Self-Improvement Hook
Injects a reminder during agent bootstrap prompting the agent to check whether learnings should be logged.
## What It Does
- Fires on `agent:bootstrap` (before workspace files are injected)
- Adds a reminder block prompting a check of `.learnings/` for relevant entries
- Reminds the agent to log corrections, errors, and discoveries
## Configuration
No extra configuration needed. Enable it with:
```bash
openclaw hooks enable self-improvement
```

---
/**
* Self-Improvement Hook for OpenClaw
*
* Injects a reminder to evaluate learnings during agent bootstrap.
* Fires on agent:bootstrap event before workspace files are injected.
*/
const REMINDER_CONTENT = `
## Self-Improvement Reminder
After completing tasks, evaluate if any learnings should be captured:
**Log when:**
- User corrects you → \`.learnings/LEARNINGS.md\`
- Command/operation fails → \`.learnings/ERRORS.md\`
- User wants missing capability → \`.learnings/FEATURE_REQUESTS.md\`
- You discover your knowledge was wrong → \`.learnings/LEARNINGS.md\`
- You find a better approach → \`.learnings/LEARNINGS.md\`
**Promote when pattern is proven:**
- Behavioral patterns → \`SOUL.md\`
- Workflow improvements → \`AGENTS.md\`
- Tool gotchas → \`TOOLS.md\`
Keep entries simple: date, title, what happened, what to do differently.
`.trim();
const handler = async (event) => {
// Safety checks for event structure
if (!event || typeof event !== 'object') {
return;
}
// Only handle agent:bootstrap events
if (event.type !== 'agent' || event.action !== 'bootstrap') {
return;
}
// Safety check for context
if (!event.context || typeof event.context !== 'object') {
return;
}
// Inject the reminder as a virtual bootstrap file
// Check that bootstrapFiles is an array before pushing
if (Array.isArray(event.context.bootstrapFiles)) {
event.context.bootstrapFiles.push({
path: 'SELF_IMPROVEMENT_REMINDER.md',
content: REMINDER_CONTENT,
virtual: true,
});
}
};
module.exports = handler;
module.exports.default = handler;

---
/**
* Self-Improvement Hook for OpenClaw
*
* Injects a reminder to evaluate learnings during agent bootstrap.
* Fires on agent:bootstrap event before workspace files are injected.
*/
import type { HookHandler } from 'openclaw/hooks';
const REMINDER_CONTENT = `## Self-Improvement Reminder
After completing tasks, evaluate if any learnings should be captured:
**Log when:**
- User corrects you → \`.learnings/LEARNINGS.md\`
- Command/operation fails → \`.learnings/ERRORS.md\`
- User wants missing capability → \`.learnings/FEATURE_REQUESTS.md\`
- You discover your knowledge was wrong → \`.learnings/LEARNINGS.md\`
- You find a better approach → \`.learnings/LEARNINGS.md\`
**Promote when pattern is proven:**
- Behavioral patterns → \`SOUL.md\`
- Workflow improvements → \`AGENTS.md\`
- Tool gotchas → \`TOOLS.md\`
Keep entries simple: date, title, what happened, what to do differently.`;
const handler: HookHandler = async (event) => {
// Safety checks for event structure
if (!event || typeof event !== 'object') {
return;
}
// Only handle agent:bootstrap events
if (event.type !== 'agent' || event.action !== 'bootstrap') {
return;
}
// Safety check for context
if (!event.context || typeof event.context !== 'object') {
return;
}
// Skip sub-agent sessions to avoid bootstrap issues
// Sub-agents have sessionKey patterns like "agent:main:subagent:..."
const sessionKey = event.sessionKey || '';
if (sessionKey.includes(':subagent:')) {
return;
}
// Inject the reminder as a virtual bootstrap file
// Check that bootstrapFiles is an array before pushing
if (Array.isArray(event.context.bootstrapFiles)) {
event.context.bootstrapFiles.push({
path: 'SELF_IMPROVEMENT_REMINDER.md',
content: REMINDER_CONTENT,
virtual: true,
});
}
};
export default handler;

---
# Entry Examples
Complete, well-formed example entries with all fields filled in.
## Learning: Correction
```markdown
## [LRN-20250115-001] correction
**Logged**: 2025-01-15T10:30:00Z
**Priority**: high
**Status**: pending
**Area**: tests
### Summary
Wrongly assumed all pytest fixtures should use function scope
### Details
While writing test fixtures, I assumed every fixture should be function-scoped.
The user corrected me: function scope is the default, but this codebase's convention
is to use module scope for expensive resources like database connections,
to improve test performance.
### Suggested Action
When creating fixtures with expensive initialization (DB, network),
do not default to function scope; check the scope patterns of existing fixtures first.
### Metadata
- Source: user_feedback
- Related Files: tests/conftest.py
- Tags: pytest, testing, fixtures
---
```
## Learning: Knowledge Gap (Resolved)
```markdown
## [LRN-20250115-002] knowledge_gap
**Logged**: 2025-01-15T14:22:00Z
**Priority**: medium
**Status**: resolved
**Area**: config
### Summary
Project uses pnpm (not npm) as its package manager
### Details
Tried to run `npm install`, but the project actually uses pnpm workspaces.
The lockfile is `pnpm-lock.yaml`, not `package-lock.json`.
### Suggested Action
Before assuming npm, check for `pnpm-lock.yaml` or `pnpm-workspace.yaml`.
For this project, use `pnpm install`.
### Metadata
- Source: error
- Related Files: pnpm-lock.yaml, pnpm-workspace.yaml
- Tags: package-manager, pnpm, setup
### Resolution
- **Resolved**: 2025-01-15T14:30:00Z
- **Commit/PR**: N/A - knowledge update
- **Notes**: Added to CLAUDE.md for future reference
---
```
## Learning: Promoted to CLAUDE.md
```markdown
## [LRN-20250115-003] best_practice
**Logged**: 2025-01-15T16:00:00Z
**Priority**: high
**Status**: promoted
**Promoted**: CLAUDE.md
**Area**: backend
### Summary
API responses must echo the correlation ID from request headers
### Details
All API responses must echo back the request's `X-Correlation-ID` header.
This is required for distributed tracing; responses missing the header
break the observability pipeline.
### Suggested Action
Always include correlation ID passthrough in API handlers.
### Metadata
- Source: user_feedback
- Related Files: src/middleware/correlation.ts
- Tags: api, observability, tracing
---
```
## Learning: Promoted to AGENTS.md
```markdown
## [LRN-20250116-001] best_practice
**Logged**: 2025-01-16T09:00:00Z
**Priority**: high
**Status**: promoted
**Promoted**: AGENTS.md
**Area**: backend
### Summary
Regenerate the API client after modifying the OpenAPI spec
### Details
When changing API endpoints, the TypeScript client must be regenerated.
Forgetting this step causes type mismatches that only surface at runtime.
The generate script also runs validation.
### Suggested Action
Add to the agent workflow: run `pnpm run generate:api` after any API change.
### Metadata
- Source: error
- Related Files: openapi.yaml, src/client/api.ts
- Tags: api, codegen, typescript
---
```
## Error Entry
```markdown
## [ERR-20250115-A3F] docker_build
**Logged**: 2025-01-15T09:15:00Z
**Priority**: high
**Status**: pending
**Area**: infra
### Summary
Docker build fails on M1 Mac due to platform mismatch
### Error
\`\`\`
error: failed to solve: python:3.11-slim: no match for platform linux/arm64
\`\`\`
### Context
- Command: `docker build -t myapp .`
- Dockerfile uses `FROM python:3.11-slim`
- Running on Apple Silicon (M1/M2)
### Suggested Fix
Add a platform flag to the command: `docker build --platform linux/amd64 -t myapp .`
Or update the Dockerfile: `FROM --platform=linux/amd64 python:3.11-slim`
### Metadata
- Reproducible: yes
- Related Files: Dockerfile
---
```
## Error Entry: Recurring Issue
```markdown
## [ERR-20250120-B2C] api_timeout
**Logged**: 2025-01-20T11:30:00Z
**Priority**: critical
**Status**: pending
**Area**: backend
### Summary
Third-party payment API times out during checkout
### Error
\`\`\`
TimeoutError: Request to payments.example.com timed out after 30000ms
\`\`\`
### Context
- Command: POST /api/checkout
- Timeout is set to 30s
- More frequent during peak hours (midday, evening)
### Suggested Fix
Implement retries with exponential backoff and consider a circuit breaker pattern.
### Metadata
- Reproducible: yes (during peak hours)
- Related Files: src/services/payment.ts
- See Also: ERR-20250115-X1Y, ERR-20250118-Z3W
---
```
## Feature Request
```markdown
## [FEAT-20250115-001] export_to_csv
**Logged**: 2025-01-15T16:45:00Z
**Priority**: medium
**Status**: pending
**Area**: backend
### Requested Capability
Export analysis results to CSV format
### User Context
The user generates weekly reports and needs to share results with non-technical
stakeholders who view them in Excel. Currently they copy the output by hand.
### Complexity Estimate
simple
### Suggested Implementation
Add an `--output csv` flag to the analyze command, using the standard csv module.
The existing `--output json` pattern can be extended.
### Metadata
- Frequency: recurring
- Related Features: analyze command, json output
---
```
## Feature Request: Resolved
```markdown
## [FEAT-20250110-002] dark_mode
**Logged**: 2025-01-10T14:00:00Z
**Priority**: low
**Status**: resolved
**Area**: frontend
### Requested Capability
Dark mode support for the dashboard
### User Context
The user often works late at night and finds the light interface too harsh.
Several other users have informally requested this as well.
### Complexity Estimate
medium
### Suggested Implementation
Manage colors with CSS variables and add a toggle in user settings.
Also consider system preference detection.
### Metadata
- Frequency: recurring
- Related Features: user settings, theme system
### Resolution
- **Resolved**: 2025-01-18T16:00:00Z
- **Commit/PR**: #142
- **Notes**: Implemented system preference detection and a manual toggle
---
```
## Learning: Promoted to Skill
```markdown
## [LRN-20250118-001] best_practice
**Logged**: 2025-01-18T11:00:00Z
**Priority**: high
**Status**: promoted_to_skill
**Skill-Path**: skills/docker-m1-fixes
**Area**: infra
### Summary
Docker build fails on Apple Silicon due to platform mismatch
### Details
Docker builds on M1/M2 Macs fail when the base image has no ARM64 variant.
This is a common problem that affects many developers.
### Suggested Action
Add `--platform linux/amd64` to the docker build command,
or use `FROM --platform=linux/amd64` in the Dockerfile.
### Metadata
- Source: error
- Related Files: Dockerfile
- Tags: docker, arm64, m1, apple-silicon
- See Also: ERR-20250115-A3F, ERR-20250117-B2D
---
```
## Extracted Skill Example
When the learning above is extracted into a skill, it becomes:
**File**: `skills/docker-m1-fixes/SKILL.md`
```markdown
---
name: docker-m1-fixes
description: "Fixes Docker build failures on Apple Silicon (M1/M2). Use when docker build fails with platform mismatch errors."
---
# Docker M1 Fixes
Solutions for Docker build problems on Apple Silicon Macs.
## Quick Reference
| Error | Fix |
|-------|-----|
| `no match for platform linux/arm64` | Add `--platform linux/amd64` to the build |
| Image runs but crashes | Use emulation, or find an ARM-compatible base |
## The Problem
Many Docker base images have no ARM64 variants. When building on
Apple Silicon (M1/M2/M3), Docker tries to pull ARM64 images by default,
which causes platform mismatch errors when no such variant exists.
## Solutions
### Option 1: Build Flag (Recommended)
Add a platform flag to the build command:
\`\`\`bash
docker build --platform linux/amd64 -t myapp .
\`\`\`
### Option 2: Dockerfile Modification
Specify the platform explicitly in the FROM instruction:
\`\`\`dockerfile
FROM --platform=linux/amd64 python:3.11-slim
\`\`\`
### Option 3: Docker Compose
Add a platform to the service:
\`\`\`yaml
services:
app:
platform: linux/amd64
build: .
\`\`\`
## Trade-offs
| Approach | Pros | Cons |
|----------|------|------|
| Build flag | No file changes needed | Must remember the flag every time |
| Dockerfile | Explicit and versioned | Affects all builds |
| Compose | Convenient for development | Requires compose |
## Performance Note
Running AMD64 images on ARM64 relies on Rosetta 2 emulation.
This is usually fine for development but can be slower. For production,
prefer ARM-native alternatives where possible.
## Source
- Learning ID: LRN-20250118-001
- Category: best_practice
- Extraction Date: 2025-01-18
```

---
# Hook Setup Guide
Configure automatic self-improvement triggers for AI coding agents.
## Overview
The hooks inject reminders at key moments to encourage proactive learning capture:
- **UserPromptSubmit**: reminds after each prompt to consider logging learnings
- **PostToolUse (Bash)**: error detection when a command fails
## Claude Code Setup
### Option 1: Project-Level Configuration
Create `.claude/settings.json` in the project root:
```json
{
"hooks": {
"UserPromptSubmit": [
{
"matcher": "",
"hooks": [
{
"type": "command",
"command": "./skills/self-improvement/scripts/activator.sh"
}
]
}
],
"PostToolUse": [
{
"matcher": "Bash",
"hooks": [
{
"type": "command",
"command": "./skills/self-improvement/scripts/error-detector.sh"
}
]
}
]
}
}
```
### Option 2: User-Level Configuration
Add the following to `~/.claude/settings.json` to enable it globally:
```json
{
"hooks": {
"UserPromptSubmit": [
{
"matcher": "",
"hooks": [
{
"type": "command",
"command": "~/.claude/skills/self-improvement/scripts/activator.sh"
}
]
}
]
}
}
```
### Minimal Setup (Activator Only)
To reduce overhead, use only the UserPromptSubmit hook:
```json
{
"hooks": {
"UserPromptSubmit": [
{
"matcher": "",
"hooks": [
{
"type": "command",
"command": "./skills/self-improvement/scripts/activator.sh"
}
]
}
]
}
}
```
## Codex CLI Setup
Codex uses the same hook system as Claude Code. Write this to `.codex/settings.json`:
```json
{
"hooks": {
"UserPromptSubmit": [
{
"matcher": "",
"hooks": [
{
"type": "command",
"command": "./skills/self-improvement/scripts/activator.sh"
}
]
}
]
}
}
```
## GitHub Copilot Setup
Copilot does not support hooks directly. Instead, add guidance to `.github/copilot-instructions.md`:
```markdown
## Self-Improvement
After completing a task that involved:
- Debugging non-obvious issues
- Discovering workarounds
- Learning project-specific patterns
- Resolving unexpected errors
Consider logging the learning to `.learnings/` using the self-improvement skill format.
For high-value learnings useful to other sessions, consider skill extraction.
```
## Verification
### Test Activator Hook
1. Enable the hook configuration
2. Start a new Claude Code session
3. Send any prompt
4. Confirm that `<self-improvement-reminder>` appears in the context
### Test Error Detector Hook
1. Enable the PostToolUse hook for Bash
2. Run a command that fails, e.g. `ls /nonexistent/path`
3. Confirm the `<error-detected>` reminder appears
### Dry Run Extract Script
```bash
./skills/self-improvement/scripts/extract-skill.sh test-skill --dry-run
```
The expected output shows the skill scaffold that would be created.
## Troubleshooting
### Hook Not Triggering
1. **Check script permissions**: `chmod +x scripts/*.sh`
2. **Verify the path**: use an absolute path, or a path relative to the project root
3. **Check the settings location**: confirm whether the config is project-level or user-level
4. **Restart the session**: hooks load at session startup
### Permission Denied
```bash
chmod +x ./skills/self-improvement/scripts/activator.sh
chmod +x ./skills/self-improvement/scripts/error-detector.sh
chmod +x ./skills/self-improvement/scripts/extract-skill.sh
```
### Script Not Found
If using a relative path, make sure the working directory is correct; otherwise use an absolute path:
```json
{
"command": "/absolute/path/to/skills/self-improvement/scripts/activator.sh"
}
```
### Too Much Overhead
If the activator feels too intrusive:
1. **Use the minimal setup**: enable only UserPromptSubmit, not PostToolUse
2. **Add a matcher filter** so it triggers only on specific prompts:
```json
{
"matcher": "fix|debug|error|issue",
"hooks": [...]
}
```
## Hook Output Budget
The activator is designed for lightweight output:
- **Target**: roughly 50-100 tokens per trigger
- **Content**: a structured reminder, not lengthy instructions
- **Format**: XML tags for easy parsing
If you need to cut overhead further, edit `activator.sh` directly to shorten the output.
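To sanity-check the budget, you can approximate tokens from word count (a common rule of thumb is roughly 0.75 words per token; treat the result as a ballpark, not a tokenizer). The sketch below demos on a short sample reminder; in real use, capture the activator's output instead:

```shell
# Rough token estimate for a hook's output: tokens ≈ words / 0.75.
estimate_tokens() {
  local words
  words=$(wc -w < "$1")
  echo $(( words * 4 / 3 ))
}

# In real use: ./skills/self-improvement/scripts/activator.sh > /tmp/reminder.txt
cat > /tmp/reminder.txt << 'EOF'
<self-improvement-reminder>
After completing this task, evaluate if extractable knowledge emerged.
</self-improvement-reminder>
EOF

estimate_tokens /tmp/reminder.txt
```

If the estimate lands well above the 50-100 token target, trim the reminder text.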
## Security Considerations
- Hook scripts run with the same permissions as Claude Code
- The scripts only emit text; they do not modify files or run extra commands
- The error detector reads the `CLAUDE_TOOL_OUTPUT` environment variable
- All scripts are opt-in (they run only if explicitly configured)
## Disabling Hooks
To disable hooks temporarily without deleting the configuration:
1. **Empty the hook map in settings** (note that JSON does not allow `//` comments):
```json
{
  "hooks": {}
}
```
2. **Or delete the settings file**: hooks do not run when no configuration is present

---
# OpenClaw Integration
Complete setup and usage guide for integrating the self-improvement skill with OpenClaw.
## Overview
OpenClaw uses workspace-based prompt injection combined with event-driven hooks. Context is injected from workspace files at session startup, and hooks fire on lifecycle events.
## Workspace Structure
```
~/.openclaw/
├── workspace/              # Working directory
│   ├── AGENTS.md           # Multi-agent coordination patterns
│   ├── SOUL.md             # Behavioral guidance and personality
│   ├── TOOLS.md            # Tool capabilities and gotchas
│   ├── MEMORY.md           # Long-term memory (main session only)
│   └── memory/             # Daily memory files
│       └── YYYY-MM-DD.md
├── skills/                 # Installed skills
│   └── <skill-name>/
│       └── SKILL.md
└── hooks/                  # Custom hooks
    └── <hook-name>/
        ├── HOOK.md
        └── handler.ts
```
## Quick Setup
### 1. Install the Skill
```bash
clawdhub install self-improving-agent
```
Or copy it manually:
```bash
cp -r self-improving-agent ~/.openclaw/skills/
```
### 2. Install the Hook (Optional)
Copy the hook into OpenClaw's hooks directory:
```bash
cp -r hooks/openclaw ~/.openclaw/hooks/self-improvement
```
Enable the hook:
```bash
openclaw hooks enable self-improvement
```
### 3. Create Learning Files
Create the `.learnings/` directory in the workspace:
```bash
mkdir -p ~/.openclaw/workspace/.learnings
```
Or create it inside the skill directory:
```bash
mkdir -p ~/.openclaw/skills/self-improving-agent/.learnings
```
## Injected Prompt Files
### AGENTS.md
Purpose: multi-agent workflows and delegation patterns.
```markdown
# Agent Coordination
## Delegation Rules
- Use an explore agent for open-ended codebase questions
- Spawn sub-agents for long-running tasks
- Use sessions_send for cross-session communication
## Session Handoff
When delegating to another session:
1. Provide full context in the handoff message
2. Include relevant file paths
3. Specify the expected output format
```
### SOUL.md
Purpose: behavioral guidance and communication style.
```markdown
# Behavioral Guidelines
## Communication Style
- Be direct and concise
- Avoid unnecessary caveats and disclaimers
- Use technical language appropriate to the context
## Error Handling
- Acknowledge mistakes promptly
- Provide the corrected information immediately
- Log significant errors to learnings
```
### TOOLS.md
Purpose: tool capabilities, integration gotchas, and local configuration.
```markdown
# Tool Knowledge
## Self-Improvement Skill
Log learnings to `.learnings/` to support continuous improvement.
## Local Tools
- Record tool-specific gotchas here
- Document authentication requirements
- Track integration quirks
```
## Learning Workflow
### Capturing Learnings
1. **In-session**: log to `.learnings/` as usual
2. **Cross-session**: promote to workspace files
### Promotion Decision Tree
```
Is the learning project-specific?
├── Yes → Keep in .learnings/
└── No → Is it behavioral/style-related?
├── Yes → Promote to SOUL.md
└── No → Is it tool-related?
├── Yes → Promote to TOOLS.md
└── No → Promote to AGENTS.md (workflow)
```
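The decision tree above can be sketched as a tiny helper; the target names mirror this guide, and classification is driven entirely by the flags you pass (deciding those flags is still a judgment call):

```shell
# Map a learning's traits to a promotion target, mirroring the decision tree.
# Usage: promote_target <project_specific> <behavioral> <tool_related> (true/false each)
promote_target() {
  local project_specific="$1" behavioral="$2" tool_related="$3"
  if [ "$project_specific" = "true" ]; then
    echo ".learnings/"        # project-specific: keep it local
  elif [ "$behavioral" = "true" ]; then
    echo "SOUL.md"            # behavioral/style-related
  elif [ "$tool_related" = "true" ]; then
    echo "TOOLS.md"           # tool-related
  else
    echo "AGENTS.md"          # workflow
  fi
}

promote_target false false true   # prints TOOLS.md
```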
### Promotion Format Examples
**From a learning**:
> Git push to GitHub fails without configured auth and triggers a desktop prompt
**To TOOLS.md**:
```markdown
## Git
- Do not push until auth is confirmed configured
- Use `gh auth status` to check GitHub CLI auth
```
## Inter-Agent Communication
OpenClaw provides these tools for cross-session communication:
### sessions_list
List active and recent sessions:
```
sessions_list(activeMinutes=30, messageLimit=3)
```
### sessions_history
Read another session's transcript:
```
sessions_history(sessionKey="session-id", limit=50)
```
### sessions_send
Send a message to another session:
```
sessions_send(sessionKey="session-id", message="Learning: API requires X-Custom-Header")
```
### sessions_spawn
Spawn a background sub-agent:
```
sessions_spawn(task="Research X and report back", label="research")
```
## Available Hook Events
| Event | When It Fires |
|-------|---------------|
| `agent:bootstrap` | Before workspace files are injected |
| `command:new` | When the `/new` command runs |
| `command:reset` | When the `/reset` command runs |
| `command:stop` | When the `/stop` command runs |
| `gateway:startup` | When the gateway starts |
## Detection Triggers
### Standard Triggers
- User corrections ("No, that's wrong...")
- Command failures (non-zero exit codes)
- API errors
- Knowledge gaps
### OpenClaw-Specific Triggers
| Trigger | Action |
|---------|--------|
| Tool call error | Log to TOOLS.md with the tool name |
| Session handoff confusion | Log to AGENTS.md with the delegation pattern |
| Model behavior surprise | Log to SOUL.md with expected vs. actual |
| Skill issue | Log to .learnings/ or report upstream |
## Verification
Check that the hook is registered:
```bash
openclaw hooks list
```
Check that the skill is loaded:
```bash
openclaw status
```
## Troubleshooting
### Hook not firing
1. Make sure hooks are enabled in the configuration
2. Restart the gateway after changing the configuration
3. Check the gateway logs for errors
### Learnings not persisting
1. Confirm the `.learnings/` directory exists
2. Check file permissions
3. Confirm the workspace path is configured correctly
### Skill not loading
1. Check that the skill is in the skills directory
2. Confirm the SKILL.md frontmatter is valid
3. Run `openclaw status` to see loaded skills

---
#!/bin/bash
# Self-Improvement Activator Hook
# Triggers on UserPromptSubmit to remind Claude about learning capture
# Keep output minimal (~50-100 tokens) to minimize overhead
set -e
# Output reminder as system context
cat << 'EOF'
<self-improvement-reminder>
After completing this task, evaluate if extractable knowledge emerged:
- Non-obvious solution discovered through investigation?
- Workaround for unexpected behavior?
- Project-specific pattern learned?
- Error required debugging to resolve?
If yes: Log to .learnings/ using the self-improvement skill format.
If high-value (recurring, broadly applicable): Consider skill extraction.
</self-improvement-reminder>
EOF

---
#!/bin/bash
# Self-Improvement Error Detector Hook
# Triggers on PostToolUse for Bash to detect command failures
# Reads CLAUDE_TOOL_OUTPUT environment variable
set -e
# Check if tool output indicates an error
# CLAUDE_TOOL_OUTPUT contains the result of the tool execution
OUTPUT="${CLAUDE_TOOL_OUTPUT:-}"
# Patterns indicating errors (matching is case-sensitive, so common case variants are listed explicitly)
ERROR_PATTERNS=(
"error:"
"Error:"
"ERROR:"
"failed"
"FAILED"
"command not found"
"No such file"
"Permission denied"
"fatal:"
"Exception"
"Traceback"
"npm ERR!"
"ModuleNotFoundError"
"SyntaxError"
"TypeError"
"exit code"
"non-zero"
)
# Check if output contains any error pattern
contains_error=false
for pattern in "${ERROR_PATTERNS[@]}"; do
if [[ "$OUTPUT" == *"$pattern"* ]]; then
contains_error=true
break
fi
done
# Only output reminder if error detected
if [ "$contains_error" = true ]; then
cat << 'EOF'
<error-detected>
A command error was detected. Consider logging this to .learnings/ERRORS.md if:
- The error was unexpected or non-obvious
- It required investigation to resolve
- It might recur in similar contexts
- The solution could benefit future sessions
Use the self-improvement skill format: [ERR-YYYYMMDD-XXX]
</error-detected>
EOF
fi

---
#!/bin/bash
# Skill Extraction Helper
# Creates a new skill from a learning entry
# Usage: ./extract-skill.sh <skill-name> [--dry-run]
set -e
# Configuration
SKILLS_DIR="./skills"
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
usage() {
cat << EOF
Usage: $(basename "$0") <skill-name> [options]
Create a new skill from a learning entry.
Arguments:
skill-name Name of the skill (lowercase, hyphens for spaces)
Options:
--dry-run Show what would be created without creating files
--output-dir Relative output directory under current path (default: ./skills)
-h, --help Show this help message
Examples:
$(basename "$0") docker-m1-fixes
$(basename "$0") api-timeout-patterns --dry-run
$(basename "$0") pnpm-setup --output-dir ./skills/custom
The skill will be created in: \$SKILLS_DIR/<skill-name>/
EOF
}
log_info() {
echo -e "${GREEN}[INFO]${NC} $1"
}
log_warn() {
echo -e "${YELLOW}[WARN]${NC} $1"
}
log_error() {
echo -e "${RED}[ERROR]${NC} $1" >&2
}
# Parse arguments
SKILL_NAME=""
DRY_RUN=false
while [[ $# -gt 0 ]]; do
case $1 in
--dry-run)
DRY_RUN=true
shift
;;
--output-dir)
if [ -z "${2:-}" ] || [[ "${2:-}" == -* ]]; then
log_error "--output-dir requires a relative path argument"
usage
exit 1
fi
SKILLS_DIR="$2"
shift 2
;;
-h|--help)
usage
exit 0
;;
-*)
log_error "Unknown option: $1"
usage
exit 1
;;
*)
if [ -z "$SKILL_NAME" ]; then
SKILL_NAME="$1"
else
log_error "Unexpected argument: $1"
usage
exit 1
fi
shift
;;
esac
done
# Validate skill name
if [ -z "$SKILL_NAME" ]; then
log_error "Skill name is required"
usage
exit 1
fi
# Validate skill name format (lowercase, hyphens, no spaces)
if ! [[ "$SKILL_NAME" =~ ^[a-z0-9]+(-[a-z0-9]+)*$ ]]; then
log_error "Invalid skill name format. Use lowercase letters, numbers, and hyphens only."
log_error "Examples: 'docker-fixes', 'api-patterns', 'pnpm-setup'"
exit 1
fi
# Validate output path to avoid writes outside current workspace.
if [[ "$SKILLS_DIR" = /* ]]; then
log_error "Output directory must be a relative path under the current directory."
exit 1
fi
if [[ "$SKILLS_DIR" =~ (^|/)\.\.(/|$) ]]; then
log_error "Output directory cannot include '..' path segments."
exit 1
fi
SKILLS_DIR="${SKILLS_DIR#./}"
SKILLS_DIR="./$SKILLS_DIR"
SKILL_PATH="$SKILLS_DIR/$SKILL_NAME"
# Check if skill already exists
if [ -d "$SKILL_PATH" ] && [ "$DRY_RUN" = false ]; then
log_error "Skill already exists: $SKILL_PATH"
log_error "Use a different name or remove the existing skill first."
exit 1
fi
# Dry run output
if [ "$DRY_RUN" = true ]; then
log_info "Dry run - would create:"
echo " $SKILL_PATH/"
echo " $SKILL_PATH/SKILL.md"
echo ""
echo "Template content would be:"
echo "---"
cat << TEMPLATE
name: $SKILL_NAME
description: "[TODO: Add a concise description of what this skill does and when to use it]"
---
# $(echo "$SKILL_NAME" | sed 's/-/ /g' | awk '{for(i=1;i<=NF;i++) $i=toupper(substr($i,1,1)) tolower(substr($i,2))}1')
[TODO: Brief introduction explaining the skill's purpose]
## Quick Reference
| Situation | Action |
|-----------|--------|
| [Trigger condition] | [What to do] |
## Usage
[TODO: Detailed usage instructions]
## Examples
[TODO: Add concrete examples]
## Source Learning
This skill was extracted from a learning entry.
- Learning ID: [TODO: Add original learning ID]
- Original File: .learnings/LEARNINGS.md
TEMPLATE
echo "---"
exit 0
fi
# Create skill directory structure
log_info "Creating skill: $SKILL_NAME"
mkdir -p "$SKILL_PATH"
# Create SKILL.md from template
cat > "$SKILL_PATH/SKILL.md" << TEMPLATE
---
name: $SKILL_NAME
description: "[TODO: Add a concise description of what this skill does and when to use it]"
---
# $(echo "$SKILL_NAME" | sed 's/-/ /g' | awk '{for(i=1;i<=NF;i++) $i=toupper(substr($i,1,1)) tolower(substr($i,2))}1')
[TODO: Brief introduction explaining the skill's purpose]
## Quick Reference
| Situation | Action |
|-----------|--------|
| [Trigger condition] | [What to do] |
## Usage
[TODO: Detailed usage instructions]
## Examples
[TODO: Add concrete examples]
## Source Learning
This skill was extracted from a learning entry.
- Learning ID: [TODO: Add original learning ID]
- Original File: .learnings/LEARNINGS.md
TEMPLATE
log_info "Created: $SKILL_PATH/SKILL.md"
# Suggest next steps
echo ""
log_info "Skill scaffold created successfully!"
echo ""
echo "Next steps:"
echo " 1. Edit $SKILL_PATH/SKILL.md"
echo " 2. Fill in the TODO sections with content from your learning"
echo " 3. Add references/ folder if you have detailed documentation"
echo " 4. Add scripts/ folder if you have executable code"
echo " 5. Update the original learning entry with:"
echo " **Status**: promoted_to_skill"
echo " **Skill-Path**: skills/$SKILL_NAME"