136K Stars and Still Half-Useful: I Tore Apart Everything Claude Code

Published 2026-04-08 · Chinese version published first as an X Article

I spent three months building my own Claude Code system (61 skills, 15 agents, running daily). Last week I tore through the Everything Claude Code repo — 136K stars, 20K forks, 156 skills, 48 agents, 72 commands. This is what's actually worth keeping.

What's Actually in There

The repo stacks six layers of configuration (see the architecture figure below).

The numbers look impressive. But zoom in and you see the pattern: it's organized by programming language.

Of the 48 agents, 10 are language-specific code reviewers (one each for TypeScript, Python, Go, Java, Kotlin, C++, C#, Flutter, Rust, Swift). Another 8 are language-specific build fixers. That's 37% of the agent count. The 156 skills follow the same pattern — Django TDD, Python patterns, Go patterns, Kotlin patterns, Swift patterns, Laravel patterns. If you only write TypeScript, 70% of the skills won't touch your codebase.

Quality isn't the issue. The Django TDD skill actually teaches test-driven development. But breadth comes at a cost: complexity.

[Figure: Six-Layer Repo Architecture]

Five Designs Worth Studying

1. SOUL.md — Five Core Principles

Most Claude Code configs start with rule lists. Everything Claude Code leads with five core principles, set out in SOUL.md.

These aren't decorative. Every agent and skill is designed to follow them. The planner uses Opus (most expensive, most capable) because planning is the critical path. Code review uses Sonnet because it's high-frequency work and needs cost balance. Model selection aligns with principles.
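
That alignment can be sketched as a small routing table: expensive models on the critical path, cheaper ones for high-frequency or background work. The Opus/Sonnet/Haiku tiering comes from the article; the task names and the routing function are illustrative assumptions, not the repo's actual code:

```javascript
// Hypothetical principle-driven model routing. The task names and the
// fallback choice are assumptions; the Opus/Sonnet tiering mirrors the
// repo's stated design.
const MODEL_ROUTES = {
  planner: "opus",              // planning is the critical path: pay for capability
  "code-review": "sonnet",      // high-frequency work: balance cost and quality
  "instinct-analysis": "haiku", // background pattern analysis: cheapest tier
};

function modelFor(task) {
  // Unrouted tasks fall back to the mid-tier model.
  return MODEL_ROUTES[task] || "sonnet";
}
```

Keeping the cost/capability trade-off in one table makes the principle reviewable in a single place instead of scattered across agent definitions.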

My own system has rules but no explicit principle layer. After reading SOUL.md, that's something worth copying — principles first, rules second, keeps the whole thing coherent.

2. Instinct Learning System (v2.1)

This is Everything Claude Code's most ambitious piece.

Typical configs are static: you write rules, Claude follows them. Instinct tries something else — letting Claude learn:

  1. Hooks capture behavior before and after each tool call
  2. A background process (a cheap Haiku model) analyzes your patterns
  3. It generates "atomic instincts": behavior judgments with confidence scores (0.3 to 0.9)
  4. Once enough accumulate, they auto-cluster into new skills

[Figure: Instinct Auto-Learning System v2.1]

Example: if you rewrite class-based code to functional style five times in a row, the system generates the instinct prefer-functional-style with confidence 0.7, scoped to the current project. Next time you code, it nudges you toward functional patterns.
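
As a sketch, such an instinct could be a small record whose confidence climbs with repeated observations. The field names and the +0.1 increment are my assumptions; they just make the numbers above concrete (start near 0.3, reach roughly 0.7 after five sightings, cap at 0.9):

```javascript
// Hypothetical atomic-instinct record; field names are assumptions,
// not the repo's actual schema.
function makeInstinct(id, project) {
  return { id, project, confidence: 0.3, observations: 1 };
}

// Each repeated observation nudges confidence up, capped at 0.9
// (the top of the 0.3-0.9 range the system reportedly uses).
function recordObservation(instinct) {
  return {
    ...instinct,
    observations: instinct.observations + 1,
    confidence: Math.min(0.9, instinct.confidence + 0.1),
  };
}
```

Five sightings of the same rewrite take confidence from 0.3 to roughly 0.7, matching the example; whether the real system updates scores this way is exactly the kind of thing the missing validation data would answer.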

v2.1 adds project isolation — React patterns don't bleed into Python projects. You can export and share your instincts too.

Good idea in theory. But I'm skeptical about execution. Using Haiku for post-hoc analysis means overhead on every tool call. And does a "0.7 confidence" number actually mean anything? I haven't seen validation data.

My approach is different: patterns.md is manual, trigger-based. I only record when I get corrected. 428 entries, none auto-generated. Higher signal, lower noise, fully verifiable.

Automatic learning covers more ground but may accumulate noise. Manual recording is precise but depends on being corrected — you miss patterns that never triggered a correction.

3. Hook Automation Framework

Everything Claude Code covers 8 event types with 20+ scripts. Three deserve attention:

block-no-verify.js — intercepts git commit --no-verify. People (and AI) skip pre-commit hooks when lazy. This blocks it at the Claude layer. Simple. Effective.
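
The mechanism is easy to see in miniature. Here is a minimal sketch of that kind of blocker, assuming Claude Code's documented PreToolUse hook contract (tool input arrives as JSON on stdin; exit code 2 blocks the call and feeds stderr back to the model). This is my reconstruction, not the repo's actual script:

```javascript
#!/usr/bin/env node
// PreToolUse hook sketch: block `git commit --no-verify`.
// Assumes the hook receives {"tool_name": ..., "tool_input": {"command": ...}}
// on stdin and that exit code 2 means "block, show stderr to the model".

function shouldBlock(command) {
  return /\bgit\s+commit\b/.test(command) && /--no-verify\b/.test(command);
}

let raw = "";
process.stdin.on("data", (chunk) => (raw += chunk));
process.stdin.on("end", () => {
  const input = JSON.parse(raw || "{}");
  const command = (input.tool_input && input.tool_input.command) || "";
  if (shouldBlock(command)) {
    console.error("Blocked: do not skip pre-commit hooks with --no-verify.");
    process.exit(2); // block the tool call
  }
  process.exit(0); // allow
});
```

The whole enforcement is one regex and an exit code, which is why "simple, effective" is the right verdict.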

suggest-compact.js — counts tool calls, auto-suggests /compact at 50. This is more reliable than writing "remind me to compact" in rules. Hooks are program-level guarantees. Rules are AI-level suggestions. AI forgets.

pre-bash-commit-quality.js — runs lint and secret detection before commit.

There's also a three-tier strictness system (minimal/standard/strict). Don't want to be nagged by hooks? Go minimal. Perfectionist? Go strict.

4. Selective Installation

The real problem: 156 skills means 70% are irrelevant to you.

Everything Claude Code solves this with manifest-driven installation. install-modules.json categorizes everything by cost (light/medium/heavy), stability (stable/beta), and platform. Install just rules-core + hooks-runtime, skip all the language-specific skills.

Much better than "clone the whole repo and delete what you don't need."
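
The idea can be sketched with a hypothetical slice of the manifest. The field names here are my guess at the shape, not the actual install-modules.json schema:

```javascript
// Hypothetical manifest entries; the real install-modules.json schema may differ.
const manifest = [
  { name: "rules-core", cost: "light", stability: "stable", platforms: ["claude-code", "cursor"] },
  { name: "hooks-runtime", cost: "medium", stability: "stable", platforms: ["claude-code"] },
  { name: "instinct-learning", cost: "heavy", stability: "beta", platforms: ["claude-code"] },
];

const COST_RANK = { light: 0, medium: 1, heavy: 2 };

// Select modules for a platform, up to a maximum cost tier, optionally
// restricted to stable modules.
function selectModules(entries, { platform, maxCost, stableOnly }) {
  return entries
    .filter((m) => m.platforms.includes(platform))
    .filter((m) => COST_RANK[m.cost] <= COST_RANK[maxCost])
    .filter((m) => !stableOnly || m.stability === "stable")
    .map((m) => m.name);
}
```

With `{ platform: "claude-code", maxCost: "medium", stableOnly: true }`, this keeps just rules-core and hooks-runtime, which is exactly the minimal install the article recommends.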

5. Multi-Platform Compatibility

Everything Claude Code isn't just for Claude Code. It supports Cursor, Codex, OpenCode, Antigravity. Same agent definitions, adapted per platform. Claude Code gets all 48 agents. OpenCode gets 12.

If you jump between AI coding tools, one config beats re-configuring each one.

Three Things Nobody Talks About

Over-Engineering Tax

156 skills, 48 agents, 72 commands. Maintaining these is itself a project. The CONTRIBUTING.md has 100+ sections.

For individual users, full installation means hundreds of files in .claude/ you don't understand. When a hook fails, do you know where to look? When an agent gives bad advice, can you tell if it's the agent or your code?

You've installed a system you don't own. When it breaks, it's harder to fix than if you'd built lighter.

Hook Attack Surface

Every hook is a Node.js script running before/after tool calls. Installing Everything Claude Code means you're running 20+ scripts on your machine.

Most are harmless (counting, reminders, saving). But the mechanism itself has surface area. A malicious fork could inject code. Dependency chains could get compromised. This isn't just a config problem — it's code and secrets at risk.

Scan scripts/hooks/ before installing. Ten minutes of reading beats a potential leak. The risk isn't unique to Everything Claude Code; it applies to any hook-based config.

The Claude Code Team Is Catching Up

Claude Code itself moves fast. Native subagents launched. Native memory and skills are improving. A lot of what Everything Claude Code does — session saves, agent routing, skill definition — was necessary when those didn't exist. Now the official platform is closing the gap.

The 72 commands are already migrating to skill format because native Claude Code skills outperform custom commands.

This "getting lapped by official features" will happen more. When you pick Everything Claude Code, decide: am I using an actively maintained framework, or am I adding a middleware layer that'll be obsolete in six months?

What I Actually Changed

A few concrete moves:

Added failure tracking to session saves. Everything Claude Code's save-session template includes "What Did NOT Work." I wasn't tracking failed approaches, only successes. Started doing that the next day. Small change, big payoff for next time.

Verified project isolation in patterns.md. Everything Claude Code's Instinct v2.1 emphasizes isolation. My patterns.md already splits across 10 files by topic (workflow, data, debugging, content, etc.). Natural isolation. But I'll add project-level indexing later.

Skipped the compact hook. suggest-compact.js is smarter than rules-based reminders. I evaluated the ROI — my current rules already catch when to compact. I'll implement it once those rules stop working.

Recommendations

How to Use Everything Claude Code at Each Stage

Just starting with Claude Code: don't install Everything Claude Code. Run naked first. Your pain points are your first rules.

You have your own CLAUDE.md and want to reference others: grab three things from Everything Claude Code: read the common rules in rules/common/, borrow the session-save template structure, try one language-specific patterns skill. Skip the rest.

Heavy user with your own system: read Everything Claude Code as reference docs. Focus on Instinct and Hook design — not necessarily to copy, but to understand the thinking. SOUL.md's "principles first" approach is worth taking seriously.

Everyone: scan scripts/hooks/ before installing. Twenty scripts running on your machine warrant ten minutes of review.

136K stars means a lot of people are figuring out how to work with AI coding tools. Everything Claude Code gives a solid reference. Pick what you need. Full installation usually wastes effort. Selective adoption usually pays off.


Leo (@runes_leo) is an AI × Crypto independent builder. He runs quantitative strategies on Polymarket using Claude Code for data pipelines and trading automation. More: leolabs.me