hoeem (@hooeem) put out "I want to become a Claude architect (full course)" — 7.56M views, 56K saves. A complete dissection of Anthropic's Claude Certified Architect exam outline.
I've been shipping with Claude Code for six months — prediction market quant trading, on-chain data analysis, the whole stack. Here's my take on the five exam domains: what actually matters in daily use and what stays theoretical.
Agent architecture takes up nearly 30%, but ironically it's the easiest part to "talk about on paper." MCP and Context Management, though lower-weighted, are where you actually hit the most landmines in real work.
The exam covers coordinator-subagent patterns and Agent SDK orchestration mechanics.
I use subagents daily, but my setup is dead simple. In my CLAUDE.md, different task types route to different models:
Retrieval/exploration → haiku (fast, cheap)
Content/code review → sonnet (sufficient)
Strategy/finance-related → opus (can't save here)
Main session assigns tasks, each does its thing. One finds files, another writes tweets, another runs strategy reports — no multi-layer orchestration needed.
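The routing above can be sketched as a tiny lookup, which is really all it is. This is a hypothetical illustration of the CLAUDE.md rules, not any SDK API; the model names are shorthand, so check Anthropic's docs for current identifiers.

```python
# Hypothetical task-to-model router mirroring the CLAUDE.md rules above.
# Model names are shorthand placeholders, not exact API identifiers.
ROUTING = {
    "retrieval": "claude-haiku",   # fast, cheap exploration
    "review": "claude-sonnet",     # content/code review, sufficient
    "strategy": "claude-opus",     # finance-critical work, can't save here
}

def pick_model(task_type: str) -> str:
    """Return the model for a task type, defaulting to the cheapest."""
    return ROUTING.get(task_type, ROUTING["retrieval"])
```

The point is that the whole "architecture" fits in a dozen lines of config, not an orchestration diagram.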
I've tried complex agent collaboration setups. The debugging overhead kills any gains. Five agents passing context around, one fails and the whole chain breaks. A single agent going end-to-end is more stable. The exam teaches multi-agent orchestration design, but real work taught me: picking the right model matters far more than architecture diagrams.
Server scoping, environment variables, project-level vs. user-level config — the exam covers these and they're foundational.
But the real difficulty isn't on the outline. I've integrated 10+ MCP servers, and each had its own gotchas. Two examples:
Twitter's MCP server has API rate limits, and I run two X accounts (main and alt) — reads go through the alt to protect the main account, interactions must use main. This "same server, multiple identities" scenario? Not on the exam.
Playwright's MCP server — cookie injection is flaky, and login states conflict across servers. Log into site A with Playwright, then let Puppeteer hit the same domain, and the session corrupts. Fix: give each server its own isolated browser profile.
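A minimal sketch of that fix, assuming Playwright's Python API: derive a separate profile directory per server name, and launch a persistent context against it so cookies and localStorage never leak between servers. The profile root path is my own convention, not anything Playwright mandates.

```python
from pathlib import Path

def profile_dir_for(server_name: str, root: str = "~/.browser-profiles") -> Path:
    """Each browser-automation server gets its own profile directory,
    so login state never leaks between servers hitting the same domain."""
    return Path(root).expanduser() / server_name

def open_isolated_context(server_name: str):
    # Deferred import: only needed when actually launching a browser.
    from playwright.sync_api import sync_playwright
    profile = profile_dir_for(server_name)
    profile.mkdir(parents=True, exist_ok=True)
    p = sync_playwright().start()
    # launch_persistent_context stores cookies/localStorage under `profile`,
    # isolating this server's session from every other server's.
    return p.chromium.launch_persistent_context(
        user_data_dir=str(profile), headless=True
    )
```

Once "playwright" and "puppeteer" resolve to different directories, the cross-server session corruption disappears.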
My rule: use community servers for standard needs, write your own for edge cases. Saves time, but don't skip the judgment calls.
CLAUDE.md hierarchy, rules directory, slash commands, skills frontmatter — the exam covers these as basics.
But CLAUDE.md becomes something more as you use it. My system looks like:
Global CLAUDE.md → user identity, delivery standards, collaboration preferences
rules/ directory → behaviors, capture rules, memory-flush rules (auto-loaded)
Project-level CLAUDE.md → business context, strategy parameters
memory/ directory → cross-session state, daily progress, pitfall records
None of this was designed up front — I discovered it through pain. I maintain an SSOT (single source of truth) ownership table:
Strategy state → only PROJECT_CONTEXT.md
Daily progress → only today.md
Portfolio data → only portfolio.md
Why so strict? I once wrote the same number in three files, changed one, forgot the other two, then spent hours debugging phantom inconsistencies.
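The ownership table is mechanical enough to enforce with a script. Here's a hypothetical checker in that spirit — the file names mirror the table above, but the key strings and the function itself are illustrative, not part of my actual repo:

```python
# Hypothetical SSOT checker: each fact key may live in exactly one file.
# File names mirror the ownership table above; key strings are illustrative.
OWNERS = {
    "strategy_state": "PROJECT_CONTEXT.md",
    "daily_progress": "today.md",
    "portfolio": "portfolio.md",
}

def ssot_violations(files: dict[str, str]) -> list[tuple[str, str]]:
    """Given {filename: contents}, return (key, offending_file) pairs
    where a key shows up outside its designated owner file."""
    bad = []
    for key, owner in OWNERS.items():
        for name, text in files.items():
            if name != owner and key in text:
                bad.append((key, name))
    return bad
```

Run something like this before committing and the "same number in three files" bug can't happen silently.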
The config framework is on GitHub (claude-workflow) if you want to see what this looks like in practice.
The exam asks how to write skills frontmatter. It doesn't ask how to make your rule system evolve with your usage habits.
Few-shot, JSON schema, tool_use — learn them, move on.
What's more useful is understanding Claude's behavioral patterns. Vague instructions? It over-interprets. Give it clear role definitions and success criteria, waste drops dramatically. My CLAUDE.md has a "forbidden phrases" list:
Forbidden: "I fixed it, you run it" / "should be fine" / "might pass" / "theoretically correct"
Output quality jumped immediately — Claude stops giving hand-wavy sign-offs and starts producing verification evidence.
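The forbidden-phrases check is simple enough to run as a mechanical scan over any draft reply. A minimal sketch, with the phrase list taken from my CLAUDE.md (the function name is illustrative):

```python
# Scan a draft reply for hand-wavy sign-offs before accepting it.
# The phrase list comes from the forbidden-phrases rule above.
FORBIDDEN = [
    "I fixed it, you run it",
    "should be fine",
    "might pass",
    "theoretically correct",
]

def hand_wavy(reply: str) -> list[str]:
    """Return the forbidden phrases found in a reply (case-insensitive)."""
    low = reply.lower()
    return [p for p in FORBIDDEN if p.lower() in low]
```

Anything this flags gets bounced back with "show me the verification evidence" instead of being accepted.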
I also built a source-check pass: after writing content, auto-scan factual claims and label them — ✅ sourced, 🎨 rhetoric, ❌ unverified. This kind of defensive thinking isn't exam material, but I use it daily.
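The labeling logic in my real setup runs as a Claude pass, but its shape is roughly this hypothetical rule of thumb: a claim with an attached source is sourced, a bare number is suspect, everything else is rhetoric. Entirely a sketch of the idea, not the actual implementation:

```python
import re

def label_claim(claim: str, sources: list[str]) -> str:
    """Rough triage of one factual claim (a sketch of the real pass,
    which is an LLM judgment call, not a regex)."""
    if any(src in claim for src in sources):
        return "✅ sourced"
    if re.search(r"\d", claim):  # a number with no source attached = suspect
        return "❌ unverified"
    return "🎨 rhetoric"
```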
The exam covers theory. The pain is in the details.
When to compress context? My threshold: 15 conversation rounds or 30 tool calls. After compression, recovery order matters:
Step 1: query memory library (qmd search) → find pitfall records and patterns
Step 2: read daily work log (today.md) → know where we left off
Step 3: re-read project docs (PROJECT_CONTEXT.md) → restore business context
Order matters. Reading project docs first wastes context on known background, leaving less room for what actually changed.
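The trigger and the recovery order above reduce to a few lines. A sketch using my thresholds (the constants and names are my workflow conventions, nothing official):

```python
# Compression trigger and recovery order from my workflow; thresholds
# and file names are my own conventions, not Claude Code defaults.
ROUND_LIMIT, TOOL_CALL_LIMIT = 15, 30

def should_compress(rounds: int, tool_calls: int) -> bool:
    """Compress once either budget is hit, whichever comes first."""
    return rounds >= ROUND_LIMIT or tool_calls >= TOOL_CALL_LIMIT

# Cheapest, most specific context first; broad background last.
RECOVERY_ORDER = [
    "memory library (qmd search)",  # pitfall records and patterns
    "today.md",                     # where we left off
    "PROJECT_CONTEXT.md",           # business context
]
```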
Subagent failures? Config errors: fix directly. Logic errors: roll back. Three consecutive failures: stop and document — no brute-forcing a 4th attempt. This rule is hardcoded in my behaviors.md.
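The three-strike rule is the easiest of these to express as code. A minimal sketch of what behaviors.md encodes in prose (the function and log format are illustrative):

```python
# Three-strike rule from behaviors.md, sketched as a retry wrapper:
# retry at most three times, then document and stop - never a 4th attempt.
MAX_ATTEMPTS = 3

def run_with_three_strikes(task, log: list):
    for attempt in range(1, MAX_ATTEMPTS + 1):
        try:
            return task()
        except Exception as exc:
            log.append(f"attempt {attempt} failed: {exc}")
    log.append("3 consecutive failures: documented, not retried")
    return None
```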
hoeem organized six practice scenarios: customer service agent, code generation, multi-agent research, dev tools, CI/CD pipeline, structured data extraction. Not official exam questions — his suggested learning path.
Code generation I use daily. CI/CD powers my autonomous research agents — claude -p non-interactive mode, auto-runs at 3 AM, I read the report over coffee. Data extraction drives my Polymarket chain analysis — pull trade records from API, calculate PnL, identify whale behavior patterns.
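The 3 AM run is just `claude -p` (print mode: answer once, exit, no interactive session) wrapped in a scheduler. A sketch of the invocation, assuming the Claude Code CLI is on PATH; the prompt and report handling are illustrative:

```python
import subprocess

def nightly_cmd(prompt: str) -> list[str]:
    """Build the non-interactive invocation: `claude -p` prints the
    response and exits, which is exactly what a cron job needs."""
    return ["claude", "-p", prompt]

def run_nightly(prompt: str) -> str:
    # Real run (requires the Claude Code CLI installed and on PATH);
    # stdout is the report I read over coffee the next morning.
    out = subprocess.run(nightly_cmd(prompt), capture_output=True, text=True)
    return out.stdout
```

Cron (or any scheduler) fires this at 03:00 and writes the returned text to a report file.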
Exam scenarios are clean. Real life means accounts getting banned, APIs changing without notice, data formats shifting every few days.
Currently limited to Claude Partner Network members, $99 per attempt. Partner Network is free to join but requires being "an organization bringing Claude to market" — individual devs can't sign up yet.
The knowledge framework is genuinely worth studying, especially MCP and context management — many people use Claude Code for months without knowing these exist. But the cert proves you understand Anthropic's ideal architecture. Whether you can ship a product with Claude Code is a different question.
Building one project end-to-end beats cramming any outline.
Fastest path to learn: have Claude generate a study plan from these five domains and walk you through it daily. Learn Claude with Claude — two weeks and you'll know what matters.
Reference: I want to become a Claude architect (full course) - @hooeem
About the author: Leo (@runes_leo), AI × Crypto independent builder. Ships prediction market quant trading and data analysis with Claude Code daily. Writes code, documents pitfalls, occasionally shares what actually works on X.