How many tweets are sitting in your X bookmarks that you've never actually opened?
Mine used to have over a hundred. I'd see something good while scrolling, hit save, and think "I'll read it tonight." By the time tonight came, I'd already bookmarked ten more. The old ones never got a turn. Skip cleaning for a few days and you've got dozens. A week of neglect and you're over a hundred.
Then I switched tactics: stop hoarding, digest as you go. When I find a good tweet, I send the URL to Claude Code and get results in 30 seconds. Is it worth retweeting? Does it matter for what I'm building? Do I need to dig deeper? One conversation answers all of it.
After more than a month, my bookmarks are at zero. Every piece of content I consume actually turns into action—tweets written, strategy parameters tweaked, even portfolio allocation shifted.
Here's how I do it.
The problem is friction.
Reading a solid English tweet seriously takes 3-5 minutes. Understanding the main point, then figuring out if it applies to you—another few minutes. If you want to write a thoughtful quote-tweet in response, add another few minutes for thinking and writing. One piece of content, start to finish: 10-15 minutes.
Save 10 a day and you're looking at two hours. Nobody has that.
So bookmarks become a graveyard: you think saving it means you won't miss out. Really it means you'll never look at it again.
The approach is straightforward: hand the URL to Claude Code and have it extract what matters, match it to what you're working on, and tell you whether it's worth engaging with.
Extract value means: what's the core claim here, and are there actual numbers or methods? Match to action means: does this affect something I'm currently building? Judge engagement means: is this tweet-worthy, and from what angle?
I've set up a trigger in my Claude Code rules: whenever I paste a URL or screenshot, it automatically breaks down the idea along three dimensions and gives structured advice. The format looks something like this:
Value: Core insight in one sentence
Action: Specific next step
Engagement: Quote / reply / skip, with the angle to use
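In CLAUDE.md terms, a minimal version of that rule might look like this (the wording below is illustrative, not a verbatim copy of my rules):

```markdown
## External content handler

When I paste a URL or a screenshot, fetch the content and respond with:

- Value: the core insight, in one sentence
- Action: the specific next step for my current projects, if any
- Engagement: quote / reply / skip, plus the angle to use
```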
You might ask: Claude Code can't browse the web on its own, so how does it read tweets?
MCP tools. MCP (Model Context Protocol) is Claude Code's plugin system. Once installed, it can call external tools to fetch and read web content. For tweets specifically, I've tested several approaches and landed on a stable workflow:
Regular web pages are easiest. Jina Reader handles them—free, fast, you paste a URL and get the full text back. Blog posts, articles, documentation, most things work on the first try.
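The Jina Reader flow is simple enough to sketch: you prefix the target URL with its endpoint and get clean text back. The helper below is a minimal sketch of that pattern, not part of my actual setup:

```python
import urllib.request

JINA_READER = "https://r.jina.ai/"

def reader_url(url: str) -> str:
    # Jina Reader proxies any public page: prepend its endpoint, get plain text back
    return JINA_READER + url

def fetch_readable(url: str, timeout: int = 30) -> str:
    """Fetch a page as LLM-friendly plain text via Jina Reader's free endpoint."""
    with urllib.request.urlopen(reader_url(url), timeout=timeout) as resp:
        return resp.read().decode("utf-8")
```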
X tweets are trickier since you need to be logged in to see the full content. I use a browser helper that lets Claude Code read through my local Chrome browser. You're already logged into X in Chrome during normal use, so the tool borrows that session and reads the full tweet as if you clicked the link yourself.
Occasionally a tool fails. Claude Code automatically falls back to the next option. I've defined a simple priority list in my rules: try the fastest first, fail over to the next one if needed. Something usually works.
It's all transparent to you. Just paste the URL and Claude Code picks the right tool to use.
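The priority list amounts to a simple fallback chain. A sketch of the logic (the fetchers and their order are whatever you configure, nothing fixed):

```python
from typing import Callable

def fetch_with_fallback(url: str, fetchers: list[Callable[[str], str]]) -> str:
    """Try each fetcher in priority order; return the first non-empty result."""
    errors: list[Exception] = []
    for fetch in fetchers:  # fastest first, e.g. Jina Reader, then the browser helper
        try:
            text = fetch(url)
            if text.strip():
                return text
        except Exception as exc:
            errors.append(exc)  # remember why it failed, move on to the next tool
    raise RuntimeError(f"all fetchers failed for {url}: {errors}")
```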
Here's an actual screenshot of me digesting a tweet. I paste a link, Claude Code fetches the content, then breaks it down across value, action, and engagement:
This tweet was about Anthropic and OpenAI's shipping cadence. After reading it, Claude Code didn't just summarize—it connected it to my own work. The "tools build the next version of the tools" flywheel is exactly what I'm doing by building on Claude Code. That's context-aware analysis. It's not generic, it's specific to me.
I built an automation that would open bookmarks, fetch them all, run batch analysis, spit out a report. Two weeks later, I killed it.
Batch results you won't actually read. Give yourself 15 pieces of analysis at once and it's like getting 15 emails—you'll skim and close the tab.
One at a time works. See something good, digest it now, decide now. Worth tweeting? Write it now. Relevant to a project? Save it now. Worthless? Skip it, don't rent space in your head.
When it comes to absorbing information, speed isn't the bottleneck. Acting on it immediately is.
The 30-second digest is possible because Claude Code knows what I'm doing.
It's read my project list, this week's goals, my current positions. So when a tweet mentions "market-making strategies," it doesn't say generically "here's an article about market making." It says "you could apply this to your H43 strategy."
That's the real value of CLAUDE.md. Not for teaching AI to code, but for giving it your context.
I've defined an "external content handler" in my rules: when you paste a URL or screenshot, it automatically evaluates from four angles—surface value, relevance to current positions, connection to active projects, and executable next steps—then gives you a recommendation.
The AI doesn't need to be a domain expert. It just needs to know who you are and what you're building. Then it can quickly tell you if a piece of information matters to you.
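Mechanically, "knowing who you are" just means your context gets folded into every analysis. A rough sketch of the idea (the field names and example values here are hypothetical; in practice the context comes from CLAUDE.md):

```python
def build_digest_prompt(content: str, context: dict[str, str]) -> str:
    """Combine fetched content with personal context so the analysis is specific, not generic."""
    context_lines = "\n".join(f"- {key}: {value}" for key, value in context.items())
    return (
        "Analyze this content against my situation.\n"
        f"My context:\n{context_lines}\n\n"
        f"Content:\n{content}\n\n"
        "Reply with: Value / Action / Engagement."
    )

# hard-coded here for illustration only
prompt = build_digest_prompt(
    "Thread on market-making spreads...",
    {"projects": "H43 market-making strategy", "goal": "ship weekly"},
)
```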
Every piece of content needs a destination after processing. I have four:
Worth retweeting? Write the quote-tweet and post it now. Useful for a project? Add it to notes or create a TODO. Needs deeper study? Mark it "research later" and batch those for the weekend. Not relevant? Forget it.
Never leave anything in "I read it but did nothing" limbo. Either it becomes action or you let it go.
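The four destinations boil down to a priority check. A sketch (the field names are illustrative; in practice the AI's structured reply supplies them):

```python
from enum import Enum

class Destination(Enum):
    QUOTE_TWEET = "write and post the quote-tweet now"
    PROJECT_TODO = "add to notes / create a TODO"
    RESEARCH_LATER = "batch for the weekend"
    DROP = "let it go"

def route(analysis: dict) -> Destination:
    # checked in priority order: engagement beats filing, filing beats deferring
    if analysis.get("tweet_worthy"):
        return Destination.QUOTE_TWEET
    if analysis.get("project_relevant"):
        return Destination.PROJECT_TODO
    if analysis.get("needs_research"):
        return Destination.RESEARCH_LATER
    return Destination.DROP  # no limbo: anything unmatched is dropped
```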
I digest a dozen or more pieces of external content a day, sometimes several dozen. Most get filtered out; a few become quote-tweets or project TODOs.
My X account went from under 10k to 15k followers in that time. Content digestion wasn't the whole story, but steady quote-tweet output was definitely part of the growth.
Bookmarks went from over 100 backed up to zero, and I've never let them pile up again.
You don't need a complicated system:
No automation, no scheduled tasks, no databases. Just conversation.
You can add automation later if you want. First just build the habit: read it, digest it, do something about it.
Earlier I said I killed batch processing. Later I tried again, with one difference: instead of throwing 15 analyses at you, the AI pre-screens them and only surfaces what's worth your time.
Here's a real example of batch digestion. Ten bookmarks come in, the AI flags 2 as worth quoting, 4 for you to read directly, 1 for deeper research, and tells you to skip the other 3. Each has a clear next step. You're not sifting through a report.
The key is the AI does the filtering, you only see the ones that matter. It's not another pile of analysis for you to sort through.
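The pre-screening step is just a grouping pass over the batch results before anything reaches you. A sketch, assuming each analysis carries a `recommendation` field (the field name is hypothetical):

```python
def prescreen(analyses: list[dict]) -> dict[str, list[dict]]:
    """Bucket batch analyses by recommendation; 'skip' items never surface."""
    buckets: dict[str, list[dict]] = {"quote": [], "read": [], "research": [], "skip": []}
    for analysis in analyses:
        buckets.setdefault(analysis.get("recommendation", "skip"), []).append(analysis)
    return buckets

def surfaced(analyses: list[dict]) -> list[dict]:
    # only items that need your attention make it out of the batch
    return [a for a in analyses if a.get("recommendation", "skip") != "skip"]
```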
Processing one at a time? The steps above are enough. Want to clear your whole bookmark backlog at once? Batch pre-screening works too.
About the author: Leo (@runes_leo), AI × Crypto independent builder. Runs quantitative trading on Polymarket and uses Claude Code to build data analysis and automated trading systems. More hands-on insights: leolabs.me