• Claude Hooks: Stop Burning Tokens on Irrelevant Context

    CLAUDE.md implements static context injection. You maintain knowledge across projects—debugging methodologies, code patterns, security practices. Standard practice: dump everything into ~/.claude/CLAUDE.md with @-includes.
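
    Something like this, where each @-include pulls the entire file into context (the file names here are hypothetical):

      # ~/.claude/CLAUDE.md
      @~/.claude/docs/debugging.md
      @~/.claude/docs/code-patterns.md
      @~/.claude/docs/security.md
      @~/.claude/docs/git-workflow.md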

    Except then Claude injects everything as system context, every session. Every token counts against your context window.

    Alternatively, you could keep CLAUDE.md more minimal and remember to manually tell it to load memories. Perhaps stored in @~/.claude/docs/. Maybe you piece together some slash commands. Not bad…
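
    For instance, Claude Code treats markdown files in ~/.claude/commands/ as custom slash commands. A minimal sketch of the load-on-demand idea (the command name and docs path are hypothetical):

      # ~/.claude/commands/load-debugging.md
      Read ~/.claude/docs/debugging.md and apply that debugging
      methodology for the rest of this session.

    Now /load-debugging pulls the file in only when you ask for it.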

    Ope. You forget. Claude gets insufferably dumb, faster. You remember halfway through your session, and context matters most exactly when it’s missing. Look, I have ADHD (like 5% of adults), so “just remember to load the right context” isn’t a solution. It’s a setup for failure. And no matter who you are, it’s an invitation to more uneven results.

  • Automating Away Claude's Bad Habits with Hooks

    When AI agents like Claude Code write code, they often leave trailing whitespace for no reason. Tabs or spaces? Doesn’t matter. It’s bad.

    [Screenshot: Claude leaving trailing whitespace]

    Whyyyyy?

    LLMs generate trailing whitespace because they produce text token by token without visual feedback. They ‘learn’ from training data containing inconsistent whitespace patterns.

    Telling Claude what to do isn’t enough

    I used to have something like this in my CLAUDE.md:

      NEVER EVER leave trailing whitespace at the end of lines.

    But as with many things added as context in CLAUDE.md, Claude Code can be very inconsistent about following them. Does that happen as the context window fills up? I don’t know. I want something more deterministic here, though.
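
    Hooks are that deterministic something. Here’s a minimal sketch of the idea, assuming a PostToolUse hook registered in ~/.claude/settings.json, and assuming the hook receives its payload as JSON on stdin with the edited file’s path at tool_input.file_path (check the hooks docs for your version’s exact event names and schema):

      {
        "hooks": {
          "PostToolUse": [
            {
              "matcher": "Edit|MultiEdit|Write",
              "hooks": [
                {
                  "type": "command",
                  "command": "python3 ~/.claude/hooks/strip_trailing_whitespace.py"
                }
              ]
            }
          ]
        }
      }

    And the script it points at (~/.claude/hooks/strip_trailing_whitespace.py, a name I made up):

      #!/usr/bin/env python3
      """PostToolUse hook: strip trailing whitespace from the file Claude just edited."""
      import json
      import sys
      from pathlib import Path

      # Assumed payload shape: Claude Code sends hook input as JSON on stdin,
      # with the edited file's path at tool_input.file_path for Edit/Write tools.
      payload = json.load(sys.stdin)
      file_path = payload.get("tool_input", {}).get("file_path")

      if file_path and Path(file_path).is_file():
          path = Path(file_path)
          original = path.read_text()
          # Trim whitespace from the end of every line, keeping the final
          # newline if the file had one.
          cleaned = "\n".join(line.rstrip() for line in original.splitlines())
          if original.endswith("\n"):
              cleaned += "\n"
          if cleaned != original:
              path.write_text(cleaned)

    Unlike a plea in CLAUDE.md, this runs after every edit, whether the context window is empty or stuffed. That’s the appeal of hooks: deterministic cleanup instead of hopeful instructions.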