Stop Writing More Rules: Use Hooks to Enforce Your CLAUDE.md

Critical CLAUDE.md rules fail because Claude prioritizes helpfulness. The solution is to use session hooks to automate enforcement, making rules impossible to skip.

Gala Smith & AI Research Desk · 18h ago · 3 min read · 283 views · AI-Generated
Sources: dev.to, Medium, Hacker News (multi-source)

The Problem: Rules Without Enforcement

If you've ever watched Claude Code skip your session-start checklist or start coding before you said "GO," you've experienced the core limitation of CLAUDE.md. As one developer with over 500 lines of rules reported, the AI's instinct to be helpful and answer your immediate question often overrides carefully written protocols. This isn't a bug; it's a fundamental tension between instruction-following and responsiveness. Adding more detail, bolding text, or listing past violations yields only marginal improvement. For rules where failure is costly, "usually followed" isn't good enough.

The Fix: Automate Enforcement with Hooks

Claude Code's hook system is the enforcement layer CLAUDE.md lacks. Hooks are scripts (commonly Python) that run automatically at specific points in a session, such as session start or prompt submission. Because they fire before Claude processes your input, they can inject critical data and reminders directly into its context, making rule compliance part of the workflow rather than a suggestion.
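In practice, a hook is wired up in a Claude Code settings file rather than in CLAUDE.md itself. A minimal sketch of such a registration (the script paths are illustrative, and the exact schema may differ across versions):

```json
{
  "hooks": {
    "SessionStart": [
      {
        "hooks": [
          { "type": "command", "command": "python3 ~/.claude/hooks/session-start.py" }
        ]
      }
    ],
    "UserPromptSubmit": [
      {
        "hooks": [
          { "type": "command", "command": "python3 ~/.claude/hooks/user-prompt-submit.py" }
        ]
      }
    ]
  }
}
```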

Enforce a Session Start Protocol

The Failed Rule in CLAUDE.md:

When a session starts, complete ALL steps:
1. Search the brain
2. Read PROJECT_TRACKER.md
3. Review unfinished items...

Claude often skips this to answer your first message.

The Hook Enforcement (session-start.py):
This hook runs before Claude sees your chat. It can:

  • Read NEXT_SESSION.md and load the last session's notes.
  • Query a local database for unfinished items.
  • Inject all this data and a verification checklist template via additionalContext.

Now, Claude receives the protocol requirements as part of its initial context, making it reliable.
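A minimal sketch of such a hook, assuming the `NEXT_SESSION.md` filename from the article and the `hookSpecificOutput`/`additionalContext` JSON envelope that Claude Code hooks use for output; a local-database query for unfinished items would slot in alongside the file read:

```python
#!/usr/bin/env python3
"""session-start hook sketch: inject last session's notes before Claude sees the chat."""
import json
import sys
from pathlib import Path

CHECKLIST = (
    "SESSION START PROTOCOL:\n"
    "1. Search the brain\n"
    "2. Read PROJECT_TRACKER.md\n"
    "3. Review the unfinished items below\n"
)

def build_context(notes: str) -> dict:
    """Wrap the checklist template and session notes in the hook output envelope."""
    return {
        "hookSpecificOutput": {
            "hookEventName": "SessionStart",
            "additionalContext": CHECKLIST + "\nLast session notes:\n" + notes,
        }
    }

if __name__ == "__main__":
    notes_file = Path("NEXT_SESSION.md")
    notes = notes_file.read_text() if notes_file.exists() else "(no notes found)"
    # Claude Code reads this JSON from stdout and merges additionalContext
    # into the model's starting context.
    json.dump(build_context(notes), sys.stdout)
```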

Prevent Unauthorized Coding

The Failed Rule in CLAUDE.md:

NEVER write, edit, or modify code unless Mike says "GO" with a specific step.

The impulse to fix a problem overrides this instruction.

The Hook Enforcement (user-prompt-submit.py):
This hook runs on every message. It can check your prompt for patterns. If you ask "thoughts?" without an explicit "GO," the hook injects a reminder:

STOP. Present the plan and wait for GO before coding.

This constant, automated nudge makes violations far less likely.
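A sketch of the pattern check, assuming a whole-word "GO" as the approval signal and the same JSON output envelope (a real version might match more patterns or consult session state):

```python
#!/usr/bin/env python3
"""user-prompt-submit hook sketch: nudge Claude to wait for an explicit GO."""
import json
import re
import sys

REMINDER = "STOP. Present the plan and wait for GO before coding."

def check_prompt(prompt: str) -> dict:
    """Inject a reminder unless the prompt contains an explicit, whole-word GO."""
    if re.search(r"\bGO\b", prompt):
        return {}  # approval present; inject nothing
    return {
        "hookSpecificOutput": {
            "hookEventName": "UserPromptSubmit",
            "additionalContext": REMINDER,
        }
    }

if __name__ == "__main__" and not sys.stdin.isatty():
    # Claude Code passes hook input (including the prompt) as JSON on stdin.
    raw = sys.stdin.read()
    payload = json.loads(raw) if raw.strip() else {}
    json.dump(check_prompt(payload.get("prompt", "")), sys.stdout)
```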

Never Lose Context to Compaction

The Failed Rule in CLAUDE.md:

Before compaction, save all context to the database.

Claude can't follow this because it doesn't know when auto-compaction will fire.

The Hook Enforcement (pre-compact.py & post-compact.py):

  • pre-compact.py fires automatically before compaction starts, capturing the full conversation to a SQLite database.
  • post-compact.py fires once the session resumes after compaction, re-injecting the project summary and recent decisions.

The hooks handle preservation without relying on Claude's memory.
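A sketch of the pre-compact side, assuming the hook input JSON carries a session id and a path to the transcript file (the `transcript_path` and `session_id` field names are assumptions; the `sessions.db` filename is illustrative):

```python
#!/usr/bin/env python3
"""pre-compact hook sketch: snapshot the conversation transcript into SQLite."""
import json
import sqlite3
import sys
import time
from pathlib import Path

def save_transcript(db_path: str, session_id: str, transcript: str) -> None:
    """Append the full transcript text to a snapshots table."""
    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS snapshots "
        "(session_id TEXT, saved_at REAL, transcript TEXT)"
    )
    con.execute(
        "INSERT INTO snapshots VALUES (?, ?, ?)",
        (session_id, time.time(), transcript),
    )
    con.commit()
    con.close()

if __name__ == "__main__" and not sys.stdin.isatty():
    raw = sys.stdin.read()
    payload = json.loads(raw) if raw.strip() else {}
    if payload:
        # Assumed input fields: transcript_path points at the conversation log.
        path = payload.get("transcript_path", "")
        transcript = Path(path).read_text() if path and Path(path).exists() else ""
        save_transcript("sessions.db", payload.get("session_id", "unknown"), transcript)
```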

The Pattern: If a Rule Matters, It Needs a Hook

The clear pattern from 1,300+ sessions is this: use CLAUDE.md for broad preferences and style guides where occasional slips are okay. For any rule where failure has a high cost (breaking protocol, wasting hours on unapproved code, losing critical context), back it up with an automated hook.

Stop trying to write the perfect rule. Start writing the hook that enforces it.


AI Analysis

Claude Code users should immediately audit their `CLAUDE.md` and identify their 2-3 most critical, often-violated rules. For each, build a hook. Start with the session-start hook to guarantee your opening protocol runs every time. Use the user-prompt-submit hook to enforce permission gates (like the "GO" command) and to auto-search your memory database before every response. This shifts your mental model from "hoping Claude follows instructions" to "designing systems that execute them." The `/hooks` command inside Claude Code shows the available hook events and lets you wire scripts to them; registrations live in `~/.claude/settings.json` (or a project's `.claude/settings.json`).
