Claude Code Planning Mode Posture Is Off Balance

In planning mode, Claude Code defaults to action, and in most cases this is wrong!

  • People rarely provide enough information in a single shot (especially at a terminal prompt!) to allow an LLM to deeply infer a complete plan.*
  • LLMs are not yet strong enough to correctly infer all of a user’s intent, even when given a “pretty good” context.

Thus:

The posture and bias of Claude Code’s Planning Mode should be to *expect* the user to need to provide more detail.

Here’s what Claude Code’s planning mode looks like today:

  Would you like to proceed?
> 1. Yes, and auto-accept edits
  2. Yes, and manually approve edits
  3. No, keep planning

Here’s what the prompt ought to be:

  How should we proceed? 
> 1. Correct or clarify parts of our plan
  2. Let me move forward with the plan we have, with some wiggle room to modify it a bit to stay unblocked
  3. Let me move forward with the plan, but if I think something needs to be changed, stop and require your approval
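The key behavioral difference in this proposed menu is that an empty or unrecognized response keeps planning rather than committing to action. A minimal sketch of that dispatch behavior (the option names and mapping here are hypothetical illustrations, not Claude Code’s actual implementation):

```python
# Hypothetical sketch of the proposed prompt flow: option 1
# ("refine the plan") is the default, so pressing Enter at the
# menu continues planning instead of triggering edits.

OPTIONS = {
    "1": "refine",  # correct or clarify parts of the plan (default)
    "2": "auto",    # proceed, with wiggle room to adapt and stay unblocked
    "3": "gated",   # proceed, but pause for approval on any change
}

def next_action(choice: str) -> str:
    """Map the user's menu choice to an action, defaulting to 'refine'."""
    return OPTIONS.get(choice.strip(), "refine")
```

With this shape, any empty or unexpected input falls through to further planning, so the agent only acts when the user explicitly opts in.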

This insight came while working through how to apply Claude Code and Anthropic’s capable Sonnet and Opus models to a problem faced by my colleague Stas Lyakhov at Eclypsium.

Stas was reverse engineering a batch of NAND storage error correction (a non-trivial effort), and watching him help the LLM build a useful, detailed plan revealed how strange the existing prompt was.

The most important change in this proposed revision is that the default is to keep planning.

The second most important change is that it takes the question of whether edits will need approval out of the picture. If the plan is sufficiently detailed, there won’t be a question of whether the LLM should be editing or not!

Another subtle change is to position the AI as an entity that sometimes acts on its own and other times acts in cooperation with the user, hence the use of “we” and “our” rather than only “you.”

*Even with exposure to the necessary code and related systems, in the majority of cases AI agents like Claude Code will need multiple turns of clarification to refine a plan before they can efficiently and successfully carry out instructions.
