
jnemargut/better-plan-mode

333 stars · 1 contributor · MIT

Summary

Better Plan Mode is a single markdown prompt file (SKILL.md) that instructs AI coding tools like Claude Code, Cursor, or Codex to run an enhanced planning workflow. Instead of quick yes/no decisions, the workflow produces rich HTML decision documents with four options, visual mockups, and comparison tables, and saves decision history to a .decisions/ folder. The entire 'codebase' is one prompt-engineering file: there is no executable code, no build system, and no tests.
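For readers unfamiliar with the format: Claude Code skill files are markdown documents that typically open with YAML frontmatter (`name`, `description`) followed by plain-language instructions. A hypothetical sketch of what such a file might contain, reconstructed from the README's claims (the actual SKILL.md content is not visible, so field values and wording here are invented):

```markdown
---
name: better-plan-mode
description: Run an enhanced planning workflow that produces HTML decision documents
---

When the user asks for a plan:

1. Draft four candidate approaches rather than a single yes/no recommendation.
2. Render the comparison as an HTML document with visual mockups and a comparison table.
3. Save the final decision to the .decisions/ folder so history persists across sessions.
```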

Great for

People interested in prompt engineering for AI coding assistants, specifically crafting structured instruction sets that shape how LLMs handle multi-step planning workflows

Easy wins

  • Add a CONTRIBUTING.md explaining how to test changes (e.g., what prompts to run against Claude Code to validate behavior)
  • Add more decision-category examples or domain-specific variants (e.g., a SKILL-mobile.md tuned for mobile app planning decisions)
  • Improve the visual preview generation instructions in SKILL.md for specific diagram types (architecture diagrams are notoriously inconsistent from LLMs)
  • Add a CHANGELOG or version history; with only 1 commit, there's no way to track prompt evolution over time

Red flags

  • Cannot cross-reference README claims with SKILL.md content: the actual prompt file was not surfaced in source_samples, so behavioral guarantees are unverifiable
  • 1 commit total: no revision history means no evidence of refinement or testing across edge cases
  • Zero issues, zero CI, zero tests: for a prompt-engineering project, there's no test harness or example outputs proving the documented behavior actually works
  • README references '2026-03-15' as the last commit date, which is in the future for most readers, suggesting either a clock-skew issue or fabricated metadata
  • No CONTRIBUTING guide and no labeled issues mean there's no onboarding path for collaborators, despite the repo being publicly positioned for collaboration
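The missing test harness described above could start as small as a smoke test over one generated document. A minimal sketch in Python, assuming the skill emits one `<section class="option">` element per option (the real markup conventions are not visible in the repo, so the selector is an assumption):

```python
# Hypothetical smoke test for a generated decision document.
# Assumption: the skill writes HTML into .decisions/ with one
# <section class="option"> per candidate option -- the real
# SKILL.md output format is not documented in the repo.
from html.parser import HTMLParser


class OptionCounter(HTMLParser):
    """Counts elements whose class attribute includes 'option'."""

    def __init__(self):
        super().__init__()
        self.count = 0

    def handle_starttag(self, tag, attrs):
        classes = (dict(attrs).get("class") or "").split()
        if "option" in classes:
            self.count += 1


def has_four_options(html: str) -> bool:
    """Check the README's 'always 4 options' claim against one document."""
    parser = OptionCounter()
    parser.feed(html)
    return parser.count == 4


# A synthetic stand-in for a document the skill might emit into .decisions/
sample = "<html><body>" + "".join(
    f'<section class="option"><h2>Option {i}</h2></section>' for i in range(1, 5)
) + "</body></html>"

print(has_four_options(sample))  # True
```

Run against real outputs captured from a few Claude Code sessions, a handful of checks like this would turn the README's behavioral claims into something falsifiable.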

Code quality

rough

There is no source code to review — SKILL.md is a prompt instruction file and was not included in the source_samples, so its actual content and quality cannot be assessed beyond what the README describes. The repo has exactly 1 commit and 1 contributor, meaning this is essentially a published first draft with no iteration history. The README makes specific behavioral claims (always 4 options, visual mockups rendered as HTML, decision history saved correctly) that cannot be verified without seeing SKILL.md's actual instructions or running them against an LLM.

What makes it unique

The concept of 'AI skill files' that shape planning workflows is genuinely useful and the specific focus on visual HTML decision documents is a differentiator over plain-text plan modes. However, because the actual SKILL.md content isn't visible, it's impossible to know if the implementation is meaningfully better than a few paragraphs of system prompt. There are many similar 'drop this file in your .claude folder' prompt repos emerging; this one's niche (visual decision documents with persistent history) is specific enough to stand out if the prompt quality holds up.

Scores

Collab: 2
Activity: 4

Barrier to entry

low

The entire repo is 4 files — a README, LICENSE, demo GIF, and one SKILL.md prompt file — so there's nothing to build, install as a dependency, or run locally beyond dropping the file in a directory.

Skills needed

  • Prompt engineering / LLM instruction design
  • Understanding of AI coding tool ecosystems (Claude Code, Cursor, Codex, Gemini CLI)
  • Markdown authoring
  • HTML/CSS (for improving the visual preview generation instructions)
  • UX writing / information architecture (for improving decision categories and option framing)