
fission-ai/openspec

TypeScript · 30,597 stars · 2,019 forks · 292 issues · 45 contributors · MIT

Summary

OpenSpec is a CLI tool that generates and manages spec-driven development (SDD) artifacts — proposal.md, design.md, specs/, tasks.md — inside your project's openspec/ directory, then injects corresponding slash commands (e.g., /opsx:propose, /opsx:apply) into 20+ AI coding assistants (Claude, Cursor, Windsurf, Qwen, etc.). It acts as a structured layer between human intent and AI code generation: humans write specs first, then AI implements against them. The archive command also maintains a living spec registry by applying delta-based changes (ADDED/MODIFIED/REMOVED/RENAMED sections) to persistent spec files.
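To make the delta-based format concrete, a change spec might contain sections like the following (an illustrative sketch: the exact heading syntax and requirement wording are assumptions, not copied from OpenSpec's documentation):

```markdown
## ADDED Requirements
### Requirement: Rate limiting
The API SHALL reject clients exceeding 100 requests per minute.

## MODIFIED Requirements
### Requirement: Session timeout
Sessions SHALL expire after 30 minutes of inactivity (previously 60).

## REMOVED Requirements
### Requirement: Legacy CSV export
```

On archive, sections like these would be merged into the persistent spec files under specs/, so the registry always reflects the current agreed-upon behavior rather than a trail of chat messages.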

Great for

People interested in formalizing AI-assisted development workflows, specifically the problem of AI coding assistants generating code that drifts from intent because requirements exist only in ephemeral chat history.

Easy wins

  • Add a CONTRIBUTING.md: the README mentions the contribution process, but there is no dedicated file (the repo metadata flags has_contributing_guide as false)
  • Tag open issues with 'good first issue' or 'help wanted': there are 292 open issues and none labeled for newcomers, which is a significant onboarding gap
  • Add the missing GIF demo called out as a TODO in the README source (<!-- TODO: Add GIF demo of /opsx:propose → /opsx:archive workflow -->)
  • Expand integration test coverage: supported-tools.md claims 20+ tools, but the test suite (e.g., update.test.ts) only exercises Claude, Cursor, Qwen, and Windsurf explicitly

Red flags

  • The README references the model names 'Opus 4.5' and 'GPT 5.2'; per the review these don't exist as of early 2026, suggesting the README may have been partially AI-generated or contains aspirational model recommendations that could mislead users
  • posthog-node is a runtime dependency for telemetry that defaults to on (opt-out rather than opt-in); the README does document the opt-out, but some developers and organizations object to opt-out telemetry on principle
  • The metadata shows commit_frequency_30d: 1 and contributor_count: 1 despite the 45 contributors listed elsewhere; this likely reflects shallow data extraction, but if accurate it would indicate bus-factor risk despite the star count
  • No CONTRIBUTING.md, even though the README's contributing section says larger changes require a spec proposal first; the process exists but isn't formally documented anywhere discoverable

Code quality

good

The test suite is genuinely thorough — tests use real temp directories with proper beforeEach/afterEach cleanup rather than heavy mocking, and edge cases like path traversal injection (rejecting '../foo' and '/etc/passwd' as change names) are explicitly tested in artifact-workflow.test.ts. The archive.test.ts covers nuanced spec-merging semantics (RENAMED→REMOVED→MODIFIED→ADDED ordering, REMOVED-on-new-spec warnings, --no-validate flag bypassing validators). Architecture is clean: src/core/ handles business logic, src/commands/ handles CLI parsing, src/utils/ handles shared utilities — a new contributor can find their way around quickly. The main concern is that commit_count is listed as 1 and contributor_count is 1 in the metadata despite 45 listed contributors and 30k stars, which suggests the data extraction was shallow; the actual codebase doesn't show these red flags.
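The path-traversal checks the tests exercise can be approximated with a small standalone sketch. The function name `isValidChangeName` and its exact rules are hypothetical, not OpenSpec's actual implementation; the point is the class of inputs the real validators reject:

```typescript
import * as path from "node:path";

// Hypothetical validator mirroring the checks artifact-workflow.test.ts
// exercises: reject absolute paths, parent-directory traversal, and path
// separators, so a change name can only resolve inside openspec/changes/.
function isValidChangeName(name: string): boolean {
  if (name.length === 0) return false;
  if (path.isAbsolute(name)) return false;             // rejects '/etc/passwd'
  if (name.includes("..")) return false;               // rejects '../foo'
  if (name.includes("/") || name.includes("\\")) return false;
  return true;
}

console.log(isValidChangeName("add-user-auth")); // → true
console.log(isValidChangeName("../foo"));        // → false
console.log(isValidChangeName("/etc/passwd"));   // → false
```

Testing this kind of rule against real temp directories (as the suite does) rather than mocks is what gives the coverage its credibility.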

What makes it unique

This occupies a specific niche between 'just prompt your AI' and heavyweight spec tools like Kiro (AWS) or GitHub's Spec Kit. The README's competitive comparison is fair and specific. The real differentiator is the delta-based spec archiving system — the ability to write ADDED/MODIFIED/REMOVED/RENAMED sections in a change spec and have them merged into a living spec registry is genuinely novel for this category. Most similar tools (Kiro, spec-kit) don't maintain a persistent, evolvable spec corpus. The multi-tool slash command generation (same spec, different command formats for Claude vs Qwen TOML vs Windsurf workflows) is also practically useful rather than just aspirational.
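The multi-format generation idea can be sketched roughly as follows. Everything here is an illustrative assumption (the interface, function names, and output layouts are not OpenSpec's actual code or file formats); it only shows the one-spec-many-renderers shape:

```typescript
// One slash-command definition rendered into per-tool file formats.
interface SlashCommand {
  name: string;        // e.g. "opsx:propose"
  description: string;
  body: string;        // instructions injected into the assistant
}

// Markdown-with-frontmatter style, roughly how Claude/Cursor command files look.
function toMarkdownCommand(cmd: SlashCommand): string {
  return `---\ndescription: ${cmd.description}\n---\n\n${cmd.body}\n`;
}

// TOML style, roughly how Qwen-style command files look.
function toTomlCommand(cmd: SlashCommand): string {
  return `description = ${JSON.stringify(cmd.description)}\n` +
         `prompt = ${JSON.stringify(cmd.body)}\n`;
}

const propose: SlashCommand = {
  name: "opsx:propose",
  description: "Draft a change proposal",
  body: "Create proposal.md under openspec/changes/ for the requested change.",
};

console.log(toMarkdownCommand(propose));
console.log(toTomlCommand(propose));
```

The practical value is that one canonical spec drives every assistant's command file, so switching tools doesn't mean rewriting your workflow prompts.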
