
fr-e-d/gaai-framework

Shell · 70 stars · 19 forks · 1 contributor · License: NOASSERTION

Summary

GAAI is a drop-in folder-based framework that imposes structured governance on AI coding agents (Claude Code, Cursor, Codex, etc.) by separating 'Discovery' (scoping what to build) from 'Delivery' (executing it) via isolated context windows. It uses only Markdown, YAML, and bash — no SDK or package install required. A backlog of YAML story files acts as the contract between the two phases, with an optional daemon that polls the backlog and auto-launches parallel delivery sessions.
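To make the "YAML backlog as contract" idea concrete, a story file might look like the sketch below. Every field name here is an assumption for illustration only; the actual schema lives in `.gaai/core/`, which is not visible in the repo.

```yaml
# Hypothetical backlog story — field names are illustrative,
# not taken from the (non-visible) .gaai/core/ schema.
id: story-001
title: Add login endpoint
status: ready          # e.g. draft | ready | in_progress | done
acceptance_criteria:
  - returns 200 for valid credentials
  - returns 401 for invalid credentials
```

Discovery sessions would append files like this to the backlog; delivery sessions would pick them up, implement them, and check off the acceptance criteria.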

Great for

developers interested in imposing reproducible, auditable delivery governance on AI coding agents — specifically the problem of agents going off-script, losing cross-session context, or shipping unverifiable work

Easy wins

  • Write integration tests or validation scripts for the backlog YAML schema — there are currently zero tests in the repo
  • Add a second AI tool adapter (e.g., a Windsurf or Aider adapter) following the existing compat/ pattern described in the README
  • Improve migration-guide.md and quick-start.md with concrete before/after examples from real projects — the docs exist but likely lack community-contributed real-world cases
  • Add a GitHub Actions CI step that actually lints the YAML/Markdown files in .gaai/core/ — the existing workflows only validate structure
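The first easy win could start as small as a shell script that checks required top-level keys in each story file. The key names below (`id`, `title`, `status`, `acceptance_criteria`) are assumptions, since the real schema is not published:

```shell
#!/usr/bin/env bash
# Hypothetical backlog-story validator. Checks that a story file
# contains the top-level keys we ASSUME the schema requires —
# adapt the key list to the real .gaai schema once it is visible.
set -euo pipefail

validate_story() {
  local file="$1" missing=0
  for key in id title status acceptance_criteria; do
    if ! grep -qE "^${key}:" "$file"; then
      echo "MISSING ${key} in ${file}"
      missing=1
    fi
  done
  return "$missing"
}

# Demo against a sample story written to a temp file
tmp="$(mktemp)"
cat > "$tmp" <<'EOF'
id: story-001
title: Add login endpoint
status: ready
acceptance_criteria:
  - returns 200 for valid credentials
EOF

if validate_story "$tmp"; then
  echo "OK"
fi
rm -f "$tmp"
```

A real contribution would likely want a proper YAML parser (e.g. yq or a Python script) rather than line-oriented grep, but even this level of checking is more than the repo currently has.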

Red flags

  • Single contributor, single commit: this is effectively a personal project published publicly, not a collaborative open-source project — the fork count (19) likely reflects people copying it for personal use, not active collaboration
  • License mismatch: README badges claim ELv2 (Elastic License v2, which restricts commercial use and SaaS hosting) but GitHub resolves the license as NOASSERTION — contributors should verify IP terms before contributing
  • Zero tests: a framework claiming 'reliable software delivery' with acceptance-criteria verification has no automated tests of its own mechanisms
  • The .gaai/core/ directory — the actual implementation — is not present in the visible file tree, meaning the framework ships as opaque content that users drop in and trust, with no ability to audit what those agent prompts actually instruct the AI to do
  • last_commit_at is 2026-03-16, which is a future date — either a clock-skew issue or fabricated metadata, both of which undermine trust in the repo's provenance
  • commit_count: 1 combined with 70 stars suggests the repo may have been force-pushed or its history squashed, making it impossible to review how the project evolved

Code quality

rough

No source code samples were available for direct review, which is itself a red flag — the actual agent specs, skills, and workflow files living inside .gaai/core/ are not in the file tree shown, meaning the core implementation is opaque. The bash scripts referenced (install.sh, delivery-daemon.sh, daemon-setup.sh) are not visible for review. The repo has a single commit, zero tests, and the license field resolves to 'NOASSERTION' despite the README badge claiming ELv2 — a mismatch that suggests the LICENSE file may not be machine-readable or properly formatted.

What makes it unique

The concept is genuinely differentiated from BMAD-METHOD (which simulates a full Agile team) and from LangGraph/CrewAI (which are code-first orchestration SDKs). The specific insight — isolating Discovery and Delivery into separate context windows with a YAML backlog as the contract — is a legitimate and non-obvious architectural pattern for governing AI agents. However, the 'no SDK, just Markdown+YAML+bash' positioning is also its weakness: the actual governance logic lives in opaque Markdown prompts you can't unit-test, diff meaningfully, or formally verify.

Scores

Collab: 6
Activity: 4

Barrier to entry

high

The repo has exactly 1 contributor, 1 commit in recorded history, 0 tests, and 0 good-first-issues, and no source code samples were surfaced for review. The 'code' is primarily Markdown/YAML agent prompts that aren't publicly visible in the file tree, making it very hard to evaluate what you'd actually be modifying or whether contributions align with the maintainer's vision.

Skills needed

  • Bash scripting (the entire implementation is shell scripts)
  • YAML and Markdown authoring (agent specs, skill definitions, workflow files)
  • Familiarity with at least one supported AI coding tool (Claude Code, Cursor, Codex CLI)
  • Git workflows (the daemon coordinates via git push/pull to a staging branch)
  • Understanding of prompt engineering and agent orchestration patterns
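Since delivery-daemon.sh itself is not visible, the poll loop can only be sketched. The file layout, status values, and staging-branch workflow below are all assumptions, not the repo's actual mechanism:

```shell
#!/usr/bin/env bash
# Illustrative poll helper, NOT the real delivery-daemon.sh (which is
# not visible in the repo). Assumes stories live as YAML files in
# $BACKLOG_DIR with a literal "status: ready" line.
set -euo pipefail

BACKLOG_DIR="${BACKLOG_DIR:-backlog}"

ready_stories() {
  # Print paths of story files marked ready for delivery;
  # stay quiet (and succeed) when nothing matches.
  grep -lE '^status: ready$' "$BACKLOG_DIR"/*.yaml 2>/dev/null || true
}

# The real daemon presumably wraps this in a loop that syncs the
# staging branch and launches one delivery session per story, e.g.:
#   while true; do
#     git pull --ff-only origin staging
#     for story in $(ready_stories); do launch_delivery "$story" & done
#     sleep 60
#   done
ready_stories
```

The git-coordination step is deliberately left as comments: how sessions claim a story (and avoid double-delivery) is exactly the part of the design that can't be audited without seeing `.gaai/core/`.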