
lap-platform/lap

Python · 32 stars · 1 contributor · Apache-2.0

Summary

LAP (Lean API Platform) is a CLI tool and Python library that compiles API specifications (OpenAPI, GraphQL, AsyncAPI, Protobuf, Postman, Smithy) into a compact custom text format designed to reduce token usage when feeding API docs to LLM agents. It claims 5-10x compression via a purpose-built grammar with directives like @endpoint, @required, @returns, plus a registry at lap.sh for browsing pre-compiled specs.
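As a rough illustration (the directive names @endpoint, @required, @returns and the constraint types str(uuid), enum(a|b|c) come from the summary above, but the layout here is invented, not taken from the actual grammar), a compiled endpoint might look something like:

```text
@endpoint GET /users/{id}
  id: str(uuid) @required
  @returns 200 { id: str(uuid), role: enum(admin|member|guest) }
```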

Great for

Building token-efficient API documentation pipelines for LLM agents, particularly for developers integrating real-world APIs into coding assistants or autonomous agents who need to minimize context-window usage

Easy wins

  • Add a Smithy IDL text-format compiler — currently only the JSON AST is supported (docs/smithy-implementation-summary.md hints at this gap), and the Smithy compiler internals are well-structured and easy to follow
  • Improve the TypeScript SDK (sdks/typescript/) — it has a package.json and tsconfig, but its src/ is not shown; the Python side is more mature, and the TS side likely needs parity
  • Add format auto-detection tests for edge cases — tests/fixtures/ contains specific edge cases (big_enum.yaml, deep_nested.yaml, special_chars.yaml), and the test_quality_bugs.py file suggests there are known quality issues worth investigating
  • Add GraphQL subscription support — the GraphQL compiler handles queries and mutations, but subscriptions (AsyncAPI-style pub/sub) are likely unimplemented or untested
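On the auto-detection point, a sketch of what content-based format sniffing could look like (the function name `detect_format` and these heuristics are hypothetical illustrations, not taken from the LAP codebase):

```python
import json
import re


def detect_format(text: str) -> str:
    """Hypothetical content-based sniffing for API spec formats.

    Heuristics only -- a real detector would need to handle the
    fixture edge cases (big enums, deep nesting, special chars) too.
    """
    stripped = text.lstrip()
    # Protobuf: a syntax declaration near the top
    if re.search(r'^\s*syntax\s*=\s*"proto[23]"', text, re.M):
        return "protobuf"
    # GraphQL SDL: top-level schema/type/input blocks
    if re.search(r"^\s*(schema|type\s+\w+|input\s+\w+)\s*\{", text, re.M):
        return "graphql"
    # JSON documents: distinguish by marker keys
    if stripped.startswith("{"):
        try:
            doc = json.loads(text)
        except ValueError:
            return "unknown"
        if "openapi" in doc or "swagger" in doc:
            return "openapi"
        if "asyncapi" in doc:
            return "asyncapi"
        if "smithy" in doc:
            return "smithy"
        return "unknown"
    # YAML documents: top-level version keys
    if re.search(r"^(openapi|swagger):", text, re.M):
        return "openapi"
    if re.search(r"^asyncapi:", text, re.M):
        return "asyncapi"
    return "unknown"
```

Edge-case tests would then assert the detected format for each fixture rather than just asserting that compilation "completed".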

Red flags

  • Single-commit repository: the entire project was pushed in one commit, making the git history useless for understanding design decisions or reviewing changes incrementally
  • Benchmark claims (0.399 vs 0.860 accuracy, '500 blind runs') are central to the README pitch, but the benchmark code in benchmarks/benchmark_skill.py is not shown, and the methodology relies on an external repo (Lap-benchmark-docs) — impossible to verify reproducibility from this codebase alone
  • Several tests use `assert True` as their only assertion (test_agent_implementation.py contains lines like `assert True, 'Test completed'`) — these tests cannot catch regressions and inflate the '1,101 passed' badge count
  • The lap.sh registry is a hosted service controlled solely by the single maintainer — `lapsh skill-install` drops files into ~/.claude/skills/ from a remote registry with no content verification shown in the source
  • README claims '6 input formats', but pyproject.toml lists only pyyaml, tiktoken, and graphql-core as dependencies — Protobuf parsing is done with a hand-rolled regex parser (no protoc), and Smithy support covers only the JSON AST, not .smithy IDL files
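On the content-verification gap, a minimal hardening sketch: pin a SHA-256 digest per skill and refuse to write anything that does not match. The helper `install_skill` and its signature are invented for illustration and are not part of the real `lapsh` CLI.

```python
import hashlib
from pathlib import Path


def install_skill(name: str, payload: bytes, expected_sha256: str,
                  skills_dir: Path) -> Path:
    """Write a downloaded skill file only if its SHA-256 digest
    matches a pinned value (hypothetical hardening sketch)."""
    digest = hashlib.sha256(payload).hexdigest()
    if digest != expected_sha256:
        raise ValueError(
            f"checksum mismatch for skill {name!r}: "
            f"expected {expected_sha256}, got {digest}"
        )
    skills_dir.mkdir(parents=True, exist_ok=True)
    target = skills_dir / f"{name}.md"
    target.write_bytes(payload)
    return target
```

The registry would need to publish digests out-of-band (e.g. in a signed index) for this to add real security; verifying against a digest served by the same endpoint defeats the purpose.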

Code quality

decent

The protobuf compiler (lap/core/compilers/protobuf.py) is genuinely well-written — proper AST dataclasses, comment stripping that handles string literals, brace-matching with depth tracking, and enum prefix stripping. The CLI (lap/cli/main.py) has real security hygiene: ANSI escape stripping on server responses, registry response shape validation before field access. However, the test suite has a pattern of over-using `assert True, 'Test completed'` as a pass condition (e.g., test_extract_nested_response_objects, test_extract_request_body) which means those tests aren't actually asserting anything meaningful — they'll never fail.
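The vacuous-assertion pattern is easy to demonstrate. The helper below is a toy stand-in, not the project's actual extractor; the point is the contrast between the two test styles:

```python
def extract_request_body(spec: dict) -> dict:
    """Toy stand-in for the compiler helper under test (hypothetical)."""
    return (
        spec.get("requestBody", {})
        .get("content", {})
        .get("application/json", {})
        .get("schema", {})
    )


def test_vacuous():
    extract_request_body({})       # result is ignored
    assert True, "Test completed"  # can never fail, so it catches nothing


def test_meaningful():
    spec = {"requestBody": {"content": {"application/json": {
        "schema": {"type": "object", "required": ["id"]}}}}}
    schema = extract_request_body(spec)
    assert schema["type"] == "object"    # fails if extraction regresses
    assert schema["required"] == ["id"]
```

The first test passes no matter what the helper returns; the second pins down observable behavior and will fail on a regression, which is what a '1,101 passed' badge should actually be counting.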

What makes it unique

The core idea — a purpose-built compact text format for API specs targeting LLM context windows — is genuinely differentiated from simple YAML minification or OpenAPI stripping tools. The custom grammar with @directives, typed constraints like str(uuid) and enum(a|b|c), and the round-trip back to OpenAPI show real design thought. However, the single-commit history and the polished README with pre-made benchmark charts suggest this may be a product launch more than an organic open-source project seeking collaborators.
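For illustration, the typed-constraint tokens mentioned above (str(uuid), enum(a|b|c)) are simple enough to parse with a single regex. This parser and its tuple output are an invented sketch, not the project's grammar implementation:

```python
import re

# base type, optionally followed by a parenthesized argument
CONSTRAINT_RE = re.compile(r"^(?P<base>\w+)(?:\((?P<arg>[^)]*)\))?$")


def parse_constraint(token: str):
    """Parse a LAP-style typed constraint such as 'str(uuid)' or
    'enum(a|b|c)' into (base_type, argument). Hypothetical sketch."""
    m = CONSTRAINT_RE.match(token.strip())
    if not m:
        raise ValueError(f"unparseable constraint: {token!r}")
    base, arg = m.group("base"), m.group("arg")
    if base == "enum" and arg is not None:
        return ("enum", arg.split("|"))  # enum variants become a list
    return (base, arg)
```

A grammar this regular is what makes the round-trip back to OpenAPI plausible: each constraint maps cleanly onto a JSON Schema `type`/`format`/`enum` triple.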

Scores

Collab: 6
Activity: 3

Barrier to entry

medium

The project structure is well-organized with clear separation between compilers, formats, CLI, and SDKs, and there's a CONTRIBUTING.md plus CLAUDE.md with developer context — but there are 0 open issues, 0 good-first-issues labels, and only 1 contributor with 1 total commit, meaning there's no established contribution workflow to plug into.

Skills needed

  • Python 3.10+ (core compiler logic)
  • API spec formats: OpenAPI/Swagger, GraphQL SDL, Protobuf, AsyncAPI
  • Parser/compiler design (custom grammar, AST manipulation)
  • TypeScript/Node.js (for the npm CLI wrapper in sdks/typescript/)
  • pytest for testing
  • CLI design with argparse