Explore Projects
23 analyzed repos ready for contributors.
og-drizzles/trache
Trache is a CLI tool that caches a Trello board locally in SQLite and provides Git-style pull/push semantics for reading and mutating cards without hitting the Trello API repeatedly.
Great for people interested in building local-first caching layers for external SaaS APIs, or optimizing AI agent workflows to reduce expensive API round-trips — specifically in the Trello/project-management space.
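The pull/push pattern described above can be sketched in a few lines. This is a minimal illustration of the general local-first caching idea, not Trache's actual implementation: the `REMOTE` dict stands in for the Trello API, and the schema and method names are hypothetical.

```python
import sqlite3

# Hypothetical stand-in for a remote SaaS API (e.g. Trello); not Trache's real client.
REMOTE = {"card-1": "Write docs", "card-2": "Fix login bug"}

class LocalCache:
    """Sketch of Git-style pull/push over a SQLite cache."""

    def __init__(self):
        self.db = sqlite3.connect(":memory:")
        self.db.execute(
            "CREATE TABLE cards (id TEXT PRIMARY KEY, name TEXT, dirty INTEGER DEFAULT 0)"
        )

    def pull(self):
        # Fetch remote state once; later reads never touch the API.
        for cid, name in REMOTE.items():
            self.db.execute(
                "INSERT OR REPLACE INTO cards (id, name) VALUES (?, ?)", (cid, name)
            )

    def get(self, cid):
        row = self.db.execute("SELECT name FROM cards WHERE id = ?", (cid,)).fetchone()
        return row[0] if row else None

    def rename(self, cid, name):
        # Mutations are recorded locally and marked dirty, not sent immediately.
        self.db.execute("UPDATE cards SET name = ?, dirty = 1 WHERE id = ?", (name, cid))

    def push(self):
        # Flush dirty rows back to the remote in one batch.
        for cid, name in self.db.execute("SELECT id, name FROM cards WHERE dirty = 1"):
            REMOTE[cid] = name
        self.db.execute("UPDATE cards SET dirty = 0")

cache = LocalCache()
cache.pull()
cache.rename("card-1", "Write README")
cache.push()
```

The payoff is that every read between pull and push is a local SQLite query, so an AI agent can poll board state in a tight loop without burning API quota.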
agulaya24/baselayer
Base Layer is a CLI tool and MCP server that processes personal text corpora (ChatGPT exports, journals, books) through a 4-step LLM pipeline (extract facts → author identity layers → compose brief …).
People interested in personal AI memory systems, behavioral modeling from text corpora, or LLM-as-judge evaluation frameworks, specifically the problem of compressing identity signal from unstructured personal data into dense, injectable context.
dandaka/traul
Traul is a local-first CLI tool that syncs messages from Slack, Discord, Telegram, Gmail, Linear, WhatsApp, and Claude Code sessions into a SQLite database, then lets you search them with FTS5 keyword search …
People building local-first personal data pipelines, or anyone interested in giving AI agents searchable memory over real communication data (Slack, Telegram, Gmail) using SQLite FTS5 + vector embeddings without sending data to a cloud service.
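The FTS5 keyword-search half of this approach needs nothing beyond SQLite itself. A minimal sketch, assuming a hypothetical `messages` table (Traul's real schema is not shown here) and an SQLite build with FTS5 compiled in:

```python
import sqlite3

db = sqlite3.connect(":memory:")
# Hypothetical schema for illustration; Traul's actual tables may differ.
db.execute("CREATE VIRTUAL TABLE messages USING fts5(source, body)")
db.executemany(
    "INSERT INTO messages (source, body) VALUES (?, ?)",
    [
        ("slack", "deploy failed on staging, rolling back"),
        ("gmail", "Q3 invoice attached, due next Friday"),
        ("telegram", "are we still deploying tonight?"),
    ],
)

# Full-text keyword query: prefix matching ('deploy*' hits both "deploy"
# and "deploying"), results ordered by FTS5's built-in BM25 rank.
rows = db.execute(
    "SELECT source, body FROM messages WHERE messages MATCH 'deploy*' ORDER BY rank"
).fetchall()
sources = [r[0] for r in rows]
```

Because the index lives in the same local database file as the synced messages, search stays offline; the vector-embedding side would add a separate similarity index on top of the same rows.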
agentailor/slimcontext-mcp-server
A thin MCP server wrapper around the SlimContext npm library that exposes two chat history compression tools: token-based trimming (drop oldest messages) and AI-powered summarization via OpenAI.
People interested in context window management for LLM applications, specifically implementing chat history compression strategies within the MCP ecosystem.
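The token-based trimming strategy is simple enough to sketch directly. This is an illustration of the drop-oldest idea, not SlimContext's actual API; word count stands in for a real tokenizer:

```python
def trim_history(messages, max_tokens, count_tokens=lambda m: len(m["content"].split())):
    """Drop oldest messages until the history fits a token budget.

    Sketch of drop-oldest trimming; real implementations typically pin the
    system prompt and use the model's tokenizer instead of word counts.
    """
    kept = list(messages)
    while kept and sum(count_tokens(m) for m in kept) > max_tokens:
        kept.pop(0)  # evict the oldest message first
    return kept

history = [
    {"role": "user", "content": "first question about setup and config"},       # 6 "tokens"
    {"role": "assistant", "content": "a long detailed answer with many words here"},  # 8
    {"role": "user", "content": "latest question"},                             # 2
]
trimmed = trim_history(history, max_tokens=10)
```

Trimming is lossy but free; the companion summarization tool trades an extra LLM call for keeping a compressed record of what was dropped.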
chopratejas/headroom
Headroom is a transparent LLM context compression proxy that sits between your application and providers like OpenAI/Anthropic. It intercepts prompt messages and compresses tool outputs, JSON arrays, …
people interested in LLM cost optimization infrastructure, specifically the engineering problem of compressing heterogeneous agent context (JSON tool outputs, logs, code, RAG results) without degrading answer quality — touching NLP compression algorithms, statistical anomaly detection, and LLM proxy architecture
samuelfaj/distill
distill is a CLI pipe tool that compresses verbose command outputs (test logs, git diffs, terraform plans, etc.) before they're consumed by an LLM agent, saving tokens by using a local or remote LLM.
People building token-efficient LLM agent workflows, especially those integrating tools like Claude Code, Codex, or OpenCode into CI/heavy-output shell pipelines.
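To make the compression step concrete: distill itself summarizes with an LLM, but the zero-cost baseline it competes against is deterministic head/tail truncation, since failures in test logs and terraform plans usually surface at the start or end of the output. A sketch of that baseline (not distill's code):

```python
def compress_output(text, head=5, tail=5):
    """Keep the first and last lines of a verbose command's output.

    Deterministic stand-in for LLM summarization: cheap, predictable,
    and often good enough for logs where errors cluster at the edges.
    """
    lines = text.splitlines()
    if len(lines) <= head + tail:
        return text  # already short enough to pass through untouched
    elided = len(lines) - head - tail
    marker = f"... [{elided} lines elided] ..."
    return "\n".join(lines[:head] + [marker] + lines[-tail:])

log = "\n".join(f"line {i}" for i in range(100))
short = compress_output(log)
```

An LLM-based compressor earns its keep when the signal is buried mid-stream (say, one failing assertion among thousands of passing ones), which is exactly where blind truncation falls down.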