
ratherlegit/environmental-impact-tracker

1 contributor

Summary

A Claude Code 'skill' (essentially a prompt/instruction file) that instructs Claude to estimate and display the environmental cost of AI interactions — translating token counts into energy (Wh) and water (mL) usage with real-world comparisons. It's not executable code; it's a SKILL.md prompt file plus a references folder with methodology documentation, designed to be loaded into Claude's context as a behavioral instruction.
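The conversion the skill describes is simple arithmetic. A minimal sketch in Python, using made-up placeholder rates — the repository's actual per-model figures were not surfaced, so every constant below is an illustrative assumption, not the skill's data:

```python
# Hypothetical sketch of the token -> energy/water translation the skill
# asks Claude to perform. All rates are illustrative placeholders, NOT
# the repository's actual figures.
WH_PER_1K_TOKENS = 0.3   # assumed energy rate, Wh per 1,000 tokens
ML_PER_WH = 1.5          # assumed water-cooling rate, mL per Wh

PHONE_CHARGE_WH = 12.0   # rough energy for one full smartphone charge
TABLESPOON_ML = 14.8     # one US tablespoon in mL

def estimate_impact(tokens: int) -> dict:
    """Translate a token count into energy/water estimates plus
    the kind of real-world comparisons the skill displays."""
    wh = tokens / 1000 * WH_PER_1K_TOKENS
    ml = wh * ML_PER_WH
    return {
        "energy_wh": round(wh, 2),
        "water_ml": round(ml, 2),
        "phone_charges": round(wh / PHONE_CHARGE_WH, 3),
        "tablespoons": round(ml / TABLESPOON_ML, 3),
    }
```

Under these placeholder rates, a 10,000-token session would come out to 3 Wh and 4.5 mL, or a quarter of a phone charge.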

Great for

People interested in AI transparency tooling and environmental cost awareness for LLM usage — specifically those building or extending Claude Code skill/plugin systems

Easy wins

  • Update or validate the energy/water estimates in SKILL.md against newer 2025 research (the README explicitly calls this out as a maintenance task every 2 months)
  • Add energy rate rows for Claude Haiku 3.x, Opus 3.x, or other model families that are missing from the current table
  • Write a CONTRIBUTING.md explaining how to test changes to the skill and how to update the rates methodology
  • Add a comparisons.md entry (the file exists in the tree but its content isn't shown) for additional real-world analogies beyond 'phone charge' and 'tablespoons'

Red flags

  • The installation command references 'example-skills@anthropic-agent-skills', which appears to be a placeholder or fictional registry path — there's no evidence this plugin registry exists, making the primary install method potentially non-functional
  • Claims like 'cumulative tracking via a local log file' and 'auto-wires enforcement to CLAUDE.md' are entirely dependent on Claude following prompt instructions reliably — there's no actual persistent storage code, so these features are best-effort prompt compliance, not guaranteed behavior
  • 1 commit, 0 stars, 0 forks, no CI, no tests, no contributing guide — this is a very early-stage personal project with no community validation yet
  • No license file in the repo (the README says MIT, but there's no LICENSE file in the file tree)
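For contrast, genuinely cumulative tracking would require code along these lines — a hypothetical sketch, since nothing like it exists in the repository and the log-file path is an invented example:

```python
# Hypothetical sketch of real persistent tracking, for contrast with the
# prompt-only approach. The repo contains no such code; its "log file"
# exists only as an instruction asking Claude to maintain one.
import json
import os
from datetime import date

LOG_PATH = "impact-log.jsonl"  # assumed location, not from the repo

def record_usage(tokens: int, energy_wh: float, water_ml: float) -> None:
    """Append one session's estimates to a local JSON Lines log."""
    entry = {"date": date.today().isoformat(), "tokens": tokens,
             "energy_wh": energy_wh, "water_ml": water_ml}
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(entry) + "\n")

def cumulative_totals() -> dict:
    """Sum every logged session into running totals."""
    totals = {"tokens": 0, "energy_wh": 0.0, "water_ml": 0.0}
    if not os.path.exists(LOG_PATH):
        return totals
    with open(LOG_PATH) as f:
        for line in f:
            entry = json.loads(line)
            for key in totals:
                totals[key] += entry[key]
    return totals
```

The point of the contrast: a prompt can ask the model to remember totals, but only code like this guarantees they survive across sessions.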

Code quality

rough

There is no source code in the traditional sense: no source samples were available, and the entire project is a prompt instruction file (SKILL.md) plus two markdown reference files. The README makes several concrete implementation claims — cumulative session/weekly/project tracking via a local log file, auto-wiring a rule into CLAUDE.md, subagent token aggregation — that cannot be verified because the SKILL.md content wasn't surfaced. All of these features would be prompt instructions asking Claude to behave a certain way rather than actual persistent code, which raises real questions about reliability. The energy/water figures are clearly sourced and appropriately caveated, which is the strongest quality signal here.

What makes it unique

The concept is genuinely novel in the Claude Code skill ecosystem — environmental footprint awareness for AI sessions isn't a saturated space. However, the implementation is essentially a prompt template, not a standalone tool, which severely limits its portability and verifiability compared to, say, an MCP server that actually instruments token usage. If you want something similar with real persistence and cross-tool support, you'd need to build actual infrastructure around this concept.

Scores

Collab: 1
Activity: 3

Barrier to entry

low

There's no code to compile or run — the entire project is a SKILL.md prompt file and reference docs, so contributing means editing markdown and understanding the Claude Code skill system.

Skills needed

  • Understanding of Claude Code plugin/skill architecture
  • Prompt engineering (the core artifact is a prompt file)
  • Knowledge of LLM energy/water research literature to validate or update estimates
  • Markdown documentation writing