This page explains how the automated grant evaluations are produced: what triggers a review, what gets checked, and how to read the verdict. Anyone on the committee should be able to look at a row in the database, click into the page, and understand why the evaluator landed where it did.
Applications can sit unreviewed for too long. The committee aims to respond within two weeks, but the calendar slips. The automated evaluator produces a first-pass read so Sov, mfw, or Chim can engage faster, and so newly submitted applications never go a week without at least an internal evidence package.
The evaluator is not a decision-maker. It produces evidence and a recommended verdict. The committee makes the actual decision on Snapshot.
Three triggers, all routed through the same skill:
Applications that already have a pipeline entry are not re-evaluated. Those are committee-tracked.
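As a rough sketch, that guard could look like the snippet below; `should_evaluate` and the `pipeline_ids` lookup are illustrative names, not the skill's real interface.

```python
def should_evaluate(application_id: str, pipeline_ids: set[str]) -> bool:
    """Return True only when the application has no existing pipeline entry."""
    return application_id not in pipeline_ids


# An application already tracked in the pipeline is skipped.
assert should_evaluate("app-42", {"app-17", "app-42"}) is False
```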
The evaluation walks through eight steps, numbered 0 through 7. Each step produces structured evidence that lands in the database row, plus a free-form section on the evaluation page.
Step 0 — Ingest. Pull the structured fields from the application: project name, authors, contact info, funding ask, milestone count, length, payment address, declared category, claimed track record. Flag any required template fields that are missing.
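A minimal sketch of what the Step 0 output might look like, assuming a Python dataclass; the field names are illustrative and should be read against the actual template.

```python
from dataclasses import dataclass, fields
from typing import Optional


@dataclass
class IngestedApplication:
    """Structured fields pulled from the application text in Step 0."""
    project_name: Optional[str] = None
    authors: Optional[list[str]] = None
    contact_info: Optional[str] = None
    funding_ask_xdai: Optional[float] = None
    milestone_count: Optional[int] = None
    project_length: Optional[str] = None
    payment_address: Optional[str] = None
    declared_category: Optional[str] = None
    claimed_track_record: Optional[str] = None

    def missing_fields(self) -> list[str]:
        """Required template fields left empty; Step 0 flags these."""
        return [f.name for f in fields(self) if getattr(self, f.name) in (None, "", [])]
```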
Step 1 — External research. Verify the application against external sources, never trusting the application text alone. This includes searching for each named team member, checking GitHub activity, fetching every URL the application cites, looking up the team's CoW ecosystem footprint in the pipeline and forum, and scanning for active grants or core team tooling that might overlap.
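Step 1 spans several kinds of lookup. The sketch below covers only the "fetch every URL the application cites" part, using the standard library; the search, GitHub, pipeline, and forum checks go through the skill's own tooling and are not shown.

```python
import urllib.error
import urllib.request


def check_cited_urls(urls: list[str], timeout: float = 10.0) -> dict[str, str]:
    """Fetch every URL the application cites and record whether it resolves.

    Returns a mapping of URL -> "ok", "http <status>", or "unreachable".
    Resolving is the cheap part; whether the content actually supports the
    application's claims is still read and judged by the evaluator.
    """
    results: dict[str, str] = {}
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                results[url] = "ok" if resp.status == 200 else f"http {resp.status}"
        except urllib.error.HTTPError as err:
            results[url] = f"http {err.code}"
        except (urllib.error.URLError, ValueError):
            results[url] = "unreachable"
    return results
```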
Step 2 — Eligibility and scope. Three pass/fail questions: does the work belong in the CoW Grants program, is it a public good rather than a commercial subsidy, and are the deliverables specific and checkable.
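One way to record the three answers, assuming a small dataclass; the property just encodes that Step 2 passes only when all three are yes.

```python
from dataclasses import dataclass


@dataclass
class EligibilityCheck:
    """The three pass/fail questions from Step 2."""
    in_program_scope: bool        # does the work belong in the CoW Grants program?
    is_public_good: bool          # public good rather than a commercial subsidy?
    deliverables_checkable: bool  # are the deliverables specific and checkable?

    @property
    def passed(self) -> bool:
        return self.in_program_scope and self.is_public_good and self.deliverables_checkable
```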
Step 3 — Minimum requirements. Verifies the application against the Application Template (Feb 2025) and the Process Guide (Feb 2025). If required fields are missing, the application is returned as incomplete before substantive evaluation.
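A sketch of that gate, assuming the missing-field list comes from the Step 0 ingest; how "incomplete" is surfaced back to the applicant is a committee convention, and the snippet only shows the early stop.

```python
from typing import Optional


def completeness_gate(missing_fields: list[str]) -> Optional[str]:
    """Step 3: stop early when required template fields are missing.

    A non-None return is the "returned as incomplete" outcome; None means
    the substantive steps (4 onward) run.
    """
    if missing_fields:
        return "incomplete: missing " + ", ".join(missing_fields)
    return None
```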
Step 4 — Disqualifiers. Any single disqualifier stops the application: fabricated or materially overstated claims, scope duplicates funded work, strategic misalignment, implausible timeline, no verifiable team identity, license incompatibility, or past-spend recovery without prior agreement.
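A sketch of how Step 4 could be encoded, with an enum whose members mirror the list above; the names are illustrative.

```python
from enum import Enum, auto
from typing import Optional


class Disqualifier(Enum):
    """Hard stops from Step 4; member names mirror the list above."""
    FABRICATED_OR_OVERSTATED_CLAIMS = auto()
    DUPLICATES_FUNDED_WORK = auto()
    STRATEGIC_MISALIGNMENT = auto()
    IMPLAUSIBLE_TIMELINE = auto()
    NO_VERIFIABLE_TEAM_IDENTITY = auto()
    LICENSE_INCOMPATIBILITY = auto()
    PAST_SPEND_RECOVERY_WITHOUT_AGREEMENT = auto()


def first_disqualifier(hits: list[Disqualifier]) -> Optional[Disqualifier]:
    """Any single disqualifier stops the application; report the first hit."""
    return hits[0] if hits else None
```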
Step 5 — Risk flags. Non-disqualifying signals worth tracking: solo developer on a large ask, new forum account, deferred open-source commitment, identity inconsistencies across platforms, zero community resonance, vague acceptance criteria, ask exceeds 25,000 xDAI without strong justification.
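A sketch of how the flags might be gathered; only the 25,000 xDAI threshold is a concrete rule from the list above, and the rest arrive as already-judged signals.

```python
def collect_risk_flags(signals: dict[str, bool], funding_ask_xdai: float, justified: bool) -> list[str]:
    """Step 5: gather non-disqualifying signals worth tracking.

    `signals` maps flag descriptions (e.g. "new forum account") to whether they
    apply; the 25,000 xDAI threshold is the only rule computed here.
    """
    flags = [name for name, hit in signals.items() if hit]
    if funding_ask_xdai > 25_000 and not justified:
        flags.append("ask exceeds 25,000 xDAI without strong justification")
    return flags
```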
Step 6 — Routing. Before recommending a decline, check whether part of the ask should go elsewhere (volume integration template, retro funding round, an existing RFP, etc.).
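The redirect destinations named above, sketched as an enum; the real set of destinations is whatever the committee currently operates.

```python
from enum import Enum


class RedirectTarget(Enum):
    """Destinations named in Step 6 for asks that fit better elsewhere."""
    VOLUME_INTEGRATION_TEMPLATE = "volume integration template"
    RETRO_FUNDING_ROUND = "retro funding round"
    EXISTING_RFP = "existing RFP"
```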
Step 7 — Verdict. Land on Advance, Needs Revision, Decline, Redirect, or On Hold, with two to four sentences of plain-prose rationale.
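A sketch of the Step 7 record, assuming a dataclass; the rationale stays free prose.

```python
from dataclasses import dataclass
from enum import Enum


class Verdict(Enum):
    ADVANCE = "Advance"
    NEEDS_REVISION = "Needs Revision"
    DECLINE = "Decline"
    REDIRECT = "Redirect"
    ON_HOLD = "On Hold"


@dataclass
class EvaluationResult:
    """What Step 7 records: a verdict plus a short plain-prose rationale."""
    verdict: Verdict
    rationale: str  # two to four sentences on why the evaluator landed here
```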