Methodology
Two scoring rubrics. Five dimensions each. Twenty-five points max.
Govini scores contractor-fit — does your company match this RFP? We score AI-readiness — is this RFP actually shaped for an AI-native startup, or dressed-up legacy work? Different question. Different answer. The two rubrics below are how we operationalize that distinction for contracts and for early-stage companies.
Contracts rubric
Applied to every DoD solicitation. Scores 0-5 on each of five dimensions for a maximum of 25. The score is the headline; the rationale paragraph is the value — it explains the tradeoffs behind each number.
| Dimension | 0 — Not present | 3 — Partial | 5 — Full signal |
|---|---|---|---|
| Pathway Speed | Cost-plus FAR with traditional acquisition pathway | SBIR Phase II.5, Direct-to-Phase II | OTA, AFWERX STRATFI/TACFI, DIU CSO, OSC, Replicator |
| Timeline Realism | Impossible or absurd deadline | 14-30 day response, tight but achievable | 45+ day response window AND prototype timeline that respects AI dev cycles |
| Problem Framing | "AI/ML" used as marketing, no real problem stated | Reasonable framing but scope ambiguity | Bounded problem + named end-user + success metrics + data described |
| AI / ML Fit | "AI" is marketing; problem is not actually AI-shaped | AI is meaningful but secondary to integration / pipeline work | Core problem is genuinely AI-shaped AND data is named/available |
| Award + Transition | Study contract with no follow-on path | $1M-$5M ceiling OR vague transition mention | $5M+ ceiling AND explicit production transition pathway |
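The headline number is just the sum of the five 0-5 dimension scores, for both rubrics. A minimal sketch of that arithmetic, assuming a flat record shape; the field names are hypothetical, since the pipeline's actual schema isn't published:

```python
# Hypothetical dimension keys -- the real schema may name these differently.
DIMENSIONS = [
    "pathway_speed",
    "timeline_realism",
    "problem_framing",
    "ai_ml_fit",
    "award_transition",
]

def total_score(scores: dict) -> int:
    """Sum the five 0-5 dimension scores into the 0-25 headline number."""
    for dim in DIMENSIONS:
        value = scores[dim]
        if not 0 <= value <= 5:
            raise ValueError(f"{dim} must be 0-5, got {value}")
    return sum(scores[dim] for dim in DIMENSIONS)

example = {
    "pathway_speed": 5,      # OTA / STRATFI-class instrument: full signal
    "timeline_realism": 4,
    "problem_framing": 4,
    "ai_ml_fit": 3,          # AI meaningful but secondary
    "award_transition": 5,   # large ceiling + explicit transition path
}
print(total_score(example))  # -> 21
```

The range check matters in practice: an LLM asked for 0-5 scores will occasionally return a 7, and a silent sum would hide that.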
Example record
AFWERX STRATFI is one of the premier fast-track OT instruments available, earning a top pathway score, and the $60M ceiling with an explicit TRL 7 on-orbit prototype culminating in a 6-month operational demonstration represents a clear, substantial transition pathway. AI/ML fit scores a 3 because wh…
Seed rubric
Applied to early-stage defense tech companies. Scores 0-5 on five dimensions chosen to predict which seed-stage companies will reach $10M ARR or a government prime contract within three years.
| Dimension | 0 — Not present | 3 — Partial | 5 — Full signal |
|---|---|---|---|
| Team Strength | No founders named, generic description | Capable team, unclear domain fit | Named founders with deep defense or domain expertise (ex-military, prior exits, domain PhD) |
| Validation Depth | No evidence of traction | One form of validation (SBIR Phase I, small seed round) | Government contract + external VC funding + named pilot customer |
| Novelty | Commodity offering, no visible differentiation | Solid product in a crowded space | Clear technical differentiation in underserved defense niche |
| Market Timing | No clear path to DoD budget | Addresses real defense need, no recent pull signal | Directly addresses a named DoD priority or SECDEF initiative (last 18 months) |
| Capital Efficiency | Unclear financials or excessive early dilution | Reasonable capital deployment | Meaningful product shipped on under $3M raised |
Example record
Red 6 shows exceptional validation depth with multiple Phase 2 SBIR awards (TACFI, STRATFI) and direct integration into AFSOC MC-130J platforms, plus recent 2024 momentum on ATARS. Team strength appears solid given Phase 2 progression and AFSOC/AETC partnership, though founder names and backgrounds …
Data sources
| Source | What it covers | Update cadence |
|---|---|---|
| SAM.gov | Federal opportunities — OTA, CSO, BAA, SBIR, sources sought | Daily |
| DIU | Commercial Solutions Opening awards and project pages | Weekly (press releases) |
| AFWERX | STRATFI ($3M-$15M), TACFI ($375K-$2M), Spark awards | Weekly (press releases) |
| OSC | Office of Strategic Capital direct DoD loans and investments | As published |
| DARPA | BAAs, program announcements, performer news | As published |
| Y Combinator | YC-backed defense tech companies from batch announcements | Per batch (twice a year) |
| SBIR Phase I | Early-stage awardees — DoD and DoE programs | Per award cycle |
| TechCrunch | Funding rounds for defense-adjacent tech companies | As published |
| DefenseScoop | Program awards and defense tech funding news | As published |
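The cadence column doubles as a staleness check: if a source with a known publish rhythm has produced nothing for well past that rhythm, the scraper is probably broken rather than the source quiet. A sketch under that assumption, with cadences mirroring the table above (the threshold and structure are illustrative, not the pipeline's actual config):

```python
# Expected publish cadence in days per source; None = irregular ("as published").
EXPECTED_CADENCE_DAYS = {
    "SAM.gov": 1,
    "DIU": 7,
    "AFWERX": 7,
    "OSC": None,
    "DARPA": None,
    "Y Combinator": 182,   # per batch, roughly twice a year
    "SBIR Phase I": None,  # per award cycle
    "TechCrunch": None,
    "DefenseScoop": None,
}

def is_stale(source: str, days_since_last_item: int) -> bool:
    """Flag a source whose last ingested item is older than 3x its cadence."""
    cadence = EXPECTED_CADENCE_DAYS.get(source)
    if cadence is None:
        return False  # irregular publishers can't be judged stale this way
    return days_since_last_item > 3 * cadence
```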
Update cadence
Scrapers run nightly. Scoring runs weekly — each new batch goes through the rubric and the LLM rationale. A manual review pass covers the top-scored records before they go into the newsletter: if a record scores 18+ but the rationale reads off, it gets flagged and re-scored. The database compounds over time; old records stay in place even as new ones are added.
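The weekly pass above reduces to a filter over the new batch. A sketch, where `total` stands in for the rubric sum and `rationale_reads_off` stands in for the human reviewer's judgment (both field names are hypothetical):

```python
REVIEW_THRESHOLD = 18  # manual review covers records scoring 18+

def weekly_pass(batch: list) -> tuple:
    """Split a scored batch into records that proceed unflagged and
    records flagged for re-scoring. Only high scorers get the manual
    review pass; a shaky rationale there sends the record back."""
    proceed, rescore = [], []
    for rec in batch:
        if rec["total"] >= REVIEW_THRESHOLD and rec.get("rationale_reads_off"):
            rescore.append(rec)   # high score, shaky rationale: re-score
        else:
            proceed.append(rec)
    return proceed, rescore
```

Note the asymmetry: a low-scoring record with a weak rationale proceeds anyway, because the review pass only covers the top of the distribution.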
Limitations
LLM scoring is calibrated against domain intuition, not against actual transition outcomes. A contract that scores 22/25 might not attract a single qualified bidder if the program office has already pre-selected a vendor. A company that scores 18/25 might still run out of runway.
The rubric will be recalibrated as score-to-outcome data accumulates over six to twelve months. Dimensions that don't predict outcomes will be dropped or reweighted. v0.1 scores are a starting point, not a final answer.
Some data sources rate-limit or publish inconsistently — ingest can be partial. SAM.gov in particular returns incomplete descriptions that require a separate fetch. If a record looks thin, check the source link on the detail page.
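That separate fetch amounts to a follow-up request whenever the description comes back as a link rather than text. A sketch, assuming the field shape below; check the actual SAM.gov API response before relying on these names:

```python
import urllib.request

def hydrate_description(record: dict, timeout: int = 10) -> dict:
    """If a record's description field holds a URL instead of the full
    text, fetch it in a second request. Field names are assumptions --
    verify against the real SAM.gov response shape."""
    desc = record.get("description", "")
    if desc.startswith("http"):  # a link, not text: needs the follow-up fetch
        with urllib.request.urlopen(desc, timeout=timeout) as resp:
            record["description"] = resp.read().decode("utf-8")
    return record
```

Records whose description is already plain text pass through untouched, so the hydration step is safe to run over the whole batch.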