Paid PR Replay audit · Free DIY sample

Would Roam have caught
your last incident?

We replay today's structural detectors against your last 5 / 30 / 90 merged PRs and ship a written report — what the detectors would have flagged pre-merge, the patterns worth wiring into CI, and the single highest-leverage gate to add today.

Same engine as the free CLI · Apache 2.0 · No code uploaded for the DIY sample · 50% credit toward Roam Review

How it works

Three steps from "I'd like a PR Replay" to "here is the report." A synchronous walk-through call closes the engagement.

  1. Kickoff (day 1). You send the repo URL and the time window (or you've pre-filled them at Stripe checkout). We countersign a one-page SOW and our DPA, and you approve the read-only access scope. No code leaves your environment until both are signed.
  2. Replay (days 2–4 for Team, 2–8 for Deep). We clone to a temp working tree and run roam pr-replay across your range; the founder reviews the top findings by hand and drafts the narrative. The per-detector deep-dive is added on the Deep tier.
  3. Walk-through + delivery (day 5 / day 10). 30-min call (Team) or 90-min call (Deep) over your video tool of choice. We walk the report, answer questions, and discuss the recommended CI gates. A polished Markdown + PDF lands in your inbox right after.

After delivery: the temporary clone is deleted, the git ledger we kept of the engagement is yours on request, and the optional 50% credit toward Roam Review activates if you subscribe within 60 days.

Three tiers

Same engine across all three. The difference is window size, founder hours included, and the depth of the recommended-actions section.

Sample

Free · 5 PRs · self-serve

DIY 5-PR sample. Watermarked. No code uploaded — the CLI runs entirely on your machine. Same engine as the paid tiers, just with a smaller window and no founder review.

pip install roam-code && roam pr-replay --tier sample

No email needed. Run it on any repo you have a checkout of.

Team

$2,500 · 30 PRs · 30-min walk-through

Your thirty most-recent merged PRs, scored against the current detector set. Aggregated detector-class breakdown, per-PR ranking, recommended CI gates, written narrative, and a founder walk-through. $1,250 credit toward Roam Review if you subscribe within 60 days. See sample output.

Buy Team — $2,500

Self-serve checkout launches soon; email orders go through today.

Deep

$6,000 · 90 PRs · 90-min walk-through

Everything in Team plus a per-detector deep-dive section, a written 90-day remediation plan against the highest-leverage CI gates, and a 90-minute walk-through call. $3,000 credit toward Roam Review if you subscribe within 60 days. See the sample (Team format; Deep adds the deep-dive section).

Buy Deep — $6,000

Self-serve checkout launches soon; email orders go through today.

Reports ship as Markdown + PDF. Turnaround: 5 business days for Team, 10 business days for Deep, both from kickoff. We never train on or reuse buyer code; the engagement runs against a temporary clone we delete on completion. See the DPA and security policy for details. Email [email protected] with questions before purchase.

What's in the deliverable

Every paid report has the same skeleton. Section depth scales with tier; Deep adds the per-detector deep-dive.

Executive summary: Verdict line, % of PRs that would have been flagged, total high / medium counts.
Detector breakdown: Aggregated table of detector × total findings × ratio of PRs hit. Highlights the highest-impact class.
Per-PR ranking: Top PRs ranked high → medium → total severity, with date, SHA, subject, and top hits.
Per-detector deep-dive: Deep tier only. One section per detector class with up to 5 example PRs each.
Recommended next steps: Top three detector classes to gate on first, plus concrete CLI commands for CI integration.
Roam Review credit: Explicit credit amount and 60-day window, surfaced inside the deliverable, not just on the marketing page.
What's NOT covered: Semantic correctness, security audit, performance profiling, in-flight PR review — set out explicitly.
Methodology: What was measured, what wasn't, and how to reproduce on your own machine.
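The "recommended next steps" section ships concrete CLI commands for CI integration. As an illustration only, a gate wired into a GitHub Actions workflow might look like the sketch below — `roam critique` is the free command mentioned in the FAQ on this page, but the workflow shape, the exit-code behavior on findings, and everything else here are assumptions, not documented CLI surface; the report's own commands take precedence.

```yaml
# Hypothetical CI gate — replace with the exact commands your report recommends.
name: roam-gate
on: pull_request
jobs:
  structural-review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0            # full history, so the PR range can be scored
      - run: pip install roam-code
      # Assumed behavior: a nonzero exit on findings fails the check
      # and blocks the merge.
      - run: roam critique
```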

Markdown + PDF. You can preview the Team-tier shape on GitHub before commissioning.

Who PR Replay is for

Three buyer profiles recur across the engagements we quote.

  1. Teams adopting AI coding agents. You've added Cursor, Claude Code, or Copilot in the last 6 months and the shape of incoming PRs has changed. You want a read on what's slipping through your existing reviewer.
  2. Post-incident retrospectives. You shipped something that broke production. You want to know if a structural-review gate would have caught it pre-merge — and what other PRs in the last quarter look like the same shape.
  3. Pre-purchase signal for Roam Review. You're considering a Roam Review subscription. PR Replay tells you what the gate would have done on your specific codebase before you commit. The 50% credit makes it close to free if you subscribe.
Apache 2.0 engine: The same roam-code CLI you can install yourself runs the analysis. Nothing proprietary in the audit pipeline.
Temporary clone, deleted on completion: We clone your repo to a temp working tree only for the duration of the engagement, and delete it within 7 days of report delivery. The DPA covers retention details.
No training on your code: Contractually committed. Buyer code is never used to train, fine-tune, or evaluate any ML model, ours or any third party's. See the security policy.
EU-based, GDPR-native: Built in Athens; the default processing location is the EU, and Stripe handles billing under SCCs.

Apply this fee toward Roam Review

Half of the engagement fee credits toward your first year of Roam Review if you subscribe within 60 days of report delivery.

Roam Review credit math by PR Replay tier, on the Review Team subscription ($299/mo):

PR Replay tier   Fee      Credit toward Review   Net first-year cost on Review Team
Team             $2,500   $1,250                 $2,338 (was $3,588)
Deep             $6,000   $3,000                 $588   (was $3,588)

Mention the report when subscribing and we apply the credit to the first invoice. Credit is single-use and expires 60 days after report delivery.
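The net-cost figures in the table are easy to verify yourself; the only assumption is the Review Team list price of $299/mo held for twelve months:

```shell
# First-year Review Team list price: 12 months × $299/mo = $3,588.
list=$((299 * 12))
# Net first-year cost after each tier's PR Replay credit:
echo "Team net: \$$((list - 1250))"   # $2,338
echo "Deep net: \$$((list - 3000))"   # $588
```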

Common questions

How do I get the report on a private repo without giving you direct access?

Two paths: (1) you grant us a read-only deploy key and we work from a temporary clone we delete on completion, or (2) you run roam pr-replay locally and send us the JSON envelope plus the markdown, and we add the founder review and ship the polished deliverable. Either way, we both sign the DPA and a one-page SOW.
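The second path can be sketched as below. The `--tier` flag is the only option documented on this page, and the idea that pr-replay writes a JSON envelope alongside a markdown report — plus the filenames — is an assumption, so check `roam pr-replay --help` for the real interface; only the bundling step at the end is meant literally.

```shell
# Hypothetical local run (path 2). Assumed outputs and names:
#   pip install roam-code
#   roam pr-replay --tier team    # assumed: writes replay.json + replay.md

# Stand-in files so the bundling step is runnable as-is:
printf '{"findings": []}\n' > replay.json
printf '# PR Replay (local run)\n' > replay.md

# Bundle and list the contents before sending the archive over:
tar czf pr-replay-bundle.tgz replay.json replay.md
tar tzf pr-replay-bundle.tgz
```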

What if you find nothing material on my repo?

If a Team engagement surfaces zero findings worth wiring into CI, we'll either run it on a different range at no extra cost (if you want broader coverage) or refund 50% of the fee. The point is to learn whether you'd have shipped the same incidents with Roam's flags in front of you — if the answer wouldn't have changed, the engagement underdelivered and we own that.

Can I share the report with my CTO, board, or auditor?

Yes. The report is yours. You own the IP and can redistribute internally without restriction. External attribution (e.g., in a public post-mortem) is appreciated but not required.

Why this and not just running roam critique ourselves?

You can. The CLI is Apache 2.0 and free forever. The paid engagement adds founder review of the patterns that matter for your codebase specifically, a ranked recommendation of which detector classes to wire into CI first, a polished narrative artefact you can hand to leadership or auditors, and a 30- or 90-minute walk-through call.

How fast is the turnaround?

5 business days from kickoff for Team, 10 business days for Deep. Kickoff happens the day we receive payment confirmation and the agreed-on commit range or repo access. Smaller repos usually finish faster.

Do you train ML models on our code?

No. This is contractually committed in the DPA. We never use buyer source code, diffs, comments, metrics, or any derived artefact to train, fine-tune, or evaluate any machine-learning model — ours, one operated for us by a third party, or any third party's own.

Can multiple people on our team join the walk-through?

Yes. We default to "everyone you'd want in a code-review retrospective": eng lead, the engineers who shipped the flagged PRs, sometimes the security lead. We send the report a few minutes before the call so people can skim.

What about the EU AI Act?

PR Replay is evidence-generation tooling, not a regulated service. The EU AI Act Article 12 (record-keeping) only attaches to providers of high-risk AI systems listed in Annex III; code-generation tooling is not in Annex III. If your own product is a high-risk AI system, the report's tamper-evident audit-trail entries are useful as Article 14 human-oversight evidence. The classification call is for you and your DPO.

Try the free sample first, see the tiers, or email [email protected] with questions before purchase.