Early access · Multifamily acquisitions

Underwrite multifamily deals from a data room — with a citation on every line.

Drop your sources. Get a structured wiki, a cross-document discrepancy report, and an Excel export. Every figure traces back to the page it came from — so you can trust it before you sign the LOI.

  • Source citation on every extracted field
  • Cross-document reconciliation
  • Multi-sheet Excel export
  • Hard cost cap per ingest
Counsel · Oak Street Apartments
Reconciled
  • OM rent exceeds rent roll on 12 of 84 units

    critical

    Stated average $1,425 vs actual $1,275. Pro forma overstates revenue.

    −$1,847/mo · −$22,164/yr

    OM · p. 14 · Rent Roll · p. 3
  • 3 leases expired before estimated close date

    warning

    Units 102, 207, 311 — month-to-month risk at takeover.

    Lease 102 · exp. 2025-11-30 · Lease 207 · exp. 2025-12-15 · Lease 311 · exp. 2026-01-08
  • T12 income $47,310 below OM pro forma

    warning

    Trailing 12 actual gross income materially below sponsor projection.

    −4.2% vs pro forma

    T12 · Q4 summary · OM · p. 22
3 of 47 findings · ingest · 2m 14s
What it ingests
  • T12 · Trailing financials
  • RR · Rent roll
  • OM · Offering memo
  • Lease · Lease abstracts
  • Amend · Amendments
  • FEMA · Flood zone
  • ACS · Demographics
  • FMR · Fair market rent
The problem

Underwriting a deal is six hours of grunt work — and the catastrophic miss is the one you didn't have time for.

Deal flow you can't underwrite

Brokers send five OMs a week. You have time for one. The deals you skip are the deals you lose.

Sponsor numbers don't tie

Stated OM rents drift from the rent roll. T12 income drifts from the pro forma. Catching it manually means line-by-line on a Saturday.

No analyst to QC you

Solo shop. No second pair of eyes. One missed lien or expired lease can blow up the deal — and the LP relationship.

How it works

From a folder to an IC-ready review in one command.

Extraction is strictly separated from inference. The pipeline pulls only what's literally on the page; reconciliation does the cross-document reasoning. That's why the citations hold up.
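That split is easy to picture in code. A minimal sketch with illustrative names (not the tool's internals): the extraction stage emits literal values pinned to a file and page, and only the reconciliation stage compares across documents.

```python
# Extraction output: literal values only, each pinned to (file, page).
# Values mirror the demo finding earlier on this page; names are illustrative.
om_avg_rent = {"value": 1425, "source": ("om.pdf", 14)}
roll_avg_rent = {"value": 1275, "source": ("rent_roll.xlsx", 3)}

def reconcile(a, b, tolerance=0.01):
    """Cross-document check: flag two sources that disagree beyond tolerance."""
    gap = a["value"] - b["value"]
    if abs(gap) / max(a["value"], b["value"]) > tolerance:
        return {"finding": "values disagree", "gap": gap,
                "cites": [a["source"], b["source"]]}
    return None  # within tolerance: no finding

finding = reconcile(om_avg_rent, roll_avg_rent)
# The finding carries both citations, so the discrepancy report links back.
```

Because the extraction stage never computes, a citation always points at a literal value on a page, never at something the model inferred.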

  1. Step 01

    Drop the data room

    Point it at a folder of PDFs, Excel, DOCX, and images. No formatting required.

    Output

    sources/ 42 files · 218 MB

  2. Step 02

    Pipeline runs

    Classify, extract, verify, reconcile. Cost is capped — set --max-cost and walk away.

    Output

    ingest · standard · cap $25.00

  3. Step 03

    Wiki + counsel built

    One page per unit. Discrepancies surfaced. A counsel report drafted with risks and quick wins.

    Output

    wiki/ 84 unit pages · counsel.md

  4. Step 04

    Excel + chat the deal

    Multi-sheet workbook ready for your model. Ask follow-ups in plain English; pin the answers.

    Output

    export · 6 sheets · queries/*

Before and after

Same deal, same data room, two different Saturdays.

A 120-unit acquisition, 38 files, 210 MB. Messy — the way they actually arrive. Here's how the day goes with and without the pipeline.

Manual diligence

Excel, coffee, and hope.

  • 8:00 AM · Open the OM. 44 pages. Start summarizing in a notebook.
  • 8:45 AM · Crack the rent roll XLS. Build a unit-by-unit summary in a second tab.
  • 10:00 AM · T12 doesn't tie to rent roll × 12. Spend an hour hunting why.
  • 11:30 AM · Concessions scrubbed off the rent roll. Dig through the lease folder to reconstruct.
  • 1:00 PM · Lunch. Still behind.
  • 2:30 PM · Rent roll flags three leases expiring in 60 days. Start finding the signed copies.
  • 3:45 PM · Can't find the signed lease for unit 204. Email the broker. Wait.
  • 5:00 PM · Excel model is 60% done. Call it. Finish tomorrow.
  • Sunday · Rebuild the IC deck. Numbers are “probably right.” Hope nothing blows up Monday.
Total · ~9 hrs + Sunday cleanup
Data Room Analyzer

One command, a structured artifact.

  • 9:00 AM · Drop the folder. dataroom ingest --max-cost 25.
  • 9:01 AM · Walk the dog. Make coffee. Pipeline runs unattended.
  • 9:28 AM · Open counsel.md. Discrepancies surfaced, quick wins ranked by dollars.
  • 9:35 AM · T12 vs rent-roll gap flagged automatically — with the exact GL lines behind it.
  • 9:40 AM · Three unsigned leases listed by unit number. Citation on every one.
  • 9:50 AM · Open the Excel export. Paste into your underwriting model.
  • 10:15 AM · Decide: LOI, pass, or follow-up with specific, sourced questions for the broker.
  • 10:30 AM · Saturday back.
Total · ~90 min, citations included

Honest note: this won't make a bad deal good. It gets you the structured review four hours faster — which means the bad deal gets rejected today, not next weekend. And the good deal gets a clean IC memo backed by citations your LPs can verify.

Features

Built for the part of underwriting you don't want to do twice.

Source citations

Every number traces back to the page it came from.

Extraction is built on a SourceReference model — file, page, confidence. No more chasing where a figure came from when an LP asks. The citation is right there next to the value.

  • Per-field provenance and confidence
  • Click through from the wiki to the source PDF
  • Audit trail in wiki/lint.md and the run log
Unit 204 · lease abstract
  • Tenant: Reyes, M. (lease.pdf · p.1)
  • Monthly rent: $1,275 (lease.pdf · p.2)
  • Lease start: 2024-07-01 (lease.pdf · p.1)
  • Lease end: 2025-06-30 (lease.pdf · p.1)
  • Security deposit: $1,275 (lease.pdf · p.3)
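In code, that model is small. A minimal sketch assuming dataclass-style records; the field names follow the file/page/confidence description above, but the actual schema may differ.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SourceReference:
    """Provenance for one extracted field: which file, which page, how confident."""
    file: str
    page: int
    confidence: float  # 0.0-1.0, assigned by the extraction pass

@dataclass(frozen=True)
class ExtractedField:
    name: str
    value: str
    source: SourceReference

# Illustrative record matching the unit 204 panel above
rent = ExtractedField(
    name="monthly_rent",
    value="$1,275",
    source=SourceReference(file="lease.pdf", page=2, confidence=0.97),
)
# Render the citation chip next to the value
print(f"{rent.value} [{rent.source.file} · p.{rent.source.page}]")
```

Freezing the records means a citation can't drift away from the value it belongs to once extraction is done.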
Counsel report

An opinionated deal review, not a transcript.

A multi-pass LLM review — financial analysis, lease quality, pattern recognition, risk stratification, quick wins, and negotiation leverage. Embedded directly in your Excel export.

  • Single-pass (~$0.36) or multi-pass (~$2.10) on a 100-unit deal
  • Customize sections by editing prompt templates — no code
  • Goes straight into the Excel workbook for IC
counsel.md
Sonnet · multi-pass
Executive summary

Risk: elevated. Loss-to-lease and three expired leases create $22k of near-term revenue exposure. Negotiation leverage exists on price reduction tied to T12 reconciliation.

Quick wins
  • Reset 12 below-market units at renewal: ~$22k/yr
  • Resolve unsigned leases before close: 3 units
  • Submeter water on Building B: ~$8k/yr
Public-data enrichment

Flood zone, demographics, and FMR — alongside the docs.

Geocode the address and layer in FEMA, Census ACS, and HUD Fair Market Rents. Cached, concurrent, and written to the wiki next to your extractions.

  • FEMA flood zone lookup
  • Census ACS tract demographics
  • HUD FMR by MSA and bedroom count
enrich · public data
  • FEMA · Zone X (outside 500-yr floodplain)
  • Census ACS · $74,210 (median household income, tract)
  • HUD FMR · $1,310 (2BR Fair Market Rent, MSA)
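The cached-and-concurrent shape is simple to sketch. The fetchers below are stubs that return the demo values above; the real tool calls the FEMA, Census ACS, and HUD endpoints, and the names here are illustrative.

```python
from concurrent.futures import ThreadPoolExecutor
from functools import lru_cache

# Stub lookups standing in for the real API calls; lru_cache means a
# re-run on the same address never re-fetches.
@lru_cache(maxsize=None)
def fema_zone(lat, lon):
    return "Zone X"

@lru_cache(maxsize=None)
def acs_median_income(lat, lon):
    return 74_210

@lru_cache(maxsize=None)
def hud_fmr_2br(lat, lon):
    return 1_310

def enrich(lat, lon):
    """Run the three public-data lookups concurrently for one geocoded address."""
    with ThreadPoolExecutor(max_workers=3) as ex:
        zone, income, fmr = ex.map(lambda f: f(lat, lon),
                                   [fema_zone, acs_median_income, hud_fmr_2br])
    return {"flood_zone": zone, "median_income": income, "fmr_2br": fmr}

result = enrich(32.7767, -96.7970)  # illustrative coordinates
```

Concurrency matters because the three sources are independent; the slowest lookup, not the sum, sets the wall-clock time.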
Saved queries

Your custom diligence checklist gets sharper every deal.

Promote a chat answer to a versioned wiki page. Re-run it as the data room evolves and get a diffable revision history — your private playbook, applied to every new deal.

  • Pin valuable answers as wiki/queries/<slug>.md
  • Reruns append revisions; nothing gets overwritten
  • Use as a portable diligence checklist across deals
wiki/queries/
  • /below-market-units (rev 3): Which units are below market by >10%?
  • /expiring-90d (rev 2): Which leases expire in the next 90 days?
  • /concession-burn (rev 1): What concessions roll off in Q1?
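A sketch of the append-only revision mechanic, with an illustrative file layout and function names (the real format may differ):

```python
import tempfile
from datetime import date
from pathlib import Path

def pin_revision(slug, question, answer, root):
    """Append a new revision block to wiki/queries/<slug>.md; never overwrite."""
    page = Path(root) / f"{slug}.md"
    page.parent.mkdir(parents=True, exist_ok=True)
    if not page.exists():
        page.write_text(f"# {question}\n")
    rev = page.read_text().count("## rev ") + 1  # next revision number
    with page.open("a") as f:
        f.write(f"\n## rev {rev} · {date.today()}\n\n{answer}\n")
    return rev

root = tempfile.mkdtemp()  # stand-in for the wiki/queries/ directory
q = "Which leases expire in the next 90 days?"
pin_revision("expiring-90d", q, "Units 102, 207, 311.", root)
rev = pin_revision("expiring-90d", q, "Units 207, 311.", root)  # re-run appends
```

Because every re-run appends rather than replaces, a plain `diff` between revisions shows exactly what changed as the data room evolved.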
vs. ChatGPT or Claude direct

You can paste an OM into a chatbot. You shouldn't trust the answer.

General-purpose chat is great for one-shot questions on a single PDF. For the cross-document reasoning that matters in diligence — and for citations that survive an LP's scrutiny — the structure is the product.

| Capability | ChatGPT / Claude direct | Data Room Analyzer |
| --- | --- | --- |
| Setup | Paste a PDF | Drop a folder, run pipeline |
| Cross-document reasoning | Bound by context window | Reconciliation across the data room |
| Source citations | Page numbers often hallucinated | Bound to file + page on every field |
| Persistent wiki | Per-conversation only | Browseable, diffable, re-runnable |
| Diligence checklist audit | — | Deterministic — no LLM call |
| Multi-sheet Excel export | Copy-paste only | Counsel embedded in workbook |
| Public-data enrichment | Web tool only | FEMA, Census, HUD — cached |
| Cost control | $20/mo flat | Per-deal hard cap with --max-cost |

Honest note: a chatbot can absolutely answer a question about an OM. The difference shows up at scale across a full data room — when you need the same answer to be reproducible, sourced, and exportable.

Read the full comparison
Pricing

Pay per deal up front. Subscribe when it earns its keep.

Early access — pricing below is what we'll launch with. The first deal is genuinely free; you keep the wiki and the export either way.

First deal
Free · one data room

Run the full pipeline on one deal. Keep the wiki and the export.

  • Full ingest + counsel report
  • Wiki + Excel export
  • Bring your own Anthropic API key
  • API costs paid directly to Anthropic
Recommended
Solo
$149/month · billed annually

For the syndicator underwriting deals every week. Unlimited data rooms.

  • Unlimited deals
  • Counsel multi-pass mode
  • Saved queries with revision history
  • Public-data enrichment (FEMA, ACS, HUD)
  • Bring your own Anthropic API key

Monthly $179. Annual saves $360/yr.

Team
Custom · multi-seat

For acquisitions teams with shared deal pipelines and audit needs.

  • Everything in Solo
  • Shared deal workspace
  • Per-seat run logs and cost attribution
  • Centralized Anthropic key
  • SSO and SOC 2 on roadmap
FAQ

Questions worth answering directly.

Why not just paste the OM into ChatGPT?
Honest answer — for one-off questions on a single document, it's fine. The gap shows up on a real data room: cross-document reconciliation (OM vs rent roll vs T12), source citations that don't drift, a persistent wiki you can re-query as the deal evolves, and an Excel export that goes straight to your underwriting model. Chat is a transcript; this is a structured artifact. Read the full comparison.
What if a document is a scanned photo of a rent roll from 2003?
The pipeline runs OCR before extraction. Quality on truly bad scans is uneven — we'll flag low-confidence extractions in the review queue rather than silently guessing. You can correct any classification with one command and the correction is logged.
Where does my data live?
Locally, in your project folder. Source documents stay in sources/, the generated wiki in wiki/. The only thing that leaves your machine is what goes to Anthropic's API (via your own key) for classification and extraction. A centralized, team-managed key is on the roadmap for the Team tier.
How much does it cost to ingest a typical deal?
For a 100-unit property with a normal data room, expect roughly $5–25 in API spend on the standard preset. The pipeline accepts a hard --max-cost cap — hit it and the run exits cleanly with partial results saved. There are no surprise bills.
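The cap mechanic can be sketched in a few lines: price the next call before making it, and stop cleanly once the cap would be crossed. The names and stub pricing below are illustrative, not the tool's internals.

```python
def estimate_cost(doc):
    # Stub: price by document size. The real pipeline prices model calls.
    return 0.01 * len(doc)

def process(doc):
    return doc.upper()  # stand-in for classify/extract/verify

def run_with_cap(docs, max_cost):
    """Never start a call that would cross the cap; partial results survive."""
    spent, results = 0.0, []
    for doc in docs:
        est = estimate_cost(doc)
        if spent + est > max_cost:
            break  # exit cleanly, keeping everything processed so far
        results.append(process(doc))
        spent += est
    return results, spent

results, spent = run_with_cap(
    ["rent roll", "om", "t12", "offering memo"], max_cost=0.15
)
```

The key property is that the check happens before each call, so the cap is a ceiling on spend, not a tripwire that fires after the bill has already grown.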
Can I trust the LLM not to hallucinate a number into my IC memo?
Extraction is strictly separated from inference. The extraction pass pulls only what is literally on the page, with a source reference and confidence score on every field. Cross-document reasoning happens in a separate reconciliation step, clearly labeled. The lint pass catches the mechanical misses (missing leases, broken references) without an LLM call at all.
What document types are supported?
Out of the box: rent rolls, leases and amendments, T12 financials, offering memos, and a long tail of common diligence documents. Adding a new type is a YAML schema + a prompt template — no code change needed.
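A hypothetical example of what such a schema could look like; the document type, field names, and syntax here are illustrative, not the tool's actual format:

```yaml
# hypothetical schema for a new document type (all names illustrative)
doc_type: insurance_certificate
classify_hints:
  - "certificate of insurance"
fields:
  carrier:        { type: string, required: true }
  policy_number:  { type: string, required: true }
  coverage_limit: { type: currency }
  expiration:     { type: date, required: true }
prompt_template: prompts/insurance_certificate.md
```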
Early access

Run your next deal through it before you read the next OM.

The first data room is free. If it earns its keep, the rest are $149/mo. If it doesn't, you keep the wiki and the export anyway.

We won't share your email. Unsubscribe with one click.