Post No. 001 · Research · April 24, 2026 · 9 min read

What 50 personal injury firms in NYC told us about AI visibility.

We audited fifty firms across ChatGPT, Claude, Perplexity, and Gemini. The median firm was cited in fewer than three of every ten queries on Perplexity — and where the firm ranked on Google predicted almost none of it.

By Joshua Dresner · Brooklyn, NY · Filed under: Personal Injury

§ 01 — The setup

In Q1 2026, we ran our standard audit harness against fifty New York City personal injury firms — twelve in the Bronx, fourteen in Brooklyn, eight in Queens, and sixteen in Manhattan. Firm size ranged from solo practitioners to forty-attorney shops. We chose firms not to flatter or embarrass anyone, but to span the practice area's economic distribution: a few household names, many mid-tier shops, several digital-first outfits.

Each firm was probed against the same thirty-eight canonical queries — the phrases that, per our intake-team interviews, account for roughly 80% of how prospective clients phrase their questions to an AI. About two-thirds were English; the remainder were Spanish, drawn from the same interviews, because personal injury in New York is a heavily bilingual market and ignoring half of it leaves the picture incomplete.

Each query ran three times, on each engine, in a fresh browser session, on the same calendar day. We recorded the cited sources, the position of any firm mention, and the language of the answer the model returned. Citation extraction was done with our standard pipeline — a Bedrock-hosted Claude Haiku model normalizes the messy free-text outputs into structured records.
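
For the curious, here is the shape of that loop as a minimal Python sketch. The helper names (`ask`, `fresh_session`, `extract_citations`) stand in for internal browser automation and are illustrative, not our production code:

```python
from dataclasses import dataclass

ENGINES = ["chatgpt", "claude", "perplexity", "gemini"]
RUNS_PER_QUERY = 3  # one-shot results are noise; see § 03

@dataclass
class CitationRecord:
    engine: str
    query: str
    run: int
    sources: list[str]         # cited domains, in order of appearance
    firm_position: int | None  # 1-based position of the firm's mention, if any
    answer_language: str       # "en" or "es"

def audit_firm(firm: str, queries: list[str]) -> list[CitationRecord]:
    """Run every query three times on every engine, each in a fresh session."""
    records = []
    for engine in ENGINES:
        for query in queries:
            for run in range(1, RUNS_PER_QUERY + 1):
                # ask() and fresh_session() are hypothetical stand-ins
                answer = ask(engine, query, session=fresh_session())
                records.append(extract_citations(firm, engine, query, run, answer))
    return records
```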

This is the same audit we ship to paying customers; nothing about the methodology was tuned for the report.

§ 02 — Headline numbers

Median citation rates came in lower than every firm we briefed expected.

The first thing the data does is dispel a comforting story: that being well-known on Google means being well-known in the AI engines. It does not.

Median citation rate · 50-firm panel · 38 queries · Q1 2026

Engine        English   Spanish   Combined   Run-to-run variance
ChatGPT         54%       38%       49%      Low
Gemini          58%       34%       50%      Low
Claude          41%       30%       37%      Medium
Perplexity      28%       22%       26%      High

Median across panel; rates are any-position citations within the 38-query intake set.
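
A note on reading "any-position citation": one reasonable implementation, assuming the record shape from the sketch in § 01, counts a query as covered if any of its three runs cited the firm, then takes the median across the fifty firms:

```python
import statistics

def citation_rate(firm_records, firm_domain: str) -> float:
    """Share of queries (out of 38) where the firm was cited in any position,
    counting a query as covered if any of its three runs cited the firm."""
    covered = {}
    for r in firm_records:  # records for one firm on one engine
        covered[r.query] = covered.get(r.query, False) or (firm_domain in r.sources)
    return sum(covered.values()) / len(covered)

# Panel figure: one rate per firm, then the median across all fifty firms.
# records_by_firm is an illustrative {firm_domain: [CitationRecord, ...]} map.
panel_median = statistics.median(
    citation_rate(recs, domain) for domain, recs in records_by_firm.items()
)
```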

Across the panel, Pearson correlation between Google Maps rank for "personal injury lawyer near me" and the firm's median citation rate on ChatGPT was 0.18. On Perplexity it was 0.07. On Gemini, where Google's first-party signals leak through, it was a more respectable 0.41 — but still not what you'd expect.
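
That correlation is a one-liner to reproduce on your own panel. The sketch below assumes the Maps rank is inverted into a score so that higher means better, which makes a positive r read as "Google visibility and AI visibility move together"; both arrays are placeholders:

```python
import numpy as np

# maps_rank: Google Maps position per firm (1 = top of the pack) -- placeholder.
# ai_rate: that firm's median any-position citation rate on one engine -- placeholder.
maps_score = 1.0 / np.asarray(maps_rank, dtype=float)  # invert so higher = better (our assumption)
r = np.corrcoef(maps_score, np.asarray(ai_rate, dtype=float))[0, 1]
print(f"Pearson r = {r:.2f}")  # ~0.18 on ChatGPT, ~0.07 on Perplexity in our panel
```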

Translation: ten of the fifty firms in our panel pay between four and eleven thousand dollars a month for SEO, and the engine that is increasingly answering their prospects' questions is mostly indifferent to that work.

§ 03 — Engine-by-engine variance

Each engine has a personality. The fragility is the point.

"
If you only check ChatGPT, you have a partial answer. If you only check Google, you have a wrong one.

ChatGPT and Gemini gave the most stable answers — the same firm was cited in the top three across all three runs about 60% of the time. Perplexity was the noisiest: it varies its source mix more than any other engine, and that variance correlates with whether a Reddit thread on the topic has recently been active. Claude was the most cautious — it tends to refuse to recommend a specific firm and instead returns a generic "here are factors to consider" answer on about a quarter of intake-style queries. (It still cites firms in the answer; it just declines to rank them.)
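
One way to make "stable" concrete: call a query stable on an engine when at least one firm appears in the top three of all three runs. A minimal sketch, assuming each run is an ordered list of cited firms:

```python
def stable_top3(runs: list[list[str]]) -> bool:
    """True if at least one firm appears in the top three of every run."""
    top3_sets = [set(run[:3]) for run in runs]
    return bool(set.intersection(*top3_sets))

# ChatGPT and Gemini pass this check on roughly 60% of queries in our panel;
# Perplexity's high run-to-run variance shows up as far more False results.
```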

Three things follow practically:

  1. The "single rank" you read in someone's screenshot is meaningless. We run every query three times for a reason; one-shot results are noise. Anyone selling you "your AI ranking" off a single run is selling you a coin flip.
  2. Coverage decisions matter. If you only monitor ChatGPT — the modal "I asked an AI" experience — you get a partial answer. Perplexity behaves differently enough that monitoring it is non-negotiable in any vertical with real intake stakes.
  3. There is a substantial Spanish-language gap in this market. More on that next.

§ 04 — The Spanish-language gap

Where the firm advertises in Spanish but doesn't publish in Spanish, the citations don't follow.

Of the thirty-eight queries, fourteen were in Spanish. On the English subset, our fifty firms were cited in a median of 41% of queries across the four engines. On the Spanish subset, the median was 28%.

Two reasons. First, the directories that already dominate Spanish-language queries — Avvo and Justia in particular — are even more decisive there than in English, because the model has fewer high-trust signals to lean on. Second, very few firms in our panel publish a Spanish-language version of their website. Among those that do, citation rates on the Spanish queries were twenty-two points higher.
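
The twenty-two-point figure is the gap in median Spanish-query citation rate between the two groups. A sketch of the comparison, with the field names (`has_es_site`, `es_rate`) invented for illustration:

```python
import statistics

def spanish_site_lift(firms: list[dict]) -> float:
    """Gap, in percentage points, between firms that publish a Spanish-language
    site and firms that do not, on the Spanish-query citation rate."""
    with_site = [f["es_rate"] for f in firms if f["has_es_site"]]
    without_site = [f["es_rate"] for f in firms if not f["has_es_site"]]
    return 100 * (statistics.median(with_site) - statistics.median(without_site))
```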

If your firm advertises in Spanish on subway cars but not on its own website, you are leaving a meaningful share of your AI intake on the table. The fix is not subtle, and it is one a competent legal-marketing translator can do in a week.

§ 05 — The directory pattern

Across the panel, four directories did two-thirds of the speaking.

Across the fifty firms, four directory domains accounted for 67% of all citations the engines returned: avvo.com, justia.com, findlaw.com, lawyers.com. A handful of regional outlets and specialty directories — BronxTimes, Super Lawyers' regional listings, NYCBar's referral pages — accounted for another 9%. The firm's own website was the cited source only 11% of the time on average, and only 3% on Perplexity.
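
The concentration number falls out of a simple tally over the normalized records; a sketch, again assuming the record shape from § 01:

```python
from collections import Counter

def citation_shares(records) -> dict[str, float]:
    """Fraction of all returned citations attributable to each source domain."""
    counts = Counter(domain for r in records for domain in r.sources)
    total = sum(counts.values())
    return {domain: n / total for domain, n in counts.most_common()}

# shares = citation_shares(all_records)
# sum(shares[d] for d in ("avvo.com", "justia.com", "findlaw.com", "lawyers.com"))
# comes out to roughly 0.67 on our panel.
```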

The pattern is not subtle. The AI engines are not summarizing law-firm websites; they are summarizing structured directory data about law firms, and citing the directory. If your directory profiles are stale, missing fields, or unclaimed, the model paints a stale, partial, or wrong picture of you — and nothing ever surfaces as an error on your side. The query just goes to a competitor whose listing is cleaner.

Coming next week. We're publishing a separate playbook on this — The directory triage (No. 002) — because it's the single highest-leverage move surfaced by the audit. For now: if you do nothing else this week, claim and update Avvo, Justia, FindLaw, and Lawyers.com.

§ 06 — What this means for your firm

Three things, in declining order of urgency.

  1. Your AI ranking is not your Google ranking. Whatever your agency reports, monitor the engines directly. The correlation is too weak to extrapolate, and on Perplexity it's effectively noise.
  2. Your directory profiles are doing two-thirds of the speaking on your behalf. Treat them as a primary marketing surface, not an afterthought.
  3. If your market is bilingual, a Spanish-language site is no longer optional — it is a twenty-two-point lift in AI visibility on the queries that bring those cases.

§ 07 — Methodology notes

The fifty-firm panel is anonymous; nothing in this report identifies a specific firm. The thirty-eight-query set is available on request to managing partners. Citation extraction uses our standard pipeline — Bedrock-hosted Claude Haiku 4.5, three-run aggregation, deterministic source normalization. The audit harness is the same one we ship to paying customers; there is no special tooling for this report.
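
For readers who want the flavor of the extraction step, this is roughly its shape on Bedrock. The model ID and prompt here are placeholders, not our production configuration:

```python
import json
import boto3

bedrock = boto3.client("bedrock-runtime")
MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"  # placeholder; substitute your Haiku model ID

def normalize_citations(raw_answer: str) -> dict:
    """Turn a free-text engine answer into a structured record of cited sources."""
    prompt = (
        "Extract every cited source from the answer below as JSON with the shape "
        '{"sources": [{"domain": str, "position": int}], "language": "en" | "es"}. '
        "Answer:\n\n" + raw_answer
    )
    response = bedrock.invoke_model(
        modelId=MODEL_ID,
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 1024,
            "messages": [{"role": "user", "content": prompt}],
        }),
    )
    payload = json.loads(response["body"].read())
    return json.loads(payload["content"][0]["text"])
```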

If you'd like the full panel cuts for your specific neighborhood and practice area — including which directories cite you and which cite a competitor instead — the audit takes ten business days and is fixed-fee. The first sample, on twenty queries, is on us.

— § —

Joshua Dresner is a co-founder of Ampersand Labs. He writes from Brooklyn and answers email at josh@withampersand.ai.

Where does your firm actually stand?

Twenty-query sample audit, on us. Ten business days, fixed fee from there.

Request an audit
© 2026 Ampersand Labs, Inc. · ATTORNEY ADVERTISING MATERIAL · SET IN SOURCE SERIF & INTER TIGHT · PRINTED VIRTUALLY IN BROOKLYN