Vol. II · Issue 03
The Ampersand Report
Friday
ESTABLISHED 2024 · BROOKLYN, NY · QUERIES RUN THIS WEEK: 147,208 · FIRMS MONITORED: 612 · CITATIONS INDEXED: 2.4M
Monitoring · All four leading AI engines + Copilot · Grok on request
AI Visibility Monitoring · for law firms

Your next client asked ChatGPT, Perplexity, or Google for a lawyer. You weren't in the answer.

We monitor ChatGPT, Claude, Perplexity, and Google AI across the queries that bring you cases — in English and Spanish — and tell you every week whether your firm is the answer.

Live probe · RUN #3481 · Hudson & Vega
Engines: ChatGPT · Claude · Perplexity · Gemini
Across thousands of weekly probes, more than four in ten queries for "[city] personal injury lawyer" now return an AI-written answer with no blue link clicked at all.
THE AMPERSAND REPORT · VOL. II · ISSUE 03 · METHODOLOGY APPENDIX
§ 01 — THE SCORECARD

Every query, every engine, every week.

A live monitoring ledger for the queries that matter most to your practice area. Below: a representative week for Hudson & Vega, a Bronx personal-injury firm monitoring 38 tracked queries across four engines.

Hudson & Vega
Bronx · Personal Injury · 38 tracked queries
Visibility Index: 64 / 100
Share of Voice: 18.2%
Week over Week: +6.4 pts
Tracked query · ranks by engine (ChatGPT, Claude, Perplexity, Gemini; fewer than four ranks means the firm went unranked on the remaining engines)
"best personal injury lawyer in the Bronx" · #2 · #4 · #1
"construction accident attorney NYC" · #1 · #2 · #5 · #2
"slip and fall lawyer Bronx" · #6 · #4
"wrongful death attorney Bronx" · #3 · #5 · #4 · #2
"abogado de lesiones personales Bronx" · #1 · #1 · #3
"is Hudson & Vega a good law firm?" · #1 · #1 · #1 · #1

Ampersand audits are confidential to the commissioning firm. The scorecard above is illustrative; your audit is not.

§ 02 — CITATION DIFF

Where the AI looks — and where you're missing.

For every query you don't win, we capture which sources the model actually cited. Patterns emerge quickly: missing directory listings, unclaimed profiles, thin press coverage. These are the levers.

When Hudson & Vega is cited
  1. hudsonvega.com
  2. avvo.com
  3. justia.com
  4. bronxtimes.com
When Hudson & Vega is missed
  1. findlaw.com
  2. lawyers.com
  3. reddit.com /r/legaladvice
  4. quora.com
The pattern

Four directories you haven't claimed account for two-thirds of your missed citations. Claim them and the gap closes.
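At heart, the citation diff is a set comparison: the sources the model leans on only when your firm is absent are the levers. A minimal sketch, using the illustrative domains from the Hudson & Vega lists above (not a real audit):

```python
# Sources the model cited when Hudson & Vega appeared in the answer,
# vs. the sources it cited when the firm was missed
# (domains taken from the illustrative lists above).
cited_when_won = {"hudsonvega.com", "avvo.com", "justia.com", "bronxtimes.com"}
cited_when_missed = {"findlaw.com", "lawyers.com", "reddit.com", "quora.com"}

# Sources the model falls back on only when the firm is absent — the levers.
levers = cited_when_missed - cited_when_won
print(sorted(levers))
# → ['findlaw.com', 'lawyers.com', 'quora.com', 'reddit.com']
```

In this toy example every missed-query source is a lever; in practice the two sets overlap, and the diff isolates the handful of profiles worth claiming first.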

§ 03 — THE LONG VIEW

Your visibility, plotted against every model update.

When OpenAI ships a new model, your citations shift overnight. Ampersand annotates the timeline so you can tell a bad week apart from a trend. The 12-month history belongs to you, exportable as CSV for whatever deck your partners need next.

[Chart · Visibility Index, 12 months, May '25 – Apr '26 · Hudson & Vega vs. category average · Now: 64 · 52-wk high: 71 · low: 38 · Annotated model updates: GPT-5 release (Aug 12), Perplexity Reddit weighting (Feb 4)]
§ 04 — REACH OUT

See where your firm stands. One message.

Tell us your firm and your market. We'll run a sample audit on twenty queries and email you back with what the AI engines actually say about you. No sales call required.

Replies and reports are exchanged partner-to-partner. We do not share, sell, or publish firm names.

We reply within one business day. Replies come from a partner, not a sales rep.


Frequently asked — mostly by managing partners.

Will my report be visible to my competitors?

No. Each audit is delivered to a firm-specific URL accessible only via the link we email the managing partner. We do not publish firm names, do not maintain a public leaderboard, and do not allow firms in the same metro to see each other's reports. The directory of firms we monitor is intentionally not public.

Is this SEO? We already pay for SEO.

No. SEO optimizes for Google's ten blue links. Ampersand measures and optimizes for the paragraph an AI writes when a prospect asks it a question — a different surface, a different retrieval system, and in many practice areas already the bigger source of intake calls. The two work together; neither replaces the other.

How do you get the data? Do the AI companies allow this?

We run queries the same way a prospective client does — real browser sessions, anonymous users, public web UIs. We respect rate limits, rotate sessions, and do not attempt to harvest training data. We maintain written counsel guidance on terms-of-service compliance and make it available to enterprise customers on request.

How is this different from the generalist tools?

Generalist dashboards are built for DTC marketing teams with weekly standups and growth org structures. Law firms have a managing partner and a paralegal. We package recommendations as actions your paralegal can complete in an afternoon, or we do them for you under The Practice retainer.

How many queries do I actually need to track?

For most single-practice firms, 40–60 canonical queries cover 80% of the intent surface. We start by interviewing your intake team for the five phrases they hear most, then backfill from our corpus of 2.4M indexed citations across the vertical.

LLM outputs are random. How do I trust a "#3" ranking?

You shouldn't trust any single run. Every query is run three times per engine per week, and we report the distribution alongside the median. A rank that moves from #2 to #5 only matters if the distribution shifted; most weekly noise falls inside it, and we filter it out of the headline numbers.
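The three-runs-to-one-headline aggregation described above can be sketched as follows. The function name and data shape are illustrative assumptions, not Ampersand's actual pipeline:

```python
from statistics import median

def headline_rank(runs):
    """Collapse repeated runs of one query on one engine into a
    headline number: the median rank, plus the spread of the runs.

    `runs` is a list of ranks from the week's probes for that query
    and engine; None means the firm was not cited in that run."""
    cited = [r for r in runs if r is not None]
    if not cited:
        return None, 0          # never cited this week
    mid = median(cited)
    spread = max(cited) - min(cited)
    return mid, spread

# Three runs of one query on one engine:
print(headline_rank([2, 3, 2]))   # (2, 1) — tight spread; report #2
print(headline_rank([2, 5, 9]))   # (5, 7) — noisy; flag, don't headline
```

A rank whose spread stays narrow week over week is a real signal; a wide spread is the sampling noise the answer above says gets filtered out of the headline numbers.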

Can we export the data?

Yes. CSV and PDF are one click. The full JSON including raw responses and citation metadata is available via API on The Practice tier.

Pilot spots for Q2 2026 — twelve remaining

Find out whether your firm is the answer — before your competitor does.

Request an audit · Talk to an analyst
Fixed fee. Ten business days. No sales call required to get started.
© 2026 Ampersand Labs, Inc. · ATTORNEY ADVERTISING MATERIAL SET IN SOURCE SERIF & INTER TIGHT · PRINTED VIRTUALLY IN BROOKLYN