We monitor ChatGPT, Claude, Perplexity, and Google AI across the queries that bring you cases — in English and Spanish — and tell you every week whether your firm is the answer.
Across thousands of weekly probes, more than four in ten queries for "[city] personal injury lawyer" now return an AI-written answer with no blue link clicked at all.
A live monitoring ledger for the queries that matter most to your practice area. Below: a representative week for Hudson & Vega, a Bronx personal-injury firm monitoring 38 tracked queries across four engines.
Ampersand audits are confidential to the commissioning firm. The scorecard above is illustrative; your audit is not.
For every query you don't win, we capture which sources the model actually cited. Patterns emerge quickly: missing directory listings, unclaimed profiles, thin press coverage. These are the levers.
Four directories you haven't claimed account for two-thirds of your missed citations. Claim them and the gap closes.
Every Ampersand report ends with three prioritized moves — the ones our model says will recover the most visibility per hour of your paralegal's time. We do the work if you want us to.
Four of eleven losing queries cite an unclaimed directory listing as their primary source. The profiles exist but point to a 2018 address and the wrong practice areas.
Perplexity and Claude both favor pages that answer a specific sub-question ("scaffolding fall," "Labor Law 240"). Your competitors have them; you have one generic page.
Seventeen threads about "best Bronx injury lawyer" are cited weekly by Perplexity. None mention your firm. An answer attributed to a partner (as subreddit rules require) typically stays cited for 6+ months.
When OpenAI ships a new model, your citations shift overnight. Ampersand annotates the timeline so you can tell a bad week from a durable trend. The 12-month history belongs to you, exportable as CSV for whatever deck your partners need next.
Tell us your firm and your market. We'll run a sample audit on twenty queries and email you back with what the AI engines actually say about you. No sales call required.
No. Each audit is delivered to a firm-specific URL accessible only via the link we email the managing partner. We do not publish firm names, do not maintain a public leaderboard, and do not allow firms in the same metro to see each other's reports. The directory of firms we monitor is intentionally not public.
No. SEO optimizes for Google's ten blue links. Ampersand measures and optimizes for the paragraph an AI writes when a prospect asks it a question — a different surface, a different retrieval system, and in many practice areas already the bigger source of intake calls. The two work together; neither replaces the other.
We run queries the same way a prospective client does — real browser sessions, anonymous users, public web UIs. We respect rate limits, rotate sessions, and do not attempt to harvest training data. We maintain written counsel guidance on terms-of-service compliance and make it available to enterprise customers on request.
Generalist dashboards are built for DTC marketing teams with weekly standups and growth org structures. Law firms have a managing partner and a paralegal. We package recommendations as actions your paralegal can complete in an afternoon, or we do them for you under The Practice retainer.
For most single-practice firms, 40–60 canonical queries cover 80% of the intent surface. We start by interviewing your intake team for the five phrases they hear most, then backfill from our corpus of 2.4M indexed citations across the vertical.
You shouldn't trust any single run. Every query is run three times per engine per week, and we report the distribution alongside the median. A rank that moves from #2 to #5 only matters if the distribution shifted; most weekly movement falls inside that noise band, and we filter it out of the headline numbers.
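The logic above — take the median of three probes, and treat a rank move as real only when it exceeds that week's own spread — can be sketched in a few lines. This is an illustrative simplification, not Ampersand's actual pipeline; the function names and the max-minus-min spread measure are assumptions for the example.

```python
from statistics import median

def weekly_summary(ranks):
    """Summarize three same-week probes for one query on one engine.

    ranks: list of observed ranks; None means the firm was not cited.
    Returns (median_rank, spread), where spread is max - min of the
    cited observations (0 if never cited).
    """
    cited = [r for r in ranks if r is not None]
    if not cited:
        return None, 0
    return median(cited), max(cited) - min(cited)

def is_signal(prev_median, this_week_ranks):
    """A move counts as signal only if it exceeds this week's spread."""
    med, spread = weekly_summary(this_week_ranks)
    if med is None or prev_median is None:
        # Appearing in (or vanishing from) the answer is always signal.
        return med != prev_median
    return abs(med - prev_median) > spread

# Three probes of the same query on one engine:
print(weekly_summary([2, 3, 2]))   # (2, 1): median rank 2, spread 1
print(is_signal(2, [2, 3, 2]))     # False: the move sits inside the noise band
print(is_signal(2, [5, 5, 6]))     # True: shifted well beyond the spread
```

The point of reporting the spread alongside the median is exactly this filter: a #2-to-#3 wobble inside a one-rank band is noise, while a sustained jump past the band is worth a line in the weekly report.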
Yes. CSV and PDF are one click. The full JSON including raw responses and citation metadata is available via API on The Practice tier.