We won the Jacksonville-area family law local pack for our 904 Family Law client in seven months. Top three on every "family lawyer near me," "divorce attorney Jacksonville," and "child custody lawyer Florida" query in the metro. The competitor set we displaced included three firms with bigger ad budgets and more attorneys.
The reason it worked is unromantic. We ran a Google Business Profile cadence weekly, instead of monthly. We responded to every review within 24 hours, including the negative ones, with substantive answers a partner could have written. We rebuilt the practice-area page architecture so each matter type had its own URL with its own internal authority. We tracked all of it inside Orbit so the cadence didn't drift the moment the engagement got busy.
Three things, executed weekly, for seven months. That's the whole game on local. Most agencies sell pieces of this inside a larger SEO retainer. None of the pieces matter without all three running together.
This article is the operational version of what to do, why each thing works, and how to know if you're actually doing it. It's written for any service business: clinics, law firms, MSPs, restaurants, contractors. The algorithm doesn't care which industry you're in. It cares whether you're feeding it the signals it's optimizing for.
What a "near me" search actually is, mechanically
Before the tactics, the mechanics. When someone types "family lawyer near me" into Google on their phone, what fires:
Google reads the query as having implicit local intent. It pulls the user's coarse location from the device. It runs the query against the Google Business Profile index for that geographic area, weighted by category match, relevance, and prominence. It returns the local pack, the three highlighted business listings with map pins, above the organic blue links. The local pack is usually the only thing the user sees on the first scroll.
Three things determine which businesses make the local pack:
Relevance: does the business's GBP category, services, and content match the query intent? "Family lawyer" matches a profile categorized as "Family law attorney" with practice areas listed for divorce and custody. It does not match a profile categorized as "General practice attorney" with no practice areas listed.
Distance: how close is the business's verified address to the user's device location? You can't fake this one. If you're 22 miles from the searcher and a competitor is 4 miles, the competitor wins on this signal.
Prominence: how known and active is the business? This is the bucket where almost all the optimizable signals live: review count and quality, GBP post velocity, Q&A activity, citation consistency, backlinks, mentions in the local-news cluster, and on-site authority for the relevant practice areas.
The gap between sitting outside the local pack and ranking inside it is almost entirely prominence work. You can't move your address. You can't change Google's category list. You can move every prominence signal aggressively in 30 days if someone is actually doing the work.
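The three factors above can be pictured as a blended score. This is a toy sketch under loud assumptions: Google publishes no weights, and the `Listing` shape, the distance decay curve, and every coefficient here are invented for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Listing:
    name: str
    relevance: float    # 0-1: how well category/services match the query
    distance_mi: float  # verified address to the searcher's device, in miles
    prominence: float   # 0-1: reviews, post velocity, citations, links

def local_pack(listings, pack_size=3):
    """Rank listings by a toy blend of the three signals.

    Distance becomes a decaying score so closer wins; the weights
    are illustrative guesses, not Google's published formula.
    """
    def score(l):
        distance_score = 1.0 / (1.0 + l.distance_mi / 10.0)
        return 0.35 * l.relevance + 0.25 * distance_score + 0.40 * l.prominence
    return sorted(listings, key=score, reverse=True)[:pack_size]
```

With two otherwise identical firms at 4 and 22 miles, the closer one wins on the distance term alone, which is the point of the example above: distance is fixed, so the work happens in the prominence term.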
What the algorithm is rewarding right now (2026 update)
The local algorithm changes, but the rewarded signals haven't shifted dramatically in the last few years. As of mid-2026, here's what we see moving the needle.
GBP post cadence. Google rewards profiles that publish on the GBP itself, not just on the firm's website. Weekly minimum. Monthly is "we've given up." The posts don't need to be polished; they need to be real. New service offerings, seasonal promotions, recent case outcomes (where ethics rules permit), photos from operations. We post weekly through Orbit's GBP module so the cadence is enforced operationally rather than depending on whether the marketing team remembers.
Specific reviews. The local algorithm reads review content, not just the star rating. A review that says "helped us through a complicated probate matter when my mother passed in March" scores stronger relevance for probate queries than a review that says "great service, thanks!", even though both are five stars. We've trained intake teams to ask post-engagement clients for specific reviews mentioning the matter type, the responsiveness, and what the outcome meant. The lift in relevance signal is meaningful.
Review response substance. Generic "thanks for the review!" responses don't help. A response that names the matter, references the outcome (within ethics rules), and reads like a real person wrote it scores stronger as a relevance signal. We respond to every review within 24 hours with a real partner-voice response. Including the negative ones. Especially the negative ones.
Q&A activity. The Q&A section on a GBP is largely abandoned by most businesses. We treat it as a publishing surface. We seed common questions ourselves. We answer every prospect-asked question within a day. The number of GBP profiles where the Q&A is empty or filled with unanswered prospect questions is wild. Activating it is a 30-minute monthly task that visibly moves rankings.
Service area accuracy. Most GBPs were claimed at launch with a default service radius. The radius hasn't been touched since. Updating service areas seasonally, especially for service businesses that operate by territory, sends a signal to the algorithm that the profile is actively maintained. The act of updating matters as much as the specific update.
On-site practice-area depth. This is where most legal and healthcare local SEO programs leak. The website has a single "practice areas" or "services" page with a dropdown. Each practice gets a paragraph. None of them rank because none of them have enough authority to rank as standalone entities.
The fix: every practice area gets its own URL with 2,000+ words, FAQs, attorney or clinician bios specific to that practice, real outcomes (where ethics permit), and a CTA tied to the right intake routing. Search engines treat each practice area as a separate topical entity. Prospects treat them as proof of specialization. The lift on long-tail near-me queries (family lawyer near me, divorce attorney near me, custody lawyer near me) is meaningful because each query is now mapping to a deep page instead of fighting for the same shared /practice-areas/ slug.
The weekly cadence we actually run
The discipline that separates a local-pack listing from no listing isn't strategic. It's operational. Here's the program we run on every local-search engagement, in the order we run it during the week:
Monday: review queue triage. Every review from the prior week, across Google, Facebook, Yelp, BBB, and any vertical-specific sites, gets a response. Within 24 hours of receipt where possible. Substantive, specific, partner-voice. Negative reviews get the longest, most specific responses because they're the highest-stakes communication the firm will have with a prospect that week.
Tuesday: GBP posts. Two posts ship, typically a service highlight or seasonal post, plus an offer or update. Photo, copy, CTA, link. Live on the profile by Tuesday afternoon. The posts go live as the engagement runs, not as a one-time backfill.
Wednesday: Q&A scan. Any prospect-asked question on the GBP gets a real answer. We also seed one or two new questions ourselves with substantive answers, the questions a real prospect would have asked but didn't. The Q&A section gets read by Google's local algorithm and by humans on the SERP. Both score better when it's active.
Thursday: review velocity work. We coordinate with the firm's intake team on which clients from the prior week are in a position to be asked for a review. The ask itself is templated for specificity: matter type, responsiveness, outcome. A 12 percent ask-to-review conversion rate is industry standard. We push it to 18-22 percent through better timing (within 48 hours of resolution, not three weeks later) and better prompting.
Friday: ranking + activity audit. We pull the week's local pack rankings across the firm's primary geo and the secondary terms we're working. Trend deltas get logged in Orbit. Any drop of 3+ positions on a tracked term triggers a Monday investigation. The local algorithm has weekly volatility we can read against the broader trend by tracking it formally instead of spot-checking.
Monthly: service area + category audit. Service areas refreshed. Category list re-validated against the firm's actual offerings. Any new services added to the GBP. Photo refresh on the profile. The monthly review keeps the profile reading as actively maintained, which is its own signal.
Quarterly: competitive set audit. Who else is in the local pack we're trying to rank in? What do their GBPs look like? What review velocity are they running? Where are the gaps in their on-site practice-area architecture we could exploit? Quarterly competitive intel sharpens the work for the next quarter.
That's roughly six to nine hours per week of operator time per local-search engagement. It's not glamorous. It's the discipline that delivers the results.
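The Friday audit step reduces to a diff over two weekly snapshots. A minimal sketch, assuming a `{term: position}` dict shape and the `flag_drops` name, neither of which is Orbit's actual API:

```python
def flag_drops(prev, curr, threshold=3):
    """Compare two weekly rank snapshots ({term: position}) and return
    terms that dropped `threshold`+ positions, for Monday investigation.

    A term missing from the current snapshot counts as falling out of
    tracking and is flagged with a position of None.
    """
    flagged = {}
    for term, old_pos in prev.items():
        new_pos = curr.get(term)
        if new_pos is None or new_pos - old_pos >= threshold:
            flagged[term] = (old_pos, new_pos)
    return flagged
```

Logging the flagged terms each Friday is what turns the local algorithm's weekly volatility into a readable trend instead of a spot-check.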
How to know if your current SEO retainer is actually doing this work
Three diagnostic questions to ask whoever runs your local SEO program.
"Show me last month's GBP posts." Pull up the GBP. Count the posts. If there were fewer than four, the cadence is wrong. If they're auto-generated marketing-style posts that don't reference real services or real activity, the substance is wrong.
"Show me last month's review responses, including the one-star reviews." A good local program responds to every review within 24 hours. The negative reviews get the longest, most specific responses. If the responses are generic "thanks for the review" templates or, worse, the negative reviews are unanswered, the agency isn't running a real program.
"Show me the practice-area pages with their internal authority report." A good local program has a separate URL for every practice area, with each one over 2,000 words, with internal links from the homepage and from related blog posts. If the program's report doesn't include this view, they're not optimizing for the long-tail near-me queries that drive most of the prominence signal.
If any of those three checks comes back weak, the retainer isn't doing local SEO. It's doing reporting on local SEO. Different products.
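The three checks above can be scripted against exported data. A sketch under stated assumptions: the record shapes (`published`, `received`, `responded`, `word_count` fields) and the `audit_retainer` name are invented here, not a real export format.

```python
from datetime import datetime, timedelta

def audit_retainer(gbp_posts, reviews, practice_pages):
    """Run the three diagnostic checks on simple record dicts.

    Assumed shapes: posts have a 'published' datetime, reviews have
    'received'/'responded' datetimes ('responded' may be None), and
    pages have 'url' and 'word_count'.
    """
    findings = []
    # Check 1: at least four GBP posts in the last 30 days.
    cutoff = datetime.now() - timedelta(days=30)
    recent = [p for p in gbp_posts if p["published"] >= cutoff]
    if len(recent) < 4:
        findings.append(f"cadence: only {len(recent)} GBP posts in 30 days")
    # Check 2: every review answered, and answered within 24 hours.
    for r in reviews:
        if r["responded"] is None:
            findings.append(f"reviews: unanswered review from {r['received']:%Y-%m-%d}")
        elif r["responded"] - r["received"] > timedelta(hours=24):
            findings.append("reviews: response slower than 24 hours")
    # Check 3: each practice-area page clears 2,000 words.
    for page in practice_pages:
        if page["word_count"] < 2000:
            findings.append(f"architecture: {page['url']} under 2,000 words")
    return findings
```

An empty findings list doesn't prove the retainer is good, but a non-empty one is enough to start the conversation.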
What we won't do, and why
A note on the list of "best practices" that get sold as local-SEO essentials but don't actually move rankings.
Citation-building campaigns past the obvious twenty. Yelp, Bing Places, Apple Maps, BBB, and the four or five vertical-specific directories that matter for your industry are worth building. After that, citation count plateaus as a signal. Spending on a 200-citation package is mostly a transfer of money to the citation vendor.
Quarterly "local SEO content" packages without practice-area architecture. Generic "5 things to know about hiring a divorce lawyer" listicles don't rank for "divorce lawyer near me." The query needs an entity-specific page that answers "here is who we are, what we do, where we operate, and how to reach us." Listicle content can supplement this work but doesn't replace it.
Schema-only optimization without GBP work. Schema markup helps the website rank organically. It does almost nothing for the local pack. The local pack reads the GBP, not the website's structured data. Doing schema right is a different project from doing local right.
"Reputation management" services that suppress negative reviews. Beyond not moving rankings, suppression is against Google's terms and ethically suspect. A real local-search program treats negative reviews as the highest-leverage communication the firm has and answers them substantively in public. Trying to bury them is the wrong mechanism.
Why we run this in Orbit
Orbit is the operational layer we built for ourselves before we sold it. The reason: every local SEO engagement we ever ran on a "manual cadence" eventually decayed. Month one, the team was sharp. Month four, the GBP posts were a week late. Month seven, the review responses were going out three days after receipt instead of one. Month twelve, the engagement looked the same as the engagement we replaced.
The cadence has to be enforced operationally, not motivationally. Orbit holds the weekly tasks, dashboards the activity, alerts when something drifts past 24 hours, and surfaces the ranking trend so the firm can see what's actually working. It's the same operating system we run on our own marketing. We sell it to clients because it's what made our own discipline survive contact with reality.
If your local-search program is producing inconsistent results, it's almost always not because the strategy is wrong. It's because the operational discipline drifted. The fix is the discipline, not a different agency.
If you want to see what an honest audit of your current local-search program looks like, the next step is a 30-minute conversation. We do this for two to four firms a quarter as a free entry point. The output is a punch list of what's working, what's drifting, and what to do about it. If the work is right for us, we'll scope it. If it's better handled by a contractor or in-house, we'll point you there.
