GROWTH · 2026-03-24 · 9 min · By Michael Saad

We Use Seven AI Tools at Digital1010. Here's the Routing Logic Behind Every Decision.

Most agencies are still asking “which AI should we use?” That’s the wrong question. We’re not loyal to any model or vendor. We’re loyal to the outcome.

At 4:45 AM every morning, before my alarm goes off, an AI agent reviews yesterday’s activity across all our client accounts, checks for system errors, pulls today’s calendar, and surfaces the two or three things that actually need my attention. It delivers everything to Slack. I wake up to a briefing, not a pile of noise.

No one on my team assembled that briefing this morning. No one reviewed the output. It ran, it worked, it delivered.

That’s the goal behind everything we’ve built at Digital1010: not to have the best AI tool, but to have the right tool for each job, with humans in the loop only when humans are actually needed. We currently route work across seven systems: Claude.ai Pro, Claude Code, Cowork, ChatGPT, Cursor, LM Studio, and OpenClaw. Not because we’re chasing shiny objects, but because each one occupies a distinct position in how work gets done.

Here’s the routing logic.

I don’t have a favorite AI. I have a routing decision.

Most agencies are still asking “which AI should we use?” That’s the wrong question.

We’re not loyal to any model or vendor. We’re loyal to the outcome. The question we ask is: what does this job require and which tool produces the best result at the right cost, with the right level of human involvement?

Sometimes that’s Claude. Sometimes it’s GPT-4o. Sometimes it’s a local model that never touches an external API. The model is a variable. The routing decision is the skill. And it’s the only thing that keeps running seven tools coherent instead of chaotic.

The Human-in-the-Loop Layer

These are the tools where a human is actively present in the work: contributing judgment, reviewing output, making decisions.

ChatGPT: Ideation and Free-Flowing Conversation

ChatGPT is where we think out loud. Brainstorming, early-stage ideation, exploring angles before we’ve committed to a direction. GPT-4o follows tangents well and doesn’t over-structure early thinking, which is exactly what you want when the problem is still fuzzy.

We don’t use it for execution. Once we know what we’re building or writing, the work hands off to the next tool.

Claude.ai Pro: Strategy, Writing, and Quick Access

Claude is our primary work tool. Writing, strategy, research, client-facing content: anything where quality and judgment matter. The reasoning quality at the Pro tier makes it where we live for most knowledge work.

Pro also serves a practical function: it’s mobile-accessible and always available. Quick questions, thinking through a problem between meetings, reviewing something on the go. For deep sessions with full context, we move to Code. They overlap, and that’s fine; the deciding factor is usually where you are and how much context the problem needs.

Claude Code: Deep Sessions with Full Context

When the work requires terminal access, repo awareness, and extended context, Claude Code is the environment. Architecture decisions, complex refactors, PR reviews, anything where the problem has depth that needs to be held across a long session.

Real example: when we rebuilt our client ops dashboard, we used Claude Code to choose between Next.js 14 and Remix. Not because we couldn’t Google it, but because the right answer depended on our specific deployment setup (DigitalOcean + Supabase), our team’s skill gaps, and our long-term platform plans. That conversation saved roughly two weeks of wrong-direction work.

Cursor: In the Editor, Implementation Mode

Once the approach is defined, Cursor handles execution. Editor-native, no copy-paste tax, fast at implementation, excellent at debugging unfamiliar code. It keeps you in flow when leaving the editor would cost momentum.

Real example: a HighLevel webhook integration was failing intermittently on a client account. Cursor explored the event log, middleware stack, and database query patterns simultaneously, found a race condition, and surfaced the fix. Eight minutes. A manual trace would have taken an hour minimum.

Cursor Pro is $20/month flat. For implementation-heavy work, the value is hard to argue with.

The Autonomy Layer

These are the tools and systems where the work runs without us, or has graduated past needing regular review.

Cowork: Deliverables That No Longer Need Your Attention

People assume Cowork is a collaboration tool, a way to add more people to the loop. We use it for the opposite: removing people from the loop on deliverables that no longer require them.

We built the workflow. We defined the standard. We reviewed until we trusted it. Now it runs. The value isn’t adding oversight; it’s earning the right to eliminate it. We only re-enter when something breaks the pattern.

That’s the progression we’re always pushing toward: human-reviewed → human-spot-checked → autonomous.
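
That progression can be made literal in code. Here’s a minimal sketch of the review policy; none of these names come from Cowork, they’re just the logic we apply:

```python
import random
from enum import Enum

class ReviewStage(Enum):
    HUMAN_REVIEWED = "every output reviewed"
    SPOT_CHECKED = "a sample of outputs reviewed"
    AUTONOMOUS = "reviewed only on anomaly"

def needs_human_review(stage: ReviewStage, breaks_pattern: bool,
                       spot_check_rate: float = 0.1) -> bool:
    """Decide whether a deliverable goes to a human before it ships."""
    if breaks_pattern:
        return True  # anything outside the trusted pattern re-enters the loop
    if stage is ReviewStage.HUMAN_REVIEWED:
        return True
    if stage is ReviewStage.SPOT_CHECKED:
        return random.random() < spot_check_rate  # e.g. review ~10% of outputs
    return False  # AUTONOMOUS: ship it
```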

LM Studio: Local Inference for Cron Jobs and Private Data

LM Studio runs on our Mac Studio M3 Ultra with models stored on an external drive. It handles two specific categories: high-volume routine operations where sending every request to an external API is unnecessary, and anything involving data that shouldn’t leave our infrastructure.

Cron jobs that don’t need frontier model reasoning run locally. Client data with sensitivity constraints stays local. Cost is effectively zero beyond hardware. The privacy guarantee is absolute.

It’s not competing with Claude or GPT-4o. It’s handling the work that doesn’t need them.
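
For context, LM Studio exposes an OpenAI-compatible server on localhost (port 1234 by default), so routing a job locally is mostly a base-URL swap. A sketch, assuming the local server is running with a model loaded; the model identifier and prompt here are illustrative:

```python
from openai import OpenAI

# LM Studio's local server speaks the OpenAI API; localhost:1234 is its
# default port. The api_key value is a placeholder the server ignores.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

def summarize_locally(text: str) -> str:
    """Routine summarization that never leaves our infrastructure."""
    response = client.chat.completions.create(
        model="local-model",  # placeholder: use the identifier LM Studio reports
        messages=[
            {"role": "system", "content": "Summarize the text in three bullets."},
            {"role": "user", "content": text},
        ],
        temperature=0.2,
    )
    return response.choices[0].message.content
```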

OpenClaw: Autonomous Multi-Agent Orchestration

OpenClaw is our internal system built on Claude’s API, running 24/7 on a dedicated Mac Mini, accessible remotely via Tailscale. It runs 16 active cron jobs and manages 33 agents across four workflows. Nothing about its daily operation requires human input.

A few of the active jobs (a sketch of a job definition follows the list):

  • 04:45 daily · Morning Briefing: activity review, system health, priorities. Model: DeepSeek
  • 12:00 daily · Intel Midday Scan: competitor activity, SEO, tool releases. Model: DeepSeek
  • 19:30 daily · Evening Digest: compiled intelligence to Slack. Model: DeepSeek
  • 1st of month · Client Reporting: GA4 + GSC pull, formatted and delivered. Model: Sonnet
  • Mon 08:30 · CRM Task Monitor: open pipeline tasks, flags stalled items. Model: DeepSeek
  • Weekly + daily · Content Pipeline: schedule, produce, auto-post, fully autonomous. Model: Mixed
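
OpenClaw’s job schema is internal, so what follows is a hypothetical definition, not the real config; it just shows the fields a job like the morning briefing needs, in standard cron syntax:

```python
# Hypothetical job definition; field names are illustrative, not OpenClaw's schema.
MORNING_BRIEFING = {
    "name": "morning-briefing",
    "schedule": "45 4 * * *",         # 04:45 daily, standard cron syntax
    "model": "deepseek-chat",          # routine summarization: cheap model
    "slack_channel_id": "C0XXXXXXX",   # explicit channel ID, never a channel name
    "timeout_seconds": 120,
    "tasks": [
        "review yesterday's activity across client accounts",
        "check system health and surface errors",
        "pull today's calendar",
        "rank the two or three items that need human attention",
    ],
}
```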

The architecture

Every agent session loads a set of structured context files: AGENTS.md for roles and permissions, SOUL.md for tone and decision hierarchy, TOOLS.md for API access, MEMORY.md for institutional memory, HEARTBEAT.md for what to check every 30 minutes. Think of them as ~/.bashrc for AI: runtime context that gives every session full situational awareness without needing to be caught up.
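
A sketch of what that loading step might look like; the function name and concatenation order are assumptions, the filenames are the ones listed above:

```python
from pathlib import Path

CONTEXT_FILES = ["AGENTS.md", "SOUL.md", "TOOLS.md", "MEMORY.md", "HEARTBEAT.md"]

def build_system_prompt(workspace: Path) -> str:
    """Concatenate the structured context files into one system prompt so a
    fresh session starts with full situational awareness."""
    sections = []
    for name in CONTEXT_FILES:
        path = workspace / name
        if path.exists():  # degrade gracefully if a file is missing
            sections.append(f"## {name}\n{path.read_text()}")
    return "\n\n".join(sections)
```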

Memory is file-backed with hybrid search: 70% vector similarity, 30% full-text keyword matching, across 151 files and 350 indexed chunks. When an agent needs to recall a client issue from three weeks ago, it does without being asked twice. AI without memory is expensive autocomplete. With memory, it becomes an operator.
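
The 70/30 blend is simple to express. A stdlib-only sketch; the keyword scorer here is a crude stand-in for real full-text ranking:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Vector similarity between a query embedding and a chunk embedding."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def keyword_score(query: str, text: str) -> float:
    """Fraction of query terms present in the chunk (stand-in for full-text rank)."""
    terms = set(query.lower().split())
    hits = sum(1 for t in terms if t in text.lower())
    return hits / len(terms) if terms else 0.0

def hybrid_score(query: str, query_vec: list[float],
                 chunk_text: str, chunk_vec: list[float]) -> float:
    """70% vector similarity, 30% keyword match: the blend described above."""
    return 0.7 * cosine(query_vec, chunk_vec) + 0.3 * keyword_score(query, chunk_text)
```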

The cost model

We route by complexity. The morning briefing costs roughly $0.02/day on DeepSeek. The same job on Sonnet runs about $0.15/day. Over a year, across 16 active cron jobs, that differential is real money. Matching model to task is just good engineering.
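
The arithmetic is worth spelling out. Assuming, for illustration, that the briefing’s costs are typical of the other jobs:

```python
# Per-job daily costs from the morning-briefing comparison above; the
# assumption that other jobs cost about the same is ours, for illustration.
DAILY_COST = {"deepseek": 0.02, "sonnet": 0.15}

def annual_cost(model: str, jobs: int = 16) -> float:
    """Projected yearly spend if every active cron job ran on one model."""
    return DAILY_COST[model] * 365 * jobs

print(annual_cost("deepseek"))  # ~$117/year across 16 jobs
print(annual_cost("sonnet"))    # ~$876/year: same output, 7.5x the spend
```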

What we learned the hard way (February 2026)

Misconfigured cron jobs exhausted our Anthropic API quota and took the entire system down: twenty-six jobs at too-short intervals created a cascade failure. The governance rules we run now (a validation sketch follows the list):

  • Hard ceiling of 22 active cron jobs
  • No interval faster than 30 minutes
  • Explicit Slack channel IDs in every config (channel names fail silently; channel IDs don’t)
  • 120-second timeout for complex jobs
  • Sandboxed execution for all subagents
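
A minimal sketch of those rules as config-load checks; the field names match the hypothetical job definition earlier, not any real OpenClaw schema:

```python
MAX_JOBS = 22
MIN_INTERVAL_MINUTES = 30

def validate_jobs(jobs: list[dict]) -> list[str]:
    """Collect every governance violation rather than failing on the first."""
    errors = []
    if len(jobs) > MAX_JOBS:
        errors.append(f"{len(jobs)} active jobs exceeds the ceiling of {MAX_JOBS}")
    for job in jobs:
        name = job.get("name", "<unnamed>")
        if job.get("interval_minutes", MIN_INTERVAL_MINUTES) < MIN_INTERVAL_MINUTES:
            errors.append(f"{name}: interval faster than {MIN_INTERVAL_MINUTES} minutes")
        # Slack channel IDs look like 'C0XXXXXXX'; a bare name like '#ops'
        # fails silently at post time, so reject it at config load.
        if not str(job.get("slack_channel_id", "")).startswith("C"):
            errors.append(f"{name}: slack_channel_id must be an explicit channel ID")
        if not job.get("timeout_seconds"):
            errors.append(f"{name}: no timeout set (complex jobs get 120 seconds)")
    return errors
```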

None of this was obvious upfront. All of it came from breaking something in production.

The Routing Decision

When a task comes in, we’re implicitly running through one question: does this require open thinking, active building, or autonomous execution?

If the job requires ideation, brainstorming, early exploration:
→ Use ChatGPT

If the job requires strategy, writing, judgment on the go:
→ Use Claude Pro

If the job requires deep session, architecture, refactor, code review:
→ Use Claude Code

If the job requires implementation (spec defined, time to build):
→ Use Cursor

If the deliverable has earned its autonomy:
→ Use Cowork

If the job requires high-volume or privacy-sensitive local work:
→ Use LM Studio

If the job is scheduled, recurring, multi-step, autonomous:
→ Use OpenClaw

The underlying principle: the more a task requires human judgment, the more present you should be. The more it’s defined, repeatable, and predictable, the further you push it toward full autonomy.
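
The whole table collapses into a single function. A sketch with hypothetical task attributes; the ordering encodes that principle, checking for autonomy first and defaulting to the human-present tools:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Tool(Enum):
    CHATGPT = auto()
    CLAUDE_PRO = auto()
    CLAUDE_CODE = auto()
    CURSOR = auto()
    COWORK = auto()
    LM_STUDIO = auto()
    OPENCLAW = auto()

@dataclass
class Task:
    scheduled: bool = False           # recurring, multi-step, runs without us
    privacy_sensitive: bool = False   # data that must stay on our hardware
    earned_autonomy: bool = False     # deliverable graduated past review
    spec_defined: bool = False        # approach settled, time to build
    needs_deep_context: bool = False  # architecture, refactor, long session
    exploratory: bool = False         # fuzzy problem, thinking out loud

def route(t: Task) -> Tool:
    """Most-autonomous checks first; the default is human-present work."""
    if t.scheduled:
        return Tool.OPENCLAW
    if t.privacy_sensitive:
        return Tool.LM_STUDIO
    if t.earned_autonomy:
        return Tool.COWORK
    if t.spec_defined:
        return Tool.CURSOR
    if t.needs_deep_context:
        return Tool.CLAUDE_CODE
    if t.exploratory:
        return Tool.CHATGPT
    return Tool.CLAUDE_PRO  # strategy, writing, judgment on the go
```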

A Note on What This Is Actually For

People will read this and assume the goal is to eliminate jobs or eventually my own role. It’s the opposite.

Digital1010 has been running for 14 years. Growth has always meant more overhead, more complexity, more management surface area, more exposure to turnover. Every new hire is a relationship, a training investment, a dependency. That’s not a criticism of people; it’s just the math of scaling a service business.

What we’re building changes that math. Not by removing people, but by removing people from work that shouldn’t require them. The repetitive, the schedulable, the predictable: that gets automated. What remains is the work that actually needs a human: judgment calls, client relationships, creative direction, strategic decisions.

The team we have does more meaningful work because the low-leverage tasks are handled. I run a tighter, more resilient operation because I’m not dependent on any single person remembering to pull a report or check a dashboard. And we can take on more without the complexity that usually comes with it.

That’s the goal. Not fewer people. Better use of the people and the time we have.

What This Actually Buys You

AI without routing logic is expensive chaos. You end up using your most capable model for tasks that don’t need it, and your fastest tool for decisions that deserve more thought.

The agencies making real progress with AI right now aren’t the ones with the best single tool. They’re the ones who’ve built the judgment to match tool to task, model to complexity, and human attention to the decisions that actually require it.

We’re still building this. Every week there’s a workflow to automate, a deliverable to graduate past human review, or a cron job to route to a cheaper model. The system isn’t done; it’s directional.

But the 4:45 AM briefing hits Slack whether I’m thinking about it or not. And that’s the point.

Michael Saad is the founder of Digital1010, a full-service digital marketing agency based in Jacksonville, Florida.

msaad@digital1010.com · digital1010.com · Connect on LinkedIn

If you’re building routing logic like this, I’d be interested to hear how you’re thinking about it.

Want to apply this?

Run an AEO Scan against your own stack.

A free written read of your visibility across ChatGPT, Claude, Perplexity, and Google AIO, delivered in 24 hours. It’s the same diagnostic we run on every new engagement.