Engineering · 6 min read

We Fought Vendor Lock-In for Decades. AI Agents Are Walking Right Back Into It.

Google locked paying subscribers out of their accounts for connecting through OpenClaw. The walled gardens are going up — and most agent teams aren't ready.

Illustration of a massive wall being built around a glowing AI brain with developers trying to reach through

We spent 20 years escaping vendor lock-in. Open APIs won. REST won. "Don't build on someone else's platform" became gospel.

And now, with AI agents, we're speed-running the same mistakes all over again.

The Incident That Should Worry You

Last week, a thread on the Google AI Developers Forum exploded. Paying Google AI Ultra subscribers — people spending real money for API access — woke up to restricted accounts. No warning email. No grace period. Just locked out.

Their crime? Connecting through OpenClaw via OAuth.

The thread hit Hacker News and racked up 465 upvotes and 378 comments in hours. The frustration was visceral. These weren't free-tier abusers. They were paying customers using a standard protocol to connect their own tools.

Google's response? Silence, mostly.

This Isn't New. We've Seen This Movie.

Remember when Twitter killed third-party clients? When Google shut down Google Reader? When Apple decided which apps could exist on your phone?

The playbook is always the same:

  • Build an open ecosystem to attract developers
  • Wait until everyone depends on it
  • Tighten the walls

AI model providers are entering act two. The APIs are open for now. The OAuth flows work for now. But the moment agents start consuming serious compute through third-party orchestration layers, the incentives flip.

Google doesn't want you routing through OpenClaw. They want you inside their ecosystem — their UI, their tools, their upsell path.

Why Agent Teams Should Care More Than They Do

Here's the uncomfortable part. Most agent setups I see have a single model provider hardcoded into their stack. One API key. One vendor. One throat to choke.

If that provider decides tomorrow that agent-based access violates their ToS — and we just watched it happen — your entire automation pipeline goes dark. Not in a week. Overnight.

This isn't theoretical risk management. It happened to real people last Tuesday.

The Fix Is Boring (and That's the Point)

The teams that shrugged off last week's incident had one thing in common: provider abstraction.

Not fancy multi-model routing. Not some elaborate failover architecture. Just basic hygiene:

  • Multiple provider credentials configured — so when one locks you out, traffic shifts
  • Model-agnostic prompts — avoiding provider-specific features that create invisible lock-in
  • Local fallbacks for critical paths — even a smaller open-source model beats a 503
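The three items above boil down to one pattern: put a thin abstraction between your agent and any single vendor. Here's a minimal sketch in Python — the provider names and the simulated lockout are hypothetical, and a real version would wire in each vendor's SDK behind the same interface:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Provider:
    name: str
    complete: Callable[[str], str]  # prompt -> completion

class ProviderChain:
    """Try providers in order; fall through on any failure.

    Put a local open-source model last so a vendor lockout
    degrades quality instead of taking the pipeline down.
    """
    def __init__(self, providers: list[Provider]):
        self.providers = providers

    def complete(self, prompt: str) -> str:
        errors = []
        for p in self.providers:
            try:
                return p.complete(prompt)
            except Exception as e:  # auth errors, 429s, 503s, ...
                errors.append(f"{p.name}: {e}")
        raise RuntimeError("all providers failed: " + "; ".join(errors))

# Hypothetical wiring: swap in real SDK calls per provider.
def restricted_vendor(prompt: str) -> str:
    raise PermissionError("account restricted")  # simulated lockout

def local_model(prompt: str) -> str:
    return f"[local fallback] {prompt}"

chain = ProviderChain([
    Provider("vendor-a", restricted_vendor),
    Provider("local", local_model),
])
print(chain.complete("Summarize the incident report"))
```

The point isn't the ten lines of failover logic — it's that your agent code calls `chain.complete()` instead of a vendor SDK directly, so swapping providers is a config change, not a rewrite.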

The Bigger Picture

The AI agent ecosystem is at an inflection point. We're building increasingly autonomous systems on top of APIs controlled by a handful of companies who have zero obligation to keep the door open.

That's not a technology problem. It's a governance problem. And the developer community is sleepwalking into it because the APIs work today.

The history of software tells us exactly how this plays out. The only question is whether we learn from it this time or pretend it's different because it's AI.

What I'd Do This Week

If you're running agents in production, take thirty minutes and answer three questions:

  • What happens if your primary model provider restricts agent access tomorrow?
  • How long would it take to switch to an alternative?
  • Do you have the credentials and config ready, or would it be a scramble?

If the answer to that last one is "scramble," you've got your weekend project.
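One way to turn that last question into something checkable is a startup probe that fails loudly when backup credentials are missing. A minimal sketch — the environment variable names here are made up; substitute whatever your stack actually reads:

```python
import os

# Hypothetical names; use whatever your stack expects.
REQUIRED_KEYS = ["PRIMARY_API_KEY", "BACKUP_API_KEY", "LOCAL_MODEL_PATH"]

def readiness_check(env: dict[str, str]) -> list[str]:
    """Return the config entries that are missing or empty."""
    return [k for k in REQUIRED_KEYS if not env.get(k)]

missing = readiness_check(dict(os.environ))
if missing:
    print("Not ready to fail over. Missing:", ", ".join(missing))
```

Run it in CI or at agent startup, and "scramble" becomes a failing check you see weeks before you need the fallback.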

The walled gardens are going up. The question isn't whether your provider will change the rules. It's whether you'll be ready when they do.

Have you been hit by a provider restriction? Or already built multi-provider failover into your agent stack? Drop a comment — I'm genuinely curious how teams are handling this.