
AI Sales Agent Autonomy: When Our AI Started Thinking for Itself

I was reviewing agent logs late one night when I found something I didn’t program.

One of our customers needed annual compensation data to qualify leads. Makes sense—they sell to senior executives and need to verify prospects are at the right level. The problem: most LinkedIn profiles don’t show salary.

Traditional software would hit this wall and throw an error. “Data not found. Please try again.”

Here’s what our AI agent did instead.

While researching a Swiss prospect, the agent noticed their previous role was at a public university. I read through its chain of thought:

“This person was at [University Name]. Since this is a public institution in Switzerland, average compensations are published in a public registry. I’ll fetch that data and use it as an estimate.”

It went to the Swiss government registry. Pulled the average salary for that role. Applied it to the qualification.

No one told it to do this. The agent figured out that public institution salaries are public data—and found a way to get the signal we needed.

I sat there staring at the screen thinking: this is fundamentally different from anything I’ve built before.

Another One That Surprised Me

A different customer is in payment services. Part of their ICP (ideal customer profile) includes which payment methods the prospect’s e-commerce store supports, plus certain wording in return policies.

Normally you’d need a human to read through terms of service documents. Or you’d skip the signal entirely because it’s too manual to scale.

Our agent crawled the prospect’s website, found the terms of service, read through the return policy, and extracted exactly the signals needed for qualification.

The customer was surprised this was even possible. Honestly, so was I.
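
To give a feel for the mechanics, here’s a heavily simplified Python sketch of that extraction step. The real agent reads the page with an LLM rather than keyword matching, and the payment methods and field names here are made up for illustration:

```python
import re
import urllib.request

# Hypothetical ICP signals; the real list depends on the customer.
PAYMENT_METHODS = ["Klarna", "PayPal", "Apple Pay"]

def extract_signals(terms_url: str) -> dict:
    """Fetch a terms-of-service page and scan it for qualification signals."""
    html = urllib.request.urlopen(terms_url, timeout=10).read().decode("utf-8", "ignore")
    text = re.sub(r"<[^>]+>", " ", html).lower()  # crude tag stripping; fine for a sketch
    return {
        "payment_methods": [m for m in PAYMENT_METHODS if m.lower() in text],
        "mentions_free_returns": "free return" in text,
    }
```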

What This Taught Me About the Difference

These moments crystallized something I’d been struggling to articulate:

Software hits a limit and throws an error. You fix it.

An agent finds a workaround. Sometimes brilliant. Sometimes… creative in ways you didn’t expect.

Just like an overly proactive employee.

That’s the real distinction. Not “AI-powered” vs “traditional.” It’s deterministic vs adaptive. Software follows the path you programmed. Agents understand the goal and figure out their own path.
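
A toy example makes the contrast concrete. This isn’t our code, just the shape of the difference in Python: the deterministic function has exactly one path, while the agent-style function holds a goal and a list of strategies it can try:

```python
from typing import Callable, Optional

def qualify_deterministic(profile: dict) -> int:
    """One programmed path. Off the path, you get an error."""
    if "salary" not in profile:
        raise ValueError("Data not found. Please try again.")
    return profile["salary"]

def qualify_agent(profile: dict,
                  strategies: list[Callable[[dict], Optional[int]]]) -> Optional[int]:
    """A goal plus a toolbox: try strategies until one yields a signal."""
    for strategy in strategies:  # e.g. profile lookup, then public-registry estimate
        salary = strategy(profile)
        if salary is not None:
            return salary
    return None  # no path found: escalate to a human instead of erroring out
```

In a real agent the strategy list isn’t hard-coded either; the model proposes strategies on the fly, which is exactly how you end up at a Swiss salary registry nobody planned for.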

The Autonomy Ladder

After enough of these moments, I started thinking about AI tools on a spectrum. I call it the autonomy ladder:

Stage 0: Human does all research, qualification, outreach. Tools just store data.

Stage 1: AI helps find information faster. Human still makes all decisions. Most tools marketed as “AI-powered” live here—they’re really just faster search.

Stage 2: Predefined sequences run automatically. AI handles routine tasks. Human approves exceptions.

Stage 3: AI handles most of the workflow. Human reviews before critical actions. The agent can find creative workarounds—like the Swiss salary lookup.

Stage 4: AI handles end-to-end with minimal oversight. Human sets goals, AI figures out execution. We’re not quite here yet. Maybe that’s good.

Most customers come to us at Stage 0 or 1. They’re doing everything manually, or using tools that are basically fancy search engines. We help them climb to Stage 3—autonomous enough to find creative solutions, with guardrails so they stay in control.
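
If it helps to see the ladder as a configuration knob, here’s a minimal Python sketch. The names and the review rule are mine for illustration, not how Onsa actually exposes this:

```python
from enum import IntEnum

class Autonomy(IntEnum):
    MANUAL = 0       # human does everything; tools just store data
    ASSISTED = 1     # AI speeds up search; human makes all decisions
    AUTOMATED = 2    # predefined sequences; human approves exceptions
    GUARDRAILED = 3  # agent improvises; human reviews critical actions
    FULL = 4         # human sets goals; AI executes end to end

def needs_human_review(level: Autonomy, critical: bool) -> bool:
    """Should a human sign off before this action runs?"""
    if level <= Autonomy.ASSISTED:
        return True      # Stages 0-1: every decision is human
    if level <= Autonomy.GUARDRAILED:
        return critical  # Stages 2-3: humans gate the risky paths
    return False         # Stage 4: minimal oversight
```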

The Part That Keeps Me Up at Night

Here’s the tension with agents: workarounds can be brilliant or dangerous.

The Swiss salary lookup? Brilliant. The agent found public data we didn’t know existed.

But I’ve also seen agents:
- Find “creative” interpretations of guidelines that technically comply but miss the intent
- Access data through unexpected paths that might raise compliance questions
- Prioritize completing the task over following the spirit of instructions

This is why Stage 3 matters—autonomous with guardrails. The agent can improvise, but humans review before anything irreversible happens.
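
In code, that gate can be as small as this sketch. It’s an illustration rather than our actual review system, but it captures the rule: reversible actions flow, irreversible ones wait for a person:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Action:
    name: str
    irreversible: bool  # sending outreach: yes; reading a public page: no

def human_approves(action: Action) -> bool:
    # Stand-in for a real review queue (dashboard, Slack ping, etc.)
    return input(f"Approve '{action.name}'? [y/N] ").strip().lower() == "y"

def execute(action: Action, run: Callable[[], None]) -> None:
    if action.irreversible and not human_approves(action):
        print(f"Blocked: {action.name}. The agent has to find another path.")
        return
    run()  # reversible, or explicitly approved
```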

Full autonomy sounds appealing until you realize you’re trusting a system that might find paths you never imagined. Some of those paths are the Swiss salary registry. Some of them are not.

Why This Matters for Anyone Evaluating AI Tools

If you’re looking at AI sales tools, here’s what I’d ask:

Where on the ladder does this actually sit? Most tools marketed as “AI” are really Stage 1—assisted search with a chatbot interface. True agents operate at Stage 2 or 3. Ask for examples of the tool doing something unexpected but useful.

How does it handle edge cases? Does it throw an error, or find alternatives? Can you see its reasoning when it takes unexpected paths? Transparency matters more than capability.

What guardrails exist? How do you review before critical actions? Can you adjust autonomy levels as trust builds?

The goal isn’t maximum automation from day one. It’s building toward autonomy as you understand how the system thinks.

What Gets Me Excited

The flexibility is what changed my perspective on building software.

Traditional software is deterministic. You program the workflow, it follows the workflow. If the workflow doesn’t cover a case, you get an error.

Agents understand the goal and figure out the path. Sometimes that path goes through a Swiss salary registry. Sometimes it reads through a terms of service page looking for payment providers.

As LLMs get smarter, these workarounds get more sophisticated. The agent that found the salary data today might find ten other creative solutions tomorrow that we haven’t imagined yet.

That’s the promise—and the challenge—of building with AI agents. You’re not programming behavior anymore. You’re shaping judgment.

I’m Bayram, founder of Onsa. If you want to talk about any of this—or share your own “wait, it did what?” agent moments—find me on LinkedIn.