
Can AI Sales Tools Predict Your Next Deal with 90% Accuracy?

I recently came across a study that made me question whether we’ve been overthinking our sales research for years. It turns out, we might be spending thousands of dollars on “market discovery” when a well-prompted LLM could give us the answers in seconds.

The researchers did something fascinating: they used LLMs to generate “virtual buyers” and asked them to evaluate products. The goal was to see if an AI could actually mimic the decision-making process of a real human being. As someone building in the AI sales space, I found the results both impressive and a little humbling.

The “Text-First” Trick to Better Sales Insights

The researchers didn’t just ask the AI to “rate this product from 1 to 10.” If you do that, the AI usually gives you a generic, middle-of-the-road answer that’s about as useful as a “maybe” from a prospect who’s ghosting you.

Instead, they used a clever two-step process:

  1. They asked the LLM to generate a full, descriptive textual response—essentially letting the “virtual buyer” vent or praise the product in detail.
  2. They then mapped those text responses onto a numerical scale.
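The two-step process can be sketched in a few lines of Python. Note that `call_llm` is a hypothetical stand-in for whatever chat-completion client you use; here it is stubbed with canned replies so the sketch runs end to end.

```python
def elicit_reaction(persona: str, product: str, call_llm) -> str:
    """Step 1: let the virtual buyer 'think out loud' in free text."""
    prompt = (
        f"You are {persona}. In one short paragraph, give your honest, "
        f"detailed reaction to the following product:\n\n{product}"
    )
    return call_llm(prompt)

def map_to_score(reaction: str, call_llm) -> int:
    """Step 2: map the free-text reaction onto a 1-10 scale."""
    prompt = (
        "Read the buyer reaction below and translate it into a single "
        "purchase-intent score from 1 (would never buy) to 10 (would buy "
        "today). Reply with only the number.\n\n" + reaction
    )
    return int(call_llm(prompt).strip())

# Stubbed LLM so the example is self-contained; swap in a real client.
def fake_llm(prompt: str) -> str:
    if "Reply with only the number" in prompt:
        return "7"
    return "The dashboard looks useful, but I worry about onboarding time."

reaction = elicit_reaction(
    "a skeptical CTO at a 200-person SaaS company",
    "an AI sales-research assistant",
    fake_llm,
)
score = map_to_score(reaction, fake_llm)
```

The key design choice is that the number is derived from the text, never requested directly, which is exactly what keeps the model from defaulting to a bland middle-of-the-road rating.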

By letting the AI “think out loud” first, they unlocked a much deeper level of nuance. In B2B sales, we call this the “discovery” phase. It turns out AI is surprisingly good at simulating the logic of a grumpy CFO or a skeptical CTO if you give it the room to speak.

90% Correlation: Are We Obsolete?

Here is the kicker: the correlation between the “virtual buyers” and actual human buyer behavior was over 90%.

Think about that for a second. That means the AI’s “hallucinated” feedback was almost identical to what real people actually did in the market.

In a B2B context, this is a goldmine. Imagine running your sales deck or your cold outreach script through five different “virtual personas” before you ever hit send. If the AI—trained on billions of pages of human internet discourse—tells you your pitch sounds like corporate fluff, there’s a 90% chance your real prospect will think the same thing.

Is it Magic, or Just Really Good Reading?

This leads to a bit of a mid-life crisis for those of us in sales development. Is the LLM actually “smart,” or is it just that we humans are incredibly predictable?

The AI isn’t a psychic; it’s just the world’s best aggregator. It has read every forum, every Reddit thread, and every “Why I hated this SaaS product” review ever written. It knows our objections better than we do because it’s seen them all.

Sometimes I wonder if we spend too much time on expensive “proprietary research” when we could just be listening to what the collective internet has already said. The data is there—we just haven’t been using AI sales tools to synthesize it effectively.

How to Apply This to Your Sales Workflow

You don’t need a lab to start using this. If you’re preparing for a high-stakes demo, stop guessing what the objections will be.

  1. Create a Persona: Feed the LLM the LinkedIn bio and company description of your prospect.
  2. Ask for the “Why”: Ask it to write a 300-word critique of your proposal from that person’s perspective.
  3. Refine: Fix the holes the AI found before the real prospect sees them.
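The three steps above can be wired together in a short script. Again, `call_llm` is a hypothetical client stubbed here so the sketch runs; the bio, company description, and proposal are placeholder inputs you would replace with your own.

```python
def virtual_prospect_critique(linkedin_bio: str, company_desc: str,
                              proposal: str, call_llm) -> str:
    # Step 1: create the persona from the prospect's public information.
    persona = (
        f"You are the person described in this LinkedIn bio: {linkedin_bio}\n"
        f"You work at this company: {company_desc}\n"
    )
    # Step 2: ask for the "why" -- a free-text critique, not a score.
    task = (
        "Write a roughly 300-word critique of the proposal below from your "
        "perspective. Be specific about weaknesses and likely objections.\n\n"
        f"Proposal:\n{proposal}"
    )
    return call_llm(persona + "\n" + task)

# Stub so the example is self-contained; swap in a real client.
def fake_llm(prompt: str) -> str:
    return ("The ROI section is vague, and the pricing tiers don't "
            "match a company of our size.")

critique = virtual_prospect_critique(
    "CFO, 15 years in fintech, famously cost-conscious",
    "Series B payments startup, 120 employees",
    "Our platform cuts sales-research time by 40%...",
    fake_llm,
)
# Step 3: read the critique and patch those holes before the real call.
```

Run this against a handful of different personas and you get the “five virtual buyers” review of your pitch described earlier, before a real prospect ever sees it.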

We’re moving into an era where “winging it” on a sales call is no longer an option. If an AI can predict your buyer’s mood with 90% accuracy, you’d better be prepared for the other 10%.

At Onsa, we’re obsessed with this kind of efficiency. We believe sales teams should spend less time wondering “will this work?” and more time actually closing. If you’re ready to stop guessing and start using data-driven insights to power your outreach, give Onsa a try.

P.S. Want to stop guessing and start using data? Try Onsa—we’re obsessed with making sales teams more efficient.

FAQ

Q: Can AI sales tools really replace actual customer discovery calls?
A: Not entirely. While AI can predict general trends and common objections with high accuracy (the 90% mentioned above), it can’t account for the specific internal politics or the “human element” of a specific company. Use AI to do the 80% of the heavy lifting, then use your calls to find the remaining 20%.

Q: Why is the “text-first” approach better than just asking for a score?
A: LLMs work by predicting the next token in a sequence. When you ask for a number, you’re limiting its “reasoning” space. When you ask for a text response, you’re allowing the model to activate more of its training data related to sentiment and logic, which results in a more nuanced and accurate final score.

Q: Is this 90% correlation applicable to niche B2B industries?
A: Generally, yes. As long as there is enough public discourse (forums, whitepapers, reviews) about the industry, the LLM can simulate the persona. However, for extremely “stealth” or brand-new categories, the correlation might drop as the AI has less data to draw from.