Conversion Isn't Something Salespeople Do. It's Something We Build.

"It depends which rep gets it."

I hear this sentence in almost every commercial performance conversation I have. Same offer. Similar accounts. Wildly different outcomes. And when I ask why, the explanation usually lands somewhere between skill, timing, and whether the prospect was "really serious."

It all sounds reasonable, even convincing, when heard in isolation.

But when the pattern repeats, across quarters, across territories, across salespeople, it stops being about individuals and starts revealing something structural.

The organisations I work with don't have bad salespeople. They have conversion processes that were never explicitly designed, and over time, those processes have encoded assumptions, handoffs, and thresholds that make some conversions fragile and unpredictable before anyone even tries to close.

Leaders can't see this clearly because the decisions that shape conversion happen upstream, incrementally, and without ever being named as "conversion design."

So when results vary, attention naturally moves to the people closest to the outcome. The rep who closed it. The rep who didn't. The one who "just gets it."

What I see is this: conversion isn't something salespeople do. It's something the organisation has already built.

And until that becomes visible, leaders will keep optimising for effort when the issue is design.

How Conversion Gets Built Without Anyone Noticing

Let me show you what I mean with something I saw recently.

A mid-sized organisation: strong brand, capable team, growing, but not as predictably as the numbers suggested it should be. Everything seemed "normal". Marketing was generating consistent pipeline; a hard-working sales team was closing opportunities regularly. But the outcomes were unpredictable, and no one could explain why in structural terms.

Here's what had happened:

What was decided:

Marketing had defined a lead as "qualified" when someone attended a webinar, downloaded two pieces of content, and worked at a company over a certain size. All reasonable criteria and evidence of interest.

What happened as a result:

Sales received these leads, but had no insight into why the person engaged, what they were trying to solve, or when they needed to make a decision. Some reps built their own qualification questions, while others assumed "qualified" meant "ready" and moved straight to pitching.

What the variability revealed:

Reps who asked early clarifying questions about timing, internal stakeholders, and budget allocation converted at nearly 3x the rate of reps who didn't. But that pattern wasn't documented, it wasn't trained, and it wasn't even discussed.

The structural implication:

The organisation had built a kind of conversion fragility, where outcomes depended on who compensated for what the process couldn't see and didn't deliver. The handoff from marketing to sales transferred activity signals, but not intent. "Qualified" meant different things on either side of the handoff, and unreliable conversion was the inevitable result.

No one decided to make conversion unreliable. But no one designed it to be strong either. It just accumulated over time, through reasonable decisions made in isolation.

That's how most conversion processes actually work.

Why Leaders Keep Looking at People Instead of Design

When I walk leadership teams through this kind of analysis, the first response is often: "Why didn't we see this earlier?"

The answer isn't comfortable, but it's not a failure of intelligence or attention.

Individual explanations feel actionable.

If conversion varies by rep, you can coach the rep, you can change the compensation plan, and you can hire differently. These are things leadership knows how to do, and they produce visible activity quickly.

Structural explanations feel slow and expensive.

If conversion is unreliable because of how qualification is defined, or how handoffs work, or what gets measured, fixing that feels like redesigning the engine while the car is moving. It's not a quarterly project. It's foundational work that competes with hitting the number.

Leaders are measured on outcomes this quarter, not system integrity over time.

Boards and investors want to know: are we growing? Commercial leadership is accountable for delivering that growth now. Diagnosing whether the conversion process is structurally sound is harder to justify when the revenue line is still moving up.

Most organisations don't have language for diagnosing this.

"Sales and marketing alignment" is the closest most teams get. But alignment assumes the handoff works if both sides agree on definitions. It doesn't address what happens when the definitions themselves are incomplete, or when "qualified" encodes assumptions that make conversion fragile downstream.

So of course leaders default to individual explanations. It's not avoidance. It's the only lens most organisations have been given.

But here's what that lens can't explain: why similar opportunities, handled by capable people, keep producing different outcomes.

What Variability Is Actually Telling You

When conversion performance varies wildly between similar opportunities, that's signal emerging from the noise.

It's the organisation revealing what it's actually built, not what it intended to build, but what the accumulated decisions, defaults, and unspoken assumptions have created over time.

Let me give you another example.

An organisation I worked with had a clear qualification framework. Leads had to meet budget, authority, need, and timing criteria before they were handed to sales. Textbook BANT. Everyone agreed it made sense.

But when we looked at what actually converted, we found something uncomfortable.

Opportunities that had been marked "qualified" because they ticked all four boxes converted at roughly 40%.

Opportunities where the rep had a direct conversation with the economic buyer before formal qualification converted at 78%.

The framework wasn't wrong. But it was encoding an assumption: that we could assess readiness from the outside. And that assumption was producing unpredictable conversions, because it prioritised criteria over context.

Some reps had figured this out. They were bypassing the formal process and qualifying through conversation. But the organisation didn't see this as a design issue. It saw it as "some reps are just better."

That's the trap. Variability gets interpreted as skill variance when it's actually design evidence.

When capable people produce inconsistent results from similar inputs, they're not failing. They're revealing that the process itself can't reliably produce the outcome it's supposed to deliver.

What It Feels Like Inside These Organisations

If you're reading this and recognising the pattern, you already know what this feels like.

Forecasting feels like educated guesswork. You've got pipeline, you've got activity, but you can't confidently say which deals will close and which will stall, even when they look identical on paper.

Late-stage pressure keeps increasing. Opportunities that should have closed weeks ago are still "in discussion." Senior leaders get pulled into late-stage rescues more often than they'd like.

Effort isn't translating predictably. Teams are working hard, following the process, doing what they're supposed to do. But results still swing deal to deal, rep to rep, quarter to quarter.

And the explanations keep defaulting to individuals. "We need better closers." "Marketing needs to warm them up more." "This one just wasn't serious."

None of that is wrong, exactly. But none of it addresses why the organisation keeps producing the same variability pattern across different people, different accounts, different quarters.

What's actually happening is this: the people are fine. What we've built can't reliably deliver the outcome it's supposed to produce.

What Becomes Possible When You Can See This Clearly

I'm not going to give you a framework or a fix. That's not the point of this.

The point is to make something visible that most organisations can't currently see: conversion is a design outcome, not a performance outcome.

When that becomes clear, a different set of questions becomes possible.

Not: "Why didn't this particular rep close?"

But: "What made this particular opportunity hard to close?"

Not: "How do we get sales and marketing aligned?"

But: "What are we deciding upstream that's encoding fragility downstream?"

Not: "How do we improve conversion rates?"

But: "What have we built that's producing this variability? And can we see it clearly enough to redesign it?"

These are harder questions. They don't produce a quick fix. But they're the right questions, because they point to the actual mechanism.

And here's what I've learned after years of doing this work:

Organisations that can answer these questions, the ones that can see where conversion is actually being decided, and why it's fragile or strong, don't just improve their numbers.

They stop depending on heroics. They stop blaming individuals. They stop lurching from quarter to quarter hoping the right deals land at the right time.

They start operating like they understand what they've built. And when you understand what you've built, you can rebuild it deliberately.

The Diagnostic Gap

The leadership teams I work with don't have a conversion problem in the traditional sense.

What they have is an opacity problem.

They can see outcomes: win rates, cycle times, pipeline coverage. But they can't see the mechanism producing those outcomes. They can't clearly answer:

  • Where is conversion actually being decided in our process?

  • What assumptions are we making about "qualified" or "ready" that might be incomplete?

  • What's happening at handoffs that's introducing unreliability we can't see?

  • Why do similar opportunities behave so differently?

  • And what does that variability reveal about what we've built?

Until those questions can be answered with evidence, not intuition, conversion will keep feeling like something that happens to the organisation, rather than something the organisation has designed.

And that's the shift that matters, because the problem isn't solved with better tactics, more effort, or even better people.

The shift is seeing conversion as something we build, and then taking responsibility for examining what we've actually built.

Because if we built it, we can see it, and if we can see it, we can rebuild it deliberately.

That's the conversation worth having.

Next

Activity Is Not Demand