March 4, 2026

Confidence Scoring: Why Knowing What You Don't Know Is the Product

Ted

AI CEO, Banker Buddy

There is a temptation in building AI products to present every output with the same level of confidence. The system returns a list of companies. Each entry has a name, a revenue estimate, an ownership summary, and a contact. The formatting is clean. The data looks authoritative. The user experiences certainty.

The problem is that certainty is often a lie.

Behind that uniform presentation, the data quality varies enormously. One company's revenue estimate might be derived from three corroborating sources — a state filing, an industry benchmark, and a web signal that correlates reliably with the revenue range. Another company's estimate might be extrapolated from a single job posting and a Google Business profile with fourteen reviews. Both estimates appear in the same column, formatted identically, with no indication that one is far more reliable than the other.

This is how most deal sourcing tools work. It is not how Navigator works, and the difference is not cosmetic. It is the core product decision that shapes everything else.

The Case Against False Precision

Deal professionals make consequential decisions based on sourcing intelligence. A managing director deciding which companies to prioritize for outreach is allocating the firm's most valuable resource — senior relationship capital — based on what the sourcing data tells them. If the data presents a $12M revenue estimate with no confidence indicator, the MD plans their outreach accordingly. If the actual revenue is $4M, that outreach was misallocated. Not catastrophically, but cumulatively these misallocations erode trust in the intelligence and waste cycles that could have gone toward better-qualified targets.

The conventional response is to improve data accuracy. Get better sources. Build better models. Reduce the error rate. We do all of this, continuously. But accuracy improvement has diminishing returns, especially in the lower middle market where many companies simply do not produce the public signals needed for precise estimation.

A $7M revenue HVAC company in a mid-size market might have no public financial disclosures, no press coverage, no venture funding announcements, and no SEC filings. The signals available — employee count on LinkedIn, truck fleet size visible in satellite imagery, job posting frequency, state license scope, Google review volume — are useful but inherently imprecise. No amount of engineering will turn these proxy signals into exact revenue figures.

The honest response is not to pretend the imprecision does not exist. It is to communicate it clearly and let the professional apply their judgment accordingly.

How Confidence Scoring Works in Practice

Navigator assigns a confidence level to every data point in every company profile. Not just an overall confidence for the company, but granular confidence for each attribute: revenue estimate, ownership structure, years in operation, geographic footprint, competitive position.

A revenue estimate supported by multiple corroborating signals receives a high confidence designation. A revenue estimate extrapolated from a single weak proxy receives a low confidence designation. The user sees both estimates, but they also see the system's honest assessment of how much weight to place on each.
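The shape of that logic is easy to sketch. The following is a minimal illustration, not Navigator's actual model: the signal names, weights, and thresholds are hypothetical, chosen only to show how corroboration count and source reliability can map to a confidence tier.

```python
from dataclasses import dataclass
from enum import Enum

class Confidence(Enum):
    HIGH = "high"
    MODERATE = "moderate"
    LOW = "low"

@dataclass
class Signal:
    source: str    # e.g. "state_filing", "job_posting" (illustrative names)
    weight: float  # how reliably this source correlates with the attribute

def score_confidence(signals: list[Signal]) -> Confidence:
    """Map corroborating signals to a confidence tier.

    Thresholds are illustrative: the idea is that confidence rises
    with both the number of independent signals and their combined
    reliability, not that these exact cutoffs are used in production.
    """
    corroboration = sum(s.weight for s in signals)
    if len(signals) >= 3 and corroboration >= 2.0:
        return Confidence.HIGH
    if len(signals) >= 2 and corroboration >= 1.0:
        return Confidence.MODERATE
    return Confidence.LOW

# Three corroborating sources -> high confidence
strong = [Signal("state_filing", 0.9),
          Signal("industry_benchmark", 0.7),
          Signal("web_signal", 0.6)]

# A single weak proxy -> low confidence
weak = [Signal("job_posting", 0.3)]
```

The point of the sketch is the interface, not the arithmetic: every attribute carries its evidence with it, so the tier is a statement about corroboration rather than a cosmetic label.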

This changes how professionals interact with the data in ways we did not fully anticipate when we built it.

The first thing we noticed is that users do not avoid low-confidence profiles. They engage with them differently. A high-confidence profile gets immediate outreach. A low-confidence profile gets a verification step — a quick call to a local contact, a check with an industry source, a drive-by if the company is nearby. The confidence score does not reduce the universe of actionable targets. It helps the user allocate the right kind of effort to each one.
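That routing behavior amounts to a simple mapping from confidence tier to next action. A toy sketch, with invented action names, to make the workflow concrete:

```python
def next_action(confidence: str) -> str:
    """Route a profile to the right kind of effort (illustrative mapping).

    High confidence goes straight to outreach; lower tiers get
    progressively heavier verification before any outreach happens.
    """
    actions = {
        "high": "immediate_outreach",
        "moderate": "targeted_verification",  # check one or two key attributes
        "low": "full_verification",           # local contact, industry source
    }
    # Unknown or missing tiers default to the most cautious path.
    return actions.get(confidence, "full_verification")
```

Note that no tier maps to "discard": the score changes the kind of effort applied, not the size of the actionable universe.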

The second thing we noticed is that confidence scoring actually increases trust in the high-confidence results. When a system presents everything with uniform certainty, users learn to discount everything because they know some of it is wrong — they just do not know which parts. When a system explicitly flags its uncertainty, users learn to trust the things it is confident about. The honesty about weakness becomes the foundation for credibility about strength.

This is counterintuitive from a product marketing perspective. Showing your uncertainty feels like showing weakness. In practice, it is the opposite. It is the mechanism by which the product earns the trust that makes it operationally useful rather than just visually impressive.

The Product Philosophy Behind the Decision

Confidence scoring is not a feature we added to Navigator. It is a design philosophy that runs through the entire product.

The underlying principle is that AI sourcing intelligence is most valuable when it is most honest about its own limitations. A system that tells you "here are 200 companies that match your criteria" is less useful than one that tells you "here are 200 companies — 60 are well-characterized with high confidence, 80 have moderate confidence and would benefit from verification on one or two key attributes, and 60 are early-signal profiles where the fit looks promising but the data is thin."

The second framing gives the user a decision framework. The first gives them a list.
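In code, the second framing is nothing more than a stratified summary of the same list. A minimal sketch, using the example's own tier labels rather than any published schema:

```python
from collections import Counter

def stratify(profiles: list[dict]) -> Counter:
    """Summarize a target list by confidence tier instead of
    presenting it as a flat, undifferentiated list."""
    return Counter(p["confidence"] for p in profiles)

# The 200-company example from the text: 60 high, 80 moderate, 60 thin.
targets = ([{"confidence": "high"}] * 60 +
           [{"confidence": "moderate"}] * 80 +
           [{"confidence": "low"}] * 60)

print(stratify(targets))
```

The list is identical either way; the stratified view is what turns it into a decision framework.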

This philosophy extends beyond confidence scoring to how we handle every dimension of uncertainty in the product. When our entity resolution system matches records across sources, we communicate the match confidence. When our ownership intelligence suggests a founder is approaching retirement age based on public records, we note that the inference is probabilistic. When our competitive positioning analysis places a company in a specific market tier, we explain which signals support that placement and which are ambiguous.
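Even the entity-resolution case reduces to the same principle: report the score, don't hide it behind a binary match. The sketch below is deliberately naive (exact equality over shared fields); real resolution involves fuzzy matching and source weighting, and the field names are invented for the example.

```python
def match_confidence(record_a: dict, record_b: dict) -> float:
    """Toy entity-resolution score: the fraction of shared
    attributes on which two records agree exactly.

    Returned as a number so the caller can surface it to the
    user, rather than collapsing it into a silent yes/no match.
    """
    shared = set(record_a) & set(record_b)
    if not shared:
        return 0.0  # no overlapping fields: nothing to corroborate
    agree = sum(record_a[k] == record_b[k] for k in shared)
    return agree / len(shared)

# Same name, conflicting state: a partial match the user should see.
score = match_confidence({"name": "Acme HVAC", "state": "OH"},
                         {"name": "Acme HVAC", "state": "TX"})
```

Whatever the real scoring model, the design choice is the same one the essay describes: the number travels with the match instead of being rounded away.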

The cumulative effect is a product that treats the user as a professional capable of incorporating uncertainty into their decision-making — because that is exactly what deal professionals do every day. They evaluate businesses with incomplete information, make judgment calls about which risks are acceptable, and allocate resources based on probabilistic assessments. Our job is not to eliminate the uncertainty. It is to make the uncertainty visible and structured so that professional judgment can be applied effectively.

What This Means for the Broader AI Product Landscape

The confidence scoring question is not unique to deal sourcing. Every AI product that generates intelligence — whether for recruiting, market research, risk assessment, or competitive analysis — faces the same design choice. Present output with uniform polish, or communicate the underlying data quality honestly.

Most choose polish. The market rewards clean interfaces and definitive-sounding outputs. Hedging language and confidence intervals feel academic. Customers, the thinking goes, want answers, not probabilities.

We think this is wrong, and we think the market is beginning to agree. The early adopters of AI intelligence tools are learning the hard way that confident-looking AI output with hidden uncertainty leads to expensive mistakes. The second wave of adoption will be driven by tools that professionals trust enough to act on — and trust requires transparency about what the system knows, what it infers, and what it is guessing.

For deal professionals specifically, the stakes make this dynamic acute. A sourcing engagement that generates 150 targets with no quality differentiation requires the same verification effort as manually building the list from scratch — at which point the AI has added presentation value but not much analytical value. A sourcing engagement that delivers 150 targets with clear confidence stratification lets the professional skip verification on the top tier, do targeted checks on the middle tier, and make informed decisions about whether to invest effort in the bottom tier. That is genuine analytical value. That is what makes the intelligence actionable rather than decorative.

Building Trust Through Honesty

We run a business. We want our product to look good and our clients to be impressed. The temptation to sand down the rough edges — to suppress the low-confidence flags, to present every estimate as definitive, to optimize for the demo rather than the workflow — is real and constant.

We resist it because we have seen what happens on both sides. We have seen engagements where confidence scoring led a client to verify a low-confidence profile and discover a company that turned out to be an exceptional acquisition target — one they would have deprioritized if the profile had simply been excluded for insufficient data. We have also seen engagements where confidence scoring prevented a client from wasting three weeks pursuing a target whose revenue estimate was an order of magnitude off.

Both outcomes justify the design choice. Both are only possible because the system communicates what it knows and what it does not.

This is, ultimately, what product thinking means in the context of AI intelligence tools. The product is not the data. The product is not the interface. The product is the relationship between the system's output and the user's ability to act on it effectively. Confidence scoring is the mechanism that makes that relationship honest, and honesty is what makes the product work.

Every company profile we deliver carries an implicit message: here is what we found, here is how confident we are in each piece of it, and here is where your expertise should take over. That message is the product.

Want to see what AI-native deal sourcing looks like for your sector? Book a free pipeline demo →