

Thought Leadership · April 8, 2026 · 14 min read

What the specifications don't say -- and why questions matter more than answers

A 200-page specification never tells you everything. What it does not say is often what wins or loses the contract. The real danger is not ambiguity -- it is the silent inference that resolves it on your behalf.

By Aléaume Muller


This article extends The throughput trap, where we showed that a cognitive model must make its hypotheses explicit. Here, we enter the most dangerous case: when information is absent, and the system -- human or artificial -- fills the void in silence.

The structural problem no one names

A 200-page specification gives the illusion of exhaustiveness. Two hundred pages of technical requirements, administrative clauses, compliance tables. Everything appears documented.

It is an illusion.

No specification contains all the information needed to respond to it. This is not a drafting flaw. It is structural. The author of the specifications cannot make explicit what they themselves do not know, what they consider self-evident, what belongs to the tacit knowledge of their organization, or what was deliberately left vague to avoid constraining bidders.

Knowledge of the information system is never fully documented. Internal political trade-offs are never written down. The decision-maker's real priorities -- those that do not appear in the scoring criteria -- are never formalized.

Result: every bidder works with an incomplete puzzle. And the difference between a winning response and an off-target one is decided in the missing pieces.

Key takeaway: A 200-page specification never tells you everything. What it does not say is often what wins or loses the contract.


One sentence, two strategies, one contract

Take a concrete case. An application maintenance contract for a critical business application of a local government authority. The application is being decommissioned -- a replacement is planned, but the timeline is uncertain.

Buried in 200 pages of specifications, one sentence:

"The contractor shall provide support for the IS transformation within the scope of the evolving application perimeter."

One sentence. Seventeen words. No specification of what this "support" entails. No indication of what "IS transformation" concretely means in this context.

Bidder A reads this sentence and reasons as follows: "The transformation is covered by a separate contract -- you can see it in the lot structure. Support means answering questions from the project team managing the migration. Occasional assistance. We size 0.2 FTE."

Bidder B reads the same sentence and reasons differently: "If the client mentions support in the maintenance lot, it is because they expect the maintenance contractor to play an active role. Managing interdependencies between the old and new systems. Drafting requirement specifications for module adaptation. Perhaps even data migration. We size 1.5 FTE with an architect profile."

Same specification. Same sentence. Two interpretations. Two diametrically opposed strategies. Two incomparable costings. And one contract.

| | Bidder A | Bidder B |
| --- | --- | --- |
| Interpretation | Occasional support, answering questions | Active role: interdependencies, requirement specs, migration |
| Sizing | 0.2 FTE | 1.5 FTE |
| Price impact | ~20K EUR/year | ~150K EUR/year |
| Risk if wrong | Under-sizing --> amendment --> conflict | Over-sizing --> price too high --> elimination |

The client had something intermediate in mind -- structured but limited support, estimated at 0.5 FTE. They did not write it because, from their perspective, it was "obvious."

Nothing is ever obvious in a specification. What is obvious to the author is invisible to the reader.


Silent inference: the systemic risk

This case illustrates a risk that bid managers intuitively recognize but never formalize: silent inference.

Faced with ambiguous or absent information, the human brain does not stop. It does not say "I don't know." It fills the void. Automatically. Without flagging it. This is the completeness bias -- a variant of confirmation bias: we construct a coherent story from available elements, ignoring the missing ones.

The experienced bid manager has one advantage: they doubt. After twenty proposals, they have learned that their first interpretations are often wrong. They know that "IS transformation support" can mean ten different things. They verify. They ask questions. Or at minimum, they mentally formulate the hypothesis before committing to it.

The rushed bid manager does not have this luxury. They read, interpret, and draft. The inference happens in silence, and the rest of the response is built on this invisible foundation.

"The danger is not ambiguity. It is the silent inference that resolves it on your behalf -- without telling you it did so."


When AI amplifies the problem

Here is the scenario that should worry every pre-sales director.

An AI tool analyzes the specifications. It encounters the sentence "IS transformation support." It has no doubt. It has no experience from twenty proposals whispering "careful, this is ambiguous." It has a statistical model and context.

The chatbot (level 1) does what you ask. If you tell it "summarize this section," it summarizes -- silently integrating its interpretation into the summary. You will never know it made a choice. The ambiguous sentence disappears, replaced by a paraphrase that seems clear. The perfect mirror of your own biases.

The autonomous agent (level 2) goes further. It analyzes the entire specification, cross-references information, and produces a structured deliverable. But when it encounters "IS transformation support," it resolves the ambiguity in the most statistically probable way -- and continues. The inference is buried in a 50-page document. Invisible. Unflagged. And it contaminates everything downstream: team sizing, timeline, costing, executive summary.

The result: a perfectly coherent proposal, rigorously structured, beautifully written -- and built on a foundation that may be false. Industrial mediocrity with an extra coat of varnish.

Key takeaway: Poor use of AI does not produce visible errors. It produces invisible certainties. And an invisible certainty built on a false hypothesis causes more damage than an obvious mistake.


Questions and answers: the phase everyone neglects

In a public tender, the Q&A phase is a right. Candidates can submit questions to the contracting authority, which publishes the answers to all bidders.

It is, by far, the most underestimated phase of the response process.

How most teams handle Q&A:

  1. The bid manager scans the specifications looking for obvious inconsistencies
  2. They draft 5-10 questions, often administrative ("Can you confirm the pricing schedule should be signed on page 3?")
  3. They submit the questions within the deadline, check the box
  4. They read the client's responses -- including those asked by competitors -- and possibly note a point or two

This is an administrative treatment of a strategic issue.

What the Q&A phase should be:

A structured intelligence operation. Every question asked is an opportunity to:

  • Validate or invalidate a critical hypothesis. "The specifications mention IS transformation support. Could you clarify whether this includes managing interdependencies with the future system, or whether it is exclusively support for the project teams?" -- This single question can change your costing by 130K EUR.

  • Signal your understanding to the client. The questions you ask are a signal. A client who reads "could you clarify the scope of the IS transformation support, specifically the management of inter-system interdependencies" thinks: "this bidder understood the complexity of the subject." A client who sees no question on this point thinks: "either they didn't read, or they didn't understand."

  • Obtain information that competitors did not ask for. The answers are public, but the questions determine what gets clarified. If you ask the right question, the client makes explicit a point that no one else saw -- everyone receives the answer, but only you had identified the need upstream.

| Q&A approach | Type of questions | Strategic value |
| --- | --- | --- |
| Administrative | "Can you confirm the pricing schedule format?" | None -- information already in the tender rules |
| Surface clarification | "Could you specify the ticket volume?" | Low -- marginal adjustment |
| Hypothesis validation | "Does IS transformation support include managing inter-system interdependencies?" | High -- can change the entire strategy |
| Strategic intelligence | "What is the relative criticality of application X compared to the new system being deployed?" | Very high -- reveals implicit priorities |

The discipline of the explicit hypothesis

The real antidote to silent inference is not asking more questions. It is making every hypothesis explicit before building on it.

The principle is simple -- and almost never applied:

1. Identify ambiguity zones. Not the obvious inconsistencies. The zones where the specification allows multiple reasonable interpretations. These are the most dangerous -- because they do not trigger any alarm.

2. Formulate the hypothesis explicitly. Not "we assume that..." in a corner of your mind. In writing. "Hypothesis H-14: IS transformation support (section 4.3.2) is limited to occasional support for the migration project team, without intervention on application interdependencies. Confidence: medium. Impact if wrong: team sizing +1.3 FTE, costing +130K EUR/year."

3. Test the hypothesis. Q&A phase if the timeline allows. Cross-analysis with other sections of the specification. Comparison with similar previous contracts. Sector expertise.

4. Document the dependency. If the hypothesis cannot be validated before submission, explicitly mark every section of the response that depends on it. "The sizing in section 3.4 relies on hypothesis H-14. If this hypothesis is invalidated, the costing must be revised by +130K EUR/year."

This is exactly how a cognitive model operates as described in our vision: every inference is traceable, every hypothesis is contestable, every dependency is visible.
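As a rough illustration only -- not an actual TenderGraph schema -- such a hypothesis register can be sketched in a few lines of Python. The field names are invented for the sketch; the values are taken from the H-14 example above.

```python
from dataclasses import dataclass, field

# Sketch of an explicit hypothesis record (steps 1-4 above).
# Field names are illustrative, not an actual TenderGraph schema.
@dataclass
class Hypothesis:
    hyp_id: str                      # e.g. "H-14"
    statement: str                   # the interpretation being assumed
    confidence: str                  # "low" | "medium" | "high"
    impact_if_wrong: str             # consequence of a wrong assumption
    dependent_sections: list[str] = field(default_factory=list)
    status: str = "open"             # "open" | "validated" | "invalidated"

h14 = Hypothesis(
    hyp_id="H-14",
    statement="IS transformation support (section 4.3.2) is limited to "
              "occasional support for the migration project team.",
    confidence="medium",
    impact_if_wrong="team sizing +1.3 FTE, costing +130K EUR/year",
    dependent_sections=["3.4 Team sizing", "5.1 Pricing"],
)
```

The point of the record is not the code: it is that the hypothesis now exists as an object that can be listed, challenged, and marked invalidated -- rather than living silently in one person's head.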

Key takeaway: The unformulated hypothesis is the most dangerous hypothesis. It can be neither tested, nor corrected, nor even identified when it proves false.


What a cognitive system does when faced with ambiguity

The difference between an AI tool that amplifies the problem and one that solves it comes down to a single discipline: making the implicit explicit.

A cognitive model does not resolve ambiguities in silence. It flags them.

Concretely, when the system encounters the sentence "IS transformation support" in a specification:

  1. It identifies the ambiguity. The term "support" is polysemous in a maintenance context. The system detects that multiple interpretations are compatible with the text.

  2. It formulates competing hypotheses. "H1: occasional support (0.2 FTE). H2: active role in managing interdependencies (1.5 FTE). H3: intermediate position, coordination with migration team (0.5 FTE)."

  3. It evaluates the impact. "Costing gap between H1 and H2: 130K EUR/year. Risk: eliminatory if miscalibrated."

  4. It generates the Q&A question. "Recommended question for the Q&A phase: Could you clarify the expected scope of the IS transformation support mentioned in section 4.3.2, specifically whether it includes managing interdependencies with the future system?"

  5. It traces the dependency. Every section of the response that depends on this hypothesis is marked. If the hypothesis changes after the Q&A, the system automatically propagates the adjustments.

This is not magic. It is methodology. The same methodology a senior bid manager would apply if they had the time -- except the system does it for every sentence, every requirement, every ambiguity in the document. Not just the 3-4 the bid manager catches between meetings.
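To make the impact-evaluation step concrete, here is a minimal sketch: measure the costing spread across the competing hypotheses and flag the ambiguity when the spread is material. The figures come from the worked example; the materiality threshold is an invented illustration.

```python
# Sketch of step 3, impact evaluation: measure the costing spread across
# competing hypotheses and flag the ambiguity when the spread is material.
# Figures come from the worked example; the threshold is illustrative.
scenarios = {
    "H1 occasional support": {"fte": 0.2, "cost_eur_year": 20_000},
    "H2 active role":        {"fte": 1.5, "cost_eur_year": 150_000},
    "H3 coordination":       {"fte": 0.5, "cost_eur_year": 50_000},
}

costs = [s["cost_eur_year"] for s in scenarios.values()]
spread = max(costs) - min(costs)   # 130K EUR/year between H1 and H2

MATERIALITY_THRESHOLD = 50_000     # illustrative cutoff
needs_qa_question = spread >= MATERIALITY_THRESHOLD
```

A 130K EUR/year spread clears any reasonable threshold: the ambiguity is material, so it earns a Q&A question rather than a silent default.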


The courage of the "dumb" question

There is a powerful psychological barrier against good questions: the fear of appearing incompetent.

"If I ask what 'IS transformation support' means, the client will think I didn't understand the specifications."

The opposite is true. The client who reads your question thinks: "This bidder identified an ambiguity the others missed. They understand the complexity of our context." It is a signal of maturity, not incompetence.

The best questions are those that make the client say: "That's a good point, we weren't clear on that." Because this reaction means you identified a point the author themselves had not formalized. And that is exactly what the client looks for in their future partner: someone who sees what they themselves cannot see.

Client references prove you succeeded in the past. Questions prove you understand the present.

"The most dangerous question is the one you did not ask. The second most dangerous is the one you did not even know needed asking."


What TenderGraph does -- and what no one else does

Most AI tools for tenders treat the specification as text to parse. They extract requirements, generate responses, produce deliverables. And when they encounter "IS transformation support," they do exactly what an LLM does: they choose the most probable interpretation and continue. In silence.

TenderGraph does not fall into this trap.

When TenderGraph encounters ambiguous information, it does not resolve it. It notes it. This is a fact: this sentence admits multiple readings. And the system works from this fact -- not from an assumption.

It evaluates scenarios. Not one. Several. "Hypothesis A: occasional support, 0.2 FTE. Hypothesis B: active role, interdependency management, 1.5 FTE. Hypothesis C: intermediate position, coordination, 0.5 FTE." Each scenario is documented with its impact on costing, sizing, and strategy.

It tests them. The system confronts each hypothesis against the rest of the specification. Does the lot structure mention a separate lot for transformation? Does the pricing schedule include a dedicated line? Does the timeline include migration milestones? Every clue is weighed, every contradiction is flagged. Hypotheses that do not survive this confrontation are discarded -- and those that hold are reinforced.

Everything is traceable. The complete reasoning -- hypotheses formulated, evidence sought, contradictions found, scenarios discarded and those retained -- is available in the intermediate documentation. Not buried in a model's weights. Written in black and white, auditable, contestable.

The human sees. The bid manager, the one who decides, does not receive a finished deliverable built on invisible foundations. They see that the system formulated hypotheses. They see what those hypotheses rest on. They see what happens if they are wrong. They can say: "Hypothesis B is correct, I know this client." Or they can say: "We don't know -- let's ask the question."

The sales team does its job. If the Q&A phase is open, TenderGraph has already formulated the question. All that remains is to send it. The client's answer settles it. And when it arrives, the system does not treat it as a marginal comment: it transforms the hypothesis into an established fact. Reliable information. A cornerstone for the cognitive edifice under construction -- not a grain of sand in an uncertain foundation.

From that point, everything that depends on this hypothesis -- sizing, costing, strategy, executive summary -- realigns automatically. Not because the AI "rewrites everything." Because the reasoning is explicit, and a new fact propagates naturally through a system that traces its dependencies.
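A hedged sketch of what such propagation could look like, assuming a simple mapping from hypotheses to the sections that depend on them -- the identifiers and section names are hypothetical:

```python
# Sketch of dependency propagation: once the Q&A answer settles a
# hypothesis, every section registered against it is queued for revision.
# Identifiers and section names are hypothetical.
dependencies = {
    "H-14": ["3.4 Team sizing", "5.1 Pricing", "1.0 Executive summary"],
    "H-15": ["4.2 Timeline"],
}

def sections_to_revise(resolved: str, deps: dict[str, list[str]]) -> list[str]:
    """Return the sections to revisit now that the hypothesis is settled."""
    return deps.get(resolved, [])

to_revise = sections_to_revise("H-14", dependencies)
```

Nothing here is intelligent; that is the point. Once dependencies are recorded explicitly, propagation is a lookup, not a rewrite.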

"Other AI tools build castles on sand hoping it won't rain. TenderGraph digs down to bedrock before laying the first stone."


What to remember

Partial information is not an accident of the tender process. It is its very nature. Every specification is a structurally incomplete document -- by necessity, by omission, by design.

The danger is not the incompleteness. The danger is failing to recognize it. And the greatest danger of all is entrusting this incompleteness to a tool that resolves it in silence and presents the result as certainty.

Do not do what everyone else does. Do not fall for an AI that gives you the illusion of understanding by automating blind inference. The contract you will lose will not be lost because you wrote poorly -- it will be lost because you built 200 pages of response on a sentence that no one took the time to question.

Key takeaway: The specification never tells you everything. Silent inference -- human or artificial -- is the most underestimated systemic risk in bid management. TenderGraph is the only system that does not resolve ambiguities in silence: it notes them, tests them, traces them, and transforms them into questions. Our vision rests on this conviction: understanding is not generated -- it is constructed, hypothesis by hypothesis, fact by fact.



Tags

#tenders #questions-and-answers #inference #bid-management #risk #AI #partial-information
