How to Write a Technical Proposal That Wins Tenders — What No One Teaches You
This article builds on The Information Revolution, where we laid out the theoretical framework of the signal-to-noise ratio applied to tenders, and The Myth of the Executive Summary, where we demonstrated that the first document read by the evaluator is also the one everyone rushes through. Here, we turn to the technical proposal itself — the document that carries the score.
The technical proposal is the most written, the most recycled, and the least understood document in the tendering process. Every year, thousands of bid managers spend hundreds of hours producing documents of 100, 200, 300 pages — and most receive scores between 11 and 13 out of 20.
Not because they lack expertise. Not because their solution is flawed. Because they write a document. And a winning technical proposal is not a written document — it is a signal encoding system optimized for the scoring grid.
The difference between 12/20 and 17/20 is almost never a matter of substance. It is a matter of structure, information density, and understanding what the evaluator actually does when they open your file.
What the Evaluator Actually Does With Your Technical Proposal
Forget the image of an attentive reader devouring every page. A public procurement evaluator has five submissions to score in two days. Sometimes seven. Each submission runs between 80 and 300 pages. They are often alone, occasionally accompanied by a technical colleague who has read only "their" section. They have a scoring grid printed beside the screen.
Here is what actually happens.
The first 30 seconds: they open the file, look at the table of contents, verify the document is structured. If they do not immediately see the expected major sections, a negative signal takes root. They will not forget it.
The first 5 minutes: they read the executive summary — if one exists. This is where the first impression is formed. If they read "Our multidisciplinary team is committed to supporting your transformation through a proven approach," they already know they are looking at the same proposal as the other four. Their concentration drops a notch.
The actual evaluation: they do not read in order. They take their scoring grid, criterion by criterion, and search the proposal for the section that addresses it. They scan headings, opening paragraphs, tables. They look for addressable content — a piece of information they can link to a line on their grid and assign a score to.
The scanning behavior: in a 15-page section, the evaluator actually reads 3 to 4 pages. The heading, the first paragraph, the subheadings, the tables, the callout boxes, the section conclusion. The rest, they skim. This is not laziness — it is a channel capacity constraint. They physically cannot absorb 1,500 pages in two days.
The scoring: they score each criterion independently. If your best argument for the "methodology" criterion is buried in the "team organization" section, they will not find it. They will score based on what they found in the "methodology" section — even if it is weaker.
Key takeaway: The evaluator does not read your proposal. They scan it with a grid. Every piece of information that is not in the right place, in the right format, aligned with the right criterion — is lost information. Not because it is poor, but because it is invisible.
The Structure That Makes the Difference: Respond to the Scoring Grid, Not the Specifications
This is the most common mistake, and it is responsible for more lost points than any content deficiency.
Most bid managers structure their technical proposal by mirroring the layout of the specifications. The specifications have 8 technical chapters? The proposal will have 8 technical chapters, in the same order. It is logical. It is reassuring. It is wrong.
The specifications describe the need. The scoring grid describes what the evaluator will score. These are not the same things, and they do not follow the same structure.
Take an example. An IT services contract with this scoring grid:
| Criterion | Weighting | What the evaluator looks for |
|---|---|---|
| Understanding of the need | 30% | Reformulation, issue analysis, identified risks |
| Implementation methodology | 25% | Processes, tools, governance, KPIs |
| Human resources | 25% | Profiles, skills, organization, ramp-up |
| Risk management and quality | 20% | Risk identification, mitigation plan, SLAs |
The specifications, however, are structured by functional area: incident management, change management, release management, monitoring, reporting. Five technical chapters.
The proposal mirroring the specifications produces five major sections (incidents, changes, releases, monitoring, reporting). For each section, the bid manager mixes understanding, methodology, team, and risks. The evaluator scoring the "methodology" criterion (25% of the score) must search through five different sections to find the scattered pieces. They find three out of five. Score: 13/20.
The proposal mirroring the scoring grid produces four major sections (understanding, methodology, resources, risks). Within each section, functional areas are treated as subsections. The evaluator scoring the "methodology" criterion opens section 2, finds everything they are looking for, in order. Score: 16/20.
Same content. Different structure. Three-point gap.
| Approach | Structure | Evaluator behavior | Typical score |
|---|---|---|---|
| Mirroring specifications | By functional area | Must reconstruct each criterion by cross-referencing sections | 11-13/20 |
| Mirroring scoring grid | By scoring criterion | Finds each criterion in a dedicated section | 15-17/20 |
The specifications tell you what to cover. The scoring grid tells you how you will be scored. Structuring around the what instead of the how is like preparing for an exam by reading the textbook instead of studying past papers. You learn the subject. You do not prepare for the test.
Key takeaway: Your technical proposal outline should not mirror the specifications. It should mirror the scoring grid. Each scoring criterion = one section. Each sub-criterion = one subsection. The evaluator should never have to search.
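To make the mechanism concrete, here is a minimal Python sketch that derives a proposal outline directly from a scoring grid. The data structure and names are purely illustrative (this is not a prescribed tool); the grid is the one from the example above.

```python
from dataclasses import dataclass, field

@dataclass
class Criterion:
    """One line of the scoring grid."""
    name: str
    weight: float                       # share of the technical score
    sub_criteria: list[str] = field(default_factory=list)

def outline_from_grid(grid: list[Criterion]) -> list[str]:
    """One section per criterion, one subsection per sub-criterion."""
    lines = []
    for i, c in enumerate(grid, 1):
        lines.append(f"{i}. {c.name} ({c.weight:.0%} of the technical score)")
        lines.extend(f"   {i}.{j} {s}" for j, s in enumerate(c.sub_criteria, 1))
    return lines

grid = [
    Criterion("Understanding of the need", 0.30,
              ["Reformulation", "Issue analysis", "Identified risks"]),
    Criterion("Implementation methodology", 0.25,
              ["Processes", "Tools", "Governance", "KPIs"]),
    Criterion("Human resources", 0.25,
              ["Profiles", "Skills", "Organization", "Ramp-up"]),
    Criterion("Risk management and quality", 0.20,
              ["Risk identification", "Mitigation plan", "SLAs"]),
]
print("\n".join(outline_from_grid(grid)))
```

The point of the exercise: the outline is a pure function of the grid. Nothing about the specifications' chapter order enters into it.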
The Three Mistakes That Cap Your Score at 12/20
After reviewing hundreds of technical proposals — as a writer, as a reviewer, as an evaluator — three patterns consistently emerge in submissions that stagnate between 11 and 13.
Mistake 1: The generic response
"Our project team will ensure rigorous incident tracking in accordance with ITIL best practices."
This sentence could appear in any proposal, for any contract. It is the competitor test: replace your company name with a competitor's. If the sentence still holds, it carries no differentiating signal. The evaluator reads it, learns nothing, and moves on. You just wasted a paragraph.
The generic response is the symptom of a proposal written from a template rather than from the tender documents. The bid manager took the proposal from the last contract, changed the client name, adjusted the volumes, and submitted. It is the recency bias materialized in a Word document.
The fix: every paragraph must contain at least one element specific to the current contract. A technology name mentioned in the specifications. A volume. A constraint. A risk identified within the client's context. If you cannot point to the tender document element that justifies this paragraph — delete it.
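The competitor test can even be mechanized as a crude draft-review pass. A minimal sketch, assuming you maintain a list of contract-specific anchors pulled from the tender documents; the anchor list below is illustrative, and a real review would still need human judgment:

```python
# Anchors lifted from the tender documents for this specific contract.
# Building this list is itself the output of a careful read; these are examples.
TENDER_ANCHORS = {"java 8", "oracle 11g", "8,000 users", "servicenow", "§4.3"}

def is_generic(paragraph: str, anchors: set[str] = TENDER_ANCHORS) -> bool:
    """Fails the competitor test: no technology, volume, constraint,
    or document reference specific to this contract."""
    text = paragraph.lower()
    return not any(a in text for a in anchors)

drafts = [
    "Our project team will ensure rigorous incident tracking in accordance "
    "with ITIL best practices.",
    "Incident qualification on the Finance module (Oracle 11g) is supplemented "
    "by automated pre-diagnosis.",
]
for p in drafts:
    print("GENERIC -- rewrite or delete" if is_generic(p) else "OK")
```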
Mistake 2: The recycled paragraph
More insidious than the generic response: the paragraph that was excellent in the previous submission, and is irrelevant in this one.
A paragraph on change management in an SAP environment, perfectly calibrated for an industrial client — recycled verbatim into a local government contract that has no SAP. The evaluator spots the disconnect immediately. And the signal they receive is not "this candidate is a generalist" — it is "this candidate did not read our specifications."
Recycling is the most frequent source of destructive noise in a technical proposal. Every recycled paragraph degrades the signal-to-noise ratio of the entire document, because it occupies space without carrying relevant information.
Mistake 3: The absence of evidence
"Our expertise in complex project management enables us to ensure a controlled implementation."
Unsubstantiated claim. No evidence. No figures. No reference. The evaluator has no reason to believe you over the competitor who writes exactly the same thing.
A technical score is not an exercise in rhetorical persuasion. It is an exercise in demonstration. Every claim must be supported by:
- A figure: "Average resolution time for P1 incidents across our last 3 application management contracts: 2h14, against an SLA of 4h."
- A contextualized reference: "On contract [X] (comparable environment: 12,000 workstations, heterogeneous IT), we reduced recurring incident rates by 34% over 18 months." — Not a logo catalog, but a verifiable fact anchored to a similar context.
- A tangible deliverable: "The knowledge transfer methodology is supported by a documentation kit of 47 operational guides, an extract of which is provided in Appendix 3."
| Level of evidence | Example | Impact on score |
|---|---|---|
| No evidence | "Our expertise enables us to..." | Evaluator ignores — +0 points |
| Weak evidence | "We are accustomed to..." | Vague signal — +0.5 points |
| Contextual evidence | "In a similar context (12,000 workstations), result: -34% incidents" | Strong signal — +2 points |
| Documented evidence | Figure + reference + deliverable in appendix | Maximum signal — +3 points |
Key takeaway: Generic + recycled + no evidence = 12/20 guaranteed. This is not a value judgment — it is a mechanism. The evaluator scores what they see. If they see only noise, they assign the median score and move on to the next submission.
How to Encode Signal in Every Section
The signal/noise theoretical framework provides the "why." Here is the "how."
Signal encoding in a technical proposal rests on one principle: the signal must survive a diagonal read. Because that is how the evaluator will read it. Not because they are negligent, but because the time constraint demands it.
Technique 1: The first sentence of each section carries the message
The evaluator systematically reads the first sentence of every section and subsection. If your first sentence is "This section presents our methodological approach" — you just wasted the only slot guaranteed to be read.
Poor: "This section presents our incident management methodology."
Strong: "The primary risk in your incident management is not volume — it is qualification time. Our methodology specifically targets this step with an automated pre-diagnosis that reduces qualification time by 45%."
The first sentence must be a signal concentrate: it announces your understanding of the problem AND the value of your response. The rest of the section elaborates. But if the evaluator reads only the first sentences — they already have the essentials.
Technique 2: Tables carry the evidence
The evaluator's eye is drawn to visual breaks: headings, tables, callout boxes, diagrams. A table is read even in scan mode. A paragraph of prose may be skipped.
Concentrate your evidence in tables. Not decorative tables — informational tables.
| Process | Contractual SLA | Average performance (last 3 contracts) | Tool |
|---|---|---|---|
| P1 incident qualification | 30 min | 18 min | ServiceNow + AI pre-diagnosis |
| P1 incident resolution | 4h | 2h14 | Dedicated L2 team (3 engineers) |
| P2 incident resolution | 8h | 5h30 | L2/L3 rotation |
| Monthly review | M+5 business days | M+3 business days | Power BI dashboard |
This table carries more signal than two pages of prose. It is read in 15 seconds. It demonstrates rather than asserts. And it survives a diagonal read.
Technique 3: Strategic redundancy of win themes
A win theme is a differentiating argument you want to anchor in the evaluator's mind. It should not appear once in the proposal — it should be adapted across every relevant section, from different angles.
If your win theme is "automated pre-diagnosis," it appears:
- In the understanding of the need: "The volume of incidents is not the problem. Qualification time is. The automated pre-diagnosis reduces this time by 45%."
- In the methodology: technical description of the pre-diagnosis process, integration with the client's ITSM tool.
- In the resources: profile of the engineer who configures and maintains the pre-diagnosis, training for the client's team.
- In the risks: "Identified risk: resistance to change from the L1 team regarding pre-diagnosis. Mitigation: 3-month support program with visible performance metrics."
This is not repetition. It is redundancy in the Shannon sense: an error-correcting code. Even if the evaluator reads only one section in four, they encounter the win theme. The signal gets through despite channel noise.
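A back-of-the-envelope model shows why this redundancy pays. Assume, purely for illustration, that the evaluator reads any given section independently with probability p. A win theme placed in n sections is then encountered with probability 1 - (1 - p)^n:

```python
def p_theme_received(p_read: float, n_placements: int) -> float:
    """Probability the evaluator encounters the win theme at least once,
    assuming each section is read independently with probability p_read."""
    return 1 - (1 - p_read) ** n_placements

# Illustrative numbers only: a 40% chance of any one section being read.
print(p_theme_received(0.4, 1))  # 0.4    -- theme stated once
print(p_theme_received(0.4, 4))  # ~0.87  -- theme adapted across four sections
```

The independence assumption is generous, but the direction of the effect is the point: every additional, adapted placement raises the odds that the signal survives the scan.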
Technique 4: Name the risks no one else names
The evaluator has read four submissions. All four say "our proven methodology ensures a controlled implementation." None names the specific risks of this contract. The fifth submission writes:
"The primary risk of this contract is not technical — it is the coexistence of the legacy system (whose decommissioning is planned but undated) and the new system during a transition period that could last 18 to 24 months. Our mitigation plan specifically addresses the application dependencies identified in section 4.3 of the specifications."
The evaluator puts down their pen. This candidate understood something the others did not see — or did not dare to write. It is the same signal as good Q&A questions: it demonstrates an understanding that goes beyond surface-level reading.
Key takeaway: Encoding signal is not about writing better. It is about placing information where the evaluator looks for it, in the format they absorb, with the evidence that transforms an assertion into a fact. First sentence = message. Table = evidence. Redundancy = robustness. Named risk = differentiation.
The Concrete Case: Two Technical Proposals for the Same Contract
Application Management Services contract for a metropolitan authority. Scope: 14 business applications, 8,000 users, Java/Oracle environment. Estimated budget: EUR 2.5M over 4 years. Technical score: 60% of the overall score.
Scoring criteria:
- Understanding of context and issues: 30 points
- Methodology and organization: 40 points
- Human resources: 30 points
Two bidders. Same company size. Comparable skills. Similar references. The difference is in the proposal.
Bidder A — The standard proposal
Structure: mirroring the specifications. Five major parts corresponding to the five functional areas (urban planning, finance, HR, CRM, GIS).
Executive summary: "Building on our recognized expertise in application management for the public sector, our multidisciplinary team is committed to supporting the Metropolitan Authority in the management and evolution of its application portfolio, through a structured and proven approach." — 42 words, zero information.
Understanding of context: a reformulation of the specifications. Two pages summarizing what the client wrote, with no analysis, no risk identification, no articulated issues. The evaluator reads their own text in condensed form.
Methodology: generic ITIL process description. No adaptation to the metropolitan authority's context. The same chapter appears in the company's last 15 proposals. A few generic process diagrams.
Human resources: a list of CVs. A table of profiles with years of experience and certifications. No explanation of the operational organization, absence management, or ramp-up plan.
References: four local authority logos. No detail on contexts, results, or challenges encountered.
Result: 12/20 — Detailed score: Understanding 18/30, Methodology 24/40, Resources 18/30.
Bidder B — The strategic proposal
Structure: mirroring the scoring grid. Three parts: Understanding, Methodology, Resources. Each functional area is addressed as a subsection within each part.
Executive summary: "Your primary challenge is not the routine maintenance of 14 applications — it is managing the technical debt accumulated on the Finance and HR modules (Java 8, Oracle 11g) during the transition to your future system, whose deployment timeline remains to be finalized. Our response is built around three pillars: securing service continuity on critical modules (section 2), a technical debt reduction plan calibrated over 18 months (section 3), and a team sized to absorb the workload peaks associated with application migrations (section 4)." — Specific. Factual. Structuring.
Understanding of context:
| Identified issue | Source (specifications section) | Impact | Our response (proposal section) |
|---|---|---|---|
| Technical debt Java 8 / Oracle 11g | §3.2.1, §4.1 | Risk of vendor end-of-support Q3 2027 | Migration plan §3.2 |
| Legacy/new system coexistence | §4.3.2 | Undocumented dependencies between 6 modules | Mapping §2.3 + assumption H-04 |
| Release workload peaks | §5.1 | 3 critical periods identified (budget, elections, back-to-school) | Elastic sizing §4.2 |
| Current vendor team turnover | Q&A session #7 | Loss of business knowledge on 3 modules | Transfer plan §4.3 |
Four issues. Four sources. Four traceable responses. The evaluator immediately sees that this candidate has read, understood, and structured.
Methodology: ITIL processes adapted to the context. Each process is described with the contract's specificities: "Incident qualification on the Finance module will be supplemented by automated pre-diagnosis (business rules extracted from existing functional documentation) to compensate for the lack of technical documentation identified in section 3.2.1 of the specifications." No generic description — every paragraph anchors the methodology in the need.
Human resources: operational organization chart. RACI matrix by functional area. Named backup plan (who replaces whom, within what timeframe). Ramp-up curve over the first 6 months with knowledge transfer milestones. Not a list of CVs — an organizational system.
References: two detailed references. For each: context (size, scope, technologies), major challenge encountered, solution implemented, quantified result. "Comparable context (metropolitan authority, 11,000 users, Java/Oracle): 28% reduction in critical bug backlog within 12 months, SLA compliance rate from 73% to 94%."
Result: 17/20 — Detailed score: Understanding 26/30, Methodology 34/40, Resources 25/30.
Five-point gap. Same contract. Same functional scope. The difference: one proposal speaks about the client, the other speaks about itself.
Key takeaway: The 12/20 proposal is not bad. It is invisible. It does not give the evaluator the elements to assign a high score — even if the underlying solution is solid. The 17/20 proposal encodes the signal so it survives the scan. Every scoring criterion has its section. Every claim has its evidence. Every risk is named.
What TenderGraph Does
TenderGraph does not write your technical proposal. It does the work no one has time to do — and on which the entire score depends.
Requirement extraction and mapping
The system analyzes the complete tender package — specifications, selection criteria, contract terms, pricing schedule, appendices — and extracts every requirement, every constraint, every scoring criterion. Not by page order. By semantic layer: functional requirements, technical constraints, scoring criteria, implicit assumptions, zones of ambiguity.
The result is a structured map of the need. The equivalent of two weeks of work by a senior bid manager — produced in minutes, with the rigor of a system that does not skip paragraphs and does not suffer from recency bias.
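To fix ideas, here is what such a semantic-layer map can look like as a data structure. This is an illustrative Python sketch of the concept, not TenderGraph's internal model; the example entries are taken from the case above.

```python
from dataclasses import dataclass
from enum import Enum

class Layer(Enum):
    FUNCTIONAL = "functional requirement"
    TECHNICAL = "technical constraint"
    SCORING = "scoring criterion"
    IMPLICIT = "implicit assumption"
    AMBIGUITY = "zone of ambiguity"

@dataclass
class Requirement:
    req_id: str       # stable identifier, e.g. "REQ-001"
    source: str       # traceable origin in the tender package
    layer: Layer
    text: str

need_map = [
    Requirement("REQ-001", "specifications §3.2.1", Layer.TECHNICAL,
                "Java 8 / Oracle 11g stack, vendor end-of-support Q3 2027"),
    Requirement("REQ-002", "Q&A session #7", Layer.IMPLICIT,
                "Incumbent turnover implies business-knowledge loss on 3 modules"),
]

# The map is queryable by layer, not by page order.
by_layer = {layer: [r for r in need_map if r.layer is layer] for layer in Layer}
```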
Scoring grid alignment
TenderGraph cross-references the scoring criteria with the extracted requirements and produces an alignment matrix: for each scoring criterion, which requirements are at stake, what weighting they carry, and what signal density each proposal section must achieve.
This is the difference between Bidder A (who structures by instinct) and Bidder B (who structures by the scoring grid). The system formalizes what the best bid managers do intuitively — and what deadline pressure prevents them from doing systematically.
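In sketch form, an alignment matrix is a mapping from criterion to weight and requirement IDs, from which a signal budget per section follows. Again purely illustrative, reusing the hypothetical REQ identifiers from the previous sketch:

```python
alignment = {
    "Understanding of context and issues": {"weight": 30, "requirements": ["REQ-001", "REQ-002"]},
    "Methodology and organization":        {"weight": 40, "requirements": ["REQ-001"]},
    "Human resources":                     {"weight": 30, "requirements": ["REQ-002"]},
}

def page_budget(alignment: dict, total_pages: int) -> dict:
    """Allocate pages in proportion to criterion weight -- a crude proxy
    for how much signal each section must carry."""
    total = sum(c["weight"] for c in alignment.values())
    return {name: round(total_pages * c["weight"] / total, 1)
            for name, c in alignment.items()}

print(page_budget(alignment, total_pages=40))
# -> 12, 16 and 12 pages respectively, matching the 30/40/30 weighting
```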
Signal concentration
For each proposal section, TenderGraph identifies the high-signal elements: quantified evidence, contextual references, specific risks, measurable commitments. It flags noise zones: generic paragraphs, recycled phrasing, unsupported claims.
The bid manager retains control over the content. The system shows them where the signal is concentrated and where it is diluted — so they can allocate their time where it has the most impact, rather than polishing sections that carry no weight in the scoring grid.
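The flagging can be pictured as a pair of crude counters: one for evidence markers, one for boilerplate phrasing. A deliberately simple heuristic sketch — the phrase list and regex are illustrative, and a real system would go well beyond pattern matching:

```python
import re

GENERIC_PHRASES = ["proven approach", "recognized expertise",
                   "multidisciplinary team", "best practices"]

def signal_markers(text: str) -> int:
    """Count figures, percentages, section references, appendix pointers."""
    return len(re.findall(r"\d[\d,.]*\s*(?:%|h|min)?|§\d+|Appendix \d+", text))

def noise_markers(text: str) -> int:
    low = text.lower()
    return sum(low.count(p) for p in GENERIC_PHRASES)

strong = ("P1 resolution averaged 2h14 across our last 3 contracts, "
          "against an SLA of 4h (extract in Appendix 3).")
weak = "Our multidisciplinary team applies best practices and a proven approach."
print(signal_markers(strong), noise_markers(strong))  # high signal, zero noise
print(signal_markers(weak), noise_markers(weak))      # zero signal, three noise hits
```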
Requirement-to-response traceability
Every proposal section is linked to the requirements it addresses. If a requirement is not covered by any section, the system flags it. If a section does not address any requirement, the system flags that too — it is probably noise.
This traceability is exactly what the evaluator does mentally when scoring: they check, for each criterion, whether the response addresses it. TenderGraph performs this work before them — so the response is already structured to match their reading flow.
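The check itself reduces to two set operations: requirements no section covers, and sections that cover nothing. A minimal sketch with hypothetical IDs:

```python
all_requirements = {"REQ-001", "REQ-002", "REQ-003"}

coverage = {                                   # section -> requirements addressed
    "2. Methodology": ["REQ-001"],
    "3. Human resources": ["REQ-002"],
    "5. Company presentation": [],             # covers nothing: probably noise
}

addressed = {r for reqs in coverage.values() for r in reqs}
uncovered = all_requirements - addressed
orphans = [s for s, reqs in coverage.items() if not reqs]

print("Uncovered requirements:", sorted(uncovered))  # ['REQ-003']
print("Orphan sections:", orphans)                   # ['5. Company presentation']
```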
Key takeaway: TenderGraph does not write the technical proposal. It builds the framework on which the proposal must be constructed: requirement mapping, scoring alignment, signal concentration, traceability. The substance remains the expert's. The structure becomes the one that maximizes the score. Our vision is that the technical proposal is not a literary exercise — it is an encoding exercise.
Key Takeaways
A 17/20 technical proposal and a 12/20 technical proposal often contain the same ideas, the same skills, the same solution. The difference is not in the substance — it is in how that substance is structured, encoded, and presented to an evaluator who has five submissions to read in two days.
Three principles separate the proposals that win contracts from those that merely "participate":
- Structure around the scoring grid, not the specifications. Each scoring criterion = one section. The evaluator should never have to search.
- Encode the signal so it survives the scan. First sentence = message. Table = evidence. Strategic redundancy = robustness. Named risk = differentiation.
- Demonstrate instead of assert. Every paragraph carries a figure, a reference, or a tangible deliverable. Unsupported claims do not earn points — they generate noise.
A technical proposal is not a document you write. It is a system you engineer.
Further reading:
- The Myth of the Executive Summary — The executive summary is the gateway to the technical proposal. If it is generic, the rest is read with a negative bias.
- Why Your Client References Convince No One — References in a technical proposal are not logos. They are contextual evidence that validates your claims.
- What the Specifications Do Not Say — A technical proposal that addresses only what is written in the specifications misses what actually wins the contract.
- The Information Revolution: Signal and Noise — The theoretical framework behind signal encoding: Shannon applied to bid management.
- The Bid Manager's Worst Enemy: Themselves — The cognitive biases that produce generic technical proposals: recency, anchoring, completeness.
- The Acceleration of Pre-Sales Cycles — The time freed by automation should be reinvested in building signal, not in producing volume.