“Ship fast, fail fast” is easy to chant when you are pushing code to a cloud service. It gets harder when every iteration means a new PCB spin or CNC fixture. I have recently been interested in formal techniques for developing hardware in an agile way, as we typically do with software. My previous experience in this area revolved around a “fail fast” approach and a hardware development lifecycle that lacked a theoretical background to compare against, so I decided to investigate the topic further. This post maps the journey from classic Waterfall, through the Systems‑Engineering V‑Model and Stage‑Gate, to the modern Agile‑Stage‑Gate hybrid. If you are wrestling with questions like “How do I keep my AI team sprinting while the mechanics freeze for tooling?”, read on.
1. Waterfall in Hardware – Why It Breaks
The traditional hardware development process follows the systems engineering V-model and a clear waterfall approach. In both the startup ecosystem and in larger innovative companies there is a tendency to avoid the rigid planning that comes with waterfall and to look for a more agile method, which commonly ends up being chaos. A middle ground is needed. Let's review the main techniques for managing hardware projects so we can reflect on the optimal one for our application.
Waterfall hardware development is a sequential, phased approach to designing and building hardware products, similar to its use in software development. It involves clearly defined stages where each phase must be completed before moving to the next, much like a waterfall flowing downwards. This method is often favored when requirements are well-understood and unlikely to change, and when there are strict timelines and budgets.

Waterfall project planning is entirely linear: work flows “down the waterfall,” and one discipline finishes before the next begins. Testing sits at the end—often just two big events: system test and acceptance. A phase is considered “locked” once signed off; going back upstream is formally a change request.
Typical problems when people use waterfall alone are:
- Late discovery of defects: Design & code are finished long before anyone thinks about how they will be tested; flaws surface only during end‑of‑line system test when fixes are expensive.
- Poor requirement traceability: Once the document is thrown “over the wall,” there is little linkage back from a failing test to the specific requirement it violates.
- Costly rework when hardware hits metal: For mech‑elec‑SW products, machining and PCB fab happen before interfaces are fully exercised; any mismatch means scrap or ECOs.
- Difficulty accommodating change: A late customer request means reopening multiple downstream documents and re‑planning test, causing cascade delays.
- Schedule illusion: Testing effort is lumped into a single phase that is consistently underestimated.
- Stakeholder sign‑off too early: Customers approve a 100‑page requirements doc they barely understand, then discover gaps at acceptance.
Thankfully the V-model came along to solve some of waterfall's shortcomings. The key idea of waterfall was “do all the design, then test”. With the V-model, this becomes “design each layer hand in hand with the test that will prove it”. That single inversion—linking every decomposition step with a matching verification step—solves most of the waterfall headaches listed above.
2. Systems Engineering V-model
What is it?
The V-Model, also known as the Verification and Validation Model, is an engineering framework that emphasizes a systematic approach to ensure high-quality products or systems. It is called the “V-Model” due to its distinctive V-shaped diagram, which illustrates the relationship between different engineering activities. The model highlights the importance of verification and validation throughout the entire engineering lifecycle.
The V-Model consists of two major branches. The left side of the V represents the planning, design, and development phases, including requirements gathering, system design, detailed design, manufacturing, and assembly. Each of these phases corresponds to a specific verification activity on the right side of the V. The verification activities include reviews, inspections, testing, and analysis. This structured approach ensures that each engineering phase is closely linked to its corresponding verification activity, promoting early defect detection and reducing the risk of costly rework.
History
The V-Model has its roots in the waterfall model, a traditional sequential approach to engineering. The waterfall model followed a linear flow, where each stage had to be completed before moving onto the next. Recognizing the limitations of the waterfall model, the V-Model was introduced to address the need for better verification and validation practices. The V-Model gained prominence in the engineering field as a response to the growing complexity of products or systems and the need for more rigorous testing and analysis.
Strengths and weaknesses
The V-Model offers several strengths that make it a valuable approach for engineering projects. It promotes early involvement of verification activities, allowing for better requirements analysis and design validation. By integrating verification and validation throughout the engineering lifecycle, it ensures that potential issues are identified and addressed at an early stage, leading to higher quality products or systems. The V-Model also provides a clear understanding of project progress by mapping each engineering phase to its corresponding verification activity. This visibility helps in managing project timelines and identifying potential bottlenecks.
Unfortunately, the V-Model also has its weaknesses. It assumes that requirements are known and stable from the beginning, which is often not the case in real-world engineering projects. This can lead to challenges when requirements change or evolve during the development process. Additionally, the V-Model’s emphasis on verification towards the end of the lifecycle can result in late defect detection and increased rework. In projects with strict time constraints this can be problematic.
How the V‑model mitigates some of the waterfall issues:
- Late discovery of defects: Each level of design on the left leg has a mirrored verification step on the right (unit test ↔ detailed design, integration test ↔ architecture, etc.), so faults are found closer to where they were injected.
- Poor requirement traceability: The V insists every requirement be mapped to at least one verification activity; trace matrices are created up‑front.
- Costly rework when hardware hits metal: Interface definitions are frozen early and verified in component‑level tests before parts are built into higher assemblies.
- Difficulty accommodating change: The V doesn’t remove change pain, but by pairing each spec with a test plan, impact analysis is clearer; agile “learning loops” can be inserted inside individual boxes without breaking the whole V.
- Schedule illusion: Because verification is distributed, progress metrics are more realistic: you can’t tick off “architecture” until its matching integration‑test protocol is drafted.
- Stakeholder sign‑off too early: Early validation activities (e.g., prototype demos, requirement reviews) sit opposite the top‑level definitions, forcing clearer discussion sooner.
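The trace-matrix idea above can be sketched as a simple lookup: every requirement maps to the verification activities that will prove it, and anything unmapped is flagged before hardware is built. A minimal Python sketch, where all requirement and test IDs are invented for illustration:

```python
# Hypothetical requirement set and trace matrix; IDs are illustrative only.
requirements = {
    "REQ-001": "Arm lifts 5 kg payload",
    "REQ-002": "E-stop halts motion within 200 ms",
    "REQ-003": "Enclosure rated IP54",
}

# Trace matrix: each requirement maps to the verification activities that prove it.
trace_matrix = {
    "REQ-001": ["TC-UNIT-12", "TC-INT-03"],
    "REQ-002": ["TC-SYS-07"],
    # REQ-003 has no test yet -- the V-model flags this up-front.
}

def untraced(reqs, matrix):
    """Return requirements with no verification activity mapped to them."""
    return sorted(r for r in reqs if not matrix.get(r))

print(untraced(requirements, trace_matrix))  # -> ['REQ-003']
```

On a real programme this check runs in the requirements tool, but the principle is the same: an empty row in the matrix is a defect you catch before cutting metal.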
3. Stage/Gate Process
The stage-gate process adds flexibility to new product development by inserting gates at which the review team can pause, continue or stop the project if the metrics are not pointing in the right direction. It also brings finance into project development. We can distinguish these components:
Phase: Discrete chunks of work (Conceive → Plan/Feasibility → Develop → Qualify → Launch → Deliver/Operate). Each phase has a well-defined set of deliverables and ends with a review.
Gate: Formal decision checkpoints between the phases. A cross-functional board (engineering, finance, operations, marketing, compliance) reviews the evidence and issues one of four rulings:
- Recycle – loop back and redo part of the previous phase
- Go – proceed to next phase
- Kill – terminate the project
- Hold – pause until an external dependency clears
Governance purpose: Keep scarce money, test rigs, and head-count focused on the most promising work; catch fatal issues early; give executives predictable “drumbeats” for visibility and funding decisions.
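The four rulings can be modelled as a tiny decision function. This is a toy Python sketch; the boolean predicates and their priority order are my assumption, and a real gate board weighs evidence across many dimensions rather than three flags:

```python
from enum import Enum

class Ruling(Enum):
    GO = "proceed to next phase"
    RECYCLE = "redo part of the previous phase"
    KILL = "terminate the project"
    HOLD = "pause until an external dependency clears"

def gate_review(criteria_met: bool, fatal_risk: bool,
                blocked_externally: bool) -> Ruling:
    """Toy priority order: a fatal flaw kills the project outright,
    an external block pauses it, unmet exit criteria recycle the phase,
    and only a clean pass earns a Go."""
    if fatal_risk:
        return Ruling.KILL
    if blocked_externally:
        return Ruling.HOLD
    if not criteria_met:
        return Ruling.RECYCLE
    return Ruling.GO

print(gate_review(criteria_met=True, fatal_risk=False, blocked_externally=False))
```

The useful property of encoding it at all is that the entry/exit criteria become explicit predicates rather than a feeling in the room.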
Typical artefacts:
• Stage 1 – Conceive: Market research, Voice-of-Customer (VOC), business case, MRD
• Stage 2 – Plan/Feasibility: Product Requirements Doc (PRD), high-level schedule & budget, risk register
• Stage 3 – Develop: System architecture, detailed design, first prototypes, FMEA
• Stage 4 – Qualify: Verification & certification tests, pilot build, updated launch plan
• Stage 5 – Launch: Sales collateral, training, go-to-market assets
• Stage 6 – Deliver/Operate: Post-launch monitoring, lessons-learned, end-of-life planning
The stage-gate approach presents clear success factors:
• Clear entry/exit criteria for every gate
• Small, empowered gate-review panel
• Time-boxed phases to avoid “analysis paralysis”
• Rolling financial forecast tied to gate outcomes
These introduce some benefits over previous approaches:
• Ensures compliance artefacts (CE/UKCA, ISO 10218) are ready before money is committed to the next build.
• Strong risk & cost control—great for capital-intensive hardware (robotics, aerospace, medical devices).
However, it is more heavyweight and slower than pure agile, since requirements must be “good enough” early, and it can stifle innovation if gates become bureaucratic check-the-box exercises. Interestingly, there is a modern twist: most robotics companies now run a hybrid Stage-Gate/Agile process, keeping the phases but embedding agile sprints inside the develop and qualify phases so software/AI teams can iterate weekly while hardware follows the gate rhythm.
4. Hybrid Stage/Gate – Agile Process
Since ~2015 many robotics, consumer‑electronics and EV firms have stitched two‑week sprints inside the Stage‑Gate macro‑cadence. Picture a timeline where three sprints nest between Gate 2 and Gate 3:
|<– 6 weeks –>| Gate 3
Sprint 1 ⬛ ▶ Demo 1
Sprint 2 ⬛ ▶ Demo 2
Sprint 3 ⬛ ▶ Demo 3 + Gate deck
Gate decks now show video demos, burn‑down charts and velocity metrics rather than slide promises. Long‑lead hardware (motors, batteries, enclosures) and other costly items freeze at Design Freeze (Gate 3) (thanks to Ashish for his comment); software, firmware and ML models continue on a rolling CD track until Launch.
The hybrid approach to stage-gate and agile keeps the macro cadence – five to seven high-level phases with formal Go / Kill / Hold / Recycle gates for funding, compliance and executive visibility. The difference lies in the introduction of one- to four-week sprints run by the cross-functional team during each phase. With this approach, senior management still gets predictable drum-beats while teams get rapid feedback loops.
The hybrid approach also keeps the phase entry/exit deliverables list (e.g., Design-Review pack, FMEA, costed BOM) while injecting sprint artefacts (backlog, demo, burndown, velocity) into the gate package. Gate decks become evidence-based rather than paper promises.
The gate review board is composed of engineering, supply-chain, finance and safety, while the product owner and scrum master present live prototypes or simulations during the gate. Decisions are made on working increments rather than slideware.
Long-lead and costly hardware items are locked once Gate 3 (“Design Freeze”) is passed, while software and AI stay on a rolling continuous-delivery track after Gate 3. By doing this, hardware risk is capped while software keeps evolving until launch.
The gates don’t slow the sprints; they time-box them. Teams aim to complete “n” sprints between Gate 2 and Gate 3, knowing the review board will ask for a live demo plus the hard artefacts (drawings, test reports, cost delta) on a fixed date.
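Because the gate date is fixed, the number of sprints is derived from the gate window rather than the other way around. A small Python sketch of that time-boxing; the dates and the two-week default are illustrative, not from the post:

```python
from datetime import date, timedelta

def sprint_calendar(gate_open: date, gate_review: date, sprint_weeks: int = 2):
    """Fit as many full, fixed-length sprints as the gate window allows.
    The gate review date never moves; the sprint count flexes instead."""
    window_days = (gate_review - gate_open).days
    n_sprints = window_days // (sprint_weeks * 7)
    return [
        (gate_open + timedelta(weeks=sprint_weeks * i),
         gate_open + timedelta(weeks=sprint_weeks * (i + 1)))
        for i in range(n_sprints)
    ]

# Six weeks between Gate 2 and Gate 3 -> three 2-week sprints, each ending in a demo.
for start, end in sprint_calendar(date(2025, 3, 3), date(2025, 4, 14)):
    print(f"Sprint: {start} -> demo on {end}")
```

The last sprint's demo lands on the gate review itself, which is exactly the "demo plus hard artefacts on a fixed date" rhythm described above.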
Benefits & trade-offs (what the research shows)
| Benefit (why execs buy in) | Citations |
| --- | --- |
| 20-50 % faster market release vs. classic gate-only processes | five-is.com |
| Higher team morale, better inter-disciplinary communication | researchgate.net |
| Early customer / regulator feedback reduces late re-design | planisware.com |
| Trade-offs: heavier PM overhead (two sets of artefacts); need for Agile-savvy gate reviewers | five-is.com |
Why an Agile-Stage-Gate really is different from the “classic” Stage-Gate—even though both allow some iteration
| Dimension | Classic Stage-Gate (1980s–2010s) | Agile-Stage-Gate / Hybrid (≈2014→) | What actually changes |
| --- | --- | --- | --- |
| Cadence inside a stage | Iteration allowed but informal: teams loop on prototypes “until ready” and then prepare the huge gate package. | Iteration is time-boxed (1- to 4-week sprints). Each sprint must show a working, tested increment & a demo. | Predictable short feedback loops; slippage surfaces in weeks, not months. |
| Artefacts reviewed at the gate | Heavy document set: MRD, PRD, design pack, test protocols, financial worksheet. | Sprint artefacts replace most static docs: backlog burn-up, demo video, velocity, risk burndown. Gate criteria rewritten in user-story language (“Robot lifts 15 kg for 10 cycles without >60 °C motor temp”). | Decision board sees evidence from real builds, not just slides. |
| Customer / user involvement | Usually appears at Gate 4 or 5 (validation / launch). | Stakeholders invited to every sprint review—starting in Stage 1–2. | Early market or regulator feedback; pivots happen before cap-ex is locked. |
| Team structure | Functional hand-offs dominate. Marketing writes the MRD → Eng designs → Mfg industrialises. | Stable, cross-functional Agile squads run continuously; Scrum-of-Scrums syncs HW/SW tracks. | Fewer hand-offs, higher knowledge retention, faster decision-making. |
| Governance layer | Gates spaced 2-6 months apart; Go/Kill/Hold/Re-cycle. | Gates still there but become fixed “heartbeat reviews.” Example: 3 sprints (6 weeks) → Gate 3 demo. | Execs keep portfolio control and see steady progress metrics. |
| Budget release | Lump sum funded per stage. | Rolling-wave funding: release only the next 1-2 sprint budgets; adjust BAC after every gate. | Tighter capital control; easy to pull the plug or double-down. |
| Change management | Scope freeze until next gate; changes require CR & board sign-off. | Backlog is re-ordered every sprint; PO manages scope dynamically. Gate criteria updated if strategic pivot. | Encourages experimentation without heavyweight CR process. |
| Metrics | % deliverables complete, gate on-time %, NPV. | CPI/SPI and Agile flow metrics (velocity, cycle-time, escaped defects). | Combines financial discipline with throughput insight. |
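The metrics row above pairs earned-value indices with agile flow metrics. A minimal Python sketch of how CPI, SPI and velocity are computed; the figures are made up for illustration:

```python
def cpi(earned_value: float, actual_cost: float) -> float:
    """Cost Performance Index: EV / AC. Above 1 means under budget."""
    return earned_value / actual_cost

def spi(earned_value: float, planned_value: float) -> float:
    """Schedule Performance Index: EV / PV. Above 1 means ahead of schedule."""
    return earned_value / planned_value

def velocity(points_per_sprint: list[float]) -> float:
    """Agile flow metric: mean story points completed per sprint."""
    return sum(points_per_sprint) / len(points_per_sprint)

# Hypothetical figures for one gate period.
ev, ac, pv = 120_000, 150_000, 110_000
print(round(cpi(ev, ac), 2))   # 0.8  -> over budget
print(round(spi(ev, pv), 2))   # 1.09 -> slightly ahead of plan
print(velocity([21, 25, 23]))  # 23.0 points/sprint
```

Reporting both families at the gate is the point of the hybrid: CPI/SPI satisfy the finance seat on the board, while velocity and cycle time tell the team whether the sprint cadence is actually healthy.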
Proof points from industry & research
- Cooper & Sommer’s field studies in major manufacturers report 20-50 % shorter time-to-market and higher team morale after adopting Agile-Stage-Gate hybrids. five-is.com
- Three European robotics/automation SMEs saw improved success rate and schedule predictability when they replaced Stage-3 “Design” documentation packets with sprint demos and burn-down metrics. orbit.dtu.dk
- A large consumer-electronics firm credited its ability to switch battery chemistry mid-development (after a regulator alert) to the rolling-wave funding + sprint backlog mechanic; the same pivot would have triggered a full “Recycle” under classic Stage-Gate, adding months. innovationmanagement.se
Conclusions
Agile alone is too fluid for high‑capex hardware; Stage‑Gate alone is too slow for AI‑driven feature churn. A disciplined hybrid delivers 20‑50 % faster time‑to‑market (Cooper & Sommer, 2020) while preserving executive visibility.
Ship working hardware sooner, with enough empirical feedback baked in to pivot before the next board spin commits cash and carbon.
