A project uses prototyping to explore ideas and answer questions as quickly and cheaply as possible. This makes prototypes a tool for reducing risk. They can be used to check whether a system design approach is likely feasible. They can be used to gather evidence about which design choice likely provides a better solution. The act of building and exercising a prototype is a way to expose gaps in a concept or specification before investing effort in building the real thing.
The value of prototyping comes from learning or testing before a decision must be made. When the effort required to build and learn from the prototype is small compared to the cost of making the wrong decision, prototyping is worthwhile. Prototyping can also provide focus for the team members who are investigating problems.
Prototyping early in system or component development helps reduce technical uncertainty. During concept development, a prototype of part of the system can help identify where there are technical uncertainties to resolve. It can guide people as they investigate these uncertainties to see if there are solutions, and to inform trade studies among different possible solutions.
During specification, the risk is that the specifications will be infeasible or incomplete. Some quick prototypes—especially simple models—provide a way to check whether the specifications can lead to a design and implementation.
Prototyping while designing part of the system is a way to evaluate different potential solutions. A prototype can provide verifiable evidence of how well different approaches will work, helping to catch biases and assumptions before investing the full effort of implementing the design.
The exercise of making a prototype can also help later verification work. Someone building a prototype will likely encounter many different ways that the prototype won’t work. Knowledge of these problems can be incorporated into the verification activities that check the eventual implementation.
Finally, the exercise of building a prototype is a way to find places where a concept is not complete. In one recent project, another team had developed a large body of quasi-requirements and commentary for a system that was intended to be an important part of aviation safety. The material turned out to be full of contradictions, missing ideas, and infeasible objectives. An exercise to model the system using basic system engineering tools, and to trace out how each key system function might work, revealed the specific errors in the requirements. Until that exercise, different team members would focus on different facts in the requirements and advocate different, incompatible concepts about how to address them; this led to unresolved disagreements but no progress toward a concept. As the exercise revealed the problems with the requirements, the team was able to come to consensus—often by accepting the evidence that the requirements were not correct and needed to be fixed.
Prototyping is a potentially dangerous tool, and can break a project when done poorly. I have seen two major problems: scope creep and using a prototype in production.
A prototyping effort has net positive value when the time and effort involved is much smaller than simply building the real thing. A prototype should provide answers quickly in order to inform other decisions, in the concept, specification, or design. A prototype should therefore be built as quickly—and therefore minimally—as possible. The prototype should focus on finding answers to specific questions. Almost always the prototype should be a simplification of the “real” thing, and it should be designed and built to far lower standards than the implementation that is eventually released.
I have found it easy to get carried away with the prototyping effort, to want to make it more complete or do a higher-quality implementation.
Building a prototype and then using it as the real implementation is the other common problem. A prototype should be quick and dirty. It should not be built to the quality of an actual implementation; otherwise, it is just the implementation. The prototype should not cover all use cases, its design should not be thoroughly worked out, and it should not be implemented with the same care as a real component.
Too often I have seen organizations confuse a prototype with a proper implementation, or decide to just use the prototype in order to move a project forward. There is almost always an incentive to believe that the prototype is good enough. People who are not aware of its shortcomings, especially those concerned primarily with schedule or cost, will see the prototype “working” and believe that it is good enough to be the real thing.
I discussed two examples of this problem in Section 8.3.5.
There are good practices that help make a prototyping effort effective and worthwhile.
My final recommended practice is controversial: explicitly cripple the prototype so that it cannot be used as a real implementation. If it includes physical components, use components that are simpler or more fragile than what would be needed in the real system. If it includes software, use languages and operating systems that are incompatible with those used for the real implementation. Doing so makes it obvious that the prototype is just a prototype, and makes the cost of somehow converting the prototype to a real implementation high—addressing the incentives to reuse a prototype.
Each of the good practices above has its complementary bad practice: prototyping without a plan, allowing the effort to drag on, and so on.
There are two additional practices to avoid, involving the mixing of prototypes with real system parts.
First, no one who is unaware of a prototype's limitations should be able to make decisions about how to use it. A prototype has a great number of limitations compared to its real equivalent; someone who is unaware of them cannot make a sound decision about how to use the prototype or the information that comes from it. A decision-maker must therefore either educate themselves about the prototype, or get advice from others who do understand its limitations.
Second, avoid treating prototype development the same as real system or component development. I discussed the dangers of conflating a prototype with a real implementation above. It should be clear to a team member working on one or the other which kind of work they are doing: they should use separate software repositories, for example, and provide different kinds of progress updates. Someone looking at a design document for a prototype must not be able to mistake it for a design for a real implementation.
There are many kinds of prototyping that teams use.
Paper exercises. The team can draw out potential designs on paper or whiteboards, or using system engineering tools. They might use common notations, such as UML, in some parts of the drawing. They can also list out functions or use cases that the system should support, and trace out how the parts of the system could behave to provide those functions.
Mathematical models. One can build mathematical models of possible designs to help evaluate them. Such models, when they can be found, can allow quick exploration of a range of design parameters—and they can support optimization algorithms that search for good points in a design parameter space.
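As a sketch of this kind of quick parameter exploration, consider a closed-form reliability model (the availability figure and the independence assumption here are invented for illustration) that sweeps the number of redundant replicas:

```python
def system_availability(component_availability, replicas):
    """Availability of a 1-of-n redundant system: it works as long as
    at least one replica works (component failures assumed independent)."""
    return 1.0 - (1.0 - component_availability) ** replicas

a = 0.99  # hypothetical per-component availability
for n in range(1, 6):
    print(f"{n} replica(s): availability {system_availability(a, n):.10f}")
```

Evaluating the formula for each design point takes microseconds, so it is trivial to see where additional replicas stop paying for themselves.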
I have used such models in three ways. First, queuing theory has helped answer a surprisingly large number of questions about potential capacity of part of a system, behavior during overloads, and choices for queuing disciplines. Second, reasoning about behaviors expressed as finite-state automata has helped find errors in protocol designs. And third, mathematical models of reliability have guided work in multiple projects about how to design redundancy into systems [Rao06], and where additional redundancy is not worth the cost.
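The queuing case is a good example of how little model is needed to answer a capacity question. The standard M/M/1 result gives the mean time in system as 1/(mu - lambda); the sketch below (the service rate is an assumed figure, not from any real system) shows how latency grows as utilization approaches saturation:

```python
def mm1_mean_latency(arrival_rate, service_rate):
    """Mean time a request spends in an M/M/1 queue: 1 / (mu - lambda).
    Only valid while the queue is stable (arrival_rate < service_rate)."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable: arrival rate must be below service rate")
    return 1.0 / (service_rate - arrival_rate)

service_rate = 1000.0  # assumed: requests/second one server can process
for load in (0.5, 0.8, 0.9, 0.95, 0.99):
    latency_ms = 1000.0 * mm1_mean_latency(load * service_rate, service_rate)
    print(f"utilization {load:.0%}: mean latency {latency_ms:.2f} ms")
```

A few lines like these can settle questions about how much headroom a design needs before anyone builds anything.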
Control systems in particular have a mathematical foundation that can help to build a model and then evaluate it under different situations.
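For instance, a first-order plant under proportional control can be evaluated in a few lines (the plant and gains here are hypothetical, chosen only to illustrate the technique); even this toy model exposes a real design property, the steady-state offset of a pure proportional controller:

```python
def step_response(kp, setpoint=1.0, tau=2.0, dt=0.01, steps=2000):
    """Simulate proportional control of a first-order plant
    dy/dt = (u - y) / tau with u = kp * (setpoint - y), using Euler steps."""
    y = 0.0
    for _ in range(steps):
        u = kp * (setpoint - y)      # proportional control law
        y += dt * (u - y) / tau      # Euler integration of the plant
    return y

# The output settles at kp/(1+kp) of the setpoint, so a pure
# proportional controller leaves an offset that shrinks as gain grows.
for kp in (0.5, 2.0, 10.0):
    print(f"gain {kp}: settles near {step_response(kp):.3f}")
```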
Simple simulations. When a system does not have a reasonable mathematical formulation, one can still build and evaluate a simulation of it. While it may take more time to run a simulation than to evaluate a mathematical model, it can still be fast enough to evaluate many design options or to drive an optimization algorithm to find good designs.
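A minimal sketch of such a simulation, assuming exponential arrivals and service times at a single FIFO server (all parameters invented for illustration); for this simple case the estimate can be checked against the closed-form queuing result, which is exactly the kind of calibration one should do before trusting the simulation on harder questions:

```python
import random

def simulate_queue(arrival_rate, service_rate, n_jobs=50_000, seed=1):
    """Estimate mean time in system for a single FIFO server with
    exponential interarrival and service times (an M/M/1 queue)."""
    rng = random.Random(seed)
    arrival = 0.0       # time the current job arrives
    server_free = 0.0   # time the server finishes its current job
    total_time = 0.0
    for _ in range(n_jobs):
        arrival += rng.expovariate(arrival_rate)
        start = max(arrival, server_free)           # wait if server is busy
        server_free = start + rng.expovariate(service_rate)
        total_time += server_free - arrival         # this job's time in system
    return total_time / n_jobs

# Hypothetical workload: 0.5 jobs/sec arriving at a 1 job/sec server.
# The analytic M/M/1 answer is 1 / (1.0 - 0.5) = 2.0 seconds.
print(f"estimated mean time in system: {simulate_queue(0.5, 1.0):.2f} s")
```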
I have built software prototypes as part of most projects I have worked on. In one project, we used a detailed simulation model of a redundant storage system to choose among hundreds of low-level policy options [Wilkes96].
Simulations, like mathematical models, are simplified versions of reality and thus run the risk of not matching the real world. I have found it important to calibrate parts of simulation models to match real behavior when possible. In the storage system simulation, we could only calibrate part of the model—but we spent significant effort ensuring that the part that we could calibrate (disk drive behavior) matched the physical drives being simulated.
Simulations can also fail to capture subtle timing issues in component behaviors. Many simulations I have used rely on a discrete-time model, in which time progresses in instantaneous jumps rather than continuously. Some of these simulations did not capture subtle problems in the timing of interleaved behaviors of components operating in parallel.
Mockups. When most people think of a prototype, they think of a simplified version of the “real thing” that they can nonetheless touch and feel. Aircraft designers build scale models and test them in a wind tunnel. Software system designers build simplified versions of potential designs to see how they behave in realistic environments.
Physical mockups are helpful for checking how people will interact with a physical system. In one simple example, I was working on a project to build a modular data storage system (Section 4.3). We wanted customers to be able to unpack and install the components quickly. We had foam mockups of the modules, and used them to validate how quickly a user could assemble them into a working system. (There was one realism problem: the foam mockups were far lighter than the real metal-and-circuit-board modules.)
In another project, my team built a simplified implementation of distributed consensus protocols in order to understand how they could be broken into modular components and to explore where there might be performance problems, before we committed to using a variation of those protocols in a project that required high reliability.
Infrastructure and tools. Because prototyping should be as quick and low-effort as possible, tools that provide general-purpose support can improve the net value of a prototyping effort. Also, because a prototype is explicitly not supposed to be real, one can use off-the-shelf tools that one would never use in a production system.
I have found simulation frameworks and languages essential for building simulations. These include languages and tools for writing the simulations, sometimes even graphically. I have used tools such as Orekit [Orekit25] and 42 [Stoneking19] that implement spacecraft and orbital dynamics when I’ve been prototyping how spacecraft will move relative to each other. Others I have known have used computational fluid dynamics tools to model airflow within electronics cabinets and around airframes. For some prototyping studies, tools that coordinate running many different evaluations with different parameters on a cluster of machines have also been essential.
Good lab setups have been important for physical prototyping. When I have worked with prototype electronic components, lab benches with anti-static equipment, scopes, and power supplies have been essential.
Finally, tools for collecting, managing, and analyzing data have been important. Most evaluations using prototypes have generated far more raw data than a person can manage and understand on their own. I have used tools to collect and store measurement data from an evaluation run, organize them according to what configuration was being measured, and then perform analyses on the results. Tools for interactively graphing parts of the data have been particularly useful, as have tools that help dig through measurement logs to look into the details of some anomalous event.