The Cleanroom Software Design Process

A Managerial and Engineering Process for the Development

of Quality Software with Certified Reliability

by

Pattie G. Dickerson

 

Introduction to the Cleanroom Process

The name "Cleanroom" comes from an analogy with the cleanrooms of semiconductor wafer fabrication plants. Rather than trying to clean contaminants off wafers after they are made, the object is to keep contaminants out of the fabrication environment in the first place. Similarly, the aim of the Cleanroom method is to write the code correctly the first time, rather than to find the bugs once they are there. Cleanroom therefore focuses on defect prevention rather than defect correction, and on certification of reliability for the intended environment of use.

Cleanroom represents a paradigm shift from traditional, craft-based practices to rigorous, engineering-based practices. Mathematical function theory is the basis for development practices, and applied statistics is the basis for testing practices. Cleanroom software engineering yields software that is correct by mathematically sound design, and software that is certified by statistically valid testing. The method produces highly robust code without taking any longer than the traditional software lifecycle. The difference is that proportionally much more of the time is spent in design.

Building on existing knowledge, tools, and techniques, Cleanroom practices are phased in and tailored to meet specific project needs. Cleanroom works for new code as well as for maintaining or improving existing systems. A project team may decide to adopt all or part of the Cleanroom approach, depending on its needs. It can even begin to use Cleanroom after work on the project has begun.

Cleanroom techniques can be applied at all levels of capability maturity. Cleanroom is compatible with other software methodologies, including object-orientation, client-server development, and computer aided software engineering (CASE). It can also improve quality when maintaining or improving existing systems. Cleanroom delivers near-term, measurable improvements in quality and productivity, without interfering with ongoing work.

Benefits

Cleanroom software engineering provides the management and engineering practices that will enable teams to achieve zero failures in field use, short development cycles, and long product life.

Zero failures in field use

The Cleanroom goal is to produce software that does not fail in field use. A related goal is to reduce failures found during independent certification testing to fewer than five failures per KLOC on first execution of code, in the first project. Experienced teams will do much better.

Short development cycles

Reduced cycle time results from an incremental development strategy and the avoidance of rework. New teams should experience a two-fold increase in productivity over their baseline on the first project. Productivity will continue to improve with additional experience.

Long product life

Cleanroom leads to an investment in assets such as detailed specifications and models of intended use that help keep a product viable for a longer life.

Return on Investment

The technical benefits of using Cleanroom translate into significant economic benefits. Direct and indirect benefits can be identified with a reduction of field-experienced failures, reduced cycle time, and longer product life. The indirect benefits of customer loyalty and fewer competitors are difficult to quantify. Most organizations keep data on the direct costs, however, and the return on investment in Cleanroom can be calculated. In a Cleanroom demonstration project in the Tank-automotive and Armaments Command at the U.S. Army Picatinny Arsenal, an 18 to 1 return on investment was reported after six increments.

The Cleanroom Methodology

The Cleanroom methodology spans the entire software lifecycle. It provides disciplines within which software teams can plan, specify, design, verify, code, test, and certify software. Management planning and control in Cleanroom is based on the development of a series of increments, each of which represents operational user functions that can be accumulated top-down into a final product. A plan is created defining the schedules, resources, and contents of the series of increments. For example, a 100 KLOC (thousand lines of code) system might be developed in 10 increments averaging 10 KLOC each.

The specifications for each increment are created using the box structure methodology. Box structures define required system behavior and derive and connect the objects comprising a system architecture. Each box has three forms: black, state, and clear, which have identical external behavior visible to a user but whose internals are increasingly detailed. The black box defines the external, user-visible view in terms of stimuli (inputs), responses (outputs), and transition rules that map stimuli to responses. The state box provides a view of the retained internal data required to satisfy black box behavior: the current stimulus and the existing state map to a response and a new state. The clear box defines the procedural functions on state data that satisfy black box behavior, often introducing new black boxes. Each transition--from black box to state box and from state box to clear box--is verified to ensure that it satisfies the required system behaviors. New black boxes are similarly refined into state boxes and clear boxes, continuing in this manner until no new black boxes are required.
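
As a rough illustration (not drawn from the Cleanroom literature), the three box views of a trivial counter component might be sketched as follows; all names here are hypothetical:

```python
# Black box: responses are a function of the stimulus history alone.
def black_box(stimulus_history):
    """Response to the latest stimulus, given the full history."""
    return stimulus_history.count("increment")

# State box: the same external behavior, re-expressed with retained state.
# (stimulus, old state) -> (response, new state)
def state_box(stimulus, count):
    if stimulus == "increment":
        count += 1
    return count, count

# Clear box: a procedural implementation operating on the state data.
class Counter:
    def __init__(self):
        self.count = 0

    def handle(self, stimulus):
        if stimulus == "increment":
            self.count += 1
        return self.count
```

Verifying the transitions amounts to showing that replaying any stimulus history through the state box (and through the clear box) yields the same response the black box defines for that history.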

Development proceeds in three steps:

  1. Each increment is designed top-down, creating the usage hierarchy in three views: black box, state box, and clear box. The correctness of each view is verified. Each "primary program refinement" or "prime" (module) is defined mathematically as a function that sets each of a list of variables to some value. These replacements are defined as happening in parallel.
  2. Each increment is implemented by rigorous refinement of clear boxes into executable code. The code for the prime is written. Primes may be terminal (a sequence of code with no procedure calls) or non-terminal (either a sequence of lower-level primes, or one `while' or `if' construct involving lower-level primes).
  3. The code is verified to perform required functions according to the specification using functional verification arguments. The code for the prime is verified against the intended function in a peer review meeting. Unless every member of the peer review panel is completely convinced that the code matches the intended function (possibly with trivial changes), the prime is sent back for rework and re-verification.
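
The idea of a prime's intended function as a parallel assignment, and of verification as a check of the code against it, can be sketched as follows (the prime, its intended function, and the small test domain are all hypothetical):

```python
import itertools

# Intended function for a hypothetical "swap and sum" prime, written as
# a parallel assignment: [x, y, s := y, x, x + y].
def intended(x, y, s):
    return (y, x, x + y)   # all replacements happen "in parallel"

# Candidate code for the prime (a terminal prime: straight-line code).
def prime(x, y, s):
    s = x + y
    x, y = y, x            # Python's tuple assignment is itself parallel
    return (x, y, s)

# Human functional verification argues the equality in general; here we
# simply check it exhaustively over a small domain for illustration.
def verify(domain=range(-3, 4)):
    for x, y, s in itertools.product(domain, repeat=3):
        if prime(x, y, s) != intended(x, y, s):
            return False
    return True
```

In actual Cleanroom practice the comparison is a mathematical argument conducted in peer review, not an exhaustive execution, but the object of the argument is exactly this equality of code and intended function.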

Incremental Development

Incremental development as practiced in Cleanroom provides a basis for statistical quality control of the development process. Each increment is a complete iteration of the process. As is typical in statistical process control, measures of performance in each iteration of the process are compared with pre-established standards to determine whether or not the process is "in control." Performance is typically assessed during increment testing using measures such as errors per thousand lines of code, the rate of growth in mean time to failure (MTTF), or the number of sequential error-free random test cases.
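
A minimal sketch of the in-control decision, assuming a hypothetical pre-established standard of five failures per KLOC:

```python
# Hypothetical quality standard agreed before testing begins.
STANDARD_FAILURES_PER_KLOC = 5.0

def in_control(failures_found, kloc):
    """True if this increment's test results meet the quality standard."""
    return failures_found / kloc <= STANDARD_FAILURES_PER_KLOC

# Example: 32 failures while testing a 10 KLOC increment is
# 3.2 failures/KLOC, which meets the standard; 70 failures
# (7.0 failures/KLOC) would send the increment back to design.
```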

If the process is in control, work on the next increment continues. If the process is determined to be "out of control," i.e., if quality standards are not met, testing of the increment ceases and developers return to the design stage.

Feedback produced in each increment is used for project management and process improvement. The team examines all feedback, identifies problems, adjusts the incremental development plan if needed, and improves the overall software process as needed.

The key ideas in incremental development are as follows.

Developing the right system requires customer feedback throughout the development process. In incremental development, increments are executed by users in the operational environment to facilitate customer clarification of requirements.

Developing the system right requires management control of resources and technical control of complexity. In incremental development, risks to the project are assessed at planned intervals and managed through the incremental development plan.

Product quality requires process control. As an iterative process of complete development cycles (i.e., specification, design, verification, testing), incremental development enables process measurement and control throughout the software development process.

Increment planning requires assessment of the specific circumstances in each project. The considerations are both management and technical, and are based on both facts and assumptions. Following are common factors in increment planning.

Clarity of Requirements

The common motivation behind iterative development methods is the fact that requirements can rarely be established with certainty at the outset of a project. Under incremental development, customers provide feedback on the evolving system by direct operation of user-executable increments.

The relative clarity of requirements may influence an increment plan in two ways. Volatile requirements may be implemented in an early increment, so they can be clarified. If the user interface is not well-established, for example, it is an ideal candidate for an early increment. Alternatively, unstable requirements may be planned for later implementation, when questions affecting the requirements have been settled. Requirements to be settled by concurrent research (e.g., performance benchmarking) might be scheduled for a late increment, after research results are known.

Operational Usage Probabilities

A functional usage distribution is developed as part of a top-level Cleanroom specification. Expected usage probabilities of system functions are established from historical data and best estimates provided by customers. System functions with high expected usage probabilities will receive greatest exposure in the field, and may therefore benefit from the greatest exposure to testing. Since increments are cumulative, the functions developed in early increments will be tested several times (i.e., at the conclusion of each increment). System functions expected to receive the greatest operational usage by customers, therefore, are candidates for early increments.

Reliability Management

Increasingly, customers are specifying formal software reliability requirements. Reliability "sensitivities" and allocations can be calculated for subsystems, and subsystems that will have the greatest impact on total system reliability may be candidates for an early increment.

System Engineering

"Controlled iteration" is a key engineering principle in hardware development. The minimal machine is built in the first iteration, and is enhanced in subsequent iterations until the complete machine has been built. Incremental development of software is entirely compatible with this standard approach to hardware development.

"Smart machines" with embedded software must be developed as a coordinated effort between hardware and software engineers, and incremental development is an ideal framework for this coordination. A machine must be powered-on, for example, before it can be used. The software for system start-up, therefore, would likely be among the functions implemented in the first increment of an embedded software project.

Functional Dependencies

In most applications there is some logical allocation of functions to increments based on relationships among functions. In a database application, for example, an add must precede a delete. In a statistical application, data must be entered or retrieved before it can be analyzed. Although program stubs (i.e., null or "To Be Implemented" responses) may be used in most instances, some initializing functions will require early implementation.
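
A program stub can be as simple as a placeholder that returns a "To Be Implemented" response; a hypothetical sketch:

```python
# Stub for a function scheduled for a later increment: callers can be
# written and exercised now, and the body filled in later.
def delete_record(key):
    return "TBI: delete_record is scheduled for a later increment"

# An initializing function such as add_record cannot be stubbed this
# way, since nothing downstream can be exercised until it works.
```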

Technical Challenges

Novel or particularly complex work may pose a risk to the schedule or even the viability of a project. If such work is scheduled for an early increment, experience will either lend support to existing plans or point to the need to revise plans. If aspects of the project are not novel or complex in absolute terms, but are novel or complex relative to the experience of the team, an early gauge on feasibility is still desirable.

Leveraging Reuse

The Cleanroom process emphasizes economy of effort through use of "common services" (certified reusable components) across and within systems.

When existing components are identified as potentially reusable, the development team must weigh the effort required to tailor a component for use in the new system against the effort of developing a new component from scratch. If the evaluation favors the existing component, the team may want to use it in an early increment in order to validate its expected performance.

New common services may be desirable candidates for an early increment as well. Since common services may be in multiple places in a system, they have a disproportionate impact on system reliability relative to other, single-instance components.

Since objects may be reusable parts, the rationale for object development in an incremental development plan follows the rationale for reusable components in general.

Risk-Driven Increments

Risk analysis is used to determine the size and content of each increment. Following a risk assessment, the project team defines an increment plan of one or more increments. Each increment mitigates some risk of project failure. For example, if the user-interface requirements are poorly understood, then the project may be at high risk of delivering a hard-to-use product. To mitigate this risk, the early increments would contain mostly user-interface code, but not much "function," and could be shown to selected customers. Different increment plans are used when requirements stability or performance properties are high risks.

Incremental Development Benefits

Incremental development has many advantages over bottom-up and traditional "waterfall" life cycles:

  - risks are addressed and mitigated systematically
  - the cost of test scaffolding and drivers for artificial interfaces is reduced
  - early quality measurement is meaningful because the "real" interfaces are tested
  - every test case is a rehearsal of actual product use
  - a user view of the system is seen early in the process
  - high-level interfaces are thoroughly tested from the first increment onward
  - a running system exists early, and can be delivered if necessary

Perhaps the greatest benefit of incremental development is that it gives the manager a chance to measure, assess, and improve the software development process at several points during development.

Correctness Verification and Statistical Quality Control

The fundamental approach to verification espoused by Cleanroom aims to introduce mathematical reasoning, not mathematical notation, into the verification process. The principal motivation is to provide a rigorous methodology for software development and a firm foundation for software engineering as a discipline. Mathematical verification of programs is done by using a few basic control structures and defining proofs that follow the rules specified in a correctness theorem. The proof strategy is divided into small parts that accumulate easily into a proof for a large software system.

The method of human mathematical verification used in Cleanroom is called functional verification. Functional verification is organized around correctness proofs, which are defined for the design constructs used in a software design. Using this type of functional verification, the verification problem changes from one with an infinite number of combinations to consider to a finite process because the correctness theorem defines the required number of conditions that must be verified for each design construct used. It reduces software verification to ordinary mathematical reasoning about sets and functions. The objective is to develop designs in concert with associated correctness proofs. Designs are created with the objective of being easy to verify. A rule of thumb followed is that when designs become difficult to verify they should be redone for simplicity.
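
As an executable illustration of the finitely many conditions prescribed for a whiledo construct (the loop, predicate, and intended function below are hypothetical, and termination must still be argued separately--here the loop variable strictly decreases):

```python
def p(x):                   # loop predicate of: while p(x): x = g(x)
    return x > 0

def g(x):                   # function computed by the loop body
    return x - 1

def f(x):                   # intended function of the whole loop
    return 0 if x > 0 else x

# The correctness theorem reduces verification of the loop to two
# conditions: f must equal f composed with g wherever p holds, and
# must be the identity wherever p does not hold.
def check_whiledo(domain=range(-5, 6)):
    for x in domain:
        if p(x):
            if f(x) != f(g(x)):
                return False
        elif f(x) != x:
            return False
    return True
```

In practice these conditions are discharged by human reasoning about sets and functions rather than by execution; the point is that the number of conditions is fixed by the construct, not by the number of possible executions.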

Statistical quality control is used when you have too many items to test all of them exhaustively. Instead, you statistically sample and analyze some items and scientifically assess the quality of all of the items through extrapolation. This technique is widely used in manufacturing in which items in a production line are sampled, the quality is measured, then sample quality is extrapolated to the entire production line, and flaws are corrected if the quality is not as expected.

For software, this notion has evolved into statistical usage testing--testing the software the way its users intend to use it. This is accomplished by defining usage probability distributions that identify usage patterns and scenarios along with their probabilities of occurrence. Test cases are then generated from the usage probability distributions. System reliability is predicted from an analysis of the test results using a formal reliability model, yielding measures such as mean time to failure (MTTF).
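
A sketch of test-case generation from a usage probability distribution; the profile below is hypothetical:

```python
import random

# Hypothetical usage probability distribution for a small editor,
# built from historical data and customer estimates (sums to 1).
usage_profile = {
    "open_file":    0.40,
    "edit_text":    0.35,
    "save_file":    0.20,
    "define_macro": 0.05,
}

def generate_test_case(length, rng=random.Random(0)):
    """Draw one usage scenario: a random sequence of user stimuli."""
    commands = list(usage_profile)
    weights = list(usage_profile.values())
    return rng.choices(commands, weights=weights, k=length)
```

Each generated scenario is a statistically valid sample of field use, so test results can feed a reliability model directly.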

A common concern is that random, statistics-based testing will not provide sufficient coverage to ensure that a reliable product is delivered to the customer. This concern stems from a misapprehension that "statistical" implies haphazard, large, and costly, and that critical software requirements, which may be statistically insignificant, will be overlooked or left untested. In practice, coverage is directly related to the robustness of the usage probability distributions that control the selection process, and it has not proven to be a problem in current applications of the method. In one study of requirements coverage under statistical testing, 100 percent of the high-level requirements were covered, 90 percent of the subcomponent-level requirements were covered, and approximately 80 percent of all requirements were covered.

The Cleanroom method asserts that statistical usage testing is many times more efficient than traditional coverage testing at improving the reliability of software. Statistical testing tends to find errors in the order of their seriousness from the user's point of view, and so uncovers the failures that matter several times more effectively than testing that finds errors without regard to their seriousness. The basis for software reliability starts with the definition of a statistical model, generally based on the concept that input data arrives at random times and with random contents. With defined initial conditions, any such fixed use is distinguishable from any other use. These uses can be assembled into a sequence of uses, and the collection treated as a stochastic process subject to evaluation using statistical methods.

Coverage testing is anecdotal and can only provide confidence about the specific paths tested. No assessment can be made about the paths not tested. Because usage testing exercises the software the way its users intend to use it, high-frequency errors tend to be found early. For this reason, statistical usage testing is more effective at improving software reliability than coverage testing. Coverage testing is as likely to find a rare execution failure as a frequent one. If the goal of a testing program is to maximize the expected mean time to failure, and hence the reliability of the system, a strategy that concentrates on failures that occur more frequently is more effective than one that has an equal probability of finding high- and low-frequency failures.
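
The effect on MTTF can be seen with some hypothetical arithmetic:

```python
# Two faults: one hit on 1% of uses, one on 0.01% of uses.
freq_fault, rare_fault = 0.01, 0.0001

both = freq_fault + rare_fault        # 0.0101 failures per use
after_fixing_frequent = rare_fault    # 0.0001 failures per use
after_fixing_rare = freq_fault        # 0.0100 failures per use

# MTTF (in uses) is roughly the reciprocal of the failure rate:
# fixing the frequent fault raises MTTF from about 99 uses to 10,000;
# fixing the rare one barely moves it (to about 100 uses). Usage
# testing tends to find the frequent fault first; coverage testing is
# equally likely to find either.
```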

Experimental data from projects where both Cleanroom verification and more traditional debugging techniques were used show that the Cleanroom-verified software had fewer errors injected during development. Those errors were also less severe (possibly attributable to the philosophy of design simplification) and required less time to fix.

The User's View of Software Quality

Most software users do not care how many defects are in the code. They care instead about how often the software fails to meet their needs, how severe each failure is, and how long the repairs take. Cleanroom testing adapts statistical quality control techniques to measure these quality characteristics, enabling targeted quality improvements.

In precision manufacturing, statistical quality control begins with a precise specification. A statistical sample of the manufactured product is measured against the specification, and the quality characteristics of that sample are used to estimate the overall quality of the manufacturing run.

The quality of a software product depends not on variations in the physical copies of the product, but on its execution behavior. In Cleanroom, testers statistically sample these behaviors by providing appropriate inputs and measure each against the specification. Data from the testing process is entered into a reliability model that predicts the quality of the software in the field.
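
A minimal reliability estimate from test data might look like the following; it assumes a constant failure rate (exponentially distributed interfailure times), whereas real Cleanroom certification models also track reliability growth as fixes are made:

```python
# Estimate MTTF from the times observed between failures during
# statistical testing (maximum likelihood under the exponential
# assumption is simply the sample mean).
def mttf_estimate(interfailure_times):
    return sum(interfailure_times) / len(interfailure_times)

# Hypothetical data: hours of statistical test execution between failures.
observed = [12.0, 30.0, 45.0, 81.0]
# mttf_estimate(observed) -> 42.0 hours
```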

The Operational Profile

The challenge of statistical testing lies in sampling the executions. Even a small system has an essentially infinite number of execution paths through the code. There is no chance of executing all of them, so how can they be sampled most effectively?

Guiding the sample is an operational profile of how the product will be used. Usage patterns that are most likely will appear most often in the sample, while unlikely patterns will appear less often. For example, in a word processor the "open file" command is much more frequently used than the "define macro" command. The operational profile guides the selection or generation of test cases that form the statistical sample for quality measurement.

Benefits of User-View Testing

It is a significant engineering task to create an operational profile that accurately reflects how the product will be used. However, it has significant benefits when compared to other forms of testing. It locates and fixes the most likely failures first, and permits statistically valid predictions of field failure rates and associated repair costs. Developing the operational profile improves the team's understanding of requirements, and provides additional information that can guide design decisions.

Quality Improvement Strategies

Cleanroom is a natural complement to other quality strategies. The Software Engineering Institute's Capability Maturity Model is a natural fit with the phased introduction of Cleanroom. At each of the five levels of capability maturity, Cleanroom techniques are added and improved.

Cleanroom adheres closely to the ideals of Total Quality Management. Design and specification choices are documented and quality-checked. Each team member is directly responsible for key quality aspects. Early prototype increments, and the operational profile of usage, keep the focus tightly on the user, customer, or market.

Management Practices

Using the Cleanroom methodology requires a change in paradigm--from viewing software development as an art or craft to viewing it as an engineering discipline. As such, it must have a rigorous foundation. In other engineering disciplines, failures are neither expected nor accepted as normal. Other engineering professions have minimized error by developing a sound theoretical base on which to build design practices. Cleanroom methods provide a theoretical foundation for a comprehensive engineering process that has been reduced to practice for commercial software development.

Using Cleanroom methods requires commitment from management to provide training (for both management and technical personnel) in the skills needed to implement the methodology. It also requires discipline. Management must allow the process to unfold naturally; technical personnel must rigorously follow the process. It may require additional tools, such as some automated support to develop the randomly generated test suites from the usage probability distributions. In spite of these requirements, many have found that the return made the investments worthwhile.

The Cleanroom project is organized into teams that review the quality of each work product. The benefits include high quality even before testing begins, and dissemination of knowledge that provides flexibility in deploying resources.

Teamwork

Civil engineering has, over the millennia, evolved from using ad hoc, trial and error techniques to the point where relatively few buildings, bridges, or dams collapse unexpectedly. This success is partly due to the institution of inspections and reviews. Well known principles are used to evaluate designs and determine any failure modes before construction begins. Further inspections during construction assess the conformance of each component (such as a support or truss) to its specification. Specification and design changes are carefully scrutinized, to limit the side-effects of any change. Life-critical structures and functions are more thoroughly analyzed than minor or superficial ones. These engineering techniques of inspection and review are central to Cleanroom.

Team Structures

The team organization is determined by the characteristics of the project. A five-person project would have a single team review all the work products. A twenty-person project can split into design and test teams. A still larger project could be organized as a team of teams, with three to seven individuals assigned to each unit of work.

A Cleanroom project team is generally a small team with independent specification, development, and certification sub-teams. Teams are typically six to eight persons in size. In a large project, small teams may be formed for the development of each subsystem, enabling concurrent engineering after the top-level architecture has been established.

Cleanroom teams have the goals of error-free development and failure-free performance. Small teams work in a disciplined fashion to ensure the intellectual control of work in progress. Peer review of all work products results in identification of defects as early as possible in the development cycle.

The team review techniques used in Cleanroom are constructive: instead of looking for bugs, team members work to convince one another of the quality of the design. Unlike bug-hunting, Cleanroom reviews are finite and closed-ended: each program contains a finite number of parts, and the quality of each part can be ascertained independently. Cleanroom reviews find subtle errors as well as simple blunders. Reviewers always concentrate on the big picture, and strive to simplify the system whenever possible. The result is systems that are easy to maintain and improve, and that are generally smaller than their original estimates.

Flexibility through Teamwork

Because several team members understand each part of the system, the project's success is not as dependent on any one individual. The reduced reliance on "gurus" allows managers more flexibility in resource allocation when circumstances change. It also prevents key staff members from being held hostage to the support of a particular product. Team review reduces the impact of individual errors, but it does not mean "design by committee." Individuals propose creative solutions to the team, and the team acts as a quality filter, giving advice as necessary.

Cleanroom and Training

Some training will be required to implement Cleanroom methods, but it need not take place all at once. Managers need a thorough understanding of Cleanroom imperatives, and a core group of practitioners needs sufficient orientation in Cleanroom practices to be able to localize the process in consultation with other staff members.

Adapting the process for the local environment (establishing a local design language, local verification standards, etc.) is the time-consuming part of implementing Cleanroom, but process definition is necessary regardless of methodology. Once a locally-defined Cleanroom process exists, the most effective training begins as the process is used, clarified, and refined in practice.  

Phased Introduction of Cleanroom

The cost-effective application of Cleanroom practices requires judgment and experience that are best obtained through practical application. The phased introduction of Cleanroom techniques is the best way to acquire this experience. Tailoring Cleanroom to a specific project helps leverage the existing knowledge base, and allows the project to meet quality objectives more quickly.

Using Cleanroom Techniques After Development Is Underway

Phased introduction begins with certain Cleanroom practices that can be adopted right away, even before the next development cycle begins. Subsystems and components can be developed incrementally for better control. Documenting the behaviors of modules and conducting team reviews are cost-effective in themselves. Understanding the user's perspective in testing, even when combined with traditional testing, also conveys important benefits.

Achieving Predictability and Manageability

For a project or organization that is having difficulty predicting or managing software projects, the first phase of Cleanroom focuses on bringing the process under control. Three Cleanroom practices support this effort: incremental development, team ownership, and separation of testing from development. The team should discuss and inspect every work product. Traditional testing methods can remain in place until the team is ready for statistical testing.

Improving Quality

Once a project has achieved some level of manageability, the focus can shift to quality improvement. The development team's reviews become more rigorous, and statistical testing is introduced. The development and testing practices can be applied independently, with some teams preferring to concentrate on improving the quality of the developed code while using their traditional testing approach. The rigor of the reviews, and the specification, documentation, and design techniques required to support them, are tailored to meet project-specific quality goals.

Cleanroom Integration

The key to success with any quality improvement or cost reduction strategy is to leverage the expertise and tools that are already in place. Cleanroom provides additional capabilities that build on existing knowledge and techniques to solve software-process problems.

Cleanroom Results

The Cleanroom methodology has been used to develop a variety of types of applications--most of them have been sold commercially either as individual products or embedded in operational application systems. Because Cleanroom began as an industrial practice, rather than as an academic exercise, there are few side-by-side, controlled experiments of its use. However, there are numerous published case studies of Cleanroom usage. They represent a broad cross section of applications and systems.

Software development organizations start by using selected capabilities of the Cleanroom method and try additional capabilities as confidence builds. The strategy generally used starts with development-related practices, i.e., correctness proving and verification-based inspections, then turns to formalizing the software requirements specification, and finally to statistical testing techniques. Introducing reliability measurement is an easy extension once the commitment to statistical testing has been made. This type of strategy was used in the development of the AOExpert/MVS product.

                                Formal   Baseline  Correctness  No Unit  Statistical  Reliability   Average
                                Spec.    Design    Verification   Test     Testing      Measure    Total Use
  Completed IBM projects           33       100         66         100        66           50          69
  Completed external projects       0       100          0         100         0          100          50
  Current IBM project              80       100        100         100        40           40          76
  Current external projects       100       100         50         100        50            0          66

Profile of Cleanroom Method Experience (Percent of Projects Using Cleanroom Components).

Quality Improvement

Every Cleanroom case study reports fewer defects than expected; many report as much as a 10-fold reduction. Each defect prevented represents a potential failure, crash, or outage avoided.

Cost Reduction

Those case studies that measured productivity reported major improvements compared to similar projects. This is not unexpected: Every defect prevented is a defect that does not have to be found and fixed during testing or in the field. Furthermore, the Cleanroom techniques produce software that is simpler and easier to understand, so Cleanroom teams rarely introduce new bugs when fixing old ones.

Examples

Several major products have been developed using Cleanroom and delivered to customers. These include:

IBM COBOL/SF restructuring tool. (85,000 lines of PL/I code) This application reported a ten-fold reduction in total defects per thousand lines of code found during testing, compared to similar projects. At the same time, it reported a five-fold improvement in developer productivity, measured in lines of code per labor month. The product was delivered to customers in 1988 and logged only seven errors in its first three years of service, all of which required simple fixes.

IBM AOExpert/MVS™ system outage analyzer. (107,000 lines of code) This product combined knowledge-based techniques with systems software. Despite its complexity, it achieved a more than ten-fold reduction in total errors per thousand lines of code found during testing, compared to similar projects. At the same time it reported a three-fold improvement in productivity. No operational errors were reported during beta testing.

Ericsson Telecom OS32 operating system. (350,000 lines of C and PLEX). This telecommunications switch operating system reported a rate of one failure per thousand lines of code in testing, a seventy percent improvement in development productivity, and a 100 percent improvement in testing productivity.

Summary of Cleanroom Benefits

Cleanroom is based on the principle that defect prevention is better than defect removal. This principle guides a number of practices that are phased in and tailored to meet specific project needs.

Incremental development improves manageability and predictability. It permits prototyping to understand the user's needs early in the process, and it supports meaningful estimations of product quality. Risk-driven increment planning allows the project to address risks in order of their importance.

Team organization encourages knowledge dissemination, and gives flexibility to both management and staff.

Specification and design documentation promotes independence among system parts, and leads to robust, reliable, and maintainable systems. Team review of specifications and designs is the primary technique for delivering high-quality systems.

Testing practices based on statistical quality control produce a measurement of quality that is meaningful to the user. The development of an operational profile encourages the understanding of the eventual product usage. By using this profile to drive testing, every test execution is a rehearsal of eventual customer use.
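The operational-profile idea above can be sketched in code. In this minimal, illustrative Python example (the stimulus names, probabilities, and MTTF helper are my own assumptions, not taken from the article), each user operation is weighted by its estimated frequency of field use, test cases are drawn at random according to those weights, and a crude mean-time-to-failure figure is computed from observed inter-failure intervals:

```python
import random

# Hypothetical operational profile: each stimulus is weighted by its
# estimated probability of occurring in field use. Illustrative values only.
OPERATIONAL_PROFILE = {
    "open_session": 0.40,
    "query_record": 0.35,
    "update_record": 0.15,
    "close_session": 0.10,
}

def generate_test_case(profile, length, rng):
    """Draw one usage scenario: a sequence of stimuli sampled according
    to their field-usage probabilities, so each test execution rehearses
    likely customer use."""
    stimuli = list(profile)
    weights = [profile[s] for s in stimuli]
    return rng.choices(stimuli, weights=weights, k=length)

def estimate_mttf(interfailure_times):
    """Crude reliability estimate: mean number of test executions between
    observed failures. Real Cleanroom certification uses reliability-growth
    models; this is only the simplest possible stand-in."""
    if not interfailure_times:
        raise ValueError("no failures observed")
    return sum(interfailure_times) / len(interfailure_times)

rng = random.Random(42)
suite = [generate_test_case(OPERATIONAL_PROFILE, length=5, rng=rng)
         for _ in range(1000)]

# Sanity check: the suite's stimulus mix should approximate the profile.
flat = [s for case in suite for s in case]
observed = flat.count("open_session") / len(flat)
print(f"observed open_session frequency: {observed:.2f}")  # should be close to 0.40
```

Because test selection mirrors expected usage, failure data gathered this way supports statistically meaningful statements about the reliability the customer will actually experience, which is the point of certification.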

Quality improvement and cost reduction are closely related. Each defect prevented is one that does not require locating and correcting, a process that can itself introduce new defects. Making programs smaller, simpler, and better understood (gaining intellectual control) has a dramatic effect on the cost of maintenance. Such programs will be longer-lived, since new function can be added more easily. Reducing maintenance backlogs frees up valuable resources to solve new problems and gain competitive advantage. Finally, making the software process predictable and manageable reduces the cost of mitigating project risks.

Cleanroom provides flexible practices that build on a team's current knowledge base. No new languages or tools are required, and the introduction of Cleanroom can begin at any time.

Defect prevention, getting it right the first time, is an attainable goal with Cleanroom, and is vital to the survival of any business that develops software.
