This article provides a brief introduction to the Capability Maturity Model Integration (CMMI) that aims to cover most of the ground, if at a fairly shallow depth. The CMMI is a process-based model that sketches out a comprehensive picture of development. It builds on that to define a method for developing organization standard processes and for keeping them relevant. Those processes are leveraged to ultimately deploy statistical process control to improve organizational performance. The model is supported by a standard method for assessing an organization, SCAMPI appraisals. My hope is that after reading this article, the reader will be able to make an informed decision on whether or not digging into the CMMI further is warranted. Note that the notion of the "organization" in the CMMI allows for smaller groups within a company to be the focus of CMMI-based process improvement - so you don't have to wait for your whole company to get on board to get started.
I'd like to thank the following people for their help in reviewing this article: Gary Cort, Peter Harding, Melanie Paquette, Lovina Srivastava and Anthony Weicker.
The CMMI grew out of earlier models - the Software Capability Maturity Model (SW-CMM), the Systems Engineering Capability Model (EIA 731) and the Integrated Product Development CMM - as a way to pull together disparate models into a coherent whole. The model describes a system for developing products and services. In this article, I'll focus exclusively on its use in the context of software development.
In the continuous representation of the Capability Maturity Model Integration for Development (CMMI-DEV), the world of development is divided into process areas. Examples of process areas include Requirements Development and Project Planning. There are 22 process areas in total. For the most part, the process areas will be familiar aspects of development that professionals will recognize even if they have no prior knowledge of the CMMI. Within these process areas are specific practices that capture industry best practices in a manner that attempts to avoid prescribing how the practice is performed. The intent of the model is very much to avoid telling people how to do their jobs but rather to focus on what they should be trying to accomplish. For example, in the Project Planning process area, some example specific practices include "Determine Estimates of Effort and Cost" and "Identify Project Risks" - again these will be familiar aspects of how a project team goes about doing its work. Within a process area, the specific practices are grouped into specific goals - for example, the specific practice "Determine Estimates of Effort and Cost" is assigned to the specific goal "Establish Estimates" in the Project Planning process area while the specific practice "Identify Project Risks" is in the "Develop a Project Plan" specific goal.
An important distinction between the specific practices and goals is that the former are expected while the latter are required. The notion of model expectation here is significant: the model allows for the possibility that the organization has found alternative ways to do its work beyond those suggested by the specific practices. Returning to the example, while the model recognizes that an organization might not estimate effort and cost in the manner the practice suggests, it does require that some form of relevant estimate is developed somehow. Note that the specific practices being expected rather than required ought not to be misconstrued to mean they are optional - they really are expected.
Some process areas are discussed below, but see this article for a more comprehensive treatment.
The Continuous Representation: Generic Goals, Generic Practices and Capability Levels
When an organization satisfies all of the specific goals within a process area, it is said to be operating at capability level one in the continuous representation of the model (and if one or more goals aren't being satisfied, then the organization is said to be operating at capability level zero in that area). The remaining capability levels are two through five. Capability level two is largely about establishing organizational commitment to and support for performing the work in a given process area. Capability level three is about establishing organizational standards (in the form of organization standard process frameworks and tailoring guidelines used to optimize their use within a given context). Capability level three also introduces a feedback loop where improvement information obtained from those actually using the processes is used to improve them. The last two capability levels, four and five, are about applying a particular improvement technique, called statistical process control, to the organization standard processes to first stabilize them and then optimize their performance.
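To make the statistical process control of capability levels four and five a bit more concrete, here is a minimal sketch of the kind of control-limit calculation involved. The data and the choice of measure are entirely hypothetical - nothing here comes from the model itself, which deliberately does not prescribe specific metrics. The idea is simply that a stable process's historical performance defines limits, and new observations outside those limits signal special causes worth investigating.

```python
import statistics

def control_limits(samples):
    """Shewhart-style control limits (mean +/- 3 sigma) computed
    from a historical baseline of process performance data."""
    mean = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    return mean - 3 * sigma, mean + 3 * sigma

def out_of_control(samples, new_points):
    """Flag new observations outside the control limits - candidates
    for special-cause investigation."""
    lcl, ucl = control_limits(samples)
    return [x for x in new_points if not (lcl <= x <= ucl)]

# Hypothetical measure: effort in hours per peer review.
baseline = [4.1, 3.8, 4.4, 4.0, 3.9, 4.2, 4.3, 3.7]
print(out_of_control(baseline, [4.0, 6.5]))  # -> [6.5]
```

A review that took 6.5 hours falls well outside the historical limits, so it would be examined for a special cause rather than treated as normal variation.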
As you may have noted, none of the discussion on these higher capability levels has been tied to the specifics of any process area. In fact, there are goals and practices relevant to the progression of a process area through these capability levels, but they are shared across all process areas - hence, they are called generic goals and generic practices. As before, the practices are expected, while the goals are required. As an example, generic goal two is "Institutionalize a Managed Process" - achieving it implies that a given process area is operating at capability level two. Within that goal are ten generic practices, one of which is "Train People". Providing training to people performing work in a given process area is a model expectation but not, strictly speaking, a model requirement.
The Staged Representation
Everything we've covered to this point is relevant to the continuous representation of the model. There is another representation, the staged representation. The staged representation defines a suggested path of improvement through the process areas. The major milestones along that path are the maturity levels. For example, an organization which has the following process areas operating at capability level two is said to have achieved maturity level two:
- Project Planning
- Project Monitoring and Control
- Requirements Management
- Process and Product Quality Assurance
- Configuration Management
- Measurement and Analysis
- Supplier Agreement Management (if applicable to the organization)
So there is a straightforward mapping between the staged and continuous representations and, necessarily, between maturity levels and capability levels. The staged representation can be very helpful in understanding the relationships and synergies between the process areas since it takes a cross-process area perspective, whereas the continuous representation takes more of a process area by process area view. Which representation to use is entirely up to the organization. The continuous representation gives the organization great flexibility to chart its own improvement course, while the staged representation provides a common path that many other organizations have found useful in the past.
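The maturity level two check described above can be sketched as a simple function over a capability level profile. The process area names come from the list above; the function name and the sample profile are hypothetical, and this is only an illustration of the mapping, not an appraisal method.

```python
# The maturity level two process areas from the staged representation.
ML2_PROCESS_AREAS = [
    "Project Planning",
    "Project Monitoring and Control",
    "Requirements Management",
    "Process and Product Quality Assurance",
    "Configuration Management",
    "Measurement and Analysis",
    "Supplier Agreement Management",  # only if applicable
]

def achieves_maturity_level_two(profile, sam_applicable=True):
    """profile maps process area name -> capability level (0-5).
    Maturity level two requires each ML2 process area to operate
    at capability level two or higher."""
    required = [pa for pa in ML2_PROCESS_AREAS
                if sam_applicable or pa != "Supplier Agreement Management"]
    return all(profile.get(pa, 0) >= 2 for pa in required)

# A hypothetical organization that does no supplier management:
profile = {
    "Project Planning": 2,
    "Project Monitoring and Control": 2,
    "Requirements Management": 3,
    "Process and Product Quality Assurance": 2,
    "Configuration Management": 2,
    "Measurement and Analysis": 2,
}
print(achieves_maturity_level_two(profile, sam_applicable=False))  # -> True
```

Note that a process area above capability level two (Requirements Management at three here) doesn't hurt; maturity level two is a floor, not an exact profile.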
A Few Process Area Specifics
The engineering process areas and most of the project management process areas are familiar aspects of most development processes, but some other process areas deserve a brief overview to help complete the picture for people with no prior exposure to the CMMI.
The Measurement and Analysis (MA) process area forms a part of the rich model support for quantitative decision making and performance management. In MA, measurement objectives aligned with business objectives are first defined - this is very much analogous to the "Goal" and "Question" parts of the "Goal, Question, Metric" approach to measurement. It puts the measurement work on a solid foundation of business need. From there, measurement specifications and analysis procedures are defined and the measurements themselves are collected, analyzed and used. Although MA is one of the process areas on the path to maturity level two, the model emphasis on measurement does not stop there - the improvement information collected at capability level three and the statistical process control introduced at capability levels four and five rest squarely on the measurement foundations established in MA at capability level one.
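The "Goal, Question, Metric" alignment described above can be illustrated with a small structure. The goal, questions and metric names here are invented for the example, not drawn from the model - the point is only the traceability: every metric answers a question, and every question serves a business goal.

```python
# A hypothetical Goal-Question-Metric breakdown.
gqm = {
    "goal": "Improve estimation accuracy on software projects",
    "questions": [
        {
            "question": "How far do actuals deviate from estimates?",
            "metrics": ["effort variance (%)", "schedule variance (%)"],
        },
        {
            "question": "Is estimation accuracy improving over time?",
            "metrics": ["effort variance trend by quarter"],
        },
    ],
}

def metrics_for(goal_tree):
    """Collect every metric that traces back to the business goal."""
    return [m for q in goal_tree["questions"] for m in q["metrics"]]

print(metrics_for(gqm))
```

Anything that doesn't appear in such a breakdown is a candidate for not being measured at all - which is the point of grounding measurement in business need.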
Configuration Management (CM) is likely one of the most poorly understood process areas among those new to the CMMI. In my experience, most people will align CM in their minds with source code control and change control, which is correct but leaves out an important aspect of CM - baseline management. In it, we first define baselines, which are sets of things that we're going to keep current and mutually consistent. Change control is then applied to these baselines. For a simple example, we can imagine a software project defining a baseline of key technical deliverables - say the requirements documents, the architecture and design documents, the source code and perhaps the user documentation. When we accept a change to the project, the baseline definition tells us to consider the effect on all of the items in the baseline - so a requirements change does not just result in an update to the requirements document, but rather also propagates to all the other affected documents and sources in the baseline. When we've updated the set to reflect the change, we publish it as the new baseline. There are other aspects of CM (including CM audits) but the heart of the discipline is this definition of baselines and controlling change to them. The CM investment at capability level one is leveraged in all process areas in the capability level two generic practice, "manage configurations", which is about applying the CM discipline to work done in all process areas.
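The baseline management discipline described above can be sketched as a small data structure: a baseline is a named, mutually consistent set of items; an accepted change marks the affected items as needing work; and only a fully updated set may be published as the new baseline. The class, item and version names are hypothetical - real CM tools model this far more richly.

```python
from dataclasses import dataclass, field

@dataclass
class Baseline:
    """A named set of configuration items kept mutually consistent
    under change control."""
    name: str
    items: dict = field(default_factory=dict)   # item -> version
    stale: set = field(default_factory=set)     # items awaiting update

    def accept_change(self, affected_items):
        # A requirements change may ripple into design, code and docs.
        self.stale.update(affected_items)

    def update_item(self, item):
        self.items[item] += 1
        self.stale.discard(item)

    def publish(self, new_name):
        # Only a mutually consistent set may become the new baseline.
        if self.stale:
            raise ValueError(f"still inconsistent: {sorted(self.stale)}")
        return Baseline(new_name, dict(self.items))

b = Baseline("R1.0", {"requirements": 1, "design": 1, "source": 1})
b.accept_change({"requirements", "design"})
b.update_item("requirements")
b.update_item("design")
b2 = b.publish("R1.1")
```

Had we tried to publish before updating the design document, the inconsistency check would have refused - which is exactly the discipline the baseline definition buys us.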
The CMMI is very much a process-based approach to development - given that, it's critical to ensure that the processes are being used correctly. That's where the Process and Product Quality Assurance (PPQA) process area comes in. In it, adherence to the process and product standards is evaluated and any issues found are tracked to closure. To make this possible, it's important that performing the processes leaves some tangible evidence behind. So doing a design on a white board and then erasing the white board may meet a particular designer's needs, but it will almost certainly make it impossible to objectively evaluate whether or not the design process was followed - to say nothing of other problems, like communicating the design to others or recording it for future reference. At capability level one, the expectation is that the organization is doing PPQA on at least some processes and products. At capability level two, this ability is leveraged across the board in the generic practice "objectively evaluate adherence", which is in place to ensure that the emerging processes available at capability level two are being used correctly.
Appraisals - Verifying Capability or Maturity Level Achievement
Let's step back a moment from the model definition itself and consider how you might use it in your organization. If your goal is simply to improve the processes in your organization, you can work informally from the model, using its suggestions in the order they make most sense to you. In this scenario, you're free to skip practices, goals or whole process areas if they're either of no perceived value to your organization or not compelling enough to pursue at that time. To be honest, my personal experience has been that this is the healthiest approach to CMMI-based process improvement. The drawback to this approach, though, is that you may not end up achieving any particular capability or maturity level - and these aspects of the model are tailor-made for senior management to base organizational objectives on. It's for this reason that most organizations do not do this - instead, they define targets in the form of either capability level profiles (a set of process areas to advance to specific capability levels) or maturity levels. The organization then plans out how it will achieve those targets and then executes on those plans. When the organization deems itself ready, it verifies achievement of the targets through an appraisal.
The CMMI comes with both a set of requirements an appraisal must satisfy and a particular appraisal method - the latter is the SCAMPI method. I'll focus on the highest fidelity SCAMPI method - the SCAMPI A. Note that there are smaller versions of SCAMPI appraisals that can be used to help verify that the plan for process improvement is itself sound from a model perspective and that initial deployments of the new processes are achieving the objectives - these uses of appraisals are beyond the scope of this article.
In a SCAMPI A, a team of qualified appraisers led by a certified SCAMPI A Lead Appraiser assesses the organization against the model. They do this using primarily two sources of information - interviews and documentation. Both of these sources must be present. This proviso ensures that the appraisal team is not swayed either by a preponderance of people claiming they perform a given process without documentary evidence that they do, or by a stack of documentary evidence describing processes that satisfy model intent but that no one at the organization actually uses.
One key observation resulting from this is the fact that, as was noted in the PPQA case above, execution of a process must leave some tangible evidence behind - the SCAMPI appraisal method echoes this requirement in demanding documentary evidence to support the case for an organization performing a given practice or achieving a given goal. So again, it's not enough that a process is performed - it must leave evidence behind.
Similarly, the method could be incorrectly swayed by looking only at a particular project which happens to be "better" from a CMMI perspective. To counter this risk, the SCAMPI method places a great deal of focus on how to select projects to help ensure that the results coming out of the appraisal will be indicative of the expected performance of the organization on their next project.
In this article, we've sketched out the CMMI as a model with a foundation in development best practices, upon which is built an incremental approach to the development of organization standard processes. Those processes are, in turn, leveraged in driving improvement through the application of statistical process control. The intent throughout is to be comprehensive, in the sense that all aspects of what you need to do to deliver software products should be covered, but in an implementation-independent manner, in the sense that many different ways of accomplishing the model expectations and requirements are possible. The CMMI is fundamentally a process-based approach to organizational improvement. Defined processes are created, used, audited, stabilized and optimized. Capability and maturity levels (and the continuous and staged representations) provide guidance on the direction the organization can take in process improvement. SCAMPI appraisals provide a method for assessing progress.
My hope is that this article has provided you enough background to make reasonably informed decisions regarding whether or not the CMMI will make sense in your organization.