THE U.S. ARMY’S AFTER ACTION REVIEWS: SEIZING THE CHANCE TO LEARN
An Excerpt from:
David A Garvin, “Learning In Action, A Guide to Putting the Learning Organization to Work” (Boston: Harvard Business School Press, 2000), 106-116. (Section Headers created by the Wildland Fire Lessons Learned Center to enhance readability)
Reprinted with Permission
The U.S. Army is one of the few organizations to have institutionalized these reflection and review processes, especially at the group level. After Action Reviews (AARs) are now standard Army procedure.49 They were introduced in the mid-1970s and were originally designed to capture lessons from the simulated battles of the National Training Centers. The technique diffused slowly—according to the Army’s chief of staff, it was a decade before the process was fully accepted by line officers and embedded in the culture—and only in recent years have AARs become common practice. The turning point was the Gulf War. AARs sprang up spontaneously as small groups of soldiers gathered together, in foxholes or around vehicles in the middle of the desert, to review their most recent missions and identify possible improvements. Haiti marked a further step forward. There, for the first time, AARs were incorporated into all phases of the operation and were used extensively to capture and disseminate critical organizational knowledge.

The technique is relatively straightforward. It bears a striking resemblance to “chalk talks” in sports, where players and coaches gather around a blackboard shortly after a game to discuss the team’s performance. Both chalk talks and AARs are designed to make learning routine, to create, as one commander put it, “a state of mind where everybody is continuously assessing themselves, their units, and their organizations and asking how they can improve.” In practice, this means that all participants meet immediately after an important activity or event to review their assignments, identify successes and failures, and look for ways to perform better the next time around. The process may be formal or informal, may involve large or small groups, and may last for minutes, hours, or days. But discussion always revolves around the same four questions:

• What did we set out to do?
• What actually happened?
• Why did it happen?
• What are we going to do next time?

Time Criteria

According to Army guidelines, roughly 25 percent of the time should be devoted to the first two questions, 25 percent to the third, and 50 percent to the fourth.

Establish The Facts

The first question is deceptively simple. Group members must agree on the purpose of their mission and the definition of success. Otherwise, there will be no basis for evaluating performance or comparing plans with results. In the Army, objectives are normally defined with great precision. They include three elements: “the key tasks involved, the conditions under which each task may need to be performed, and the acceptable standards for success. (For example, at a range of 2,000 yards, hit an enemy tank moving at 20 miles per hour over uneven terrain at night with an 80% success rate.)”50 With objectives like these, there is little ambiguity, and it is easy to determine whether a job has been done well or poorly. Such clarity also avoids confused, inconclusive reviews. According to an experienced AAR facilitator:

Unsuccessful AARs are often those where the boss has the attitude, “I don’t know what I want, so I can’t tell you exactly what to do. But I’ll recognize it when I see it. So just go out there and do good things.” That’s not helpful. We insist that our leadership, from the
very top officer to those in charge of three to five men, give soldiers clear guidance. They must have a standard.51

The second question requires that participants agree on what actually happened during a mission. This too is more difficult than it first appears. Facts can be slippery, especially when stress is high and events move rapidly. All too often, memories are flawed, leading to competing or inconsistent stories. Reality—what soldiers call “ground truth”—becomes difficult to pin down, resulting in gridlock and AARs that progress slowly if at all.

But these problems can be overcome. At the National Training Centers, facts are verified by pooling information from three diverse, objective sources: observer-controllers, instrumentation, and taping. Observer-controllers are skilled, experienced soldiers who shadow individual officers throughout their training exercises. They also provide on-the-spot coaching and lead AARs. (Not surprisingly, many later do a tour of duty at the Center for Army Lessons Learned [CALL], where they are assigned to a Lessons Learned Division.) A training exercise for three thousand to four thousand people normally involves approximately six hundred observer-controllers. Typically, their time in service makes them a bit senior to the officers they are observing, providing both credibility and clout. And because they have complete access to battle plans, are intimately familiar with the terrain, and are constantly present during maneuvers, they can effectively arbitrate debates when facts are in dispute.

Technology, in the form of instrumentation and taping, provides an additional source of objective information. The resulting record is extremely detailed and leaves little room for argument. Onboard microprocessors track the exact position and movement of vehicles over time, while sophisticated, laser-based technologies note when and where weapons were fired as well as the resulting hits and misses.
Video cameras, mounted at critical locations throughout the training centers, record troop movements. These films provide vivid, compelling testimony, with extraordinary fidelity. As one officer put it: “If a picture is worth a thousand words, a motion picture must be worth a million.” Audiotapes round out the story, conveying the exact timing and content of communications both within and across units. Together, these tools and approaches ensure that facts are reconstructed with considerable accuracy. During AARs at the National Training Centers, soldiers have little problem answering the question, What actually happened?

Unfortunately, they face many more difficulties in the field, where observer-controllers and recording technologies are not always available. Occasionally, CALL teams and combat video crews will be on hand to provide objective data. But in most cases, accurate reconstruction depends on pooling multiple perspectives in a process that resembles “majority rules.” Here, immediacy is crucial to success, as is wide participation. To minimize memory losses, AARs must be conducted as soon after the event as practical—preferably, the very same day. They should include, whenever possible, all key participants, as well as unbiased third-party observers, members of staff and supporting units, and even senior commanders. Participants should agree on some mechanism to resolve disagreements and ensure that discussion does not grind to a halt when differences emerge.

Why Questions #1 and #2 Are “Keys To Success”

Once the facts are established, diagnosis can begin. Outside the Army, many groups start their reviews at this stage, assuming that prior steps can be omitted without problems. But agreement on both the standards to be met (question one) and actual performance (question two) is essential to avoiding endless debates. The Army’s insistence that the first 25 percent of every AAR be devoted to these topics is a critical insight.
And the benefits are hardly confined to the military. Companies can also gain by devoting time up front to clarifying goals and targets and setting unambiguous standards—expected levels of customer satisfaction, milestones for project completion, penetration rates for new products—and then comparing
them with results during the review process. By deferring diagnosis, these two steps vastly improve the odds that ensuing discussions will be grounded and productive.

The third question begins the process of analysis by asking for an examination of cause and effect. At this stage, the goal is to tease out the underlying reasons for success or failure. A tank unit was expected to reach a critical checkpoint at a certain hour but was twenty minutes late; what caused the discrepancy? A scout was sent out to inspect a position to the north but ended up five miles east; how did he become lost? A commander planned to coordinate artillery attacks with two other battalions but never communicated his intentions; what caused the breakdown? Answering these questions requires problem-solving skills, as well as a willingness to accept responsibility. Groups must brainstorm possible explanations and then find ways to choose among several plausible alternatives, often in the face of limited and conflicting data. They must also be ruthlessly honest. Individuals need to face up to their own deficiencies, avoiding the all-too-common tendency to turn a deaf ear when personal errors or weaknesses are uncovered. This is particularly true of leaders. As one commander observed: “If you are not willing to hear criticism, you probably shouldn’t be doing an AAR.”

At times, analysis is simple, and cause and effect are easy to untangle. Missed opportunities or roads not taken are usually obvious to both individuals and groups. In Haiti, a sergeant responsible for convoying soldiers to the beach returned several hours late because one of his trucks became stuck in the sand. The ensuing AAR was brief and to the point: he had failed to pack a tow bar. The first units entering Port-au-Prince were startled to discover that delivering babies was an important part of their mission. They quickly wrote an AAR to ensure that all medics received at least rudimentary obs