Learning Histories: “Assessing” the Learning Organization

Anyone working to build a learning organization will, sooner or later, run up against the challenge of “proving” the value of what he or she has done. Without some form of assessment, it is difficult to learn from experience, transfer learning, or help an organization replicate results.

But assessment strikes fear in most people’s hearts. The word itself draws forth a strong, gut-level memory of being evaluated and measured, whether through grades in school, ranking in competitions, or promotions on the job. As writer Sue Miller Hurst has pointed out, most people have an intrinsic ability to judge their progress. But schools and workplaces subjugate that natural assessment to the judgment of teachers, supervisors, and other “experts,” whose appraisals determine promotions, wealth, status, and, ultimately, self-esteem.

Assessing Learning

Is it possible to use assessment in the service of learning? Can assessment be used to provide guidance and support for improving performance, rather than elicit fear, resentment, and resignation? This has been a guiding question at the MIT Center for Organizational Learning for several years, as we have struggled to find a reasonable way to assess learning efforts. The motivations are essentially pragmatic: our corporate affiliates need some idea of the return on their investments, and we as researchers need a better understanding of our work.

To create a new system of assessment, we started by going back to the source — to the people who initiate and implement systems work, learning laboratories, or other pilot projects in large organizations. We then tried to capture and convey the experiences and understandings of these groups of people. The result is a much-needed document that moves beyond strict assessment into the realm of institutional memory. We call it a “learning history.”

The Roots of a New Storytelling

A learning history is a written document or series of documents that is disseminated to help an organization become better aware of its own learning efforts. The history includes not just reports of action and results, but also the underlying assumptions and reactions of a variety of people (including people who did not support the learning effort). No one individual view, not even that of senior managers, can encompass more than a fraction of what actually goes on in a complex project — and this reality is reflected in the learning history. All participants reading the history should feel that their own points of view were treated fairly and that they understand many other people’s perspectives.

A learning history draws upon theory and techniques from ethnography, journalism, action research, oral history, and theater. Ethnography provides the science and art of cultural investigation — primarily the systematic approach of participant observation, interviewing, and archival research. From journalism come the skills of getting to the heart of a story and presenting it in a way that draws people in. Action research brings to the learning history effective methods for developing the capacities of learners to reflect upon and assess the results of their efforts. Finally, the tradition of oral historians offers a data collection method for providing rich, natural descriptions of complex events, using the voice of a narrator who took part in the events. All of these techniques help the readers of a learning history understand how participants attributed meaning to their experience.

Each part of the learning history process — interviews, analysis, editing, circulating drafts, and follow-up — is intended to broaden and deepen learning throughout the organization by providing a forum for reflecting on the process and substantiating the results. This process can be beneficial not only for the original participants, but also for researchers and consultants who advised them — and ultimately for anyone in the organization who is interested in the organization’s learning process.

Insiders versus Outsiders

One goal of the learning history work is to develop managers’ abilities to reflect upon, articulate, and understand complex issues. The process helps people to hone their assessments more sharply by communicating them to others. And because a learning history forces people to include and analyze highly complex, dynamic interdependencies in their stories, people understand those interdependencies more clearly.

In addition, the approach of a learning history is different from that of traditional ethnographic research. While ethnographers define themselves as “outsiders” observing how those inside the cultural system make sense of their world, a learning history includes both an insider’s understanding and an outsider’s perspective.

Having an outside, “objective” observer is an essential element of the learning history. In any successful learning effort, people undergo a transformation. As they develop capabilities together, gain insights, and shift their shared mental models, they change their assumptions about work and interrelationships. This collective shift reorients them so that they see history differently. They can then find it difficult to communicate their learning to others who still hold the old frame of reference. An outside observer can help bridge this gap by adding comments in the history such as, “This situation is typical of many pilot projects,” or by asking questions such as, “How could the pilot team, given their enthusiasm, have prevented the rest of the organization from seeing them as some sort of cult?”

Similarly, retaining the subjective stance of the internal managers is important for making the learning history relevant to the organization. In most assessments, experts offer their judgment and the company managers receive it without gaining any ability to reflect and assess their own efforts. The stance of a learning history, on the other hand, borrows from the concept of the “jointly told tale,” a device used by a number of ethnographers in which the story is “told” not by the external anthropologist or the “naive” native being studied, but by both together. For these reasons, the most successful learning history projects to date seem to involve teams of insiders (managers assigned to produce and facilitate the learning history) working closely with “outside” writers and researchers hired on a contractual basis.

Results versus Experience and Skills

Companies today don’t have a lot of slack resources or extra cash. Thus, in every learning effort, managers feel pressured to justify the expense and time of the effort by proving it led to concrete results. But a viable learning effort may not produce tangible results for several years, and the most important results may include new ways of thinking and behaving that appear dysfunctional at first to the rest of the organization. (More than one leader of a successful learning effort has been reprimanded for being “out of control.”) In today’s company environment of downsizing and re-engineering, this pressure for results undermines the essence of what a learning organization effort tries to achieve.

One goal of the learning history work is to develop managers’ abilities to reflect upon, articulate, and understand complex issues.

Yet incorporating results into the history is vital. How else can we think competently about the value of a learning effort? We might trace examples where a company took dramatically different actions because of its learning organization efforts, but it is difficult to construct rigorous data to show that an isolated example is typical. Alternatively, we might merely assess skills and experience. A learning historian might be satisfied, for instance, with saying, “The team now communicates much more effectively, and people can understand complex systems.” But that will be unpersuasive — indeed, almost meaningless — to outsiders.

In this context, assessment means listening to what people have to say, asking critical questions, and engaging people in their own inquiries: “How do we know we achieved something of value here? How much of that new innovation can we honestly link to the learning effort?” Different people often bring different perceptions of a “notable result” and its causes, and bringing those perceptions together leads to a common understanding with intrinsic validity.

For example, one corporation’s learning history described a new manufacturing prototype that was developed by the team. On the surface, this achievement was a matter of pure engineering, but it would not have been possible without the learning effort. Some team members had learned new skills to communicate effectively with outside contractors (who were key architects of the prototype), while others had gained the confidence to propose the prototype’s budget. Still others had learned to engage with each other across functional boundaries to make the prototype work. Until the stories of these half-dozen people were brought together, they were not aware of the common causes of each other’s contributions, and others in the company were unaware of the entire process. The learning history thus included a measurable “result” — the new prototype saved millions of dollars in rework costs — but simply reporting a recipe for constructing new prototypes would be of limited value. At best, it would help other teams mimic the original team, but it wouldn’t help them learn to create their own innovations. Only stories, which deal with intangibles such as creating an atmosphere of open inquiry, can convey the necessary knowledge to get the next team started on its own learning cycle.

The Strength of the Story

Some learning histories have been created after a project is over. Participants are interviewed retrospectively, and the results of the pilot project are more-or-less known and accepted. Other histories are researched while the story unfolds, and the learning historian sits in on key meetings and interviews people about events that may have taken place the day before. “Mini-histories” may be produced from these interviews, so that the team members can reflect on their own efforts as they go along and improve the learning effort while it is still underway. But such reflection carries a burden of added discipline: it adds to the pressure on the learning historian to “prove results” on the spot, to serve a political agenda, or to justify having a learning history in the first place.

HOW TO CREATE A LEARNING HISTORY

While every learning history project is different, we have found the following steps and components useful. See the end of this article for an excerpt from an actual learning history.

Accumulate Data

Start by gathering information through interviews, notes, meeting transcripts, artifacts, and reports. For a project that involved about 250 people, we found we needed to interview at least 40 individuals from all levels and perspectives to get a full sense of the project. We try to interview key people several times, because they often understand things more clearly the second or third time. It is useful to come up with an interview protocol based on notable results (e.g., “Which results from this project do you think are significant, and what else can you tell us about them?”). All interviews in our work are audiotaped and transcribed.

Sort the Material

Once you have gathered “a mess of stuff” accumulated on a computer disk, you will want to sort it. Try to group the material into themes, using some social science coding and statistical techniques, if necessary, to judge the prevalence of a given theme. This analysis produces a “sorted and tabulated mess of stuff” that will become an ongoing resource for the learning history group as it proceeds. The learning historians might work for several years with this material, continually expanding and reconsidering it. They can use it as an ongoing resource, spinning off several documents, presentations, and reports from the same material.
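
As a rough sketch of the kind of tabulation this sorting can involve, the following few lines of Python assume that excerpts have already been hand-tagged with theme codes by the learning historians; the speakers and themes shown are hypothetical:

```python
from collections import Counter

# Hypothetical hand-coded data: each interview excerpt has been tagged by
# the learning historians with one or more theme codes.
coded_excerpts = [
    {"speaker": "team leader", "themes": ["cross-functional trust", "fear of blame"]},
    {"speaker": "engineer", "themes": ["fear of blame"]},
    {"speaker": "contractor", "themes": ["cross-functional trust", "budget confidence"]},
]

# Tally how many excerpts touch each theme to judge its prevalence.
prevalence = Counter(t for excerpt in coded_excerpts for t in excerpt["themes"])

for theme, count in prevalence.most_common():
    print(f"{theme}: {count} of {len(coded_excerpts)} excerpts")
```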

Write the Learning History

At some point, whether the presentation is in print or another medium, it must be written. Generally, we produce components in the order given here, although they may not necessarily appear in that order in the final document:

  • Notable results: How do we know that this is a team worth writing about? Because they broke performance records, cut delivery times in half, returned 8 million dollars to the budget, or made people feel more fulfilled? Include whatever indicators are significant in your organization. It is helpful to use notable results as a jumping-off point, particularly if you are willing to investigate the underlying assumptions—the reasons why your organization finds these particular results notable. Often, a tangible result (the number of engineering changes introduced on a production line) signifies an intangible gain (the willingness of engineers to address problems early, because they feel less fear).
  • A curtain-raiser: What will the audience see when the drama opens? We begin by thinking very carefully about how the learning history opens. The curtain-raiser must engage people and give them a flavor for the full story without overwhelming them with plot details. The curtain-raiser may be a vignette or a thematic point; often, it’s a striking and self-contained facet of the whole.
  • Nut ’graf (journalism jargon for the thematic center of a news story): If you only had one or two paragraphs to tell the entire learning history, what would you put in those paragraphs? Even if this thematic point doesn’t appear in the final draft, it will help focus your attention all the way through the drafting.
  • Closing: What tune will the audience be singing when they leave the theater? How do you want them to be thinking and feeling when they close the report or walk away from the presentation? You may not keep the closing in its first draft form, but it is essential to consider the closing early in your process because it shapes the direction that the rest of your narrative will take.
  • Plot: How do you get people from the curtain-raiser to the closing? Will it be strictly chronological? Will you break the narrative up into thematic components? Or will you follow specific characters throughout the story? Every learning history demands a different type of plot, and we try to think carefully about the effects of the different styles before choosing one. So far we have found that many plots revolve around key themes, such as “Innovation in the Project” and “Engaging the Larger System.” Each theme then has its own curtain-raiser, nut ’graf, plot, and closing.
  • Exposition: What happened where, when, and with whom? Here is where you say there were 512 people on the team, meeting in two separate buildings, who worked together from 1993 to 1995, etc. The exposition must be told, but it often has no thematic value. It should be placed somewhere near the beginning, but after the nut ’graf.
  • The right-hand column (jointly told tale): So far, the most effective learning histories tell as much of the story as possible in the words of participants. We like to separate these narratives by placing them in a right-hand column on the page. We interview participants and then condense their words into a well-rendered form, as close as possible to the spirit of what they mean to say. Finally, we check the draft of their own words with each speaker before anyone else sees it.
  • The left-hand column (questions and comments): In the left column, we have found it effective to insert questions, comments, and explanations that help the reader make sense of the narrative in the right-hand column.

To create an ongoing learning history, an organization must embrace a transformational approach to learning. Instead of simply learning to “do what we have always done a little bit better,” transformational learning involves re-examining everything we do—including how we think and see the world, and our role in it. This often means letting go of our existing knowledge and competencies, recognizing that they may prevent us from learning new things. This is a challenging and painful endeavor, and learning histories bring us face to face with it. When the learning history is being compiled simultaneously with the learning effort, then the challenge and pain of examining existing frameworks is continuous. But to make the best of a “real-time” learning history, admitting and publicizing mistakes must be seen as a sign of strength. Uncertainty can no longer be a sign of indecisiveness, because reflecting on a learning effort inevitably leads people to think about muddled, self-contradictory situations. Much work still needs to be done on setting the organizational context for an ongoing learning history so that it doesn’t set off flames that burn up the organization’s good will and resources.

Currently, there are almost a dozen learning history projects in progress at the Learning Center. In pursuing this work, we no longer talk about “assessing” our work. Instead, we talk about capturing the history of the learning process. It is amazing how this approach and new language changes the tenor of the project. People want to share what they have learned. They want others to know what they have done — not in a self-serving fashion, but so others know what worked and what didn’t work. They don’t want to be assessed. They want their story told.

George Roth is an organizational researcher with the MIT Center for Organizational Learning and a consultant active in the study of organizational culture, change, and new technology introduction.

Art Kleiner is co-author and editorial director of The Fifth Discipline Fieldbook, and author of the forthcoming The Age of Heretics, a history of the social movement to change large corporations for the better.

EXCERPT FROM A LEARNING HISTORY

Facilitative Modeling: Using Small Models to Generate Big Insights

All you need to do is read the paper or watch the news to realize that the world is becoming more difficult to understand than ever before. For instance, is the U.S. policy in Iraq achieving its intended results? Why is the stock market rising? How will our healthcare system continue to protect us from health crises when more and more people find it difficult to receive medical treatment because of rising health costs? In response to such enormous complexity, the thoughtful observer will likely have more questions than answers! Even relatively small social systems, such as business organizations, face so many problems and choices that it’s hard to know where to start. Should we build our CRM (customer relationship management) capacity before we increase investment in R&D? What about staff training? Will developing a new product line increase our revenue or perhaps reduce “brand strength”? Trying to juggle so many competing demands and uncertain outcomes has led many organizations to fall back on a “stovepipe” approach, in which each functional area tries to maximize its impact — even when many experts agree that this tactic is generally detrimental to a company’s overall health. What we need are approaches that can help us effectively deal with the myriad issues we face by drawing upon the wisdom embedded “across the organization” or in external partners.

RIGOR VS. SUPPORT

The Facilitative Modeling approach for making important decisions combines high levels of analytical rigor with high levels of stakeholder support.

Common Decision-Making Approaches

Because of this level of complexity in all aspects of organizational life, organizations usually rely on what I refer to as the “shoot-from-the-hip” approach for making important decisions. You’ve seen this technique if you’ve ever been in a team meeting in which a decision must be made today. Some members of the group toss out their ideas; most participants stay silent. Eventually, the team leader contributes his or her opinion, and everyone agrees. Decision made! Most meeting participants later bemoan the “poor” decision, claiming they won’t support it. The result? The new policy dies on the vine prior to implementation, leaving the organization the same as it was before.

In analyzing “shoot-from-the-hip” decisions, we observe that they lack strength in at least two major areas: analytical rigor and stakeholder support (see “Rigor vs. Support”). This isn’t a novel observation: Organizations have struggled with these two shortcomings for years and have devised various ways to overcome them.

1. The Technological Approach

Before making a major decision, in order to increase the level of analytical rigor (or understanding of the issues), managers often rely on analysts and their toolkit — what I call the Technological Approach. Organizations adopting the Technological Approach generally do so because they’ve fallen victim to the mindset that they must find the perfect answer. The idea is that if you throw enough analysis at an issue, you can completely understand everything and uncover an ideal solution. These organizations think the answer must be found in the numbers.

To process the data they generate, organizations subscribing to the Technological Approach employ spreadsheets and statistical techniques. Some even build large simulation models to test nearly infinite possible scenarios. However, these tools can obscure the assumptions underlying the analysis. And because decision-makers aren’t privy to these hidden assumptions, they cannot compare them to their own mental models — so they do not trust the resulting recommendations. This lack of trust in the analysis is a major factor in why, although usually carefully applied, the Technological Approach rarely generates the support needed to lead to effective policy-making.

2. The Stakeholder Approach

In contrast, proponents of a Stakeholder Approach often put technology aside and instead try to build knowledge and support through stakeholder involvement. Well-known techniques that follow this approach include Future Search, Open Space Technology, the World Café, various forms of dialogue — even some facilitated mapping sessions using causal loop diagrams and systems archetypes. These methodologies share an underlying mindset — by getting representation from different players in “the system,” everyone will gain a broader view of the problem at hand. Further, by allowing participants to express divergent perspectives in an unconstrained fashion, the Stakeholder Approach lets them formulate creative, systemic recommendations.

Whether trying to define the problem or to generate solutions, people applying these processes (if only implicitly) tend to follow a model of interaction described by Interaction Associates as the Open-Narrow-Close model. During the Open phase, participants get all of the data on the table while defining the problem; if they’re generating solutions, this is the stage in which creative solutions spring forth from the group’s collective wisdom. During the Narrow phase, contributors take an overwhelming list of choices (problems or solutions) and narrow them down to a few to consider further. During the Close phase, they actually choose which problems to tackle or solutions to implement and how to do so. Managers then often assign groups to each of the major action items identified during this stage and give them their blessing to “go forth and implement.”

The Stakeholder Approach includes processes that build broad support — unlike what often occurs in the Technological Approach. Plus, it helps those involved to see the system from a broad spatial and sometimes temporal perspective. These results are necessary and important for creating effective changes in any system.

A major weakness of the Stakeholder Approach, however, is that the processes used to narrow and choose among the resulting divergent issues/strategies lack rigor and usually rely on the assumption that, simply by having enough stakeholder representation, the group will make excellent decisions. But as Irving Janis learned by studying extremely poor decisions (such as the Bay of Pigs fiasco and the escalation of the Vietnam War, which he described in his book Groupthink), groups with very high average IQs can function well below expectations.

APPROACHES FOR IMPLEMENTING SYSTEMS THINKING

Facilitative Modeling serves as a middle ground between the Technological Approach and the Stakeholder Approach.

Barry Richmond of High Performance Systems, Inc. created a simple example called the Rookie-Pro exercise that also illustrates this point. Despite working with a much simpler human resource system than that found in most organizations, only 10 to 15 percent of individuals can guess the system’s future behavior — even after lengthy discussion! So the assumption that the collective wisdom of the group will surface in a way that leads to optimal decision-making is tenuous at best.

In addition, the framework employed to guide team members in narrowing and choosing among different options doesn’t help to determine if elements of the proposed solutions need to be implemented at different times and in varying degrees. The result is that the organization often chooses to put the same amount of resources and effort into each action item. Nor does the Stakeholder Approach determine if the issues are interconnected — different groups may be separately implementing policies that should be done together or, even worse, are mutually exclusive.

Facilitative Modeling

The good news is that there is a way to both rigorously understand (or even reduce) complexity and improve stakeholder support! Practitioners are often drawn to the field of systems thinking because of its promise to build collective understanding — to get everyone on the same page. Even so, these managers can be pulled between the Technological Approach (big simulation models created by experts) and the Stakeholder Approach (facilitated sessions using causal loop diagrams or systems archetypes). But there’s a middle ground — a large range of activities that I refer to as “Facilitative Modeling” — where tremendous power resides (see “Approaches for Implementing Systems Thinking”).

Facilitative Modeling is a Technological Approach, because it uses computer simulation and the scientific method to build understanding. It is also a Stakeholder Approach, because it requires the input of the important stakeholder groups, uses a common language so everyone can get on the same page, and creates small, simple, and easy-to-understand models. The models don’t generate the answer; rather, they facilitate rigorous discussion. Facilitative Modeling usually culminates in a facilitated multi-stakeholder session in which the participants generate common understanding and make well-informed decisions.

Overview of the Process

In the Facilitative Modeling process, a group of stakeholders identifies and addresses an issue critical to their collective success. The issue is often one that has been resistant to organizational efforts to “fix” it. After choosing the area for exploration, the group sets the agenda for a facilitated session. In preparation for that meeting, several individuals in the group serve as a modeling team and develop (alone or working with a modeler) a series of simple systems thinking simulation models that clearly articulate important components of the issue. These components may include the historical trend for that issue, the future implications if the trend continues, possible interventions, and the unintended consequences of some of these solutions. The models are deliberately kept small so that stakeholders will understand them and the development process remains manageable.

However, it’s not enough just to make models! In fact, building useful models is probably less than half of what makes a Facilitative Modeling initiative successful. The process requires the modeling team and perhaps others to create additional materials for the facilitated session, such as workbooks for tracking experiments and writing reflections, as well as CDs of the models for after the session. A facilitator and/or design team needs to carefully plan various aspects of the session, such as appropriate questions, suggested experiments to run on the model, and a mix of small and large group discussion.

The facilitated session represents the culmination of the process. During the gathering, teams of two to four people explore the models on computers. The session includes large group interludes and debriefs between exercises. And at the end of the session, participants discuss and agree on next steps based on the insights that emerged during the event (see “The Facilitative Modeling Process”).

THE FACILITATIVE MODELING PROCESS

A Facilitative Modeling Process contains the following major steps:

  1. Identify an issue of importance
  2. Determine stakeholders who have impact on/from the issue
  3. Use stakeholders to redefine the issue (either individually or collectively)
  4. Develop an agenda for a facilitated session
  5. Develop (usually more than one) model that surfaces important aspects of the issue
  6. Develop supporting materials
  7. Participate in a session using the models as tools for helping stakeholders explore, experiment with, and discuss the issues
  8. Use insights from the models and discussion to determine action items and next steps

Facilitative Modeling in Action

Using the Facilitative Modeling Process outlined above, a nonprofit organization recently explored potential issues associated with implementing new funding policies. This organization was responsible for improving the health and welfare of the poor population in a community by giving funds to other local nonprofits to provide services. Originally, the organization had determined which organizations to fund and how much funding to supply by analyzing the services that the target organization would provide; in recent years, it had settled into just increasing the amount of funding incrementally over the previous year’s figure. To create more accountability among the local organizations and improve outcomes in the community, the nonprofit had decided to apply a performance-driven approach to funding (that is, base funding on projected improvements to performance indicators and then renew the funding if the community experienced noticeable improvement in those areas).

Some members of the organization, as well as members of an important partner group, were concerned about the potential barriers to implementing this updated approach and were eager to understand possible unintended consequences that might result from the change. They agreed that a Facilitative Modeling approach would be an excellent way to surface and discuss these issues in a way that would give all stakeholders shared insight. In little more than five days of working with a facilitator and a few representatives from the organization and its partner, the team developed three small “conversational” models for a one-day facilitated session.

At the beginning of the session, the group adopted a set of ground rules to guide their interactions. Once participants agreed to the guidelines, they began by experimenting with the first model. The purpose of this initial simulation was to surface and discuss the potential dynamics associated with implementing the new funding approach. Allowing “sub-groups” to work with the models at their own speed often increases their level of understanding. However, even those with some skill at reading stock-and-flow diagrams like the one shown here can be quickly overwhelmed by large maps. The simulation therefore included a function that let the sub-groups slowly unfurl pieces of the map so that they more easily followed its logic (see “The First Map” below).

The map shown here represents one way to look at the different organizations affected by the nonprofit’s funding decisions. The language of stocks and flows is ideally suited for looking at this issue. The three stocks at the top of the diagram (the rectangles labeled “Resistant,” “Not Committed,” and “Committed”) represent groups of organizations. Currently, because the new approach has yet to be implemented, all organizations would belong in the “Not Committed” stock. Eventually, as the new funding approach is made into policy, organizations would begin to move into the “Committed” or “Resistant” stocks. Obviously, if possible, the funding group wanted to avoid any organizations becoming “Resistant.”
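
As a minimal sketch of how such a three-stock structure behaves over time, the following Python loop integrates the two flows month by month. The rates, time horizon, and starting values are invented for illustration; they are not the workshop model’s parameters, though the one-way flow into “Resistant” mirrors the assumption the participants questioned below.

```python
# A minimal sketch of the three-stock commitment structure described above.
# All rates, the time step, and initial values are invented for illustration;
# they are not the parameters of the actual workshop model.

MONTHS = 24
dt = 1.0                  # time step: one month

not_committed = 100.0     # all client organizations start here
committed = 0.0
resistant = 0.0

commit_rate = 0.08        # fraction of Not Committed that commits per month
resist_rate = 0.03        # fraction of Not Committed that turns resistant per month

for month in range(1, MONTHS + 1):
    # Two outflows drain the Not Committed stock; as in the workshop model,
    # organizations that become Resistant never flow back.
    committing = commit_rate * not_committed * dt
    resisting = resist_rate * not_committed * dt
    not_committed -= committing + resisting
    committed += committing
    resistant += resisting
    print(f"month {month:2d}: not committed {not_committed:5.1f}  "
          f"committed {committed:5.1f}  resistant {resistant:5.1f}")
```

Even this toy version makes the group’s questions concrete: changing the relative sizes of the two rates is exactly the kind of experiment the sub-groups ran.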

At the session, the individual groups discussed the meaning of each of the stocks. What does it mean to be “Committed”? “Resistant”? They mulled over the question, What number of “Resistant” organizations would pose a problem for the program as a whole? Can “Committed” organizations become “Resistant”? Is it realistic to assume (as the model does) that “Resistant” organizations never become “Committed”?

Talking about the diagram helped the sub-groups, and eventually the entire group, reach consensus about how organizations might become committed or resistant to the changed funding policies. For many of the participants, it was the first time they had discussed the potential that some of their client organizations might resist the changes! By working with the model, the group was able to surface an unpleasant concept in a way that allowed them to grapple with its implications for their changed strategy.

They then entered different values into the model to experiment with how the funding organization might allocate its resources in the coming months. How much effort should they put into developing the performance-driven funding program? How much into explaining the program to the funded organizations? And how much of each should they do prior to officially announcing the program? After announcing it? In short, the group wrestled with the systemic “orchestration” (a concept developed by Barry Richmond) of resources: the magnitude and timing of efforts required to successfully implement the strategy.

The group concluded that, in the first phase of development, they should apply most of their efforts to designing the new policy. Doing so builds the “Clarity of the Program,” which is useful in preventing “Doubts About the New Approach” down the road. They realized that they would need to allocate at least some resources in the first phase to working with the client groups and addressing their doubts about the change. This process would also help them to refine the approach (see “Implementation Timetable” below). The next phase would require additional work with the other stakeholder groups to explain the program prior to release. The third and fourth phases would involve implementation; this is when the nonprofit’s staff members would spend most of their time addressing the doubts of the affected organizations.

The group realized that the exact numbers of organizations in each category wouldn’t be the same in real life as in the simulation, but that the stories described by the model were consistent with what they now expected might happen when overhauling their approach to funding. In keeping with the need for systemic orchestration, the group concluded that their allocation of strategic resources must shift over time, depending on which phase they were in (for example, in the second phase, they would need to apply some resources to program development and even more to working with stakeholders).
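
One way to picture that orchestration is as a schedule of effort mixes by phase. The following sketch uses invented phases and percentages purely to illustrate the idea; the group’s actual figures are not reported in the article.

```python
# A toy schedule of "systemic orchestration": the mix of effort shifts by
# implementation phase. Phases and percentages are invented for illustration,
# not the group's actual figures.

phase_allocation = {
    "1 (design)": {"program development": 0.7, "stakeholder work": 0.3},
    "2 (pre-release)": {"program development": 0.4, "stakeholder work": 0.6},
    "3 (rollout)": {"program development": 0.2, "stakeholder work": 0.8},
    "4 (consolidation)": {"program development": 0.1, "stakeholder work": 0.9},
}

for phase, mix in phase_allocation.items():
    summary = ", ".join(f"{task} {share:.0%}" for task, share in mix.items())
    print(f"Phase {phase}: {summary}")
```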

Working with Subsequent Models

In Facilitative Modeling, each model tends to add to the understanding generated by previous ones. Because the performance-based funding approach would require implementing a new IT system, the second model helped participants explore how a funded organization would need to allocate resources in order to develop a new IT system and build its staff ’s capacity to use it. The third model served as the capstone exercise, because it required participants to explore how client organizations might allocate their resources across the following needs: providing services, building and maintaining the IT system, investing in staff skill development, and collaborating with partner organizations.

THE FIRST MAP

The three stocks at the top of the diagram (the rectangles labeled “Resistant,” “Not Committed,” and “Committed”) represent groups of organizations. As the new funding approach is made into policy, organizations would begin to move from the “Not Committed” stock into the “Committed” or “Resistant” stocks.

During the large-group debrief of the third model, the nonprofit’s senior director said that he didn’t like one dynamic that he experienced with the model. In all cases, after the funding change, the youth population’s sense of disconnection from the community initially worsened, even when the simulated strategies encouraged a majority of client agencies to be committed to the shift and to effectively implement performance-based approaches to providing services. When he experimented with the model, the director kept trying to avoid this “worse-before-better” dynamic. Through probing questions, the group learned that it wasn’t that he didn’t expect this behavior to happen; he just wished it wouldn’t!

IMPLEMENTATION TIMETABLE

By using the model to explore the magnitude and timing of efforts required to successfully implement the strategy, the group concluded that, in the first phase of development, they should focus on designing the new policy.

This revelation led to an interesting discussion of what is often an undiscussable in the public sector: that policies designed to improve social systems often take time before they lead to noticeable improvements and that there is often conspicuous degradation of performance in the interim. The director expressed that it was political suicide to admit that things might actually get worse before improving. Ultimately, through the facilitated discussion, he came to understand that regardless of whether he wanted to admit that such a dynamic might occur, it was inevitable, given the long delays before activities such as IT development and skill-building would have a positive effect on services. Through this admission, he and his staff were then able to explore options for mitigating the effects of this unavoidable dynamic.
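
A toy calculation shows why the dip is structural rather than a matter of resolve: if effort is diverted from services into IT development and skill-building, and the payoff arrives only after a delay, measured capability must fall before it rises. All numbers below are invented for illustration.

```python
# A toy calculation of the "worse-before-better" dynamic. Diverting effort
# from service delivery into IT development and skill-building lowers
# capability immediately, while the payoff arrives only after a delay.
# All numbers are invented for illustration.

BASELINE = 100.0          # service capability before the change
diverted_effort = 20.0    # capability lost while staff build IT and skills
DELAY = 6                 # months before the investment starts to pay off
payoff_per_month = 8.0    # capability gained per month once it matures

for month in range(1, 19):
    capability = BASELINE - diverted_effort
    if month > DELAY:
        capability += payoff_per_month * (month - DELAY)
    marker = " (worse than baseline)" if capability < BASELINE else ""
    print(f"month {month:2d}: capability {capability:6.1f}{marker}")
```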

Ultimately, the nonprofit’s staff left the session with useful insight in several areas. First, they all understood that some of their client organizations might resist the new approach. Second, they realized that it would be helpful for them to include those organizations in developing the program. Third, the group agreed that building staff skills was likely to be a more challenging impediment to successful implementation of the changed approach than developing the IT infrastructure. Finally, they accepted that systemwide implementation would require orchestrating a series of activities that, even in the best of circumstances, would cause a “worse-before-better” dynamic. All of these insights were just the beginnings of an ongoing dialogue, and all were facilitated by using small models to focus the conversation.

The Value of Facilitative Modeling

As shown in the example above, there is a powerful place for small models in a facilitated environment. The process used for developing good systems thinking models increases the rigor of the analysis and captures the benefits of a Technological Approach. At the same time, by keeping models small, Facilitative Modeling improves on the benefits of a Stakeholder Approach and increases the likelihood that all participants end up in alignment. Moreover, the Facilitative Modeling approach uses a language — stocks and flows — that is more representative of reality than other visual mapping languages. For this reason, the participants are able to discuss and come to a novel understanding of the assumptions built into the model. Running the simulation provides an essential test of the group’s understanding and facilitates further conversations about the likelihood of different results. The computer-generated “microworld” creates a safe environment for experimentation.

NEXT STEPS

  • Read up on the value of small models, starting with the resources in the “For Further Reading” section.
  • It’s unusual to find modeling and facilitation skills in the same person, so look around your organization for people who might work in teams to create one of these events. They’ll likely need some training.
  • Pick an issue that is generating a “buzz” in the organization. Quickly develop a map and model that fits on one screen or one flipchart. Don’t search for the truth, just useful insights.
  • Keep at it! Rather than using Facilitative Modeling as a one-time event, think about applying it as part of an ongoing organizational dialogue.
