Donella Meadows: Leverage Points: Places to Intervene in a System


A wonderful article by Donella Meadows, drawing on Jay Forrester, pointing out the places to intervene in a system that have the most leverage. Almost invariably you find they are intuitively known, yet somehow pushed in the wrong direction! This is a well-considered and powerful argument about where we might make the most impact if we are trying to effect sustainable change in the environment. (Introduced by John Atkinson.)



By Donella Meadows. Available at donellameadows.org

Folks who do systems analysis have a great belief in “leverage points.” These are places within a complex system (a corporation, an economy, a living body, a city, an ecosystem) where a small shift in one thing can produce big changes in everything.

This idea is not unique to systems analysis — it’s embedded in legend. The silver bullet, the trimtab, the miracle cure, the secret passage, the magic password, the single hero who turns the tide of history. The nearly effortless way to cut through or leap over huge obstacles. We not only want to believe that there are leverage points, we want to know where they are and how to get our hands on them. Leverage points are points of power.

The systems analysis community has a lot of lore about leverage points. Those of us who were trained by the great Jay Forrester at MIT have all absorbed one of his favorite stories. “People know intuitively where leverage points are,” he says. “Time after time I’ve done an analysis of a company, and I’ve figured out a leverage point — in inventory policy, maybe, or in the relationship between sales force and productive force, or in personnel policy. Then I’ve gone to the company and discovered that there’s already a lot of attention to that point. Everyone is trying very hard to push it IN THE WRONG DIRECTION!”

The classic example of that backward intuition was my own introduction to systems analysis, the world model. Asked by the Club of Rome to show how major global problems — poverty and hunger, environmental destruction, resource depletion, urban deterioration, unemployment — are related and how they might be solved, Forrester made a computer model and came out with a clear leverage point:[1] Growth. Not only population growth, but economic growth. Growth has costs as well as benefits, and we typically don’t count the costs — among which are poverty and hunger, environmental destruction, etc. — the whole list of problems we are trying to solve with growth! What is needed is much slower growth, very different kinds of growth, and in some cases no growth or negative growth.

The world’s leaders are correctly fixated on economic growth as the answer to virtually all problems, but they’re pushing with all their might in the wrong direction.

Another of Forrester’s classics was his urban dynamics study, published in 1969, which demonstrated that subsidized low-income housing is a leverage point.[2] The less of it there is, the better off the city is — even the low-income folks in the city. This model came out at a time when national policy dictated massive low-income housing projects, and Forrester was derided. Now those projects are being torn down in city after city.

Counterintuitive. That’s Forrester’s word to describe complex systems. Leverage points are not intuitive. Or if they are, we intuitively use them backward, systematically worsening whatever problems we are trying to solve.

The systems analysts I know have come up with no quick or easy formulas for finding leverage points. When we study a system, we usually learn where leverage points are. But a new system we’ve never encountered? Well, our counterintuitions aren’t that well developed. Give us a few months or years and we’ll figure it out. And we know from bitter experience that, because of counterintuitiveness, when we do discover the system’s leverage points, hardly anybody will believe us.

Very frustrating, especially for those of us who yearn not just to understand complex systems, but to make the world work better.

So one day I was sitting in a meeting about how to make the world work better — actually it was a meeting about how the new global trade regime, NAFTA and GATT and the World Trade Organization, is likely to make the world work worse. The more I listened, the more I began to simmer inside. “This is a HUGE NEW SYSTEM people are inventing!” I said to myself. “They haven’t the SLIGHTEST IDEA how this complex structure will behave,” myself said back to me. “It’s almost certainly an example of cranking the system in the wrong direction — it’s aimed at growth, growth at any price!! And the control measures these nice, liberal folks are talking about to combat it — small parameter adjustments, weak negative feedback loops — are PUNY!!!”

Suddenly, without quite knowing what was happening, I got up, marched to the flip chart, tossed over to a clean page, and wrote:

PLACES TO INTERVENE IN A SYSTEM

(in increasing order of effectiveness)

9. Constants, parameters, numbers (subsidies, taxes, standards).
8. Regulating negative feedback loops.
7. Driving positive feedback loops.
6. Material flows and nodes of material intersection.
5. Information flows.
4. The rules of the system (incentives, punishments, constraints).
3. The distribution of power over the rules of the system.
2. The goals of the system.
1. The mindset or paradigm out of which the system — its goals, power structure, rules, its culture — arises.

Everyone in the meeting blinked in surprise, including me. “That’s brilliant!” someone breathed. “Huh?” said someone else.

I realized that I had a lot of explaining to do.

I also had a lot of thinking to do. As with most of the stuff that comes to me in boil-over mode, this list was not exactly tightly reasoned. As I began to share it with others, especially systems analysts who had their own lists and activists who wanted to put the list to immediate use, questions and comments came back that caused me to rethink, add and delete items, change the order, and add caveats.

In a minute I’ll go through the list I ended up with, explain the jargon, give examples and exceptions. The reason for this introduction is to place the list in a context of humility and to leave room for evolution. What bubbled up in me that day was distilled from decades of rigorous analysis of many different kinds of systems done by many smart people. But complex systems are, well, complex. It’s dangerous to generalize about them. What you are about to read is a work in progress. It’s not a recipe for finding leverage points. Rather it’s an invitation to think more broadly about system change.

Here, in the light of a cooler dawn, is a revised list:

PLACES TO INTERVENE IN A SYSTEM

(in increasing order of effectiveness)

12. Constants, parameters, numbers (such as subsidies, taxes, standards).
11. The sizes of buffers and other stabilizing stocks, relative to their flows.
10. The structure of material stocks and flows (such as transport networks, population age structures).
9. The lengths of delays, relative to the rate of system change.
8. The strength of negative feedback loops, relative to the impacts they are trying to correct against.
7. The gain around driving positive feedback loops.
6. The structure of information flows (who does and does not have access to information).
5. The rules of the system (such as incentives, punishments, constraints).
4. The power to add, change, evolve, or self-organize system structure.
3. The goals of the system.
2. The mindset or paradigm out of which the system — its goals, structure, rules, delays, parameters — arises.
1. The power to transcend paradigms.

To explain parameters, stocks, delays, flows, feedback, and so forth, I need to start with a basic diagram.

The “state of the system” is whatever standing stock is of importance — amount of water behind the dam, amount of harvestable wood in the forest, number of people in the population, amount of money in the bank, whatever. System states are usually physical stocks, but they could be nonmaterial ones as well — self-confidence, degree of trust in public officials, perceived safety of a neighborhood.

There are usually inflows that increase the stock and outflows that decrease it. Deposits increase the money in the bank; withdrawals decrease it. River inflow and rain raise the water behind the dam; evaporation and discharge through the spillway lower it. Births and immigrations increase the population, deaths and emigrations reduce it. Political corruption decreases trust in public officials; experience of a well-functioning government increases it.

Insofar as this part of the system consists of physical stocks and flows — and they are the bedrock of any system — it obeys laws of conservation and accumulation. You can understand its dynamics readily, if you can understand a bathtub with some water in it (the state of the system) and an inflowing faucet and outflowing drain. If the inflow rate is higher than the outflow rate, the stock gradually rises. If the outflow rate is higher than the inflow, the stock gradually goes down. The sluggish response of the water level to what could be sudden twists in the input and output valves is typical — it takes time for flows to accumulate, just as it takes time for water to fill up or drain out of the tub.
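
That accumulation rule is simple enough to sketch directly. Below is a minimal Python illustration of a stock with one inflow and one outflow; the function name and all the rates are invented for the example, not taken from the article:

```python
# Minimal stock-and-flow sketch: the stock (the water level) changes by
# the net flow (inflow minus outflow) each time step.

def simulate_tub(level=0.0, inflow=0.3, outflow=0.1, dt=1.0, steps=10):
    """Return the water level over time; quantities in liters and seconds."""
    history = [level]
    for _ in range(steps):
        level += (inflow - outflow) * dt  # conservation: the stock accumulates the net flow
        level = max(level, 0.0)           # a tub cannot hold negative water
        history.append(level)
    return history

print(simulate_tub())  # inflow exceeds outflow, so the level gradually rises
```

Run it with the outflow larger than the inflow and the same code shows the stock gradually draining instead.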

The rest of the diagram is the information that causes the flows to change, which then cause the stock to change. If you’re about to take a bath, you have a desired water level in mind. You plug the drain, turn on the faucet and watch until the water rises to your chosen level (until the discrepancy between the desired and the actual state of the system is zero). Then you turn the water off.

If you start to get in the bath and discover that you’ve underestimated your volume and are about to produce an overflow, you can open the drain for a while, until the water goes down to your desired level.

Those are two negative feedback loops, or correcting loops, one controlling the inflow, one controlling the outflow, either or both of which you can use to bring the water level to your goal. Notice that the goal and the feedback connections are not visible in the system. If you were an extraterrestrial trying to figure out why the tub fills and empties, it would take a while to figure out that there’s an invisible goal and a discrepancy-measuring process going on in the head of the creature manipulating the faucets. But if you watched long enough, you could figure that out.
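
One of those correcting loops can be put in the same sketch form. In this hypothetical fragment the bather opens the tap in proportion to the discrepancy between the desired and the actual level; the proportional rule and the gain of 0.25 are my assumptions, chosen only to make the goal-seeking visible:

```python
# A negative feedback loop: the faucet opens in proportion to the
# discrepancy between the desired and the actual water level.

def fill_to_goal(goal=100.0, level=0.0, gain=0.25, max_steps=30):
    for t in range(max_steps):
        discrepancy = goal - level   # the invisible comparison in the bather's head
        if discrepancy <= 0.5:       # close enough: turn the water off
            break
        level += gain * discrepancy  # open the tap wider the farther you are from the goal
        print(f"t={t:2d}  level={level:6.1f}")

fill_to_goal()
```

Notice that the goal and the comparison exist only inside the function, not in any visible plumbing, which is exactly what would puzzle the extraterrestrial.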

Very simple so far. Now let’s take into account that you have two taps, a hot and a cold, and that you’re also adjusting for another system state — temperature. Suppose the hot inflow is connected to a boiler way down in the basement, four floors below, so it doesn’t respond quickly. And you’re making faces at yourself in the mirror and not paying close attention to the water level. And, of course, the inflow pipe is connected to a reservoir somewhere, which is connected to the whole planetary hydrological cycle. The system begins to get complex, and realistic, and interesting.
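
The mischief that slow boiler makes can be sketched too. In this toy version, again entirely my own construction, the same correcting rule acts on temperature, but each adjustment spends a few time steps in the pipe before it arrives, so the bather keeps reacting to stale information:

```python
from collections import deque

# Delayed negative feedback: each adjustment to the hot tap takes `lag`
# time steps to travel up from the basement boiler.

def delayed_mixing(goal=38.0, temp=20.0, gain=0.4, lag=3, steps=25):
    in_transit = deque([0.0] * lag)              # adjustments already in the pipe
    for t in range(steps):
        in_transit.append(gain * (goal - temp))  # decide on today's discrepancy...
        temp += in_transit.popleft()             # ...but receive a decision made `lag` steps ago
        print(f"t={t:2d}  temp={temp:5.1f}")

delayed_mixing()
```

With no delay the temperature glides smoothly to the goal; with the delay it overshoots into the fifties and oscillates before settling down, because hot water keeps arriving long after you stopped asking for it.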

Mentally change the bathtub into your checking account. Write checks, make deposits, add a faucet that keeps dribbling in a little interest and a special drain that sucks your balance dry if it ever goes negative. Attach your account to a thousand others and let the bank create loans as a function of your combined and fluctuating deposits, link a thousand of those banks into a federal reserve system — and you begin to see how simple stocks and flows, plumbed together, make up systems way too complex to figure out.
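
To see the same bones under the new skin, here is a toy version of that re-plumbed tub, with every rate and fee invented purely for illustration:

```python
import random

# The bathtub re-plumbed as a checking account: deposits and checks are
# the flows, interest is a small inflow proportional to the stock, and
# an overdraft fee is a special drain that opens on a negative balance.

def run_account(balance=500.0, months=12, seed=1):
    rng = random.Random(seed)
    for month in range(months):
        balance += rng.uniform(100, 400)  # deposits (inflow)
        balance -= rng.uniform(150, 450)  # checks written (outflow)
        if balance >= 0:
            balance *= 1.002              # interest dribbling in: a small reinforcing loop
        else:
            balance -= 35.0               # the special drain on a negative balance
        print(f"month {month:2d}: balance = {balance:9.2f}")

run_account()
```

Even this single account already contains a reinforcing loop (interest) and a punishing one (the fee); linking thousands of such stocks through bank lending is what makes the whole system impossible to figure out by inspection.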

That’s why leverage points are not intuitive. And that’s enough systems theory to proceed to the list.
