Harvard Business Review: The Unselfish Gene
An article published in Harvard Business Review (07/2011) works through a range of myths about humans' "natural" selfishness and surveys recent sociobiological research on the subject (with further examples here, here, and here), showing that human behavior is far less selfish than has previously been assumed.
The assumptions mainstream economics makes about what motivates people to act are often badly mistaken, or even appear tendentious, as news coverage of executive pay illustrates.
The Harvard Business Review article examines how human motivation rests more durably on diversity, cooperation, equality, autonomy, and openness.
In 1976, evolutionary biologist Richard Dawkins wrote in The Selfish Gene, “If you wish, as I do, to build a society in which individuals cooperate generously and unselfishly towards a common good, you can expect little help from biological nature. Let us try to teach generosity and altruism, because we are born selfish.” By 2006, the tide had started to turn. Harvard University mathematical biologist Martin Nowak could declare, in an overview of the evolution of cooperation in Science magazine, “Perhaps the most remarkable aspect of evolution is its ability to generate cooperation in a competitive world. Thus, we might add ‘natural cooperation’ as a third fundamental principle of evolution beside mutation and natural selection.”
Why is this deep-rooted belief about human selfishness beginning to change? To some extent, the answer is specific to evolutionary biology. But similar ideas challenging the notion that people are born selfish have surfaced in several other fields, such as psychology, sociology, political science, and experimental economics. Together, these ideas are tracing a new intellectual arc in the disciplines concerned with human action and motivation.
Until the late 1980s, our understanding of what made people tick was marked by the rise of an ever more precisely defined model of self-interested rationality—the rational actor theory—which provided the basis for thinking about human behavior, institutions, and organizations. Assuming that we are uniformly rational and concerned only with advancing our material interests provided good enough predictions about our behavior—or so we thought—and convinced us that we are best off designing systems as though we are selfish creatures. Moreover, people who don’t cooperate can ruin things for everyone, so to save ourselves from freeloaders we built systems by assuming the worst of everyone.
Nowhere are the assumptions about the effective harnessing of self-interest, and the terrible consequences, expressed more clearly than in former Federal Reserve chairman Alan Greenspan’s 2008 testimony to the U.S. Senate after the collapse of the banking and credit system. “Those of us who have looked to the self-interest of lending institutions to protect shareholders’ equity—myself, especially—are in a state of shocked disbelief,” Greenspan said. “I’ve been going for 40 years or more with very considerable evidence that it was working exceptionally well.”
The widespread conviction about the power of self-interest is based on two long-standing, partly erroneous, and opposing assumptions about getting people to cooperate. One of them inspired the philosopher Thomas Hobbes’s Leviathan in 1651: Humans are fundamentally and universally selfish, and governments must control them so that they don’t destroy one another in the shortsighted pursuit of self-interest. The second is Adam Smith’s alternative solution: the invisible hand. Smith’s 1776 book, The Wealth of Nations, argued that because humans are self-interested and their decision making is driven by the rational weighing of costs and benefits, their actions in a free market tend to serve the common good. Though their prescriptions are very different, both the Leviathan and the invisible hand have the same starting point: a belief in humankind’s selfishness.
Models of self-interested rationality increasingly came to be seen as universally correct and applicable across an ever-expanding range of human practices. Economics became the primary medium of expression. For example, Nobel laureate Gary Becker argued in 1968 that the calculus of criminals is best understood as a set of rational trade-offs between the benefits of crime and the costs of punishment, discounted by the probability of detection. Imposing harsher punishments and increasing police enforcement, people concluded, are the obvious ways to tackle crime. The same year, Garrett Hardin described the tragedy of the commons—the parable about farmers who shared a piece of land with no restrictions on the number of cattle each could graze on it. They kept letting more cattle graze on the commons until the grass was gone, leaving nothing for anyone. No one stopped grazing animals, Hardin argued, for fear of losing out to the other farmers, who would continue overexploiting the commons. The conclusion was that as self-interested actors, human beings will inevitably destroy shared resources unless the latter are subject either to regulation or to property rights.
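Hardin's parable boils down to simple arithmetic: each herder captures the full gain from one more animal but bears only a fraction of the damage it causes. The following Python sketch, with invented numbers and no claim to reproduce Hardin's original model, makes that private-versus-collective calculus concrete.

```python
# Illustrative sketch of Hardin's tragedy of the commons (numbers invented).
# One extra animal beyond capacity yields its owner GAIN but costs the group
# DAMAGE in lost pasture, split equally among all herders.

CAPACITY = 100
GAIN = 1.0        # private benefit of grazing one more animal
DAMAGE = 3.0      # total cost of one animal beyond capacity
N_HERDERS = 10

herd = [CAPACITY // N_HERDERS] * N_HERDERS   # start at exactly full capacity

def my_marginal_payoff(total_cattle):
    """One herder's payoff for adding a single animal: full gain, shared cost."""
    extra_damage = DAMAGE if total_cattle + 1 > CAPACITY else 0.0
    return GAIN - extra_damage / N_HERDERS

for year in range(5):
    for i in range(N_HERDERS):
        if my_marginal_payoff(sum(herd)) > 0:   # the selfish calculus says: add
            herd[i] += 1
    overgrazed = max(0, sum(herd) - CAPACITY)
    print(f"year {year}: {sum(herd)} cattle, {overgrazed} over capacity")

# Each animal past capacity destroys 3.0 of value but costs its owner only 0.3,
# so the "rational" answer is always to add another -- and the commons is ruined.
```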
Like biology, however, the discipline of economics has changed over the years. In 2009, Elinor Ostrom was awarded the Nobel Prize in economics for showing how commons can—and do—sustain themselves for centuries as well-functioning systems. The most striking example is in Spain, where thousands of farmers have been managing their access to water through self-regulated irrigation districts for more than five centuries. To take another example, 75% of U.S. cities with populations of more than 50,000 have successfully adopted some version of community policing, which reduces crime not by imposing harsher penalties but by humanizing the interactions of the police with local communities.
The Success of the Commons (Located at the end of this article)
Overcoming our assumptions about self-interest is critical to diagnosing the risks that new business rivals pose. In 1999, two experts showed how Microsoft’s entry into the encyclopedia market with Encarta symbolized the transformation made possible by networked information economics. Here was a major player leveraging a powerful position, gained by early-mover advantages and network effects, to bundle a product and distribute it widely at a low cost. Britannica’s lumbering 32-volume, multi-thousand-dollar offering didn’t stand a chance. Ten years later, Britannica had been pushed to a different model—but not by Encarta. Microsoft stopped producing Encarta in 2009 because of competition from a business model that is inconceivable according to the belief in self-interested rationality: Wikipedia.
If you feel that Wikipedia—the seventh or eighth most trafficked website, with more than 300 million visitors a month—is unique, ask Zagat’s how the user-generated Yelp has affected its market or Fodor’s what it thinks about TripAdvisor. The rise of open source software is an example of the same dynamic. For more than 15 years, companies have used open source Apache software for mission-critical web applications, with Microsoft’s server software trailing a distant second. Companies such as Google, Facebook, and Craigslist have also found ways to become profitable by engaging people. Our old models of human behavior did not—could not—predict that.
The way these organizations work flies in the face of the assumption that human beings are selfish creatures. For decades, economists, politicians, legislators, executives, and engineers have built systems and organizations around incentives, rewards, and punishments to get people to achieve public, corporate, and community goals. If you want employees to work harder, incorporate pay for performance and monitor their results more closely. If you want executives to do what’s right for shareholders, pay them in stock. If you want doctors to look after patients better, threaten them with malpractice suits.
Yet, all around us, we see people cooperating and working in collaboration, doing the right thing, behaving fairly, acting generously, caring about their group or team, and trying to behave like decent people who reciprocate kindness with kindness. The adoption of cooperative systems in many fields has been paralleled by a renewed interest in the mechanics of cooperation among researchers in the social and behavioral sciences. Through the work of many scientists, we have begun to see evidence across several disciplines that people are in fact more cooperative and selfless—or behave far less selfishly—than we have assumed. Perhaps humankind is not so inherently selfish after all.
Dozens of field studies have identified cooperative systems, many of which are more stable and effective than incentive-based ones. Evolutionary biologists and psychologists have found neural and possibly genetic evidence of a human predisposition to cooperate, which I shall describe below. After years of arguments to the contrary, there is growing evidence that evolution may favor people who cooperate and societies that include such individuals.
In fact, a distinct pattern has emerged. In experiments about cooperative behavior, a large minority of people—about 30%—behave as though they are selfish, as we commonly assume. However, 50% systematically and predictably behave cooperatively. Some of them cooperate conditionally; they treat kindness with kindness and meanness with meanness. Others cooperate unconditionally, even when it comes at a personal cost. (The remaining 20% are unpredictable, sometimes choosing to cooperate and other times refusing to do so.) In no society examined under controlled conditions have the majority of people consistently behaved selfishly.
Predisposed to Cooperate (Located at the end of this article)
That’s perhaps why using controls or carrots and sticks to motivate people isn’t effective. We need systems that rely on engagement, communication, and a sense of common purpose and identity. Most organizations would be better off helping us to engage and embrace our collaborative, generous sentiments than assuming that we are driven purely by self-interest. In fact, systems based on self-interest, such as material rewards and punishment, often lead to less productivity than an approach oriented toward our social motivations.
The challenge we face today is to build new models based on fresh assumptions about human behavior that can help us design better systems. The image of humanity this shift requires will allow us to hold a more benevolent model of who we are as human beings. No, we are not all Mother Teresa; if we were, we wouldn’t have heard of her. However, a majority of human beings are more willing to be cooperative, trustworthy, and generous than the dominant model has permitted us to assume. If we recognize that, we can build efficient systems by relying on our better selves rather than optimizing for our worst. We can do better.
The Science of Cooperation
What would the world be like if some people consistently operated as self-interested rational actors while others did not? Take the experiments that Lee Ross and his colleagues conducted with American college students and Israeli fighter pilots. As we know, in prisoner’s dilemma games, the two players will both be better off if they cooperate, but neither can trust the other to do so. Game theory predicts that both players will choose not to cooperate instead of taking the risk of losing out by cooperating. Extensive experimental work, however, has shown that people actually cooperate more than the theory predicts.
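For readers who want the game-theoretic prediction spelled out, here is a minimal Python sketch of a standard prisoner's dilemma payoff matrix (the payoff numbers are illustrative, not taken from Ross's experiments). It shows why a purely self-interested player is predicted to defect no matter what the other player does, even though mutual cooperation pays both players more.

```python
# A standard prisoner's dilemma payoff matrix (values invented for illustration).
# PAYOFF[(my_move, other_move)] -> (my_payoff, other_payoff)
PAYOFF = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(other_move):
    """What a purely self-interested player should do, given the other's move."""
    return max(("cooperate", "defect"),
               key=lambda my_move: PAYOFF[(my_move, other_move)][0])

for other in ("cooperate", "defect"):
    print(f"if the other player {other}s, my best response is: {best_response(other)}")

# Defection is the best response to either move, so game theory predicts mutual
# defection (payoff 1 each) even though mutual cooperation pays 3 each.
# The experiments described in the article show real people cooperate far more often.
```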
Ross and his collaborators told half the players in their experiments that they were playing the Community Game and the other half that they were playing the Wall Street Game. The two groups were identical in all other respects. Yet, in the Community Game group, 70% started out playing cooperatively and continued to do so throughout the experiment. In the Wall Street Game group, the proportions were reversed: 70% of the players didn’t cooperate with one another. Thirty percent started out playing cooperatively but stopped when the others didn’t respond.
The Wall Street Game (Located at the end of this article)
This experiment illustrates a couple of points. One, we are not all the same. About 30% of players cooperated even in the Wall Street Game while another 30% acted with self-interested rationality even when told they were in the Community Game. Two, many of us are influenced by context. According to Ross, the framing of the games influenced 40% of the sample. The players who thought they were acting in a context that rewarded self-interest behaved in a manner consistent with that expectation; participants who felt they were in a situation that demanded a prosocial attitude conformed to that scenario. When Ross and his colleagues asked the subjects’ teachers or commanders to predict who would and wouldn’t cooperate, it turned out that the game’s framing forecast behavior better than the teachers and commanders could. It seemed that participants who were seen as self-interested could be induced to cooperate if the games they were playing were reframed.
Anyone designing a cooperative system—be it an organizational process, a legal regime, or a technical platform—and optimizing it for only 30% of the population leaves on the table massive amounts of human potential. Moreover, such systems have to rely on monitoring, rewards, and punishments; their efficiency is limited by information-gathering techniques. Systems that harness intrinsic motivations and self-directed cooperative behavior don’t need to limit themselves to knowledge of what people will do. Every participant becomes his or her own monitor, bringing insight and initiative to the task—whether or not someone is monitoring behavior.
What might account for human cooperation? The first generation of explanations in evolutionary biology began with the theory of kin selection, which predicts that human beings will incur costs only to save others who carry their genes, such as siblings and cousins. Evolutionary biologist J.B.S. Haldane put it in less than romantic terms: “I will jump into the river to save two brothers or eight cousins.” That explained the cooperative behavior in ant and bee colonies as well as in smaller family groups. From there, it was a small hop to accepting reciprocity between individuals not genetically related as an important source of cooperation: “I’ll scratch your back if you immediately scratch mine.”
However, these theories still could not explain field observations in the wild, such as those of coyotes and badgers in the National Elk Refuge in Wyoming. Scientists there observed that the two groups of animals collaborated to hunt ground squirrels. Coyotes, which are faster and have a larger range, would scout, and once they spotted a squirrel, they would signal to the badgers. The badgers, which are underground hunters and catch their prey by trapping it in dead-end tunnels, would then burrow and lie in wait. The squirrels were trapped between a hammer and an anvil: If they escaped the badgers by going above ground, the coyotes would catch them. If they evaded the coyotes by ducking below ground, the badgers would corner them. At the end of a hunt, only one or the other would eat the squirrel, but still the badgers and coyotes collaborated.
Over the years, researchers have developed models of indirect reciprocity, networked reciprocity, and even group selection to explain observations of looser and more remote cooperation. The findings in biology meet human society directly in the work on gene-culture coevolution of anthropologists Peter Richerson and Robert Boyd. They have been gathering evidence for the proposition that cultural practices, too, are subject to evolutionary pressures and that human individuals and cultures evolve toward more-successful strategies.
Imagine two groups. In one, the practice of serving in the army is valued; in the other, it’s not. In the first group, people are willing to fight and risk their lives for their group or donate special skills such as weapon making or intelligence gathering. In the second, they aren’t. If these two groups go to war, the outcome will never be in doubt. And populations don’t have to wait until genetic changes disperse these traits; they can copy one another’s best practices if they seem to work better.
Boyd and Richerson argue that cultures evolve not only through the copying of practices but also through genetic changes; in other words, genes and cultures coevolve. Cultural practices can influence the genetic development of populations that adopt them, favoring genetic predispositions that benefit most from the cultural practice or make following it easier. The researchers’ most physiologically striking example is adult lactose tolerance, which is widespread among descendants of European people who drink milk but is rare among those who created yogurts and cheeses to break down lactose so that they could consume milk products. Lactose tolerance is a genetic trait, but it can be attributed to a cultural practice—drinking milk rather than eating yogurt or cheese—that has existed for a very short time in evolutionary terms.
What might the genetic components of a cooperative culture look like? Political scientists such as James Fowler and his collaborators found that the decision to vote has a strong genetic component. In a 2008 paper in the American Political Science Review, they described their analysis of the voting behavior of 400 identical and nonidentical twins in the Los Angeles area. All the twins in the study were raised together, which meant that the effects of early upbringing, socioeconomic status, and political affiliation didn’t compromise the results. The study found that identical twins were more likely to show the same behavior—either vote or not vote—than were nonidentical twins. Statistical analyses conducted by Fowler and his collaborators suggest that slightly more than 50% of the concordance in behavior was due to genetics.
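The article doesn't describe the statistical machinery behind that estimate, but the classic twin-design logic can be sketched with Falconer's formula, which attributes the extra similarity of identical twins over fraternal twins to genes. The concordance values below are invented placeholders, not Fowler's data, and this formula is only one of several models such studies use.

```python
# Sketch of the classic twin-design logic (Falconer's formula); the concordance
# values below are invented placeholders, NOT Fowler's actual data.
#
# Identical (MZ) twins share ~100% of their genes, fraternal (DZ) twins ~50%,
# and both kinds in the study were raised together. The *extra* similarity of
# MZ twins over DZ twins is therefore attributed to genes.

def falconer_heritability(r_mz: float, r_dz: float) -> float:
    """Heritability estimate: h^2 = 2 * (r_MZ - r_DZ)."""
    return 2.0 * (r_mz - r_dz)

r_mz = 0.71   # hypothetical similarity (correlation) of identical twins' turnout
r_dz = 0.45   # hypothetical similarity of fraternal twins' turnout

h2 = falconer_heritability(r_mz, r_dz)
print(f"estimated heritability of turnout: {h2:.2f}")   # -> 0.52, i.e. about 52%

# A value just above 0.5 is what "slightly more than 50% of the concordance
# was due to genetics" means in the study described above.
```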
How could genes account for a practice that has been widespread in the modern world for only around 100 years? That’s a blip on the evolutionary radar; a gene for voting couldn’t possibly have evolved in so short a time. Besides, voting is a puzzle for the rational actor model. The probability that an individual’s vote will affect the outcome of a policy he or she cares about is infinitesimally small, so much so that any cost, including a 15-minute detour, should outweigh it. Still, hundreds of millions of us around the world violate self-interested rationality in public every year. We vote.
What on earth does the propensity to vote have to do with collaboration, you might ask? Imagine there are such things as personality traits, which typify an individual’s behavior. Imagine that one such trait is conscientiousness. People who have that trait—in personality psychology, it is one of the Big Five—tend to be happier with themselves and do what they think is right according to the cultural context. Voting happens to be one way—a relatively inexpensive way—of making conscientious people comfortable in their own skin.
Now let’s bring Boyd and Richerson’s theory back into the equation. Imagine that over the millennia, some cultures rewarded and valued conscientiousness. In those cultures, people who had a genetic predisposition to be conscientious would thrive. Because they would be considered desirable mates, they would reproduce relatively more often, meaning there would be more of them over time. The cultures, in turn, would be able to sustain cooperation more effectively because people would be driven to do the right thing even when they weren’t directly monitored, punished, or rewarded.
Several studies suggest that personality traits are partly heritable. A few years ago, Thomas Bouchard and Matt McGue published an extensive review of twin, adoption, and biological studies that looked at genetic influences on psychological and personality differences. They concluded that personality traits such as extraversion, neuroticism, agreeableness, and openness were on average between 42% and 57% heritable, while shared environmental factors such as the home, which most people believe are major influences, did not correlate with personality.
The biology of cooperation draws our attention because it speaks with the authority of the most reliable way we know how to know: science. If we simply say the word empathy, it sounds mushy. If a scientist like Tania Singer shows, using fMRI scans, that women’s brains light up in three places when they get electric shocks, and that when their partners are shocked, their brains light up in two of the same three places, we understand empathy not as a hard-to-define feeling but as something that people experience in a physical sense. This phenomenon was originally discovered by neurophysiologist Giacomo Rizzolatti, who also found that our brains mirror not only pain and motor movements but pure emotions as well. When Rizzolatti and his colleagues showed subjects videos in which people were expressing disgust on their faces, the same neurons fired in the subjects’ brains as the ones that had been activated when they themselves were exposed to disgusting smells. Cognitively and emotionally, we may be able to “feel” what others are feeling.
Neuroscience also shows that a reward circuit is triggered in our brains when we cooperate with one another, and that provides a scientific basis for saying that at least some people want to cooperate, given a choice, because it feels good. Kevin McCabe and his collaborators have shown that people are rewarded when they trust others; James Rilling and his team have demonstrated that our brains light up differently when we are playing with another human being than they do when we are using a computer.
As we learn more about the biology of behavior, we are gaining a better grasp of the role that genes play in interactions with culture. The ability to trust is a key element in cooperation. It appears to have a biological component, suggesting it may even have a genetic basis. One animal study recently looked at the effects of the brain chemical oxytocin on trust formation in voles. The researchers compared monogamous prairie and pine voles, and polygamous mountain and meadow voles, which mated more promiscuously. They found that the monogamous voles had higher-density oxytocin receptors in many areas of the brain than did the polygamous voles. This meant more-trusting partnerships occurred between the animals whose brains had better oxytocin uptake. Researchers later found that when human beings were given oxytocin nasal spray, they, too, were more likely to trust their partners.
We are far from having a clear model that connects all these dots; what I offer is conjecture based on research that has not made the leap to claim those connections. However, the argument is suggestive; it gives us a framework to grapple with the idea that many of us are, by a combination of nature, nurture, and the interactions between us, much better and less selfish than our standard models predict, as philosophers such as Jean-Jacques Rousseau and David Hume have argued. In fact, it brings the centuries-old debate between Hobbes and Rousseau—or between the Adam Smith of The Wealth of Nations and the Adam Smith of The Theory of Moral Sentiments—to the present, with genetics and fMRI studies thrown in as fresh evidence. Over the past decade, Rousseau seems to have gained the advantage over Hobbes.
The Building Blocks of Cooperative Systems
Analyzing the research on human cooperation from so many disciplines helps identify some levers that may motivate people to contribute to the collective effort instead of pursuing their own interests at the group’s expense. These levers aren’t equally appropriate for all types of systems; different combinations are better for different activities and populations. With that caveat, here are seven preliminary ideas for building cooperative systems.
Communication.
Nothing is more important in a cooperative system than communication among participants. When people are able to communicate, they are more empathetic and more trusting, and they can reach solutions more readily than when they don’t talk to one another. Over hundreds of experiments spanning decades, no single factor has had as large an effect on levels of cooperation as the ability to communicate.
Framing and authenticity.
People react differently depending on how situations are framed, but they aren’t stupid. It’s important that the frame fit reality. Framing a practice as collaborative or a system as a community may encourage cooperation for a while, but it won’t last if that claim isn’t believable.
Empathy and solidarity.
For reasons biological and social, the more empathy and solidarity we feel with others, the more likely we are to account for their interests. Similarly, solidarity with a group makes us more likely to sacrifice our interest for that of the collective. The line between solidarity and discrimination can be a slippery one, though. While it’s impossible to deny the role of team spirit in getting people to cooperate, we do need to be wary of its ability to exclude nonmembers.
The first step in constructing a team or encouraging prosocial behavior is to expand the set of people about whom participants feel they should be concerned. That’s not difficult; experimental economists such as Bruno Frey and Iris Bohnet have shown that just seeing another participant’s face increases cooperation levels substantially. Empathy for those affected by our actions alters the outcomes we care about, and that, in turn, changes our behavior.
Fairness and morality.
People care about being treated fairly. According to work conducted by Ernst Fehr and his collaborators, “fair” does not mean “equal.” In experiments where some people gained because of skill or luck, the others initially were willing to let them walk away with a much larger share of the gains.
We are also flexible in our ability to accept norms of fairness. Historian Andrea McDowell, for example, showed how mining camps during the California gold rush enacted very different codes for what counted as fair or unfair claim jumping. Though the norms differed among camps, each camp applied its own norms uniformly, and newcomers accepted them. In another groundbreaking study, when anthropologists Rob Boyd and Joe Henrich and economists Sam Bowles, Colin Camerer, Ernst Fehr, and Herbert Gintis conducted games with subjects from 15 tribal societies all over the world, they found enormous differences in conceptions of fairness.
Fair Does Not Mean Equal (Located at the end of this article)
To some extent, people care about having a fair share of whatever benefits their cooperation produces. Experiments conducted in market-based societies suggest that most participants tend to follow an equal distribution norm if there’s no reason to deviate from it. If some players deviate too far from the norm, the others will punish them even if by doing so they lose, too. In fact, most people would rather lose everything than walk away with too small a share of the gains.
People also care about doing the right thing, whatever that is. Clearly defined values are crucial to cooperation; discussing, explaining, and reinforcing the right or ethical thing to do will increase the degree to which people behave that way. It’s important for a cooperative system to have codes that are predicated less on rules and more on social norms. They must be flexible or plastic enough to adapt to change, and they must be transparent. Letting people in a system see what others are doing reinforces social norms and gets people to comply—not because they fear embarrassment or ostracism but because they want to do what is normal.
Rewards and punishment.
In order to foster cooperation, it is critical to set up systems that appeal to participants’ intrinsic motivations—that is, what they want to do from within—instead of systems based on monitoring people and rewarding or punishing them according to their behavior.
Two facts make this tough to implement. First, intrinsic motivation is in its infancy as an area of inquiry. Second, there is a consistent and stable body of work that tells us that if we add money, things may go worse rather than better. That is, monetary incentives and material rewards can crowd out intrinsic motivations to cooperate or display empathetic behavior. If you are invited to a dinner party, you can bring a gift—flowers, wine, or whatever counts as a friendly gesture. If instead you leave $100 on the table at the end of the meal, you will destroy the atmosphere because you have turned a social interaction into a commercial exchange. This captures the findings of studies in experimental economics and psychology as well as many field studies of the crowding-out phenomenon.
For example, a recent study in Sweden, which has a purely voluntary blood donation system, showed that women’s contributions decreased when they were offered payments. Donating blood is a way for people to signal that they are the kind willing to sacrifice for the good of others; offering money spoiled that effect. To test that hypothesis, the experimenters later permitted donors to give the money they would have received to a foundation that works on children’s health issues. Sure enough, the women’s contributions went back up.
The Power of Intrinsic Motivation (Located at the end of this article)
Whenever you design a policy that relies on monetary rewards, you have to assume that it will have side effects on the psychological, social, and moral dimensions of human motivation. A change intended to produce more of the behavior you are rewarding, or less of the behavior you are punishing, may cause exactly the opposite, because the effects on the material self-interest vector will be more than canceled out by the effects on the intrinsic motivation vectors. We shouldn’t try to motivate people only by offering them material payoffs; we should also focus on motivating them socially and intellectually by making cooperation social, autonomous, rewarding, and even—if we can swing it—fun.
Reputation and reciprocity.
One extremely important form of cooperation hinges on long-term reciprocity, both direct and indirect. Systems that rely on reciprocity, particularly of the “pay-it-forward” kind, are enormously valuable but easily corrupted. Reputation is the most powerful tool against this. As online systems such as eBay have shown, even anonymous reputation systems—such as “handles” that betray nothing of a person’s real identity—are sufficient to keep people in line.
Diversity.
Systems that harness diverse motivations are more productive than are those built for people who care only about material payoffs. Because we differ from one another, cooperative systems have to be flexible. They also need to recognize that we are sensitive to the costs of cooperating, but the degree of sensitivity can change. It’s possible to create a system that depends on massive self-sacrifice, but it’s extremely tough to sustain it. The fate of the nationalist and communist experiments of the 20th century in Germany, Russia, and China provides ample evidence of that.
Debunking the Myth of Selfishness
Given the wealth and depth of the evidence that refutes the model of self-interested rationality, why does it still dominate? Four reasons account for its persistence.
Partial truth.
The myth of universal selfishness endures in part because it isn’t entirely wrong—only mostly so. We all have experienced moments when we have been torn between what is good for us and for others, and many of us have, on occasion, caved in to self-interest even when other values demanded something else. We can recognize ourselves in the story of rational self-interest. However, it’s a problem when we take a partial truth and treat it as though it were the whole. We then build systems like the Wall Street Game, which drive almost everyone to act in a self-interested way, instead of systems like the Community Game, in which the self-interested are a minority.
History.
The roots of the assumption about universal selfishness are probably as old as human culture. Nonetheless, it gained prominence from the 1950s to the 1980s, becoming dominant in public discourse around the world during the Cold War, when the global power struggle between the United States and the Soviet Union was couched as an ideological battle between capitalism and free enterprise, on the one hand, and socialism and collectivism, on the other. Nothing is more seductive than seeing our moral and political views vindicated as those that scientifically reflect human nature. The end of the Cold War era has made it possible to see new scientific observations for what they are: progress rather than a threat to capitalism.
Simplicity.
Human beings tend to seek simple and neat explanations for a complex world. Coherent stories help organize different facts, ideas, and insights, and help predict what will happen if we do X or what we will find if we look under Y. In psychology, that’s called cognitive fluency: the tendency to hold on to things that are simple to understand and remember. A straightforward, uncomplicated theory of human nature that reduces our actions to simple, predictable responses to rewards and punishments is appealing to the human mind. But our experiences are more complex.
Habit.
Almost two generations of human beings have been educated and socialized to think in terms of universal selfishness. “We need to get the incentives right” has been the watchword for anyone engaged in designing any kind of interaction, organization, or law. “What’s in it for him/her/us?” is the question we have trained ourselves to ask first. Once we get in the habit of thinking of ourselves in a particular way, we tend to interpret all the evidence we encounter to fit our preconceptions and assumptions.
When we see acts of generosity or cooperation, for example, we tend to interpret them through the lens of self-interest. The first generation of economic scholarship on open source software analyzed the voluntary contributions of participants as an attempt to improve their reputations and long-term employment prospects—interpretations that were refuted by the decade of empirical research that followed. Through sheer force of habit, our erroneous beliefs and ways of thinking about human nature are interpreted as evidence and become entrenched. New insights need to overcome substantial barriers before they are accepted.
In today’s world, adaptability, creativity, and innovativeness appear to be preconditions for organizations and individuals to thrive. These qualities don’t fit well with the industrial business model; they aren’t amenable to monitoring and pricing. We need people who aren’t focused only on payoffs but do the best they can to learn, adapt, improve, and deliver results for the organization. Being internally motivated to bring these qualities to bear in a world where insight, creativity, and innovation can come from anyone, anywhere, at any time is more important than being able to calculate the costs, benefits, risks, and rewards of well-understood actions in well-specified contexts. Alongside creativity, drive, flexibility, and diversity, we must include social conscience and authentic humanity when trying to design cooperative systems.
The Success of the Commons
In Spain, thousands of farmers have been managing their access to water through self-regulated irrigation districts for more than five centuries. In the United States, 75% of cities with populations of more than 50,000 have successfully adopted some version of community policing, which reduces crime not by imposing harsher penalties but by humanizing the interactions of the police with local communities.
Predisposed to Cooperate
In experiments testing cooperative behavior, 50% of participants systematically and predictably behave cooperatively. Some do so conditionally; they treat kindness with kindness and meanness with meanness. Others cooperate unconditionally, even when it comes at a personal cost.
The Wall Street Game
Lee Ross and collaborators told one group of participants that it would be playing the Community Game and another group that it would play the Wall Street Game. In the first group, 70% started out playing cooperatively and continued to do so throughout the experiment. In the second group, 70% of the players didn’t cooperate with one another. Thirty percent started out playing cooperatively but stopped when the others didn’t respond.
Fair Does Not Mean Equal
We are flexible in our ability to accept norms of fairness. Historian Andrea McDowell, for example, showed that mining camps during the California gold rush enacted very different codes for what counted as fair or unfair claim jumping. Though the norms differed among camps, each camp applied its own norms uniformly, and newcomers accepted them.
The Power of Intrinsic Motivation
A consistent and stable body of work tells us that monetary incentives and material rewards can crowd out intrinsic motivations to cooperate and display empathetic behavior. A recent study in Sweden, which has a purely voluntary blood donation system, showed that women’s contributions decreased when they were offered payments. But when the experimenters permitted donors to give the money they would have received to a foundation that works on children’s health issues, the women’s contributions increased.
Valtavirran taloustieteen käyttämät olettamukset siitä, mitkä asiat motivoivat ihmisiä toimimaan, ovat usein vahvasti virheellisiä tai jopa tarkoitushakuisilta vaikuttavia, kuten esimerkiksi uutiset johtajien palkkauksesta osoittavat.
Harvard Business Review:n artikkeli käy läpi, kuinka ihmisten motivoituminen rakentuu kestävämmin moninaisuuden, yhteistyön, tasa-vertaisuuden, omaehtoisuuden ja avoimuuden varaan.
Harvard Business Review 07/2011
The Unselfish GeneIn 1976, evolutionary biologist Richard Dawkins wrote in The Selfish Gene, “If you wish, as I do, to build a society in which individuals cooperate generously and unselfishly towards a common good, you can expect little help from biological nature. Let us try to teach generosity and altruism, because we are born selfish.” By 2006, the tide had started to turn. Harvard University mathematical biologist Martin Nowak could declare, in an overview of the evolution of cooperation in Science magazine, “Perhaps the most remarkable aspect of evolution is its ability to generate cooperation in a competitive world. Thus, we might add ‘natural cooperation’ as a third fundamental principle of evolution beside mutation and natural selection.”
Why is this deep-rooted belief about human selfishness beginning to change? To some extent, the answer is specific to evolutionary biology. But similar ideas challenging the notion that people are born selfish have surfaced in several other fields, such as psychology, sociology, political science, and experimental economics. Together, these ideas are tracing a new intellectual arc in the disciplines concerned with human action and motivation.
Until the late 1980s, our understanding of what made people tick was marked by the rise of an ever more precisely defined model of self-interested rationality—the rational actor theory—which provided the basis for thinking about human behavior, institutions, and organizations. Assuming that we are uniformly rational and concerned only with advancing our material interests provided good enough predictions about our behavior—or so we thought—and convinced us that we are best off designing systems as though we are selfish creatures. Moreover, people who don’t cooperate can ruin things for everyone, so to save ourselves from freeloaders we built systems by assuming the worst of everyone.
Nowhere are the assumptions about the effective harnessing of self-interest, and the terrible consequences, expressed more clearly than in former Federal Reserve chairman Alan Greenspan’s 2008 testimony to the U.S. Senate after the collapse of the banking and credit system. “Those of us who have looked to the self-interest of lending institutions to protect shareholders’ equity—myself, especially—are in a state of shocked disbelief,” Greenspan said. “I’ve been going for 40 years or more with very considerable evidence that it was working exceptionally well.”
The widespread conviction about the power of self-interest is based on two long-standing, partly erroneous, and opposing assumptions about getting people to cooperate. One of them inspired the philosopher Thomas Hobbes’s Leviathan in 1651: Humans are fundamentally and universally selfish, and governments must control them so that they don’t destroy one another in the shortsighted pursuit of self-interest. The second is Adam Smith’s alternative solution: the invisible hand. Smith’s 1776 book, The Wealth of Nations, argued that because humans are self-interested and their decision making is driven by the rational weighing of costs and benefits, their actions in a free market tend to serve the common good. Though their prescriptions are very different, both the Leviathan and the invisible hand have the same starting point: a belief in humankind’s selfishness.
Models of self-interested rationality increasingly came to be seen as universally correct and applicable across an ever-expanding range of human practices. Economics became the primary medium of expression. For example, Nobel laureate Gary Becker argued in 1968 that the calculus of criminals is best understood as a set of rational trade-offs between the benefits of crime and the costs of punishment, discounted by the probability of detection. Imposing harsher punishments and increasing police enforcement, people concluded, are the obvious ways to tackle crime. The same year, Garrett Hardin described the tragedy of the commons—the parable about farmers who shared a piece of land with no restrictions on the number of cattle each could graze on it. They kept letting more cattle graze on the commons until the grass was gone, leaving nothing for anyone. No one stopped grazing animals, Hardin argued, for fear of losing out to the other farmers, who would continue overexploiting the commons. The conclusion was that as self-interested actors, human beings will inevitably destroy shared resources unless the latter are subject either to regulation or to property rights.
Like biology, however, the discipline of economics has changed over the years. In 2009, Elinor Ostrom was awarded the Nobel Prize in economics for showing how commons can—and do—sustain themselves for centuries as well-functioning systems. The most striking example is in Spain, where thousands of farmers have been managing their access to water through self-regulated irrigation districts for more than five centuries. To take another example, 75% of U.S. cities with populations of more than 50,000 have successfully adopted some version of community policing, which reduces crime not by imposing harsher penalties but by humanizing the interactions of the police with local communities.
The Success of the Commons (Located at the end of this article)
Overcoming our assumptions about self-interest is critical to diagnose the risks that new business rivals pose. In 1999, two experts showed how Microsoft’s entry into the encyclopedia market with Encarta symbolized the transformation made possible by networked information economics. Here was a major player leveraging a powerful position, gained by early-mover advantages and network effects, to bundle a product and distribute it widely at a low cost. Britannica’s lumbering 32-volume, multi-thousand-dollar offering didn’t stand a chance. Ten years later, Britannica had been pushed to a different model—but not by Encarta. Microsoft stopped producing Encarta in 2009 because of competition from a business model that is inconceivable according to the belief in self-interested rationality: Wikipedia.
If you feel that Wikipedia—the seventh or eighth most trafficked website, with more than 300 million visitors a month—is unique, ask Zagat’s how the user-generated Yelp has affected its market or Fodor what it thinks about TripAdvisor. The rise of open source software is an example of the same dynamic. For more than 15 years, companies have used open source Apache software for mission-critical web applications, with Microsoft’s server software trailing a distant second. Companies such as Google, Facebook, and Craigslist have also found ways to become profitable by engaging people. Our old models of human behavior did not—could not—predict that.
The way these organizations work flies in the face of the assumption that human beings are selfish creatures. For decades, economists, politicians, legislators, executives, and engineers have built systems and organizations around incentives, rewards, and punishments to get people to achieve public, corporate, and community goals. If you want employees to work harder, incorporate pay for performance and monitor their results more closely. If you want executives to do what’s right for shareholders, pay them in stock. If you want doctors to look after patients better, threaten them with malpractice suits.
Yet, all around us, we see people cooperating and working in collaboration, doing the right thing, behaving fairly, acting generously, caring about their group or team, and trying to behave like decent people who reciprocate kindness with kindness. The adoption of cooperative systems in many fields has been paralleled by a renewed interest in the mechanics of cooperation among researchers in the social and behavioral sciences. Through the work of many scientists, we have begun to see evidence across several disciplines that people are in fact more cooperative and selfless—or behave far less selfishly—than we have assumed. Perhaps humankind is not so inherently selfish after all.
Dozens of field studies have identified cooperative systems, many of which are more stable and effective than incentive-based ones. Evolutionary biologists and psychologists have found neural and possibly genetic evidence of a human predisposition to cooperate, which I shall describe below. After years of arguments to the contrary, there is growing evidence that evolution may favor people who cooperate and societies that include such individuals.
In fact, a distinct pattern has emerged. In experiments about cooperative behavior, a large minority of people—about 30%—behave as though they are selfish, as we commonly assume. However, 50% systematically and predictably behave cooperatively. Some of them cooperate conditionally; they treat kindness with kindness and meanness with meanness. Others cooperate unconditionally, even when it comes at a personal cost. (The remaining 20% are unpredictable, sometimes choosing to cooperate and other times refusing to do so.) In no society examined under controlled conditions have the majority of people consistently behaved selfishly.
Predisposed to Cooperate (Located at the end of this article)
That’s perhaps why using controls or carrots and sticks to motivate people isn’t effective. We need systems that rely on engagement, communication, and a sense of common purpose and identity. Most organizations would be better off helping us to engage and embrace our collaborative, generous sentiments than assuming that we are driven purely by self-interest. In fact, systems based on self-interest, such as material rewards and punishment, often lead to less productivity than an approach oriented toward our social motivations.
The challenge we face today is to build new models based on fresh assumptions about human behavior that can help us design better systems. The image of humanity this shift requires will allow us to hold a more benevolent model of who we are as human beings. No, we are not all Mother Teresa; if we were, we wouldn’t have heard of her. However, a majority of human beings are more willing to be cooperative, trustworthy, and generous than the dominant model has permitted us to assume. If we recognize that, we can build efficient systems by relying on our better selves rather than optimizing for our worst. We can do better.
The Science of Cooperation
What would the world be like if some people consistently operated as self-interested rational actors while others did not? Take the experiments that Lee Ross and his colleagues conducted with American college students and Israeli fighter pilots. As we know, in prisoner’s dilemma games, the two players will both be better off if they cooperate, but neither can trust the other to do so. Game theory predicts that both players will choose not to cooperate instead of taking the risk of losing out by cooperating. Extensive experimental work, however, has shown that people actually cooperate more than the theory predicts.
Ross and his collaborators told half the players in their experiments that they were playing the Community Game and the other half that they were playing the Wall Street Game. The two groups were identical in all other respects. Yet, in the Community Game group, 70% started out playing cooperatively and continued to do so throughout the experiment. In the Wall Street Game group, the proportions were reversed: 70% of the players didn’t cooperate with one another. Thirty percent started out playing cooperatively but stopped when the others didn’t respond.
The Wall Street Game (Located at the end of this article)
This experiment illustrates a couple of points. One, we are not all the same. About 30% of players cooperated even in the Wall Street Game while another 30% acted with self-interested rationality even when told they were in the Community Game. Two, many of us are influenced by context. According to Ross, the framing of the games influenced 40% of the sample. The players who thought they were acting in a context that rewarded self-interest behaved in a manner consistent with that expectation; participants who felt they were in a situation that demanded a prosocial attitude conformed to that scenario. When Ross and his colleagues asked the subjects’ teachers or commanders to predict who would and wouldn’t cooperate, it turned out that the game’s framing forecast behavior better than the teachers and commanders could. It seemed that participants who were seen as self-interested could be induced to cooperate if the games they were playing were reframed.
Anyone designing a cooperative system—be it an organizational process, a legal regime, or a technical platform—and optimizing it for only 30% of the population leaves on the table massive amounts of human potential. Moreover, such systems have to rely on monitoring, rewards, and punishments; their efficiency is limited by information-gathering techniques. Systems that harness intrinsic motivations and self-directed cooperative behavior don’t need to limit themselves to knowledge of what people will do. Every participant becomes his or her own monitor, bringing insight and initiative to the task—whether or not someone is monitoring behavior.
What might account for human cooperation? The first generation of explanations in evolutionary biology began with the theory of kin selection, which predicts that human beings will incur costs only to save others who carry their genes, such as siblings and cousins. Evolutionary biologist J.B.S. Haldane put it in less than romantic terms: “I will jump into the river to save two brothers or eight cousins.” That explained the cooperative behavior in ant and bee colonies as well as in smaller family groups. From there, it was a small hop to accepting reciprocity between individuals not genetically related as an important source of cooperation: “I’ll scratch your back if you immediately scratch mine.”
However, these theories still could not explain field observations in the wild, such as those of coyotes and badgers in the National Elk Refuge in Wyoming. Scientists there observed that the two groups of animals collaborated to hunt ground squirrels. Coyotes, which are faster and have a larger range, would scout, and once they spotted a squirrel, they would signal to the badgers. The badgers, which are underground hunters and catch their prey by trapping it in dead-end tunnels, would then burrow and lie in wait. The squirrels were trapped between a hammer and an anvil: If they escaped the badgers by going above ground, the coyotes would catch them. If they evaded the coyotes by ducking below ground, the badgers would corner them. At the end of a hunt, only one or the other would eat the squirrel, but still the badgers and coyotes collaborated.
Over the years, researchers have developed models of indirect reciprocity, networked reciprocity, and even group selection to explain observations of looser and more remote cooperation. The findings in biology meet human society directly in the work on gene-culture coevolution of anthropologists Peter Richerson and Robert Boyd. They have been gathering evidence for the proposition that cultural practices, too, are subject to evolutionary pressures and that human individuals and cultures evolve toward more-successful strategies.
Imagine two groups. In one, the practice of serving in the army is valued; in the other, it’s not. In the first group, people are willing to fight and risk their lives for their group or donate special skills such as weapon making or intelligence gathering. In the second, they aren’t. If these two groups go to war, the outcome will never be in doubt. And populations don’t have to wait until genetic changes disperse these traits; they can copy one another’s best practices if they seem to work better.
Boyd and Richerson argue that cultures evolve not only through the copying of practices but also through genetic changes; in other words, genes and cultures coevolve. Cultural practices can influence the genetic development of populations that adopt them, favoring genetic predispositions that benefit most from the cultural practice or make following it easier. The researchers’ most physiologically striking example is adult lactose tolerance, which is widespread among descendents of European people who drink milk but is rare among those who created yogurts and cheeses to break down lactose so that they could consume milk products. Lactose tolerance is a genetic trait, but it can be attributed to a cultural practice—drinking milk rather than eating yogurt or cheese—that has existed for a very short time in evolutionary terms.
What might the genetic components of a cooperative culture look like? Political scientists such as James Fowler and his collaborators found that the decision to vote has a strong genetic component. In a 2008 paper in the American Political Science Review, they described their analysis of the voting behavior of 400 identical and nonidentical twins in the Los Angeles area. All the twins in the study were raised together, which meant that the effects of early upbringing, socioeconomic status, and political affiliation didn’t compromise the results. The study found that identical twins were more likely to show the same behavior—either vote or not vote—than were nonidentical twins. Statistical analyses conducted by Fowler and his collaborators suggest that slightly more than 50% of the concordance in behavior was due to genetics.
How could genes account for a practice that has been widespread in the modern world for only around 100 years? That’s a blip on the evolutionary radar; a gene for voting couldn’t possibly have evolved in so short a time. Besides, voting is a puzzle for the rational actor model. The probability that an individual’s vote will affect the outcome of a policy he or she cares about is infinitesimally small, so much so that any cost, including a 15-minute detour, should outweigh it. Still, hundreds of millions of us around the world violate self-interested rationality in public every year. We vote.
What on earth does the propensity to vote have to do with collaboration, you might ask? Imagine there are such things as personality traits, which typify an individual’s behavior. Imagine that one such trait is conscientiousness. People who have that trait—in personality psychology, it is one of the Big Five—tend to be happier with themselves and do what they think is right according to the cultural context. Voting happens to be one way—a relatively inexpensive way—of making conscientious people comfortable in their own skin.
Now let’s bring Boyd and Richerson’s theory back into the equation. Imagine that over the millennia, some cultures rewarded and valued conscientiousness. In those cultures, people who had a genetic predisposition to be conscientious would thrive. Because they would be considered desirable mates, they would reproduce relatively more often, meaning there would be more of them over time. The cultures, in turn, would be able to sustain cooperation more effectively because people would be driven to do the right thing even when they weren’t directly monitored, punished, or rewarded.
Several studies suggest that personality traits are partly heritable. A few years ago, Thomas Bouchard and Matt McGue published an extensive review of twin, adoption, and biological studies that looked at genetic influences on psychological and personality differences. They concluded that personality traits such as extraversion, neuroticism, agreeableness, and openness were on average between 42% and 57% heritable, while shared environmental factors such as the home, which most people believe are major influences, did not correlate with personality.
The biology of cooperation draws our attention because it speaks with the authority of the most reliable way we know how to know: science. If we simply say the word empathy, it sounds mushy. If a scientist like Tania Singer shows, using fMRI scans, that women’s brains light up in three places when they get electric shocks, and that when their partners are shocked, their brains light up in two of the same three places, we understand empathy not as a hard-to-define feeling but as something that people experience in a physical sense. This phenomenon was originally discovered by neurophysiologist Giacomo Rizzolatti, who also found that our brains mirror not only pain and motor movements but pure emotions as well. When Rizzolatti and his colleagues showed subjects videos in which people were expressing disgust on their faces, the same neurons fired in the subjects’ brains as the ones that had been activated when they themselves were exposed to disgusting smells. Cognitively and emotionally, we may be able to “feel” what others are feeling.
Neuroscience also shows that a reward circuit is triggered in our brains when we cooperate with one another, and that provides a scientific basis for saying that at least some people want to cooperate, given a choice, because it feels good. Kevin McCabe and his collaborators have shown that people are rewarded when they trust others; James Rilling and his team have demonstrated that our brains light up differently when we are playing with another human being than they do when we are using a computer.
As we learn more about the biology of behavior, we are gaining a better grasp of the role that genes play in interactions with culture. The ability to trust is a key element in cooperation. It appears to have a biological component, suggesting it may even have a genetic basis. One recent animal study looked at the effects of the brain chemical oxytocin on trust formation in voles. The researchers compared monogamous prairie and pine voles with promiscuous mountain and meadow voles. They found that the monogamous voles had higher densities of oxytocin receptors in many areas of the brain than the promiscuous voles did, meaning that more-trusting partnerships formed between animals whose brains had better oxytocin uptake. Researchers later found that when human beings were given an oxytocin nasal spray, they, too, were more likely to trust their partners.
We are far from having a clear model that connects all these dots; what I offer is conjecture based on research that has not made the leap to claim those connections. However, the argument is suggestive; it gives us a framework to grapple with the idea that many of us are, by a combination of nature, nurture, and the interactions between us, much better and less selfish than our standard models predict, as philosophers such as Jean-Jacques Rousseau and David Hume have argued. In fact, it brings the centuries-old debate between Hobbes and Rousseau—or between the Adam Smith of The Wealth of Nations and the Adam Smith of The Theory of Moral Sentiments—to the present, with genetics and fMRI studies thrown in as fresh evidence. Over the past decade, Rousseau seems to have gained the advantage over Hobbes.
The Building Blocks of Cooperative Systems
Analyzing the research on human cooperation from so many disciplines helps identify some levers that may motivate people to contribute to the collective effort instead of pursuing their own interests at the group’s expense. These levers aren’t equally appropriate for all types of systems; different combinations are better for different activities and populations. With that caveat, here are seven preliminary ideas for building cooperative systems.
Communication.
Nothing is more important in a cooperative system than communication among participants. When people are able to communicate, they are more empathetic and more trusting, and they reach solutions more readily than when they cannot talk to one another. Over hundreds of experiments spanning decades, no single factor has had as large an effect on levels of cooperation as the ability to communicate.
Framing and authenticity.
People react differently depending on how situations are framed, but they aren’t stupid. It’s important that the frame fit reality. Framing a practice as collaborative or a system as a community may encourage cooperation for a while, but it won’t last if that claim isn’t believable.
Empathy and solidarity.
For reasons biological and social, the more empathy and solidarity we feel with others, the more likely we are to take their interests into account. Similarly, solidarity with a group makes us more likely to sacrifice our own interest for that of the collective. Solidarity can slide into discrimination, though. While it’s impossible to deny the role of team spirit in getting people to cooperate, we need to be wary of its power to exclude nonmembers.
The first step in constructing a team or encouraging prosocial behavior is to expand the circle of people whose interests participants feel they should take into account. That’s not difficult; experimental economists such as Bruno Frey and Iris Bohnet have shown that merely seeing another participant’s face increases cooperation levels substantially. Empathy for those affected by our actions alters the outcomes we care about, and that, in turn, changes our behavior.
Fairness and morality.
People care about being treated fairly. According to work conducted by Ernst Fehr and his collaborators, “fair” does not mean “equal.” In experiments where some people gained because of skill or luck, the others initially were willing to let them walk away with a much larger share of the gains.
We are also flexible in our ability to accept norms of fairness. Historian Andrea McDowell, for example, showed how mining camps during the California gold rush enacted very different codes for what counted as fair or unfair claim jumping. Though the norms differed among camps, each camp applied its own norms uniformly, and newcomers accepted them. In another groundbreaking study, when anthropologists Rob Boyd and Joe Henrich and economists Sam Bowles, Colin Camerer, Ernst Fehr, and Herbert Gintis conducted games with subjects from 15 tribal societies all over the world, they found enormous differences in conceptions of fairness.
Fair Does Not Mean Equal (Located at the end of this article)
To some extent, people care about having a fair share of whatever benefits their cooperation produces. Experiments conducted in market-based societies suggest that most participants tend to follow an equal-distribution norm if there’s no reason to deviate from it. If some players deviate too far from the norm, the others will punish them even if doing so costs them, too. In fact, most people would rather lose everything than walk away with too small a share of the gains.
People also care about doing the right thing, whatever that is. Clearly defined values are crucial to cooperation; discussing, explaining, and reinforcing the right or ethical thing to do will increase the degree to which people behave that way. It’s important for a cooperative system to have codes that are predicated less on rules and more on social norms. They must be flexible or plastic enough to adapt to change, and they must be transparent. Letting people in a system see what others are doing reinforces social norms and gets people to comply—not because they fear embarrassment or ostracism but because they want to do what is normal.
Rewards and punishment.
In order to foster cooperation, it is critical to set up systems that appeal to participants’ intrinsic motivations—that is, what they want to do from within—instead of systems based on monitoring people and rewarding or punishing them according to their behavior.
Two facts make this tough to implement. First, intrinsic motivation is in its infancy as an area of inquiry. Second, there is a consistent and stable body of work that tells us that if we add money, things may go worse rather than better. That is, monetary incentives and material rewards can crowd out intrinsic motivations to cooperate or display empathetic behavior. If you are invited to a dinner party, you can bring a gift—flowers, wine, or whatever counts as a friendly gesture. If instead you leave $100 on the table at the end of the meal, you will destroy the atmosphere because you have turned a social interaction into a commercial exchange. This captures the findings of studies in experimental economics and psychology as well as many field studies of the crowding-out phenomenon.
For example, a recent study in Sweden, which has a purely voluntary blood donation system, showed that women’s contributions decreased when they were offered payments. Donating blood is a way for people to signal that they are the kind of person willing to sacrifice for the good of others; offering money spoiled that effect. To test that hypothesis, the experimenters later permitted donors to give the money they would have received to a foundation that works on children’s health issues. Sure enough, the women’s contributions went back up.
The Power of Intrinsic Motivation (Located at the end of this article)
Whenever you design a policy that relies on monetary rewards, you have to assume that it will have side effects on the psychological, social, and moral dimensions of human motivation. A change you expect to produce more of the behavior you are rewarding, or less of the behavior you are punishing, may produce exactly the opposite, because the effects on the material self-interest vector are more than canceled out by the effects on the intrinsic-motivation vectors. We shouldn’t try to motivate people only by offering them material payoffs; we should also focus on motivating them socially and intellectually by making cooperation social, autonomous, rewarding, and even—if we can swing it—fun.
Reputation and reciprocity.
One extremely important form of cooperation hinges on long-term reciprocity, both direct and indirect. Systems that rely on reciprocity, particularly of the “pay-it-forward” kind, are enormously valuable but easily corrupted. Reputation is the most powerful tool against this. As online systems such as eBay have shown, even anonymous reputation systems—such as “handles” that betray nothing of a person’s real identity—are sufficient to keep people in line.
Diversity.
Systems that harness diverse motivations are more productive than those built for people who care only about material payoffs. Because we differ from one another, cooperative systems have to be flexible. They also need to recognize that we are sensitive to the costs of cooperating, and that this sensitivity can change. It’s possible to create a system that depends on massive self-sacrifice, but it’s extremely tough to sustain. The fates of the nationalist and communist experiments of the 20th century in Germany, Russia, and China provide ample evidence of that.
Debunking the Myth of Selfishness
Given the wealth and depth of the evidence that refutes the model of self-interested rationality, why does it still dominate? Four reasons account for its persistence.
Partial truth.
The myth of universal selfishness endures in part because it isn’t entirely wrong—only mostly so. We have all experienced moments when we were torn between what was good for us and what was good for others, and many of us have, on occasion, caved in to self-interest even when other values demanded something else. We can recognize ourselves in the story of rational self-interest. The problem comes when we take a partial truth and treat it as though it were the whole. We then build systems like the Wall Street Game, which drives almost everyone to act in a self-interested way, instead of systems like the Community Game, in which the self-interested are a minority.
History.
The roots of the assumption of universal selfishness are probably as old as human culture. Nonetheless, it gained prominence from the 1950s to the 1980s, becoming dominant in public discourse around the world during the Cold War, when the global power struggle between the United States and the former Soviet Union was couched as an ideological battle between capitalism and free enterprise, on the one hand, and socialism and collectivism, on the other. Nothing is more seductive than seeing our moral and political views vindicated as those that scientifically reflect human nature. The end of the Cold War has made it possible to see new scientific observations for what they are: progress rather than a threat to capitalism.
Simplicity.
Human beings tend to seek simple and neat explanations for a complex world. Coherent stories help organize different facts, ideas, and insights, and help predict what will happen if we do X or what we will find if we look under Y. In psychology, that’s called cognitive fluency: the tendency to hold on to things that are simple to understand and remember. A straightforward, uncomplicated theory of human nature that reduces our actions to simple, predictable responses to rewards and punishments is appealing to the human mind. But our experiences are more complex.
Habit.
Almost two generations of human beings have been educated and socialized to think in terms of universal selfishness. “We need to get the incentives right” has been the watchword for anyone engaged in designing any kind of interaction, organization, or law. “What’s in it for him/her/us?” is the question we have trained ourselves to ask first. Once we get in the habit of thinking of ourselves in a particular way, we tend to interpret all the evidence we encounter to fit our preconceptions and assumptions.
When we see acts of generosity or cooperation, for example, we tend to interpret them through the lens of self-interest. The first generation of economic scholarship on open source software analyzed participants’ voluntary contributions as attempts to improve their reputations and long-term employment prospects—interpretations that were refuted by the decade of empirical research that followed. Through sheer force of habit, we mistake our erroneous beliefs about human nature for evidence, and those beliefs become entrenched. New insights must overcome substantial barriers before they are accepted.
In today’s world, adaptability, creativity, and innovativeness appear to be preconditions for organizations and individuals to thrive. These qualities don’t fit well with the industrial business model; they aren’t amenable to monitoring and pricing. We need people who aren’t focused only on payoffs but do the best they can to learn, adapt, improve, and deliver results for the organization. Being internally motivated to bring these qualities to bear in a world where insight, creativity, and innovation can come from anyone, anywhere, at any time is more important than being able to calculate the costs, benefits, risks, and rewards of well-understood actions in well-specified contexts. Alongside creativity, drive, flexibility, and diversity, we must include social conscience and authentic humanity when trying to design cooperative systems.
The Success of the Commons
In Spain, thousands of farmers have been managing their access to water through self-regulated irrigation districts for more than five centuries. In the United States, 75% of cities with populations of more than 50,000 have successfully adopted some version of community policing, which reduces crime not by imposing harsher penalties but by humanizing the interactions of the police with local communities.
Predisposed to Cooperate
In experiments testing cooperative behavior, 50% of participants systematically and predictably behave cooperatively. Some do so conditionally; they treat kindness with kindness and meanness with meanness. Others cooperate unconditionally, even when it comes at a personal cost.
The Wall Street Game
Lee Ross and his collaborators told one group of participants that they would be playing the Community Game and another group that they would be playing the Wall Street Game. In the first group, 70% started out playing cooperatively and continued to do so throughout the experiment. In the second group, 70% of the players didn’t cooperate with one another; 30% started out playing cooperatively but stopped when the others didn’t respond in kind.
Fair Does Not Mean Equal
We are flexible in our ability to accept norms of fairness. Historian Andrea McDowell, for example, showed that mining camps during the California gold rush enacted very different codes for what counted as fair or unfair claim jumping. Though the norms differed among camps, each camp applied its own norms uniformly, and newcomers accepted them.
The Power of Intrinsic Motivation
A consistent and stable body of work tells us that monetary incentives and material rewards can crowd out intrinsic motivations to cooperate and display empathetic behavior. A recent study in Sweden, which has a purely voluntary blood donation system, showed that women’s contributions decreased when they were offered payments. But when the experimenters permitted donors to give the money they would have received to a foundation that works on children’s health issues, the women’s contributions increased.