
From B Schools to COIN: Improving the U.S. Army's Brand Management

(To read the rest of "Over-Reacting to COIN (Again): On Cultural Empathy and 'Gratitude Theory'", please click here and scroll to the bottom.)

A few weeks back, to help decide between business schools, I sat in on a class at USC’s Marshall School of Business. I thought it was going to be on marketing. Instead, I got a lesson on counter-insurgency warfare and Afghanistan. The class was “Brand Management”.

The professor started his lecture by describing the three principles that guide all human decision making (the realm of marketing). The third principle, the topic of that day’s class, was “emotional predisposition.” He described how brands use advertising to create an emotional predisposition towards their products, specifically how those products can “enrich, entertain, or enable” your life.

Later in the class, he repeated the three major principles of marketing: “All humans are motivated by utility maximization, the minimum effort principle and emotional predisposition, in some measure.”

He used different terms than I did in December, but made the same point: humans don’t measure everything by utility maximization--some emotions override any cost/benefit decision. For example, one of our grandfathers, who fought in the Pacific, refused to even contemplate buying a Japanese motor vehicle. They could have given them away for free, and he wouldn’t have budged.

In other words, the professor described the exact model of human behavior I believe we need in counter-insurgencies. We cannot kill our way to victory, because inflicting widespread death will have severe emotional consequences. (Which I will discuss more.) At the same time, we cannot simply buy things for the population if we haven’t established security for the population. (I’ll discuss this straw man soon, too.) Instead, we need a population-centric approach that secures the population, reconstructs and builds a functioning government, and hunts down, detains or kills those who inflict violence on the government or population.

The professor added a key component to human nature that we had neglected. One of his principles was “the minimum effort principle”. While I haven’t specifically related this idea to insurgencies, plenty of other writers have. (Our post on management “Improve the Fighting Position” is about combating “the minimum effort principle” as it relates to your desk at work.) For example, a national security academic we hold in the highest regard made this point in an op-ed for the Daily Beast:

Populations, in civil wars, make cold-blooded calculations about their self-interest. If forced to choose sides in a civil war—and they will resist making that choice for as long as possible, for understandable reasons—they will side with the faction they assess to be the one most likely to win.

Yep, that is Andrew Exum, whom I cited in “Getting Rid of the Chicago School of Counter-Insurgency”. While I rebutted his statement about “cold-blooded calculations”, that part in the middle, between the dashes, precisely sums up the minimum effort principle. The other book I recommend on this topic is A People Numerous and Armed by John Shy, whose thesis is that the American Revolution forced Americans to choose a side; it politicized the people, leading to universal male suffrage.

So a psychologist with a Nobel Prize in economics, a marketing professor and The Economist have all said that our models of human behavior should include rationality, utility maximization, the minimum effort principle and emotional predisposition. Thanks to being the world’s foremost economic power, we can model and predict human behavior. Our Army could--hypothetically--tap into these vast reserves of marketing knowledge.

The question is, will we update our models to reflect that humans are rational and emotional, or will we just believe we can kill our way to victory?

Two comments

Interesting, and in the black box realm of theory it makes sense. The problem as I see it is the assertion that being the world’s foremost economic power means we can model and predict human behavior. Quite frankly, we can’t – which is why International Relations theory has devolved into the snipe hunt for the perfect independent variable in the academic realm and continues to drift ever further away from utility in the public policy sphere.

Political Science, as a discipline, holds that human behavior ought to be predictable with the right combination of rigorous theoretical and methodological underpinnings. Some of these include utility maximizing behavior, rational actor assumptions, gains maximizing, satisficing behavior, and a whole host of other explanatory factors. Here is the not-so-secret: there aren’t any Social Science theories that can actually predict human behavior with anything like general utility, because humans are incredibly complex and it is almost impossible to isolate the variables that influence our decision making. Marketing even tells us that human beings, when asked, are often wrong when it comes to identifying their preferences – see Malcolm Gladwell’s iconic TED talk on spaghetti sauce…

For a brilliant laydown of Social Science/modeling limitations (and something I wish I’d read before grad school) see John Lewis Gaddis, The Landscape of History. The bottom line is this – the study of history, which at least offers the potential for far more complete information than Social Science predictive theory, rarely arrives at some definitive truth about complex problems. If a discipline that looks at issues through the lens of all available information can’t explain why things happened the way they did (and we frequently can’t – has anyone reached real consensus on that Vietnam thing yet?), why do we assume we can predict what large groups of people we don’t understand will do given some series of inputs?

Given a dynamic and highly complex operating environment subject to variables we are incapable of isolating even if we could identify them, of what value is predictive modeling that draws from American domestic marketing practices? We face the mother of all incomplete information problems, which our army is emotionally predisposed to deal with in a far less theoretical fashion. At the end of the day, we can’t (and shouldn’t) kill our way to the win…but isn’t it the height of hubris to believe that we can model and predict what other people will do given ‘rationality’ as the base assumption? Whose rationality? [see http://danieldrezner.com/research/guest/..]

I do agree with the bit between the dashes – that populations will resist commitment for as long as possible – but I think the idea we can model or influence that commitment is incredibly problematic. I love Shy’s work as much as any American military historian, but keep in mind – the population resisting commitment was the one the Revolutionaries came from – no third party rallied the colonists to the cause of independence. For one perspective on how this went the last time we tried, read Stu Herrington’s Stalking the Vietcong. [http://www.amazon.com/Stalking-Vietcong-Operation-Phoenix-Personal/dp/0345472519]


I don’t think modeling is without utility, as long as you don’t take it too far. Marketers use those models effectively. But they are only dealing with a narrow range of variables (how do you get people to buy this particular thing?), so Josiah is right that small wars have many more variables. Those extra variables only affect how precisely accurate your predictions can be; they don’t mean you can’t use models to identify some basic trends and make broad predictions about behavior.

Take the importance of emotion, for example. You may not be able to precisely model the effect that the emotions of people whose relatives are killed by a mistaken air strike will have on a small war, but you can predict that there will be an effect, and it probably won’t be to the favor of whoever made the bad air strike. That has value, a lot of value. The fact that you can’t make a precise prediction doesn’t lessen the value.

What you are really dealing with here is human nature and its effect on war. One of the values of the models, however imprecise, is that you can lead technotypes to acknowledge the existence of human nature, and maybe even to see its importance, without them quite knowing it. You sneak it up on them. Some of them won’t accept “People are really going to be mad if you kill their mother by mistake,” because that is just so not something a tenured prof at Cornell would say. But if you use words like “model” and “algorithm,” then they might listen.