Sunday, April 30, 2017

The siren song of homogeneity


The U.S. and Europe are in a time of great political change. Policies haven't changed that much yet, but the set of ideas that drive movements and activism and the public discussion have altered radically in the last few years. In the U.S., which of course I know the best, there have been new outpourings on the left - the resurgent socialist movement and the social justice movement chief among them. But as far as I can see, the biggest new thing is the alt-right. Loosely (we can argue about definitions all day, and I'm sure many of you will want to do so), the alt-right wants to make American society homogeneous. Most of the enthusiasm is for racial homogeneity, but religion seems to figure into it a bit as well.

The siren song of homogeneity is a powerful one. On Twitter and elsewhere, I am encountering more and more young people (mostly men) who openly yearn for a society where everyone is white. The more reasonable among these young people tell me that homogeneity reduces conflict, increases social trust, and has a number of other benefits. They often cite Japan as their paradigmatic homogeneous society; some explicitly say they want a white version of Japan.

Those are the reasonable ones - the less reasonable ones tend to communicate in memes, threats, and slurs ("Fuck you, Jew! How about open boarders for ISRAEL!!", etc.). But the fact that these men are pouring so much time, effort, and passion into those memes, threats, and slurs says something important. It says that there is passion in this movement.


Is the alt-right really a growing, rising movement?

Much of the passion for white homogeneity seems new to me - twenty years ago, despite the existence of Nazi-type websites like Stormfront, the idea of making America an all-white nation seemed like a fringe notion. Perhaps it still is a fringe notion - after all, social media acts as a force multiplier that allows a relatively small number of highly committed individuals to seem like a huge army. And perhaps this kind of sentiment was always reasonably common in America, but simply kept under wraps by the mainstream media before the internet emerged to make it more visible.

There is some evidence to support the contention that alt-right ideas are still highly unpopular in America. A 2016 Pew survey found that only 7 percent of Americans say that growing diversity makes the country a worse place to live:


Compare that to 31 percent in Britain and Germany and 36 percent in the Netherlands!

Meanwhile, recent polls find support for immigration:


That's a short time series, so here's a longer one from Gallup. It too shows a gentle downtrend in anti-immigrant sentiment, and pegs it at just under 40 percent:


As Gallup's racial breakdown shows, the decline in anti-immigrant sentiment is being driven by whites - anti-immigrant sentiment is actually slightly up among blacks and Hispanics. That implies that much of the anti-immigrant sentiment that does exist is not due to a growing yearning for a homogeneous white nation. A substantial majority of white Americans supports letting undocumented immigrants stay, as long as certain conditions are met - that doesn't exactly seem like a vote for white homogeneity.

So it's certainly possible that the alt-right - even defined very generally, including the more moderate "alt-light" and the quietly sympathetic "alt-white" - is a shrinking, dying idea that is only becoming louder and more aggressive because it's under threat. It's possible that Trump's election was really driven more by people's economic hopes that he would bring back dying industries and bring American jobs back from overseas, or even just by a desire to roll the dice of change.

But I think that whether or not the alt-right is really a growing, burgeoning movement, it makes sense to take it and its ideas seriously. First, the presence of Trump in the White House will probably force much of the country to listen to what the alt-right has to say. Even though he isn't really their man, he has hired several people who at least loosely sympathize with the movement's ideas - Bannon, Miller, Anton and Gorka among them. That means that as long as Trump's butt is planted in a chair in the Oval Office, alt-right ideas have at least a chance of making it into government policy. That means the alt-right, and their ideas, matter.

And even beyond that, I feel an emotional desire to engage with the alt-right - at least, the more reasonable among them. I couldn't care less about the people in Europe supporting Le Pen or Geert Wilders, but alt-right Americans are my countrymen. I'm a nationalist at heart and I care about what my countrymen think.

And I think that there are a decent number of young (mostly) men out there whose intellectual lives will be defined by this stuff - who will spend their 20s and 30s entranced by the idea of a homogeneous white society. Just as there are old hippies who still look at the world through the lens of the 1960s anti-war movement, in a few decades there will be some aging white Millennial men for whom Pepe the Frog and r/thedonald and Kekistan and the Great Meme War were the climax of their youthful energy and imagination. I want to engage with those people, even if (as I predict) they ultimately lose.


Is the alt-right really a pro-homogeneity movement? Is Trumpism?

Every movement is...well, heterogeneous. Alt-right people talk a lot about homogeneity, but it's certainly not the only thing they talk about, or the only reason for their movement. Some may join the alt-right simply out of a fear of the social justice movement - banding together for mutual defense. Others may simply be opposed to some group of immigrants - someone who would be fine with a Cuban neighbor might be terrified of a Syrian one. Still others may be religious traditionalists looking for a home after the collapse of the Christian right, neo-Confederates allied to an Old South style of racial politics, or just Trump fans looking for a cool club to join. For some, "homogeneity" might be simply a convenient rallying cry for expelling undesirable groups from the country, or for instituting one's chosen value system. As for Trumpism, that almost certainly has multiple causes - anything as big and all-encompassing as a presidential election will have multiple causes.

But I think research shows that fear of ethnic heterogeneity is a real driver of Trump support. For example, this study shows that reminding white people with strong white identification that America is getting less white (which might not actually be true, but we'll get to that later) increased support for Trump. And anecdotally, support for homogeneity pops up again and again in pro-Trump literature and discourse. Here's a quote from Trump advisor Michael Anton's famous essay "The Flight 93 Election," widely considered to be one of the basic Trumpist manifestos:
Third and most important, the ceaseless importation of Third World foreigners with no tradition of, taste for, or experience in liberty means that the electorate grows more left, more Democratic, less Republican, less republican, and less traditionally American with every cycle. As does, of course, the U.S. population[.]
So I'd say the case is fairly clear that the desire for a homogeneous society runs strong through both the alt-right and the broader Trump movement.


The data-based case for homogeneity

The case for homogeneity comes down to the idea that a homogeneous society is a nicer place to live. Alt-right people cite Japan's stunningly low crime rate, for example, as evidence that ethnically similar people don't fight. They also claim that homogeneity increases social trust.

There is a reasonably large body of research that supports the "trust" idea. For a good list of links to those papers, check out this post by blogger James Weidmann, better known as Roissy. Roissy sums up the thesis in one simple equation: "Diversity + Proximity = War." I'm not going to replicate the whole list here, but here's a very small sampling:

1. A study in Denmark showing a negative correlation between reported trust and ethnic diversity at the municipality level from 1979-2009

2. A study in Britain finding that people who stay in communities after those communities become more diverse report more negative attitudes toward their communities afterward

3. A study in the Netherlands finding that increasing diversity in classrooms made kids more likely to choose friends of similar ethnicity

4. A study finding that across Europe, different-ethnicity immigration tends to decrease social trust, while similar-ethnicity immigration tends to increase it.

Roissy didn't include econ papers on his list, but economists have also flagged the dangers of ethnic divisions. Alesina, Baqir, and Easterly (ironically, a rather diverse team of authors) famously found that ethnic divisions reduce public good provision. Alesina, Glaeser, and Sacerdote hypothesize that diversity is what prevents America from having a Europe-style welfare state.

There are lots of postulated mechanisms for how diversity reduces trust and leads to dysfunctional societies. Maybe people are genetically programmed to cooperate with those who are genetically more similar to them. Maybe people who belong to different groups have different interests. Maybe we just generally fear that which is different and strange.

On top of this appeal to evidence, however, there's an emotional appeal - as there always is for any really important political idea. There's the negative appeal of fear of diversity - the specter of becoming a minority, potentially hated, despised, and/or oppressed by other groups. But there also seems to be a yearning for a half-imagined utopia - a "Japan for white people", where shared whiteness produces a neighborly camaraderie, social cohesion, and peace that is unknown in much of modern America.


Caveats to the data-based case for homogeneity

Roissy is a polemical blogger; his aim is to advocate, not to educate. The academic case for homogeneity is not nearly as clear-cut as what he presents.

Many of the studies he cites have methodological issues. For example, one study finds that "neighborly exchange" is negatively correlated with diversity. But its data set doesn't allow it to compare recently diversified neighborhoods with neighborhoods that recently received a lot of internal in-migration - in other words, it may simply be that a flood of newcomers, be they the same race as the majority or not, tends to disrupt neighborly friendships. In fact most of the cited studies tend to have this problem - it's hard to distinguish between the impact of population mobility and the impact of diversity itself.

Other studies he cites show some cases in which ethnic diversity increases trust. For example, a study in America found a U-shaped relationship between ethnic fractionalization and trust, meaning that high and low diversity places tend to have more trust than medium-diversity places (which makes sense if medium-diversity places are places where a bunch of newcomers just showed up).
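
A quick note on measurement: "ethnic fractionalization" in this literature is typically the probability that two randomly drawn residents belong to different groups - that is, one minus the sum of squared group shares. Here's a minimal sketch of the calculation, using made-up group shares:

```python
# Ethnic fractionalization: the probability that two randomly chosen residents
# belong to different groups, i.e. 1 minus the sum of squared group shares.
def fractionalization(shares):
    return 1.0 - sum(s ** 2 for s in shares)

print(fractionalization([1.0]))                 # fully homogeneous place -> 0.0
print(fractionalization([0.5, 0.5]))            # two equal groups        -> 0.5
print(fractionalization([0.4, 0.3, 0.2, 0.1]))  # more groups             -> 0.7
```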

Also, it's worth noting that many of the studies Roissy cites are from Europe. It may be the case that Europe functions differently than America, and is not an appropriate comparison. Most Europeans may think of their societies as based on ethnicity - "blood and soil", as some say - while this may hold true for only a minority of Americans. Also, recent European nonwhite immigration may be very different from the type of nonwhite immigration America gets - whereas America has recently mostly taken in hard-working Hispanics and high-skilled Asians and Africans, Europe has tended to take in a lot of lower-skilled Middle Easterners and North Africans. Not only might the latter tend to be a more fractious type of immigrant, but there's also an enmity between Europe and the MENA region that goes back further than reliably recorded history. That could contribute to the distrust. In other words, the kind of diversity you get probably matters a lot.

Then there are all of the contrary studies that Roissy, as a polemical blogger, doesn't cite. It's a big literature, and there are lots of findings that go in the other direction. For example:

1. A recent study in Southern California found that ethnic diversity is associated with decreased crime and higher home values

2. A study in Britain showed no relationship between ethnic diversity and trust.

3. A study in Europe found a positive long-term effect of diversity on trust.

4. A 2014 literature survey finds that "ethnic diversity is not related to less interethnic social cohesion."

5. A 2008 study in Europe found that ethnic diversity didn't decrease social capital.

6. A 2007 study in Britain found that the negative effect of diversity on social cohesion disappears after controlling for economic variables.

7. There's also a big literature on diversity and group decision-making, most (but not all) of which concludes that ethnic diversity makes groups smarter.

I could go on - most of this is the result of me just doing Google Scholar searches for "diversity and trust" and "diversity and social capital" and picking out any studies on the first page or two that seem to contradict the "diversity decreases trust" conclusion. That's hardly a scientific way to proceed, but it does show that if you get your academic information from a polemicist, you're going to get a distorted picture of the academic literature.

My point here is not to say that the alt-right is wrong about homogeneity and trust. They might be right - my sense from reading literature surveys is that the correlation between homogeneity and trust is a common finding, but not overwhelmingly common. My point here is to say that the question of homogeneity and trust is not yet answered. This is not surprising, because both homogeneity and trust are big, expansive, vaguely defined concepts, which usually means clear-cut answers don't exist.

Another issue with many of these studies is that they rely on survey research, which I tend to view with a bit of skepticism. This is not to say survey research is worthless, but I guess like any good economist I instinctively put more stock in measures of actual behavior. Roissy's link list does include some studies showing diversity increases conflict, but to my knowledge, the academic consensus is that immigration reduces crime. That literature review is from a few years back, but recent research all seems to confirm the finding. To me, lower crime is a much more tangible result than people simply saying negative things on a survey.

But an even more important reason why you shouldn't put too much stock in this literature is that almost none of these studies are very good at dealing with endogeneity. Here are some examples of endogeneity issues:

* Suppose low-skilled immigrants tend to move to areas with low social trust, because businesses in places with low social cohesion tend to hire cheap labor.

* Suppose large empires tend to conquer lots of different ethnicities and encourage internal migration that increases local ethnic diversity, but suppose that large empires also tend to collapse, causing lots of local conflicts.

* Suppose exogenous events that cause waves of newcomers - conflicts, recessions, out-migration from declining areas - are also things that tend to reduce trust.

To really control for these kinds of things, you need natural experiments. Economists already use them for things like the impact of immigration on wages. But isolating the effect of ethnic diversity from the effect of population mobility - i.e., telling the difference between "newcomers of any race" and "newcomers of a minority race" - will require finding some situation where newcomers of different ethnicities are randomly assigned to different areas.
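
To make the endogeneity problem concrete, here's a minimal simulation of the first scenario above - entirely made-up numbers, purely illustrative. In this toy world, diversity has zero causal effect on trust, yet a naive cross-sectional regression finds a negative "effect" of diversity, simply because newcomers sort into places that already had low trust:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000  # hypothetical neighborhoods

# Baseline trust varies across neighborhoods for reasons unrelated to diversity.
trust = rng.normal(loc=0.0, scale=1.0, size=n)

# Selection: newcomers are likelier to settle where baseline trust is already low
# (say, because low-cohesion places hire more cheap labor). Note that diversity
# has NO causal effect on trust anywhere in this toy world.
diversity = 0.5 - 0.3 * trust + rng.normal(scale=0.5, size=n)

# Naive cross-sectional regression of trust on diversity.
slope, intercept = np.polyfit(diversity, trust, deg=1)
print(f"estimated 'effect' of diversity on trust: {slope:.2f}")  # negative, despite a true effect of zero
```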

Anyway, so that's interesting. But to me, the really interesting question is whether ethnicity itself is endogenous.

More on that later, though. First, let's shift gears from data to anecdote, so I can talk about my experiences living in an ethnically homogeneous society.


My own experience in a homogeneous society

As regular blog readers know, I've lived in Japan (for a total of about 3.5 years). Though I'm of course not Japanese, the experience taught me much about how Japanese people live and think. So I have observed at least one good example of a homogeneous society up close. While that example might not generalize, here are my thoughts.

First of all, if you think Japanese people share a sense of camaraderie and togetherness from all being the same ethnicity, think again. Because Japan is homogeneous, ethnicity just isn't that salient to most Japanese people - when a Japanese person meets another Japanese person, they don't think "Japanese person," they just think "person". Ethnic identity isn't on their minds.

Because of this, ethnic homogeneity creates very little solidarity on a day-to-day basis in Japan. Japanese people are generally wary of striking up conversations with strangers - more wary than Americans of different races are of striking up conversations with each other, I find. Services like Craigslist that facilitate informal transactions between private parties are rarely used - when I ask Japanese people why, they say it's because they can't trust strangers. Some Japanese people have told me that they feel far less shy talking to a foreigner than they do talking to another Japanese person.

I suspect that the feeling of ethnic solidarity that many alt-right whites feel for other alt-right whites is something unique to minorities. People who have always been part of the overwhelming majority just don't think about ethnicity enough for it to create bonds of solidarity - except in extreme situations, like a foreign war.

Surveys corroborate my hunch. Japan has always reported relatively low levels of interpersonal trust - until recently, considerably lower than in the U.S.:


Now keep in mind, that's trust, which is very different from trustworthiness. Japanese people, as a rule, are some of the most scrupulously honest people I've ever met. I've had old Japanese women run to catch up with me on the street, handing me a penny I dropped. The one time I dropped a substantial amount of cash on the ground, it was a yakuza bodyguard who notified me. Japanese people generally deserve high trust, but don't necessarily give it to each other. 

Urban Japan also seems to me to have little tradition of "neighborly exchange" (I'm sure this is different in small towns, but Japan is very highly urbanized). I see very few people saying hello to their neighbors. One person I knew who did this was considered eccentric.

So if you think a homogeneous society means that people will tip their hat to you on the street and be your friend just because you're the same race as them, think again.

However, Japanese culture also has quite a lot of unwritten rules, which almost everyone follows. Some of these are speech rules - the famous Japanese "politeness". Some are rules about work - the famous Japanese "corporate culture". Some are rules about service in restaurants and shops. There are many others. 

These rules - which people sometimes mistakenly label "conformity" - would be harder to turn into universal norms in the diverse United States. Foreigners, or people from other parts of the country, might just not know the rules. And people from certain ethnic backgrounds might resent being pressured to follow those rules by people of other ethnic backgrounds, and so might intentionally disobey. The less other people follow a social rule, the less incentive there is for me to follow it.

So Japanese homogeneity seems to produce a society where everyone's minor, day-to-day interactions are a little more predictable.

How about politics? Japan has long been dominated by a single political party (the LDP), and politics is traditionally conducted via factions within that ruling party. There's little question in my mind that homogeneity is one of the causes of one-party dominance - there's no ethnic minority to form the core of an opposition party. 

So how does that work out? Japanese politics is famously dysfunctional - the debt is out of control, patronage politics is rife, and there's usually a dearth of leadership. This was as true before World War 2 as it is today - Japan in the 30s was afflicted with frequent coup attempts and plenty of extremism, and essentially bumbled its way into multiple disastrous wars. Nowadays, Japanese political dysfunctionality is more likely to manifest itself as wasteful spending and obstruction of needed economic reforms.

However, it's worth noting that Japan has not experienced a "populist backlash" like other countries. Shinzo Abe is a true nationalist leader, and a responsible one. He was quick to quell outbursts of racism against those minorities that do exist in Japan, and in general has a pretty progressive agenda. And overall, Japanese people are (so far) pretty happy with Abe. He's worlds away from a Trump or a Le Pen or an Erdogan or a Chavez. So it's possible that homogeneity exerts a stabilizing effect on Japanese politics, insulating it from periodic outbreaks of madness, while making it less responsive in normal times due to the lack of a credible opposition.

As for crime, everyone knows that Japan is an extraordinarily safe country. It's hard for people who've never lived there to wrap their heads around how safe it is - teenage girls walk the streets of major cities alone at night in schoolgirl skirts and fear absolutely nothing. Is Japan so nonviolent because of its homogeneity? It's hard to say. In America, immigration - which is usually nonwhite immigration - tends to decrease crime. The ultra-diverse New York City and Los Angeles are some of the lowest-crime cities in America. Also, Japan does have a few very diverse neighborhoods, and these are also quite safe. So my instinct is to say that Japan's secret safety sauce is something else. But I don't really know.

So overall, if I were to draw conclusions from my experience in Japan, I'd say that homogeneity has its advantages and disadvantages, but ultimately isn't clearly better or worse. 

(But as an aside, if I were making policy, I'd recommend that Japan not take in mass immigration. I don't know whether their society could handle it, but I say, no need to mess with a good thing. But that's also why I recommend that America and Canada keep taking in lots of immigrants - we've got a different kind of good thing going. Anyway, that's my instinct.)


How racial is homogeneity?

But here's one coda, which leads into my next point. Are Japanese people all the same race? Maybe not. Japan was formed from the confluence of two groups, the Jomon (unusually densely populated hunter-gatherers) and the Yayoi (rice farmers). This genetic mixing is still very apparent in the genetic data. And perhaps as a result of this, you see a reasonably large diversity of features in Japanese people. For example, here are two Japanese guys:


Are those two guys the same race? Technically, yes. In America they'd both be "Asian", in Asia they'd both be "Japanese". Neither American culture nor Japanese culture recognizes any ethnic difference between these two men. And sure, they both have straight black hair, and their skin tones aren't that different. But the pretty big difference in physical appearance between those two guys - and between many people in Japan - makes me wonder whether our definitions of race aren't a little...elastic.


What if homogeneity is a choice?

In lefty circles, it's common to hear people say that "race is a social construct." What could that possibly mean? Obviously, physical differences are real. And obviously, those differences are going to be clustered, because for most of human history - and even now, really - there was only limited population mixing across areas. A clustering algorithm will pick out clusters of traits, and you can call those "races" if you want.
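
As a toy illustration of that last point - entirely synthetic "trait" data, two overlapping blobs of points, nothing genetic about it - a standard clustering algorithm like k-means will happily carve the same continuous cloud into 2, 5, or 20 "races", depending on nothing but the number of clusters you ask it for:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Synthetic "trait" data: one broad, continuous cloud made of two heavily
# overlapping blobs.
traits = np.concatenate([
    rng.normal(loc=[0.0, 0.0], scale=1.0, size=(500, 2)),
    rng.normal(loc=[1.5, 1.0], scale=1.0, size=(500, 2)),
])

# The same data, partitioned into however many "races" we ask for.
for k in (2, 5, 20):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(traits)
    print(f"k={k}: cluster sizes = {np.bincount(labels).tolist()}")
```

The algorithm always returns a partition; it has no opinion about whether that partition corresponds to anything socially meaningful.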

But are the "races" we recognize the same that would be picked out by a clustering algorithm? Sometimes, sure. But not always. The two pictures above demonstrate that even in a supposedly super-homogeneous place like Japan, genetic differences exist that culture and society just don't recognize as representing different ethnicities.

Another important example is "Han Chinese". When you look at the genetics, Han Chinese people are actually pretty diverse. Another is "Turkish". Here are two Turkish actors I just found by Googling:



Wow. Compared to these guys, the two Japanese guys above look like twins. Obviously, these two men have ancestors from very different geographic locations, and yet somehow they're both Turks. Just like some British people have red hair and some have black, and just like some Japanese people have "sauce faces" and some have "soy faces", some Turkish people have dark skin and some have light. A difference in appearance need not translate to a difference in race, in the real world.

But the most interesting example might be "white." In America, we have a race called "white" that Europe just doesn't seem to have. In Europe, anecdotally, ethnicity is defined by language, and perhaps also by religion. While skin color differences are recognized, European ethnic definitions are usually much finer. In America, though, they're all just "white." 

In fact, who's included in "white" seems to change quite a lot over time. In 1751, Benjamin Franklin was arguing against North European immigration on the grounds that Swedes, French people, Russians, and most Germans weren't "white":
Which leads me to add one Remark: That the Number of purely white People in the World is proportionably very small. All Africa is black or tawny. Asia chiefly tawny. America (exclusive of the new Comers) wholly so. And in Europe, the Spaniards, Italians, French, Russians and Swedes, are generally of what we call a swarthy Complexion; as are the Germans also, the Saxons only excepted, who with the English, make the principal Body of White People on the Face of the Earth.
What a difference two and a half centuries make, eh? And the expanding definition of whiteness doesn't seem confined to the distant past, either. Twentieth-century immigrant groups like Italians, Jews, and Poles were initially not considered "white" (except by the legal system), but rather "white ethnics". Now, no one in America questions that Italians are white, and were in fact white all along, from the very start. And the only people who question whether Ashkenazic Jews are white are a few screeching Nazis on Twitter (who may or may not reside in the U.S.).

In fact, this may already be happening with Hispanics. More and more Hispanics are declaring themselves white.

"Black" and "Asian" are other examples. In America, "black" people are all assumed to be part of one big race, as are "Asian" people. But try telling Hutus and Tutsis in Africa that they're both part of the same ethnically homogeneous group. Or try going to a bar in Korea and telling some guys that they're the same race as Japanese people (My advice: Be ready to duck). Ethnic differences that Americans don't even recognize the existence of are the basis of genocide in other parts of the world.

"White" too. Hitler's plan for the Soviet Union involved genocide of Slavs on a scale so epic that it makes it clear that the Holocaust was just a dress rehearsal:


Now that's some #whitegenocide, right there. Even though Germany lost, they made considerable headway toward making this a reality, slaughtering over 20 million Soviet citizens.

So you have blue-eyed Turks thinking they're the same race as black-haired Turks. You have pale Americans and swarthy Americans both calling themselves "white". And then you have Germans launching an all-out apocalyptic war to exterminate a group of people that they probably couldn't even tell from themselves if they all had the same clothes and haircuts. 

(Random anecdote: One time, in Germany, a German woman came up to me and started speaking rapid German. She was astonished to find that I was American, and said "But you look so German!")

OK, but suppose you don't buy all this stuff about the social definition of race. That's hippie-dippy bullshit, right? Genetic differences are real, end of story. OK, but even then you must admit the power of intermarriage.

Intermarriage was probably essential for the creation of the white race here in America. This is from a recent National Academy of Sciences report titled "The Integration of Immigrants into American Society":
Historically, intermarriage between racial- and ethnic minority immigrants and native-born whites has been considered the ultimate proof of integration for the former and as a sign of “assimilation” (Gordon, 1964; Alba and Nee, 2003). When the rate of interethnoracial or interfaith marriage is high (e.g., between Irish Americans and non-Irish European Americans or between Protestants and Catholics), as happened by the late 20th century for the descendants of the last great immigration wave, the significance of group differences generally wanes (Alba and Nee, 2003). Intermarriage stirs the ethnic melting pot and blurs the color lines.
When tons of people have Irish, German, and English ancestors, it's just very hard to keep those three ethnic categories separate in society. The same thing happened to Italians and Jews after World War 2. In the early 1960s, the outmarriage rate among Italian Americans was over 40 percent. Jews took a little longer, but got there eventually - the Jewish outmarriage rate is now 58 percent, and among the non-Orthodox it's 71 percent. 

(In case you were wondering, somewhere around 33% of native-born Hispanic and Asian Americans currently marry non-Hispanic whites.)
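
A bit of back-of-the-envelope arithmetic - a deliberately crude sketch, assuming a constant outmarriage rate per generation and counting every child of a mixed marriage as "mixed" - shows how fast rates like these dissolve a group boundary:

```python
# Crude sketch: if a fraction `rate` of each generation marries outside the
# group, and children of mixed marriages count as mixed, then the share of
# descendants with exclusively in-group ancestry shrinks geometrically.
def unmixed_share(rate, generations):
    return (1.0 - rate) ** generations

for rate in (0.40, 0.58):
    shares = ", ".join(f"{unmixed_share(rate, g):.0%}" for g in (1, 2, 3))
    print(f"outmarriage rate {rate:.0%}: unmixed share after 1, 2, 3 generations = {shares}")
```

Real marriage markets are obviously messier than that, but the compounding is the point: a boundary that leaks at 40 to 60 percent per generation doesn't stay a boundary for long.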

Whether you believe race is fundamentally about biology or sociology, intermarriage erases racial boundary lines. It's the final proof that ethnic homogeneity is not fixed, but changes depending on what people do.


An alternate theory: Trust causes homogeneity

Once you realize that homogeneity can be produced, through redefinition and through intermarriage, an alternate theory presents itself for why there might be a correlation between homogeneity and trust: Places with high trust become more homogeneous over time. 

This could happen genetically. When people associate freely and don't have intergroup suspicions and hatreds, they probably tend to hook up and get married with each other a lot more. Over time, the prevalence of trust leads to a genetically homogeneous group.

This could also happen socially. When people of disparate groups are bound together for a common purpose - fighting a war against a neighboring country, for example - the increased feeling of solidarity and commonality might cause them to start to consider themselves as one single race. 

So what produces trust? Perhaps another big, nebulous thing: institutions. Research shows that when organizations like the military, colleges, and public schools put people in close contact and make them cooperate, they start to trust people of other ethnic groups more. For example, here's the abstract of a 2006 American Economic Review paper called "Empathy or Antipathy? The Impact of Diversity":
Mixing across racial and ethnic lines could spur understanding or inflame tensions between groups. We find that white students at a large state university randomly assigned African American roommates in their first year were more likely to endorse affirmative action and view a diverse student body as essential for a high-quality education. They were also more likely to say they have more personal contact with, and interact more comfortably with, members of minority groups. Although sample sizes are too small to provide definitive evidence, these results suggest students become more empathetic with the social groups to which their roommates belong.
And here's the abstract from a very recent paper called "Trust, Ethnic Diversity, and Personal Contact: Experimental Field Evidence":
We combine a lab and a field experiment in the Norwegian Armed Forces to study how close personal contact with minorities affect in-group and outgroup trust. We randomly assign majority soldiers to rooms with or without ethnic minorities and use an incentivized trust game to measure trust. First, we show that close personal contact with minorities increases trust. Second, we replicate the result that individuals coming from areas with a high share of immigrants trust minorities less. Finally, the negative relationship between the share of minorities and out-group trust is reversed for soldiers who are randomly assigned to interact closely with minority soldiers. Hence, our study shows that social integration involving personal contact can reduce negative effects of ethnic diversity on trust.
Crucially, unlike most of the papers about diversity and trust cited above, these studies are randomized experiments.

Because they're randomized experiments, they're inevitably small-scale. These are moderate, short-run effects - to really know whether institutions like schools and the military can erase racial boundaries over many decades is beyond the scope of controlled experimentation. So these papers are really just suggestive.

But the notion seems to fit with American history. The Civil War seemed to put an end to the eruption of anti-Catholic sentiment, allowing Irish and South German Americans to integrate both socially and genetically into the emerging white race. And after World War 2, the outmarriage of Italian, Jewish, and Polish Americans accelerated. In both cases, the experience of being part of a nation at arms, cooperating side by side in a desperate, titanic struggle, probably erased a lot of the suspicions, prejudices, etc. that had persisted before the wars.

Anyway, this alternate theory can potentially explain the correlation between trust and homogeneity - places with institutions that create high trust levels tend to become more homogeneous over time. 


An alternate theory: "War + Proximity = Diversity"

But what about all those wars? Most of the time, when there's a really big war, there's at least some modest ethnic difference between the combatants - British vs. French, German vs. Russian, Hutu vs. Tutsi, Japanese vs. Korean. If small differences like those could cause such incredible bloodshed, think about what calamities could be caused by the difference between groups as distinct as Africans and Europeans!

In fact, I think the historical record gives us a clue as to why this idea is wrong. The bloodiest wars in history are mostly either civil wars in China, or interstate wars in Europe or East Asia. This was true even when Europe and Japan had global reach. They chose to kill people who looked a lot like them, rather than people who looked very different. Genocides between extremely distinct groups - for example, the Belgian genocide in the Congo - are the exception, not the rule. And plenty of mass killings happen among people who don't recognize any ethnic differences between the sides at all - the Khmer Rouge, Mao Zedong, the Spanish Civil War, etc.

So we have big genetic differences not even being recognized in some parts of the world, and tiny, possibly undetectable genetic differences being the basis for genocide in other parts of the world. I'd say the thesis that "Diversity + Proximity = War" is, at the very least, suspiciously incomplete.

A better general theory, I think, is that most competition happens between groups of people that are pretty similar. Similar people have similar interests and desires, which naturally leads them to compete. But when people fight en masse, they need ways to organize themselves in order to motivate soldiers to kill others who look and act like them. Thus, they exaggerate any small differences they can find. "You're German, superior to those inferior Slavs; exterminate them!" Etc.

Under this theory, the "#whitegenocide" that some alt-right people fear - a term they use for race mixing - is actually the exact opposite of real genocide. Under this theory, race mixing happens when high social trust causes group differences to stop mattering, while genocide happens when low social trust causes previously insignificant group differences to start mattering.

To sum up, instead of "Diversity + Proximity = War", we might theorize that "War + Proximity = Diversity" - wars give people a reason to emphasize and magnify small differences. 

It's why you don't often see humans fighting emus.


A compromise theory

Given the evidence on both sides, and the plausibility of both the pro-homogeneity and the pro-diversity theories, it seems at least somewhat likely to me that the real world features a combination of the two. Here's how the compromise theory goes: At first, when an influx of new people comes in, there's a natural reaction of distrust, and existing communities get fractured. However, as time goes on, the previous inhabitants and the newcomers get used to each other. This process is accelerated by integrating institutions like public schools, colleges, and the military, and is complete once intermarriage is widespread. However, social conflict, especially political conflict, can keep this integration from happening, causing groups not to mix and people to continue to emphasize and maintain their differences. 

So the compromise theory says: In the short run, increased diversity causes decreased trust; in the long run, high trust causes increased homogeneity.

Or, as I once put it on Twitter: "One strange-looking person in your neighborhood is a guest. 100 are an invasion. 1000 are just the neighbors."

If this theory is right, America's success depends on having institutions strong enough to integrate Asians and Hispanics - the two most recently arrived big groups - with the existing groups of whites and blacks. In other words, this theory says that homogeneity isn't the means, it's the goal. 

Who knows; one day even white and black Americans might consider themselves part of the same ethnic group.


The dream of a white nation

But what about the people who don't want that? What about the alt-right folks and fellow-travelers who have no intention of waiting around for America's various races to all decide they're on the same team? Many want to take the shortcut to a homogeneous society - they want to live in a place where only white people are allowed. They want the dream of a half-remembered, half-imagined 1950s Southern California - the clean streets, the nice lawns, the dependable white neighbors who tip their hat and say hi to you as they stroll down the lane. And dammit, they want it now.

Well, the short answer is: I don't know how they're going to get it. It's not going to be possible for them to reimplement racial segregation, or kick all the Asians and Hispanics out of the country. Any serious, large-scale attempt to do that would mean civil war and the collapse of America, which I guarantee would not lead to a nice pleasant racially homogeneous peaceful life for anyone anytime soon.

And what are the other options for creating Whitopia? Secede? Not gonna work. You can go to small towns and gated communities, but the jobs won't follow you, and by the law of the land, any nonwhite person who wants to can buy the house next to you. So what other options are there? Move to Argentina, I guess. Or maybe New Zealand.

It's this paucity of options, I think, that has so many alt-right people so freaked out. For people who want a white homogeneous society, there's pretty much just nowhere to go. Until recently there was Europe, but with the rise of substantial nonwhite minorities there, and with most European leaders still committed to allowing large-scale nonwhite immigration, that avenue to Whitopia - or Kekistan, as it were - seems closed down. To those who dream of white homogeneity, it must seem like they're being hounded to the ends of the earth, denied any place to call their home, told everywhere by their leaders to integrate with the nonwhite people next door. No wonder they're going crazy on Twitter.

I wish it were different. I wish there were some island nation where alt-right folks could go, and establish their all-white nation-state. It doesn't seem likely to happen, but if it could, I'd say: More power to you.

But the ironic thing is, suppose they did get their Kekistan. Suppose New Zealand decided to become an all-white country (like it did in 1920), and twenty million alt-right types from around the world moved there (giving it about a quarter the population density of Japan). I think it just wouldn't work.

I think people would move there, and find that homogeneity doesn't automatically produce trust and goodwill and social peace. They would find that their population was a highly selected set - it would be made up of people who couldn't get along with the people in their homelands. And they would find that the real thing keeping most of them from getting along with their neighbors wasn't ethnic diversity - it was their own personalities. 

Eventually, social strife would return. Neighbors would feud over land and resources and power and community status. Gunfights would erupt. Killdozers would be unleashed. The government would lurch from crisis to crisis. Protectionist economic policies would be tried and would fail. The economy would languish. Some people would emigrate, back to the hellscapes of diversity. 

And those who remained would cling to the theory that "Diversity + Proximity = War". No one likes to give up their cherished social theories, especially if it's the theory that the country was founded on. Just as with Hutus and Tutsis, the inhabitants of Kekistan would "discover" ethnic differences that had been there all along. Suddenly they wouldn't be just white people anymore, but Russian-Kekistanis, Italian-Kekistanis, Hungarian-Kekistanis. Strife and distrust would return, and the new country would undergo decades, if not centuries, of brutal upheaval, fragmentation, clan warfare, unstable military rule, competing aristocracies, atrocities, and poverty.

I didn't just make that prediction up, by the way. That's pretty much just the history of Japan.

So although there's certainly a case to be made for homogeneity, I'd say the case is a lot weaker and more uncertain than its proponents believe. And more importantly, there's no path for how to get there - at least, not for a country like America. Except for a few small towns scattered throughout the country, the dream of an all-white utopia is likely to remain just that.

Friday, April 14, 2017

Why the 101 model doesn't work for labor markets


A lot of people have trouble wrapping their heads around the idea that the basic "Econ 101" model - the undifferentiated, single-market supply-and-demand model - doesn't work for labor markets. To some people involved in debates over labor policy, the theory is almost axiomatic - the labor market must be describable in terms of a "labor supply curve" and a "labor demand curve". If you tell them it can't, it just sort of breaks their brain. How could there not be a labor demand curve? How could there not be a relationship between the price of something and how much of it people want to buy?

Answer: If you can't observe it, you might as well treat it as if it doesn't exist.

People forget this, but demand curves aren't actually directly observable. They're hypotheticals - "If the price were X, how much would you buy?" You can give people a survey, but the only way to really know how much people would buy is to actually move the price to X. And the only way to do that is to shift the supply curve. But how do you know what the supply curve is? The only way is to shift the demand curve!

This is called an identification problem. Unless you can observe something that's clearly a shock to only one of the curves but not the other, you can't know what the curves look like. (Paul Romer explains this in section 4.1 of his essay "The Trouble With Macroeconomics".)
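
Here's a minimal simulation of the identification problem - made-up linear curves and shocks, purely illustrative. When both curves are shifting, the equilibrium points you actually observe trace out neither curve, and a regression of quantity on price recovers neither the demand slope nor the supply slope:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5_000  # markets (or periods)

# True (unobservable) structural curves, each hit by its own shocks:
#   demand: q = 10 - 1.0*p + d_shock
#   supply: q =  2 + 1.0*p + s_shock
d_shock = rng.normal(scale=1.0, size=n)
s_shock = rng.normal(scale=1.0, size=n)

# Observed equilibrium price and quantity in each market.
p = (8.0 + d_shock - s_shock) / 2.0
q = 2.0 + p + s_shock

# Regress observed quantity on observed price.
slope, intercept = np.polyfit(p, q, deg=1)
print(f"regression slope: {slope:.2f}  (true demand slope: -1.0, true supply slope: +1.0)")
```

With equally noisy shocks to both curves, the estimated slope comes out near zero - neither curve. Only a shock you know moves one curve and not the other (an instrument) lets you trace out either one.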

And with labor markets, it's very hard to find a shock that only affects one of the "curves". The reason is that almost everything in the economy gets produced with labor. If you find a whole bunch of new workers, they're also a whole bunch of new customers, and the stuff they buy requires more workers to produce. If you raise the minimum wage, the increased income to those with jobs will also boost labor demand indirectly (somehow, activist and businessman Nick Hanauer figured this out when a whole lot of econ-trained think-tankers missed it!).

Labor is a crucial input in so many markets that it really needs to be dealt with in general equilibrium - in other words, by analyzing all markets at once - rather than by treating it as a single market in isolation. That makes the basic Econ 101 partial-equilibrium model pretty useless for analyzing labor.

"But," you may say, "can't we make some weaker assumptions that are pretty reasonable?" Sure. It makes sense that since it takes some time for new businesses to be created, a surge of unskilled immigration should represent a bigger shock to labor supply than to labor demand in the very short run. And it makes sense that a minimum wage hike wouldn't raise labor demand enough to compensate for the wedge created by the price floor.

With these weaker assumptions, you can get a general sense of the supply and demand curves. Problem: The results then contradict each other. Empirical results on sudden unskilled immigration surges indicate a very high elasticity of labor demand, while empirical results on minimum wage hikes indicate a very low elasticity of labor demand. Those can't both be true at the same time.

So if you accept these plausible, weak identifying assumptions, it still doesn't make sense to think about labor markets as described by an S curve and a D curve.

Of course, you could come up with some weird, stinky, implausible identifying assumptions that could reconcile these empirical facts (and the various other things we know about labor markets). With baroque enough assumptions, you can always salvage any theory, as Romer points out (and as Lakatos pointed out). But at some point it just starts to seem silly.

In fact, there are a number of other reasons why the Econ 101 theory isn't a good fit for labor markets:

1. Supply-and-demand graphs are for one single commodity; labor is highly heterogeneous.

2. Supply-and-demand graphs are static models; because of labor laws and implicit contracts, labor markets involve lots of forward-looking behavior.

3. Supply-and-demand graphs are frictionless; labor markets obviously involve large search frictions, for a number of reasons.

If Econ 101 supply-and-demand models worked for every market, the vast majority of the modern economics profession would be totally useless. Claiming that the Econ 101 model must always be a good model basically says that most of econ is barking up the wrong tree, and that it's just all in Marshall. Fortunately, economists tend to be a smart, scientifically-minded bunch, and so they realize that general equilibrium effects, heterogeneity, forward-looking behavior, search frictions, etc. exist, and often are essential to understanding markets.

The Econ 101 supply-and-demand model is just not a good description for the labor market. The theoretical construct known as "the labor demand curve" is ontologically suspect, i.e. it is a poor modeling choice. If we adopt some sort of positivist or empiricist philosophy - "if I can't observe it, it might as well not exist" - then we might as well say that "the labor demand curve" doesn't exist. It's not an actual thing.

Thursday, April 13, 2017

Ricardo Reis defends macro


I really like this defense of macroeconomics by Ricardo Reis. He makes it clear that he's sort of playing devil's advocate here:
While preparing for this article, I read many of the recent essays on macroeconomics and its future. I agree with much of what is in them, and benefit from having other people reflect about economists and the progress in the field. But to join a debate on what is wrong with economics by adding what is wronger with economics is not terribly useful. In turn, it would have been easy to share my thoughts on how macroeconomic research should change, which is, unsurprisingly, in the direction of my own research. I could have insisted that macroeconomics has over-relied on rational expectations even though there are at least a couple of well developed, tractable, and disciplined alternatives. I could have pleaded for research on fiscal policy to move away from the over-study of what was the spending of the past (purchases) and to focus instead on the spending that actually dominates the government budget today (transfers). Going more methodological, I could have elaborated on my decade-long frustration dealing with editors and journals that insist that one needs a model to look at data, which is only true in a redundant and meaningless way and leads to the dismissal of too many interesting statistics while wasting time on irrelevant theories. However, while easy, this would not lead to a proper debate
Reis goes on to defend academic macro from some of the main recent criticisms, including:

  • Macro relies on representative agents
  • Macro ignores inequality
  • Macro ignores finance
  • Macro ignores data and focuses mainly on theory

He gives a sampling of 8 job market papers by recent, highly successful candidates, and a sampling of recent articles in the Journal of Monetary Economics. This actually seems like a pretty stringent test of the criticisms to me - job market papers are probably weighted toward theory, for signaling purposes, while the JME has a reputation as a very (small-c) conservative macro journal.

But as Reis shows, modern macro papers generally don't fit the caricature described above. There's lots of heterogeneity in the models, a fair amount of attention to inequality and distributional concerns, plenty of finance, and lots and lots of data.

Reis is right; a lot of these criticisms are now out of date. That doesn't mean they were never right, though. There was a time when macro models mostly did use representative agents, when financial sectors were rarely modeled, and when calibration served as the only empirical test of many models. The point is not that the critics were full of it, but that macroeconomists were aware of the problems and moved their field in the direction it needed to go. Macro is a dynamic field, not a hidebound one.

And Reis himself shows that macroeconomists - at least, many of them - know of a number of areas where the field still needs to improve. He wants to move away from exclusive reliance on rational expectations, and stop forcing authors to stick in theory sections when they're not really needed.

This all sounds great to me. Personally, I'm particularly happy about the increase in "micro-focused macro". Arlene Wong's JMP, which Reis references, is a great example of this. Very cool stuff. Basically, finding out more micro facts about the effects of business cycles will help guide new theories and (ideally) discipline old ones.

But one problem still nags at me, which Reis doesn't really address. Why didn't macro address some of these problems earlier - i.e., before the crisis? For example, why was finance so ignored? Sure, there were some macro models out there that included finance - even a few prominent ones - but most researchers modeling the business cycle didn't feel a need to put financial sectors in their models. Another example is the zero lower bound, and its importance for monetary policy. A few macroeconomists were definitely clued into this years before the crisis, but they seem to have been a less-influential minority, mostly confined to international macro. In the runup to the crisis, macro researchers were generally not sounding the alarm about the danger from financial shocks, and after the recession hit and rates went to zero, many leading macroeconomists still dismissed the idea of fiscal stimulus.

Fixing problems quickly is great, but it's also important to ask why the problems were there in the first place.

One possible answer is sociological - as Paul Romer tells it, it was largely the fault of Robert Lucas and Thomas Sargent for bullying the profession into adopting bad models (or the fault of their early critics like Solow for bullying them into becoming bullies).

I don't know how true that story is. But I do think there's another potential explanation that's much more about methodology and less about personalities. One thing I still notice about macro, including the papers Reis cites, is the continued proliferation of models. Almost every macro paper has a theory section. Because it takes more than one empirical paper to properly test a theory, this means that theories are being created in macro at a far greater rate than they can be tested.

That seems like a problem to me. If you have an infinite collection of models sitting on the shelves, how does theory inform policy? If policy advisers have an endless list of models to choose from, how do they pick which one to use? It seems like a lot of the time it'll come down to personal preference, intuition, or even ideology. A psychologist once joked that "theories are like toothbrushes...everyone has one, and nobody wants to use anyone else's." A lot of times macro seems like that. Paul Pfleiderer calls this the "chameleon" problem.

It seems to me that if you want to make a field truly empirical, you don't just need to look at data - you need to use data to toss out models, and model elements like the Euler equation. Reis' suggestion that journal editors stop forcing young authors to "waste time on irrelevant theories" seems like one very good way to reduce the problem of model proliferation. But I also think macro people in general could stand to be more proactive about using new data to critically reexamine canonical assumptions (and a few do seem to be doing this, so I'm encouraged). That seems like it'll raise the chances that the macro consensus gets the next crisis right before it happens, rather than after.

Sunday, April 09, 2017

Can rationalist communities still change the world?


In my last post, I recounted some historical examples of times when (broadly defined) rationalist communities - groups of smart generalists debating and trying to figure things out - changed the world. A number of people noted that my examples are all from the fairly distant past, and have asked whether similar changes are possible today. 

Well first of all, I think there's a selection bias at work in our assessment of who "changed the world". The Royal Society looks world-changing now, but in its day it probably just looked like a bunch of eccentric tinkerers and nerds. It took centuries of progress, based on the foundations discovered in the 17th century, for those contributions to be properly recognized as world-shaking. 

Second, I think whether groups are able to make big changes, especially in the social sciences, depends in large part on external events - i.e., whether there are big political changes happening at the time, for other reasons. The Meirokusha came along during a time when Japan was undergoing rapid opening and industrialization, and the Scottish Enlightenment came just before the Industrial Revolution, a wave of revolutions in Europe, and the formation of the U.S. and the British Empire. Right now, the world is in a relative period of stability (knock on wood) - it's hard to find recent groups that changed the world, because the political world just hasn't changed that much.

But given these caveats, I think it's still clear that a relatively small community of smart people can change the world. 

One example would be the group of physicists, mathematicians, and engineers who came out of Europe in the early 20th century - Einstein, von Neumann, Bohr, Fermi, Schrodinger, and all the rest. These folks did a lot of groundbreaking physics and math, but they also invented refrigerators and nukes, made major advances in economics and computing, and probably did more to reinvent science than anyone since the Royal Society. The modern world is largely built around technologies and ideas that came out of that community. They weren't as cohesive a group as the Royal Society, but they did mostly know each other, and there was probably significant cross-pollination of ideas.

Another group that changed the world was the Chicago School of economics. In the mid to late 20th century, the Chicago economics department saw a remarkable confluence of talent - Gary Becker, Robert Lucas, Milton Friedman, Ronald Coase, Frank Knight, and many others. The ideas that came out of that community changed the face of modern society. Some of those changes are things that many people don't like, but the same is true of the Scottish Enlightenment, the Meirokusha, the Progressives, the Fabian Socialists - indeed, the same is true of any social science community. The Chicago School thinkers were specialized in social science, but within that broad category, their ideas were remarkably general, dealing with almost every important social, political, and economic issue of the day. 

Both of these communities could also reasonably be described as "rationalist". They were specialized, but not hyper-specialized - Einstein invented a refrigerator, Milton Friedman wrote political philosophy, etc. They had their ideological biases, at both the individual and the group level, but most of them were keen on figuring out how the world really worked (though there might have been exceptions to this). 

Of course, just because it happened recently doesn't mean it could happen again. The 20th century might have seen the last great scientific advances that the human race will ever make. The ideas of the Chicago School might be the last coherent, original, influential outpouring of social thought. But note that this was just as possible in 1870, or in 1570, as it is today. There were probably scientists and social thinkers in those days who believed that everything important had been discovered and created. Almost by definition, big paradigm shifts in either natural or social science come unexpectedly. 

A more worrying argument is that modern intellectual communities are too highly specialized. Some believe that science is just so hard now that progress can only be made by teams of super-specialized people digging deep into one domain. Others think that the incentive structures of modern academia, business, government, etc. encourage too much specialization.

I'm not sure whether science is too hard for generalist communities of polymaths to make big breakthroughs. The biggest scientific breakthroughs usually involve the establishment of completely new fields that few people were working on before - physics in the 1600s, electrical engineering in the 19th and early 20th centuries, computer science in the 20th century. We might have run out of new domains of knowledge, or we might not have - in fact, we'll never know whether we have or not.

I do worry about incentives. Modern academia is very siloed - there's a lot of pressure to publish in your own field's highly specialized journals. Scholars who venture into other areas, with new perspectives - think of Gary Becker trying to do sociology - are often resisted and even vilified as "imperialists" by the existing research community. Specialized academic communities can even become sort of like "mafias", resistant to new ideas. 

I doubt that this is quite as big a barrier as many fear. The smartest people in their fields - think Terence Tao in math, Feng Zhang in biology, or Ivan Werning or Markus Brunnermeier in economics - have zero problem getting published in their own field, and have plenty of time to work on and think about other things. I picked these examples because they're all obviously highly curious people who have explored a lot of different fields within their discipline. Similarly, I don't think professional "mafias" will be able to successfully resist new ideas if those new ideas work. If Terence Tao went out tomorrow and made a macroeconomic theory that could predict the effects of monetary policy really well, I doubt even the most concerted resistance by macroeconomists could stop it from being accepted.

Meanwhile, the internet has opened up tons of opportunities for collaboration and cross-pollination. The economics blogosphere is a good example of this. In many ways, it's one of the most successful rationalist communities around today. Econ bloggers are often accomplished academics - Paul Krugman, Paul Romer, Narayana Kocherlakota, Brad DeLong, and others have all held faculty posts at top schools. But the range of topics the blogosphere deals with is fantastically wide - everything from presidential politics to art and culture to the history of science. 

And though it's hard to tell, I'd say the blogosphere has had some real influence. The voluminous discussions of fiscal policy, as well as Krugman's forceful advocacy, have probably made austerity less popular across the developed world. Word of mouth tells me that relentless blogger criticism of macroeconomics has helped push younger academics toward more empirical and more micro-grounded research (and I think you can already see this in the literature). By publicizing the discoveries of academics' mistakes, such as with the Reinhart-Rogoff affair, econ blogs are also leading to a democratization of research evaluation and critique that might eventually challenge, or complement, the peer review system. And new ideas have come from the blogosphere - for example, Robin Hanson's use of signaling to explain social phenomena.

But the econ blogosphere has a problem - in order to have continued and expanded relevance, we need new people and we need more brain power. Much of the impetus for the efflorescence of blogging between 2009 and 2013 came from the Great Recession. The current crop of bloggers has had some spectacularly interesting exchanges - for example, Steve Williamson and Narayana Kocherlakota have a long-running monetary policy debate on Twitter that is more interesting than any other such debate I've ever seen. But for the blogosphere to become a rationalist community for the ages, we need more very smart people. We need polymathically inclined folks like Brunnermeier and Werning to start blogs, and to have exchanges of ideas on wide-ranging topics. If that were to happen, the blogosphere might eventually have an influence up there with the Chicago School. Currently, that is still a distant dream. 

Saturday, April 08, 2017

When rationalists remade the world


Defending the "rationalist" community from its most recent crop of assailants, Scott Alexander writes:
There have been past paradigms for which...criticisms [of rationality] are pretty fair. I think especially of the late-19th/early-20th century Progressive movement. Sidney and Beatrice Webb, Le Corbusier, George Bernard Shaw, Marx and the Soviets, the Behaviorists, and all the rest. Even the early days of our own movement on Overcoming Bias and Less Wrong had a lot of this. 
But notice [I posted] links...to book reviews, by me, of books studying those people and how they went wrong. So consider the possibility that the rationalist community has a plan somewhat more interesting than just “remain blissfully unaware of past failures and continue to repeat them again and again”... 
Look. I’m the last person who’s going to deny that the road we’re on is littered with the skulls of the people who tried to do this before us. But we’ve noticed the skulls. We’ve looked at the creepy skull pyramids and thought “huh, better try to do the opposite of what those guys did”.
This is good (though I think the Progressives did pretty darn well). But I think it's important to realize that rationalism's history is also full of startling, dramatic successes. It isn't all skulls. There are also...whatever positive metaphor-thing is the opposite of skulls. It's hard to know what changes the course of the human race, but there are cases where there's a good argument to be made that rationalist communities changed the course of the human race.

The most important example, of course, is the Royal Society in 17th and 18th century Britain, which included many of the first modern scientists and laid the foundation for most of modern science. Its members included Robert Hooke and Robert Boyle, and of course Isaac Newton. A basic high school mechanics course is just a rehash of what these people discovered.

The Royal Society's members were not specialists buried deep within highly focused academic departments, but generalists - "natural philosophers", in an age when "scientist" wasn't yet a job description. Newton spent years writing about theology and served as Master of the Royal Mint. Hooke investigated optics and gases, but also studied fossils and wrote about evolution. Etc. They were just a bunch of smart folks who liked to think about things. They weren't always friends - Newton and Hooke famously didn't get along - but they had a rationalist approach, powered by constant discussion and exchange of ideas.

And the Royal Society can be seen as part of a larger, longer-term European rationalist community. Johannes Kepler, Galileo Galilei, Rene Descartes, Francis Bacon, and Gottfried Leibniz are just a few other generalists who wrote letters and articles propounding some form of the new rationalist philosophy that swept over the European continent from the late 1500s through the 1700s.

A second example, fairly close in time and place to the first, was the Scottish Enlightenment. A relatively small group of Scottish people based in Edinburgh and Glasgow came up with many of the social and political philosophies upon which the modern world is based. These included Henry Home (Lord Kames), Adam Smith, and David Hume. They went over to each other's houses and had drunken debates all in Latin, and formed a club called the Select Society that eventually gave birth to the Edinburgh Review. All were intellectual generalists, not specialists, and their way of thinking bears the unmistakable mark of rationalism. (Sadly, it doesn't seem Lord Kames and Thomas Bayes had any significant contact, though they went to the same university at the same time.)

A third example is Meirokusha, a philosophical society in Meiji Japan. The most famous member of Meirokusha was Fukuzawa Yukichi, who founded Keio University and is now on the 10,000-yen bill (roughly the equivalent of the $100 bill, which is fitting, because Fukuzawa was a lot like Ben Franklin). The Meirokusha tasked themselves with figuring out what kind of society Japan should be, in the turbulent years following the country's emergence from international isolation. Some of them favored Confucianism, and others favored Western political and social philosophy, but all were well-versed in both, and most had lived overseas. It even had an American member, William Elliot Griffis.

Meirokusha tried publishing a magazine, but the authorities of the time shut it down. Even so, the society's members had a very large and lasting influence on the direction of Japan's development. Fukuzawa was known as a proponent of a strong educational system and the aggressive import of foreign ideas and technology; those ideas are clearly visible in the foundations of Japan's approach to modernization. He also advocated a moral philosophy of individualism and self-reliance. I know less about the other Meirokusha members; I should learn more, since people regard them as highly influential.

So there are three examples of historical communities that seem very rationalist, and which changed the world in (I would argue) mostly positive ways. They may not have tried to use Bayesian reasoning, but their approaches were all recognizably rationalist in nature. They were generalists, who collaborated mostly out of pure intellectual curiosity rather than for money. Their interactions were characterized by lively, occasionally acrimonious debate, but always with the goal of understanding the world better.

In other words, it's not all skulls out there. I don't know what mark, if any, the modern "rationalist" community will leave on the world, or whether it'll even prove to be the most important rationalist community out there today. But I do think the idea of such a community is a good one.

Friday, April 07, 2017

Intuitionism vs. empiricism


A few weeks ago, I disagreed with a Russ Roberts post about empirical economics. I wrote that if you don't rely on empirical evidence or formal theory, you're going to just end up relying on intuition gleaned from half-remembered simplistic theory and non-careful evidence:
[One] option [for making policy predictions] is to rely on casual intuition - not really theory, but a sort of general gestalt idea about how the world works. If we're of a free-market sort of persuasion, our casual intuition would tell us that minimum wage is government interference in the economy, and that this is bound to turn out badly. Russ seems to be advocating for this... 
As I see it, the fourth option is by far the worst of the bunch. Theories can be wrong, stylized facts can be illusions, and empirical studies can lack external validity. But where does casual intuition even come from? It comes from a mix of half-remembered theory, half-remembered stylized facts, received wisdom, personal anecdotal experience, and political ideology. In other words, it's a combination of A) low-quality, adulterated versions of the other approaches, and B) motivated reasoning.  
If we care about accurate predictions, motivated reasoning is our enemy. And why use low-quality, adulterated versions of theory and empirics when you can use the real things?
In a recent episode of EconTalk, Russ demonstrates this "intuitionist" approach, as applied to the question of the minimum wage. Russ is interviewing Andrew Gelman about problems with statistics in empirical research. Andrew explains things like p-hacking and the "garden of forking paths" (his colorful term for data-dependent testing decisions), which make reported confidence intervals narrower than they should be and thus lead to a bunch of false positives.
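
To make that concrete, here's a minimal simulation sketch - my own illustration, not anything from the interview - of how the garden of forking paths inflates false positives. Even though the true effect is exactly zero in every simulated dataset, letting the analyst report whichever of several plausible subgroup tests looks most striking pushes the rate of "significant" findings far above the nominal 5 percent:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def most_striking_pvalue(n=400, n_subgroups=4):
    """Simulate data with NO true treatment effect, then report the best
    p-value across the full sample and several arbitrary subgroups -
    i.e., let the analyst wander the garden of forking paths."""
    treat = rng.integers(0, 2, n)               # treatment indicator
    y = rng.normal(0.0, 1.0, n)                 # outcome, true effect = 0
    subgroup = rng.integers(0, n_subgroups, n)  # arbitrary subgroup labels

    pvals = [stats.ttest_ind(y[treat == 1], y[treat == 0]).pvalue]
    for g in range(n_subgroups):
        in_g = subgroup == g
        pvals.append(stats.ttest_ind(y[in_g & (treat == 1)],
                                     y[in_g & (treat == 0)]).pvalue)
    return min(pvals)  # the result that gets written up

sims = np.array([most_striking_pvalue() for _ in range(2000)])
print("share of 'significant' findings at p < 0.05:", (sims < 0.05).mean())
```

Picking the best of five correlated tests is a mild version of the problem; with more outcomes, controls, and sample restrictions to choose among, the inflation gets much worse.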

Russ uses Andrew's explanations as reason to be even more skeptical about empirical results that don't match his intuition. Because I can't find an official transcript, here is an unofficial transcript I just made of the relevant portion (Disclaimer: I don't know the standard rules for making transcripts of things; I edited out pauses and other minor stuff):
ROBERTS: But let's take the minimum wage. Does an increase in the minimum wage affect employment, job opportunities for low-skilled workers - a hugely important issue, it's a big, real question. And there's a lot of smart people on both sides of this issue who disagree, and who have empirical work to show that they're right, and you're wrong. And each side feels smug that its studies are the good studies. And I reject your claim, that I have to accept that it's true or not true. I mean, I'm not sure which...Where do I go, there? I don't know what to do! I mean, I do know what to do, which is, I'm going to rely on something other than the latest statistical analysis, because I know it's noisy, and full of problems, and probably has been p-hacked. I am going to rely on basic economic logic, the incentives that I've seen work over and over and over again, and at my level of empirical evidence that the minimum wage isn't good for low-income people is the fact that firms ship jobs overseas to save money, they put in automation to save money, and I assume that when you impose a minimum wage they're going to find ways to save money there too. So it's not a made-up religious view, I have evidence for it, but it's not statistical. So, what do I do there?
I think several things are noteworthy about Russ' monologue here.

First, Russ claims that minimum wage studies are "full of problems" and have "probably been p-hacked". As far as I can tell, he doesn't offer any evidence for this, nor does he say what those problems are. He doesn't seem to know, or even to have considered, how precise the reported estimates in any of these studies actually are - did the authors report p-values of 0.048, or of 0.0001?

As for p-hacking and data-dependent testing, the basic test of a minimum wage hike's effect on employment is pretty universal and is known and decided upon before the data comes in (including things like subsamples and controls). So while some analyses in any minimum wage study are vulnerable to p-hacking, the basic test of employment isn't really that vulnerable.
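
For concreteness, the headline test in this literature is usually some flavor of difference-in-differences: compare employment changes in places that raised their minimum wage to changes in places that didn't. Here is a hypothetical sketch of what that pre-specified regression looks like - the data, effect size, and variable names are all made up for illustration:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical two-period county panel (all numbers invented for illustration)
rng = np.random.default_rng(1)
n_counties = 200
treated = rng.integers(0, 2, n_counties)   # 1 = county raised its minimum wage

df = pd.DataFrame({
    "county": np.repeat(np.arange(n_counties), 2),
    "treated": np.repeat(treated, 2),
    "post": np.tile([0, 1], n_counties),    # before / after the hike
})
# Simulated log teen employment with a small assumed "true" effect of -0.01
df["log_teen_emp"] = (0.02 * df["post"]
                      - 0.01 * df["treated"] * df["post"]
                      + rng.normal(0, 0.05, len(df)))

# The pre-specified headline regression: a difference-in-differences
# estimate of the employment effect, with standard errors clustered by county
did = smf.ols("log_teen_emp ~ treated * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["county"]})
print(did.params["treated:post"], did.pvalues["treated:post"])
```

Because that specification is written down before anyone looks at the data, there isn't much room to wander the forking paths on the headline number - the analyses at risk are the secondary, exploratory ones.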

So Russ seems to have interpreted Andrew to mean that all empirical studies are p-hacked, and therefore all unreliable. Did Andrew really mean to say "Don't pay attention to data, because all hypothesis tests are irrevocably compromised by p-hacking and data-dependent testing, so just go with your intuition"? I doubt it.

Second, Russ' preferred method of analysis is exactly the "intuitionism" I described above. Russ states his intuition thus: because companies do lots of things to try to lower costs, the minimum wage must be bad for low-income people. But that intuition is not nearly as rich as even a very simple Econ 101 supply-and-demand theory. In the simple theory, if the elasticities of labor supply and demand are low, the minimum wage has only a very small negative effect on employment - consistent with the empirical consensus.
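
To put rough numbers on that Econ 101 point (the wage increase and elasticities below are illustrative guesses, not estimates from any study):

```python
# With a binding wage floor in the competitive model, employment is pinned
# down by labor demand, so the predicted job loss scales with the demand
# elasticity.

def pct_employment_change(pct_wage_increase, demand_elasticity):
    """Approximate % change in employment along a constant-elasticity
    labor demand curve when the binding wage rises by pct_wage_increase."""
    return -demand_elasticity * pct_wage_increase

for elasticity in (0.1, 0.3, 1.0):
    loss = -pct_employment_change(10, elasticity)
    print(f"demand elasticity {elasticity}: a 10% wage hike cuts "
          f"employment by roughly {loss:.0f}%")
```

With inelastic demand (elasticities of 0.1 to 0.3), a 10 percent hike costs only about 1 to 3 percent of affected jobs - small enough to be hard to distinguish from zero in noisy data.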

And if you model a firm, you find that there are a number of ways companies can save money in response to a minimum wage increase other than firing workers. They can raise prices. They can reduce markups. They can reduce the salaries of higher-paid workers. There are all kinds of margins for cost minimization not included in Russ' intuition, but which could explain the empirical consensus.

Also, a simple monopsony model shows that cost minimization can make a minimum wage raise employment rather than lower it. A monopsonistic company holds down costs by paying workers less than their marginal product and hiring an inefficiently small number of them. A minimum wage forces the firm to pay the higher wage no matter how few people it hires, which removes its incentive to restrict hiring - so in that simple model a wage floor actually increases efficiency and raises employment, which would also explain the empirical consensus.
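
Here's a minimal numerical version of that textbook monopsony story; all the parameter values are made up, purely for illustration:

```python
# Textbook monopsony: the firm faces an upward-sloping labor supply curve
# w(L) = a + b*L and a downward-sloping marginal revenue product
# MRP(L) = p - q*L. (Parameters are arbitrary illustrative values.)
a, b, p, q = 5.0, 0.1, 20.0, 0.1

# Monopsony outcome: hire where MRP equals the marginal cost of labor
# (a + 2*b*L), then pay the wage read off the supply curve.
L_monopsony = (p - a) / (q + 2 * b)
w_monopsony = a + b * L_monopsony

# Competitive benchmark: hire where MRP equals the wage (supply = demand).
L_competitive = (p - a) / (q + b)
w_competitive = a + b * L_competitive

# A wage floor between the monopsony and competitive wage: the firm now
# takes the wage as given and hires everyone willing to work at it.
w_min = 0.5 * (w_monopsony + w_competitive)
L_with_floor = min((w_min - a) / b,   # labor supplied at w_min
                   (p - w_min) / q)   # labor demanded at w_min

print(f"monopsony:   L = {L_monopsony:.1f}, w = {w_monopsony:.2f}")
print(f"with floor:  L = {L_with_floor:.1f}, w = {w_min:.2f}")
print(f"competitive: L = {L_competitive:.1f}, w = {w_competitive:.2f}")
```

With these numbers, employment rises from 50 under pure monopsony to 62.5 under the wage floor - the firm hires more workers, not fewer.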

But Russ' intuition doesn't include simple monopsony models, multiple margins of cost reduction, or inelastic supply and demand curves. The intuition is just one piece of a theory. That's why I think intuitionism, as a method for understanding the world, is strictly dominated by a combination of formal theory and empirical evidence.

Finally, by declaring empirical economics to be useless, Russ is condemning the majority of modern economics research. I'm not sure if Russ realizes this, but empirical, statistical research is most of what economists actually do nowadays:

[Chart: the share of theory papers in top economics journals over time]

Theory papers were down to less than 30 percent of top journal papers in 2011, and the trend has probably continued since then. By dismissing empirics, Russ is dismissing most of what's in the AER and the QJE. He's dismissing most of the work that economics professors are doing, day in and day out. He may not realize it, but he's claiming that the field has turned en masse toward bankrupt, useless endeavors.

That is a bold, aggressive claim. It's a much stronger indictment of the economics profession than anything ever written by Paul Krugman, or Paul Romer, or Brad DeLong, or any of those folks. I don't know if Russ realizes this.


Update

Russ responds in the comments. He notes that a slightly abridged transcript of the Gelman interview can be found at this link.

David Childers notes in the comments that correcting for publication bias generally results in smaller estimates of the disemployment effects of minimum wage.