Social scientists continue to launch randomized field experiments to learn about human behavior. In one well-known field experiment, Bertrand and Mullainathan mailed randomized resumes to potential employers in the hope of landing job interviews. Unknown to the firms that received them, the resumes described identical candidates; the only difference was that some carried distinctively African-American names while others carried White-sounding names. The authors documented statistically significant evidence that Emily and Greg are more likely to be called for an interview than Lakisha and Jamal (even though their resumes are identical!). This paper has over 1,600 Google Scholar citations. It is an example of how social scientists can learn about firms by observing how they respond to a randomized treatment.
In a recent example, Gary King and co-authors published a paper in Science in which they randomly posted statements on Chinese social media sites and then observed whether the state censors allowed the statements to stand or erased them. The authors document that the censors do not like mentions of "collective action".
While I applaud the authors for their work, to an economist this piece raises more questions than it answers.
Economists say that the privately optimal level of any action equates marginal benefit and marginal cost. What is the marginal cost to a Chinese censor of saying "no" and refusing to approve a post? The censor must read the post, but when in doubt he is highly likely to simply say "no". On what criteria are censors promoted? If bad choices get them fired, they have strong incentives to block everything.
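This asymmetric-incentive logic can be made concrete with a stylized expected-cost calculation (all penalty numbers below are hypothetical, chosen only to illustrate the argument, not taken from any study):

```python
# Stylized censor decision under asymmetric penalties (all numbers hypothetical).
# Approving a post that later causes trouble is very costly to the censor
# (e.g., being fired); blocking a harmless post costs almost nothing.

PENALTY_APPROVE_SENSITIVE = 100.0  # cost if an approved post turns out to be sensitive
COST_BLOCK_BENIGN = 1.0            # cost of wrongly blocking a harmless post

def censor_blocks(p_sensitive: float) -> bool:
    """Block whenever the expected cost of approving exceeds that of blocking."""
    expected_cost_approve = p_sensitive * PENALTY_APPROVE_SENSITIVE
    expected_cost_block = (1 - p_sensitive) * COST_BLOCK_BENIGN
    return expected_cost_approve > expected_cost_block

# The blocking threshold p* solves p*100 = (1 - p)*1, so p* = 1/101, about 1%:
threshold = COST_BLOCK_BENIGN / (COST_BLOCK_BENIGN + PENALTY_APPROVE_SENSITIVE)
print(round(threshold, 4))   # 0.0099
print(censor_blocks(0.02))   # True: even a 2% chance of trouble triggers a block
```

With penalties this lopsided, a censor blocks any post he thinks has even a one-in-a-hundred chance of being sensitive, which is exactly the "when in doubt, say no" behavior described above.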
The problem in this "market" is that there is no price mechanism. The creator of the social media content cannot convey the intensity of his preference to have his post published; at best, he can post it repeatedly. Nor can the censor signal the intensity of his dislike of the post. This is a market with quantities but no prices.
A follow-up study would collect information on who becomes a Chinese censor. How long do censors spend on each post? Are humans even reading them, or is a machine screening them? From the CCP's perspective, are machines better than humans at "filtering" posts?
Did Gary King and co-authors really believe that a random set of blog posts would be censored? Of course there will be a selection effect. What discussions and coordination are precluded in modern China if "collective action" can rarely be mentioned on the Internet? What is the counterfactual here for how this censorship slows down learning and the potential rise of democracy in modern China?