Rhiannon Jerch is finishing her Ph.D. at Cornell. I serve on her dissertation committee, and we have co-authored this JPUBE paper. This blog post discusses her job market paper.
She studies the long-run urban growth consequences of the Clean Water Act's mandate in the early 1970s that cities improve their water quality by treating their wastewater.
In the absence of this mandate, how would cities have set their budget expenditures? Millions of Americans live in small cities (cities with fewer than 150,000 people) that are not suburbs of bigger cities. The local government in such a city collects tax revenue, receives transfers from the federal and state governments, and uses these resources to provide basic services to its constituents. Basic economic logic posits that local leaders will allocate such expenditures optimally, so that the local "bang per buck" spent on each local public good is equated. These are the standard efficiency first-order conditions.
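To fix ideas, here is a minimal sketch of that condition in notation of my own (this is not from her paper): a city allocates a budget $B$ across local public goods $g = 1, \dots, G$, spending $e_g$ on good $g$ and reaping local benefits $V_g(e_g)$:

$$\max_{e_1, \dots, e_G} \; \sum_{g=1}^{G} V_g(e_g) \quad \text{subject to} \quad \sum_{g=1}^{G} e_g = B.$$

The first-order conditions equate the marginal benefit of the last dollar across all goods: $V_g'(e_g) = \lambda$ for every $g$, where $\lambda$ is the shadow value of the city's budget.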
This equilibrium would be a Pareto optimum if there were no cross-city externalities and if cities did not face lumpy adjustment costs. Rhiannon's starting point is that cities are not independent islands. She is interested in water pollution externalities. Given that rivers flow in one direction, some cities are upstream and some are downstream. If an upstream city j does not invest in water treatment, then much of the cost of this under-investment is borne by a downstream city m. Such cross-boundary spillovers provide an immediate rationale for federal intervention.
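In the same (invented) notation as above, the externality appears as a wedge. If upstream city $j$'s treatment spending $e_j$ also generates avoided damages $D_m(e_j)$ for downstream city $m$, the social optimum requires $V_j'(e_j) + D_m'(e_j) = \lambda$, while city $j$ acting alone sets only $V_j'(e_j) = \lambda$ and therefore under-invests in treatment.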
Rhiannon has to work hard to create her spatial panel data set, as she identifies which cities are upstream and downstream of one another. Coasian logic helps her create an instrumental variable for whether a city had already made wastewater treatment investments before the Clean Water Act.
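For readers curious about the mechanics, here is a minimal sketch of the upstream/downstream step, with hypothetical city names and river segments (my own illustration, not her actual code):

```python
# A sketch of identifying upstream/downstream city pairs from a directed
# river network. Edges point in the direction of flow; cities are nodes.
# All data here are hypothetical placeholders.
import networkx as nx

# Directed edges follow the flow of the river: (upstream, downstream).
river_segments = [
    ("city_A", "city_B"),
    ("city_B", "city_C"),
    ("city_D", "city_C"),
]

G = nx.DiGraph(river_segments)

# Every city reachable from j by following the flow is downstream of j.
downstream_of = {city: nx.descendants(G, city) for city in G.nodes}
upstream_of = {city: nx.ancestors(G, city) for city in G.nodes}

print(downstream_of["city_A"])  # {'city_B', 'city_C'}
print(upstream_of["city_C"])    # {'city_A', 'city_B', 'city_D'}
```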
Suppose that a major city is downstream from the polluter. Such a city will use its clout (because of the total damage it suffers under the status quo) to either compensate the upstream city or sue it, so that it takes costly actions to mitigate the water pollution. This means that the 1970s Clean Water Act was less likely to be binding for such an upstream city because it is connected to a major downstream city. The modern LATE literature is subtle about which "marginal city" is affected by an instrument or a surprise mandate, and Rhiannon's paper has this flavor.
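To make the identification idea concrete, here is a stylized two-stage least squares sketch with made-up variable names and simulated data (her actual specification surely differs):

```python
# A stylized 2SLS sketch of the identification idea, using the
# linearmodels package. Variables and data are hypothetical.
import numpy as np
import pandas as pd
from linearmodels.iv import IV2SLS

rng = np.random.default_rng(0)
n = 500
# Instrument: does the city have a major city downstream?
df = pd.DataFrame({"major_downstream": rng.integers(0, 2, n)})
# Endogenous regressor: pre-CWA wastewater treatment investment,
# pushed up by Coasian pressure from a major downstream city.
df["pre_cwa_treatment"] = (
    0.5 * df["major_downstream"] + rng.normal(size=n) > 0
).astype(int)
# Outcome: long-run city population growth (simulated placeholder).
df["pop_growth"] = 0.3 * df["pre_cwa_treatment"] + rng.normal(size=n)
df["const"] = 1.0

res = IV2SLS(
    dependent=df["pop_growth"],
    exog=df["const"],
    endog=df["pre_cwa_treatment"],
    instruments=df["major_downstream"],
).fit()
print(res.summary)
```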
A direct quote from her paper concerning the findings:
"In aggregate, this change in cities’
amenity-tax bundles had insignificant impacts on population growth or housing prices, suggesting
that the mandated infrastructure was at least valued at its marginal cost to local residents.
However, the aggregate effects mask important sources of heterogeneity in city responses. Per
capita compliance costs were 20% higher among smaller cities unable to exploit scale economies
in infrastructure realized by larger cities. Despite having higher compliance costs, I show that
smaller cities received less grant funding per resident and used those funds more efficiently relative
to larger cities. These results suggest that both efficiency and equity could have improved
under the CWA federal aid program if grant allocation followed city-specific abilities to benefit
from scale economies. Lastly, I show that the value of mandate compliance to local residents is
greater among cities with warmer summer climates, closer proximity to large waterbodies, and
greater exposure to upstream abatement."
Rhiannon and I have often spoken about the complementarity between public capital upgrades and private capital investment. Intuitively, if rich people are willing to pay more for a condo near the water when the water is clean and the air is not polluted, then this is an example of such complementarity. In my own research, I have explored this theme here.
Her core question of "who benefits and who loses from federal mandates?" is important. My prior had been that the affected cities would lose because this mandate would be a binding constraint forcing them to substitute away from activities that were privately beneficial. Her results suggest that one must be a bit more nuanced here. There are cases in which a federal mandate imposed on a locale may improve quality of life in the affected place (as well as in the downstream areas that have been suffering from the Pigouvian spillovers).
Rhiannon's work makes an original contribution to environmental, urban, and public finance economics. The field is moving forward!
-
The 2018 Nobel Laureate (Paul Romer) has published a great opinion piece in the WSJ today. While he doesn't offer a quantitative analysis of how much extra growth we would achieve, he does deliver three constructive suggestions for accelerating economic growth.
His proposals all focus on idea generation and on reducing the barriers for such ideas to flow across people. The "privatization" of knowledge slows down economic growth because certain permutations of these ideas are not discovered and acted upon as quickly in closed-loop systems. For those who know some mathematics, read Erzo Luttmer's papers to see how ideas and learning stimulate firm-level economic growth and thus macro growth. My old friend Dr. Luttmer is the leader in developing the new generation of economic growth models.
Dr. Romer's core ideas:
1. He supports government investment in scientific infrastructure that accelerates "building on the work of others." Python is an example of such open-source technology. I was a pinch surprised here that he didn't endorse a larger budget for the NSF and NIH.
2. Transparency --- Paul argues that the private sector needs incentives to innovate, but that when such innovations occur, firms must reveal enough details about their breakthroughs to allow "outsiders" to build on these insights. The Devil will be in the details here. Romer argues that learning will occur more quickly under these rules. Will such required "revelations" slow down the initial research?
Paul is also concerned about data monopolies. Firms such as Uber sit on a treasure trove of data. If you are interested in transport questions in cities, their data are much better than the Department of Transportation's National Household Travel Survey, but Uber's research incentives are solely focused on making $ for that company. This means that their data will be under-utilized and the research will focus on topics that boost Uber's profits. From the perspective of learning about transportation in cities (and the sharing economy's labor market), this is a shame, and it has economic growth consequences (I admit that this last link is a pinch murky).
3. Romer wants to preserve the firewalls between government agencies and politics. The Federal Reserve has been able to do its job because the Fed Chief is independent of politics. If the EPA and other agencies also enjoy such "independence," then the rules of the game will be such that high-quality individuals will choose to work for government, and higher-quality government will help us achieve higher economic growth. Again, I agree with this point, but it will be difficult to quantify.
Permit me to build on #2:
Suppose that, for a representative set of Americans, the IRS (the Chetty data), Amazon, Bank of America, E-Trade, Equifax, Google, Facebook, Zillow, Uber, Netflix, and Twitter created an integrated daily database with anonymous identifiers detailing all daily activity.
Such a data set would facilitate research on numerous fronts. Research on climate change adaptation would be greatly aided, as researchers could merge in temperature, pollution, and natural disaster risk data and actually test how different people respond to exogenous events.
Would we need national income accounts if we had such merged micro-data? In real time, for many people, we could track the evolution of key state variables related to credit, health, and well-being.
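As a toy illustration of how such a linkage could work, here is a sketch with entirely hypothetical tables, columns, and identifiers:

```python
# A toy sketch of linking provider records with a shared anonymous
# identifier and then merging in environmental exposure data. All
# tables, columns, and values here are hypothetical.
import hashlib
import pandas as pd

def anon_id(ssn: str, salt: str = "shared-secret-salt") -> str:
    """Hash a personal identifier so providers can link records
    without revealing the identity itself."""
    return hashlib.sha256((salt + ssn).encode()).hexdigest()[:12]

irs = pd.DataFrame({"ssn": ["111", "222"],
                    "zip": ["90210", "10001"],
                    "income": [52_000, 87_000]})
bank = pd.DataFrame({"ssn": ["111", "222"],
                     "credit_score": [690, 745]})
weather = pd.DataFrame({"zip": ["90210", "10001"],
                        "heat_days": [41, 12]})

# Replace the raw identifier with a salted hash in each provider table.
for table in (irs, bank):
    table["pid"] = table.pop("ssn").map(anon_id)

# Merge person-level records on the anonymous id, then merge in
# local temperature exposure by zip code.
merged = irs.merge(bank, on="pid").merge(weather, on="zip")
print(merged)
```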
-
The RePEc rankings are out for October 2018. In the name of full disclosure, here is mine. I see that my "strength of students" puts me in the 4th percentile. I need to improve on that category; I'm happier with my scores in the other categories. Here are my peers according to RePEc.
Similarly ranked authors (listed in random order):
- Michael Steven Weisbach
- Giovanni Dosi
- Varadarajan Chari
- Takatoshi Ito
- Glenn Paul Jenkins
- Richard H. Clarida
- Harvey Rosen
- Lucrezia Reichlin
- Edward E. Leamer
- Ivo Welch
- Thomas R. Palfrey
- Giancarlo Corsetti
- Richard Schmalensee
- Alan Manning
- David M. Cutler
- John C. Quiggin
- Fabio Canova
- Steven J. Davis
- Jong-Wha Lee
- Yuriy Gorodnichenko
One ranking I like is to go to the "Sandbox," switch to the arithmetic mean, and drop no categories. My neighborhood in that ranking:
- 192 Simon Johnson 463.78
- 193 Walter Erwin Diewert 465.56
- 194 Thomas Lemieux 466.84
- 195 Gene Grossman 468.2
- 196 Matthew E. Kahn 474.64
- 197 Martin Eichenbaum 476.87
- 198 Darrell Duffie 476.95
- 199 Tullio Jappelli 477.7
- 200 Richard H. Thaler 481.39
- 201 David N. Weil 481.78
- 202 Martin James Browning 483.75
- 203 Jeremy Greenwood 486.06
- 204 Gary Koop 486.87
- 205 Martin L. Weitzman 488.53
- 206 Stephen James Redding 490.12
- 207 Serena Ng 491.37
- 208 Claudia Goldin 494.45
- 209 Jordi Gali 496.78
- 210 Robert M. Townsend 498.53
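For the curious, here is a tiny sketch contrasting the two aggregation rules (I am hedging on RePEc's exact formula; the per-category ranks below are made up for illustration):

```python
# Contrast two ways of aggregating an author's per-category ranks:
# the harmonic mean rewards a few very strong categories, while the
# arithmetic mean weights every category equally. Hypothetical ranks.
from statistics import harmonic_mean, mean

category_ranks = [50, 120, 300, 95, 210]  # made-up per-criterion ranks

print("harmonic mean:  ", round(harmonic_mean(category_ranks), 2))
print("arithmetic mean:", round(mean(category_ranks), 2))
```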