Friday, March 31, 2006

A Test of the Efficient Markets Hypothesis?

I recognize that I don't have much street credibility in proposing profitable asset trading strategies. But this New York Times article suggests that there is money to be made by selling short companies, such as General Motors, that have an older workforce. The article claims that under the new pension accounting rules, such companies will have a negative net worth. You don't have to be Eric Mindich to predict that this might lead to a lower stock price. My question for you is whether stock prices already reflect this information or whether they have not yet adjusted to this "new news" about how the new pension rules will affect corporate accounting claims.

One leading Berkeley researcher (see http://www.econ.berkeley.edu/~sdellavi/) has documented that the stock market does not fully capitalize clear demographic trends. For example, if a new birth cohort contains a lot of infants, then demand for bicycles will predictably rise when they reach age 5+.

If you do make money trading on my demographics-based strategy, I would appreciate it if you would buy a few copies of my Brookings "Green Cities" book when it is published this summer. If Hollywood makes a movie about my book, I hope you go see it!


March 31, 2006
Shocks Seen in New Math for Pensions
By MARY WILLIAMS WALSH

The board that writes accounting rules for American business is proposing a new method of reporting pension obligations that is likely to show that many companies have a lot more debt than was obvious before.

In some cases, particularly at old industrial companies like automakers, the newly disclosed obligations are likely to be so large that they will wipe out the net worth of the company.

The panel, the Financial Accounting Standards Board, said the new method, which it plans to issue today for public comment, would address a widespread complaint about the current pension accounting method: that it exposes shareholders and employees to billions of dollars in risks that they cannot easily see or evaluate. The new accounting rule would also apply to retirees' health plans and other benefits.

A member of the accounting board, George Batavick, said, "We took on this project because the current accounting standards just don't provide complete information about these obligations."

The board is moving ahead with the proposed pension changes even as Congress remains bogged down on much broader revisions of the law that governs company pension plans. In fact, Representative John A. Boehner, Republican of Ohio and the new House majority leader, who has been a driving force behind pension changes in Congress, said yesterday that he saw little chance of a finished bill before a deadline for corporate pension contributions in mid-April.

Congress is trying to tighten the rules that govern how much money companies are to set aside in advance to pay for benefits. The accounting board is working with a different set of rules that govern what companies tell investors about their retirement plans.

The new method proposed by the accounting board would require companies to take certain pension values they now report deep in the footnotes of their financial statements and move the information onto their balance sheets — where all their assets and liabilities are reflected. The pension values that now appear on corporate balance sheets are almost universally derided as of little use in understanding the status of a company's retirement plan.

Mr. Batavick of the accounting board said the new rule would also require companies to measure their pension funds' values on the same date they measure all their other corporate obligations. Companies now have delays as long as three months between the time they calculate their pension values and when they measure everything else. That can yield misleading results as market fluctuations change the values.

"Old industrial, old economy companies with heavily unionized work forces" would be affected most sharply by the new rule, said Janet Pegg, an accounting analyst with Bear, Stearns. A recent report by Ms. Pegg and other Bear, Stearns analysts found that the companies with the biggest balance-sheet changes were likely to include General Motors, Ford, Verizon, BellSouth and General Electric.

Using information in the footnotes of Ford's 2005 financial statements, Ms. Pegg said that if the new rule were already in effect, Ford's balance sheet would reflect about $20 billion more in obligations than it now does. The full recognition of health care promised to Ford's retirees accounts for most of the difference. Ford now reports a net worth of $14 billion. That would be wiped out under the new rule. Ford officials said they had not evaluated the effect of the new accounting rule and therefore could not comment.

Applying the same method to General Motors' balance sheet suggests that if the accounting rule had been in effect at the end of 2005, there would be a swing of about $37 billion. At the end of 2005, the company reported a net worth of $14.6 billion. A G.M. spokesman declined to comment, noting that the new accounting rule had not yet been issued.
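The balance-sheet arithmetic the Bear, Stearns analysts are applying can be sketched in a few lines. This is a simplified illustration using only the figures quoted in the article; the function name and the treatment of the "swing" as a single lump are my own simplification, not the analysts' actual model.

```python
# Sketch of the balance-sheet arithmetic described above, using the
# figures quoted in the article (all amounts in billions of dollars).
# The "swing" is the newly recognized obligation moved from the
# footnotes onto the balance sheet under the proposed rule.

def net_worth_under_new_rule(reported_net_worth, swing):
    """Net worth once off-balance-sheet pension and retiree-health
    obligations are recognized on the balance sheet."""
    return reported_net_worth - swing

# Ford: $14B reported net worth, roughly $20B in newly recognized obligations.
ford = net_worth_under_new_rule(14.0, 20.0)
# General Motors: $14.6B reported net worth, roughly a $37B swing.
gm = net_worth_under_new_rule(14.6, 37.0)

print(f"Ford: {ford:+.1f}B")  # negative: reported net worth is wiped out
print(f"GM:   {gm:+.1f}B")
```

Both firms' reported net worth goes sharply negative under this arithmetic, which is the fact the trading idea at the top of the post turns on.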

Many complaints about the way obligations are now reported revolve around the practice of spreading pension figures over many years. Calculating pensions involves making many assumptions about the future, and at the end of every year there are differences between the assumptions and what actually happened. Actuaries keep track of these differences in a running balance, and incorporate them into pension calculations slowly.

That practice means that many companies' pension disclosures do not yet show the full impact of the bear market of 2000-3, because they are easing the losses onto their books a little at a time. The new accounting rule will force them to bring the pension values up to date immediately, and use the adjusted numbers on their balance sheets.

Not all companies would be adversely affected by the new rule. A small number might even see improvement in their balance sheets. One appears to be Berkshire Hathaway. Even though its pension fund has a shortfall of $501 million, adjusting the numbers on its balance sheet means reducing an even larger shortfall of $528 million that the company recognized at the end of 2005.

Berkshire Hathaway's pension plan differs from that of many other companies because it is invested in assets that tend to be less volatile. Its assumptions about investment returns are also lower, and it will not have to make a big adjustment for earlier-year losses when the accounting rule takes effect. Berkshire also looks less indebted than other companies because it does not have retiree medical plans.

Mr. Batavick said he did not know what kind of public comments to expect, but hoped to have a final standard completed by the third quarter of the year. Companies would then be expected to use it for their 2006 annual reports. The rule will also apply to nonprofit institutions like universities and museums, as well as privately held companies.

The rule would not have any effect on corporate profits, only on the balance sheets. The accounting board plans to make additional pension accounting changes after this one takes effect. Those are expected to affect the bottom line and could easily be more contentious.

Wednesday, March 29, 2006

EPA versus DOD: Who Wins This Regulatory Wrestling Match?

This is an interesting example of how regulations are made when two different parts of the government disagree about how to proceed. In this case, the EPA and the Department of Defense appear to hold opposite views on the effects of the solvent TCE. The Los Angeles Times article below suggests that many people have been exposed to it on and near military bases.

The theory of compensating differentials predicts that if people know they are being exposed to a risk (such as cancer), they will be compensated through either higher wages or lower rents for real estate. But if people are unaware of what they are being exposed to, then these "victims" will not be compensated ex ante for living in a nasty place.

This article suggests that the DOD is blocking the EPA from carrying out regulation that would mitigate this problem. Economists would argue that the optimal TCE regulation tightens until the marginal benefit of further regulation equals its marginal cost. Calculating where these two curves cross is much easier said than done, and the military has an incentive to overstate the marginal cost of mitigating this problem.
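The marginal-benefit-equals-marginal-cost condition, and why cost overstatement matters, can be shown with made-up linear curves. Neither curve comes from the article; the functional forms and numbers are purely hypothetical and exist only to demonstrate the logic.

```python
# Illustration of the optimal-regulation condition with hypothetical
# linear curves (q = units of TCE cleanup):
#
#   MB(q) = 100 - 2q              benefits of extra cleanup fall with q
#   MC(q) = (20 + 2q) * (1 + m)   costs rise; m = fractional cost overstatement

def optimal_cleanup(overstatement=0.0):
    """Solve 100 - 2q = (20 + 2q)(1 + m) for q analytically."""
    m = overstatement
    return (100 - 20 * (1 + m)) / (2 + 2 * (1 + m))

print(optimal_cleanup())     # 20.0 units under truthful cost reporting
print(optimal_cleanup(0.5))  # 14.0 units: overstated costs imply less cleanup
```

The point of the second call: if the regulator takes the military's inflated cost numbers at face value, the "optimal" amount of cleanup it chooses falls.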



http://www.latimes.com/news/printedition/la-na-toxic29mar29,0,7858796.story
From the Los Angeles Times
How Environmentalists Lost the Battle Over TCE
By Ralph Vartabedian
Times Staff Writer

March 29, 2006

After massive underground plumes of an industrial solvent were discovered in the nation's water supplies, the Environmental Protection Agency mounted a major effort in the 1990s to assess how dangerous the chemical was to human health.

Following four years of study, senior EPA scientists came to an alarming conclusion: The solvent, trichloroethylene, or TCE, was as much as 40 times more likely to cause cancer than the EPA had previously believed.

The preliminary report in 2001 laid the groundwork for tough new standards to limit public exposure to TCE. Instead of triggering any action, however, the assessment set off a high-stakes battle between the EPA and Defense Department, which had more than 1,000 military properties nationwide polluted with TCE.

By 2003, after a prolonged challenge orchestrated by the Pentagon, the EPA lost control of the issue and its TCE assessment was cast aside. As a result, any conclusion about whether millions of Americans were being contaminated by TCE was delayed indefinitely.

What happened with TCE is a stark illustration of a power shift that has badly damaged the EPA's ability to carry out one of its essential missions: assessing the health risks of toxic chemicals.

The agency's authority and its scientific stature have been eroded under a withering attack on its technical staff by the military and its contractors. Indeed, the Bush administration leadership at the EPA ultimately sided with the military.

After years on the defensive, the Pentagon — with help from NASA and the Energy Department — is taking a far tougher stand in challenging calls for environmental cleanups. It is using its formidable political leverage to demand greater proof that industrial substances cause cancer before ratcheting up costly cleanups at polluted bases.

The military says it is only striving to make smart decisions based on sound science and accuses the EPA of being unduly influenced by left-leaning scientists.

But critics say the defense establishment has manufactured unwarranted scientific doubt, used its powerful role in the executive branch to cause delays and forced a reduction in the margins of protection that traditionally guard public health.

If the EPA's 2001 draft risk assessment was correct, then possibly thousands of the nation's birth defects and cancers every year are due in part to TCE exposure, according to several academic experts.

"It is a World Trade Center in slow motion," said Boston University epidemiologist David Ozonoff, a TCE expert. "You would never notice it."

Senior officials in the Defense Department say much remains unknown about TCE.

"We are all forgetting the facts on the table," said Alex A. Beehler, the Pentagon's top environmental official. "Meanwhile, we have done everything we can to curtail use of TCE."

But in the last four years, the Pentagon, with help from the Energy Department and NASA, derailed tough EPA action on such water contaminants as the rocket fuel ingredient perchlorate. In response, state regulators in California and elsewhere have moved to impose their own rules.

The stakes are even higher with TCE. Half a dozen state, federal and international agencies classify TCE as a probable carcinogen.

California EPA regulators consider TCE a known carcinogen and issued their own 1999 risk assessment that reached the same conclusion as federal EPA regulators: TCE was far more toxic than previous scientific studies indicated.

TCE is the most widespread water contaminant in the nation. Huge swaths of California, New York, Texas and Florida, among other states, lie over TCE plumes. The solvent has spread under much of the San Gabriel and San Fernando valleys, as well as the shuttered El Toro Marine Corps base in Orange County.

Developed by chemists in the late 19th century, TCE was widely used to degrease metal parts and then dumped into nearby disposal pits at industrial plants and military bases, where it seeped into aquifers.

The public is exposed to TCE in several ways, including drinking or showering in contaminated water and breathing air in homes where TCE vapors have intruded from the soil. Limiting such exposures, even at current federal regulatory levels, requires elaborate treatment facilities that cost billions of dollars annually. In addition, some cities, notably Los Angeles, have high ambient levels of TCE in the air.

An internal Air Force report issued in 2003 warned that the Pentagon alone has 1,400 sites contaminated with TCE.

Among those, at least 46 have involved large-scale contamination or significant exposure to humans at military bases, according to a list compiled by the Natural Resources News Service, an environmental group based in Washington.

The Air Force was convinced that the EPA would toughen its allowable limit of TCE in drinking water of 5 parts per billion by at least fivefold. The service was already spending $5 billion a year to clean up TCE at its bases and tougher standards would drive that up by another $1.5 billion, according to an Air Force document. Some outside experts said that estimate was probably low.

After the EPA issued the draft assessment, the Pentagon, Energy Department and NASA appealed their case directly to the White House. TCE has also contaminated 23 sites in the Energy Department's nuclear weapons complex — including Lawrence Livermore National Laboratory in the Bay Area, and NASA centers, including the Jet Propulsion Laboratory in La Cañada Flintridge.

The agencies argued that the EPA had produced junk science, its assumptions were badly flawed and that evidence exonerating TCE was ignored. They argued that the EPA could not be trusted to move ahead on its own and that top leaders in the agency did not have control of their own bureaucracy.

Bush administration appointees in the EPA — notably research director Paul Gilman — sided with the Pentagon and agreed to pull back the risk assessment. The matter was referred for a lengthy study by the National Academy of Sciences, which is due to issue a new report this summer. Any resolution of the cancer risk TCE poses will take years and any new regulation could take even longer.

The delay tactics have angered Republicans and Democrats who represent contaminated communities, where residents in some cases have elevated rates of cancer and birth defects but no direct proof that their illness is tied to TCE.

Half a dozen members of Congress last year wrote to the EPA, demanding that it issue interim standards for TCE, instead of waiting years while scientific battles are waged between competing federal agencies. EPA leaders have rejected those demands.

"The evidence on TCE is overwhelming," said Dr. Gina Solomon, an environmental medicine expert at UC San Francisco and a scientist at the Natural Resources Defense Council. "We have 80 epidemiological studies and hundreds of toxicology studies. They are fairly consistent in finding cancer risks that cover a range of tumors. It is hard to make all that human health risk go away."

But Raymond F. DuBois, former deputy undersecretary of Defense for installations and environment in the Bush administration, said the Pentagon had not been willing to accept whatever came out of the EPA, though it cared a great deal about base contamination.

"If you go down two or three levels in EPA, you have an awful lot of people that came onboard during the Clinton administration, to be perfectly blunt about it, and have a different approach than I do at Defense," DuBois said. "It doesn't mean I don't respect their opinions or judgments, but I have an obligation where our scientists question their scientists to bring it to the surface."

The military has virtually eliminated its use of TCE, purchasing only 11 gallons last year, said Beehler, an attorney who used to head environmental affairs for Koch Industries Inc., a large industrial conglomerate in Wichita, Kan.

In its fight against the 2001 risk assessment, the Pentagon has gone to the very fundamentals of cancer research: toxicology, the study of poisons; and epidemiology, the science of how diseases are distributed in the population. This scientific approach has worked better than past arguments that cleanups are a costly diversion from the Pentagon's mission to defend U.S. security.

A few months after the 2001 draft risk assessment came out, an Air Force rebuttal charged that the EPA had "misrepresented" data from animal and human health studies.

It said "there is no convincing evidence" that some groups of people, like children and diabetics, are more susceptible to TCE, a key part of the EPA's report. And it said the EPA had failed to consider viewpoints from "scientists who believe that TCE does not represent a human cancer risk at levels reasonably expected in the environment."

But comments such as these are outside the scientific mainstream. Other federal agencies have also expressed grave concern about TCE and some experts say it is only a matter of time before the chemical is universally recognized as a known carcinogen.

"Do I think TCE causes cancer? Yes," said Ozonoff, the Boston University TCE expert. "There is lots of evidence. Is there a dispute about it? Yes. Whenever the stakes are high, that's when there will be disputes about the science."

The 2001 risk assessment found TCE was two to 40 times more likely to cause cancer than was found in an assessment conducted in 1986, a wide range that reflected many scientific uncertainties. Because cancer risk assessments are not an exact science, federal regulators have historically exercised great caution in protecting public health.

The California EPA, the nation's largest and best-funded state environment agency, assessed TCE in 1999 and also found reason for concern. Its risk assessment fell in the middle of the EPA risk range, according to the study's author, Joseph Brown.

Rodents fed TCE develop liver and kidney cancer, and humans exposed to TCE show elevated rates of many types of cancer and birth defects. But industry experts fire back that evidence on TCE is still weak. Just because rats and mice get cancer from high levels of TCE doesn't prove that humans will get cancer from low levels of TCE, they say. And the epidemiological research is less convincing than animal studies, they say.

The U.S. still uses about 100 tons of TCE annually, a fraction of the consumption before the mid-1980s, when it was first classified as a probable carcinogen. It was once widely used in consumer products, such as correction fluid for typewriters and spot cleaners.

"If TCE is a human carcinogen, it isn't much of one," said Paul Dugard, a toxicologist at the Halogenated Solvents Industry Alliance Inc., which represents TCE manufacturers. "People exposed at low levels shouldn't be concerned.

"EPA's philosophy is still one of being super conservative and that is being pushed back against."

EPA officials were braced for such a controversy when the TCE assessment was issued and quickly convened a scientific advisory board to review the work. The board included public health officials at state agencies, academics and chemical industry scientists.

About one year later, the board issued its findings, praising the risk assessment and urging the EPA to implement it as quickly as possible. But the board also suggested some changes, including stronger support for its calculations of TCE's health risks and a clearer disclosure of its underlying assumptions.

The report, particularly the request for additional work, was interpreted as a serious problem by Gilman, the EPA research director.

He said the board's findings represented a "red flag" and "raised very troubling issues," all of which were key arguments by Gilman and others for stopping the assessment.

But members of the scientific advisory team dispute Gilman's interpretation, saying they felt the 2001 risk assessment was good science and their recommended changes amounted to normal commentary for such a complex matter.

"I thought by and large we supported the EPA and that its risk assessment could be modified to move forward," said Dr. Henry Anderson, the chairman of the scientific advisory board and a physician with the Wisconsin Division of Public Health. "That movement to shuttle the issue to the National Academy of Sciences was nothing like what we had in mind."

By 2004, the matter was out of the EPA's hands. The National Academy of Sciences received a $680,000 contract from the Energy Department to study TCE — a decision dictated by a working group at the White House. The briefings to the national academy on how to evaluate TCE were given by White House staff as well as the EPA.

The White House originally formed the working group — made up of officials from the Pentagon, Energy Department and NASA — in 2002 to combat the EPA's assessment of another pollutant, perchlorate. That group stayed in business to fight the TCE risk assessment. The group was co-chaired by officials in the Office of Management and Budget and the White House Office of Science and Technology Policy. The officials declined requests for interviews.

Given the controversy and stakes involved, the issue was bound to end up with the National Academy of Sciences, said Peter Preuss, director of the National Center for Environmental Analysis, the EPA organization that produced the 2001 risk assessment. "It got very difficult to proceed," Preuss said.

The lead author of the 2001 health risk assessment, V. James Cogliano, agreed that the findings ran into trouble when Defense Department officials went to the White House. "Most of it was behind the scenes," said Cogliano, now a senior official at the International Agency for Research on Cancer in Lyon, France.

He added: "The degree of opposition was not surprising given the degree of economic interests involved."

The political maneuvering marked a significant change, Cogliano said. In the 1980s, Defense Department officials accepted every possible safeguard recommended by the EPA for incinerators to burn nerve gas and other chemical weapons, he recalled.

At that time, Defense Department officials said, "You put in every margin of safety, because we want to be sure it will be safe," he said. "There was no argument. There is a different spirit today."

Every health risk assessment is also getting more technically complex and more bureaucratically difficult, Preuss said.

When the EPA issued its first health risk assessment in 1976, it ran four pages and it was based in large part on studies that counted "bumps and lumps" on animals subjected to possible carcinogens. By contrast, EPA scientists now must show not only that a substance causes tumors, but the internal biological processes that are responsible. And the work is subject to greater scrutiny.

"It is true that there is more interagency review now of our work," Preuss said. "We have a couple steps where we send our assessments to the White House and they distribute them to other agencies. Each year, additional steps are taken."

All of the EPA's travails — the toughened scientific demands, the loss of authority, the interagency battles — have clearly taken a heavy toll and diminished the agency's stature.

"Inside the Beltway, it is an accepted fact that the science of EPA is not good," said Gilman, now director of the Oak Ridge Center for Advanced Studies in Tennessee, which conducts broad research on energy, the environment and other areas of science. Gilman said an entire consulting industry has sprung up in Washington to attack the EPA and sow seeds of doubt about its capabilities.

The delays in assessing TCE have also left many contaminated communities with few answers.

"My constituents who live at a recently named Superfund site … are forced to live everyday with contaminated groundwater, soil and air and can't afford to wait the years it would take for the results of your outsourced re-review," Rep. Sue W. Kelly (R-N.Y.) told EPA officials at a hearing last year.

"I have talked to a lot of sick people," said Rep. Maurice D. Hinchey (D-N.Y.), whose district includes hundreds of homes contaminated by TCE vapors, traced to an IBM Corp. factory. IBM has paid for air filtration systems for 400 homes, but has balked at more funding based on uncertainty over the health risk. "These people are deeply frustrated and increasingly angry," Hinchey said.

Meanwhile, many environmentalists are discouraged by what they view as a virtual emasculation of the EPA in this battle.

"The general public has no idea this is happening," said Erik Olson, a lawyer at the Natural Resources Defense Council. "The Defense Department has succeeded in undermining the basic scientific process at EPA. The DoD is the biggest polluter in the United States and they have made major investments to undercut the EPA."

Tuesday, March 28, 2006

Sprawl and Europe

I am in rainy Berkeley attending an OECD Roundtable on Transportation, Urban Form and Economic Growth. There are roughly 50 participants, including economists, urban planners, sociologists, political scientists and others. Such a diverse "Noah's Ark" offers both benefits and costs. The major benefit is that I'm learning from the non-economists, and several of my friends (the economists) are here. The cost is that the discussion across groups is strange and meandering.

There are interesting public policy issues at stake here. If Europe's cities had lower gasoline taxes or invested even more in fast, clean public transit, how would this affect urban form? How would it affect "vehicle dependence"? What is the relationship between urban form and economic growth? Some economists here have argued that in cities where people can commute at faster speeds, workers are more likely to be matched with their best job because they have a wider choice set.

Saturday, March 25, 2006

The Big Push for Innovation

The article below provides an interesting case study of the merits of engaging in a "Big Push" to encourage the development of disease vaccines.

Some environmentalists have talked about similar initiatives for encouraging increased "green" innovation. A policy initiative that simply subsidizes R&D may yield little extra research if the supply of researchers is inelastic; in that case, the subsidies will simply drive up the wages of the core scientists. In a globalized labor market where scientists can migrate across nations, supply is more elastic and subsidies are more likely to yield "research gains."
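The point about inelastic researcher supply is just textbook subsidy incidence. Here is a toy calculation using the standard incidence formula; the elasticity values are assumptions chosen for illustration, not estimates of the actual scientist labor market.

```python
# Toy incidence calculation: how much of an R&D subsidy shows up as
# higher scientist wages rather than extra research. Uses the standard
# tax/subsidy incidence result: the share of a subsidy captured by
# suppliers (here, scientists, via wages) is
#   demand_elast / (demand_elast + supply_elast),
# so the inelastic side of the market captures most of the subsidy.

def pct_wage_increase(subsidy_pct, supply_elast, demand_elast=1.0):
    """Percentage wage increase induced by a subsidy of `subsidy_pct`."""
    return subsidy_pct * demand_elast / (demand_elast + supply_elast)

# Nearly inelastic national supply of scientists: wages absorb the subsidy.
print(pct_wage_increase(10.0, supply_elast=0.1))
# Elastic global supply (scientists migrate): little wage inflation,
# so more of the subsidy buys actual research.
print(pct_wage_increase(10.0, supply_elast=5.0))
```

With supply elasticity near zero, a 10% subsidy raises wages by nearly 10%; with highly elastic global supply, the wage effect is small and the subsidy translates into more research.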

Some governments are pre-committing to purchase "green power" at a specific price. Producers who can produce such power at a lower average cost will earn profits.

I continue to focus on expectations of future prices. For business people in different sectors, whether they run Walmart or a manufacturing plant, what is their best guess of the price of gasoline and electricity five years from now?
Dick Cheney might buy a Prius if he were convinced that the price of gasoline would be $25 a gallon!

Economics focus

Push and pull
Mar 23rd 2006
From The Economist print edition


Should the G8 promise to buy vaccines that have yet to be invented?

JESUIT missionaries in Peru stumbled across the first effective treatment for malaria: an alkaloid called quinine, extracted from the bark of the cinchona tree. Unfortunately, a vaccine for this resilient disease, or other big killers such as tuberculosis (TB) and AIDS, does not grow on trees. Inventing one, and bringing it to market, is a risky and costly undertaking. And the people in direst need of such vaccines—the poor—lack the purchasing power to make them worth a company's while.

Michael Kremer, an economist at Harvard University, argues that donors—ie, rich countries' governments—could engineer a market where none yet exists*. They should make a legally binding commitment to buy a vaccine, if and when one is invented. If credible, such a promise would create an incentive for profit-seeking companies to find, test and make life-saving jabs or pills. Whereas today public money “pushes” research on neglected diseases, under his proposal the promise of money tomorrow would “pull” research along.

This elegant notion, often called an “advance purchase commitment” (APC), has migrated with unusual speed from Mr Kremer's blackboard to the communiqués of the powerful. Next month, the finance ministers of the G8 countries will settle on one or two proposals in this spirit. As well as the toughest nuts—vaccines for AIDS, malaria and TB—three softer targets are also vying for the G8's attention: rotavirus (which causes diarrhoea in children), human papillomavirus (a cause of cervical cancer), and pneumococcus (a bacterium that causes pneumonia).

But even as it wins converts, the APC idea is also collecting critics. None is more dogged than Andrew Farlow, an economist at Oxford University and author of a sprawling critique† of Mr Kremer's big idea and its application to malaria in particular. APCs, he says, are a “policy boil” that needs to be lanced.

There is, Mr Farlow points out, no such thing as “a” malaria vaccine. The first vaccine to market may not be the best possible. The earliest polio shot, for example, was superseded by an oral vaccine, which was easier to administer and lasted longer. From the outset, the G8 will have to set out the traits of a vaccine it would be willing to buy: how effective it must be; how long it should last; the maximum number of doses it should require. Its thorny task is to decide what would be desirable, at what price, long before anyone knows what is feasible.

However the terms are set, Mr Farlow argues, the G8 pledge will at best motivate firms to hit this mark, but not surpass it. Firms are, after all, competing for a limited pot of money. (Mr Kremer and his collaborators suggest it should be about $3 billion for a malaria, AIDS or TB vaccine, which is about the value of the market for a new drug in the rich world.) A company that comes to market second, with a later, better vaccine may find the pot already emptied by its swifter rival. If companies anticipate this danger, they will lower their sights, settling for a vaccine that just clears the bar set by the G8 donors.

To this objection, the advocates of pull-funding have at least two responses. First, in the face of a disease such as malaria, which kills up to 2.7m people a year, speed is itself a virtue, for which some sacrifice in quality may be worthwhile. Second, in its current incarnation, the APC creates some room for later vaccines to enter the market. Money will not be showered on a company the moment it crosses the finish line, but will be paid out a little at a time. In one scenario, the final customers for vaccines would set the pace. For example, the Kenyan health ministry would decide whether to buy a company's vaccine, for $1 per dose, and the company would then receive a “top-up” payment of $14 from the G8. If the pot (including the Kenyan co-payments) contained $3 billion, it would be drained only after 200m shots had been sold. This, advocates hope, will give a second-generation vaccine time to steal the market from its forerunners.
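The pot arithmetic in the scenario above is worth making explicit. This sketch just reproduces the figures quoted in the article ($1 co-payment, $14 top-up, $3 billion pot); the variable names are mine.

```python
# The APC pot arithmetic from the paragraph above: each dose sold draws
# a $1 co-payment from the purchasing government (e.g. the Kenyan health
# ministry) plus a $14 "top-up" from the G8, against a $3 billion pot
# that includes the co-payments.

POT = 3_000_000_000  # total commitment, in dollars
CO_PAY = 1           # per-dose payment by the purchasing country
TOP_UP = 14          # per-dose top-up paid by the G8 donors

doses_until_empty = POT // (CO_PAY + TOP_UP)
print(doses_until_empty)  # 200000000 -- the 200m shots cited in the article
```

Because the pot drains only $15 per dose, a second-generation vaccine has years of sales in which to displace the first mover before the commitment is exhausted.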

Unfortunately, this set-up creates problems of its own. Corruption is one danger. If every dollar that a health ministry spends on a given vaccine is worth another $14 to the company supplying it, an unscrupulous firm might go to illegal lengths to attract the ministry's custom.

A cure or a placebo?

What if companies fail to bite at the carrot the G8 dangles before them, or fall short of a vaccine donors are willing to buy? Well then, argues Mr Kremer, donors have lost nothing. If his scheme fails, the public purse has lost hardly a cent. If it succeeds, then every dollar spent is eminently worthwhile. This, needless to say, is a big part of his scheme's appeal to politicians.

But Mr Farlow doubts that a G8 promise would be credible unless funds were set aside in advance—money he would rather were used elsewhere. There is much else to spend it on. By one count, more than 25 malaria vaccines are currently in or near clinical trials, pushed by public money and philanthropic generosity. Several vaccines, for example against hepatitis B, have already been invented, but fail to reach all the poor.

APCs should not stop governments providing a push wherever it is needed, Mr Kremer insists. But will governments themselves hear him? Critics argue that even if his scheme makes no claim on public funds today, it has still made a big demand on political attention, diverting it from other ends. Mr Kremer sincerely hopes APCs will help provide the world's poor with much-needed vaccines. It might; but despite his best intentions it might instead provide politicians with a prophylactic against other pressing demands for their help.

Thursday, March 23, 2006

A "Natural Experiment" for testing the Induced Innovation Hypothesis

Do demand curves slope down? China's government will soon offer us another test of Econ 101 by increasing its taxes on energy and resource consumption. Wooden chopsticks will face a 5% higher tax! What will people substitute toward?

Free-market environmentalists should be excited about this opportunity to study how consumers and producers respond to these higher taxes. Do these economic actors economize on "natural capital" as its price goes up? Peak Oilers should pay close attention to the short-term and medium-term effects of this tax.

Will a higher Chinese tax encourage U.S. exporting firms (who hope to sell to Chinese consumers) to innovate so that their products economize on resource consumption? If so, and if there is learning by doing within U.S. firms, then the Chinese tax could help green our economy! Perhaps globalization does offer some environmental benefits?


March 23, 2006
China Raises Taxes to Curb Use of Energy and Timber
By KEITH BRADSHER

HONG KONG, Thursday, March 23 — The Chinese government announced plans on Wednesday to increase existing taxes and impose new ones on April 1 for everything from gas-guzzling vehicles to chopsticks in a move to rein in rising use of energy and timber and the widening gap between rich and poor.

New or higher taxes will fall on vehicles with engines larger than two liters, disposable wooden chopsticks, planks for wood floors, luxury watches, golf clubs, golf balls and certain oil products.

China's finance ministry disclosed the higher taxes Tuesday night in a statement that was reported Wednesday morning by the official New China News Agency. The statement offered another sign that some senior Chinese officials may be having second thoughts about the rapid growth of privately owned family vehicles, whose sales rose to 3.1 million last year from just 640,000 in 2000.

"In recent years, car ownership in China has grown rapidly and fuel consumption has risen considerably, and this highlights the conflict between supply and demand of oil resources," the statement said. "At the same time, pollution caused by motor cars has become the main source of pollution in big and medium-size cities."

The finance ministry is imposing a 5 percent tax on chopsticks and floor planks, citing a need to conserve timber. Environmentalists around the world have been warning that China's voracious demand for wood was contributing to the clear-cutting of many forests, especially in Southeast Asia.

The production of disposable wooden chopsticks consumes two million cubic meters (70.6 million cubic feet) of timber each year, the ministry said. Plastic chopsticks, which can be washed and reused, will not be subject to the new tax.

A new tax of 10 percent on yachts, golf clubs and golf balls, and a 20 percent tax on luxury watches, is squarely aimed at China's emerging elite of wealthy industrialists and well-connected Communist officials.

China's yacht market is still in its infancy, as military restrictions on ocean traffic and commercial restrictions on river traffic have limited yachts to lakes — although a few entrepreneurs have been able to get around the rules to cruise on the Yangtze River near Shanghai.

Chinese officials have periodically assailed golf, especially when villages and farms are demolished with little compensation to make way for new golf courses.

The biggest commercial effect of the new taxes is likely to fall on sport utility vehicles and luxury sedans. China is reducing its tax on vehicles with engines of 1 to 1.5 liters to 3 percent from 5 percent, while leaving the rate unchanged for slightly more powerful engines. The tax rate will rise to 20 percent, from 8 percent now, for vehicles with engines larger than four liters.
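The displacement brackets the article states can be written down directly. This is a sketch of only what the article specifies; rates for intermediate engine sizes are not given there, so the function returns None for them rather than guessing:

```python
# Sketch of the engine-displacement tax brackets the article states.
# Only the brackets explicitly mentioned are encoded; the article does
# not specify rates for intermediate engine sizes.

def new_tax_rate(litres):
    """New consumption-tax rate by engine displacement (where stated)."""
    if 1.0 <= litres <= 1.5:
        return 0.03   # cut from 5 percent
    if litres > 4.0:
        return 0.20   # raised from 8 percent
    return None       # unchanged or unspecified in the article

print(new_tax_rate(1.2), new_tax_rate(5.4))  # 0.03 0.2
```

A 1.2-liter compact gets the reduced 3 percent rate, while a 5.4-liter Lincoln Navigator engine falls in the new 20 percent bracket.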

The taxes are likely to affect foreign automakers, especially American manufacturers, more than Chinese companies, which tend to make models with smaller engines.

The big question for automakers is how much of the tax to pass on to consumers, since the tax is collected from the manufacturers. With a week and a half remaining until the new tax takes effect, marketing executives scrambled on Wednesday to assess the impact and no automaker immediately raised prices.

"We are doing the calculations and assessing the impact, and on the other hand watching the actions of our competitors," said Kenneth Hsu, a spokesman for the China operations of Ford Motor, which sells everything from compact cars with 1.6-liter engines to Lincoln Navigator full-size S.U.V.'s with 5.4-liter engines.

Trevor Hale, a DaimlerChrysler spokesman, said the company offered fuel-efficient engines; many Mercedes sedans sold in China have considerably smaller engines than models sold in the United States.

Chinese officials considered and rejected a tax system based on gas mileage instead of engine displacement. That approach would have benefited foreign automakers who possess better technology that permits them to squeeze more power out of the same size engine than purely Chinese manufacturers can.

General Motors China welcomed the new taxes on Thursday but voiced a reservation: "While we believe the new measure will be more environmentally friendly and help lower energy consumption in China, we think it would be more reasonable to base the tax rate on the actual fuel consumption of a vehicle instead of the size of its engine displacement, which is a widely accepted practice worldwide."

Yale Zhang, an analyst in the Shanghai office of CSM Worldwide, a big automotive consulting firm based in the Detroit suburbs, said that Chinese automakers had growing influence in policy debates and that the new rules might lead to a proliferation of vehicles with engines a hundredth of a liter below the thresholds for higher taxes.

Chinese regulators have already imposed stringent fuel-economy regulations that take effect for all vehicles sold after July 1, and have said that they are considering a separate gas-guzzler tax for models that do not comply. The finance ministry's statement on the tax increases on April 1 made no mention of such a gas-guzzler tax, however, and finance ministry officials could not be reached for elaboration.

The finance ministry also announced a modest new tax of a penny (0.1 yuan) a liter for aviation fuel and 2 cents (0.2 yuan) a liter for naphtha, solvents and lubricants, but said it would not collect the new aviation fuel tax for now and would collect only 30 percent of the new tax on naphtha, solvents and lubricants.

Applying taxes on oil products but not collecting them while prices are high could set a precedent for how China handles taxes on gasoline and diesel. Chinese officials have said repeatedly that they would like to raise fuel taxes to encourage conservation, but do not want to act while world oil prices are close to record levels.

On April 1, China will also lower its tax on motorcycles with engines under 250 cubic centimeters to 3 percent from 10 percent, while leaving the tax unchanged at 10 percent for motorcycles with larger engines.

Western manufacturers like Harley-Davidson are trying to break into the Chinese market with powerful bikes, while Chinese manufacturers like Lifan mainly produce less powerful models.

Wednesday, March 22, 2006

A Corporate "Day of Shame" on Addressing Climate Change

Good students always like report card day. This organization (http://www.ceres.org/pub/publication.php?pid=84) has determined which corporations are "naughty" and "nice" with regards to addressing climate change issues.

Among the industry sector leaders and laggards:

Sector          Leader(s)                     Laggard

Oil/Gas         BP (90 points*)               ExxonMobil (35)
Chemical        DuPont (85**)                 PPG (21)
Metals/Mining   Alcan (77), Alcoa (74)        Newmont (24)
Electric Power  AEP, Cinergy (both 73)        Sempra Energy (24)
Auto            Toyota (65)                   Nissan (33)

What do these rankings mean? How were they generated? I have no idea! But we love rankings. Perhaps Zagat will put out a new ranking based on which restaurants create the fewest greenhouse gases?

To be serious for a second: in the absence of greenhouse gas pollution permits, what incentive do for-profit companies have to conserve on their emissions? If they anticipate that a future President Hillary Clinton will push for carbon taxes, then they have an incentive to act now. Are these "nice" companies trying to buy good public relations with greens? How have they explained to their shareholders why it makes good business sense today to be green?

Perhaps they will argue that they bet that fossil fuels will be more expensive in the future and thus it is wise to substitute away today. This logic makes sense to me.

How will consumers react to this report? Will they boycott the "brown" companies?
Here is what the New York Times has to say.


March 22, 2006
Study Says U.S. Companies Lag on Global Warming
By CLAUDIA H. DEUTSCH

European and Asian companies are paying more attention to global warming than their American counterparts. And chemical companies are more focused on the issue than oil companies.

Those are two conclusions from "Corporate Governance and Climate Change: Making the Connection," a report that Ceres, a coalition of investors and environmentalists, expects will influence investment decisions.

The report, released yesterday, scored 100 global corporations — 74 of them based in the United States — on their strategies for curbing greenhouse gases. It covered 10 industries — oil and gas, chemicals, metals, electric power, automotive, forest products, coal, food, industrial equipment and airlines — whose activities were most likely to emit greenhouse gases. It evaluated companies on their board oversight, management performance, public disclosure, greenhouse gas emissions, accounting and strategic planning.

The report gave the chemical industry the highest overall marks, with a score of 51.9 out of a possible 100; DuPont, with 85 points, was the highest-ranking American company in any of the industries. Airlines, in contrast, ranked lowest, with a score of 16.6; UAL, the parent of United Airlines, received just 3 points.

The study gave General Electric, American Electric Power and Cinergy among the highest scores in their industries. But over all, it concluded, American companies "are playing catch-up" with international competitors like BP, Toyota, Alcan, Unilever and Rio Tinto.

"Dozens of U.S. businesses are ignoring the issue with 'business as usual' responses that are putting their companies, and their shareholders, at risk," said Mindy S. Lubber, president of Ceres and director of the Investor Network on Climate Risk, a group whose members control a total of $3 trillion in investment capital. "When Cinergy and American Electric Power are tackling this issue, and Sempra and Dominion Resources are not, that should be a red flag to investors."

Art Larson, a Sempra Energy spokesman, took exception to Sempra's score of 24. He said that Sempra, based in San Diego, had been "aggressive in promoting energy efficiency and procuring renewable energy sources," and that "in the area of environmental responsibility, Ceres seems to give more weight to words over action." Hunter Applewhite, a spokesman for Dominion, a big electric utility in Richmond, Va., that scored 27, said the company had no comment on its ranking.

Members of the Investor Network said they would take the report's conclusions seriously. "We need to continue to press poor-performing companies to clean up their act," said California's state treasurer, Phil Angelides, who is on the board of two pension funds that collectively manage more than $300 billion in assets.

Connecticut's state treasurer, Denise L. Nappier, who administers a $22 billion investment fund, lauded the report as an "unprecedented window into how companies most affected by climate risk are responding at the board level, through C.E.O. leadership and strategic planning."

The report does show progress since 2003, when a much smaller Ceres study concluded that most American companies were ignoring the threat of climate change. Since then, Ceres notes, Chevron Texaco has invested $100 million in developing cleaner fuels, Ford Motor introduced the first American hybrid car, American Electric Power has committed itself to "clean coal" technologies and G.E. has introduced its Ecomagination program stressing "green" products. And many companies including Dow Chemical, Anadarko Petroleum and Cinergy have board committees that oversee the curbing of greenhouse gases.

"More U.S. companies realize that climate change is an enormous business issue that they need to manage immediately," Ms. Lubber said.

Still, the top-scoring company, with 90 points, was BP, a British company that has said it will invest $8 billion in solar, wind and other clean-energy technologies in the next decade. "BP understands that all companies must work to reduce their carbon footprint, starting with fossil fuels," Ms. Lubber said.

Tuesday, March 21, 2006

Are Economic Booms Bad for Your Health?

When I was in graduate school, I was taught that economics is the study of incentives and their intended and unintended consequences. Today, economics is morphing into an empirical field where we challenge the "conventional wisdom."

This paper http://www.nber.org/papers/w12102 offers some novel freakonomics.

A Healthy Economy Can Break Your Heart

Christopher J. Ruhm

NBER Working Paper No. 12102
Issued in March 2006
NBER Program(s): HC

---- Abstract -----

Panel data econometric methods are used to investigate how the risk of death from acute myocardial infarction (AMI) varies with macroeconomic conditions after controlling for demographic factors, fixed state characteristics, general time effects and state-specific time trends. The sample includes residents of the 20 largest states over the 1979 to 1998 period. A one percentage point reduction in unemployment is predicted to raise AMI mortality by 1.3 percent, with a larger increase in relative risk for 20-44 year olds than older adults, particularly if the economic upturn is sustained. Nevertheless, the much higher absolute AMI fatality rate of senior citizens implies that they account for most of the additional deaths. This suggests the importance of factors like air pollution and traffic congestion that increase with economic activity, are linked to coronary heart disease and may have particularly strong effects on vulnerable segments of the population, such as the frail elderly. AMI mortality risk quickly rises when the economy strengthens and increases further if the favorable economic conditions persist. This is consistent with strong effects of other short-term factors on heart attack risk and with health being a durable capital stock that is affected by flows of lifestyle behaviors and environmental conditions whose effects accumulate over time.


What could be the causal mechanism here? One plausible environmental story involves electric utilities. During boom times, high-emitting, low-productivity power plants may be brought online to ramp up the supply of power. Such scale effects could increase ambient air pollution. The author clearly has a nice empirical fact, but not yet a great story for the true data-generating process.

I have not read this paper, but it would interest me whether he has a placebo control group, such as the death rate for a disease with no environmental component. If he could show that business cycles have no effect on this control group's death rate, then I would be more convinced of his "environmental" hypothesis that pollution and congestion during booms are killing people.
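The within-estimator logic in the abstract (state fixed effects plus year effects) can be sketched on synthetic data. This is an illustration of the method, not Ruhm's actual data or specification; all numbers here are made up:

```python
# Illustrative sketch (not Ruhm's data or specification) of the panel
# idea in the abstract: regress state-year mortality on unemployment
# after sweeping out state fixed effects and year effects via the
# within transformation on a balanced panel.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_years = 20, 20
state_fe = rng.normal(0, 1, n_states)[:, None]   # fixed state characteristics
year_fe = rng.normal(0, 1, n_years)[None, :]     # general time effects
unemp = rng.normal(6, 1.5, (n_states, n_years))  # unemployment rate, percent
beta = -1.3                                      # lower unemployment -> higher AMI mortality
mortality = beta * unemp + state_fe + year_fe + rng.normal(0, 0.5, (n_states, n_years))

def demean_two_way(x):
    """Within transformation: remove state means and year means, add back the grand mean."""
    return x - x.mean(axis=1, keepdims=True) - x.mean(axis=0, keepdims=True) + x.mean()

y, u = demean_two_way(mortality), demean_two_way(unemp)
beta_hat = (u * y).sum() / (u * u).sum()  # pooled OLS slope on demeaned data
print(round(beta_hat, 2))
```

With the fixed effects swept out, the estimated slope recovers the negative "booms kill" coefficient built into the synthetic data.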

Monday, March 20, 2006

A Benevolent Planner's Problem

Suppose that you are Paul Krugman. You are a benevolent planner who wants to maximize the well-being of all 300 million people in the United States. Where should these people live, and at what population density? Would you build 100 cities of 3 million people each, at the density of Houston or of downtown New York City? Would you build 10 megacities, or would you distribute the population uniformly across space?

Economists are always interested in how closely the outcome we see in our market economy approximates the "ideal outcome" that a benevolent, all-knowing planner could achieve.

To solve the planner's problem you would have to grapple with the question of the costs and benefits of cities and how these private benefits and social costs change as a function of city size. Ed Glaeser's 1998 Journal of Economic Perspectives piece ("Are Cities Dying?") offers a clear overview of the economic forces encouraging and discouraging the formation of big cities. Vern Henderson also posts several useful urban papers here:
http://www.econ.brown.edu/faculty/henderson/papers.html

If you would like to see my new paper: "Quality of Life in Sprawled and Compact Cities" a .pdf is available here:
http://fletcher.tufts.edu/faculty/kahn/publications.asp

Sunday, March 19, 2006

Is Chicago an Outlier?

Most cold cities have not performed well over the last 30 years. What is it about New York City, Boston and Chicago, relative to other cities such as Cleveland, Detroit and Philadelphia? How do we explain why the former set has boomed relative to the latter?

My future colleague Dan Drezner provides some exciting analysis here.
http://www.danieldrezner.com/archives/002638.html

I was born in Chicago. I lived in NYC from 1973 to 1984, in Chicago from 1988 to 1993, and in Boston from 1996 to 1998 and from 2000 until now, so I think I have some street credibility on this subject.

1. Michael Jordan changed the course of Chicago, transforming it into a "cool" city.

2. Boston, NYC and Chicago all have serious universities nestled in them; this provides a constant flow of intellectuals and serious young people into these cities.

3. NYC and Chicago are immigrant hotbeds, and this fosters the Jane Jacobs diversity element.

4. Boston, NYC and Chicago are all cold, but they strike me as "green cities" with many communities offering a high quality of life.


Ed Glaeser has posted to his Harvard webpage historical essays on the long-run dynamics of Boston and New York City. You should read these papers if you want details on the long-run cycles of these cities.

If education is the key to local economic growth, then Boston, Chicago and NYC's success has been based on attracting and retaining the high-skilled to live their lives in these cold cities. Quality of life and economic opportunity both play a key role in achieving that goal.

Wednesday, March 15, 2006

Globalization and the "Weakest Link"

Can public health epidemics such as SARS stop commerce in a major city? This article in the Times argues that globalized companies face the risk of losing access to their workforces in other countries if a disease spreads, but that these companies have not planned for this contingency.

A free-market view of public health investments would argue that if globalized firms can credibly threaten to direct less FDI to nations where public health outbreaks are likely, then the governments of these nations have an incentive to invest more in public health to reduce the probability of disease outbreaks.

My point is similar to the public finance literature on when local governments are disciplined not to raise taxes or provide low-quality public services. If firms can "vote with their feet" and locate elsewhere, then capital mobility can be a driver of improved governance in LDC nations.



March 16, 2006
Is Business Ready for a Flu Pandemic?
By ELISABETH ROSENTHAL and
KEITH BRADSHER

Governments worldwide have spent billions planning for a potential influenza pandemic: buying medicines, running disaster drills, developing strategies for tighter border controls. But one piece of the plan may be missing: the ability of big corporations to continue to provide vital services.

Airlines, for instance, would have to fly health experts around the world and overnight couriers would have to rush medical supplies to the front lines. Banks would need to ensure that computer systems continued to move money internationally and that local customers could get cash. News outlets would have to keep broadcasting so people could get information that might mean the difference between life and death.

"I tell companies to use their imagination to think of all the unintended consequences," said Mark Layton, global leader for enterprise risk services at Deloitte & Touche in New York. "Will suppliers be able to deliver goods? How about services they've outsourced — are they still reliable?"

Experts say that many essential functions would have to continue despite the likelihood of a depleted work force and more limited transportation. Up to 40 percent of employees could be sick at one time.

Indeed, the return of the bird migration season has touched off new worries over how a serious outbreak could interrupt business in many parts of the world simultaneously, perhaps for months on end.

The World Health Organization has confirmed 173 cases of the avian flu virus in humans, most of whom had close contact with diseased birds. Of those, 93 people died, almost all of them in Asia. Vietnam has been particularly hard hit. In January, though, the first human cases were confirmed in Turkey — far from the origin of the virus in central China.

And in recent weeks, officials in several European and African countries have confirmed the virus in wild or domestic flocks of birds. While avian influenza does not now readily infect humans or spread among them, scientists are worried that the virus could soon acquire that ability through normal biological mixing, setting off a disastrous human pandemic.

Yet despite this threat, many companies have only rudimentary contingency plans in place. In a survey of more than 100 executives in the United States by Deloitte & Touche, released this January, two-thirds said their companies had not yet prepared adequately for avian flu, and most had no one specifically in charge of such a plan.

"Business is not prepared for even a moderate avian flu epidemic," the report concluded.

In contrast, corporations in Southeast Asia have made more headway, in part because the avian influenza virus has been circulating in birds in Asia for years. Also, Asian companies learned in the 2003 outbreak of Severe Acute Respiratory Syndrome, or SARS, that even a small infectious outbreak could have devastating economic consequences, bringing commerce in Hong Kong, Singapore and Beijing to a near standstill.

A recent survey of 80 corporate officials at an avian flu seminar held by the American Chamber of Commerce in Hong Kong found that nearly every company had someone in charge of avian flu policy, and 60 percent had clearly stated plans that could be put in place immediately. These included provisions for employees to work at home to prevent the spread of disease in the office, and for relaying warnings to workers by text messages to mobile phones.

The lack of corporate preparedness elsewhere has "enormous implications," the Deloitte report said.

"A pandemic flu outbreak in any part of the world would potentially cripple supply chains, dramatically reduce available labor pools," the report said. "In a world where the global supply chain and real-time inventories determine most everything we do, down to the food available for purchase in our grocery stores, one begins to understand the importance of advanced planning."

Among the prepared, HSBC, a global bank that started as the Hongkong and Shanghai Bank and remains the dominant bank in Hong Kong, has an especially detailed plan for avian flu, drawing on its experience with SARS. The company has been making preparations for employees to work from home, but is also preparing to divide work among multiple sites, an approach that appeared in only 37 percent of the plans in the American Chamber survey.

The hope is that if the flu races through the staff at one site, another site may be spared. During SARS, the bank activated an emergency center at the opposite end of Hong Kong's harbor and sent 50 bond traders there with instructions that they were not to see anyone from the head office even at social occasions.

In the survey of companies conducted by the American Chamber of Commerce in Hong Kong, provisions of corporate contingency plans ranged from allowing some employees to work from home — the most popular strategy, included in 72 percent of avian flu contingency plans — to the outright closing of offices, included in 32.5 percent of plans.

Other methods to prevent spread include canceling face-to-face meetings in favor of teleconferencing, and installing germ-killing hand washes in offices. Many companies also proposed more stringent health monitoring of employees, families and company visitors. In Singapore, throughout the SARS outbreak, many businesses required temperature checks before entering buildings, as a way to screen out those who might be ill.

Some of the most important planning involves not employee health, but how to continue to deliver vital services in a crisis. Time Warner's Cable News Network is making preparations to stay on the air from different locations.

"If there should be something that quarantines the production center here in Hong Kong, we could hand off to London and Atlanta," Stephen Marcopoto, president of Turner International Asia Pacific, a Time Warner unit in Hong Kong, said.

Time Warner is also working to create a mechanized cart that could automatically load tape after tape into a satellite transmission system, so it could keep stations like Cartoon Network on the air — a boon if children were homebound for months.

But many corporate plans are painted in fairly broad brush strokes, part of general disaster planning. And many companies refuse to discuss details.

"As other global players, we have a global business continuity program in place that covers a wide range of contingencies, including flu pandemic," said Klaus Thoma, a spokesman for Deutsche Bank in Frankfurt, who said that details were privileged company information.

Likewise, FedEx, the express delivery service, has been "monitoring the situation for some time," said Sandra Munoz, a spokeswoman for the company in New York, noting that FedEx had "the flexibility within our system to make the necessary adjustments to minimize any impact to our customers, regardless of the situation." Without going into details, FedEx said that it had developed contingency plans "down to every district or market here in Asia Pacific," said John Allison, a company representative in Hong Kong.

But Mr. Layton says he is worried that many companies are not thinking about the unique problems that pandemic flu raises. "They are adapting existing risk-management strategies, which are fine, but they really have to go beyond that," he said.

But even in hot spots in Asia, not all companies are readying themselves for an outbreak. In the Guangdong Province in south China, which was the epicenter of SARS and where avian flu is already widespread, a recent survey by the American Chamber of Commerce found that 54 percent of members had made no preparations.

"We're trying to push them to develop plans," said Harley Seyedin, the Guangdong chamber's president.

Tuesday, March 14, 2006

Media Economics

I've been blogging for half a year now. At the start I had plenty to say, but now somehow I have less. I did want to mention a promising branch of research in economics focusing on the consequences of media coverage.

It is clear to me that the media play a key role in focusing social interactions: if the media are covering a story, then everyone starts talking about it. The Larry Summers excitement at Harvard is one prime example. The Harvard Crimson (the student newspaper) generated a ton of attention for itself as it became part of the story; its coverage fanned the flames as partisans on both sides used it to communicate their views on the issue.

The role of the media in day-to-day life raises a fundamental question of causality. When we see Fox News viewers vote Republican, is this selection or treatment? In English: do Republicans choose to watch shows that reinforce their worldviews, or does watching Fox turn a Jane Fonda or a Ted Kennedy into a Dick Cheney?

One recent Berkeley study claims that it is treatment, as the authors investigate how voting patterns change in communities that are "treated" with new access to the Fox network.

My own interest in media issues focuses on the media's role in shaping which environmental issues people are thinking about. If the media paid closer attention to climate change in the United States, would the typical voter demand greater action by Congress to reduce our greenhouse gas emissions?

In the graph below I report New York Times coverage of "oil spills" and "nuclear disasters" before and after the Exxon Valdez and Chernobyl shocks; note how well-known shocks affect the time series of media coverage, and then notice the decay. It looks to me like there is a "window of opportunity" for activists to make progress after such shocks, but then people move on and return to "business as usual."
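The shock-then-decay pattern in the coverage series can be sketched with a simple model: coverage sits at a baseline, jumps at the shock, and decays back geometrically. The parameters below are made up purely for illustration, not fitted to the Times data:

```python
# A minimal sketch of the "shock then decay" media-coverage pattern
# described above. All parameters are illustrative, not fitted values.

def coverage(t, shock_time, baseline=2.0, jump=50.0, half_life=12.0):
    """Monthly story count: baseline plus an exponentially decaying spike."""
    if t < shock_time:
        return baseline
    decay = 0.5 ** ((t - shock_time) / half_life)  # halves every `half_life` months
    return baseline + jump * decay

# Coverage spikes at month 6, then halves each year back toward baseline.
print(coverage(0, 6), coverage(6, 6), coverage(18, 6))  # 2.0 52.0 27.0
```

The half-life parameter is the "window of opportunity": once a couple of half-lives have passed, coverage is nearly back to baseline.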

Saturday, March 11, 2006

Can Network Theory Thwart Terrorists?

The goal of social science is to explain and predict human behavior. The goal of Homeland Security is to predict when and where the next terrorist attack will occur and to stop it. As the article below discusses, network theorists have been able to convince some Government officials that they may be good "detectives" for catching terrorists. Are they right?

Network theorists observe some data, such as who sends e-mail to whom, and from these patterns try to tease out "six degrees of separation." This descriptive research is fine, but here is my question. Suppose that the true members of the network know that they are being watched and are worried about being discovered. What SUBSTITUTION effects will they engage in? I will be quite impressed if Duncan Watts and friends can devise a feedback loop such that the network theorist can predict how self-interested people will adjust their behavior when they want to communicate with their network but not get caught. Structural econometric research is only in its infancy, but these are the questions such work tries to answer. What substitution patterns do economic agents have access to? When will they substitute? What data clues will outsiders observe if the terrorists switch communication modes or "login names"?
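The descriptive side of this work is easy to illustrate: build a graph from "who contacts whom" records and measure degrees of separation with breadth-first search. This is a toy sketch with made-up names, not any agency's actual method:

```python
# Toy sketch of the descriptive network analysis discussed above:
# build an undirected graph from contact records and compute degrees
# of separation via breadth-first search. Names are invented.
from collections import deque

def degrees_of_separation(edges, a, b):
    """Shortest path length (in hops) between a and b; None if unreachable."""
    graph = {}
    for u, v in edges:
        graph.setdefault(u, set()).add(v)
        graph.setdefault(v, set()).add(u)
    queue, seen = deque([(a, 0)]), {a}
    while queue:
        node, dist = queue.popleft()
        if node == b:
            return dist
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None

emails = [("ann", "bob"), ("bob", "carol"), ("carol", "dave")]
print(degrees_of_separation(emails, "ann", "dave"))  # 3
```

The substitution problem is exactly that this picture is static: if "bob" switches to a new login name, the observed graph fragments even though the underlying network is unchanged.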

This looks like the Hawthorne effect, doesn't it? As the observer watches the subject, the subject changes his behavior. Catching terrorists would be a lot easier if terrorists were dumb enough not to respond to increased monitoring by changing their communication patterns.

Hopefully network theorists have access to data from when the terrorists let their guard down and communicated "out in the open". Otherwise, this fancy theory is dead in the water because the bad guys will be moving faster than the nerds who are chasing them.


March 12, 2006
Idea Lab
Can Network Theory Thwart Terrorists?
By PATRICK RADDEN KEEFE

Recent debates about the National Security Agency's warrantless-eavesdropping program have produced two very different pictures of the operation. Whereas administration officials describe a carefully aimed "terrorist surveillance program," press reports depict a pervasive electronic net ensnaring thousands of innocent people and few actual terrorists. Could it be that both the administration and its critics are right? One way to reconcile these divergent accounts — and explain the administration's decision not to seek warrants for the surveillance — is to examine a new conceptual paradigm that is changing how America's spies pursue terrorists: network theory.

During the last decade, mathematicians, physicists and sociologists have advanced the scientific study of networks, identifying surprising commonalities among the ways airlines route their flights, people interact at cocktail parties and crickets synchronize their chirps. In the increasingly popular language of network theory, individuals are "nodes," and relationships and interactions form the "links" binding them together; by mapping those connections, network scientists try to expose patterns that might not otherwise be apparent. Researchers are applying newly devised algorithms to vast databases — one academic team recently examined the e-mail traffic of 43,000 people at a large university and mapped their social ties. Given the difficulty of identifying elusive terror cells, it was only a matter of time before this new science was discovered by America's spies.

In its simplest form, network theory is about connecting the dots. Stanley Milgram's finding that any two Americans are connected by a mere six intermediaries — or "degrees of separation" — is one of the animating ideas behind the science of networks; the Notre Dame physicist Albert-Laszlo Barabasi studied one obvious network — the Internet — and found that any two unrelated Web pages are separated by only 19 links. After Sept. 11, Valdis Krebs, a Cleveland consultant who produces social network "maps" for corporate and nonprofit clients, decided to map the hijackers. He started with two of the plotters, Khalid al-Midhar and Nawaf Alhazmi, and, using press accounts, produced a chart of the interconnections — shared addresses, telephone numbers, even frequent-flier numbers — within the group. All of the 19 hijackers were tied to one another by just a few links, and a disproportionate number of links converged on the leader, Mohamed Atta. Shortly after posting his map online, Krebs was invited to Washington to brief intelligence contractors.

Announced in 2002, Adm. John Poindexter's controversial Total Information Awareness program was an early effort to mine large volumes of data for hidden connections. But even before 9/11, an Army project called Able Danger sought to map Al Qaeda by "identifying linkages and patterns in large volumes of data," and may have succeeded in identifying Atta as a suspect. As if to underline the project's social-network principles, Able Danger analysts called it "the Kevin Bacon game."

Given that the N.S.A. intercepts some 650 million communications worldwide every day, it's not surprising that its analysts focus on a question well suited to network theory: whom should we listen to in the first place? Russell Tice, a former N.S.A. employee who worked on highly classified Special Access Programs, says that analysts start with a suspect and "spider-web" outward, looking at everyone he contacts, and everyone those people contact, until the list includes thousands of names. Officials familiar with the program have said that before individuals are actually wiretapped, computers sort through flows of metadata — information about who is contacting whom by phone or e-mail. An unclassified National Science Foundation report says that one tool analysts use to sort through all that data is link analysis.

The use of such network-based analysis may explain the administration's decision, shortly after 9/11, to circumvent the Foreign Intelligence Surveillance Court. The court grants warrants on a case-by-case basis, authorizing comprehensive surveillance of specific individuals. The N.S.A. program, which enjoys backdoor access to America's major communications switches, appears to do just the opposite: the surveillance is typically much less intrusive than what a FISA warrant would permit, but it involves vast numbers of people.

In some ways, this is much less alarming than old-fashioned wiretapping. A computer that monitors the metadata of your phone calls and e-mail to see if you talk to terrorists will learn less about you than a government agent listening in to the words you speak. The problem is that most of us are connected by two degrees of separation to thousands of people, and by three degrees to hundreds of thousands. This explains reports that the overwhelming number of leads generated by the N.S.A. program have been false positives — innocent civilians implicated in an ever-expanding associational web.

This has troubling implications for civil liberties. But it also points to a practical obstacle for using link analysis to discover terror networks: information overload. The National Counterterrorism Center's database of suspected terrorists contains 325,000 names; the Congressional Research Service recently found that the N.S.A. is at risk of being drowned in information. Able Danger analysts produced link charts identifying suspected Qaeda figures, but some charts were 20 feet long and covered in small print. If Atta's name was on one of those network maps, it could just as easily illustrate their ineffectiveness as it could their value, because nobody pursued him at the time.

One way to make sense of these volumes of information is to look for network hubs. When Barabasi mapped the Internet, he found that sites like Google and Yahoo operate as hubs — much like an airline hub at Newark or O'Hare — maintaining exponentially more links than the average. The question is how to identify the hubs in an endless flow of records and intercepted communications. Scientists are using algorithms that can determine the "role structure" within a network: what are the logistical and hierarchical relationships, who are the hubs? The process involves more than just tallying links. If you examined the metadata for all e-mail traffic at a university, for instance, you might find an individual who e-mailed almost everyone else every day. But rather than being an especially connected or charismatic leader, this individual could turn out to be an administrator in charge of distributing announcements. Another important concept in network theory is the "strength of weak ties": the most valuable information may be exchanged by actors from otherwise unrelated social networks.

Network academics caution that the field is still in its infancy and should not be regarded as a panacea. Duncan Watts of Columbia University points out that it's much easier to trace a network when you can already identify some of its members. But much social-network research involves simply trawling large databases for telltale behaviors or activities that might be typical of a terrorist. In this case the links among people are not based on actual relationships at all, but on an "affiliation network," in which individuals are connected by virtue of taking part in a similar activity. This sort of approach has been effective for corporations in detecting fraud. A credit-card company knows that when someone uses a card to purchase $2 of gas at a gas station, and then 20 minutes later makes an expensive purchase at an electronics store, there's a high probability that the card has been stolen. Marc Sageman, a former C.I.A. case officer who wrote a book on terror networks, notes that correlating certain signature behaviors could be one way of tracking terrorists: jihadist groups in Virginia and Australia exercised at paint-ball courses, so analysts could look for Muslim militants who play paint ball, he suggests. But whereas there is a long history of signature behaviors that indicate fraud, jihadist terror networks are a relatively new phenomenon and offer fewer reliable patterns.

There is also some doubt that identifying hubs will do much good. Networks are by their very nature robust and resistant to attack. After all, while numerous high ranking Qaeda leaders have been captured or killed in the years since Sept. 11, the network still appears to be functioning. "If you shoot the C.E.O., they'll hire another one," Duncan Watts says. "The job will still get done."

Patrick Radden Keefe, a Century Foundation fellow, is the author of "Chatter: Dispatches from the Secret World of Global Eavesdropping."

Friday, March 10, 2006

Why Do Suburbanites All Have Green Lawns?

Economists are quite interested in social interaction (i.e., contagion) models. If your neighbors are unemployed, does this have a causal impact on raising the probability that you are unemployed? During the U.S. Civil War, if your fellow soldiers started deserting during combat, did this raise the probability that you deserted?

Below I report a wackier social interaction model: do suburbanites have green grass lawns because this is the social norm and they want to fit in? I have found that in my old suburban town of Belmont, MA, the neighbors treated us better and were more welcoming once we did a better job of cutting our grass and "fitting in". I would call this being "shamed by the Joneses".

Returning to the Green Cities theme, what are the environmental costs of this lawn craze? The book reviewer is unhappy with how the author addresses this question.


March 10, 2006
Books of The Times | 'American Green'
Why Grass Really Is Always Greener on the Other Side
By WILLIAM GRIMES

A couple of years ago, a homeowner in Seattle decided to take extreme action against the moles that had turned his lawn into a complex network of raised grassy veins. He poured gasoline into the mole holes, tossed a match and incinerated his yard.

Many of the approximately 60 million Americans with lawns can understand the feeling. A well-tended yard is not only personal territory, to be defended unto death, but also a work of art. Like a painting, it has form and color. Like a child, it is alive. No wonder feelings run high, and the lawn, as a canvas for personal expression, engages the suburban American male at the deepest possible level. Americans like Jerry Tucker, who turned his yard into a replica of the 12th hole at Augusta National Golf Club.

The often-crazed love affair between Americans and their lawns is Ted Steinberg's subject in "American Green." Mr. Steinberg, an environmental historian at Case Western Reserve University in Cleveland, likens this relationship, and the insane pursuit of lawn perfection, to obsessive-compulsive disorder, and he may very well be right. That would at least explain the behavior of a homeowner who clips her entire front yard with a pair of hand shears, or Richard Widmark's reaction on waking up in the hospital after a severe lawn mower accident in 1990. "The question I asked the doctors was not 'Will I ever act again?' " he later recalled, "but 'Will I ever mow again?' "

How did a plant species ill suited to the United States, and the patrician taste for a rolling expanse of green take root from the shores of the Atlantic to the desiccated terrain of Southern California? The short answer is that it didn't, not until after the Civil War. Although Washington and Jefferson had lawns, most citizens did not have the hired labor needed to cut a field of grass with scythes. Average homeowners either raised vegetables in their yards or left them alone. If weeds sprouted, fine. If not, that was fine, too.

Toward the end of the 19th century, suburbs appeared on the American scene, along with the sprinkler, greatly improved lawn mowers, new ideas about landscaping and a shorter work week. A researcher investigating the psychology of suburbanites in 1948 observed shrewdly that the American work ethic coexisted uneasily with free time, and that "intense care of the lawn is an excellent resolution of this tension." At least until the moles arrive.

Mr. Steinberg cannot decide whether he is writing a cultural history, an environmental exposé or a series of Dave Barry columns. As cultural history, "American Green" is relentlessly superficial, a grab bag of airy generalizations and decrepit clichés about the cold war and the conformist 1950's. As environmental exposé, it is confused and poorly explained. It is impossible, reading Mr. Steinberg on lawn-care products, to assess risks. At times, it sounds as if any homeowner spreading the standard lawn fertilizers and herbicides might as well take out a gun and shoot his family. A few pages later, the environmental threat seems trivial.

Sometimes, he simply punts. Building a case against power mowers, which Mr. Steinberg regards as unsafe at any speed, he introduces the story of a "lawn professional" who lost the fingers on both hands while trying to keep a wayward mower from rolling into a lake. This might be a damning piece of evidence if Mr. Steinberg did not then add, sheepishly, that "perhaps this is a suburban legend." Half-serious, intellectually incoherent, "American Green" shambles along like this, scattering bits and pieces of history, sociology and consumer advice as it goes.

There are just enough fascinating bits to keep the pages turning. It is gratifying to learn that grass really is greener on the other side of the fence. An observer looking down at his own lawn sees brown dirt along with green grass blades, but only grass blades next door, because of the angle of vision. It is useful to focus on one of the pet claims of the lawn-care industry, that a lawn 50 feet square produces enough oxygen to satisfy the respiratory needs of a family of four. This is probably true, but, as Mr. Steinberg points out, superfluous, since there is no oxygen shortage on Earth.

Mr. Steinberg does make the case fairly convincingly that the pursuit of the perfect lawn cannot be explained without golf, which has played on the homeowner's weak sense of self-esteem by rubbing his face in fantasy images. Perfection at Augusta requires a team of specialists and a multimillion-dollar investment in infrastructure. The average golf green gets more pampering and primping than Heidi Klum's cheekbones, but that is the lawn that suburbanites want. Companies like Scotts have convinced them that to achieve it, they need to follow a regimen of constant seeding, watering, fertilizing and herbiciding.

The future looks troubled for the American lawn. Some homeowners have given up entirely, paving over their yards to create more parking space. Others are embracing the native-plant movement and turning their lawns into miniature prairies and meadows. Nellie Shriver, of the Fruitarian Network, stopped mowing for moral reasons. "It is impossible to mow the grass without harming it," she said. "We believe grass has some sort of consciousness, that it has feelings."

Even more alarming, for the lawn-care industry, is the kind of post-lawn sensibility exhibited by an Atlanta real estate broker. "When something bores me, I get rid of it," she said. "Lawns bore me."

Wednesday, March 08, 2006

The Benefits of Sprawl

In late March, I'll fly to Berkeley to participate in an OECD Roundtable on sprawl. Below, I report a draft of the paper. I apologize that I don't show you the tables or figures; I can't figure out how to upload a .pdf file, so I simply pasted the text in. I'm trying to signal that not only can bloggers blog, but we can also write "real" papers. I will let the market judge the value of this piece, but I enjoyed writing it.


The Quality of Life in Sprawled versus Compact Cities

Matthew E. Kahn

Tufts University

March 2006


Introduction

Today, most Americans who live in metropolitan areas live in single detached homes and commute to work by automobile. New York City is America’s sole urban center where a significant fraction of the population lives in apartment buildings, works downtown and commutes by public transit. As transportation costs continue to decline and household incomes rise, we are choosing sprawl as we live and work in the suburbs.
The conventional wisdom is that this trend imposes major social costs relative to its benefits. An advanced Google search reveals 39,500 entries for the exact phrase "costs of sprawl" but only 455 entries for the exact phrase "benefits of sprawl". The beneficiaries of sprawl may be a "silent majority" who are not as politically active as center city boosters, environmentalists and advocates for the urban poor in voicing their views on the merits of the ongoing decentralization of jobs and people taking place across cities in the United States.
This paper seeks to address this intellectual imbalance by presenting original empirical work documenting some of the benefits of living in a sprawled metropolitan area. Using a number of U.S. data sets, I explore how sprawl improves quality of life, focusing on how sprawl affects firms, workers and consumers.
Opponents of sprawl often argue that suburbanization may offer private benefits but that it imposes social costs. The "costs of sprawl" literature posits that the pursuit of the "American Dream" has many unintended consequences, ranging from increased traffic congestion, urban air pollution, greenhouse gas production and farmland paving to reduced center city tax revenues and diminished access to employment opportunities for the urban poor. The last section of the paper argues that environmental regulation, new markets, and technological advances have helped to mitigate several of the social costs of sprawl.

Measuring Sprawl in the United States

The first step in comparing quality of life indicators across compact versus sprawled cities is objective data that allow major cities to be classified by "sprawl category". A 2005 study by Reid Ewing, Rolf Pendall and Don Chen for Smart Growth America creates such data for 83 major U.S. metropolitan areas in the year 2000 (see www.smartgrowthamerica.org). These areas represent nearly half of the nation's population. Table One lists these areas and reports their "compactness" ranking, from most sprawled to least sprawled. The authors base their sprawl index on four factors: residential density; neighborhood mix of homes, jobs, and services; strength of activity centers and downtowns; and accessibility of the street network.
As discussed by Ewing, Pendall and Chen (2005), "The most sprawling metro area of the 83 surveyed is Riverside, California, with an Index value of 14.22. It received especially low marks because it has few areas that serve as town centers or focal points for the community: for example, more than 66 percent of the population lives over ten miles from a central business district; it has little neighborhood mixing of homes with other uses: one measure shows that just 28 percent of residents in Riverside live within one-half block of any business or institution; its residential density is below average: less than one percent of Riverside's population lives in communities with enough density to be effectively served by transit; its street network is poorly connected: over 70 percent of its blocks are larger than traditional urban size."
It is important to note that even in compact metropolitan areas such as New York City, there is significant suburban growth at the fringe. A broader definition of the New York City metropolitan area would include large pieces of New Jersey and Connecticut.
In previous research, I have used the share of employment within a certain radius of the CBD as my prime measure of sprawl (see Kahn 2001, Glaeser and Kahn 2004). The Ewing, Pendall, Chen (2005) measure is more comprehensive and offers an independent measure of sprawl.
Throughout this paper, I use their compactness index (see Table One) to partition metropolitan areas into four groups (i.e., high sprawl, sprawl, low sprawl, very low sprawl). The most sprawled metro areas are those whose compactness index lies between the 0 and 25th percentiles of the empirical distribution listed in Table One. The least sprawled metro areas are those in the top quartile of the empirical distribution. This simple classification system allows me to compare outcome indicators in low sprawl versus high sprawl areas.
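The quartile-based classification described above can be sketched in a few lines of code. This is only an illustration: Riverside's index value (14.22) comes from the text, but the other metro index values below are made-up placeholders, not the actual Ewing, Pendall and Chen (2005) data.

```python
def classify_by_compactness(metros):
    """metros: dict mapping metro name -> compactness index
    (higher = more compact, as in the Ewing-Pendall-Chen index)."""
    values = sorted(metros.values())
    n = len(values)
    # 25th, 50th and 75th percentile cutoffs of the empirical distribution
    q1, q2, q3 = values[n // 4], values[n // 2], values[(3 * n) // 4]

    def label(idx):
        if idx < q1:
            return "high sprawl"       # bottom quartile of compactness
        if idx < q2:
            return "sprawl"
        if idx < q3:
            return "low sprawl"
        return "very low sprawl"       # top quartile of compactness

    return {name: label(idx) for name, idx in metros.items()}

# Riverside's value is from the text; the rest are hypothetical.
sample = {"Riverside": 14.22, "Atlanta": 57.7, "Phoenix": 70.0,
          "Houston": 75.0, "Chicago": 121.0, "Portland": 126.1,
          "Boston": 127.0, "New York": 178.0}
print(classify_by_compactness(sample))
```

Riverside, with the lowest compactness value, lands in the "high sprawl" group, while a high-index metro like New York lands in "very low sprawl", matching the partition used in the paper.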
Outcome Measures

Ideally, I would like to observe how people who currently live in sprawled cities would have lived their lives had they lived in a compact city. This counter-factual would allow me to measure how sprawl affects household wellbeing. If this information could be combined with preference information on how much people are willing to pay for such amenities as a short commute or a nice house then it would be straightforward to estimate the benefits of sprawl. In reality, this counter-factual can only be approximated by examining the outcomes for observationally similar people who live in high sprawl and compact cities.

Housing Consumption

The 2003 American Housing Survey (AHS) micro data set is a representative national sample for examining housing consumption in high sprawl and low sprawl cities. Over 20,000 people are sampled. Using the geographical identifiers in this database, I merge the metropolitan area sprawl measures into the micro data. For 77 major metropolitan areas, I examine housing consumption in compact versus sprawled cities.
In Table Two, I focus on home ownership propensities and land consumption as a function of urban form. As shown in the top row of Table Two, home ownership rates are 8.5 percentage points higher in the most sprawled cities relative to the most compact cities. In compact cities, the median household lives on a lot that is 40% smaller than that of the median household in a sprawled city (i.e., the 0 to 25th percentile of the compactness distribution). The differential with respect to interior square footage is smaller: the median household in a compact city lives in a unit with 158 fewer square feet than the median household in a sprawled city.
While there are clear housing consumption gains for households in sprawled metropolitan areas, these observable differentials do not reveal how much households value such gains. The population differs with respect to its housing preferences. Those people with the greatest taste for large single detached housing will migrate to cities and areas where they can cheaply achieve their housing goals.
Some cities, such as New York City, remain compact by maintaining a large share of employment downtown. Other cities have increased their compactness by fighting sprawl through Smart Growth land use controls. A political economy literature has examined the distributional effects of who gains and who loses when cities battle sprawl (Katz and Rosen 1987, Portney 2002, Glaeser, Gyourko and Saks 2006, Quigley and Raphael 2005). Incumbent homeowners gain twice from such anti-growth policies. By limiting increases in housing supply, these policies raise the value of existing homes. If these policies increase the quality of life of the city, then this further increases the demand for the existing homes.
Who loses from "Smart Growth" policies? It is well known that minority homeownership rates have lagged behind those of whites (see Collins and Margo 2001). Part of this gap is due to differentials in wealth accumulation. In previous research, I have documented that blacks who live in sprawled cities "catch up" to whites on some housing consumption dimensions relative to the black/white housing consumption differential in compact cities (Kahn 2001). In Table Three, I present some new evidence on this question. I use the 2003 AHS data and focus on one measure of housing consumption: the number of rooms in the housing unit. I use multivariate regression techniques (i.e., ordinary least squares) to control for such important demographic features as household income, household size, and the presence of children. Controlling for these factors, I examine how urban form affects housing consumption.
As shown in Table Three, an increase in the metropolitan area compactness index reduces minority household housing consumption. This estimate is statistically significant. For white households, the compactness index has a negative but small and statistically insignificant coefficient. Moving the average minority household from a high sprawl city (Atlanta) to a low sprawl city (Portland) would reduce its rooms consumption by -.52 = -.6658*log(126.1/57.7). These results support the hypothesis that sprawl encourages housing convergence. Why could this be? Housing is more affordable in high sprawl areas. Such areas are not erecting entry barriers, and developers are building homes. Future work might study whether immigrant housing consumption in European cities is closer to that of natives in less compact cities.
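The Atlanta-to-Portland calculation above can be verified with a quick back-of-the-envelope check. The coefficient (-0.6658) and the compactness index values (Atlanta 57.7, Portland 126.1) are taken from the text; "log" is the natural logarithm, as is standard for a log-linear OLS specification.

```python
import math

beta = -0.6658                    # OLS coefficient on log(compactness index)
atlanta, portland = 57.7, 126.1   # compactness index values from the text

# Predicted change in rooms from moving to the more compact city
change_in_rooms = beta * math.log(portland / atlanta)
print(round(change_in_rooms, 2))  # -> -0.52 rooms
```

The result, about half a room less for the average minority household, matches the -.52 figure reported in the text.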

Commute Times

Are commute times higher or lower in compact cities? In compact cities, people are likely to live closer to their downtown jobs, but they are also more likely to commute by relatively slow public transit. In a monocentric city, workers who commute by private vehicle are likely to slow each other down as each imposes congestion externalities on the others. In contrast, in sprawled metropolitan areas featuring multiple employment centers, workers commute by private vehicle at faster speeds (Gordon, Kumar and Richardson 1991, Crane 2000).
To begin to examine these issues, I use commute data from the 2003 American Housing Survey. This data set reports the distance to work and commute time for heads of households. In Table Two, I report summary statistics for workers in compact versus sprawled cities. Relative to workers in compact cities, workers in sprawled cities commute an extra 1.8 miles each way, but their commute is 4.3 minutes shorter. Over the course of a year (400 trips), they save 29 hours. While workers living in sprawled cities have a longer commute measured in miles, they are commuting at higher speeds: Table Two shows that workers in sprawled cities commute at a speed 9.5 miles per hour faster than workers in compact cities.
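The 29-hour annual saving quoted above follows directly from the per-trip figures in the text: 4.3 minutes saved on each of roughly 400 one-way trips per year.

```python
# Annual commute-time saving implied by the Table Two differentials.
minutes_saved_per_trip = 4.3   # sprawled-city commute is 4.3 minutes shorter
trips_per_year = 400           # one-way trips per year, as assumed in the text

hours_saved = minutes_saved_per_trip * trips_per_year / 60
print(round(hours_saved))      # -> 29 hours per year
```

The product works out to about 28.7 hours, which rounds to the 29 hours reported in the text.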
The Neighborhood Change Database reports the share of census tract commuters who have a commute of less than 25 minutes, by year. In Figure One, I graph this with respect to the census tract's distance from the Central Business District (CBD). The figure shows that in both 1980 and 2000, the share of commuters with a short commute declines over the distance 0 to 10 miles from the CBD. Starting at the 11th mile from the CBD, the share of commuters with a short commute stops declining. This is strong evidence of the effect of sprawl: a large share of residents at such locations are not commuting downtown. Note the differential between the 1980 and 2000 graphs. Over these twenty years, suburban households (i.e., those living more than ten miles from the CBD) have experienced a large increase in the share of short commutes. For example, at ten miles from the CBD, between 1980 and 2000 there was more than a fifteen percentage point increase in the share of commuters with a commute of 25 minutes or less. This is strongly suggestive evidence of the commuting gains brought about by employment suburbanization. Employment sprawl has shortened commute times for suburban residents, who can commute faster over a shorter distance than if they worked downtown.

Additional Benefits of Sprawl

This section briefly highlights a variety of potentially important benefits of sprawl. Data limitations preclude presenting original data analysis measuring the size of each of these effects but I believe that each contributes to household well being in sprawled cities.

The Location of Employment Within the Metro Area

In the year 2000, only 21% of Atlanta’s jobs were located in zip codes within 10 kilometers of the CBD. In Boston, 52% of this area’s jobs were located within 10 kilometers of the CBD (Baum-Snow and Kahn 2005).
Firms gain by having the option of locating some of their employment away from the CBD, where land prices are high. The key reasons why firms choose particular locations include (1) land costs, (2) access to ideas, (3) access to workers, and (4) transport cost savings for inputs and output. For example, manufacturing industries, which are more land intensive, are more likely to decentralize, while skill intensive industries are less likely to decentralize (Glaeser and Kahn 2001). Those firms that gain from "Jane Jacobs" learning from other types of firms have an incentive to locate in diverse, high density downtowns.
Within firms, non-management occupations are increasingly being sited at the edge of major cities (Rossi-Hansberg, Sarte and Owens 2005). This cost savings increases firm profits. Firms that are able to split their activities between headquarters and production plants are likely to gain greatly from sprawl. Standard agglomeration forces encourage firms to only keep those workers at the center city headquarters who benefit from interactions in the denser downtown (Rossi-Hansberg, Sarte and Owens 2005).
Other firms may gain by being able to construct large campuses where members of the firm can interact across divisions. Microsoft's Redmond, Washington campus will be ten million square feet after it completes its expansion, with 12,000 workers there. Google now has 5,680 employees and is adding 1 million square feet to the 500,000 it now occupies in Mountain View, California.
There are at least two quality of life benefits from employment suburbanization. The previous section documented the reduction in commute times in suburban communities as more suburbanites now live closer to their jobs rather than commuting downtown. A second quality of life benefit of suburbanized employment is that it creates a type of separation of land uses. In the past, when cities were much more compact, millions of people lived too close to dirty, noisy manufacturing and slaughterhouse activity (Melosi 2001). Declining transportation costs have allowed a separation of where goods are produced and where people live.

Suburban Consumer Prices and the “Walmart” Effect

Walmart and other “superstores” could not exist in an urban world of compact cities with binding zoning laws. “Wal-Mart has sometimes had difficulty in receiving planning approval for its stores. Currently, Wal-Mart has either no presence or an extremely limited presence in New England, the New York metro area, California, and the Pacific Northwest. However, its expansion into new areas has proceeded over the past few years (Hausman and Leibtag 2005).”
These stores require large physical spaces and large parking lots to accommodate their inventory and to attract shoppers. Such stores offer one stop shopping and prices that can be 25% lower than regular supermarkets (see Hausman and Leibtag 2005). The diffusion of these stores may mean that the U.S. consumer price index overstates inflation because the index does not properly reflect the prices that people face for core goods. These stores are disproportionately located in suburban and rural areas where land is cheap; center city residents often drive to suburban locations to shop at them. While the popular media often reports stories critiquing Walmart's employee compensation and its role in driving smaller "mom and pop" stores out of business, it cannot be denied that consumers gain from having access to such stores. The key counter-factual here is: what prices would residents face in a compact monocentric city without Walmart and other superstores?

Local Government Competition and Services and Taxes

Relative to a compact city, a sprawled metropolitan area is likely to have more political jurisdictions, allowing households greater choice (Dye and McGuire 2000). Such political competition forces local jurisdictions to provide services such as garbage collection more efficiently per tax dollar spent than they would if they knew they were a monopolist. A central tenet of local public finance is that diverse households gain if they can “vote with their feet” and seek out communities that offer the local services and taxes that meet their needs. Households willing to pay high taxes for good local schools will move to certain communities that households with no children would not consider. Many rich people seek out suburban communities due to the housing stock, the local public goods offered, and the types of neighbors such communities attract.
A defining characteristic of cities in the United States is diversity. Such cities feature diversity with respect to ethnic groups and income inequality. The social capital literature has argued that an unintended consequence of the rise of ethnically diverse cities featuring significant income inequality is that people are less civically engaged (Costa and Kahn 2003, Alesina and La Ferrara 2005). In such cities, part of the attraction of living in the suburbs may be the opportunity to self-segregate into more homogeneous communities within the greater metropolitan area.
It is important to note that spatial separation of different groups within the same metropolitan area reduces the likelihood of social interactions, and this can have perverse consequences. In a sprawled city, if the heterogeneous population migrates and forms more homogeneous communities, with the poor in the center city and the wealthy in the suburbs, then bridging social capital across ethnic and income groups is less likely to take place. In this case, stereotypes can persist and collective action may be more challenging to achieve. In the past, when rich and poor clustered together in center cities, wealthy urbanites could not so easily escape the problems of their less fortunate neighbors. Pollution or disease spread easily and quickly from the tenements of the poor to the mansions of the rich. As a result, upper-bracket taxpayers were more likely to support policies that improved the living conditions of the worst off. For example, as Troesken (2004) points out, “In a world where blacks and whites lived in close proximity ‘sewers for everyone’ was an aesthetically sound strategy. Failing to install water and sewer mains in black neighborhoods increased the risk of diseases spreading from black neighborhoods to white ones.”
Today suburbanization has greatly increased the distance between the middle and upper middle class and the poor. Alesina and Glaeser (2004) investigate why Europe has more generous redistribution toward the poor than the United States. They argue that part of the explanation is the fact that the United States has a more diverse population. Another possible cause is sprawl. If U.S. cities were more compact, would U.S. taxpayers be willing to redistribute more because they would have greater contact with the poor?

Public Safety

Does sprawl protect the suburban rich from crime? If criminals have less access to cars, then physical distance from the urban poor is likely to reduce the risk that the relatively wealthy face.
It is true that over the last decade center city crime has sharply decreased (Levitt 2005). While the causes of these quality of life gains continue to be debated, the consequences of this trend are clearly visible. Center cities will be better able to compete for the skilled (especially those with few children living in the household) against suburbs if the city is perceived to be safe. The reduction in urban crime will differentially increase quality of life in more compact cities such as San Francisco and New York City.
Compact cities do face greater risks from terrorist attacks. While only a small share of any city’s population is killed even in very large attacks such as those of 9/11/2001, people do tend to over-estimate the probability of unlikely events (Rabin 2002). Sprawled cities are also less attractive targets for terrorists (Glaeser and Shapiro 2002, Savitch 2005). It is no accident that major terrorist attacks have taken place in dense cities, such as the World Trade Center attacks and the London bus bombings. A sprawled city offers terrorists fewer casualties and thus less media coverage.

Sprawl and Urban Quality of Life

The previous sections have focused on individual subcomponents of urban quality of life. I have made no attempt to prioritize which dimensions of urban quality of life are most important to people. The economics literature on compensating differentials has attempted to answer this question. The theory of compensating differentials says that it will be more costly to live in “nicer” cities (Rosen 2002). This theory is really a “no arbitrage” result. If migration costs are low across urban areas and if potential buyers are fully informed about the differences in non-market attribute bundles, then home prices and wages will adjust such that in nicer cities wages are lower and home prices are higher.
An enormous empirical literature has estimated cross-city hedonic price functions to recover the implicit compensating differentials for non-market goods. In these studies, the dependent variable is the price of home i in city j in community m in year t. Define X_it as home i’s physical attributes in year t. A_jt represents city j’s attributes in year t. Given this notation, a standard real estate hedonic regression takes the form:

Priceijmt = 0 + 1*Xit + 2*Ajt + ijmt (1)

Multivariate regression estimates of this equation yield estimates of the compensating differentials for city-level local public goods (based on β2). Intuitively, such estimates reveal how much higher home prices are for observationally identical homes in nice-climate areas (e.g., San Francisco) versus bad-climate areas (e.g., Houston).
In one prominent cross-city quality of life study, Gyourko and Tracy (1991) estimate equation (1) using 1980 data for 130 center cities. They use ordinary least squares estimates to construct a city quality of life index equal to β2*A_jt. In their empirical application, this A vector includes city attributes such as rainfall, cooling degree days, heating degree days, humidity, sunshine, wind speed, air pollution levels (measured by particulates), coastal access, cost of living, crime, the student-teacher ratio, insurance company ratings of the local fire department, hospital beds per capita, taxes, and population size. By estimating equation (1), Gyourko and Tracy provide index weights for the revealed relative importance of each of these factors in local quality of life. Intuitively, if a specific city attribute such as clean air is highly valued, then cities with clean air should feature higher home prices and pay lower wages.
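The estimation strategy behind equation (1) can be sketched on synthetic data. In this toy example, the “true” coefficients, the single home attribute, and the single city amenity are all invented for illustration; they are not estimates from the Gyourko and Tracy data.

```python
import random

random.seed(0)

# Synthetic version of equation (1): Price = b0 + b1*X + b2*A + e,
# where X is a home attribute (e.g., size) and A is a city amenity.
b0_true, b1_true, b2_true = 50.0, 2.0, 10.0

n = 2000
X = [random.uniform(10, 40) for _ in range(n)]   # home attribute
A = [random.uniform(0, 5) for _ in range(n)]     # city amenity (e.g., climate)
price = [b0_true + b1_true * x + b2_true * a + random.gauss(0, 5)
         for x, a in zip(X, A)]

def ols3(y, x1, x2):
    """Solve the normal equations (D'D)b = D'y for the design D = [1, x1, x2]."""
    n = len(y)
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    cols = [[1.0] * n, x1, x2]
    M = [[dot(ci, cj) for cj in cols] for ci in cols]   # D'D
    rhs = [dot(ci, y) for ci in cols]                   # D'y
    # Gauss-Jordan elimination on the 3x3 system.
    for i in range(3):
        piv = M[i][i]
        M[i] = [v / piv for v in M[i]]
        rhs[i] /= piv
        for j in range(3):
            if j != i:
                f = M[j][i]
                M[j] = [a - f * b for a, b in zip(M[j], M[i])]
                rhs[j] -= f * rhs[i]
    return rhs  # [b0_hat, b1_hat, b2_hat]

b0_hat, b1_hat, b2_hat = ols3(price, X, A)
# b2_hat is the implicit price of the city amenity; a Gyourko-Tracy-style
# quality of life index for a city would then be b2_hat * A_jt.
```

With enough observations, the recovered b2_hat approximates the true amenity coefficient, which is the compensating differential that the index weights are built from.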
Their city quality of life rankings are useful for me because they allow me to study whether more compact cities have higher quality of life. The Gyourko and Tracy index can be used to rank center cities with respect to their quality of life from best to worst. I am able to merge the Gyourko and Tracy data and the Ewing, Pendall and Chen (2005) data (see Table One) for 47 of the metropolitan areas. In Figure Two, I graph each city’s ranking, from best (#1) to worst (#130), as a function of the metropolitan area’s sprawl. This figure allows me to examine whether there is objective evidence that quality of life is higher in more compact cities. Figure Two shows that there is no relationship between quality of life and city compactness for this subset of major metropolitan areas. Put differently, center city quality of life is not lower in more sprawled metropolitan areas. It is important to note that Gyourko and Tracy do not explicitly measure housing consumption or commute times by metropolitan area. Instead, they focus on non-market local public goods such as climate, street safety, and public services. Figure Two shows that these non-market services are neither better nor worse in more sprawled cities. An interesting extension of this research would examine cities over time. In sprawling cities, do we see urban quality of life declining? As shown in Figure Two, at a point in time across cities there is little evidence supporting this hypothesis. In the next section, I will present some evidence that air pollution has not grown worse in growing cities.

Some of the Local Environmental Costs of Sprawl are Declining

Sprawl’s opponents are likely to concede that the “American Dream” offers private benefits. They would counter that suburbanization imposes important social costs that no one household has an incentive to internalize. This section seeks to examine some of these environmental costs.
Environmentalists often argue that sprawl contributes to a large ecological footprint because people consume more resources when they live at low density. Table Two presents some evidence supporting this claim. The 2001 National Household Transportation Survey reports how much gasoline each household consumes per year. Merging the city compactness index (see Table One) to these data, I examine gasoline consumption in compact and sprawled cities. As shown in Table Two, the average resident of compact cities consumes 335 fewer gallons of gasoline per year than the average resident of sprawled metropolitan areas. Within metropolitan areas, suburban drivers drive over 30% more miles than center city residents and are more likely to drive low fuel economy SUVs (Kahn 2000, 2006). The average Atlanta household would drive 25 percent fewer miles if it relocated to relatively compact Boston (Bento et al. 2005). As a result, there are significant differences in average gasoline consumption across the country. Cross-national studies suggest that gasoline consumption could be 20 percent to 30 percent lower in sprawling cities like Houston and Phoenix if their urban structure more closely resembled that of Boston or Washington, DC.
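The Table Two comparison is a simple group-mean calculation, sketched below on invented household records. The gallons figures are stand-ins for the 2001 NHTS microdata merged with the compactness index, not actual survey values.

```python
from statistics import mean

# Invented household records: (metro type, annual gallons of gasoline).
households = [
    ("compact", 780), ("compact", 640), ("compact", 900), ("compact", 710),
    ("sprawled", 1150), ("sprawled", 980), ("sprawled", 1240), ("sprawled", 1060),
]

# Group the households by metro type and average within each group.
by_type = {}
for metro_type, gallons in households:
    by_type.setdefault(metro_type, []).append(gallons)

avg = {k: mean(v) for k, v in by_type.items()}
gap = avg["sprawled"] - avg["compact"]   # extra gallons per sprawled-metro resident
```

In the actual NHTS merge this gap is the 335-gallon figure reported in the text.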
People are less likely to use public transit when they live in sprawled cities. This has environmental implications because public transit is a “greener” transport technology than private vehicles. To document this fact, I use census tract data from the Urban Institute and Census Geolytics’ Neighborhood Change Database. This is a set of repeated cross sections from the 1970, 1980, 1990 and 2000 decennial censuses at the census tract level normalized to 2000 tract geography. Census tracts are areas of roughly 4,000 people. Using GIS software, I calculate each census tract’s distance to the CBD and focus on those census tracts within 25 miles of the CBD for the metropolitan areas listed in Table One. As shown in the bottom two rows of Table Two, in 1970 6.8% of workers in sprawled metropolitan areas and 24.6% of workers in compact metropolitan areas commuted using public transit. In both areas, these shares shrank between 1970 and 2000. In the year 2000, 2.8% of workers in sprawled metropolitan areas and 17.1% of workers in compact metropolitan areas commuted using public transit.
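The GIS step described above, computing each tract centroid's distance to the CBD and keeping tracts within 25 miles, can be sketched with a haversine great-circle formula. The CBD point and tract centroids below are hypothetical coordinates, not the actual Neighborhood Change Database geography.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 3958.8 * 2 * asin(sqrt(a))   # mean Earth radius ~3958.8 miles

cbd = (41.8781, -87.6298)               # hypothetical CBD point
tract_centroids = [
    (41.90, -87.65),                    # close-in tract (~2 miles out)
    (42.30, -88.20),                    # exurban tract (well over 25 miles out)
    (41.60, -87.60),                    # suburban tract (~19 miles out)
]

within_25 = [t for t in tract_centroids
             if haversine_miles(*cbd, *t) <= 25.0]
```

The transit-share tabulations in Table Two would then be computed over only the tracts that pass this 25-mile filter.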
Income growth plays some role in explaining this trend. As household incomes increase, people are less likely to use public transit, which is typically slower than commuting by car. Car travel takes about two minutes per mile for commutes under five miles. In contrast, bus commuting takes more than three minutes per mile for commutes under five miles. In addition, the average bus commuter waits 19 minutes to board the bus. Using data from the 2000 Census of Population and Housing, I find that the probability of using public transit is 2.5 percentage points lower for a household at the 75th percentile of the income distribution ($65,339) than for a household at the 25th percentile ($41,159).
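The travel-time comparison above is back-of-the-envelope arithmetic, sketched here using the per-mile figures cited in the text; the 4-mile commute length is a hypothetical example.

```python
# Door-to-door minutes for short commutes (< 5 miles), per the text:
# roughly 2 min/mile by car vs. 3 min/mile by bus plus a 19-minute wait.

def car_minutes(miles):
    return 2.0 * miles

def bus_minutes(miles, wait=19.0):
    return wait + 3.0 * miles

car = car_minutes(4.0)   # 2 * 4 = 8 minutes
bus = bus_minutes(4.0)   # 19 + 3 * 4 = 31 minutes
```

Even before the density effects discussed next, this time gap alone makes transit unattractive to households whose value of time rises with income.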
However, sprawl also helps explain the decline in public transit ridership. Based on the same data, I find that simulating sprawl by moving a person from the 75th percentile of the population density distribution (2,528 people per square mile) to the 25th percentile (142 people per square mile) reduces public transit use by 8.6 percentage points.
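The density simulation above can be illustrated with a logit model for transit use. The coefficients below are hypothetical placeholders chosen only to show the mechanics of moving a person across density percentiles; they are not the estimates from the 2000 Census microdata.

```python
from math import exp

# Hypothetical logit coefficients for P(commute by public transit).
# Income is measured in $1,000s, density in people per square mile.
b0, b_income, b_density = -2.0, -0.004, 0.00045

def p_transit(income_k, density):
    z = b0 + b_income * income_k + b_density * density
    return 1.0 / (1.0 + exp(-z))   # logistic function

income_k = 50.0                        # hold income fixed
p_dense = p_transit(income_k, 2528)    # 75th percentile of density
p_sparse = p_transit(income_k, 142)    # 25th percentile of density
density_effect = p_dense - p_sparse    # predicted drop in transit use from "sprawl"
```

The predicted probability falls as density falls, which is the qualitative pattern behind the 8.6 percentage point figure in the text.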
Baum-Snow and Kahn (2005) examine public transit use trends in sixteen major United States cities that spent billions of dollars constructing new light rail and heavy rail lines between 1970 and 2000. They study whether the share of workers who commute using public transit increases in communities that have gained access to rail transit because they now live close to a new rail line. While they find some evidence of increased usage (especially in more compact cities such as Washington, D.C. and Boston), the observed “treatment” effects are small. New rail transit expansions are unlikely to encourage mode switching from vehicles to public transit. To reduce the ecological footprint of private vehicle use, induced innovation is needed to encourage producers to develop highly fuel-efficient vehicles and consumers to demand them. Expectations of high future gas prices would play a key role in providing incentives for both.

Air Pollution
A standard argument that environmentalists make about sprawl is that it contributes to urban air pollution. But new vehicle emissions regulation has offset increased vehicle mileage. The Los Angeles Basin suffers from the highest levels of air pollution in the United States, with the pollution caused mainly by vehicle emissions. Yet Los Angeles has made dramatic progress on air pollution over the last 25 years. For ambient ozone, a leading indicator of smog, the average of the top 30 daily peak one-hour readings across the county’s 9 continuously operated monitoring stations declined 55%, from 0.21 to 0.095 parts per million, between 1980 and 2002. The number of days per year exceeding the federal one-hour ozone standard declined by an even larger amount: from about 150 days per year at the worst locations during the early 1980s down to 20 to 30 days per year today.
Recent pollution gains are especially notable because Los Angeles County’s population grew by 29 percent between 1980 and 2000, while total automobile mileage grew by 70 percent (Census of Population and Housing 1980 and 2000; California Department of Transportation 2003). For air quality to improve while total vehicle mileage increases, emissions per mile of driving must have declined sharply over time.
To document this fact, I use two waves of the California Random Roadside Emissions tests spanning the years 1997 to 2002 to estimate vehicle-level emissions production functions (see Kahn and Schwartz 2006). Intuitively, I control for a number of vehicle characteristics, such as the vehicle’s mileage and the zip code of the vehicle owner. Holding these factors constant, I estimate how vehicle emissions vary as a function of vehicle model year. How much cleaner are 1990 model year vehicles relative to 1980 and 1975 vehicles?
In Figure Three, I present predicted vehicle emissions by model year holding all vehicle attributes at their sample means. For each of the three pollutant measures I normalize the predictions by dividing through by the predicted value for 1966 model year vehicles. The Figure shows sharp improvement with respect to model year and documents emissions progress even during years when new vehicle regulation did not tighten.
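The normalization behind Figure Three, dividing each model year's predicted emissions by the 1966 prediction, is sketched below. The grams-per-mile values are invented placeholders illustrating the calculation, not the Kahn and Schwartz (2006) estimates.

```python
# Hypothetical predicted hydrocarbon emissions (grams/mile) by model year,
# holding other vehicle attributes at their sample means.
predicted_hc = {1966: 9.0, 1975: 5.4, 1980: 2.7, 1990: 0.9, 2000: 0.2}

# Normalize each prediction by the 1966 model year value, as in Figure Three.
base = predicted_hc[1966]
normalized = {year: round(g / base, 3) for year, g in predicted_hc.items()}
# The 1966 entry is 1.0 by construction; later model years fall well below it.
```

Plotting `normalized` against model year reproduces the shape of Figure Three: a steep decline, with later vintages a small fraction of 1966 emission levels.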
The vehicle emissions progress by model year means that the average vehicle on the road in any calendar year is becoming greener over time. In each subsequent calendar year, there are fewer high-emitting pre-1975 model year vehicles on the roads. This greening of the average vehicle has greatly contributed to the reduction in ambient pollution despite ongoing city growth and increased vehicle mileage. To document this, I use ambient air pollution data from the California Ambient Air Quality Data CD, 1980-2002 (California Air Resources Board). This CD-ROM provides all air quality readings taken in the state during this time period. In Figure Four, I graph the percent change in ambient ozone smog over the years 1980 to 2000 against percent population growth for the 29 major California counties with populations greater than 200,000. Ambient ozone by county/year is measured as the maximum one-hour reading at each monitoring station within the county; I then average these station maxima within each county in each year.
Anti-sprawl advocates would argue that counties experiencing greater population growth should experience rising ambient air pollution. As shown in this figure, there is no correlation between county growth and ambient air pollution; the correlation equals -0.08. These major counties, even those such as Riverside that have experienced the greatest growth, have enjoyed large pollution reductions over this time period. The vehicle pollution progress documented in Figure Three has helped to offset the scale effects of California’s population growth.
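The Figure Four statistic is a simple Pearson correlation between county population growth and the change in ambient ozone. The county values below are invented stand-ins for the 29-county CARB/Census sample, so the resulting coefficient is illustrative only (the paper's actual estimate is -0.08).

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical counties: percent population growth and percent change in
# ambient ozone, 1980-2000.
pct_pop_growth = [10, 45, 80, 25, 60, 15, 35]
pct_ozone_change = [-40, -35, -45, -50, -30, -42, -38]

r = pearson(pct_pop_growth, pct_ozone_change)
```

Note that every invented county shows an ozone decline regardless of its growth rate, mirroring the paper's finding that even fast-growing counties enjoyed large pollution reductions.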

Open Space
In addition to greenhouse gases and ambient air pollution, a third environmental concern often voiced by sprawl opponents is the conversion of farm land. Farmers provide green space. Such green space is privatized when farmers sell their land to suburban developers. If nearby households value the open space, then farmers impose a negative externality on existing urban and suburban residents when they sell to a developer. Fortunately, new markets in land development rights have helped mitigate this problem.
Throughout the United States, municipalities are purchasing open space around their borders to guarantee that the land is not developed. For example, the city of Boulder, Colorado, has earmarked a 0.73 percent sales tax to fund the purchase of 25,000 acres to establish a greenbelt around the city. It has also set aside 8,000 acres in the Boulder foothills to be used as parks. Some of the Boulder open space is leased to farmers and remains in agricultural use. Other parcels are maintained as natural areas. This allows residents to enjoy recreational activities such as walking, bicycling, and horseback riding. In the Seattle metropolitan area, King County has adopted a different strategy with a similar goal. Drawing upon a $50 million bond issue, the county is purchasing development rights from farmers. Farmers gain an increase in their income and in return they promise not to convert their “green space” into suburbia (see Kahn 2006).
Such government initiatives solve a free rider problem. In the absence of government intervention, environmental organizations such as land trusts might go door to door, asking people to contribute money to help preserve open space. But few people are likely to give under these conditions. The “win-win” for any one household is to contribute nothing to such programs and let everyone else underwrite their cost. As a result, too little money is invested in protecting local public goods. Government’s unique ability to collect taxes and allocate revenue solves this problem. However, not all governments can take this approach: like many green policies, “open space” initiatives are more likely to succeed as local incomes rise. After studying voting patterns for all open space referenda in the United States between 1998 and 2003, Kotchen and Powers (2005) found that richer jurisdictions and jurisdictions with more homeowners were more likely to vote to hold such ballot initiatives and to enact them. Nearly 1,000 jurisdictions had open space referenda and nearly 80 percent passed. From an ecological perspective, the key issue here is whether jurisdictions hire competent ecologists who can prioritize what are the most valuable pieces of open space to purchase and protect.

Conclusion

Compact cities featuring all employment located in the Central Business District limit economic opportunities. There is significant diversity in the types of people and the types of firms. Firms that need large parcels of land to operate, and people who have a strong preference for their own large private plots of land, face significant tradeoffs if they must locate in compact cities. Sprawled cities offer both firms and households more choices. How much such economic actors gain from these additional choices is a complex question that merits future research. This paper has presented original evidence on some of the relevant margins. Similar to the United States, Europe faces an increase in the diversity of its workforce as immigration changes urban demographic patterns. Would sprawled cities offer greater opportunities for these newcomers?
This paper has attempted to present a balanced analysis of the private benefits and social costs both of compact and sprawled cities. Compact cities feature greater congestion and higher commute times while in sprawled cities certain global environmental externalities such as greenhouse gas production are likely to be exacerbated. Technological advance has mitigated many of the environmental problems associated with sprawl.
Today the diversity of major cities within the United States offers households a wide menu to choose from. People with a taste for “new urban” living can move to New York City, while those who want their own private space can move to Houston, Texas.
Do European cities feature “too little” sprawl? As documented in this paper, an unintended consequence of urban compactness is that the diversity of choices for consumers and firms shrinks. How much would these economic agents value increased choice?