Mostly Harmless Econometrics: An Empiricist's Companion

ISBN-10: 0691120358
ISBN-13: 9780691120355
Pub. Date: 12/15/2008
Publisher: Princeton University Press

Paperback

$41.20 (list price $45.00; save 8%)

Overview

The core methods in today's econometric toolkit are linear regression for statistical control, instrumental variables methods for the analysis of natural experiments, and differences-in-differences methods that exploit policy changes. In the modern experimentalist paradigm, these techniques address clear causal questions such as: Do smaller classes increase learning? Should wife batterers be arrested? How much does education raise wages? Mostly Harmless Econometrics shows how the basic tools of applied econometrics allow the data to speak.

In addition to econometric essentials, Mostly Harmless Econometrics covers important new extensions—regression-discontinuity designs and quantile regression—as well as how to get standard errors right. Joshua Angrist and Jörn-Steffen Pischke explain why fancier econometric techniques are typically unnecessary and even dangerous. The applied econometric methods emphasized in this book are easy to use and relevant for many areas of contemporary social science.

  • An irreverent review of econometric essentials
  • A focus on tools that applied researchers use most
  • Chapters on regression-discontinuity designs, quantile regression, and standard errors
  • Many empirical examples
  • A clear and concise resource with wide applications


Product Details

ISBN-13: 9780691120355
Publisher: Princeton University Press
Publication date: 12/15/2008
Edition description: New Edition
Pages: 392
Product dimensions: 5.50 (w) x 8.40 (h) x 1.20 (d) inches

About the Author

Joshua D. Angrist is professor of economics at the Massachusetts Institute of Technology. Jörn-Steffen Pischke is professor of economics at the London School of Economics and Political Science.

Read an Excerpt

Mostly Harmless Econometrics

An Empiricist's Companion
By Joshua D. Angrist and Jörn-Steffen Pischke

PRINCETON UNIVERSITY PRESS

Copyright © 2009 Princeton University Press
All rights reserved.

ISBN: 978-0-691-12035-5


Chapter One

Questions about Questions

"I checked it very thoroughly," said the computer, "and that quite definitely is the answer. I think the problem, to be quite honest with you, is that you've never actually known what the question is." Douglas Adams, The Hitchhiker's Guide to the Galaxy

This chapter briefly discusses the basis for a successful research project. Like the biblical story of Exodus, a research agenda can be organized around four questions. We call these frequently asked questions (FAQs), because they should be. The FAQs ask about the relationship of interest, the ideal experiment, the identification strategy, and the mode of inference.

In the beginning, we should ask, What is the causal relationship of interest? Although purely descriptive research has an important role to play, we believe that the most interesting research in social science is about questions of cause and effect, such as the effect of class size on children's test scores, discussed in chapters 2 and 6. A causal relationship is useful for making predictions about the consequences of changing circumstances or policies; it tells us what would happen in alternative (or "counterfactual") worlds. For example, as part of a research agenda investigating human productive capacity (what labor economists call human capital), we have both investigated the causal effect of schooling on wages (Card, 1999, surveys research in this area). The causal effect of schooling on wages is the increment to wages an individual would receive if he or she got more schooling. A range of studies suggest the causal effect of a college degree is about 40 percent higher wages on average, quite a payoff. The causal effect of schooling on wages is useful for predicting the earnings consequences of, say, changing the costs of attending college, or strengthening compulsory attendance laws. This relation is also of theoretical interest since it can be derived from an economic model.

As labor economists, we're most likely to study causal effects in samples of workers, but the unit of observation in causal research need not be an individual human being. Causal questions can be asked about firms or, for that matter, countries. Take, for example, Acemoglu, Johnson, and Robinson's (2001) research on the effect of colonial institutions on economic growth. This study is concerned with whether countries that inherited more democratic institutions from their colonial rulers later enjoyed higher economic growth as a consequence. The answer to this question has implications for our understanding of history and for the consequences of contemporary development policy. Today, we might wonder whether newly forming democratic institutions are important for economic development in Iraq and Afghanistan. The case for democracy is far from clear-cut; at the moment, China is enjoying robust economic growth without the benefit of complete political freedom, while much of Latin America has democratized without a big growth payoff.

The second research FAQ is concerned with the experiment that could ideally be used to capture the causal effect of interest. In the case of schooling and wages, for example, we can imagine offering potential dropouts a reward for finishing school, and then studying the consequences. In fact, Angrist and Lavy (2008) have run just such an experiment. Although their study looked at short-term effects such as college enrollment, a longer-term follow-up might well look at wages. In the case of political institutions, we might like to go back in time and randomly assign different government structures in former colonies on their independence day (an experiment that is more likely to be made into a movie than to get funded by the National Science Foundation).

Ideal experiments are most often hypothetical. Still, hypothetical experiments are worth contemplating because they help us pick fruitful research topics. We'll support this claim by asking you to picture yourself as a researcher with no budget constraint and no Human Subjects Committee policing your inquiry for social correctness: something like a well-funded Stanley Milgram, the psychologist who did pathbreaking work on the response to authority in the 1960s using highly controversial experimental designs that would likely cost him his job today.

Seeking to understand the response to authority, Milgram (1963) showed he could convince experimental subjects to administer painful electric shocks to pitifully protesting victims (the shocks were fake and the victims were actors). This turned out to be controversial as well as clever: some psychologists claimed that the subjects who administered shocks were psychologically harmed by the experiment. Still, Milgram's study illustrates the point that there are many experiments we can think about, even if some are better left on the drawing board. If you can't devise an experiment that answers your question in a world where anything goes, then the odds of generating useful results with a modest budget and nonexperimental survey data seem pretty slim. The description of an ideal experiment also helps you formulate causal questions precisely. The mechanics of an ideal experiment highlight the forces you'd like to manipulate and the factors you'd like to hold constant.

Research questions that cannot be answered by any experiment are FUQs: fundamentally unidentified questions. What exactly does a FUQ look like? At first blush, questions about the causal effect of race or gender seem good candidates because these things are hard to manipulate in isolation ("imagine your chromosomes were switched at birth"). On the other hand, the issue economists care most about in the realm of race and sex, labor market discrimination, turns on whether someone treats you differently because they believe you to be black or white, male or female. The notion of a counterfactual world where men are perceived as women or vice versa has a long history and does not require Douglas Adams-style outlandishness to entertain (Rosalind disguised as Ganymede fools everyone in Shakespeare's As You Like It). The idea of changing race is similarly near-fetched: in The Human Stain, Philip Roth imagines the world of Coleman Silk, a black literature professor who passes as white in professional life. Labor economists imagine this sort of thing all the time. Sometimes we even construct such scenarios for the advancement of science, as in audit studies involving fake job applicants and résumés.

A little imagination goes a long way when it comes to research design, but imagination cannot solve every problem. Suppose that we are interested in whether children do better in school by virtue of having started school a little older. Maybe the 7-year-old brain is better prepared for learning than the 6-year-old brain. This question has a policy angle coming from the fact that, in an effort to boost test scores, some school districts are now imposing older start ages (Deming and Dynarski, 2008). To assess the effects of delayed school entry on learning, we could randomly select some kids to start first grade at age 7, while others start at age 6, as is still typical. We are interested in whether those held back learn more in school, as evidenced by their elementary school test scores. To be concrete, let's look at test scores in first grade.

The problem with this question (the effects of start age on first grade test scores) is that the group that started school at age 7 is ... older. And older kids tend to do better on tests, a pure maturation effect. Now, it might seem we can fix this by holding age constant instead of grade. Suppose we wait to test those who started at age 6 until second grade and test those who started at age 7 in first grade, so that everybody is tested at age 7. But the first group has spent more time in school, a fact that raises achievement if school is worth anything. There is no way to disentangle the effect of start age on learning from maturation and time-in-school effects as long as kids are still in school. The problem here is that for students, start age equals current age minus time in school. This deterministic link disappears in a sample of adults, so we can investigate pure start-age effects on adult outcomes, such as earnings or highest grade completed (as in Black, Devereux, and Salvanes, 2008). But the effect of start age on elementary school test scores is impossible to interpret even in a randomized trial, and therefore, in a word, FUQed.
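
To see the deterministic link concretely, here is a minimal numerical sketch (ours, not the book's) with made-up pupils in Python: age at test equals start age plus time in school by construction, so a regression using in-school test scores cannot separate the three effects.

```python
import numpy as np

# Hypothetical pupils still in school: start age (6 or 7) and years of school completed.
start_age = np.array([6, 6, 7, 7, 6, 7], dtype=float)
years_in_school = np.array([1, 2, 1, 2, 3, 3], dtype=float)
age_at_test = start_age + years_in_school  # the deterministic link in the text

# Design matrix with a constant, start age, current age, and time in school.
X = np.column_stack([np.ones_like(start_age), start_age, age_at_test, years_in_school])

# Four columns but rank 3: age_at_test = start_age + years_in_school, so the
# start-age, maturation, and time-in-school effects cannot be separately identified.
print(np.linalg.matrix_rank(X))  # prints 3
```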

The third and fourth research FAQs are concerned with the nuts-and-bolts elements that produce a specific study. Question number 3 asks, What is your identification strategy? Angrist and Krueger (1999) used the term identification strategy to describe the manner in which a researcher uses observational data (i.e., data not generated by a randomized trial) to approximate a real experiment. Returning to the schooling example, Angrist and Krueger (1991) used the interaction between compulsory attendance laws in American states and students' season of birth as a natural experiment to estimate the causal effects of finishing high school on wages (season of birth affects the degree to which high school students are constrained by laws allowing them to drop out after their 16th birthday). Chapters 3-6 are primarily concerned with conceptual frameworks for identification strategies.
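
As a rough illustration of the logic (our sketch with simulated data, not Angrist and Krueger's analysis), a binary instrument of this kind yields a Wald estimate: the difference in mean log wages between instrument groups divided by the difference in mean schooling. The variable names and parameter values below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
z = rng.integers(0, 2, size=n)        # instrument: e.g., born late enough in the year
ability = rng.normal(size=n)          # unobserved confounder
schooling = 11 + 0.5 * z + 0.5 * ability + rng.normal(size=n)
log_wage = 1.0 + 0.10 * schooling + 0.3 * ability + rng.normal(scale=0.5, size=n)

# Wald estimate: wage difference across instrument groups over schooling difference.
wald = (log_wage[z == 1].mean() - log_wage[z == 0].mean()) / \
       (schooling[z == 1].mean() - schooling[z == 0].mean())

# OLS slope of log wage on schooling, biased upward here by omitted ability.
ols = np.cov(log_wage, schooling)[0, 1] / np.var(schooling, ddof=1)

print(f"Wald/IV estimate: {wald:.3f}   OLS estimate: {ols:.3f}")
# The Wald estimate lands near the true 0.10; OLS is pulled away by ability.
```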

Although a focus on credible identification strategies is emblematic of modern empirical work, the juxtaposition of ideal and natural experiments has a long history in econometrics. Here is our econometrics forefather, Trygve Haavelmo (1944, p. 14), appealing for more explicit discussion of both kinds of experimental designs:

A design of experiments (a prescription of what the physicists call a "crucial experiment") is an essential appendix to any quantitative theory. And we usually have some such experiment in mind when we construct the theories, although, unfortunately, most economists do not describe their design of experiments explicitly. If they did, they would see that the experiments they have in mind may be grouped into two different classes, namely, (1) experiments that we should like to make to see if certain real economic phenomena, when artificially isolated from "other influences", would verify certain hypotheses, and (2) the stream of experiments that Nature is steadily turning out from her own enormous laboratory, and which we merely watch as passive observers. In both cases the aim of the theory is the same, to become master of the happenings of real life.

The fourth research FAQ borrows language from Rubin (1991): What is your mode of statistical inference? The answer to this question describes the population to be studied, the sample to be used, and the assumptions made when constructing standard errors. Sometimes inference is straightforward, as when you use census microdata samples to study the American population. Often inference is more complex, however, especially with data that are clustered or grouped. The last chapter covers practical problems that arise once you've answered question number 4. Although inference issues are rarely very exciting, and often quite technical, the ultimate success of even a well-conceived and conceptually exciting project turns on the details of statistical inference. This sometimes dispiriting fact inspired the following econometrics haiku, penned by Keisuke Hirano after completing his thesis:

T-stat looks too good
Try clustered standard errors
Significance gone
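
In that spirit, here is a small simulated example (ours, not the book's) of how clustering can change inference, using Python and statsmodels; the data-generating process and variable names are made up.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated grouped data: a common group-level shock makes observations within a
# cluster correlated, which conventional standard errors ignore.
rng = np.random.default_rng(0)
n_groups, n_per_group = 50, 20
group = np.repeat(np.arange(n_groups), n_per_group)
x = np.repeat(rng.normal(size=n_groups), n_per_group)            # regressor varies by group
group_shock = np.repeat(rng.normal(size=n_groups), n_per_group)  # within-cluster correlation
y = 1.0 + 0.5 * x + group_shock + rng.normal(size=n_groups * n_per_group)
df = pd.DataFrame({"y": y, "x": x, "group": group})

conventional = smf.ols("y ~ x", data=df).fit()
clustered = smf.ols("y ~ x", data=df).fit(cov_type="cluster",
                                          cov_kwds={"groups": df["group"]})

print("conventional SE:", conventional.bse["x"])
print("clustered SE:   ", clustered.bse["x"])  # typically several times larger here
```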

As should be clear from the above discussion, the four research FAQs are part of a process of project development. The following chapters are concerned mostly with the econometric questions that come up after you've answered the research FAQs; in other words, issues that arise once your research agenda has been set. Before turning to the nuts and bolts of empirical work, however, we begin with a more detailed explanation of why randomized trials give us our benchmark.

Chapter Two

The Experimental Ideal

It is an important and popular fact that things are not always what they seem. For instance, on the planet Earth, man had always assumed that he was more intelligent than dolphins because he had achieved so much (the wheel, New York, wars and so on) while all the dolphins had ever done was muck about in the water having a good time. But conversely, the dolphins had always believed that they were far more intelligent than man, for precisely the same reasons. In fact there was only one species on the planet more intelligent than dolphins, and they spent a lot of their time in behavioral research laboratories running round inside wheels and conducting frighteningly elegant and subtle experiments on man. The fact that once again man completely misinterpreted this relationship was entirely according to these creatures' plans. Douglas Adams, The Hitchhiker's Guide to the Galaxy

The most credible and influential research designs use random assignment. A case in point is the Perry preschool project, a 1962 randomized experiment designed to assess the effects of an early intervention program involving 123 black preschoolers in Ypsilanti, Michigan. The Perry treatment group was randomly assigned to an intensive intervention that included preschool education and home visits. It's hard to exaggerate the impact of the small but well-designed Perry experiment, which generated follow-up data through 1993 on the participants at age 27. Dozens of academic studies cite or use the Perry findings (see, e.g., Barnett, 1992). Most important, the Perry project provided the intellectual basis for the massive Head Start preschool program, begun in 1964, which ultimately served (and continues to serve) millions of American children.

2.1 The Selection Problem

We take a brief time-out for a more formal discussion of the role experiments play in uncovering causal effects. Suppose you are interested in a causal if-then question. To be concrete, let us consider a simple example: Do hospitals make people healthier? For our purposes, this question is allegorical, but it is surprisingly close to the sort of causal question health economists care about. To make this question more realistic, let's imagine we're studying a poor elderly population that uses hospital emergency rooms for primary care. Some of these patients are admitted to the hospital. This sort of care is expensive, crowds hospital facilities, and is, perhaps, not very effective (see, e.g., Grumbach, Keane, and Bindman, 1993). In fact, exposure to other sick patients by those who are themselves vulnerable might have a net negative impact on their health.

Since those admitted to the hospital get many valuable services, the answer to the hospital effectiveness question still seems likely to be yes. But will the data back this up? The natural approach for an empirically minded person is to compare the health status of those who have been to the hospital with the health of those who have not. The National Health Interview Survey (NHIS) contains the information needed to make this comparison. Specifically, it includes a question, "During the past 12 months, was the respondent a patient in a hospital overnight?" which we can use to identify recent hospital visitors. The NHIS also asks, "Would you say your health in general is excellent, very good, good, fair, poor?"

The following table displays the mean health status (assigning a 1 to poor health and a 5 to excellent health) among those who have been hospitalized and those who have not (tabulated from the 2005 NHIS):

Group          Sample Size   Mean Health Status   Std. Error
Hospital             7,774                 3.21        0.014
No hospital         90,049                 3.93        0.003

The difference in means is 0.72, a large and highly significant contrast in favor of the nonhospitalized, with a t-statistic of 58.9.
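
For readers who want to trace the arithmetic, here is a short sketch (ours) of the two-sample comparison using only the rounded summary statistics above; because of that rounding it will not exactly reproduce the published t-statistic.

```python
import math

# Rounded summary statistics from the 2005 NHIS tabulation above.
mean_hosp, se_hosp = 3.21, 0.014        # hospitalized in the past 12 months
mean_nohosp, se_nohosp = 3.93, 0.003    # not hospitalized

diff = mean_nohosp - mean_hosp                  # difference in mean health status
se_diff = math.sqrt(se_hosp**2 + se_nohosp**2)  # SE of the difference (independent samples)

print(f"difference = {diff:.2f}, t-statistic = {diff / se_diff:.1f}")
# Roughly 50 with these rounded inputs; the text's 58.9 presumably
# reflects the unrounded microdata.
```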

(Continues...)



Excerpted from Mostly Harmless Econometrics by Joshua D. Angrist and Jörn-Steffen Pischke. Copyright © 2009 by Princeton University Press. Excerpted by permission.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
Excerpts are provided by Dial-A-Book Inc. solely for the personal use of visitors to this web site.

Table of Contents

List of Figures vii

List of Tables ix

Preface xi

Acknowledgments xv

Organization of This Book xvii

PART I: PRELIMINARIES 1

Chapter 1: Questions about Questions 3

Chapter 2: The Experimental Ideal 11

2.1 The Selection Problem 12

2.2 Random Assignment Solves the Selection Problem 15

2.3 Regression Analysis of Experiments 22

PART II: THE CORE 25

Chapter 3: Making Regression Make Sense 27

3.1 Regression Fundamentals 28

3.2 Regression and Causality 51

3.3 Heterogeneity and Nonlinearity 68

3.4 Regression Details 91

3.5 Appendix: Derivation of the Average Derivative Weighting Function 110

Chapter 4: Instrumental Variables in Action: Sometimes You Get What You Need 113

4.1 IV and Causality 115

4.2 Asymptotic 2SLS Inference 138

4.3 Two-Sample IV and Split-Sample IV 147

4.4 IV with Heterogeneous Potential Outcomes 150

4.5 Generalizing LATE 173

4.6 IV Details 188

4.7 Appendix 216

Chapter 5: Parallel Worlds: Fixed Effects, Differences-in-Differences, and Panel Data 221

5.1 Individual Fixed Effects 221

5.2 Differences-in-Differences 227

5.3 Fixed Effects versus Lagged Dependent Variables 243

5.4 Appendix: More on Fixed Effects and Lagged Dependent Variables 246

PART III: EXTENSIONS 249

Chapter 6: Getting a Little Jumpy: Regression Discontinuity Designs 251

6.1 Sharp RD 251

6.2 Fuzzy RD Is IV 259

Chapter 7: Quantile Regression 269

7.1 The Quantile Regression Model 270

7.2 IV Estimation of Quantile Treatment Effects 283

Chapter 8: Nonstandard Standard Error Issues 293

8.1 The Bias of Robust Standard Error Estimates 294

8.2 Clustering and Serial Correlation in Panels 308

8.3 Appendix: Derivation of the Simple Moulton Factor 323

Last Words 327

Acronyms and Abbreviations 329

Empirical Studies Index 335

References 339

Index 361

What People are Saying About This

Guido Imbens

Interesting and unusual, this is an econometrics book with attitude. It offers real answers and suggestions to problems faced daily by those engaged in the analysis of economic data. I will recommend it to my students.
Guido Imbens, Harvard University

James Robinson

This pathbreaking book is a must-read for any scientist who is interested in formulating and testing hypotheses about the social world. This includes political scientists, sociologists, historians, geographers, and anthropologists. The book is clever and funny, and guides you through the tangle of problems that confront empirical research in social science. I wish I had had it years ago.
James Robinson, Harvard University

Orley Ashenfelter

A well-written and very quirky take on econometric practice.
Orley Ashenfelter, Princeton University

Gary Solon

What a fascinating and useful book! The application of econometrics in empirical research is as much art as science. What is most distinctive about Mostly Harmless Econometrics relative to other graduate-level econometrics books (besides the colorful prose style!) is that because the authors are longtime practitioners of applied microeconometrics, they speak often and insightfully about the art. I expect it's a great thing to work in the same department with Angrist or Pischke and to be able to ask their advice. Having this book close at hand is the next best thing. When you consult the book to see 'What would Angrist and Pischke do?' about econometric issues you encounter in your own research, you won't necessarily end up doing what they would in every single instance, but I bet you always will benefit from getting their take on the issue.
Gary Solon, Michigan State University
