john@johnbullock.org

Department of Political Science
Northwestern University
Scott Hall, Room 224
601 University Place
Evanston, IL 60208-0800

Public key (fingerprint: F4F9 4ABA F636 9D4D ADB9 7C4C A9F9 C58D 3C9B D5A5)

Peer-Reviewed Articles

Bullock, John G., and Kelly Rader. Response Options and the Measurement of Political Knowledge. Forthcoming in British Journal of Political Science. [More information]

Bendor, Jonathan, and John G. Bullock. 2021. Lethal Incompetence: Leaders, Organizations, and the U.S. Response to COVID-19. The Forum 19 (2): 317-37. [More information]

Bullock, John G. 2021. Education and Attitudes toward Redistribution in the United States. British Journal of Political Science 51 (July): 1230-50. [More information]

Bullock, John G., and Donald P. Green. 2021. The Failings of Conventional Mediation Analysis and a Design-Based Alternative. Advances in Methods and Practices in Psychological Science 4 (October-December): 1-18. [More information]

Bullock, John G., and Gabriel Lenz. 2019. Partisan Bias in Surveys. Annual Review of Political Science 22: 325-42. [More information]

Gertler, Aaron, and John G. Bullock. 2017. Reference Rot: An Emerging Threat to Transparency in Political Science. PS: Political Science and Politics 50 (January): 166-71. [More information]

Bullock, John G., Alan S. Gerber, Seth J. Hill, and Gregory A. Huber. 2015. Partisan Bias in Factual Beliefs about Politics. Quarterly Journal of Political Science 10 (December): 519-78. [More information]

Bullock, John G. 2011. Elite Influence on Public Opinion in an Informed Electorate. American Political Science Review 105 (August): 496-515. [More information]

Luskin, Robert C., and John G. Bullock. 2011. “Don’t Know” Means “Don’t Know”: DK Responses and the Public’s Level of Political Knowledge. Journal of Politics 73 (April): 547-57. [More information]

Bullock, John G., Donald P. Green, and Shang E. Ha. 2010. Yes, But What’s the Mechanism? (Don’t Expect an Easy Answer). Journal of Personality and Social Psychology 98 (April): 550-58. [More information]

Bullock, John G. 2009. Partisan Bias and the Bayesian Ideal in the Study of Public Opinion. Journal of Politics 71 (July): 1109-24. [More information]



Book Chapters, Review Papers, and Essays

Bullock, John G. 2020. Party Cues. In Oxford Handbook of Electoral Persuasion, ed. Elizabeth Suhay, Bernard Grofman, and Alexander H. Trechsel. New York: Oxford University Press. [More information]

Bullock, John G., and Donald P. Green. 2013. Mediation Analysis in the Social Sciences (comment). Journal of the Royal Statistical Society, Series A (January): 38-39.

Bullock, John G., and Shang E. Ha. 2011. Mediation Analysis Is Harder than It Looks. In Cambridge Handbook of Experimental Political Science, ed. James N. Druckman, Donald P. Green, James H. Kuklinski, and Arthur Lupia. New York: Cambridge University Press. [More information]

Green, Donald P., Shang E. Ha, and John G. Bullock. 2010. Enough Already about “Black Box” Experiments: Studying Mediation Is More Difficult than Most Scholars Suppose. Annals of the American Academy of Political and Social Science 628 (March): 200-08. [More information]

Bendor, Jonathan, and John G. Bullock. 2008. Lethal Incompetence: Voters, Officials, and Systems. Critical Review 20 (March): 1-23. [More information]

Sniderman, Paul M., and John G. Bullock. 2004. A Consistency Theory of Public Opinion and Political Choice: The Hypothesis of Menu Dependence. In Studies in Public Opinion: Gauging Attitudes, Nonattitudes, Measurement Error, and Change, ed. Willem E. Saris and Paul M. Sniderman. Princeton, NJ: Princeton University Press. [More information]

Courses

Introduction to American Politics (undergraduate lecture)

Political Behavior (graduate seminar):
Syllabus, 2021 Winter

Political Psychology (undergraduate lecture):
Syllabus, 2022 Spring

Public Opinion and Representation in the United States (undergraduate seminar):
Syllabus, 2021 Fall

Quantitative Causal Inference (graduate seminar)

“Don’t Know” Means “Don’t Know”: DK Responses and the Public’s Level of Political Knowledge

Abstract

Does the public know much more about politics than conventionally thought? A number of studies have recently argued, on various grounds, that the “don’t know” (DK) and incorrect responses to traditionally designed and scored survey knowledge items conceal a good deal of knowledge. This paper examines these claims, focusing on the prominent and influential argument that discouraging DKs would reveal a substantially more knowledgeable public. Using two experimental surveys with national random samples, we show that discouraging DKs does little to affect our picture of how much the public knows about politics. For closed-ended items, the increase in correct responses is large but mainly illusory. For open-ended items, it is genuine but minor. We close by examining the other recent evidence for a substantially more knowledgeable public, showing that it too holds little water.

Article

Published version
Appendices
Replication materials

Bibliographic Information

[BibTeX] [EndNote] [Google Scholar] [RIS]
DOI: 10.1017/S0022381611000132

Partisan Bias and the Bayesian Ideal in the Study of Public Opinion

Abstract

Bayes’ Theorem is increasingly used as a benchmark against which to judge the quality of citizens, but some of its implications are not well understood. A common claim is that Bayesians must agree more as they learn and that the failure of partisans to do the same is evidence of bias in their responses to new information. Formal inspection of Bayesian learning models shows that this is a misunderstanding. Learning need not create agreement among Bayesians. Disagreement among partisans is never clear evidence of bias. And although most partisans are not Bayesians, their reactions to new information are surprisingly consistent with the ideal of Bayesian rationality.
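
To see why learning need not create agreement, consider a minimal numerical illustration of our own, not drawn from the article: two Bayesians share the prior P(H) = 0.5 but interpret a signal s differently. One believes P(s|H) = 0.8 and P(s|not-H) = 0.4; the other believes the reverse. After both observe s, Bayes’ Theorem gives the first a posterior of P(H|s) = (0.8 × 0.5) / (0.8 × 0.5 + 0.4 × 0.5) = 2/3 and the second a posterior of (0.4 × 0.5) / (0.4 × 0.5 + 0.8 × 0.5) = 1/3. Identical priors and identical evidence thus produce divergent posteriors: rational updating alone does not guarantee convergence.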

Article

Published version
Appendix
Replication materials

Bibliographic Information

[BibTeX] [EndNote] [Google Scholar] [RIS]
DOI: 10.1017/S0022381609090914

Education and Attitudes toward Redistribution in the United States

Abstract

Although scholars have studied education’s effects on many different outcomes, little attention has been paid to its effects on adults’ economic views. This article examines those effects. It presents results based on longitudinal data which suggest that secondary education has a little-appreciated consequence: it makes Americans more opposed to redistribution. Placebo tests and other analyses confirm this finding. Further investigation suggests that these conservative effects of education operate partly by changing the way that self-interest shapes people’s ideas about redistribution.

Article

Published version
Appendix
Replication materials

Other Information

DOI: 10.1017/S0007123419000504

Elite Influence on Public Opinion in an Informed Electorate

Abstract

An enduring concern about democracies is that citizens conform too readily to the policy views of elites in their own parties, even to the point of ignoring other information about the policies in question. This article presents two experiments that suggest an important condition under which the concern may not hold. People are rarely exposed to even modest descriptions of policies, but when they are, their attitudes seem to be affected at least as much by those descriptions as by cues from party elites. The experiments also include measures of the extent to which people think about policy, and contrary to many accounts, they suggest that party cues do not inhibit such thinking. This is not cause for unbridled optimism about citizens’ ability to make good decisions, but it is reason to be more sanguine about their ability to use information about policy when they have it.

Article

Published version
Preprint
Appendix
Replication materials

Bibliographic Information

[BibTeX] [EndNote] [Google Scholar] [RIS]
DOI: 10.1017/S0003055411000165

Response Options and the Measurement of Political Knowledge

Abstract

By many measures, the public knows little about politics. But just how little people seem to know depends on the questions that are put to them. In particular, knowledge levels seem higher when people are asked closed- rather than open-ended questions. In turn, differences between estimated knowledge levels are sometimes attributed to fundamental differences between these types of questions. Building on this previous research, the present study uses a pre-registered experiment conducted with a representative national sample to shed new light on the relationship between question form and knowledge measurement. The authors find that inferences about political knowledge depend less on fundamental differences between open- and closed-ended questions than on two little-appreciated aspects of survey design: the number and difficulty of the response options that accompany closed-ended questions. These aspects of survey design have large effects. Scholars who use the same questions with different response options may reach substantively different conclusions about the public's levels of knowledge.

Article

Article
Online-only appendix
Pre-registered pre-analysis plan
Replication materials

Other Information

DOI: 10.1017/S0007123421000120

Lethal Incompetence: Voters, Officials, and Systems

Abstract

The study of voter competence has made significant contributions to our understanding of politics, but at this point there are diminishing returns to the endeavor. There is little reason, in theory or in practice, to expect voter competence to improve dramatically enough to make much of a difference, but there is reason to think that officials’ competence can vary enough to make large differences. To understand variations in government performance, therefore, we would do better to focus on the abilities and performance of officials, not ordinary citizens.

Article

Bibliographic Information

[BibTeX] [EndNote] [RIS]
DOI: 10.1080/08913810802316290

Lethal Incompetence: Leaders, Organizations, and the U.S. Response to COVID-19

Abstract

The study of voter competence has made significant contributions to our understanding of politics, but at this point there are diminishing returns to the endeavor. Voter competence is unlikely to improve dramatically enough to make much of a difference to our politics. By contrast, the competence of officials can and does vary substantially over short periods of time. To understand variations in government performance, therefore, we would do better to focus on the abilities and performance of officials, not ordinary citizens. We elaborate on this argument, emphasizing the “incompetence multiplier”: the way that the properties of hierarchies can amplify the incompetence of those in powerful positions. We illustrate our argument with an extended discussion of the U.S. response to the COVID-19 pandemic.

Article

Other Information

In some ways, this article is a sequel to our earlier “Lethal Incompetence: Voters, Officials, and Systems.”

DOI: 10.1515/for-2021-0010

Mediation Analysis Is Harder than It Looks

Abstract

Mediation analysis is the effort to understand the mechanisms through which some variables affect others. It is increasingly common in political science. But political scientists typically draw inferences about mediation without manipulating mediators, and their analyses are likely to be biased. Recognizing the problem, social scientists are gradually turning to methods that involve experimental manipulation of mediators. This is a step in the right direction, but experiments have little-appreciated limitations of their own. We describe these limitations and conclude that inference about mediation is fundamentally difficult—more difficult than inference about treatment effects, and best tackled by a research program that is specifically designed to speak to the challenges of mediation analysis.

Article

Bibliographic Information

[BibTeX] [Google Scholar]

Enough Already about “Black Box” Experiments: Studying Mediation Is More Difficult than Most Scholars Suppose

Abstract

The question of how causal effects are transmitted is fascinating and inevitably arises whenever experiments are presented. Social scientists cannot be faulted for taking a lively interest in “mediation,” the process by which causal influences are transmitted. However, social scientists frequently underestimate the difficulty of establishing causal pathways in a rigorous empirical manner. We argue that the statistical methods currently used to study mediation are flawed and that even sophisticated experimental designs cannot speak to questions of mediation without the aid of strong assumptions. The study of mediation is more demanding than most social scientists suppose and requires not one experimental study but rather an extensive program of experimental research.

Article

Bibliographic Information

[BibTeX] [EndNote] [Google Scholar] [RIS]
DOI: 10.1177/0002716209351526

The Failings of Conventional Mediation Analysis and a Design-Based Alternative

Abstract

Scholars routinely test mediation claims by using some form of measurement-of-mediation analysis whereby outcomes are regressed on treatments and mediators to assess direct and indirect effects. Indeed, it is rare for an issue of any leading journal of social or personality psychology not to include such an analysis. Statisticians have for decades criticized this method on the grounds that it relies on implausible assumptions, but these criticisms have been largely ignored. After presenting examples and simulations that dramatize the weaknesses of the measurement-of-mediation approach, we suggest that scholars instead use an approach that is rooted in experimental design. We propose implicit-mediation analysis, which adds and subtracts features of the treatment in ways that implicate some mediators and not others. We illustrate the approach with examples from recently published articles, explain the differences between the approach and other experimental approaches to mediation, and formalize the assumptions and statistical procedures that allow researchers to learn from experiments that encourage changes in mediators.
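
A stylized illustration of the general idea, ours rather than an example from the article: suppose a persuasion experiment randomizes subjects to (a) a control condition, (b) a base message, or (c) the base message plus an added feature designed to move one particular mediator. The (b) − (a) contrast estimates the effect of the base message, and the (c) − (b) contrast speaks to the contribution of the added feature, and thus of the mediator that it implicates, without ever regressing outcomes on measured mediators.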

Article

Published version
Appendix
Replication materials

Bibliographic Information

DOI: 10.1177/25152459211047227

Yes, But What’s the Mechanism? (Don’t Expect an Easy Answer)

Abstract

Psychologists increasingly recommend experimental analysis of mediation. This is a step in the right direction because mediation analyses based on nonexperimental data are likely to be biased and because experiments, in principle, provide a sound basis for causal inference. But even experiments cannot overcome certain threats to inference that arise chiefly or exclusively in the context of mediation analysis—threats that have received little attention in psychology. We describe three of these threats and suggest ways to improve the exposition and design of mediation tests. Our conclusion is that inference about mediators is far more difficult than previous research suggests, and best tackled by an experimental research program that is specifically designed to address the challenges of mediation analysis.

Article

Published version

Appendix

Bibliographic Information

[BibTeX] [EndNote] [Google Scholar] [RIS]
DOI: 10.1037/a0018933

News and Other Information

Eliot Smith, the new editor of Journal of Personality and Social Psychology: Attitudes and Social Cognition, offers commentary on the article.

New standards for mediation analysis in Social Psychological and Personality Science. The editor, Allen McConnell, cites our article in establishing the new standards.

Partisan Bias in Factual Beliefs about Politics

Abstract

Partisanship seems to affect factual beliefs about politics. For example, Republicans are more likely than Democrats to say that the deficit rose during the Clinton administration; Democrats are more likely to say that inflation rose under Reagan. What remains unclear is whether such patterns reflect differing beliefs among partisans or instead reflect a desire to praise one party or criticize another. To shed light on this question, we present a model of survey response in the presence of partisan cheerleading and payments for correct and “don’t know” responses. We design two experiments based on the model’s implications. The experiments show that small payments for correct and “don’t know” answers sharply diminish the gap between Democrats and Republicans in responses to “partisan” factual questions. Our conclusion is that the apparent gulf in factual beliefs between members of different parties may be more illusory than real.

Article

Published version
Online-only appendix
Replication materials

Other Information

This article appears in Quarterly Journal of Political Science alongside a related article (“You Cannot Be Serious”) by Prior, Sood, and Khanna, who independently reached conclusions quite similar to ours.

After these articles were published, a few objections were raised. Gabriel Lenz and I take up the objections in our Annual Review article, “Partisan Bias in Surveys.” See especially pages 332-33 of that article.

Some readers have objected to one particular question that we asked in our studies: a question about John McCain’s age. On these objections, see “John McCain’s Age and the 2008 Presidential Election.”

A previous version of this paper was released as NBER Working Paper 19080.

Partisan Bias in Surveys

Abstract

If citizens are to hold politicians accountable for their performance, they probably must have some sense of the relevant facts, such as whether the economy is growing. In surveys, Democrats and Republicans often claim to hold different beliefs about these facts, which raises normative concerns. However, it is not clear that their divergent survey responses reflect actual divergence of beliefs. In this review, we conclude that partisan divergence in survey responses is often not due to sincere, considered differences of belief that fall along party lines—but determining what it is due to is difficult. We review the evidence for possible explanations, especially insincere responding and congenial inference. Research in this area is still nascent, and much more will be required before we can speak with precision about the causes of partisan divergence in responses to factual questions.

Article

Other Information

Among other things, this review takes up objections to an earlier article, “Partisan Bias in Factual Beliefs about Politics.” See especially pages 332-33 of the review.

DOI: 10.1146/annurev-polisci-051117-050904

Party Cues

Abstract

We now have a large and sprawling body of research on the effects of party cues. It is not very consistent or cumulative. Findings vary widely from one article to the next, and they sometimes contradict each other. This article sifts the evidence for five potential moderators of party-cue effects that have received much attention: political sophistication, need for cognition, issue salience, the amount of information in the information environment, and the distinctiveness of party reputations. It also considers the evidence on three large questions: whether party cues dominate policy information in people’s judgments, whether they are “shortcuts,” and how they affect our inferences about policies. The article closes by suggesting that limitations of research in this area are due partly to weak links between theory and empirical efforts and partly to problems of measurement error and statistical power.

Article

Other Information

DOI: 10.1093/oxfordhb/9780190860806.013.2

Reference Rot: An Emerging Threat to Transparency in Political Science

Abstract

Transparency of research is a large concern in political science, and the practice of publishing links to datasets and other online resources is one of the main methods by which political scientists promote transparency. But the method cannot work if the links don’t, and very often, they don’t. We show that most of the URLs ever published in the American Political Science Review no longer work, and that the problem is severe for recent as well as for older articles. We conclude that “reference rot” limits the transparency and reproducibility of political science research. We also describe practices that scholars can adopt to combat the problem: when possible, they should archive data in trustworthy repositories, use links that incorporate persistent digital identifiers, and create archival versions of the webpages to which they link.
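
As an illustration of the persistent-identifier recommendation, using this article’s own DOI: a citation that links to https://doi.org/10.1017/S1049096516002353 will continue to resolve even if the publisher reorganizes its website, because the DOI system redirects readers to the content’s current location; a link to a publisher-specific URL offers no such protection.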

Article

Published version
Online-only appendix
Replication materials

Bibliographic Information

[BibTeX] [EndNote]
DOI: 10.1017/S1049096516002353