Department of Government
University of Texas at Austin
158 West 21st Street, Stop A1800
Austin, TX 78712-1704


Peer-Reviewed Articles

Gertler, Aaron, and John G. Bullock. 2016. Reference Rot: An Emerging Threat to Transparency in Political Science. PS: Political Science and Politics. [More information]

Bullock, John G., Alan S. Gerber, Seth J. Hill, and Gregory A. Huber. 2015. Partisan Bias in Factual Beliefs about Politics. Quarterly Journal of Political Science 10 (December): 519-78. [More information]

Bullock, John G. 2011. Elite Influence on Public Opinion in an Informed Electorate. American Political Science Review 105 (August): 496-515. [More information]

Luskin, Robert C., and John G. Bullock. 2011. “Don’t Know” Means “Don’t Know”: DK Responses and the Public’s Level of Political Knowledge. Journal of Politics 73 (April): 547-57. [More information]

Bullock, John G., Donald P. Green, and Shang E. Ha. 2010. Yes, But What’s the Mechanism? (Don’t Expect an Easy Answer). Journal of Personality and Social Psychology 98 (April): 550-58. [More information]

Bullock, John G. 2009. Partisan Bias and the Bayesian Ideal in the Study of Public Opinion. Journal of Politics 71 (July): 1109-24. [More information]

Book Chapters, Review Papers, and Essays

Bullock, John G., and Donald P. Green. 2013. Mediation Analysis in the Social Sciences (comment). Journal of the Royal Statistical Society, Series A (January): 38-39.

Bullock, John G., and Shang E. Ha. 2011. Mediation Analysis Is Harder than It Looks. In Cambridge Handbook of Experimental Political Science, ed. James N. Druckman, Donald P. Green, James H. Kuklinski, and Arthur Lupia. New York: Cambridge University Press. [More information]

Green, Donald P., Shang E. Ha, and John G. Bullock. 2010. Enough Already about “Black Box” Experiments: Studying Mediation Is More Difficult than Most Scholars Suppose. Annals of the American Academy of Political and Social Science 628 (March): 200-08. [More information]

Bendor, Jonathan, and John G. Bullock. 2008. Lethal Incompetence: Voters, Officials, and Systems. Critical Review 20 (March): 1-23. [More information]

Sniderman, Paul M., and John G. Bullock. 2004. A Consistency Theory of Public Opinion and Political Choice: The Hypothesis of Menu Dependence. In Studies in Public Opinion: Gauging Attitudes, Nonattitudes, Measurement Error, and Change, ed. Willem E. Saris and Paul M. Sniderman. Princeton, NJ: Princeton University Press. [More information]


Teaching

Political Preferences and American Political Behavior: Syllabus, 2013 Fall

Political Psychology (undergraduate lecture): Syllabus, 2016 Fall

Issues and Policies in American Government (undergraduate lecture on public opinion and representation): Syllabus, 2016 Fall

“Don’t Know” Means “Don’t Know”: DK Responses and the Public’s Level of Political Knowledge

Abstract

Does the public know much more about politics than conventionally thought? A number of studies have recently argued, on various grounds, that the “don’t know” (DK) and incorrect responses to traditionally designed and scored survey knowledge items conceal a good deal of knowledge. This paper examines these claims, focusing on the prominent and influential argument that discouraging DKs would reveal a substantially more knowledgeable public. Using two experimental surveys with national random samples, we show that discouraging DKs does little to affect our picture of how much the public knows about politics. For closed-ended items, the increase in correct responses is large but mainly illusory. For open-ended items, it is genuine but minor. We close by examining the other recent evidence for a substantially more knowledgeable public, showing that it too holds little water.


Published version
Replication materials
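
An illustration of why discouraging DKs can raise the share of correct responses without revealing real knowledge (a toy simulation, not the article's analysis; the number of options and the share of knowers below are hypothetical): respondents who would otherwise choose "don't know" guess among the options, and about one in K of them lands on the right answer by chance.

    import random

    random.seed(1)

    K = 4            # hypothetical number of options on a closed-ended item
    N = 10_000       # simulated respondents
    P_KNOW = 0.30    # hypothetical share who actually know the answer

    def respond(discourage_dk: bool) -> str:
        """Return 'correct', 'incorrect', or 'dk' for one simulated respondent."""
        if random.random() < P_KNOW:
            return "correct"
        if not discourage_dk:
            return "dk"  # non-knowers take the DK option when it is offered
        # With DKs discouraged, non-knowers guess uniformly among the K options.
        return "correct" if random.random() < 1 / K else "incorrect"

    for discourage in (False, True):
        answers = [respond(discourage) for _ in range(N)]
        print(f"DKs discouraged={discourage}: share correct = {answers.count('correct') / N:.2f}")

    # Roughly 0.30 versus 0.47: the jump comes almost entirely from lucky guesses,
    # which is one sense in which the apparent gain in knowledge is illusory.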


Partisan Bias and the Bayesian Ideal in the Study of Public Opinion

Abstract

Bayes’ Theorem is increasingly used as a benchmark against which to judge the quality of citizens, but some of its implications are not well understood. A common claim is that Bayesians must agree more as they learn and that the failure of partisans to do the same is evidence of bias in their responses to new information. Formal inspection of Bayesian learning models shows that this is a misunderstanding. Learning need not create agreement among Bayesians. Disagreement among partisans is never clear evidence of bias. And although most partisans are not Bayesians, their reactions to new information are surprisingly consistent with the ideal of Bayesian rationality.


Published version
Replication materials
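
A worked example of the point that learning need not create agreement (my illustration, not the article's): two Bayesians share the same binomial likelihood for an unknown rate theta but hold Beta priors of different precision, and they begin in complete agreement about its expected value.

    % Agent A: theta ~ Beta(1,1); Agent B: theta ~ Beta(50,50).
    % Both prior means are 1/2, so the agents start out in agreement.
    % After both observe s = 3 successes in n = 3 trials, conjugate updating gives
    \[
      \mathbb{E}_A[\theta \mid s] = \frac{1 + 3}{1 + 1 + 3} = 0.80,
      \qquad
      \mathbb{E}_B[\theta \mid s] = \frac{50 + 3}{50 + 50 + 3} \approx 0.51.
    \]
    % Identical evidence and fully Bayesian updating, yet the agents now disagree
    % by about 0.29 -- so post-learning disagreement is not, by itself, proof of bias.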


Elite Influence on Public Opinion in an Informed Electorate

Abstract

An enduring concern about democracies is that citizens conform too readily to the policy views of elites in their own parties, even to the point of ignoring other information about the policies in question. This article presents two experiments that suggest an important condition under which the concern may not hold. People are rarely exposed to even modest descriptions of policies, but when they are, their attitudes seem to be affected at least as much by those descriptions as by cues from party elites. The experiments also include measures of the extent to which people think about policy, and contrary to many accounts, they suggest that party cues do not inhibit such thinking. This is not cause for unbridled optimism about citizens’ ability to make good decisions, but it is reason to be more sanguine about their ability to use information about policy when they have it.


Published version
Preprint and appendix
Replication materials


Lethal Incompetence: Voters, Officials, and Systems

Abstract

The study of voter competence has made significant contributions to our understanding of politics, but at this point there are diminishing returns to the endeavor. There is little reason, in theory or in practice, to expect voter competence to improve dramatically enough to make much of a difference, but there is reason to think that officials’ competence can vary enough to make large differences. To understand variations in government performance, therefore, we would do better to focus on the abilities and performance of officials, not ordinary citizens.

Article

Mediation Analysis Is Harder than It Looks

Abstract

Mediation analysis is the effort to understand the mechanisms through which some variables affect others. It is increasingly common in political science. But political scientists typically draw inferences about mediation without manipulating mediators, and their analyses are likely to be biased. Recognizing the problem, social scientists are gradually turning to methods that involve experimental manipulation of mediators. This is a step in the right direction, but experiments have little-appreciated limitations of their own. We describe these limitations and conclude that inference about mediation is fundamentally difficult—more difficult than inference about treatment effects, and best tackled by a research program that is specifically designed to speak to the challenges of mediation analysis.

Article
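
To make the bias concrete, here is a small simulation (a generic illustration of the problem, not an analysis from the chapter): treatment is randomized, the mediator is not, and an unobserved variable affects both the mediator and the outcome. The usual regression of the outcome on treatment and mediator then misattributes part of the confounder's influence to the mediator.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    t = rng.binomial(1, 0.5, n)                      # randomized treatment
    u = rng.normal(size=n)                           # unobserved confounder of M and Y
    m = 0.5 * t + u + rng.normal(size=n)             # mediator; true T -> M effect is 0.5
    y = 0.5 * m + 0.2 * t + u + rng.normal(size=n)   # outcome; true M -> Y effect is 0.5

    # "Standard" mediation regression: Y on T and M, with U unmeasured.
    X = np.column_stack([np.ones(n), t, m])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(f"estimated M -> Y coefficient: {coef[2]:.2f} (truth: 0.50)")

    # The estimate is close to 1.0, not 0.5: even with T randomly assigned, the
    # unmeasured confounder U inflates the apparent effect of the mediator.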

Enough Already about “Black Box” Experiments: Studying Mediation Is More Difficult than Most Scholars Suppose

Abstract

The question of how causal effects are transmitted is fascinating and inevitably arises whenever experiments are presented. Social scientists cannot be faulted for taking a lively interest in “mediation,” the process by which causal influences are transmitted. However, social scientists frequently underestimate the difficulty of establishing causal pathways in a rigorous empirical manner. We argue that the statistical methods currently used to study mediation are flawed and that even sophisticated experimental designs cannot speak to questions of mediation without the aid of strong assumptions. The study of mediation is more demanding than most social scientists suppose and requires not one experimental study but rather an extensive program of experimental research.


Published version (gated)

doi: 10.1177/0002716209351526

Yes, But What’s the Mechanism? (Don’t Expect an Easy Answer)

Abstract

Psychologists increasingly recommend experimental analysis of mediation. This is a step in the right direction because mediation analyses based on nonexperimental data are likely to be biased and because experiments, in principle, provide a sound basis for causal inference. But even experiments cannot overcome certain threats to inference that arise chiefly or exclusively in the context of mediation analysis—threats that have received little attention in psychology. We describe three of these threats and suggest ways to improve the exposition and design of mediation tests. Our conclusion is that inference about mediators is far more difficult than previous research suggests, and best tackled by an experimental research program that is specifically designed to address the challenges of mediation analysis.


Published version



News and Other Information

Commentary on the article by Eliot Smith, the new editor of Journal of Personality and Social Psychology: Attitudes and Social Cognition.

New standards for mediation analysis in Social Psychological and Personality Science. The editor, Allen McConnell, cites our article while establishing the new standards.

Partisan Bias in Factual Beliefs about Politics

Abstract

Partisanship seems to affect factual beliefs about politics. For example, Republicans are more likely than Democrats to say that the deficit rose during the Clinton administration; Democrats are more likely to say that inflation rose under Reagan. What remains unclear is whether such patterns reflect differing beliefs among partisans or instead reflect a desire to praise one party or criticize another. To shed light on this question, we present a model of survey response in the presence of partisan cheerleading and payments for correct and “don’t know” responses. We design two experiments based on the model’s implications. The experiments show that small payments for correct and “don’t know” answers sharply diminish the gap between Democrats and Republicans in responses to “partisan” factual questions. Our conclusion is that the apparent gulf in factual beliefs between members of different parties may be more illusory than real.


Published version
Online-only appendix
Replication materials
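
A toy illustration of the mechanism (this is not the article's formal model, and every parameter below is hypothetical): some partisans who privately hold the party-uncongenial belief report the party line when answers are costless, and paying for correct answers leads a portion of them to report sincerely, shrinking the measured gap without any change in underlying beliefs.

    # Hypothetical parameters for a toy model of partisan cheerleading.
    P_SINCERE_CORRECT = 0.6   # share of each party privately holding the correct belief
    P_CHEERLEAD = 0.5         # share of those believers who cheerlead when unpaid
    P_CHEERLEAD_PAID = 0.1    # share who still cheerlead when correct answers pay

    def share_correct(congenial: bool, cheerlead_rate: float) -> float:
        """Share of a party reporting the correct answer to one factual item."""
        if congenial:
            return P_SINCERE_CORRECT          # no reason to misreport a congenial fact
        return P_SINCERE_CORRECT * (1 - cheerlead_rate)

    for label, rate in [("no payment", P_CHEERLEAD), ("paid for correct", P_CHEERLEAD_PAID)]:
        gap = share_correct(True, rate) - share_correct(False, rate)
        print(f"{label}: partisan gap in correct responses = {gap:.2f}")

    # The gap falls from 0.30 to 0.06 even though private beliefs never change,
    # which is the sense in which the measured gulf can be partly illusory.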

Other Information

This article appears in Quarterly Journal of Political Science alongside a related article (“You Cannot Be Serious”) by Prior, Sood, and Khanna, who independently reached conclusions quite similar to ours.

A previous version of this paper was released as NBER Working Paper 19080.

Reference Rot: An Emerging Threat to Transparency in Political Science

Abstract

Transparency of research is a large concern in political science, and the practice of publishing links to datasets and other online resources is one of the main methods by which political scientists promote transparency. But the method cannot work if the links don’t, and very often, they don’t. We show that most of the URLs ever published in the American Political Science Review no longer work, and that the problem is severe for recent as well as for older articles. We conclude that “reference rot” limits the transparency and reproducibility of political science research. We also describe practices that scholars can adopt to combat the problem: when possible, scholars should archive data in trustworthy repositories, use links that incorporate persistent digital identifiers, and create archival versions of the webpages to which they link.
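
One practical corollary is that published links can be audited mechanically. The sketch below is my illustration, not the authors' code, and the URL list is a placeholder: it simply flags links that no longer resolve.

    import urllib.request

    # Placeholder list; in practice these would be harvested from published articles.
    urls = [
        "https://www.example.com/some-dataset",
        "https://doi.org/10.1177/0002716209351526",
    ]

    def is_rotten(url: str, timeout: float = 10.0) -> bool:
        """Return True if the URL fails to resolve or returns an HTTP error."""
        try:
            with urllib.request.urlopen(url, timeout=timeout) as response:
                return response.status >= 400
        except (OSError, ValueError):   # connection failures, HTTP errors, bad URLs
            return True

    for url in urls:
        print(("rotten: " if is_rotten(url) else "alive:  ") + url)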

Learning Economics, Math, and Statistics

I list only the free resources that I think most useful for political science students.


Osborne and Rubinstein’s A Course in Game Theory
Polak’s introductory game theory class
Rubinstein’s lecture notes on micro [PDF]


Teach yourself calculus: single-variable, multivariable
Keisler’s Elementary Calculus
Strang’s Calculus

Osborne’s math-for-econ tutorial
Economist’s Mathematical Manual (very terse)


Russ Lenth’s power-and-sample-size calculators
Harvard’s Intro to Probability
Don Green’s lecture notes
American Statistician archives [JSTOR]
Elements of Statistical Learning (not for beginners)


Useful methods books for undergrad and grad students
Mathematical sociology textbook (learn Markov chains)

Miscellany

Data

High School and Beyond: how to import the data (a sketch follows this list):
Original HSB study (1980): school questionnaire
Original HSB study (1980): student questionnaire
First sophomore follow-up (1982)
First senior follow-up (1982)
Local labor market indicators (1982)
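
If it helps to see the general pattern: raw files from studies of this vintage are often distributed as fixed-width ASCII, and a minimal pandas sketch looks like the following. The file name, column positions, and variable names here are placeholders; the real values come from the study's codebook, not from the links above.

    import pandas as pd

    # Placeholder layout; take the actual byte ranges and names from the codebook.
    colspecs = [(0, 5), (5, 7), (7, 9)]
    names = ["school_id", "region", "urbanicity"]

    students = pd.read_fwf(
        "hsb_student_1980.dat",   # placeholder path to the raw fixed-width file
        colspecs=colspecs,
        names=names,
        dtype=str,                # read as strings first; recode after checking the codebook
    )
    print(students.head())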

Graduate Admissions

Dan Nexon’s advice (best I’ve seen for political science)


apsr2006.bst (the best BibTeX style file for the APSR; a usage sketch follows this list)
Beamer item indentation
Beamer list of colorable elements
Detexify (draw a symbol, get the name)
LaTeX Previewer (enter code, get picture of equations)
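
The usage sketch promised above is just the standard BibTeX pattern; it assumes apsr2006.bst sits in the working directory or somewhere on the TeX path, and the citation key and references.bib are placeholders.

    \documentclass{article}
    \begin{document}

    Party cues matter less than is often assumed \cite{bullock2011elite}.

    \bibliographystyle{apsr2006}   % apsr2006.bst must be findable by BibTeX
    \bibliography{references}      % references.bib holds the entries

    \end{document}

If the style is author-year, the usual pattern is to load natbib and cite with \citet and \citep instead of \cite.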


Leslie Lamport’s advice


Deleting an Amazon S3 bucket that has many files in it (a boto3 sketch follows below)
Simon Jackman’s workflow slides (about R and more)
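
For reference, a minimal boto3 sketch of the bucket-deletion task (the bucket name is a placeholder, and the linked post may take a different approach): the bucket must be emptied before it can be removed, and the collection delete() calls below issue the deletions in batches.

    import boto3

    # Placeholder bucket name; credentials come from the usual AWS configuration.
    bucket = boto3.resource("s3").Bucket("my-bucket-to-delete")

    bucket.objects.all().delete()          # delete every current object, in batches
    bucket.object_versions.all().delete()  # also needed if versioning was ever enabled
    bucket.delete()                        # succeeds only once the bucket is empty

For unversioned buckets, the AWS CLI equivalent is aws s3 rb s3://my-bucket-to-delete --force.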


Against The Elements of Style
Michael Munger on writing habits