Arbitrary Limits to Scholarly Speech: Why (Short) Word Limits Should Be Abolished

https://doi.org/10.5281/zenodo.823308

John Gerring

University of Texas at Austin

Lee Cojocaru

University of Oslo

[**Original document page numbers preserved in brackets for citation purposes.**]

[Start Page 1]

John Gerring is Professor of Government at The University of Texas at Austin. He can be reached at jgerring@austin.utexas.edu. Lee Cojocaru is a Research Assistant with the Varieties of Democracy (V-Dem) Project, University of Oslo. He can be reached at cojocaru@bu.edu. Comments and suggestions on various drafts were provided by Taylor Boas, Colin Elman, Alan Jacobs, Carl Henrik Knutsen, and Evan Lieberman. A longer version of this paper, integrating work in economics and the natural sciences, where some of these issues have been explored, will appear in an edited volume, tentatively titled “The Production of Knowledge: Enhancing Progress in Social Science.” All online appendices (A, B, and C) can be accessed at https://utexas.app.box.com/s/fmybgvn70w78c5yn8ixwhg8ssv3z0pyv.

Most journals in political science and sociology set stringent word or page limits, a fact of which every author is keenly aware. By all reports, researchers expend a good deal of effort trying to work within these limits. This might involve revising successive drafts until the final version slips just under the ceiling, moving sections of a paper into online appendices, splitting up a subject into “minimal publishing units,” shopping around for a publication venue with less stringent limits, or trying to negotiate special terms with an editor.

Some researchers relinquish the goal of journal publication entirely in preference for the more relaxed format of an academic monograph. This option, however, is less and less viable as university presses trim their lists and reorient priorities toward books with a popular theme and a potential crossover audience.

In sum, limits on the length of journal articles affect scholarly research in all sorts of ways, some more visible than others. Some researchers, we must presume, avoid projects entirely if they seem intractable in a journal format.

Our contention is that current policies that impose arbitrary word or page limits on published articles are not serving the discipline well. They contort the academic process in ways that are not conducive to scholarly research or to communication. And they waste everyone’s valuable time.

We begin by surveying the policies of top political science and sociology journals. In the second section, we lay out a proposal that we suppose will not be very controversial: journals should clearly state their policies vis-à-vis length requirements and adhere to those policies. In the third section, we lay out our more controversial proposal, that journals should abolish—or at least greatly loosen—length limits. The rest of the article elaborates and defends that proposal. We discuss (a) heterogeneity across venues (different journals offering different policies), (b) supplementary material posted online, (c) references (often the first aspect of an article to be cut down in order to meet a length requirement), (d) the role of length limitations in structuring the work of political science, (e) word limits in economics (where we find journal policies to be considerably more permissive), (f) the correlation between article length and impact, and (g) the ramifications of a change of journal length limit policies for journal business models.

Survey

Despite its importance, no comprehensive survey of word or page limits has ever been conducted. To remedy this omission, and to set the stage for our argument, policies and practices across top journals in political science and sociology are summarized in Tables 1 and 2. Information is drawn directly from journal web pages (instructions to authors)—supplemented, in some cases, by direct communication with editors. Journal policies are quoted verbatim in online Appendices A and B.

Comparisons across journals must be inexact inasmuch as they follow different protocols. Some count words and others pages. Some count abstracts, references, tables, figures, and footnotes while others do not (or only count some of them). Some apply limits at the submission stage and others wait for final approval.

Here, we adopt a few standard criteria in order to provide a (more or less) systematic comparison of journal policies based on stated guidelines posted on journal web pages. Length is counted in words, as this is the usual practice in political science and is more exact than pagination. Where limits are counted in pages, we list the journal policy (in pages) and then convert pagination to word counts following journal guidelines with respect to margins and font, as noted in column 2 of Table 1. We assume that online materials (generally in the form of appendices) are not considered in the word count. However, some journals exempt references and/or appendices in the word count even if they appear as part of the published article, as noted in columns 3-4. We also note whether the word count is applied at submission or later (column 5) and whether, according to the stated policy of the journal, editors are allowed some discretion in applying the rules (column 6).

Twenty of the most influential journals from each discipline are included in this survey. For gauges of impact in political science we rely on two sources: SCImago journal rank (Elsevier) and Science Watch (Thomson Reuters).1 Chosen journals include American Journal of Political Science, American Political Science Review, Annual Review of Political Science, British Journal of Political Science, Comparative Political Studies, Conflict Management and Peace Science, European Journal of Political Research, International Security, International Studies Quarterly, Journal of Conflict Resolution, Journal of Peace Research, Journal of Politics, Journal of Public Administration Research & Theory, Party Politics, Political Analysis, Political Communication, Political Geography, Political Psychology, Public Opinion Quarterly, and World Politics.

[Start Page 3]

[Table 1 Inserts Here]

For a gauge of impact in sociology we rely on the Google Scholar (scholar.google.com) H5 index. Chosen journals include American Journal of Sociology, American Sociological Review, Annual Review of Sociology, Antipode, British Journal of Criminology, British Journal of Sociology, Criminology, Demography, Ethnic and Racial Studies, European Sociological Review, Journal of Ethnic and Migration Studies, Journal of European Social Policy, Journal of Marriage and Family, Journal of Population Economics, Population and Development Review, Qualitative Research, Social Science Research, Social Forces, Sociology, and Theory, Culture & Society.

All political science journals impose space limits, as shown in Table 1.2 The tightest limit—6,500 words (not including references or appendices)—is adopted by Public Opinion Quarterly. The most capacious limit—20,000 words—is allowed by International Security. Most hover between 8,000 and 12,000 words, with a mean of just over 10,500. All journals except the

[Start Page 4]

[Table 2 Inserts Here]

Journal of Peace Research allow editorial discretion in the application of word limits.

In sociology, journal practices are somewhat more relaxed. Six journals—American Journal of Sociology, Criminology, Journal of European Social Policy, Journal of Population Economics, Population and Development Review, and Social Science Research—impose no formal limits. Among those that impose limits, the range extends from about 8,000 to 15,000 words, with a mean of about 9,000. Most journals allow editorial discretion in the policing of these limits.

The second section of Tables 1-2 focuses on journal practices, i.e., how these length limits are administered. We report mean, minimum, and maximum word counts of all articles published in 2015 (or where unavailable, in 2014), as noted in columns 7-9. Here, we include only regular, full-length articles, as defined by the journal. For example, if a journal has a separate

[Start Page 5]

section for research notes, methodology notes, or reviews, these publications are excluded. To determine mean length, we record page lengths for all articles published in a year, calculate the mean (in pages), locate an article with that (approximate) length, place the contents of that article (all aspects of the article—text, abstract, footnotes, references, appendices, tables, figures—so long as it appears in the journal itself rather than in an online appendix) into a Word document, and record the number of words. To calculate minimum and maximum length we use page length to identify the longest and shortest articles and then place the contents of those articles (all aspects, as published) into a Word file to record the number of words. We find that the mean length of articles is close to the stated word limit for most journals in political science and sociology (~10,000), and there is considerable spread from minimum (~7,000) to maximum (~15,000). Recorded word counts in practice are remarkably similar across the two disciplines.
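The counting procedure just described can be sketched in code. The snippet below is a minimal illustration, not our actual workflow; the article data are invented for demonstration, and in practice the word counts come from pasting article contents into a Word document.

```python
# Hypothetical sketch of the mean/min/max word-count procedure,
# assuming (pages, words) pairs are already known for each article
# published in a journal-year. All figures below are invented.
articles = [
    {"pages": 18, "words": 9200},
    {"pages": 26, "words": 13100},
    {"pages": 14, "words": 7100},
    {"pages": 21, "words": 10800},
]

# Step 1: mean page length across all articles in the year.
mean_pages = sum(a["pages"] for a in articles) / len(articles)

# Step 2: "mean length" = word count of the article whose page
# length is closest to the mean.
typical = min(articles, key=lambda a: abs(a["pages"] - mean_pages))

# Step 3: min/max length = word counts of the shortest and longest
# articles, identified by page length.
shortest = min(articles, key=lambda a: a["pages"])
longest = max(articles, key=lambda a: a["pages"])

print(mean_pages, typical["words"], shortest["words"], longest["words"])
```

Note that the reported "mean" is the word count of a single representative article, not an average of word counts across all articles.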

The final section of Tables 1 and 2 focuses on consistency between policies and practices for those journals with an official limitation on length. Column 10 records the “actual maximum,” the highest word count of any article published by that journal within the year, including only those elements of an article that are considered relevant to calculating length according to the journal’s policies. For example, if the journal excludes references from the limit, the actual maximum does so as well. Column 11 compares the actual maximum with the official word limit, subtracting one from the other. Results are explored in the following section.

First Proposal: Clarity and Consistency

In comparing policies with practices, we find strong correspondence between the stated limits and the mean length of articles. Comparing columns 1 and 7, only a few journals—notably International Security and Public Opinion Quarterly in political science and Annual Review of Sociology and Demography in sociology—have mean lengths that greatly surpass their official word limits, and this could be partly accounted for by our method of counting, which includes all article content (even that which is not included in a journal’s assessment of word limits).

However, when comparing maximum (actual) lengths with stated limits we find considerable divergence, at least for certain journals, as shown in the final column of Tables 1 and 2. The average difference is nearly 5,000 words in political science. That is, across the twenty top political science journals, the longest article published in a year (usually, 2015) in that journal surpassed these journals’ formal limits by an average of just under 5,000 words. One journal, the British Journal of Political Science, published an article that is more than 11,000 words over the stated limit. And only one journal, the Journal of Peace Research, appears to strictly abide by its word limits (not coincidentally, it is the only journal that does not allow editorial discretion). Differences between stated policies and practices are noticeable in sociology as well, though not as glaring.

To be sure, most journals allow editorial discretion in the application of length limits. In this sense, they are not violating their own policies. However, length limits are described on journal web pages as if they were strictly applied. Authors without experience with a specific journal—or prior correspondence with the editor—would have no way of knowing that they might publish an article of 15,000 words in a journal with a 10,000-word limit.

This inconsistency between de jure and de facto policies is problematic in several respects. Authors are unsure about how to craft their work in order to meet the journal’s guidelines. They do not know whether the word limit will be observed and, if not, how much leeway might be allowed. Meanwhile, senior faculty, who have greater experience, and perhaps know the editors personally, can muster inside information to successfully walk this tightrope.

Our first proposal will surprise no one. If wide discretion in word limits is allowed then this policy should be clearly stated on the journal’s web page. Authors should not be required to second-guess this important issue. Our analysis suggests that most journal word limits in political science should be understood as targets, not ceilings. Note that the mean number of words in published articles aligns closely with journal word limits, with considerable dispersion about the mean. A simple change of terminology would solve this problem. Editors could change word (or page) limit to word (or page) target and disable web pages (e.g., for the American Political Science Review) that automatically disqualify submissions that exceed the target.

While a great deal of effort has gone into enhancing the transparency of journal content (i.e., articles) in recent years, e.g., via the DA-RT initiative, it is equally important that journal policies be transparent. This seems like an easy reform.

Second Proposal: No More (Tight) Limits

Our second, more controversial, proposal is that journals should abolish arbitrary, one-size-fits-all word limits, or greatly expand those limits. The argument for this proposal may be concisely stated. An article, like a book or any other written product, should be as long as it needs to be—no longer, and no shorter.

Some articles are over-written. There is only one basic point and it is repeated ad infinitum. Or there is a set of empirical tests that so closely resemble each other as to be redundant; they belong in an appendix or perhaps are entirely unnecessary. Nonetheless, the author feels compelled to fill up the allocated space.

Articles in top natural science journals (e.g., Nature, Science) are typically much shorter than those that appear in social science journals. While we do not think this format generally serves social science well, we should be mindful that some points can be made with brevity, and this should not take away from their importance or their impact. In political science and sociology, short papers are often relegated to “research notes,” simply because of their brevity. As a consequence of this classification, they are not taken very seriously and do not count for very much (re: promotion and tenure). This sort

[Start Page 6]

of classification by size seems just as arbitrary as the exclusion of longer papers that surpass word limits.

Some articles are under-written. The author has a very large and complex argument to make, or an extended set of (non-redundant) empirical exercises, many contexts to explore, or many styles of evidence to incorporate. However, under the rigid word limits assigned by the journal, all that appears in the main text is the outline of a story, from which one can glean little about the truth of the author’s argument. Here, word limits constitute a Procrustean bed.

To clarify, our argument is not for longer journal articles. Our argument is for the removal of arbitrary space constraints that have nothing to do with the content of a submission. Length should be adapted to the paper under review. Some topics can be dispensed with in 2,000 words. Others may require 20,000, or even 30,000. As such, length should be a minor feature of the review process, along with other stylistic concerns (not to mention content). Journals do not mandate that authors present 3 tables and 1 figure. This would be patently absurd. We should not mandate that they present 10,000 words. Thus, we are not making an argument for endless babble.

Some authors need to be restrained from diarrhea of the keyboard. Other authors are terse to the point of obscurantism, and need to be drawn out (“please give a few examples of what you are talking about”). But one argument about length that does not seem admissible, if we are concerned with such things as truth and its dissemination, is that an article fit within an arbitrary (short) word limit. Journals cannot possibly reduce academic research to a formula because articles are not all alike.

We are reminded of the first question we always get from students after distributing a writing assignment. “How many pages?,” they ask. Most students are concerned with the minimal number of pages they will need to generate in order to pass the assignment. A few are concerned with the maximum. To both concerns we reply with a set of bounds intended to be advisory—e.g., “10-20 pages”—followed by the admonition not to get caught up in the number of pages but rather with the quality of the work they are producing. The number of pages or words is the least important aspect of your paper, we tell them. Unfortunately, we are not following this advice in academic publishing.

Heterogeneity across Venues

A few political science journals look favorably upon longer submissions. These include International Security (20,000 words), as well as Studies in American Political Development and the Quarterly Journal of Political Science (neither of which has an official limit, and neither of which made it onto our list in Table 1). There may be others of which we are not aware. By the same token, some journals have even tighter space restrictions than those listed in Table 1. For example, the newly founded Journal of Experimental Political Science requests papers of “approximately 2,500 words.”

Evidently, there is some degree of heterogeneity across journals, and even more so in sociology, as noted in Table 2. This heterogeneity may increase over time, if divergence rather than convergence is the overall trend within the discipline. Authors can thus shop around for an appropriate forum for their paper, as, to some extent, they do now. Supply and demand would then intersect. This seems like it might offer a happy resolution of our problem, with flexibility provided across journals (rather than across articles within the same journal).

This model of diversity fits the consumer-driven model of the commercial publishing business. Readers looking for a discursive treatment of a contemporary subject can turn to the New York Review of Books or the New Yorker. Readers looking for the quick-and-dirty might turn to a newspaper, a blog, or a publication known for terseness such as the Economist. Fiction readers may look for long books, short books, or short stories. They are free to choose. By all accounts, length is an important consideration in consumer choice in the commercial marketplace.

Likewise, in the world of social science the choice to read a journal article rather than a book is, at least to some extent, a choice about length. So, one might argue that journal heterogeneity in length requirements is merely a continuation of a spectrum that stretches from academic monographs to paragraph-sized blogs, or even Tweets.

Unfortunately, journal specialization by length is inappropriate for academic journals. The reason, in brief, is that journals do not have overlapping purviews and functions. Because mass market publications like the NYRB, the New Yorker, and the Economist, along with book publishers, cater to the same sort of readers and cover (pretty much) the same sorts of things, readers may choose the format they wish—short, medium, or long. This does not obviate the tradeoff—conciseness versus depth—but it means that readers can make choices based on their priorities.

However, journals do not offer multiple options. Indeed, they are in the business of avoiding redundancy. Unoriginal content is excluded from consideration. Moreover, journals tend to specialize in a particular field or subfield. There is no space in the academic journal market for two journals focused on the same topic—one of which publishes long articles and the other of which publishes short articles.

Only general-interest journals (e.g., the American Political Science Review or the American Journal of Sociology) have overlapping purviews. Here, one might envision a division of labor in which some specialize in long articles and others in short articles. This would be productive in all respects except one: differentiation by space allotment would interfere with an important function of top journals—differentiation by quality. Insofar as scholars wish to maintain a clear ranking of journals (and, all protests to the contrary, it seems that they do) space constraints should not obstruct that goal.

To conclude, heterogeneity across journals does not solve the problem. Indeed, this scenario seems about as defensible as a scenario in which some journals publish authors whose names begin with consonants and others publish authors whose names begin with vowels. Publication decisions should

[Start Page 7]

hinge on matters of topicality and quality, not size.

Online Supplementary Material

In recent years, the practice of posting supplementary material online has become more common, and readers may wonder if this solves the problem we are posing. Unfortunately, while online appendices are surely an improvement over the pre-WWW era, they are not ideal.

Appendices often contain information that is vital to the review process. Sometimes, they appear at the insistence of reviewers or editors. This suggests that anyone seeking to make sense of the argument of a paper would need to access the appendix (and that it should remain in stable form, post-publication). Yet, if the appendix is posted separately those who read or cite an article will feel no obligation to read it. Such material is not part of the formal record, occupying a nebulous zone. A citation to “Sullivan (1998)” does not imply “and online appendices.” Online material is sometimes hard to locate and in any case usually ignored. For this reason, online appendices sometimes serve as a place to stow away evidence that does not fit neatly with the author’s main argument. Note also that if the online appendix is under the author’s control it is susceptible to post-publication manipulation.

For all these reasons it seems essential that appendices be published along with the main text of an article. Moreover, decisions about what material to place within the main text and what to place in appendices should be driven by matters other than arbitrary space constraints. There is nothing sillier than moving text from one place to another simply to get under a 10,000-word limit. (“I put it in the Appendix because I ran out of space in the text.”) This sort of shenanigan damages the stylistic coherence of an article, not to mention the time it imposes on the author, editor, and reviewers (who must check up on such things). Note also that when an appendix appears online the distinction between main text and appendix is highly consequential—something that editors need to scrutinize closely. By contrast, if an appendix is easily accessible and part of the published version of an article, this decision is not so fundamental.

The same general point applies to other decisions that are often made under pressure from arbitrary word limits, e.g., whether to cite additional work, to address counterarguments, to provide examples, or to provide clarification of a theory or method. Authors face many decisions about content and composition, and each deserves careful consideration. Writing social science is not a paint-by-numbers exercise. In searching for the right resolution of these questions one consideration that does not seem relevant is an arbitrary word limit. And one must not lose sight of the time required to re-shuffle words and ideas until the proper quantity is obtained. Researchers’ time is valuable and should not be wasted in a trivial quest for magic word counts.

References

A few journals (e.g., the Annual Review of Political Science and Public Opinion Quarterly) do not include references in their word count. But most do (see Table 1). Because references are of little concern to most authors and reviewers (unless it is their work that is being cited, naturally), and because references consume a lot of words (for each citation there is usually a two-line reference), they are usually the first to be sacrificed when an author has to shorten a piece to satisfy a length limitation. For this reason, it is worth pondering the value of references.

First, recent work by Patrick Dunleavy3 suggests that citations to the literature on a subject are essential for providing a basis for evaluation, showing how the present study fits in with an existing body of work. A study must be understood within a context, and that context is provided by the citations. If the relevant body of work is not fully represented, cumulation is impeded; if past findings on a subject are not cited at all, cumulation is impossible.4

Second, anyone attempting to come to grips with a new area of study must be able to follow a trail of citations in order to piece together who has done what on a given subject. The intellectual history of a subject is located in the citations.

Third, we must consider the problem of academic honesty. We are acutely aware of the problem of plagiarism, when someone’s ideas (uncited) are stolen. A problem that receives less attention—but, arguably, is much more prevalent—is when prior studies of a subject are not cited, or only briefly cited, leaving readers unaware of how novel—or derivative—the author’s theory and findings really are.

Fourth, we might want to consider whether dropped citations are chosen in a biased fashion. Studies suggest that citations are often biased toward prestige journals5 and toward authors who are well established, senior, male,6 or at top universities and departments located in the United States and Europe.7 Common sense suggests that these biases may be exacerbated in situations where space is in short supply. Here, authors are likely to favor the most prominent writer or work on a subject—the “obligatory” reference.

Finally, we should consider the role of citations in measuring impact. Nowadays, citation counts are critical for the evaluation of scholarship at all levels. An article’s impact is understood by the number of citations it receives. Journal impact is measured by the number of citations all the articles published in that journal receive. Author impact is measured by the number of citations all their publications receive. And the impact of fields and disciplines is understood according to how many citations they receive. It follows that when articles are incompletely referenced our ability to properly assess impact—of articles, journals, authors, subfields, or disciplines (at large)—

[Start Page 8]

is impaired. We may be able to trace the impact of “obligatory” references, but we cannot trace the impact of other work that may have affected the development of thinking on a subject.

Right-sizing the Discipline

The most serious cost imposed by word limits is not the author’s time. Nor is it the published articles that are too long or too short, those that make use of online appendices to get around arbitrary word limits, those that omit important citations, or those that are stylistically flawed because the text is playing limbo with the journal’s word count. These are fairly trivial costs. The most serious cost arises from the way in which the word count protocol structures the work of social science.

We shall assume that, in our highly professionalized discipline, researchers are sensitive to incentives. Since the main incentive is to publish, and since journals are increasingly the most prestigious outlets for publication (surpassing books, at least for most subfields), we must consider what sort of research this regime encourages, and discourages. Substance is inevitably structured by form. And when the form is rigidly fixed, the substance must accommodate itself.

Smart academics choose topics and research designs that fit the space-constrained format of the journals they wish to publish in. Since all journals impose word limits, and there is not a great deal of variation in these limits—leaving aside a few journals, as noted above—shopping around does not afford much leeway.

Under the circumstances, success in the business of academic publishing involves finding bite-sized topics that can be dispatched in 8,000 to 12,000 words. Qualitative work is at a disadvantage since evidence drawn from archival, ethnographic, or interview-based research normally requires a good deal of verbiage to adequately convey the nuances of the argument, e.g., the many bits and pieces of evidence that, together, contribute to a causal inference. Multi-method work is at an even more severe disadvantage since it must practice two trades—two separate research designs—in order to fulfill its mission. Work that embraces a large theoretical framework, with many empirical implications, is at a disadvantage. Work that applies a theory to multiple contexts is at a disadvantage. Historical work, which often involves both qualitative and quantitative evidence, is at a disadvantage. Research designs that fall far from the experimental ideal, and therefore involve a great deal of supporting argumentation and robustness tests, are at a disadvantage.8

Insofar as scholars are rational they will pause before undertaking such ventures, or will divide them up into separate pieces—”minimal publishing units”—that fit the space-constrained format of journal publication, at the cost of redundancy (since the evidence for a large argument is divided up across multiple articles, each of which must restate the overarching framework).

Economics

At this point, it may be appropriate to consider our field in relation to our social science cousins on the “hard” (naturalist) end of the spectrum. In Table 3, we survey the space limitation policies of 20 top journals in economics.

For estimations of scholarly impact we rely on SCImago.9 Chosen journals include American Economic Journal (AEJ): Applied Economics, AEJ: Economic Policy, AEJ: Macroeconomics, AEJ: Microeconomics, American Economic Review, Annual Review of Economics, Brookings Papers on Economic Activity, Econometrica, Economic Journal, Journal of Economic Literature, Journal of the European Economic Association, Journal of Finance, Journal of Management, Journal of Marketing, Journal of Political Economy, Quarterly Journal of Economics, Review of Economic Studies, Review of Economics and Statistics, Review of Financial Economics, and Review of Financial Studies.

Table 3 reveals that economics journals have a considerably more relaxed set of policies with respect to article length than political science and sociology journals. This is signaled by the calculation of length in pages rather than words, for most journals. Six journals have no official limit on article length. Among the remainder, the average limit is just over 15,000 words. Only one journal, Economic Journal, has a tight limit—in this case, 7,500 words. However, we find that the average length of an article in that journal is well over 12,000 words and one article published in 2015 included over 21,000 words. So this does not constitute much of an exception from the industry norm of overall permissiveness with respect to article length. As with political science and sociology journals, practices often depart from policies. On average, the actual maximum length exceeds the stated limit by more than 7,000 words. This suggests that in economics, as in other fields, limits are not strictly applied. And this, in turn, suggests a problem of transparency.

Impact

Thus far, the gist of our argument is that by removing an arbitrary component of the publication process—article length— we will improve efficiency (spending less time worrying about limits and strategizing about how to get around them) and also arrive at higher-quality articles. Can the latter proposition be tested?

In one sort of hypothetical experiment, article length would be arbitrarily assigned. Conceivably, one might enlist a journal that takes a relaxed attitude toward word limits. Submissions that surpass a given threshold (e.g., 15,000 words) and pass the review process (in that form) would then be randomized into a control group (no change) and a treatment group (subjected to a word limit of 10,000 words). Compliance (not to mention ethics) would be difficult. Authors would need to 

[Start Page 9]

[Table 3 Inserts Here]

comply with the imposed limits (rather than withdraw their submission and resubmit to another journal, or complain to the editorial board), and reviewers would also need to be brought on board. Results could then be compared using standard metrics of influence such as citations, though some confounding might result as the nature of the experiment became known throughout a discipline and authors posted "full" versions on their websites.
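The logic of this hypothetical design can be sketched in a small simulation. All numbers below are invented for illustration (the pool size, the citation distribution, and the assumed penalty for shortening are assumptions, not estimates from our data):

```python
import random
import statistics

random.seed(42)

# Hypothetical pool of accepted over-length submissions (>15,000 words),
# randomized into a control group (published as-is) and a treatment group
# (cut to a 10,000-word limit).
n = 200
submissions = list(range(n))
random.shuffle(submissions)
control = submissions[: n // 2]      # no change
treatment = submissions[n // 2 :]    # subjected to the word limit

def simulate_citations(shortened: bool) -> int:
    # Assumed data-generating process: citation counts are right-skewed
    # (log-normal), and shortening lowers expected citations by 30%.
    base = random.lognormvariate(3.0, 1.0)
    return round(base * (0.7 if shortened else 1.0))

control_cites = [simulate_citations(False) for _ in control]
treatment_cites = [simulate_citations(True) for _ in treatment]

print("mean citations, full length:", statistics.mean(control_cites))
print("mean citations, shortened:  ", statistics.mean(treatment_cites))
```

The comparison of group means at the end stands in for the "standard metrics of influence" described above; a real implementation would of course face the compliance and ethics problems the text notes.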

Natural experiments can also be imagined. For example, one might regard length limits as an instrument for actual length (columns 1 and 7 are indeed highly correlated). Citation counts for articles could then be regressed against the instrumented

[Start Page 10]

values for article length. However, this research design cannot disentangle journal fixed effects (some journals are more cited than others, even among the top twenty journals in Table 1).
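With a single instrument, the instrumental-variables estimator reduces to a ratio of covariances (the Wald estimator). The sketch below uses entirely fabricated data, with stated word limits as the instrument Z, actual article length as X, and logged citations as Y; consistent with the caveat above, it makes no attempt to model journal fixed effects:

```python
import random

random.seed(0)

def cov(a, b):
    # Sample covariance of two equal-length lists.
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (len(a) - 1)

# Fabricated data: stated limits (Z) shift actual length (X, first stage),
# which shifts log citations (Y) with an assumed true slope of 0.0002.
n = 500
limits = [random.choice([8000, 10000, 12000, 15000]) for _ in range(n)]
length = [0.9 * z + random.gauss(0, 1000) for z in limits]
log_cites = [0.0002 * x + random.gauss(0, 0.5) for x in length]

# Wald/IV estimator: cov(Z, Y) / cov(Z, X).
beta_iv = cov(limits, log_cites) / cov(limits, length)
print(f"IV estimate of length effect: {beta_iv:.5f}")
```

Because the simulated data satisfy the exclusion restriction by construction, the estimator recovers roughly the assumed slope; with real journals, the fixed-effects problem noted above would remain.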

Even so, we may learn something from the simple expedient of comparing articles—published in the same journal— that are shorter and longer. Because we are interested in relative length within the same journal, it is sufficient to rely on page counts (as listed in citations or on the journal’s table of contents) rather than word counts.

As a measure of the scholarly impact of journal articles, we rely on citation counts tallied by Web of Science, transformed by the natural logarithm (to accommodate a right-skewed distribution). To eliminate variations based on year of publication we focus on a single year located in the past (so that the article has time to be digested by the academic community) but not the distant past (since we wish to generalize about contemporary policies and contemporary academic work). Balancing these goals, we focus on articles published in 2005. Citations may be influenced by the journal, so we can only reliably compare articles published by the same journal. Fortunately, a good deal of variation can be found in most economics journals, and in one political science journal, as revealed by the range (minimum/maximum) of actual word counts in Tables 1 and 2. Our analysis therefore focuses on those journals (excluding sociology) with the greatest range (in 2015), provided they were publishing in 2005 (excluding journals founded after that date). This includes British Journal of Political Science, American Economic Review, Brookings Papers on Economic Activity, Econometrica, Economic Journal, Journal of Economic Literature, Journal of European Economic Association, Journal of Finance, Journal of Management, Journal of Marketing, Journal of Political Economy, Quarterly Journal of Economics, Review of Economic Studies, Review of Economics and Statistics, Review of Financial Economics, and Review of Financial Studies.

Note that our selection criterion allows us to focus on journals that do not make a fetish of length, and thus follow policies that are closer to those that we advocate. The regression analysis takes the following form: Y = βX + Z + ε, where Y is citation count, X is article length, Z is a vector of journal fixed effects, and ε is the error term. Estimation is by ordinary least squares with standard errors clustered by journal.
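A fixed-effects slope of this kind can be obtained by demeaning both variables within journals (the "within" transformation) before fitting the slope. The sketch below uses fabricated data with invented journal baselines and an assumed true slope; clustered standard errors are omitted for brevity:

```python
import random
from collections import defaultdict

random.seed(1)

# Fabricated data: each journal has its own citation baseline (the fixed
# effect); the common slope on length is set to 0.05 log points per page.
baseline = {"AER": 4.0, "QJE": 3.5, "Econometrica": 3.0, "BJPS": 2.5}
data = []
for j, b in baseline.items():
    for _ in range(100):
        pages = random.uniform(10, 50)
        log_cites = b + 0.05 * pages + random.gauss(0, 0.3)
        data.append((j, pages, log_cites))

# Within transformation: subtract each journal's mean from its observations,
# which absorbs the journal fixed effects.
by_journal = defaultdict(list)
for j, x, y in data:
    by_journal[j].append((x, y))

xd, yd = [], []
for rows in by_journal.values():
    mx = sum(x for x, _ in rows) / len(rows)
    my = sum(y for _, y in rows) / len(rows)
    for x, y in rows:
        xd.append(x - mx)
        yd.append(y - my)

# OLS slope on the demeaned data.
beta = sum(x * y for x, y in zip(xd, yd)) / sum(x * x for x in xd)
print(f"fixed-effects slope: {beta:.4f}")
```

Because the journal baselines are swept out by demeaning, the estimate recovers the common slope even though the journals differ sharply in their citation levels.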

This resulting model, presented in Table 4, suggests that there is a robust relationship between length and citations. Indeed, the relationship appears to exist in every journal in our sample: when regression analyses are conducted for each journal, seriatim, we find a positive—though not always statistically significant—relationship between length and impact.

A plot of marginal effects is displayed in Figure 1. We preserve the logged scale of citation count on the Y axis; however, tick marks on that axis correspond to raw (unlogged) values in order to render the exercise more natural.

It is tempting to focus on the apparently huge impact of article length on citations as one approaches the right end of the X axis. However, this is not where most of our data falls, as suggested by the wide confidence bounds in Figure 1. The mean number of pages in our sample is about 25, with a standard deviation of about 12, so generalizations near the center of the distribution are apt to be most meaningful.

Consider an increase in article length from 25 to 35 (a little less than one standard deviation), which translates into an increase of about 6,000 words.10 This hypothetical change is associated with a substantial increase in citations, from (roughly) 35 to 55.
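On the logged scale used in the model, these rounded figures imply a per-page coefficient of roughly log(55/35)/10 ≈ 0.045. The arithmetic (using our rounded figures, not the exact Table 4 estimate) is:

```python
import math

# Rounded figures from the text: 25 -> 35 pages, ~35 -> ~55 citations.
pages_before, pages_after = 25, 35
cites_before, cites_after = 35, 55

# Slope on the log-citation scale implied by the hypothetical change.
implied_slope = (math.log(cites_after) - math.log(cites_before)) / (
    pages_after - pages_before
)
print(f"implied slope: {implied_slope:.4f} log-citations per page")

# Each additional page multiplies expected citations by exp(slope).
print(f"multiplier per page: {math.exp(implied_slope):.4f}")
```

Expressed multiplicatively, each additional page in this range is associated with roughly a 4–5 percent increase in expected citations.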

The meaning of this estimate may be debated. Let us assume for a moment that a rational selection bias is at work, namely more important articles are granted greater space in a journal’s pages. Articles deemed less significant are granted less space, as a product of the considered judgments of authors, reviewers, and editors. In this circumstance, it should be no surprise that longer articles garner more attention, as measured by citation counts.

Of course, we cannot rule out the possibility that researchers are influenced by length in their estimation of an article’s importance. Length may be regarded (implicitly) as a proxy for significance, and hence may influence citation counts. Even so, to the extent that such norms exist, they reinforce our basic point that, in the considered opinion of the scholarly community, length is correlated with importance.

Now let us consider the extent to which this analysis might be regarded as exemplifying a causal effect. We certainly cannot assume that articles analyzed in this sample would have been better (i.e., more impactful) if they were longer. But it does seem reasonable to propose that the longer articles in our sample would have been worse had they been shortened. Not all articles justify an expansive domain. But those that do would presumably suffer if the domain were arbitrarily constrained. In this loose and unidirectional sense, we may regard the estimate contained in Table 4 as causal.

Business Costs

We have argued that length limits should be abolished, or at least considerably relaxed. A consequence of this change in policy is that many articles would increase in length. (A few might decrease, as we have suggested, if quality rather than quantity becomes the principal metric of evaluation.) Assuming that the number of articles published over the course of a year remains the same, the number of words and accompanying features such as tables and figures will grow. This imposes additional costs on academic editors and publishers, whose resources are already stretched thin.

One cost is associated with proofreading and typesetting additional pages. We assume that this cost is fairly minimal. (One can envision a scenario in which long appendices are submitted in “copy-ready” form, as is the case now with online material.)

A more substantial cost is associated with printing and mailing the “hard copy” version of the journal. Note that under

[Start Page 11]

[Table 4 and Figure 1 Insert Here]

the current business model most journals are sold to individuals and institutions (primarily university libraries) that receive a paper copy, which may then be bound prior to shelving (yet another cost, though one that libraries must bear). In economics, many journals charge a publication fee, which no doubt helps to support production costs, and may account for the greater latitude granted to authors by journals in that discipline.

However, the hard copy format seems increasingly anachronistic in an age when most journal output is accessed online and when many journals are adopting online-only publication formats. If this is the wave of the future, there may be good reasons to hasten its arrival. Our proposal presumes that this is possible, and desirable.

[Start Page 12]

Conclusions

The expansive policies adopted by many top economics journals dovetail with a move within the field to prize quality over quantity. Economists stake their claim to fame on a small number of high-impact publications rather than a larger number of less-cited ones. H-index scores matter more than the length of a CV. This may have something to do with the not-so-secret desire of every economist: to obtain a Nobel Prize by the end of their career.

While no such holy grail exists for political science and sociology, it may still be possible to adjust incentives so that the time-consuming search for fundamental discoveries and/or comprehensive analyses of a large topic is facilitated. One small but important step in this direction involves loosening the noose around authors' necks so they can focus on the task at hand, rather than the space they must fill.

References

Bastow, Simon, Patrick Dunleavy, and Jane Tinkler. 2014. The Impact of the Social Sciences: How Academics and their Research Make a Difference. SAGE.

Basu, Aparna. 2006. “Using ISI’s ‘Highly cited researchers’ to obtain a country level indicator of citation excellence.” Scientometrics vol. 68, no. 3: 361–375.

Callaham, Michael, Robert L. Wears, and Ellen Weber. 2002. “Journal Prestige, Publication Bias, and Other Characteristics Associated With Citation of Published Studies in Peer-Reviewed Journals.” JAMA vol. 287, no. 21: 2847–2850.

Dunleavy, Patrick. 2014. "Poor citation practices are a form of academic self-harm in the humanities and social sciences." Online at: https://medium.com/advice-and-help-in-authoring-a-phd-or-non-fiction/poor-citation-practices-are-a-form-of-academic-self-harm-in-the-humanities-and-social-sciences-2cddf250b3c2#.6vnf69w17 (accessed 12/22/2015).

Gans, Herbert J. 1992. “Sociological Amnesia: The Noncumulation of Normal Social Science.” Sociological Forum vol. 7, no. 4: 701– 710.

Larivière, Vincent, Chaoqun Ni, Yves Gingras, Blaise Cronin, and Cassidy R. Sugimoto. 2013. "Bibliometrics: Global gender disparities in science." Nature vol. 504: 211–213.

Maliniak, Daniel, Ryan Powers, and Barbara F. Walter. 2013. "The Gender Citation Gap in International Relations." International Organization vol. 67, no. 4: 889–922.

Nosek, Brian A. and Yoav Bar-Anan. 2012. “Scientific Utopia: I. Opening Scientific Communication.” Psychological Inquiry vol. 23: 217–243.

Footnotes

1 See http://www.scimagojr.com/journalrank.php?category=3312 and http://archive.sciencewatch.com/dr/sci/09/mar29-09_1/ .

2 We are aware of one influential journal—the newly founded Quarterly Journal of Political Science—which does not impose a length limit, perhaps on the model of economics (see discussion below). This journal did not meet our threshold of a “top” journal according to the chosen sources of journal rankings, however.

3 Dunleavy 2014. See also Bastow, Dunleavy and Tinkler 2014.

4 Gans 1992.

5 Callaham, Wears and Weber 2002; Nosek and Bar-Anan 2012: 219.

6 Larivière et al. 2013; Maliniak, Powers, and Walter 2013.

7 Basu 2006. We regard these selection factors as elements of potential bias since none of them—with the possible exception of journal ranking—is directly indicative of the quality and relevance of the cited work.

8 We recognize that experimental research may also involve a good deal of supporting argumentation and robustness tests. But we assume that the burden carried by this sort of theoretical and empirical work is even greater when the data is observational, thus requiring more space for elaboration and demonstration.

across multiple publications). But our biggest concern should be about articles that never get written, or, if written (in a fit of vainglory), never get published.

9 See www.scimagojr.com/journalrank.php?area=2000

10 We derive this word-count estimate by drawing one normal-sized (full-text) page from each journal in our 2005 sample, counting the words on those pages, and calculating the mean across those 16 journals.

Tables and Figures