The standard advice to scientists writing abstracts for journal articles is: keep it short, dry, and simple. But those abstracts actually get fewer citations than their long, flowery, jargon-filled counterparts, according to an analysis of over one million abstracts.
The study, published in PLOS Computational Biology, suggests that common writing rules may not serve scientific publications well, a pattern that may reflect the influence of online search on how people discover and consume science today.
“What I think is funny is there’s this disconnect between what you’d like to read, and what scientists actually cite,” says Stefano Allesina, professor of evolution and ecology at the University of Chicago, Computation Institute fellow and faculty, and senior author of the study.
“It’s very suggestive that we should not trust writing tips we take for granted.”
During a seminar for incoming graduate students on how to write effective abstracts, Allesina wondered whether there was hard evidence for the “rules” that were taught.
Follow the rules or break them?
To find out, Allesina and undergraduate Cody Weinberger gathered hundreds of writing suggestions from scientific literature and condensed them into “Ten Simple Rules,” including “Keep it short,” “Keep it simple,” “Signal novelty and importance,” and “Show confidence.”
The coauthors then collected more than one million abstracts from disciplines such as biology, chemistry, geology, and psychology, and tested how the above rules affected a paper’s citations, relative to other papers in the same journal.
For example, testing “Keep it short” meant looking at the relationship between the number of words or sentences in an abstract and subsequent citations.
This particular analysis found that shorter abstracts were associated with fewer citations across all disciplines tested—a refutation of the idea that “brief is better.” Other tests found that using more adjectives, adverbs, uncommon words, signals of novelty and importance, and “pleasant” words boosted citations, despite frequent warnings or rules against using each of these features.
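A within-journal comparison like the one described above can be sketched in a few lines. The data, field names, and normalization choice (z-scoring citations against journal peers, then correlating with abstract length) are illustrative assumptions for this sketch, not the authors’ actual code or data:

```python
# Illustrative sketch only: toy data and variable names are assumptions,
# not the study's actual pipeline.
from statistics import mean, pstdev

def zscore_within_journal(records):
    """Normalize each paper's citation count relative to its own journal."""
    by_journal = {}
    for r in records:
        by_journal.setdefault(r["journal"], []).append(r["citations"])
    out = []
    for r in records:
        vals = by_journal[r["journal"]]
        mu, sd = mean(vals), pstdev(vals)
        z = 0.0 if sd == 0 else (r["citations"] - mu) / sd
        out.append({**r, "citation_z": z})
    return out

def pearson(xs, ys):
    """Plain Pearson correlation between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Toy data: abstract length in words vs. citations, two journals.
papers = [
    {"journal": "A", "abstract_words": 120, "citations": 10},
    {"journal": "A", "abstract_words": 250, "citations": 30},
    {"journal": "A", "abstract_words": 180, "citations": 20},
    {"journal": "B", "abstract_words": 100, "citations": 2},
    {"journal": "B", "abstract_words": 220, "citations": 8},
    {"journal": "B", "abstract_words": 160, "citations": 5},
]

normalized = zscore_within_journal(papers)
r = pearson([p["abstract_words"] for p in normalized],
            [p["citation_z"] for p in normalized])
# A positive r here would mirror the study's finding: longer abstracts,
# more citations, relative to journal peers.
```

Comparing within a journal rather than across the whole corpus matters because journals differ wildly in baseline citation rates; the z-score step removes that journal-level effect before looking at abstract length.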
Taken together and literally, the results would advise using a “lengthy, convoluted, highly indexable, self-describing abstract” to attract more citations. But the authors don’t actually recommend it.
“If you were to follow all the rules, it would be absolutely horrible, terrible to read,” Allesina says. “I would discount the suggestions you are typically given, but I would not blindly embrace those we provide. There’s no magic trick; you have to write good papers and good abstracts, and it has to make sense.”
However, “more might be more, rather than less,” suggests coauthor James Evans, associate professor of sociology, Computation Institute Senior Fellow, and director of Knowledge Lab.
“In an era of online searching, a more complete description can attract more eyeballs, the results of more searches and, ultimately, more attention and acclaim.”
The authors hypothesize that the results reflect changing patterns of scientific discovery and consumption, as researchers access articles online instead of in print. Many commonly used databases, such as PubMed or Web of Science, search only abstracts when a user submits a query.
As a result, longer abstracts with more specific words about the research within the full paper are more likely to be found, and subsequently cited.
Regardless of the reason for the relationship, the results suggest that journals do a disservice to scientists and their research by enforcing low word counts for abstracts. Rather than constraining how abstracts are written, scientific indexes that search the full text of articles would do more to help scientists find relevant research, Allesina proposes.
“My feeling is that with these word limits, the stricter the worse,” Allesina says. “It’s completely unnecessary in this day and age, when articles are rarely printed.”
The National Science Foundation funded the study.
Source: University of Chicago