22 May 2017

A short conversation on the beach

Last week, I was out on the beach at South Padre Island, collecting sand crabs for my research. This involved lots of shoveling. When I do this, I often have people come up and ask me what I’m doing. A common guess is clams (none worth digging for on South Padre). Jokingly, people ask if I'm looking for buried treasure.

Normally, I try to cut the conversation short. I’m working, and when you’re trying to get something done, it’s not always a good time to chat.

Last week, I had just found an Emerita benedicti and was walking up to deposit it in my bucket. A woman came up while I was doing so and said, “Tortugas?” (“Turtles?”)

Guessing she did not speak English, I searched my brain for the tiny amount of Spanish I knew. I held out my hand to show the little beast, and replied, “Cangrejos.” (“Crabs.”)

“Ah, cangrejos!”

I guess my pronunciation was at least understandable. I was weirdly proud of that.

16 May 2017

Broader impacts, part 2

Hooray for arbitrarily large round numbers! My answers on Quora have tallied one million views.

And yesterday saw my most views ever. I’m not sure which answer is getting all that traffic.

Related posts

Broader impacts

09 May 2017

Tuesday Crustie: Say a prayer

Australian crayfish are often like the country itself: big, brash, and highly charismatic. This newly discovered crayfish is a fine example of that.

Meet Euastacus vesper. As the Australians say, “She’s a beauty.”

Sadly, the authors expect this species is already critically endangered. Like many crayfish species, it has a tiny distribution. But in slightly more cheerful news, the authors note they are working on describing even more new species in this genus.


McCormack RB, Ahyong ST. 2017. Euastacus vesper sp. nov., a new giant spiny crayfish (Crustacea, Decapoda, Parastacidae) from the Great Dividing Range, New South Wales, Australia. Zootaxa 4244(4): 556–567. https://doi.org/10.11646/zootaxa.4244.4.6

External links

Euastacus vesper, a new Euastacus for NSW
Eustacus vesper – a NEW Euastacus for NSW


08 May 2017

Perfecting the wheel instead of reinventing it

Back in grad school I read a lot about movement analysis and dance notation, and that was when I came across this dedication of the book Choreo-graphics, by Ann Hutchinson Guest:

This book is also dedicated to those who come after and who, instead of contemplating inventing a new dance notation system, discover what has already been achieved and contribute to the art of dance by directing their energies and talents to the perfection of the best one available.

I haven’t read this book in decades, but this quote stuck with me. I think the book said something like there had been a new dance notation system proposed every four years. I could sense her mild frustration that there were so many different systems out there, and people weren’t building on previous work. They were blowing things up and starting from scratch, every. Single. Time.

I think of this quote when people suggest that we should have new scientific journals. Or new programs. Or new administrative structures. So often our reaction to finding something that we think is not performing to our expectations is to walk away from it and start over again. But I like Guest’s approach: direct energies and talents to perfecting the best ones available.


Guest AH. 1989. Choreo-graphics: A Comparison of Dance Notation Systems from the Fifteenth Century to the Present. Routledge.

01 May 2017

We do not need new journals for negative results

Experiments are intended to show that one thing affects another. But not everything affects everything else. Experiments that show “no effect” (“p > 0.05”) are often called negative results.

The general wisdom is that negative results are harder to publish than those showing an experimental manipulation did have a statistically significant effect (“p < 0.05”). Anecdotally, the paper of mine that had the longest, toughest slog to publication was one with negative results.

Is the solution to this problem to create another journal? No.

First, we already have journals in biology that specifically say in their titles that they exist to publish negative results. We have the Journal of Negative Results in BioMedicine (started 2002) and Journal of Negative Results - Ecology & Evolutionary Biology (started 2004).

Second, we have journals that, while not specifically created to accept negative results, specifically include publication of negative results in their editorial mandate. Usually, this is phrased as “reviewed only for technical soundness, not perceived importance,” and these have become known as “megajournals” (regardless of how many papers they actually publish). This format, pioneered by PLOS ONE, is still quite new. Several megajournals are less than five years old.

The age of these journals is important to consider when talking about publishing negative results. In my experience, many academics take a long time to realize when the publishing landscape has changed. For example, I have been in many discussions with scientists who are actively publishing and active on social media, yet who mistakenly believe that “open access” is synonymous with “article processing charge” (APC). This is incorrect.

It takes time to change academics’ publishing habits. Five years is not enough to see how the creation of these journals affects the publication of negative results.

And more journals are on the way. The Society for the Study of Evolution has Evolution Letters coming, and Society for Integrative and Comparative Biology has an open access journal coming (though it seems likely these will review for “impact,” not only for technical soundness).

I do realize that some journals are better at upholding this editorial standard than others. For example, sometimes PLOS ONE reviewers have sent back reviews considering “importance” of the findings, even though the journal tells them not to do that.

In biology, you probably have at least six perfectly respectable journals that happily publish negative results. This is why I contend that we do not need to create new journals for negative results. We need to use the ones we have.

I think the underlying problem with discussions of negative results is that we talk about “negative results” as though they were all scientifically the same: “no effect.” Not all negative results are equivalent; some are more interesting than others. Below is a crude first attempt to rank them.

  1. Negative results that refute strongly held hypotheses. Physicists hypothesized that space contained an aether. Nope. Harry Whittington thought the Burgess Shale fossil, Opabinia, was an arthropod. Nope. Those were just big old bunches of negative results. But they were clearly recognized as important in getting us off the wrong path.
  2. Negative results that fail to replicate an effect. These are tricky. We all recognize that replication is important, but how we react to such failures differs. Sometimes, failure to replicate is seen as important in demonstrating incorrect claims (like Rosie Redfield and others showing that GFAJ-1 bacteria, sometimes referred to as “arsenic life,” did indeed have phosphorus in their DNA rather than arsenic as initially claimed). Sometimes, failure to replicate can be dismissed as technical incompetence. (The “Tiger Woods” explanation.)
  3. “Hey, I wonder if...” (HIWI*) negative results. These are negative results that have no strong hypotheses driving the experimental outcome. Like asking, “What is the effect of gamma rays on man-in-the-moon marigolds?” Well, do you have any reason to believe that gamma rays would affect the marigolds differently than other organisms? If you don’t, negative results are deeply uninteresting.

In other words, the fact that results are negative has very little bearing on how people view their importance. The importance of the hypothesis underlying those negative results plays a much bigger role in whether people are liable to find them interesting.

That is, even if you have another journal specifically for negative results, people are still going to think some results are more interesting and publishable than others. People whose negative results fall into the HIWI category (which may be a lot of those experiments) are still going to have a rough ride in publication, even for journals that consider negative results.

External links

Garraway L. 2017. Remember why we work on cancer. Nature 543(7647): 613–615. http://dx.doi.org/10.1038/543613a (Source of the “Tiger Woods” metaphor)

* In my head, “HIWI” rhymes with “Wi-Fi.”

This post prompted by Twitter discussion with Anthony Caravaggi.

26 April 2017

You think you deserved authorship, but didn’t get it. Now what?

You’re involved in a research project. You do a lot of work. And then your name appears nowhere on the manuscript or paper in the journal.

Pop quiz, hotshot!

What do you do?

While you think about that, let me talk about practices in another field: screenwriting. I’ve argued that movie credits provide a better model for contemporary science than current authorship practices. How do you determine who wrote a movie? (What follows is based on practices in Hollywood filmmaking, as far as I know. I don’t know if practices differ in, say, Bollywood.)

Like authorship of scientific papers, screenwriting credit is complicated and somewhat cryptic to outsiders. For instance:

“Screenplay by Jeffrey Boam and Jeffrey Boam & Robert Mark Kamen. Story by Jeffrey Boam.” Why is Boam in there twice? Why are the names joined with the word “and” and an ampersand? And how is that different from “Story by”? If you haven’t looked it up, it’s baffling. Research papers play similar games with things like authorship position.

Like research teams, movie scripts can have large numbers of people working on them. Over thirty writers were involved in writing the live-action movie The Flintstones. But only three names appeared on the screen.

And, just like scientific papers, you can have disputes over credit. And here’s where academic authorship and screenwriting diverge.

Credit for movie scripts can go to arbitration. Usually, the Writers Guild of America is the final arbiter. And they have rules for determining who gets credit, although there is wiggle room for interpretation, like what “substantial” means.

In a dispute over an authorship credit for a scientific paper, there is effectively nobody to turn to for help in resolving it. On Twitter, I asked people on journal editorial boards the “Pop quiz, hotshot” question at the start of this post. Someone says they should have gotten authorship, but didn’t. Or possibly a higher placement in the author order. What do you do?

So far, I’ve had more retweets than answers.

To make matters worse, there are no widely accepted criteria for what constitutes authorship. Yes, there are the Vancouver Guidelines for paper authorship in biomedicine, but almost every time I mention them, I hear grumbling about how poor they are. Researchers either don’t know about, don’t care about, or disagree with those guidelines.

The ideal option to resolve authorship disputes, as far as I am concerned, is for the authors to talk to each other and try to resolve their differences on their own. But I suspect that once disputes rise to the point of withholding authorship, it’s going to be hard to resolve them on your own.

A trainee might try to inform the department chair of the faculty involved in the project. But increasingly, projects involve multiple faculty, and it may not be clear who the relevant person overseeing them is. It also seems unlikely that many chairs are willing to step into an authorship dispute, or, even if they are, that there is much they can do about it.

Some institutions might have a research compliance office. But because the standards for authorship are so vague, the question becomes, “What are you supposed to be complying with?” You can have a valid authorship dispute that involves no misconduct. Are compliance offices supposed to resolve differences of opinion about who deserves first author placement versus second author placement?

About the only logical step left is to appeal to the journals themselves. And the Committee on Publication Ethics has a procedure for adding authors (PDF). But if the authors don’t agree, the guidelines are to toss the ball back into the institution’s court, which is, as we just saw, problematic. And it isn’t clear whether the recommendations for journals to add authors also apply to, say, changing author order or some other kind of authorship dispute.

Out of 3,000 or so Retraction Watch posts, 177 are tagged with “Authorship issues.” For instance, here are papers published without knowledge of “the bosses.” And here is one case where a student contacted a journal. And here’s one where authors couldn’t agree on author placement.

While I haven’t gone through every entry at Retraction Watch, I am willing to bet that more retractions arise from omitted senior scientists than from omitted trainees.

And that’s a big part of the problem. There is a huge power differential between trainees and senior scientists. There seem to be few places more ripe for abuses of that power than in doling out authorship credit.

Regardless, it seems unfair and unwise to expect journals to resolve authorship disputes. There are too few standards across the community (see discontent over Vancouver Guidelines). Journals probably have no resources to investigate the facts of a dispute thoroughly. This probably means that in most cases, they will favour the senior scientist (see power differential).

I don’t know what the solution is. But I think this is a problem that is not given enough discussion. It seems likely that in many cases, trainees in disputes will be left twisting in the wind.

The moral of the story, if you are a trainee of any sort, is: Extensively discuss your expectations for authorship at the very start of any project. Be prepared to negotiate.

Hat tip to Amy Criss for COPE guidelines.

Related posts

Badges for scientific paper contributors

External links

The myth of screenwriting credits
Who gets credit for a screenplay?
A Graduate Student’s Guide to Determining Authorship Credit and Authorship Order (PDF; hat tip to Carolyn O’Meara)
Case studies in coauthorship: what would you do and why?

24 April 2017

Time is the difference between superficiality and scholarship

Many science questions emerge from a place similar to what Penn Jillette describes in this quote about people’s attitudes to video games. (Emphasis added.)

You know, when I was 15, 16, 17-years-old, I spent five hours a day juggling, and I probably spent six hours a day seriously listening to music. And if I were 16 now, I would put that time into playing video games.

The thing that old people don’t understand is – you know if you’ve never heard Bob Dylan, and someone listened to him for 15 minutes, you’re not going to get it. You are just not going to understand. You have to put in hours and hours to start to understand the form, and the same thing is true for gaming. You’re not going to just look at a first-person shooter where you are killing zombies and understand the nuances. There is this tremendous amount of arrogance and hubris, where somebody can look at something for five minutes and dismiss it. Whether you talk about gaming or 20th century classical music, you can’t do it in five minutes. You can’t listen to The Rite of Spring once and understand what Stravinsky was all about. It seems like you should at least have the grace to say you don’t know, instead of saying that what other people are doing is wrong.

The cliché of the nerdy kid who doesn’t go outside and just plays games is completely untrue. And it’s also true for the nerdy kid who studies comic books and turns into this genius, and it is also true for the nerdy kid who listens to every nerdy thing that Led Zeppelin put out. That kind of obsession in a 16-year-old is not ugly. It’s beautiful. That kind of obsession is going to lead to a sophisticated 30-year-old who has a background in that artform.

I think about this quote a lot.

It seems to me that many people who ask questions about science are working from that “they listened to Dylan for 15 minutes” background. They’ve been exposed to a few basic ideas. They’ve maybe had one or two lectures in high school about evolution. They get that reproduction is important. They get that natural selection leads to adaptation. They get “survival of the fittest.”

But they haven’t mastered the art. So they ask why human evolution has stopped (it hasn’t) or why evolution hasn’t eliminated some obviously bad trait (lots of reasons). They can’t get those nuances without having spent that time on task.

Same with people who think that half an hour of Googling constitutes “independent research” on climate change or vaccines or what have you. Sorry, that’s the equivalent of listening to The Rite of Spring once.

It’s similar to what I talked about recently: you need time to live with ideas to understand the subtleties.

Related posts

Some “light bulb moments” are controlled by dimmers, not switches

External links

Penn Jillette Is Tired Of The Video Game Bulls***