Sunday, September 23, 2007

towards a richer scientific literature

The scientific review and publication process has received increasing attention over the last ten years; internet technologies have changed the way we search and read science; open access has changed our ability to share science; and highly publicized fraud cases have reminded us that our system has inherent flaws that may prove difficult to fix. Articles in this domain discuss the positive impact of open access, the growing problem of gift authorship, and the burden on the review system caused by scientists who increasingly opt for the top-down system of paper submission (i.e. submit to the top journals first, then work your way down to lower impact factor journals as you get rejected and re-edit your manuscript). I'm typically underwhelmed by the solutions proposed in such articles, as they tend to send the message that a grass-roots revolution, "We're not gonna take it! No, we ain't gonna take it!", is needed to fix the system from the bottom up. We should tell our deans, department chairs, PIs, etc. that we don't want to be ranked by our h-index, number of citations, and journal impact factors.

But the reality is that we're all doing the best we can with the system that we have. If the system does not change, I guarantee that if I have my own lab someday, I'll submit my papers to the best journal I think they have a shot in hell of getting into. The truth is that when I submit to a good journal, I think my paper belongs there. It's just the editors who incorrectly label my work as not novel enough. It's just the reviewers, defending their territory, who incorrectly label my work as lacking rigor because I don't know that the correct term for mismatches on the end of an RNA:RNA duplex is dangling ends, not shaggy ends (I still like my term better, mystery reviewer man).

In my opinion, we have three problems with our current system:

1) editors are not qualified to judge what will be a high-impact paper
I don't care if the editor is an active scientist or a full-time editor. I don't care if he has two Nobel prizes. I don't care if he is related to Nostradamus. Besides the obvious, have-to-be-cited papers like complete genome sequences, it's impossible to know which research done today will still be important 5 years from now. So why do we make this the first hurdle to publication?

2) reviewers are helpful but are too focused on self-preservation to do the best job
Please don't make me cite your paper because, in some obscure way, you thought of my idea first. Please don't steal my result, because I can't easily identify you. Please don't nail me to a cross and treat me like an idiot, because I'm wearing a blindfold. The temptation is too strong. I've noticed this in my own reviews, so I read something to tame my ego before starting, and again before submitting, every review I write.

3) we have no good way to quickly judge papers, journals, and scientists
Impact factors and h-indices were designed to help, not hinder, science. Particularly in the USA, we strive for a meritocracy, so we need some metric for sorting journals and scientists. I think most people would agree that the GRE, LSAT, and MCAT are poor predictors of a person's graduate school potential, but what else can a medical school with 3000 applications for 30 spots do? Perhaps there's no metric we can invent that is better than the opinion of human experts, but expert panels and opinions also suck up a lot of time that could be spent doing science.

To me, most of the other issues with our current publishing process derive from these three problems. Professors schmooze with editors at conferences so that the editors will hopefully predict the future more favorably on their next submission. Reviewers reject valuable papers because impact-factor-leery editors stress their journal's high rejection rate and the importance of novelty. Professors provide and receive gift authorship because they need a high h-index, lots of citations, and visibility in big journals to keep their jobs, earn higher pay, and retain the respect of their peers.

We are only human
The writers of the US Constitution and the great economists of the world accepted our humanness and tried to develop government and market systems that thrive because of, and despite, our human attributes. Checks and balances keep the government's power in check, while elections provide change as a society's goals evolve. Free-market ideas allow for efficient prices and economic growth, while federal monetary policies keep things like inflation from getting out of hand.

How can we integrate checks and balances into scientific review?
Since transitions are often the trickiest part, let's assume we're starting over from scratch with the scientific publication process. I think we can adapt ideas from Amazon.com, Slashdot, and Digg to create a better system. People have already mentioned or even tried some of these things, but so far nothing has struck me as likely to be successful. Journals are dabbling with these ideas, trying out one or two, but it is the combination of all of them in one journal that I think has a chance of being adopted and really modernizing the publication process. For example, few people are going to use the rating system at PLoS One, because 1) it involves unnecessary work; and 2) it requires written public criticism of another scientist's work. Reason 2 alone will keep most people away, as flaky scientific egos are easily hurt, and science is a particularly bad field in which to accidentally burn your bridges. So a workable system would somehow need to compel people to comment and create an atmosphere where written criticism is the norm (and thus less dangerous; more like verbal criticism at a talk; note that good critical scientific debates do occur once in a while in the blogosphere - here's a good example on Steven Salzberg's blog).

In my opinion,

1) the new system must be comment/ratings rich
Readers can rate reviewers and papers. Similar to digg.com, readers should be able to give an article a thumbs up or a thumbs down. The final score of a paper is just the up thumbs minus the down thumbs (e.g. if 126 people like your article and 26 don't, your article has a score of 100). With these scores you can find the papers receiving the most attention (total number of up and down thumbs), the most positive attention, and the most negative attention.
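
To make the arithmetic concrete, here is a minimal sketch in Python of how such scores could be computed and sorted; the Article class and its field names are hypothetical, not part of any existing journal platform.

    # Minimal sketch of the thumbs-based scoring described above.
    # Article and its fields are illustrative, not an existing system.
    from dataclasses import dataclass

    @dataclass
    class Article:
        title: str
        thumbs_up: int
        thumbs_down: int

        @property
        def score(self) -> int:
            # net score: 126 up and 26 down gives 100
            return self.thumbs_up - self.thumbs_down

        @property
        def attention(self) -> int:
            # total attention: all votes, positive or negative
            return self.thumbs_up + self.thumbs_down

    articles = [Article("A", 126, 26), Article("B", 40, 90), Article("C", 15, 2)]

    most_attention = sorted(articles, key=lambda a: a.attention, reverse=True)
    most_positive = sorted(articles, key=lambda a: a.score, reverse=True)
    most_negative = sorted(articles, key=lambda a: a.score)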

2) reviewers and commenters are reviewed
If someone on amazon.com writes an idiotic review, there's a nice ReviewNotHelpful button you can click to help make sure fewer people waste time reading the review in the future. Slashdot has a similar, though more advanced, commenter scoring system. We need a similar button to rate the ratings in the scientific publication process.

3) the best set of reviewers in each subject category are invited to be editors
Rather than having a good-ole-boy pass the editorial torch to his former student, we can allow the hard-working, thoughtful reviewers to be our judges.
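
As a rough sketch of how ideas 2 and 3 could fit together, readers' helpful/not-helpful votes on reviews could roll up into a reviewer score, and the top-scoring reviewers in a subject area would become the editor candidates. All class and function names below are hypothetical, purely for illustration.

    # Sketch: aggregate review ratings into reviewer scores, then rank
    # reviewers to produce an editor shortlist. Purely illustrative.
    from dataclasses import dataclass

    @dataclass
    class Review:
        reviewer: str
        helpful: int = 0       # "this review was helpful" clicks
        not_helpful: int = 0   # "ReviewNotHelpful" clicks

    def reviewer_scores(reviews: list[Review]) -> dict[str, int]:
        scores: dict[str, int] = {}
        for r in reviews:
            scores[r.reviewer] = scores.get(r.reviewer, 0) + r.helpful - r.not_helpful
        return scores

    def editor_candidates(reviews: list[Review], top_n: int = 5) -> list[str]:
        scores = reviewer_scores(reviews)
        return sorted(scores, key=scores.get, reverse=True)[:top_n]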

4) the new system must be completely open
No one is anonymous and all information is public. When reviewers accept a paper for review, their names should become publicly associated with the article. When they submit their reviews, the reviews should become available for everyone to see. Each reviewer's score (determined by other people rating the reviewer) and all of their previous reviews and comments should also be available.

5) nothing is destroyed
There should be no such thing as a rejected paper that no one sees. Trash science should be labeled as such by the community review and commenting system but not deleted. One man's trash might be another man's treasure.

6) review or comment is a prerequisite to submission
Before a paper goes to the editor, all authors on the paper must review another paper in the journal. A paper with 50 authors contributes 50 reviews before going to review. A professor who slaps his name on 100 publications a year must be willing to write 100 reviews a year. If the professor has a student write the review, they will at least know they are putting their own reputation on the line, because the review is associated with the professor's name, and the review is public. If there are papers awaiting review in their subject area, authors must review one of them. Otherwise, they must comment on a certain number of reviews or papers (e.g. at least three). By forcing comments, you alleviate the laziness factor, which I think will cause other rating systems like PLoS One's to fail. We barely have enough time as it is to read a paper, let alone leave a comment on it. But if doing so is a prerequisite to publication, we'll do it. And if we know that our comments will be publicly available and associated with our names, we'll make sure not to write rubbish.
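
Here is one possible sketch of that submission gate, assuming the journal tracks each author's reviews and comments since their last submission; the Author class, the may_submit function, and the threshold of three are hypothetical, taken only from the example above.

    # Sketch of idea 6: every author must have reviewed a paper in the
    # journal, or, if nothing in their area awaits review, commented on
    # at least MIN_COMMENTS reviews or papers. Hypothetical names.
    from dataclasses import dataclass

    MIN_COMMENTS = 3  # the "at least three" comments from the example above

    @dataclass
    class Author:
        name: str
        reviews_written: int = 0   # reviews contributed since last submission
        comments_written: int = 0  # comments contributed since last submission

    def may_submit(authors: list[Author], papers_awaiting_review: bool) -> bool:
        # The paper may go to the editor only if every author has paid
        # their reviewing (or commenting) dues.
        for author in authors:
            if author.reviews_written >= 1:
                continue
            if not papers_awaiting_review and author.comments_written >= MIN_COMMENTS:
                continue
            return False
        return True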

the ranking problems we don't need to worry about
Two problems with internet rating systems are that they thrive on sensationalism and that they collect rubbish comments (e.g. YouTube comments are often just idiots making fun of the people in the video). Since a good reputation is vitally important to a scientist, we needn't worry too much about rubbish comments. I also think that scientists already have adverse reactions to flashy papers driven more by publicity than by science, so perhaps a commenting system will actually reduce sensationalism.

I've written up the details of a hypothetical journal that incorporates these scientific publishing ideas in a separate blog article.
