Sunday, September 23, 2007

Center for Contributory Science

This article describes the Center for Contributory Science (CFCS), an imaginary journal that I envision as the next generation of the scientific literature. See my previous post for the motivation behind a next-generation scientific literature.



The CFCS submission process

Submitting a paper

Scientists are encouraged to submit rigorous scientific research for publication in the journal. Choose the subject categories appropriate for your paper. Based on your chosen categories, an editor with appropriate expertise will be randomly assigned to your paper.

Authors must first review / contribute

All authors must be registered CFCS users. Before manuscript submission is finalized every author on your manuscript (including the corresponding author) must do one of the following:
  • If there are any papers in Limbo in a category the author feels qualified in, the author must review a paper. Authors on the same manuscript submission can't review the same paper. (Papers are presented oldest to newest to prevent any one paper from remaining in Limbo for too long. See the section below, "the status of a manuscript", for details on Limbo)
  • If an author claims they are not qualified to review any of the papers in their chosen categories, those papers are placed on the author's public "not qualified for" list along with an optional comment by the author (this public acknowledgment prevents people from routinely claiming they aren't qualified to review papers).
  • If the author does not feel qualified to review any of the available manuscripts in Limbo, the author must score any 3 papers in Purgatory or Heaven with a thumbs up or down and a corresponding comment to each score (see below for definitions of Purgatory and Heaven).
The above features of CFCS aim to ensure
  1. there are at least as many reviewers as there are papers (and most likely many more)
  2. authors along for the ride at least have to contribute to the review process
  3. professors can't get out of reviews (and get credit for reviews) by sending work to their students
    1. students get credit for their review work, getting their name out early
  4. if you want to submit 100 papers in a year, you must be willing to review 100 as well
For details about how the review process works at CFCS, please see the section "The CFCS Reviewer Process" below.
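
To make the contribution requirement above concrete, here is a minimal sketch of the check a submission system could run for each author before finalizing a manuscript; the function and data structures are my own illustration, not a specification:

```python
def contribution_satisfied(limbo_papers, coauthor_claims, record):
    """Return True if one author has met the pre-submission requirement.

    limbo_papers   : Limbo papers in the author's categories, oldest first
    coauthor_claims: papers already claimed for review by coauthors on this submission
    record         : dict of this author's actions ("reviewed", "not_qualified_for", "scored")
    """
    available = [p for p in limbo_papers if p not in coauthor_claims]

    # Option 1: the author reviewed one of the available Limbo papers.
    if record.get("reviewed") in available:
        return True

    # Options 2+3: the author publicly declared "not qualified" for every
    # available paper, and instead scored (thumbs plus comment) three papers
    # already in Purgatory or Heaven.
    declined_all = set(available) <= set(record.get("not_qualified_for", []))
    scored_with_comment = [s for s in record.get("scored", []) if s.get("comment")]
    return declined_all and len(scored_with_comment) >= 3
```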

Authors decide a direction for their manuscript

Upon completion of the review/contribution requirement by all of the manuscript's authors, the manuscript submission is finalized. Authors may then place their manuscript on the Purgatory track or the Heaven track (see below for details on Purgatory and Heaven).

Editors decide paper status

Editors decide whether a Purgatory-track paper goes to Purgatory and whether a Heaven-track paper goes to Limbo (papers that fail this check go to Earth; see "The CFCS editor process" below). This editorial step simply weeds out complete rubbish before it goes to review. Almost every manuscript should pass this minor screening.



The CFCS reviewer process

Reviewing a paper for CFCS works in a similar manner to most contemporary journals. However, reviews are not anonymous and are publicly visible with the manuscript upon submission of the review. Reviewers do not have to be authors; any user can do a review to earn a credit, so they can later submit a manuscript without having to review. Ideally, in the CFCS system, few if any reviewers need to be asked by the editor to review a manuscript.

In general, reviewers choose the manuscripts they want to review from the set of all manuscripts in Limbo they feel qualified to review. Each manuscript in Limbo requires four separate reviews. Upon receiving the authors' revised manuscript and response to the reviewer comments, each reviewer places a vote to decide if the manuscript belongs in Heaven or Purgatory. The manuscript goes to Heaven if at least three of the four reviewers vote for Heaven. In the case of a 2-2 tie, the editor holds the tie-breaking vote, which they cast after reading all four reviews.
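
As a sketch of the vote-counting rule just described (four reviewer votes, a three-of-four majority for Heaven, and the editor breaking a 2-2 tie), here is a minimal, illustrative decision function; the names are mine, not part of any real system:

```python
def final_destination(reviewer_votes, editor_vote=None):
    """Decide where a reviewed manuscript goes.

    reviewer_votes: list of exactly four votes, each "heaven" or "purgatory"
    editor_vote   : the editor's vote, consulted only on a 2-2 tie
    """
    assert len(reviewer_votes) == 4
    heaven_votes = reviewer_votes.count("heaven")
    if heaven_votes >= 3:
        return "heaven"
    if heaven_votes <= 1:
        return "purgatory"
    # 2-2 split: the editor reads all four reviews and casts the deciding vote.
    return editor_vote
```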

All reviews and the authors' responses to the reviews are publicly available alongside the final manuscript. Both the original and the revised manuscript drafts are available as well.



The CFCS editor process

Editors are either
  1. reviewers whose quality reviews have gained them a large reviewer impact score and who agree to the job
  2. invited editors (if there aren't enough high ranked reviewers)

Editors decide if a Purgatory-track paper goes to Purgatory or is sent to Earth. Editors decide if a Heaven-track paper goes to Limbo or is sent to Earth. The main job of the editor is to eliminate rubbish (pseudoscience and just plain bad science). Editors must also decide if the subject categories selected by the authors are appropriate. Most importantly, editors hold the tie-breaking vote when the four reviewers split two votes for Heaven and two for Purgatory. In cases with no tie, the reviewers alone decide the final destination of the manuscript.


The CFCS user process

Any registered user can score and comment on any papers, comments, and reviews besides their own. A reader cannot score a paper, comment, or review without leaving a comment to explain their score. Scores and comments are publicly available with the manuscript and on the user's CFCS page.

All users have a reviewer impact score, a comment impact score, and an author impact score.



The status of a manuscript

The status of a paper follows two of the key ideas of CFCS: 1) information is always public; and 2) information is never deleted. Everything that happens to a paper on its route to Heaven is recorded and posted for all to see. All reviewer comments, all responses to reviewer comments, and both versions of the manuscript are available for download.

A publication search in CFCS can be limited to certain types of papers (for example, to return only peer-reviewed work) or it can draw from the entire CFCS library.

Heaven

Heaven is the pinnacle of CFCS. Manuscripts in Heaven have been peer-reviewed by four reviewers, the authors have responded to the reviewer comments to improve their manuscript, and the manuscript received a majority Heaven vote from the reviewers. Voting is carried out by the four reviewers plus the editor. The votes do not become public (to the reviewers or the editors) until all the votes are in (to prevent biased voting). Papers in Heaven are charged a modest processing fee so they can be uniformly typeset in the style of the journal. Typeset papers are submitted to PubMed. The four reviewers set the initial paper impact score with their votes. These initial seed scores count double the normal reader-submitted score. Once a manuscript enters Heaven, it can be scored and commented on by all readers of CFCS to adjust its impact score.
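
Concretely, the seeding rule above might work something like the minimal sketch below, which assumes each reviewer vote is worth two ordinary reader thumbs (the double weighting is the only rule stated above; the function and data layout are my own illustration):

```python
def paper_impact_score(reviewer_votes, reader_scores, seed_weight=2):
    """Paper impact score for a manuscript in Heaven.

    reviewer_votes: the four reviewers' "heaven"/"purgatory" votes, used as the seed;
                    each counts double an ordinary reader score (seed_weight = 2)
    reader_scores : +1 / -1 thumbs from CFCS readers after publication
    """
    seed = sum(seed_weight if v == "heaven" else -seed_weight for v in reviewer_votes)
    return seed + sum(reader_scores)
```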

Limbo

Heaven-track manuscripts are initially sent to Limbo for peer review. A manuscript remains in Limbo until it has received the necessary number of reviews, the authors have responded to those reviews, and it has been voted into Heaven. Failure to respond to the reviewers within a fixed time, or failure to receive a majority vote, sends the manuscript to Purgatory.

Purgatory

Purgatory-track manuscripts only need to pass the editor's inspection (otherwise they go to Earth). Purgatory is an option for works whose authors don't want to go through peer review. Examples of good pieces for Purgatory include review articles and reports of failed experiments. Upon entering Purgatory, the manuscript can be scored by all CFCS readers to determine its paper impact score.

Earth

Manuscripts not passing the editor's initial quality screen go to Earth. Authors get one petition to get out of Earth and back into Limbo or Purgatory.

Hell

Manuscripts discovered to be fraudulent go to Hell. (Perhaps papers where the equation-to-word ratio is greater than one belong here too?)


Definitions
  • score: a vote by a CFCS reader; a score can be positive (thumbs-up) or negative (thumbs-down); reviewer comments, reader comments, and manuscripts can all be scored by all CFCS readers; all scores must be accompanied by a comment where the reader explains their reasoning for the score
  • review: similar to the current scientific literature, a review in CFCS aims to strengthen the quality, rigor, and focus of the submitted manuscript; reviews are publicly viewable with the manuscript, as is the authors' response to each review
  • comment: a comment is a CFCS reader's written opinion of a manuscript, review, or another person's comment
  • reviewer impact score: for each individual, this metric is the number of positive scores minus the number of negative scores given by CFCS users to all of the reviews written by that individual
  • comment impact score: for each individual, this metric is the number of positive scores minus the number of negative scores given by CFCS users to all of the comments written by that individual
  • paper impact score: for each manuscript, this metric is the number of positive scores minus the number of negative scores given by CFCS users to that particular manuscript
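
Since all three impact scores are just net thumb counts, a single tally covers them; below is a minimal sketch under an assumed record format (the item ids, author mapping, and +1/-1 score values are my own illustration):

```python
from collections import defaultdict

def impact_scores(items, scores):
    """Net thumbs for one kind of impact score.

    items : maps item_id -> owner (the reviews, comments, or papers of interest)
    scores: iterable of (item_id, value) pairs, where value is +1 or -1

    Summing over a user's reviews gives their reviewer impact score; summing
    over their comments gives their comment impact score; tallying a single
    manuscript's scores gives its paper impact score.
    """
    totals = defaultdict(int)
    for item_id, value in scores:
        if item_id in items:
            totals[items[item_id]] += value
    return dict(totals)
```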

towards a richer scientific literature

The scientific review and publication process has received increasing attention over the last ten years: internet technologies have changed the way we search and read science; open access has changed our ability to share science; and highly publicized fraud cases have reminded us that our system has inherent flaws that may prove difficult to fix. Articles in this domain discuss the positive impact of open access, the growing problem of gift authorship, and the burden on the review system caused by scientists who increasingly opt for the top-down system of paper submission (i.e. submit to the top journals first and then to journals of progressively lower impact factor as you get rejected and re-edit your manuscript). I'm typically underwhelmed by the solutions proposed by such articles, as they tend to send the message that a grassroots revolution, "We're not gonna take it! No, we ain't gonna take it!", is needed to fix the system from the bottom up. We should tell our deans, department chairs, PIs, etc. that we don't want to be ranked by our h-index, number of citations, and journal impact factors.

But the reality is that we're all doing the best we can with the system that we have. If the system does not change, I guarantee that if I have my own lab someday, I'll submit my papers to the best journal I think they have a shot in hell of getting into. The truth is that when I submit to a good journal, I think that my paper belongs there. It's just the editors that incorrectly label my work as not novel enough. It's just the reviewers, defending their territory, that incorrectly label my work as lacking rigor because I don't know that the correct term for mismatches on the end of an RNA:RNA duplex is dangling ends, not shaggy ends (I still like my term better, mystery reviewer man).

In my opinion, we have three problems with our current system:

1) editors are not qualified to judge what will be a high impact paper
I don't care if the editor is an active scientist or a full time editor. I don't care if he has two Nobel prizes. I don't care if he is related to Nostradamus. Besides the obvious, have-to-be-cited papers like complete genome sequences, it's impossible to know what research done today will still be important 5 years from now. So why do we make this the first hurdle to publication?

2) reviewers are helpful but are too focused on self-preservation to do the best job
Please don't make me cite your paper because, in some obscure way, you thought of my idea first. Please don't steal my result, because I can't easily identify you. Please don't nail me to a cross and treat me like an idiot, because I'm wearing a blindfold. The temptation is too strong. I noticed this in my own reviews, so I read something to tame my ego before starting and before submitting every review I write.

3) we have no good way to quickly judge papers, journals, and scientists
Impact factors and h-indexes were designed to help, not hinder, science. Particularly in the USA, we strive for a meritocracy. Thus, we need some metric for sorting journals and scientists. I think most people would agree that the GRE, LSAT, and MCAT are poor predictors of a person's graduate school potential, but what else can a medical school with 3000 applications for 30 spots do? Perhaps there's no metric we can invent that is better than the opinion of human experts, but expert panels and opinions also suck a lot of time that could be used to do science.

To me, most of the other issues with our current publishing process derive from these three problems. Professors schmooze with editors at conferences, so that the editors will hopefully predict the future more favorably on their next submission. Reviewers reject valuable papers because impact-factor-leery editors stress their journal's high rejection rate and the importance of novelty. Professors provide and receive gift authorship, because they need a high h-index, lots of citations, and visibility in big journals to keep their jobs, get higher pay, and retain the respect of their peers.

We are only human
The writers of the US Constitution and the great economists of the world accept our humanness and try to develop government and market systems that thrive because of, and despite, our human attributes. Checks and balances keep the government's power in check, while elections provide change as a society's goals evolve. Free-market economic ideas allow efficient prices and economic growth, while federal monetary policies keep things like inflation from getting out of hand.

How can we integrate checks-and-balances into scientific review?
Since transitions are often the trickiest part, let's assume we're starting over from scratch with the scientific publication process. I think we can adapt ideas from Amazon.com, Slashdot, and Digg to create a better system. People have already mentioned or even tried some of these things, but so far nothing has struck me as likely to be successful. Journals are dabbling with these ideas, trying out one or two, but it is the combination of all of them in one journal that I think has a chance of adoption and of really modernizing the publication process. For example, few people are going to use the rating system at PLoS One, because 1) it involves unnecessary work; and 2) it requires written public criticism of another scientist's work. Reason 2 alone will keep most people away, as flaky scientific egos are easily hurt, and science is a particularly bad field in which to accidentally burn your bridges. So a workable system would somehow need to compel people to comment and create an atmosphere where written criticism is the norm (and thus less dangerous; more like verbal criticism at a talk; note that good critical scientific debates do occur once in a while in the blogosphere - here's a good example on Steven Salzberg's blog).

In my opinion,

1) the new system must be comment/ratings rich
Readers can rate reviewers and papers. Similar to digg.com, readers should be able to give an article a thumbs up or a thumbs down. The final score of a paper is simply the number of thumbs up minus the number of thumbs down (e.g. if 126 people like your article and 26 don't, your article has a score of 100). With these scores you can find the papers receiving the most attention (total thumbs, up plus down), the most positive attention, and the most negative attention.
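
A tiny sketch of the scoring and ranking just described (the article names and numbers are made up for illustration):

```python
def article_metrics(up, down):
    """Score and attention for one article: e.g. 126 up and 26 down
    gives a score of 100 and an attention of 152."""
    return {"score": up - down, "attention": up + down}

# Rank a set of articles three ways (hypothetical counts).
articles = {"A": (126, 26), "B": (40, 55), "C": (300, 290)}
metrics = {name: article_metrics(u, d) for name, (u, d) in articles.items()}
most_attention = max(metrics, key=lambda n: metrics[n]["attention"])  # "C"
most_positive  = max(metrics, key=lambda n: metrics[n]["score"])      # "A"
most_negative  = min(metrics, key=lambda n: metrics[n]["score"])      # "B"
```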

2) reviewers and commenters are reviewed
If someone on amazon.com writes an idiotic review, there's a nice ReviewNotHelpful button you can click to make sure fewer people waste time reading the review in the future. Slashdot has a similar, though more advanced, commenter scoring system. We need a similar button to rate the ratings in the scientific publication process.

3) the best set of reviewers in each subject category are invited to be editors
Rather than having a good ole boy pass the editorial torch to his former student, we can allow the hard-working, thoughtful reviewers to be our judges.

4) the new system must be completely open
No one is anonymous and all information is public. As reviewers accept a paper for review, their name should become publicly associated with the article. When they submit their review, the review should become available for everyone to see. The reviewer's score (determined by other people rating the reviewer) and all of their previous reviews and comments should also be available.

5) nothing is destroyed
There should be no such thing as a rejected paper that no one sees. Trash science should be labeled as such by the community review and commenting system but not deleted. One man's trash might be another man's treasure.

6) review or comment is a prerequisite to submission
Before a paper goes to the editor, all authors on the paper must review another paper in the journal. A paper with 50 authors contributes 50 reviews before going to review. A professor who slaps his name on 100 publications a year must be willing to write 100 reviews a year. If the professor has a student write the review for them, they will at least know they are putting their own reputation on the line, because the review is associated with the professor's name, and the review is public. If there are papers awaiting review in their subject area, authors must choose one of them. Otherwise, they must comment on a certain number of reviews or papers (e.g. at least three). By forcing comments, you alleviate the laziness factor, which I think will cause other rating systems like the one at PLoS One to fail. We barely have enough time as it is to read a paper, let alone leave a comment on it. But if doing so is a prerequisite to publication, we'll do so. And if we know that our comments will be publicly available and associated with our name, we'll make sure not to write rubbish.

the ranking problems we don't need to worry about
Two problems with internet rating systems are that they thrive on sensationalism and that they collect rubbish comments (e.g. YouTube comments are often just idiots making fun of the people in the video). Since a good reputation is vitally important to a scientist, we needn't worry too much about rubbish comments. I also think that scientists are already averse to flashy papers driven more by publicity than by science, so perhaps a commenting system will actually reduce sensationalism.

I've written up the details of a hypothetical journal that incorporates these scientific publishing ideas in a separate blog article.

What I read before I write a review

Writing an anonymous scientific review can make even the tamest human take a jab or two at their blindfolded peer. Because of this, I'm a fan of moving towards open peer review, where we can treat each other like humans.

I noticed this aggressive tendency in myself when I first started being asked to write reviews four years ago. To make sure I don't step beyond where I'd like to be as a reviewer (i.e. critical and honest but not aggressive), I read the following text before starting and before submitting every review.

When reviewing papers

  • don't be evil
  • start with a compliment
    • say the positive general comments before you say the negative general comments. If you don't have positive comments, read it again. The editor probably wouldn't give you total crap.
  • don't nitpick too much just to feel powerful
  • try to say things you'd like to be told if it were your paper (i.e. comments to strengthen the manuscript not belittle the authors)
  • number the comments so the authors can easily refer to them if they resubmit
  • don't be evil

Saturday, September 1, 2007

Effect of sequence level mutations on transcription, translation, and noise

One of the main biological questions explored when DNA sequencing first became a practical laboratory technique was how the nucleotides in a gene's promoter and ribosome binding site define the gene's interactions with transcription factors and the translation apparatus. At least in prokaryotes, these interactions largely determine the levels of transcript and protein available for each gene, and thus provide crucial information about how a genome regulates itself.

This early work resulted in many of the promoter analysis tools that are still widely used today. In particular, promoters were often analyzed in terms of information content, and this information content was visualized using sequence logos. These sequence logos are still the most popular way to display DNA binding sites. I'm not sure why this field tapered off a little. My guess is that the people in this field had maxed out the information that was affordably obtainable with the available technologies.
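
For readers who haven't seen the calculation, this is a minimal sketch of the per-position information content behind a DNA sequence logo, omitting the small-sample correction used in the original work:

```python
import math
from collections import Counter

def position_information(aligned_sites):
    """Per-position information content (bits) for aligned DNA binding sites.

    Each position can carry up to 2 bits; a sequence logo draws letters whose
    total height at a position equals this value.
    """
    info = []
    for column in zip(*aligned_sites):
        counts = Counter(column)
        n = len(column)
        entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
        info.append(2.0 - entropy)
    return info

# e.g. position_information(["TATAAT", "TATGAT", "TACAAT"])
```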

But as most biotech-loving biologists know, the times they are a-changin' in biotech, and we have new sequencing technologies that enable drastically larger sequencing studies to be undertaken. Importantly, we are faced with several quite different sequencing methods (unlike the previous era of sequencing biotech, which was almost exclusively driven by ABI's advances in Sanger sequencing). I think with these new technologies coming online, it's time we dusted off our promoters and started figuring out how they work.

What we still must learn about promoters

For several promoters (ideally for all promoters), we need to exhaustively determine how base-pair changes in the promoter lead to changes in the amount of transcription and translation. We must determine this information across time, so we can also determine the rates of transcription and translation. Finally, we must determine these values at the level of single-cells, so that we can also obtain information about the noise inherent in each promoter sequence.

In the early 90s, these types of analyses were at least partially undertaken with populations of cells and 100-200 promoter mutations. Now we must study several promoters, with millions of different mutations for each promoter, and with multiple single-cell replicates of each mutation so we can estimate noise.

What is this new level of promoter knowledge good for?

We need to understand to what extent it is possible to build a computational model that predicts translation and transcription from sequence alone. Such a model could act like a molecular biologist's version of Hardy-Weinberg equilibrium: if a promoter does not fit the model, it would suggest that there is some additional regulation (e.g. a small RNA) that is not explained by the binding of transcription factors and the ribosome. In addition, the ability to screen vast numbers of promoter variants could be of huge value to forward biological engineering. Synthetic biologists often tune their human-designed networks using directed evolution. While directed evolution is a very powerful and massively parallel way to optimize a genetic system, the human who created the system in the first place has limited control over the final result. For example, it may be that the network evolved to generate ethanol from cellulose is extremely noisy and could be made more efficient by fine-tuning its promoters in a more intelligent design.
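
One crude way to ask whether such a predictive model exists is to regress measured expression on a one-hot encoding of the promoter sequence and then look for poorly predicted promoters; this is only an illustrative baseline, not a claim about the right model, and it assumes numpy and scikit-learn are available:

```python
import numpy as np
from sklearn.linear_model import Ridge

BASES = "ACGT"

def one_hot(seq):
    """Flatten a promoter sequence into a length-4*L one-hot vector."""
    x = np.zeros((len(seq), 4))
    for i, base in enumerate(seq):
        x[i, BASES.index(base)] = 1.0
    return x.ravel()

def fit_expression_model(promoters, expression):
    """Fit a simple additive (per-base) model of expression from sequence.

    promoters : list of equal-length promoter variants
    expression: measured transcript or protein level for each variant
    Promoters the fitted model predicts poorly are candidates for regulation
    the model does not capture (e.g. a small RNA).
    """
    X = np.stack([one_hot(p) for p in promoters])
    return Ridge(alpha=1.0).fit(X, np.asarray(expression))

# residuals = expression - model.predict(X) would flag poorly explained promoters
```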

An idea for a massively parallel method to determine the effect of sequence level mutations on transcription, translation, and noise

I think the tools are already available to determine the effect of sequence level mutations on transcription, translation, and noise in single-cells. One approach I've thought about is shown in the figure on the right (click to see an easier to read/print pdf version of the figure). The idea takes a cell-in-emulsion approach (see Directed evolution of polymerase function by compartmentalized self-replication) and combines it with the polony sequencing method pioneered in the Church lab (see Accurate multiplex polony sequencing of an evolved bacterial genome).

The first step (top right) is to synthesize a known promoter with a large number of random nucleotides. This is very similar to the method used by Stormo's lab many years ago (see Quantitative analysis of ribosome binding sites in E.coli), except that with modern sequencing methods we can drastically increase the number of random sites that we explore. A GFP reporter is placed directly after the promoter so that we can measure the amount of protein generated. Since the vast majority of mutations will probably result in little to no expression, it may be useful to also add a bactericidal antibiotic resistance gene after the GFP to provide an easy way to get rid of unproductive promoters (for some studies, you would probably not want to remove these low-output promoters).

The second step (top left) is to take a Dynal bead and attach a primer to amplify the promoter, a second primer to amplify the GFP sequence, and an anti-GFP antibody.

Next, the bead and the bacteria are placed together into an emulsion. In the emulsion solution, we also need to include reverse primers for the promoter and GFP sequence, reverse transcriptase, and PCR reagents. You would need to mess around a little with the dilutions and concentrations of beads and cells to maximize the case where you have only one bead and one cell in each emulsion.

Now we have the cells isolated into separate chambers, each cell ideally sharing its chamber with one bead. This bead will provide the source of our future information readout. Also remember that by synthesizing our promoters with N's, we have actually generated a huge library of different promoters, so each emulsion will have a different variant of our promoter. We then lyse the cells. I'm not sure of the best way to lyse the cells; in the figure, I just assumed we used extreme heat. Because of the next step, it may be wise to use a gentler method, such as placing a protein that causes cell lysis (e.g. lysozyme or ccdB) under a heat-inducible promoter, so you'd only need to heat the cells up to 42C rather than 95C. Once the cells are lysed, the GFP expressed from the synthetic promoter should diffuse around the emulsion until it meets and binds the anti-GFP antibody attached to our Dynal bead.

We've got the protein on our bead; now we need to attach the DNA. Since one of the things we want to measure is transcript (mRNA) concentration, we need to run a reverse transcription reaction. Reverse transcriptase is not very heat stable, which is why I stressed above that we might want to lyse our cells more gently than by heating them to 95C. However, Superscript III from Invitrogen is pretty heat stable, so that might be worth a shot too. Since we include a reverse primer for our GFP sequence in the emulsion, we should get a fairly specific reverse transcription.

Finally, we need to attach the DNA to our bead; we can do so by running a multiplex PCR reaction for a few cycles. Since the forward primers are on the Dynal bead, the PCR reaction results in the DNA being stuck to the bead.

And now for the fun part: let's measure protein concentration, measure transcript concentration, and determine the promoter sequence for our single cells (bottom row of the figure). To do this we lay our beads out on a microscope slide or some type of microfluidic device. We can measure the protein concentration directly as GFP fluorescence. Next we measure the transcript concentration as the amount of GFP cDNA attached to the bead. For increased accuracy, we might want to measure this concentration using cycled PCR reactions (like qPCR on a microscope). The concentrations of the promoter sequence and GFP sequence attached to the bead can be measured in each round using two different molecular beacons. The concentration of promoter sequence can be used to normalize the concentration of the transcript (i.e. to reduce artifacts due to variance in emulsion size and emulsion PCR efficiency, and to remove the background transcript signal that comes from amplification of the DNA rather than the cDNA). This bead-based DNA quantitation can borrow some ideas from the BEAMing method (see BEAMing: single-molecule PCR on microparticles in water-in-oil emulsions).
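
As a sketch of the per-bead normalization just described (the variable names, and the idea of estimating the DNA-derived background from a separate control, are my assumptions):

```python
def normalized_transcript(gfp_cdna_signal, promoter_dna_signal, dna_background=0.0):
    """Estimate relative transcript level for one bead.

    gfp_cdna_signal    : molecular-beacon signal for the GFP cDNA/DNA on the bead
    promoter_dna_signal: molecular-beacon signal for the promoter DNA on the bead
    dna_background     : portion of the GFP signal expected from DNA rather than
                         cDNA (hypothetically estimated from a no-RT control)

    Dividing by the promoter signal corrects for differences in emulsion size
    and emulsion PCR efficiency between beads.
    """
    corrected = max(gfp_cdna_signal - dna_background, 0.0)
    return corrected / promoter_dna_signal
```
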
Now that we've measured protein concentration and transcript concentration, it is time to determine the promoter sequence responsible for these concentrations. In some ways, this is the most difficult step. But in practice, it may be the easiest step, as the polony sequencing method does exactly that.

With the size of the beads and the massively parallel nature of this protocol, it should be possible to have the same sequence appear multiple times, allowing the estimation of noise for the tested promoters.
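
A sketch of how noise could then be estimated by grouping beads that carry the same promoter sequence; the coefficient of variation is one common noise measure, and the data layout here is my assumption:

```python
from collections import defaultdict
from statistics import mean, stdev

def promoter_noise(beads):
    """Group single-cell (single-bead) measurements by promoter sequence.

    beads: iterable of (promoter_sequence, protein_level) pairs
    Returns, for each promoter seen at least twice, the mean protein level
    and its coefficient of variation (a common measure of expression noise).
    """
    by_promoter = defaultdict(list)
    for seq, protein in beads:
        by_promoter[seq].append(protein)

    stats = {}
    for seq, values in by_promoter.items():
        if len(values) >= 2:
            m = mean(values)
            stats[seq] = {"mean": m, "cv": stdev(values) / m if m else float("inf")}
    return stats
```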

Again, I haven't tried any of this stuff, and I'm not sure it'll work. I just wanted to throw the idea out there in case someone else is thinking about this problem too.

Open questions with this idea
  1. how quantitative is emulsion PCR, and how is noise influenced by the size of the emulsion?
    1. can we increase the quantitative accuracy of our mRNA concentration by running very few emulsion PCR cycles and then running a microscope-based qPCR on our beads?
  2. how strongly does the GFP bind to our bead? (e.g. when we break the emulsions, can GFP move from one bead to the next? we can test this by using mCherry in one sample, GFP in another, and then breaking the emulsions together to see if any beads end up with both proteins attached)
  3. is crowding on the bead going to cause problems? that bead has a lot on it. does this bias our results in unpredictable ways?
  4. there are a lot of steps; long protocols can lead to excess experimenter-derived error and slow the technique's adoption