Research in progress: this post describes some of my ongoing research. The raw data and all experimental details are updated daily in J's Lab Notebook, in the chapter entitled "Towards a faster, more reliable ChIP protocol."

The following text is largely taken from my PhD Oral Qualifier. I tried to blogify it a little, but it is still a little formal for a blog post. I also don't have many citations. Appropriate citations will be in the published version if I complete this project (if you have opinions about how we should deal with citations in very preliminary results, please post a comment - I'd like to hear your opinion).
Short Version of What I'm trying to do: Chromatin immunoprecipitation (ChIP) is often used to experimentally verify or discover transcription factor binding sites. In my experience, ChIP is lengthy, costly, and noisy. I'm trying to use statistical experimental design techniques to shorten, cheapen, and reduce the noise of the ChIP procedure. I'd really like it if ChIP were simple enough to become a standard technique that all experimentalists learn (i.e. like a miniprep or PCR), so we can really start to determine the transcriptional regulatory network structure of many organisms.
In general, I think there is a lot of unnecessary folklore in our experimental procedures, and the methods I'm applying here would be applicable to almost any experimental protocol optimization - if broadly applied, experimental biology would be a much less time-consuming endeavor.
Longer Version of What I'm trying to do:

We plan to optimize and shorten the chromatin immunoprecipitation (ChIP) protocol for in vivo validation of transcription factor targets. Verifying a transcription factor's genomic binding regions with ChIP requires: 1) fixing the transcription factor to the regions of the genome it binds via a crosslinking agent like formaldehyde, 2) cell lysis, 3) chromatin shearing (to enable isolation of only the small regions of DNA bound by the transcription factor), and 4) multiple washes to remove background noise [1]. Once the ChIP procedure is complete, the DNA bound by the transcription factor should be enriched relative to unbound DNA. This enrichment can be assayed by qPCR, microarray, or DNA sequencing (less common), providing confirmation of the transcription factor binding sites (and therefore, presumably, the gene targets of the transcription factor).
ChIP is used by numerous labs across many model organisms, yet the ChIP protocol is anything but standardized; ChIP protocols are as numerous as the investigators using the technique, suggesting that we are far from an optimal protocol. The ChIP protocol we previously used to validate network inference targets in E. coli [2] required almost a week of long experimental days to go from cells to verified transcription factor targets. Because of this length, the procedure is error-prone and tractable only for the most experienced bench scientists. We aim to use modern statistical methods of experimental design to optimize the ChIP protocol [3]. In particular, we will use fractional factorial designs to screen for unnecessary steps that can be removed to shorten the protocol. In addition, we will optimize the protocol steps that have the most significant influence on the enrichment of known transcription factor targets, improving the signal-to-noise ratio of the ChIP procedure.
Successful completion of this work will result in a markedly shorter and more effective ChIP protocol for verifying transcription factor targets. The new protocol will make verification of transcription factor binding sites approachable and practical for a wider range of bench scientists, promoting the experimental validation of future network inference predictions. In addition, the knowledge gained by an in-depth analysis of the ChIP technique will help optimize the protocol for different tasks, such as highly parallel sequencing of ChIP DNA for transcription factor target discovery. Finally, this ChIP protocol optimization highlights the untapped gains in experimenter efficiency that these statistical methods could unleash on molecular biology if they were broadly applied to experimental protocols.
Background:
Most experimental protocols can be represented mathematically as y = f(q), where y is the product resulting from the protocol and q are the parameters of the protocol. In a PCR experiment, for example, y would represent the yield of DNA (e.g. in micrograms), while q represents the parameters of the reaction (e.g. concentrations of template, primers, magnesium chloride, etc.). The statistics of experimental design contains numerous methods to expedite the empirical optimization of y through the intelligent exploration of q (for two excellent books on experimental design see [3,4]).
Fractional factorial methods. For each experimental protocol, there are thousands of parameters (q) whose values could be altered in an infinite number of combinations to potentially optimize the protocol output (y). For example, with PCR we could alter the melting temperature, the duration at the melting temperature, the amount of each primer, and the variant of Taq. On another level, changing the tubes, the pipettes, the PCR machine, or the experimenter could also lead to changes in the output, y, of our PCR reaction. The first step in experimental design is to identify the parameters that contribute most to the output, so that they can be further optimized.

Fractional factorial methods provide an efficient way to screen these parameters (note: parameters are termed factors in experimental design). Traditional factor screening methods take a one-at-a-time approach. For example, to optimize a PCR protocol, you might try the reaction with and without DMSO, with various concentrations of magnesium chloride, or with different annealing temperatures. Reliable determination of the effect of each of these factors (q_i) on the PCR output (y) requires several replicates for each tested factor level. Because of this replication, a large number of experiments is required to test a small number of factors with a one-at-a-time approach. Fractional factorial methods screen many factors at the same time and remove the need for time-consuming and expensive replication. An example fractional factorial design for optimizing a PCR protocol might look like:
| annealing temp | primer concentration | hot start | extension time |
|----------------|----------------------|-----------|----------------|
| 56C | 150 nM | no | 30 seconds |
| 62C | 150 nM | no | 90 seconds |
| 56C | 600 nM | no | 90 seconds |
| 62C | 600 nM | no | 30 seconds |
| 56C | 150 nM | yes | 90 seconds |
| 62C | 150 nM | yes | 30 seconds |
| 56C | 600 nM | yes | 30 seconds |
| 62C | 600 nM | yes | 90 seconds |
For efficiency reasons, factors in factorial designs are typically sampled at only two levels. Experimenter intuition plays a role in these designs via the selection of the initial set of factors to screen and the selection of the values of the two levels to test for each factor.
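To make the construction concrete, here is a minimal Python sketch (my own illustration, not part of the protocol work) that generates this eight-run half-fraction. The defining relation D = ABC is my assumption about how the table above would be built, though its rows do satisfy it:

```python
import itertools

# Two levels for each of the four PCR factors from the table above.
levels = {
    "annealing_temp": ("56C", "62C"),
    "primer_conc":    ("150 nM", "600 nM"),
    "hot_start":      ("no", "yes"),
    "extension_time": ("30 seconds", "90 seconds"),
}

# A 2^(4-1) half-fraction: enumerate the full 2^3 factorial for the first
# three factors (coded -1/+1), then set the fourth factor's level to the
# product of the first three (assumed defining relation D = ABC).
runs = []
for a, b, c in itertools.product((-1, 1), repeat=3):
    runs.append((a, b, c, a * b * c))

names = list(levels)
for run in runs:
    # Map each coded level (-1 or +1) back to the actual factor setting.
    print({n: levels[n][(code + 1) // 2] for n, code in zip(names, run)})
```

Because this half-fraction aliases each main effect only with a three-factor interaction (a resolution IV design), main effects can still be estimated cleanly when higher-order interactions are negligible.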
The result of a fractional factorial can be represented in a table listing the effect size and p-value for each tested factor. For example, an analysis of our PCR fractional factorial data might yield:
| factor | effect (change in µg) | p-value |
|--------|-----------------------|---------|
| annealing temp | 27 | 0.001 |
| primer concentration | -1 | 0.6 |
| hot start | 2 | 0.5 |
| extension time | 10 | 0.05 |
From the results in the table above, the experimenter might decide to focus their efforts on further optimizing the annealing temperature to increase the PCR yield, rather than on the three other tested factors that had little effect on the output.
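For illustration, here is a hedged sketch of how such effects might be estimated: in a two-level design, a factor's main effect is simply the mean output at its high level minus the mean at its low level. The yields below are invented numbers, chosen so that annealing temperature dominates:

```python
import numpy as np

# Coded design matrix (-1/+1) for the eight runs: columns are annealing temp,
# primer concentration, hot start, extension time (assumed D = ABC fraction).
X = np.array([(a, b, c, a * b * c)
              for a in (-1, 1) for b in (-1, 1) for c in (-1, 1)])

# Hypothetical PCR yields (micrograms) for the eight runs, for illustration.
y = np.array([7.2, 6.8, 7.5, 6.9, 33.1, 33.4, 32.8, 33.6])

names = ["annealing temp", "primer conc", "hot start", "extension time"]
for j, name in enumerate(names):
    # Main effect = mean yield at the high level minus mean at the low level.
    effect = y[X[:, j] == 1].mean() - y[X[:, j] == -1].mean()
    print(f"{name:>15}: effect = {effect:+.1f} ug")
```

Note that each main-effect estimate averages over four high-level and four low-level runs, which is how the design recovers the replication that one-at-a-time testing would otherwise require.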
Response surface methods. In a localized region, our function of interest y = f(q) can be fit using first-order (linear) and second-order models. Fitting these models allows us to obtain a prediction of the parameter landscape of our function. Response surface methods use these models to estimate the most efficient path to the peak of the model (i.e. the maximum value of y). It is at this peak that our experimental protocol is optimized (or at least locally optimal). Response surface methods are relatively time-consuming, so fractional factorial methods are typically used to screen for factors to be later optimized by response surface methods.
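As a sketch of the second-order step (the design points and responses below are hypothetical, and the two-factor scope is my own simplification), one can fit a quadratic model by least squares and solve for the stationary point where the fitted surface's gradient vanishes:

```python
import numpy as np

# Hypothetical (x1, x2) settings in coded units and measured responses y.
x1 = np.array([-1, 1, -1, 1, 0, 0, 0, -1.4, 1.4])
x2 = np.array([-1, -1, 1, 1, 0, -1.4, 1.4, 0, 0])
y  = np.array([8.2, 9.1, 9.0, 9.6, 10.4, 8.8, 9.3, 8.5, 9.2])

# Design matrix for the full quadratic model:
# y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
b = np.linalg.lstsq(X, y, rcond=None)[0]  # b0, b1, b2, b11, b22, b12

# Stationary point: set both partial derivatives to zero and solve
# [2*b11  b12 ] [x1]   [-b1]
# [ b12  2*b22] [x2] = [-b2]
A = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
xs = np.linalg.solve(A, -b[1:3])
print("stationary point (coded units):", xs)
```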
Research Plan:
For our ChIP protocol, we want to optimize the enrichment, y, of DNA bound to our transcription factor of interest. At the same time, we want to shorten the protocol as much as possible, so that the laborious protocol becomes more manageable. For this study, we will calculate y as the change in enrichment of genes known to be bound by our transcription factor relative to the enrichment of randomly chosen genes (which are presumably not bound by our transcription factor). We calculate this relative enrichment from qPCR data. For each known target gene and each random gene, we first calculate its enrichment from an immunoprecipitation reaction with and without antibody as N_i = log((1 + E_i)^(U_i - C_i)), where E_i is the median efficiency of the PCR primers for gene i, C_i is the qPCR Ct value for the DNA enriched using the correct antibody for the transcription factor regulating gene i, and U_i is the qPCR Ct value for the DNA enriched without using an antibody. We then calculate the increase in enrichment of our known targets relative to the random targets as y = mean(N_k) - mean(N_r), where N_k are the ChIP enrichments for the known targets and N_r are the ChIP enrichments for our random targets. Our goal is to maximize the value of y in the most directed manner possible, using statistical methods coupled with intuition rather than intuition alone.
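As a concrete sketch of this metric, using my reconstruction of the enrichment formula above and invented Ct values and primer efficiencies:

```python
import math

def enrichment(E, C, U):
    """Log enrichment N_i for gene i: E = median primer efficiency,
    C = Ct with the correct antibody, U = Ct without antibody.
    A lower Ct with antibody (C < U) means more immunoprecipitated DNA."""
    return math.log((1 + E) ** (U - C))

# Hypothetical qPCR data: (efficiency, Ct with antibody, Ct without antibody).
known_targets = [(0.95, 24.1, 28.3), (0.90, 23.5, 27.9), (0.97, 25.0, 28.1)]
random_genes  = [(0.93, 27.8, 28.0), (0.96, 28.2, 28.1), (0.91, 27.5, 27.9)]

N_k = [enrichment(*g) for g in known_targets]
N_r = [enrichment(*g) for g in random_genes]

# Protocol output: enrichment of known targets relative to random genes.
y = sum(N_k) / len(N_k) - sum(N_r) / len(N_r)
print(f"y = {y:.2f}")
```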
We will initially use fractional factorial methods to screen a large number of factors of potential importance to the ChIP protocol. For tested factors that are not found to be significant, we will select the factor state that requires the shortest time. For example, if a 10 min incubation and a 2 hr incubation produce insignificant changes in y, we can save 1 hr 50 min by using the 10 min incubation. Factors found to be significant in the fractional factorial screen will be optimized using response surface methods.
Preliminary Results:
Note: these should be taken with caution, since I've not written the paper yet and haven't really sat down to analyze all of the results in detail.

We will use fractional factorial experimental designs to screen for unnecessary steps and factors that can be removed or shortened in the ChIP procedure.
Thus far, we have screened twenty-three factors in the ChIP protocol. By choosing the fastest and cheapest alternatives for factors that did not significantly alter the enrichment of known targets relative to random targets (y = mean(N_k) - mean(N_r)), we were able to reduce the cost of the protocol by three-quarters and to cut the total procedure time in half (from 5 work days to 2.5). The four most significant factors were formaldehyde concentration, shearing time, antibody concentration, and bead concentration.
Factors that have a significant influence on the enrichment of known transcription factor targets will be optimized using response surface methods. We plan to optimize all four of the most significant factors in the ChIP protocol. As an initial step, we focused on the optimization of the antibody and bead concentrations. We assume that values of these parameters taken in a local area will result in smooth changes in y that can be modeled with first- and second-order models (Figure 1). We can then use these models to efficiently direct us towards the optimal values of our bead and antibody concentrations.
Figure 1: A hypothetical response surface describing the enrichment of our ChIP procedure as a function of the antibody and bead concentrations. By sequential experimentation and model refinement, response surface methods can locally define this surface and efficiently lead to local optima of the parameters that maximize ChIP enrichment.

From the fractional factorial screening experiments above, we have already obtained four initial points on our surface for bead and antibody concentration (i.e. LA+LB, LA+HB, HA+LB, HA+HB, where L = low, H = high, A = antibody concentration, and B = bead concentration). Unfortunately, we do not yet know the surface, so we can't know where our points lie on it. However, we can fit a plane using the data for these four combinations of antibody and bead concentration (e.g. P = a_0 + a_1*x_1 + a_2*x_2, where x_1 and x_2 are the concentrations of antibody and bead respectively, and the a_i are the regression coefficients). If we assume that the local area around our points is a linear plane, we can use the a_i coefficients to estimate the direction of steepest ascent. For instance, in our hypothetical example in Figure 1, our four combinations might land us in the cyan region. A plane fit through these points can then be traversed in the direction of steepest ascent to efficiently direct our future parameter value selections towards the red peak.
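A minimal sketch of that plane fit and steepest-ascent step, assuming coded low/high concentrations of -1/+1 and invented enrichment values for the four factorial points:

```python
import numpy as np

# Four factorial points in coded units: (antibody, bead) at low/high levels,
# with hypothetical enrichment values y for each combination.
x = np.array([[-1, -1],   # LA + LB
              [-1,  1],   # LA + HB
              [ 1, -1],   # HA + LB
              [ 1,  1]])  # HA + HB
y = np.array([2.1, 2.9, 2.6, 3.8])

# Fit the plane P = a0 + a1*x1 + a2*x2 by least squares.
X = np.column_stack([np.ones(len(x)), x])
a0, a1, a2 = np.linalg.lstsq(X, y, rcond=None)[0]

# The gradient (a1, a2) points in the direction of steepest ascent; new
# antibody/bead concentrations are chosen by stepping along this direction.
step = np.array([a1, a2]) / np.hypot(a1, a2)
print("direction of steepest ascent (unit vector):", step)
```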
We fit such a plane to our bead and antibody concentration factorial data, and we chose new concentrations of these two factors along the direction of steepest ascent. These new concentrations led to a marked increase in the enrichment of our ChIP procedure (Figure 2a). It appeared that we had not yet reached saturation, so we tried an additional set of points further along the path of steepest ascent (Figure 2b). Although these new datapoints indicated that we might be close to the saturation point for these bead and antibody concentrations, the method of steepest ascent had pushed us into an expensive optimum, with almost three times the commonly used amount of beads for the ChIP procedure. We hypothesized that the amount of crosslinked DNA was saturating our beads and antibody at low concentrations, necessitating the use of large amounts of beads and antibody.
Figure 2: (A) Antibody and bead concentrations were optimized using the direction of steepest ascent determined by a linear model. (B) Further concentrations were tested to determine if we had reached a saturation point.

To test this saturation hypothesis, we performed a factorial design using bead concentration, antibody concentration, and sheared chromatin concentration as factors. By using one-fourth of the typical DNA concentration, we were able to improve our enrichment using lower amounts of beads and antibody (Figure 3). With this lower concentration of DNA, we should be able to estimate more cost-effective optima for the bead and antibody concentrations.
Figure 3: A factorial design was run using bead concentration, antibody concentration, and crosslinked chromatin concentration as factors. For visualization purposes, the values for low chromatin concentration are plotted to the left of those with high chromatin concentration (shifting them slightly along the x-axis), even though both experiments used the same concentration of beads. By using less crosslinked chromatin, we obtain larger enrichment at the standard concentrations of beads and antibody. The results suggest that at the standard values for these concentrations, the beads and antibody are saturated with chromatin.

References
[1] Lee TI, Johnstone SE, and Young RA. Chromatin immunoprecipitation and microarray-based analysis of protein location. Nat Protoc, 1(2):729-748, 2006.
[2] Faith JJ, Hayete B, Thaden JT, Mogno I, Wierzbowski J, Cottarel G, Kasif S, Collins JJ, and Gardner TS. Large-scale mapping and validation of Escherichia coli transcriptional regulation from a compendium of expression profiles. PLoS Biol, 5(1):e8, 2007.
[3] Box GEP, Hunter JS, and Hunter WG. Statistics for Experimenters. Wiley-Interscience, 2nd edition, 2005.
[4] Box GEP and Draper NR. Empirical Model-Building and Response Surfaces. John Wiley and Sons, 1987.