Proteomic Profiling

Current Prices

TMT can be expensive, especially if we do the labeling and sample prep. If you do the labeling yourself, you will save a lot of time and money! Please ask for an estimate if you want to do profiling.

For our official prices, please see http://proteomics.ucdavis.edu/prices-jan-2015

TMT labeled profiling (recommended)

We do TMT analysis quite frequently and are getting better at it all the time :) . We can now combine TMT experiments, so we can run multiple 10-plexes according to this method:

Extended Multiplexing of Tandem Mass Tags (TMT) Labeling Reveals Age and High Fat Diet Specific Proteome Changes in Mouse Epididymal Adipose Tissue

Here is a presentation of a dataset we did recently. The author of the above publication helps us with the analysis. Make sure you read the notes, as most of the detailed info is located there.

Extended_TMT_multiplexing_analysis (1).pptx

Also, with our new Fusion Lumos we routinely run MS3 TMT 10- or 11-plexes. We currently recommend 10-plexes with MS3 over 6-plexes with MS2.

Currently we like to run multiples of 8 samples (8 samples + 2 pooled references per 10-plex), so 16 samples minimum if the total number of samples is greater than 10. With 10 or fewer samples, we do not have to use a pooled reference. So, for example, if you have more than 10 samples you should do a minimum of 16, in multiples of 8 after that (16, 24, 32, etc.).
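
The plexing arithmetic above can be sketched in code. This is a minimal helper of my own for illustration, not facility software; the function name and return layout are assumptions:

```python
# Sketch of the plexing rule above (illustrative helper, not facility code):
# each TMT 10-plex holds 8 experimental samples plus 2 pooled-reference
# channels, so studies with more than 10 samples should come in multiples of 8.

def tmt_plex_layout(n_samples: int) -> dict:
    """Return how many 10-plexes a study needs under the rule above."""
    if n_samples <= 10:
        # Small studies fit in a single plex with no pooled reference.
        return {"plexes": 1, "pooled_refs_per_plex": 0}
    if n_samples % 8 != 0:
        raise ValueError(
            f"{n_samples} samples: round up to a multiple of 8 (16, 24, 32, ...)"
        )
    return {"plexes": n_samples // 8, "pooled_refs_per_plex": 2}

print(tmt_plex_layout(16))  # {'plexes': 2, 'pooled_refs_per_plex': 2}
print(tmt_plex_layout(24))  # {'plexes': 3, 'pooled_refs_per_plex': 2}
```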

Spectral Counting (okay for pulldowns)

Spectral counting is straightforward and relatively sensitive to protein expression differences, but can suffer from sketchy accuracy and limited dynamic range. Basically, it will tell you whether your protein is up-regulated or down-regulated and give you some estimate of the amount (an expression ratio of 0.1, for example). I would be surprised if it could distinguish a 30% up-regulation from a 60% up-regulation.
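
To make the "expression ratio" idea concrete, here is a toy spectral-count calculation of my own; the pseudocount is an assumption (a common trick to avoid dividing by zero), not part of any specific tool:

```python
# Toy spectral-count comparison (illustration only): an expression ratio
# from summed spectral counts, with a pseudocount to avoid division by zero.

def count_ratio(counts_a, counts_b, pseudocount=0.5):
    """Expression ratio (condition B over condition A) from spectral counts."""
    a = sum(counts_a) + pseudocount
    b = sum(counts_b) + pseudocount
    return b / a

# Replicate spectral counts for one protein in control vs. treated samples.
control = [12, 15, 10]
treated = [4, 3, 5]
print(round(count_ratio(control, treated), 2))  # 0.33 -> down-regulated
```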

Area Under the Curve (AUC) and Differential Mass Spectrometry (dMS)

Area Under the Curve is similar to dMS. Both use the areas from extracted ion chromatograms (XICs). An XIC is basically a plot of the intensity of a particular m/z (mass over charge) over time. So, say, you plot the m/z 496.2867, plus or minus some m/z window that represents the wobble (i.e., accuracy) of the mass spectrometer you are using. You then integrate that area and come up with a number (hence AUC). This number (usually it's very large) can then be compared against the AUC from the same XIC (sorry for all the acronyms!) in another sample. You can run a large number of different statistical analyses on these numbers. AUC is usually done only on peptides you identify using MS/MS, so it works backwards from dMS (below): you identify the peptides first, then extract the areas of the peaks and look for differences.
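
As a concrete sketch of the XIC/AUC idea: select every signal within a ppm tolerance of a target m/z, then integrate intensity over retention time. This is my own illustration with synthetic data, not the software we actually use; the 10 ppm tolerance is an assumption:

```python
import numpy as np

# Minimal XIC/AUC sketch (illustration only): keep points within a ppm window
# of the target m/z, then integrate intensity over retention time.

def xic_auc(mz, rt, intensity, target_mz, tol_ppm=10.0):
    """Area under the extracted ion chromatogram for target_mz +/- tol_ppm."""
    mz, rt, intensity = map(np.asarray, (mz, rt, intensity))
    keep = np.abs(mz - target_mz) <= target_mz * tol_ppm * 1e-6
    order = np.argsort(rt[keep])
    t, y = rt[keep][order], intensity[keep][order]
    # Trapezoidal integration of intensity vs. time.
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

# Triangular toy peak at m/z 496.2867: area = 0.5 * base * height = 100.
mz = [496.2867, 496.2867, 496.2868, 700.0]
rt = [10.0, 11.0, 12.0, 11.0]
inten = [0.0, 100.0, 0.0, 9999.0]  # the 700 m/z point falls outside the window
print(xic_auc(mz, rt, inten, target_mz=496.2867))  # 100.0
```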
dMS is similar, but the order of the tests is flipped. With dMS you generate AUCs for every possible signal (a peptide, hopefully) in the entire LC-MS/MS run, then compare all of them using something like a t-test and sort for differences. If any signal that differs has an MS/MS spectrum identifying it as a peptide, then you have just identified a peptide that differs between your samples. If it doesn't, well… you are generally stuck. This is pretty clever, but implementing it correctly seems to be very difficult, and a lot of the software that does this is commercial and very expensive, or does not work all that well. The problem with these approaches is that they are heavily dependent on accurate retention time (even after alignment warping) and get confused a lot, since the matrix we are dealing with is incredibly complex. dMS sounds great in theory, but it has not really caught on due to these issues. A few people can do it well…
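
A minimal sketch of the dMS idea, assuming a simple per-signal two-sample t-test with SciPy on synthetic data (real implementations also have to solve retention-time alignment, which is the hard part this toy skips):

```python
import numpy as np
from scipy import stats

# dMS in miniature (illustration only): each row is one LC-MS signal's AUC
# across runs; t-test every row between conditions, then sort for differences.

rng = np.random.default_rng(0)
n_signals = 5
group_a = rng.normal(1000, 50, size=(n_signals, 4))  # 4 control runs
group_b = rng.normal(1000, 50, size=(n_signals, 4))  # 4 treated runs
group_b[2] += 500  # spike one signal so it truly differs between conditions

pvals = np.array([stats.ttest_ind(a, b).pvalue
                  for a, b in zip(group_a, group_b)])
print(np.argsort(pvals)[0])  # the spiked signal ranks first
```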
An easier and far more practical approach, now that high-resolution instruments have become commonplace, is to combine spectral counting with either AUC analysis on the same data, or use the spectral counting data to generate an inclusion-list method (only generate MS/MS spectra on selected m/z values) or a targeted MRM method.

Usually an experiment works like this:

  1. Do a traditional bottom-up shotgun proteomics experiment.
  2. Do a spectral counting analysis (usually using Scaffold) to determine which proteins are differentially expressed.
  3. Take the proteins that are significantly differentially expressed (or interesting but not significantly different) and make a list of them.
  4. Take all the MS/MS spectra you acquired from the shotgun experiment and generate an MS/MS library using Skyline.
  5. Take the sequences of the differential or interesting proteins and put them into Skyline to generate AUC numbers for all your proteins from step 3 in all your replicates (technical or biological).
  6. Export these numbers from Skyline into R.
  7. Use R to calculate the statistics.
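
Steps 6-7 are done in R; the same statistics step can be sketched in Python. This is my illustration only, not the facility's actual pipeline: a per-protein t-test on log2 AUCs plus a hand-rolled Benjamini-Hochberg FDR adjustment:

```python
import numpy as np
from scipy import stats

# Steps 6-7 sketched in Python rather than R (illustration only).
# Rows = proteins, columns = replicate AUC values exported from Skyline.

def bh_adjust(pvals):
    """Benjamini-Hochberg FDR adjustment (standard step-up procedure)."""
    p = np.asarray(pvals, dtype=float)
    n = len(p)
    order = np.argsort(p)
    ranked = p[order] * n / np.arange(1, n + 1)
    ranked = np.minimum.accumulate(ranked[::-1])[::-1]  # enforce monotonicity
    adjusted = np.empty(n)
    adjusted[order] = np.clip(ranked, 0.0, 1.0)
    return adjusted

def differential_test(auc_a, auc_b):
    """Per-protein t-test on log2 AUCs, returning FDR-adjusted p-values."""
    log_a, log_b = np.log2(auc_a), np.log2(auc_b)
    pvals = np.array([stats.ttest_ind(a, b).pvalue
                      for a, b in zip(log_a, log_b)])
    return bh_adjust(pvals)

print(bh_adjust([0.001, 0.01, 0.03, 0.8]))  # [0.004 0.02  0.04  0.8  ]
```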

You can also use an MRM targeted proteomics approach after, say, step 3. MRM on a triple quadrupole (low resolution, low accuracy, but direct beam and specific) has its advantages and disadvantages compared to AUC analysis on traditional high-accuracy, high-resolution instruments.

Here is an example paper where a similar approach was used (there are others too, which I will add shortly):

Comparative Shotgun Proteomics Using Spectral Count Data and Quasi-Likelihood Modeling

Targeted Proteomics Assays (TPAs, I guess)

These are mixtures of hundreds, or even up to a thousand, heavy-isotope-labeled peptides that map to hundreds of proteins. The peptides are used as internal standards, either for AUC on MS data or for MRM QQQ analysis. Usually the peptides are chosen carefully and the assay is validated to some extent to make sure the peptides behave linearly over a wide range of concentrations. Choosing the peptides and validating the assay can take hundreds of hours and cost thousands of dollars. There are only two places I know of that do this:
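
As a toy example of how a heavy internal standard yields a quantity (the numbers and function here are illustrative only): with a known amount of heavy peptide spiked in, the light/heavy AUC ratio scales directly to the endogenous amount, assuming the assay is in its validated linear range.

```python
# Toy heavy-standard quantitation (illustration only): the endogenous (light)
# amount is the light/heavy area ratio times the known heavy spike amount.

def endogenous_amount(light_auc, heavy_auc, heavy_spike_fmol):
    """Endogenous peptide amount (fmol) from the light/heavy area ratio."""
    return (light_auc / heavy_auc) * heavy_spike_fmol

print(endogenous_amount(2.4e7, 1.2e7, 50.0))  # 100.0 fmol
```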

  1. Yale’s Keck Proteomics core facility
  2. MRM Proteomics

If this is something that interests you, let me know. I can see about getting one up and running here, or purchasing a TPA assay like the Spiketides set for tumor-associated antigens. All I need is interest from people on (or off) campus.

SILAC Profiling

We have done this in the past and can generate the data for it. Just let us know if it interests you. The data in this paper was generated in our facility. I personally do not like SILAC… it makes the MS1 space way too complex, and it's difficult to pick out SILAC pairs that have low S/N.

Data Independent Analysis

We can now offer data-independent analysis (DIA) using our Q Exactives and Skyline. I'll add more information soon, but DIA is a lot less expensive than MRM/SRM, as there is virtually no method development (hence the I in DIA). Here is a good paper to get you started:

Here are some other nice references to get you started with label-free proteomics (I will try to update these soon):

TMT references

More soon…

Spectral Counting

More soon…

XIC/AUC approaches

Abundant Protein Depletion 

This is a great paper to read if you're planning to look for biomarkers in plasma!

Other important papers

One of the most recent and best studies on the correlation of mRNA and protein expression

Does a really nice job of comparing the iBAQ, APEX, and emPAI label-free methods

Another really nice paper that uses spectral counting, XIC, and RNA-seq

A nice recent paper on differential expression of transcripts using RNA-Seq, compared to proteomics using SILAC

A recent paper comparing various label-free methods of quantitation (we can do all of these if you're interested)
