Proteomic Profiling

Current Prices (3+ replicates per condition are required, 5-8 optimal)

| Service | University of California | Non-Profit | For-Profit |
| --- | --- | --- | --- |
| Typical Cost per Replicate | $179 | $252 | $310 |
| Raw Data Only (no analysis or prep) | $73 | $114 | $140 |

* These are bundled prices; for our official prices please see


A note about our prices: profiling can get expensive, as you typically need at least 3-5 biological replicates per condition. Optimally you need many more, but a power calculation can only really be done correctly after the experiment is finished (mainly because of sample preparation and biological variability, which we cannot predict accurately before we do the experiment). To save money, I suggest doing as much of the sample preparation and data analysis yourself as you can.

  • The above prices do not include the cost of extracting the proteins from tissue or cells. Unfortunately we have to charge extra for that.
  • We also have to charge a little extra if your sample requires precipitation. We usually need to precipitate samples that are in a solution with detergents or salts that cannot easily be removed.
Proteomic profiling is a type of quantitative proteomics (usually relative quantitation, more on that later) that reveals differences in protein expression across samples.
For example, if you have a set of samples (treated vs. control, or a time course of 5 min, 15 min, 20 min) and want to know which proteins are differentially expressed, this is the right service for you.
We mainly practice bottom-up, label-free methods of proteomic profiling. These include spectral counting and area under the curve (AUC).
There are some major differences between these two methods.

Spectral Counting

Spectral counting is straightforward and relatively sensitive to protein expression differences, but its accuracy and dynamic range are limited. Basically it will tell you whether your protein is upregulated or downregulated and give you a rough estimate of the amount (an expression ratio of 0.1, for example). I would be surprised if it could distinguish a 30% upregulation from a 60% upregulation.
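As a toy illustration of what a spectral-count comparison boils down to (the protein IDs and counts below are invented, and real analyses use more careful normalization and statistics than a raw ratio):

```python
# Spectral counts per protein, summed over the replicates of each condition.
# All IDs and counts here are made up for illustration.
control = {"P12345": 40, "P67890": 12, "Q11111": 3}
treated = {"P12345": 41, "P67890": 25, "Q11111": 0}

def expression_ratio(treated_count, control_count, pseudocount=0.5):
    """Crude treated/control expression ratio.

    The pseudocount keeps proteins seen in only one condition from
    dividing by zero (a common fudge in spectral counting).
    """
    return (treated_count + pseudocount) / (control_count + pseudocount)

for protein in control:
    r = expression_ratio(treated[protein], control[protein])
    print(protein, round(r, 2))
```

A ratio near 1 means no apparent change; well above or below 1 flags a candidate, but as noted above, don't trust the number to resolve fine differences.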

Area Under the Curve (AUC) and Differential Mass Spectrometry(dMS)

Area under the curve is similar to dMS. Both use the areas from extracted ion chromatograms (XICs). An XIC is basically a plot of the intensity of a particular m/z (mass over charge) over time. So, say, you plot the m/z 496.2867 (the plot on the right) plus or minus some m/z window that represents the wobble (i.e., accuracy) of the mass spectrometer you are using. You then integrate that area and come up with a number (hence AUC). This number (usually it's very large) can then be compared against the AUC from the same XIC (sorry for all the acronyms!) in another sample. You can run a large number of different statistical analyses on these numbers. AUC is usually done only on peptides you identify using MS/MS, so it works backwards from dMS (below): you identify the peptides first, then extract the areas of the peaks and look for differences.
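A minimal sketch of building an XIC and integrating it. All scans, m/z values, and intensities below are invented (a real run has thousands of scans), and real software handles peak picking, smoothing, and alignment far more carefully:

```python
# Toy LC-MS data: each scan is (retention time in min, list of (m/z, intensity) peaks).
scans = [
    (10.0, [(496.2860, 1e5), (500.1000, 2e4)]),
    (10.1, [(496.2870, 5e5), (500.1000, 2e4)]),
    (10.2, [(496.2865, 3e5)]),
]

def xic(scans, target_mz, tol=0.005):
    """Extracted ion chromatogram: summed intensity near target_mz, per scan.

    tol is the +/- m/z window reflecting the instrument's mass accuracy.
    """
    times, intensities = [], []
    for rt, peaks in scans:
        times.append(rt)
        intensities.append(sum(i for mz, i in peaks if abs(mz - target_mz) <= tol))
    return times, intensities

def auc(times, intensities):
    """Area under the XIC, by trapezoidal integration."""
    return sum((intensities[i] + intensities[i + 1]) / 2 * (times[i + 1] - times[i])
               for i in range(len(times) - 1))

t, y = xic(scans, 496.2867)
print(auc(t, y))
```

The single large number that comes out is the AUC you would then compare against the same XIC's AUC in another sample.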
dMS is similar, but the order of the steps is flipped. With dMS you generate AUCs for every possible signal (hopefully a peptide) you see in the entire LC-MS/MS run, compare all of them using something like a t-test, and sort for differences. If any of the signals that differs has an MS/MS spectrum that identifies it as a peptide, then you have just identified a peptide that differs between your samples. If it doesn't, well... you are generally stuck. This is pretty clever, but implementing it correctly seems to be very difficult, and a lot of the software that does this is commercial and very expensive, or does not work all that well. The problem with these approaches is that they are heavily dependent on accurate retention time (even after alignment warping) and get confused a lot of the time, because the matrix we are dealing with is incredibly complex. dMS sounds great in theory, but it has not really caught on due to these issues. A few people can do it well.
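The dMS idea, in miniature: score every feature by something like a t statistic and sort for differences. The feature names and AUC values below are invented, and this skips the hard parts the paragraph above warns about (retention-time alignment and the complexity of the matrix):

```python
from statistics import mean, stdev

# Each feature is an (m/z @ retention time) signal with one AUC per replicate:
# (condition A replicates, condition B replicates). All values are made up.
features = {
    "496.29@10.1min": ([1.0e6, 1.1e6, 0.9e6], [2.1e6, 2.3e6, 2.0e6]),
    "612.83@22.4min": ([5.0e5, 5.2e5, 4.8e5], [5.1e5, 4.9e5, 5.0e5]),
}

def t_stat(a, b):
    """Welch's t statistic for two small groups of AUCs."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / ((va / len(a) + vb / len(b)) ** 0.5)

# Rank every feature by |t|; the top hits are the signals you would then try
# to match to an identifying MS/MS spectrum.
ranked = sorted(features, key=lambda f: abs(t_stat(*features[f])), reverse=True)
print(ranked)
```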
An easier and far more practical approach, now that high-resolution instruments have become commonplace, is to combine spectral counting with either AUC analysis on the same data, or to use the spectral counting data to generate an inclusion-list method (only generate MS/MS spectra on select m/z's) or a targeted MRM method.

Usually an experiment works like this:

  1. Do a traditional bottom-up shotgun proteomics experiment.
  2. Do a spectral counting analysis (usually using Scaffold) to determine which proteins are differentially expressed.
  3. Take the proteins that are significantly differentially expressed (or interesting but not significantly different) and make a list of them.
  4. Take all the MS/MS spectra you acquired from the shotgun experiment and generate an MS/MS library using Skyline.
  5. Put the sequences of the differential or interesting proteins into Skyline to generate AUC numbers for all your proteins from step 3 in all your replicates (technical or biological).
  6. Export these numbers from Skyline into R.
  7. Use R to calculate the statistics.
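The post does the last two steps in R; purely as an illustration of what that step computes, here is a Python stand-in working from an invented mini-export (the column names are assumptions for the sketch, not Skyline's actual export format):

```python
import csv
import io
import math
from statistics import mean

# A tiny stand-in for a Skyline peptide-area export. Columns and values
# are made up; a real export has many more peptides and replicates.
export = """Protein,Peptide,Replicate,Condition,Area
P12345,ELVISK,1,control,1.0e6
P12345,ELVISK,2,control,1.1e6
P12345,ELVISK,1,treated,2.0e6
P12345,ELVISK,2,treated,2.2e6
"""

# Collect the AUCs per (protein, condition).
areas = {}
for row in csv.DictReader(io.StringIO(export)):
    areas.setdefault((row["Protein"], row["Condition"]), []).append(float(row["Area"]))

# Log2 fold change (treated vs. control) per protein; a real analysis
# would add a proper test and multiple-testing correction on top.
for protein in {p for p, _ in areas}:
    fc = math.log2(mean(areas[(protein, "treated")]) / mean(areas[(protein, "control")]))
    print(protein, round(fc, 2))
```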

You can also use an MRM targeted proteomics approach after, say, step 3. MRM on a triple quadrupole (low resolution, low accuracy, but direct beam and specific) has its advantages and disadvantages compared to AUC analysis on traditional high-accuracy, high-resolution instruments.

Here is an example paper where they did a similar approach (there are others too, which I will add shortly):

Comparative Shotgun Proteomics Using Spectral Count Data and Quasi-Likelihood Modeling

Targeted Proteomics Assays (TPAs, I guess)

These are mixtures of hundreds or even up to a thousand heavy-isotope-labeled peptides that map to hundreds of proteins. These peptides are used as internal standards, either for AUC analysis on MS data or for MRM analysis on a QQQ. Usually these peptides are chosen carefully and the assay is validated to some extent to make sure the peptides behave linearly over a wide range of concentrations. Choosing the peptides and validating the assay can take hundreds of hours and cost thousands of dollars. There are only two places I know of that do this:

  1. Yale’s Keck Proteomics core facility
  2. MRM Proteomics
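The linearity check mentioned above amounts to fitting a calibration curve and confirming it stays linear across the concentration range. A minimal sketch with invented spike-in amounts and areas:

```python
# Spike a heavy peptide over a range of amounts and check that the
# measured area tracks it linearly. All numbers are made up.
spiked = [1, 5, 10, 50, 100]                        # fmol spiked in
measured = [2.1e4, 9.8e4, 2.0e5, 1.01e6, 1.98e6]    # integrated areas

# Ordinary least-squares fit of area vs. amount.
n = len(spiked)
mx, my = sum(spiked) / n, sum(measured) / n
sxx = sum((x - mx) ** 2 for x in spiked)
sxy = sum((x - mx) * (y - my) for x, y in zip(spiked, measured))
slope = sxy / sxx
intercept = my - slope * mx
r2 = sxy ** 2 / (sxx * sum((y - my) ** 2 for y in measured))

# r2 near 1 across the whole range is the behavior you validate for.
print(slope, intercept, r2)
```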

If this is something that interests you, let me know. I can see about getting one up and running here, or about purchasing a TPA like the SpikeTides set for tumor-associated antigens. All I need is interest from people on (or off) campus.

iTRAQ and TMT labeled profiling

Currently we do not routinely do this type of analysis, although we can if requested.

SILAC profiling.

We have done this in the past and can generate the data for it. Just let us know if it interests you. The data in this paper was generated in our facility.

Data Independent Analysis

We can now offer data-independent analysis (DIA) using our Q Exactives and Skyline. I'll add more information soon, but DIA is a lot less expensive than MRM/SRM, as there is virtually no method development (hence the I in DIA). Here is a good paper to get you started:

Here are some other nice references to get you started with label free proteomics (I will try and update these soon)

Spectral Counting


More soon…


XIC/AUC approaches

 Labeling Methods (SILAC, iTRAQ, TMT etc)

More soon…


Abundant Protein Depletion 

This is a great paper to read if you're planning to look for biomarkers in plasma!

Other important papers

One of the most recent and best studies on the correlation of mRNA and protein expression

Does a really nice job of comparing iBAQ, APEX and emPAI label free methods

Another really nice paper that uses spectral counting, XIC, and RNA-seq

A nice recent paper on differential expression of transcripts using RNA-seq compared to proteomics using SILAC

A recent paper comparing various label-free methods of quantitation (we can do all of these if you're interested)
