Vpachec3 Week8
Contents
- 1 Statistical Analysis of Vibrio cholerae Microarray Data (Part 1)
- 1.1 Normalizing the log ratios for the set of slides in the experiment
- 1.2 Before we begin...
- 1.3 Normalize the log ratios for the set of slides in the experiment
- 1.4 Perform statistical analysis on the ratios
- 1.5 Sanity Check: Number of genes significantly changed
- 1.6 Sanity Check: Compare individual genes with known data
- 2 Misc. Commentary
- 3 Links
Statistical Analysis of Vibrio cholerae Microarray Data (Part 1)
- Downloaded the Merrell_Compiled_Raw_Data_Vibrio.xls file.
- Renamed the file to Merrell_Compiled_Raw_Data_Vibrio_BK_20151015.xls
Normalizing the log ratios for the set of slides in the experiment
- Copied data from the compiled_raw_data tab to a new tab, entitled scaled_centered
- Inserted two rows below the ID row, labelled Average and StDev.
  - Average: Calculated the averages of the data in each column using the function
    =AVERAGE()
    - Sample: for column B the function is
      =AVERAGE(B4:B5224)
  - StDev: Calculated the standard deviations of the data in each column using the function
    =STDEV()
    - Sample: for column B the function is
      =STDEV(B4:B5224)
- Created a new series of columns for data sets A1-C4, adding the labels "_scaled_centered" (e.g. A1_scaled_centered)
  - Wrote a function to normalize each value in the data set
    - For the first value in column B (data set A1), the function was
      =(B4-B$2)/B$3
  - Extended this to all data values in the new scaled_centered columns
- Created a new tab entitled statistics
  - Copied over:
    - The first column of gene ID values from the compiled_raw_data tab
    - The _scaled_centered columns from the scaled_centered tab
  - Deleted the blank Average and StDev rows
Before we begin...
- The data from the Merrell et al. (2002) paper was accessed from this page at the Stanford Microarray Database (now hosted by Princeton). --Kam D. Dahlquist 18:26, 7 October 2013 (EDT)
- The Log2 of R/G Normalized Ratio (Median) has been copied from the raw data files downloaded from the Stanford Microarray Database.
- Patient A
- Sample 1: 24047.xls (A1)
- Sample 2: 24048.xls (A2)
- Sample 3: 24213.xls (A3)
- Sample 4: 24202.xls (A4)
- Patient B
- Sample 5: 24049.xls (B1)
- Sample 6: 24050.xls (B2)
- Sample 7: 24203.xls (B3)
- Sample 8: 24204.xls (B4)
- Patient C
- Sample 9: 24053.xls (C1)
- Sample 10: 24054.xls (C2)
- Sample 11: 24205.xls (C3)
- Sample 12: 24206.xls (C4)
- Stationary Samples (We will not be using these; they are listed here for completeness, but they do not appear in your compiled raw data file.)
- Sample 13: 24059.xls (Stationary-1)
- Sample 14: 24060.xls (Stationary-2)
- Sample 15: 24211.xls (Stationary-3)
- Sample 16: 24212.xls (Stationary-4)
- Downloaded the Merrell_Compiled_Raw_Data_Vibrio.xls file to your Desktop.
- Saved a copy of the file with a different filename that includes your initials and the date. For example, I would call mine "Merrell_Compiled_Raw_Data_Vibrio_KD_20091020.xls".
Normalize the log ratios for the set of slides in the experiment
To scale and center the data (between-chip normalization), perform the following operations:
- Inserted a new worksheet into the Excel file and named it "scaled_centered".
- Went to the "compiled_raw_data" worksheet, selected all, and copied. Went to the new "scaled_centered" worksheet, clicked on the upper left-hand cell (cell A1), and pasted.
- Inserted two rows in between the top row of headers and the first data row.
- In cell A2, typed "Average" and in cell A3, typed "StdDev".
- Computed the Average log ratio for each chip (each column of data). In cell B2, typed the following equation:
=AVERAGE(B4:B5224)
- and pressed "Enter". Excel is computing the average value of the cells specified in the range given inside the parentheses. Instead of typing the cell designations, you can click on the beginning cell, scroll down to the bottom of the worksheet, and shift-click on the ending cell.
- Computed the Standard Deviation of the log ratios on each chip (each column of data). In cell B3, typed the following equation:
=STDEV(B4:B5224)
- and pressed "Enter".
- Copied these two equations (cells B2 and B3) and pasted them into the empty cells in the rest of the columns. Excel will automatically change the equation to match the cell designations for those columns.
- Copied the column headings for all of your data columns and then pasted them to the right of the last data column so that you have a second set of headers above blank columns of cells. Edited the names of the columns so that they read: A1_scaled_centered, A2_scaled_centered, etc.
- In cell N4, typed the following equation:
=(B4-B$2)/B$3
- In this case, we wanted the data in cell B4 to have the average subtracted from it (cell B2) and be divided by the standard deviation (cell B3). We used the dollar sign symbols in front of the "2" and "3" to tell Excel to always reference that row in the equation, even though we pasted it for the entire column of 5221 genes. Why is this important?
- Copied and pasted this equation into the entire column. Clicked on the original cell with the equation and positioned the cursor at the bottom right corner; the cursor changed to a thin black plus sign (not a chubby white one). Double-clicked, and the formula was copied to the entire column of genes.
- Copied and pasted the scaling and centering equation for each of the columns of data with the "_scaled_centered" column header. Be sure that your equation is correct for the column you are calculating.
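For reference, here is a minimal Python/pandas sketch of the same scaling and centering (between-chip normalization) step. It is not part of the protocol, and the file name, sheet name, and column labels (A1 through C4) are assumptions based on this page.

import pandas as pd

# Read the compiled raw log ratios; gene IDs are assumed to be in the "ID" column.
raw = pd.read_excel("Merrell_Compiled_Raw_Data_Vibrio.xls",
                    sheet_name="compiled_raw_data", index_col="ID")

scaled_centered = raw.copy()
for col in raw.columns:                     # e.g. A1, A2, ..., C4
    mean = raw[col].mean()                  # same as =AVERAGE(B4:B5224)
    sd = raw[col].std(ddof=1)               # same as =STDEV(B4:B5224), the sample SD
    scaled_centered[col + "_scaled_centered"] = (raw[col] - mean) / sd   # =(B4-B$2)/B$3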
Perform statistical analysis on the ratios
We are going to perform this step on the scaled and centered data you produced in the previous step.
- Insert a new worksheet and name it "statistics".
- Go back to the "scaled_centered" worksheet and copy the first column ("ID").
- Paste the data into the first column of your new "statistics" worksheet.
- Go back to the "scaled_centered" worksheet and copy the columns that are designated "_scaled_centered".
- Go to your new worksheet and click on the B1 cell. Select "Paste Special" from the Edit menu. A window will open: click on the radio button for "Values" and click OK. This will paste the numerical result into your new worksheet instead of the equation which must make calculations on the fly.
- Delete Rows 2 and 3 where it says "Average" and "StDev" so that your data rows with gene IDs are immediately below the header row 1.
- Go to a new column on the right of your worksheet. Type the headers "Avg_LogFC_A", "Avg_LogFC_B", and "Avg_LogFC_C" into the top cells of the next three columns.
- Compute the average log fold change for the replicates for each patient by typing the equation:
=AVERAGE(B2:E2)
- into cell N2. Copy this equation and paste it into the rest of the column.
- Create the equation for patients B and C and paste it into their respective columns.
- Now you will compute the average of the averages. Type the header "Avg_LogFC_all" into the first cell in the next empty column. Create the equation that will compute the average of the three previous averages you calculated and paste it into this entire column.
- Insert a new column next to the "Avg_LogFC_all" column that you computed in the previous step. Label the column "Tstat". This will compute a T statistic that tells us whether the scaled and centered average log ratio is significantly different than 0 (no change). Enter the equation:
=AVERAGE(N2:P2)/(STDEV(N2:P2)/SQRT(number of replicates))
- (NOTE: in this case the number of replicates is 3. Be careful that you are using the correct number of parentheses.) Copy the equation and paste it into all rows in that column.
- Label the top cell in the next column "Pvalue". In the cell below the label, enter the equation:
=TDIST(ABS(R2),degrees of freedom,2)
The number of degrees of freedom is the number of replicates minus one, so in our case there are 2 degrees of freedom. Copy the equation and paste it into all rows in that column.
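The same statistics-tab calculations (per-patient average log fold changes, the average of averages, the T statistic against 0, and the two-tailed p value) can be sketched in Python as a cross-check. This is not part of the protocol; the "_scaled_centered" column names are assumptions based on this page.

import numpy as np
import pandas as pd
from scipy import stats

replicate_cols = {"A": ["A1", "A2", "A3", "A4"],
                  "B": ["B1", "B2", "B3", "B4"],
                  "C": ["C1", "C2", "C3", "C4"]}

def per_gene_stats(scaled):
    out = pd.DataFrame(index=scaled.index)
    for patient, cols in replicate_cols.items():
        sc = [c + "_scaled_centered" for c in cols]
        out["Avg_LogFC_" + patient] = scaled[sc].mean(axis=1)        # =AVERAGE(B2:E2)
    avg = out[["Avg_LogFC_A", "Avg_LogFC_B", "Avg_LogFC_C"]]
    out["Avg_LogFC_all"] = avg.mean(axis=1)                          # average of the averages
    n = 3                                                            # number of replicates (patients)
    out["Tstat"] = avg.mean(axis=1) / (avg.std(axis=1, ddof=1) / np.sqrt(n))
    out["Pvalue"] = 2 * stats.t.sf(out["Tstat"].abs(), df=n - 1)     # =TDIST(ABS(R2),2,2)
    return out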
Calculate the Bonferroni p value Correction
- Now we will perform adjustments to the p value to correct for the multiple testing problem. Label the next two columns to the right with the same label, Bonferroni_Pvalue.
- Type the equation
=S2*5221
Upon completion of this single computation, use the trick to copy the formula throughout the column.
- Replace any corrected p value that is greater than 1 by the number 1 by typing the following formula into the first cell below the second Bonferroni_Pvalue header:
=IF(T2>1,1,T2)
Use the trick to copy the formula throughout the column.
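The Bonferroni step amounts to multiplying each unadjusted p value by the number of tests (5221 genes) and capping the result at 1. A minimal sketch, assuming the unadjusted p values are available as an array:

import numpy as np

def bonferroni(pvalues, n_tests=5221):
    adjusted = np.asarray(pvalues, dtype=float) * n_tests   # =S2*5221
    return np.minimum(adjusted, 1.0)                        # =IF(T2>1,1,T2)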
Calculate the Benjamini & Hochberg p value Correction
- Insert a new worksheet named "B-H_Pvalue".
- Copy and paste the "ID" column from your previous worksheet into the first column of the new worksheet.
- Insert a new column on the very left and name it "MasterIndex". We will create a numerical index of genes so that we can always sort them back into the same order.
- Type a "1" in cell A2 and a "2" in cell A3.
- Select both cells. Hover your mouse over the bottom-right corner of the selection until it makes a thin black + sign. Double-click on the + sign to fill the entire column with a series of numbers from 1 to 5221 (the number of genes on the microarray).
- For the following, use Paste special > Paste values. Copy your unadjusted p values from your previous worksheet and paste them into Column C.
- Select all of columns A, B, and C. Sort by ascending values on Column C: click the sort button from A to Z on the toolbar and, in the window that appears, sort by column C, smallest to largest.
- Type the header "Rank" in cell D1. We will create a series of numbers in ascending order from 1 to 5221 in this column. This is the p value rank, smallest to largest. Type "1" into cell D2 and "2" into cell D3. Select both cells D2 and D3. Double-click on the plus sign on the lower right-hand corner of your selection to fill the column with a series of numbers from 1 to 5221.
- Now you can calculate the Benjamini and Hochberg p value correction. Type B-H_Pvalue in cell E1. Type the following formula in cell E2:
=(C2*5221)/D2
and press enter. Copy that equation to the entire column.
- Type "B-H_Pvalue" into cell F1.
- Type the following formula into cell F2:
=IF(E2>1,1,E2)
and press enter. Copy that equation to the entire column.
- Select columns A through F. Now sort them by your MasterIndex in Column A in ascending order.
- Copy column F and use Paste special > Paste values to paste it into the next column on the right of your "statistics" sheet.
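The Benjamini & Hochberg step above can likewise be sketched in a few lines of Python: rank the p values from smallest to largest, multiply each by (number of genes / rank), and cap at 1. Note that this mirrors the spreadsheet formulas exactly and does not add the step-down monotonicity adjustment that some software applies.

import numpy as np

def benjamini_hochberg(pvalues, n_tests=5221):
    p = np.asarray(pvalues, dtype=float)
    order = np.argsort(p)                    # sort ascending, as in column C
    rank = np.empty(len(p), dtype=int)
    rank[order] = np.arange(1, len(p) + 1)   # the "Rank" column, 1 to 5221
    adjusted = p * n_tests / rank            # =(C2*5221)/D2
    return np.minimum(adjusted, 1.0)         # =IF(E2>1,1,E2)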
Prepare file for GenMAPP
- Insert a new worksheet and name it "forGenMAPP".
- Go back to the "statistics" worksheet and Select All and Copy.
- Go to your new sheet and click on cell A1 and select Paste Special, click on the Values radio button, and click OK. We will now format this worksheet for import into GenMAPP.
- Select Columns B through Q (all the fold changes). Select the menu item Format > Cells. Under the number tab, select 2 decimal places. Click OK.
- Select all the columns containing p values. Select the menu item Format > Cells. Under the number tab, select 4 decimal places. Click OK.
- Delete the left-most Bonferroni p value column, preserving the one that shows the result of your "if" statement.
- Insert a column to the right of the "ID" column. Type the header "SystemCode" into the top cell of this column. Fill the entire column (each cell) with the letter "N".
- Select the menu item File > Save As, and choose "Text (Tab-delimited) (*.txt)" from the file type drop-down menu. Excel will make you click through a couple of warnings because it doesn't like you going all independent and choosing a different file type than the native .xls. This is OK. Your new *.txt file is now ready for import into GenMAPP. But before we do that, we want to know a few things about our data as shown in the next section.
- Upload both the .xls and .txt files that you have just created to your journal page in the class wiki. Make sure that your file name is distinct from your other classmates so that nobody overwrites anyone else's file.
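For completeness, the forGenMAPP formatting (2 decimal places for fold changes, 4 for p values, a SystemCode column of "N" after ID, tab-delimited output) could be sketched as below. The function name, column-name matching, and output file name are assumptions, not part of the protocol.

import pandas as pd

def write_for_genmapp(stats_df, path="Merrell_forGenMAPP.txt"):
    # assumes gene IDs form the index and the extra Bonferroni column is already deleted
    out = stats_df.reset_index()                      # gene IDs become the "ID" column
    out.insert(1, "SystemCode", "N")                  # fill the SystemCode column with "N"
    fold_cols = [c for c in out.columns if "scaled_centered" in c or "LogFC" in c]
    pval_cols = [c for c in out.columns if "Pvalue" in c]
    out[fold_cols] = out[fold_cols].round(2)          # fold changes: 2 decimal places
    out[pval_cols] = out[pval_cols].round(4)          # p values: 4 decimal places
    out.to_csv(path, sep="\t", index=False)           # Text (Tab-delimited) file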
Sanity Check: Number of genes significantly changed
Before we move on to the GenMAPP/MAPPFinder analysis, we want to perform a sanity check to make sure that we performed our data analysis correctly. We are going to find out the number of genes that are significantly changed at various p value cut-offs and also compare our data analysis with the published results of Merrell et al. (2002).
- Open your spreadsheet and go to the "forGenMAPP" tab.
- Click on cell A1 and select the menu item Data > Filter > Autofilter. Little drop-down arrows should appear at the top of each column. This will enable us to filter the data according to criteria we set.
- Click on the drop-down arrow on your "Pvalue" column. Select "Custom". In the window that appears, set a criterion that will filter your data so that the Pvalue has to be less than 0.05.
- How many genes have p value < 0.05? and what is the percentage (out of 5221)?
- What about p < 0.01? and what is the percentage (out of 5221)?
- What about p < 0.001? and what is the percentage (out of 5221)?
- What about p < 0.0001? and what is the percentage (out of 5221)?
- When we use a p value cut-off of p < 0.05, what we are saying is that you would have seen a gene expression change that deviates this far from zero less than 5% of the time.
- We have just performed 5221 T tests for significance. Another way to state what we are seeing with p < 0.05 is that we would expect to see this magnitude of a gene expression change in about 5% of our T tests, or 261 times. (Test your understanding: http://xkcd.com/882/.) Since we have more than 261 genes that pass this cut off, we know that some genes are significantly changed. However, we don't know which ones. To apply a more stringent criterion to our p values, we performed the Bonferroni and Benjamini and Hochberg corrections to these unadjusted p values. The Bonferroni correction is very stringent. The Benjamini-Hochberg correction is less stringent. To see this relationship, filter your data to determine the following:
- How many genes are p < 0.05 for the Bonferroni-corrected p value? and what is the percentage (out of 5221)?
- How many genes are p < 0.05 for the Benjamini and Hochberg-corrected p value? and what is the percentage (out of 5221)?
- In summary, the p value cut-off should not be thought of as some magical number at which data becomes "significant". Instead, it is a moveable confidence level. If we want to be very confident of our data, use a small p value cut-off. If we are OK with being less confident about a gene expression change and want to include more genes in our analysis, we can use a larger p value cut-off.
- The "Avg_LogFC_all" tells us the size of the gene expression change and in which direction. Positive values are increases relative to the control; negative values are decreases relative to the control.
- Keeping the (unadjusted) "Pvalue" filter at p < 0.05, filter the "Avg_LogFC_all" column to show all genes with an average log fold change greater than zero. How many are there? (and %)
- Keeping the (unadjusted) "Pvalue" filter at p < 0.05, filter the "Avg_LogFC_all" column to show all genes with an average log fold change less than zero. How many are there? (and %)
- What about an average log fold change of > 0.25 and p < 0.05? (and %)
- Or an average log fold change of < -0.25 and p < 0.05? (and %) (These are more realistic values for the fold change cut-offs because it represents about a 20% fold change which is about the level of detection of this technology.)
- For the GenMAPP analysis below, we will use the fold change cut-off of greater than 0.25 or less than -0.25 and the unadjusted p value cut-off of p < 0.05 because we want to include several hundred genes in our analysis.
- What criteria did Merrell et al. (2002) use to determine a significant gene expression change? How does it compare to our method?
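The counts asked for above can also be tallied programmatically. A small sketch (using the column names "Pvalue" and "Avg_LogFC_all" from this page) that prints the number and percentage of genes passing each cut-off:

def sanity_check(stats_df, n_genes=5221):
    for cut in (0.05, 0.01, 0.001, 0.0001):
        n = int((stats_df["Pvalue"] < cut).sum())
        print(f"p < {cut}: {n} genes ({100 * n / n_genes:.1f}%)")
    sig = stats_df["Pvalue"] < 0.05
    up = int((sig & (stats_df["Avg_LogFC_all"] > 0.25)).sum())
    down = int((sig & (stats_df["Avg_LogFC_all"] < -0.25)).sum())
    print(f"p < 0.05 and Avg_LogFC_all > 0.25: {up} genes")
    print(f"p < 0.05 and Avg_LogFC_all < -0.25: {down} genes")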
Sanity Check: Compare individual genes with known data
- Merrell et al. (2002) report that genes with IDs: VC0028, VC0941, VC0869, VC0051, VC0647, VC0468, VC2350, and VCA0583 were all significantly changed in their data. Look these genes up in your spreadsheet. What are their fold changes and p values? Are they significantly changed in our analysis?
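A quick way to pull out just those genes from the statistics table (a convenience sketch, not part of the protocol; it assumes the gene IDs are the table's index):

genes_of_interest = ["VC0028", "VC0941", "VC0869", "VC0051",
                     "VC0647", "VC0468", "VC2350", "VCA0583"]

def lookup_genes(stats_df):
    found = stats_df.index.intersection(genes_of_interest)
    return stats_df.loc[found, ["Avg_LogFC_all", "Pvalue"]]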
Misc. Commentary
October 15, 2015
Media:Merrell Compiled Raw Data Vibrio VP 20151015.xls
October 20, 2015
Media:Merrell Compiled Raw Data Vibrio VP 20151020.txt
Media:Merrell Compiled Raw Data Vibrio VP 20151020.gex
There were 121 errors detected in the raw data in GenMAPP 2.1 when the Merrell_Compiled_Raw_Data_Vibrio_VP_20151020.EX.txt file was selected. My partner, Brandon, used the 2009 file and had 772 errors when it was run through GenMAPP 2.1.
October 22, 2015
For the class demo, I was part of the decrease group.