Msaeedi23 Week 8

From LMU BioDB 2015

Revision as of 04:52, 27 October 2015

File:Merrell Compiled Raw Data Vibrio MS20151015.xls File:Merrell Compiled Raw Data Vibrio MS20151015.txt

10/15/15

  • Went to Open Wet Ware and created an account. I then copied and pasted the text from the part 1 page onto my own page and edited it.

Before we begin...

  • The data from the Merrell et al. (2002) paper was accessed from this page at the Stanford Microarray Database.
  • The Log2 of R/G Normalized Ratio (Median) has been copied from the raw data files downloaded from the Stanford Microarray Database.
    • Patient A
      • Sample 1: 24047.xls (A1)
      • Sample 2: 24048.xls (A2)
      • Sample 3: 24213.xls (A3)
      • Sample 4: 24202.xls (A4)
    • Patient B
      • Sample 5: 24049.xls (B1)
      • Sample 6: 24050.xls (B2)
      • Sample 7: 24203.xls (B3)
      • Sample 8: 24204.xls (B4)
    • Patient C
      • Sample 9: 24053.xls (C1)
      • Sample 10: 24054.xls (C2)
      • Sample 11: 24205.xls (C3)
      • Sample 12: 24206.xls (C4)
    • Stationary Samples (We will not be using these; they are listed here for completeness but do not appear in the compiled raw data file.)
      • Sample 13: 24059.xls (Stationary-1)
      • Sample 14: 24060.xls (Stationary-2)
      • Sample 15: 24211.xls (Stationary-3)
      • Sample 16: 24212.xls (Stationary-4)
  • I downloaded the Merrell_Compiled_Raw_Data_Vibrio.xls file to my Desktop.

Normalize the log ratios for the set of slides in the experiment

I scaled and centered the data (between chip normalization) by performing the following operations:

  • Inserted a new Worksheet into my Excel file, and named it "scaled_centered".
  • Went back to the "compiled_raw_data" worksheet, Selected All and Copy. Went to my new "scaled_centered" worksheet, clicked on the upper, left-hand cell (cell A1) and Pasted.
  • Inserted two rows in between the top row of headers and the first data row.
  • In cell A2, I typed "Average" and in cell A3, typed "StdDev".
  • I went to compute the Average log ratio for each chip (each column of data). In cell B2, I typed the following equation:
=AVERAGE(B4:B5224)
and pressed "Enter". Excel computed the average value of the cells specified in the range given inside the parentheses. Instead of typing the cell designations, I clicked on the beginning cell, scrolled down to the bottom of the worksheet, and shift-clicked on the ending cell.
  • Then I computed the Standard Deviation of the log ratios on each chip (each column of data). In cell B3, I typed the following equation:
=STDEV(B4:B5224)
and pressed "Enter".
  • Excel did some work for me. I copied these two equations (cells B2 and B3) and pasted them into the empty cells in the rest of the columns. Excel automatically changed the equation to match the cell designations for those columns.
  • I had computed the average and standard deviation of the log ratios for each chip. Then I did the scaling and centering based on these values.
  • I copied the column headings for all of my data columns and then pasted them to the right of the last data column so that I had a second set of headers above blank columns of cells. I edited the names of the columns so that they read: A1_scaled_centered, A2_scaled_centered, etc.
  • In cell N4, I typed the following equation:
=(B4-B$2)/B$3
In this case, I wanted the data in cell B4 to have the average subtracted from it (cell B2) and be divided by the standard deviation (cell B3). I used the dollar sign symbols in front of the "2" and "3" to tell Excel to always reference those rows in the equation, even though I pasted it for the entire column of 5221 genes. This is important because it keeps every row's calculation referencing the same average and standard deviation cells.
  • I copied and pasted this equation into the entire column. One easy way to do this was to click on the original cell with my equation and positioned my cursor at the bottom right corner. I saw my cursor change to a thin black plus sign (not a chubby white one). When it did, I double clicked, and the formula magically was copied to the entire column of genes.
  • I copied and pasted the scaling and centering equation for each of the columns of data with the "_scaled_centered" column header. I made sure that my equation was correct for the column I was calculating.
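The scaling and centering formula above, =(B4-B$2)/B$3, can be sketched outside of Excel. This is a minimal Python version for one chip's column of log ratios; the example values are hypothetical, not taken from the real dataset:

```python
from statistics import mean, stdev

def scale_center(log_ratios):
    """Between-chip normalization: subtract the column average and divide
    by the column standard deviation, as in the Excel formula =(B4-B$2)/B$3."""
    avg = mean(log_ratios)
    sd = stdev(log_ratios)  # sample standard deviation, like Excel's STDEV
    return [(x - avg) / sd for x in log_ratios]

# Hypothetical chip column (the real data has 5221 rows per chip)
a1 = [0.8, -1.2, 0.3, 0.1, -0.5]
a1_scaled = scale_center(a1)
```

After scaling and centering, each column has an average of 0 and a standard deviation of 1, which is what makes the chips comparable to one another.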

Perform statistical analysis on the ratios

I performed this step on the scaled and centered data I produced in the previous step.

  • I inserted a new worksheet and named it "statistics".
  • I went back to the "scaled_centered" worksheet and copied the first column ("ID").
  • I pasted the data into the first column of my new "statistics" worksheet.
  • I went back to the "scaled_centered" worksheet and copied the columns that were designated "_scaled_centered".
  • I went to my new worksheet and clicked on the B1 cell. I selected "Paste Special" from the Edit menu. A window opened: I clicked on the radio button for "Values" and clicked OK. This pasted the numerical results into my new worksheet instead of the equations, which would otherwise recalculate on the fly.
  • I deleted Rows 2 and 3 where it said "Average" and "StDev" so that my data rows with gene IDs were immediately below the header row 1.
  • I went to a new column on the right of my worksheet. I typed the headers "Avg_LogFC_A", "Avg_LogFC_B", and "Avg_LogFC_C" into the top cells of the next three columns.
  • I computed the average log fold change for the replicates for each patient by typing the equation:
=AVERAGE(B2:E2)
into cell N2. I copied this equation and pasted it into the rest of the column.
  • I created the equation for patients B and C and pasted them into their respective columns.
  • I then computed the average of the averages. I typed the header "Avg_LogFC_all" into the first cell in the next empty column. I created the equation that computed the average of the three previous averages I calculated and pasted it into this entire column.
  • I inserted a new column next to the "Avg_LogFC_all" column that I computed in the previous step. I labeled the column "Tstat". This computed a T statistic that told me whether the scaled and centered average log ratio was significantly different than 0 (no change). I entered the equation:
=AVERAGE(N2:P2)/(STDEV(N2:P2)/SQRT(number of replicates))
(NOTE: in this case the number of replicates was 3. I was careful that I was using the correct number of parentheses.) I copied the equation and pasted it into all rows in that column.
  • I labeled the top cell in the next column "Pvalue". In the cell below the label, I entered the equation:
=TDIST(ABS(R2),degrees of freedom,2)

The number of degrees of freedom was the number of replicates minus one, so in my case there were 2 degrees of freedom. I copied the equation and pasted it into all rows in that column.
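The T statistic and p value steps above can be sketched in Python. This is a sketch under the document's own setup (three replicate averages per gene, so 2 degrees of freedom); the input values are hypothetical. For 2 degrees of freedom, the two-tailed Student t probability that Excel's TDIST returns has a closed form, which avoids needing a statistics library:

```python
from statistics import mean, stdev
from math import sqrt

def t_and_p(avgs, n_replicates=3):
    """T statistic as in =AVERAGE(N2:P2)/(STDEV(N2:P2)/SQRT(3)), and the
    two-tailed p value as in =TDIST(ABS(t), 2, 2).  For df = 2 the Student
    t tail probability has the closed form p = 1 - |t|/sqrt(t^2 + 2)."""
    t = mean(avgs) / (stdev(avgs) / sqrt(n_replicates))
    df = n_replicates - 1
    assert df == 2, "the closed form below is specific to 2 degrees of freedom"
    p = 1 - abs(t) / sqrt(t * t + 2)
    return t, p

# Hypothetical Avg_LogFC values for one gene across patients A, B, and C
t, p = t_and_p([1.2, 0.9, 1.4])
```

A gene whose three patient averages agree and sit well away from zero gets a large |t| and a small p value, matching the spreadsheet's behavior.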

Calculate the Bonferroni p value Correction

  • I performed adjustments to the p value to correct for the multiple testing problem. I labeled the next two columns to the right with the same label, Bonferroni_Pvalue.
  • I typed the equation =S2*5221 into the first cell below the first Bonferroni_Pvalue header. Upon completion of this single computation, I used the trick to copy the formula throughout the column.
  • I replaced any corrected p value that was greater than 1 by the number 1 by typing the following formula into the first cell below the second Bonferroni_Pvalue header: =IF(T2>1,1,T2). I used the trick to copy the formula throughout the column.
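The two Bonferroni steps above (multiply by the number of tests, then cap at 1) reduce to a single function. A minimal sketch:

```python
def bonferroni(p, n_tests=5221):
    """Bonferroni correction as in the worksheet: =S2*5221 followed by
    =IF(T2>1,1,T2), i.e. multiply by the number of tests and cap at 1."""
    return min(1.0, p * n_tests)
```

For example, an unadjusted p value of 0.01 is pushed past 1 by the multiplication (0.01 * 5221 = 52.21) and therefore capped at 1, which is why the Bonferroni correction is so stringent with 5221 tests.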

10/20/15 Protocol

  • I continued with the BIOL398-01/S10:Sample Microarray Analysis Vibrio cholerae page to finish the list of actions to perform.

Calculate the Benjamini & Hochberg p value Correction

  • I inserted a new worksheet named "B-H_Pvalue".
  • I copied and pasted the "ID" column from my previous worksheet into the first column of the new worksheet.
  • I inserted a new column on the very left and named it "MasterIndex". I created a numerical index of genes so that I could always sort them back into the same order.
    • I typed a "1" in cell A2 and a "2" in cell A3.
    • I selected both cells. I hovered my mouse over the bottom-right corner of the selection until it made a thin black + sign. I double-clicked on the + sign to fill the entire column with a series of numbers from 1 to 5221 (the number of genes on the microarray).
  • For the following, I used Paste special > Paste values. I copied my unadjusted p values from my previous worksheet and pasted them into Column C.
  • I selected all of columns A, B, and C. I sorted by ascending values on Column C: I clicked the A-to-Z sort button on the toolbar, and in the window that appeared, sorted by Column C, smallest to largest.
  • I typed the header "Rank" in cell D1. I created a series of numbers in ascending order from 1 to 5221 in this column. This was the p value rank, smallest to largest. I typed "1" into cell D2 and "2" into cell D3. I selected both cells D2 and D3. I double-clicked on the plus sign on the lower right-hand corner of my selection to fill the column with a series of numbers from 1 to 5221.
  • Then I calculated the Benjamini and Hochberg p value correction. I typed B-H_Pvalue in cell E1. I typed the following formula in cell E2: =(C2*5221)/D2 and pressed enter. I copied that equation to the entire column.
  • I typed "B-H_Pvalue" into cell F1.
  • I typed the following formula into cell F2: =IF(E2>1,1,E2) and pressed enter. I copied that equation to the entire column.
  • I selected columns A through F. I then sorted them by the MasterIndex in Column A in ascending order.
  • I copied column F and used Paste special > Paste values to paste it into the next column on the right of my "statistics" sheet.
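The sort-rank-divide-unsort procedure above can be sketched in one function. This mirrors the worksheet's formulas exactly, including =(C2*5221)/D2 and the =IF(E2>1,1,E2) cap; note that, like the spreadsheet version, it does not add the monotonicity enforcement found in some textbook statements of the Benjamini-Hochberg method:

```python
def benjamini_hochberg(pvalues):
    """Benjamini-Hochberg correction following the worksheet steps:
    sort p values ascending, compute (p * n) / rank for each, cap at 1,
    then restore the original (MasterIndex) order."""
    n = len(pvalues)
    # Pair each p value with its original position, sorted by p ascending;
    # this plays the role of the MasterIndex plus the Rank column.
    order = sorted(range(n), key=lambda i: pvalues[i])
    adjusted = [0.0] * n
    for rank, i in enumerate(order, start=1):
        adjusted[i] = min(1.0, pvalues[i] * n / rank)
    return adjusted

# Hypothetical p values for a four-gene toy example
adjusted = benjamini_hochberg([0.01, 0.04, 0.03, 0.005])
```

Dividing by the rank makes this correction less stringent than Bonferroni: only the smallest p value is multiplied by the full number of tests, while larger p values are multiplied by progressively smaller factors.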

Prepare file for GenMAPP

  • I inserted a new worksheet and named it "forGenMAPP".
  • I went back to the "statistics" worksheet and Selected All and Copy.
  • I went to my new sheet and clicked on cell A1 and selected Paste Special, clicked on the Values radio button, and clicked OK. I then formatted this worksheet for import into GenMAPP.
  • I selected Columns B through Q (all the fold changes). I selected the menu item Format > Cells. Under the number tab, I selected 2 decimal places. I clicked OK.
  • I selected all the columns containing p values. I selected the menu item Format > Cells. Under the number tab, I selected 4 decimal places. I clicked OK.
  • I deleted the left-most Bonferroni p value column, preserving the one that showed the result of my "if" statement.
  • I inserted a column to the right of the "ID" column. I typed the header "SystemCode" into the top cell of this column. I filled the entire column (each cell) with the letter "N".
  • I selected the menu item File > Save As, and chose "Text (Tab-delimited) (*.txt)" from the file type drop-down menu. Excel made me click through a couple of warnings because it didn't like me going all independent and choosing a different file type than the native .xls. That was OK. My new *.txt file was now ready for import into GenMAPP. But before I did that, I wanted to know a few things about my data as shown in the next section.
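The final file layout that the steps above produce (ID, then a SystemCode column filled with "N", then the data columns, tab-delimited) can be sketched with Python's csv module. The column names and values here are hypothetical placeholders, not the real spreadsheet contents:

```python
import csv

def write_for_genmapp(path, ids, data_rows, headers):
    """Write a tab-delimited text file in the layout described above:
    an ID column, a SystemCode column filled with "N", then data columns."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f, delimiter="\t")
        writer.writerow(["ID", "SystemCode"] + headers)
        for gene_id, row in zip(ids, data_rows):
            writer.writerow([gene_id, "N"] + list(row))

# Hypothetical one-gene example
write_for_genmapp("forGenMAPP.txt", ["VC0028"], [[1.65, 0.0474]],
                  ["Avg_LogFC_all", "Pvalue"])
```

Saving through a script sidesteps the Excel file-type warnings, but produces the same kind of tab-delimited text file that GenMAPP imports.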

Sanity Check: Number of genes significantly changed

Before I moved on to the GenMAPP/MAPPFinder analysis, I performed a sanity check to make sure that I had done the data analysis correctly. I found the number of genes that are significantly changed at various p value cut-offs and also compared my data analysis with the published results of Merrell et al. (2002).

  • I opened my spreadsheet and went to the "forGenMAPP" tab.
  • I clicked on cell A1 and selected the menu item Data > Filter > Autofilter. Little drop-down arrows appeared at the top of each column. This enabled me to filter the data according to criteria I set.
  • I clicked on the drop-down arrow on my "Pvalue" column. I selected "Custom". In the window that appeared, I set a criterion that filtered my data so that the Pvalue was less than 0.05.
    • How many genes have p value < 0.05? and what is the percentage (out of 5221)?
    • 948/5221, about 18.2% of genes have p value < 0.05.
    • What about p < 0.01? and what is the percentage (out of 5221)?
    • 235/5221, about 4.5% of genes have p value < 0.01.
    • What about p < 0.001? and what is the percentage (out of 5221)?
    • 24/5221, about 0.46% of genes have p value < 0.001.
    • What about p < 0.0001? and what is the percentage (out of 5221)?
    • 2/5221, about 0.04% of genes have p value < 0.0001.
  • When I used a p value cut-off of p < 0.05, the assumption was that a gene expression change deviating this far from zero would occur by chance less than 5% of the time.
  • I had just performed 5221 T tests for significance. Another way to state what I was seeing with p < 0.05 is that I expected to see this magnitude of a gene expression change in about 5% of my T tests, or 261 times. (Tested my understanding: http://xkcd.com/882/.) Since I had more than 261 genes that pass this cut off, I knew that some genes were significantly changed. However, I didn't know which ones. To apply a more stringent criterion to my p values, I performed the Bonferroni and Benjamini and Hochberg corrections to these unadjusted p values. The Bonferroni correction was very stringent. The Benjamini-Hochberg correction was less stringent. To see this relationship, I filtered my data to determine the following:
    • How many genes are p < 0.05 for the Bonferroni-corrected p value? and what is the percentage (out of 5221)?
    • 6/5221, about 0.12% of genes
    • How many genes are p < 0.05 for the Benjamini and Hochberg-corrected p value? and what is the percentage (out of 5221)?
    • 0/5221, 0% of genes
  • In summary, the p value cut-off was not thought of as some magical number at which data became "significant". Instead, it was a moveable confidence level. If I wanted to be very confident of my data, I used a small p value cut-off. If I was OK with being less confident about a gene expression change and wanted to include more genes in my analysis, I used a larger p value cut-off.
  • The "Avg_LogFC_all" told me the size of the gene expression change and in which direction. Positive values were increases relative to the control; negative values were decreases relative to the control.
    • I kept the (unadjusted) "Pvalue" filter at p < 0.05, filtered the "Avg_LogFC_all" column to show all genes with an average log fold change greater than zero. How many are there? (and %)
    • 325 genes, about 6.74% of genes.
    • I kept the (unadjusted) "Pvalue" filter at p < 0.05, filtered the "Avg_LogFC_all" column to show all genes with an average log fold change less than zero. How many are there? (and %)
    • 596 genes, about 11.42% of genes.
    • What about an average log fold change of > 0.25 and p < 0.05? (and %)
    • 339 genes, about 6.5% of genes.
    • Or an average log fold change of < -0.25 and p < 0.05? (and %) (These are more realistic fold change cut-offs because they represent about a 20% change, which is about the level of detection of this technology.)
    • 579 genes, 11.09% of genes.
  • For the GenMAPP analysis below, I used the fold change cut-off of greater than 0.25 or less than -0.25 and the unadjusted p value cut-off of p < 0.05, because I wanted to include several hundred genes in my analysis.
  • What criteria did Merrell et al. (2002) use to determine a significant gene expression change? How does it compare to our method?
    • Merrell et al. conducted a two-class SAM analysis, with the in vitro strain as class I and the each of the patients' samples being class II. The study selected genes with statistically significant changes, at least 2 fold, in each patient sample and these sample data were used to identify genes that significantly changed in expression for all three samples. Our method was similar in that we utilized the data from all three samples, however we used p values (less than 0.05) to identify significant changes in gene expression.
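The Autofilter counts used in the sanity checks above boil down to counting genes that pass a combined p value and fold change criterion. A minimal sketch with hypothetical toy values (not the real dataset):

```python
def count_passing(pvalues, fold_changes, p_cutoff=0.05, fc_cutoff=0.25):
    """Mimic the Autofilter sanity check: count genes with an unadjusted
    p value below the cutoff and an average log fold change beyond
    +/- fc_cutoff (the cut-offs chosen for the GenMAPP analysis)."""
    up = sum(1 for p, fc in zip(pvalues, fold_changes)
             if p < p_cutoff and fc > fc_cutoff)
    down = sum(1 for p, fc in zip(pvalues, fold_changes)
               if p < p_cutoff and fc < -fc_cutoff)
    return up, down

# Four hypothetical genes: (p value, Avg_LogFC_all)
up, down = count_passing([0.01, 0.2, 0.04, 0.03], [0.5, 0.5, -0.5, 0.1])
```

Applied to the real spreadsheet columns, this reproduces the increased/decreased gene counts reported above.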

Sanity Check: Compare individual genes with known data

  • Merrell et al. (2002) report that genes with IDs: VC0028, VC0941, VC0869, VC0051, VC0647, VC0468, VC2350, and VCA0583 were all significantly changed in their data. Look these genes up in your spreadsheet. What are their fold changes and p values? Are they significantly changed in our analysis?
  • VC0028
    • Fold Change: 1st entry = 1.65, 2nd entry = 1.27
    • P-Value: 1st entry = 0.0474, 2nd entry = 0.0692
    • Significance: 1st entry = statistically significant (p < 0.05), 2nd entry = not statistically significant (p > 0.05)
  • VC0941
    • Fold Change: 1st entry = 0.09, 2nd entry = -0.28
    • P-Value: 1st entry = 0.6759, 2nd entry = 0.1636
    • Significance: 1st and 2nd entry = not statistically significant
  • VC0869
    • Fold Change (nth entry): 1 = 1.59, 2 = 1.95, 3 = 2.20, 4 = 1.50, 5 = 2.12
    • P-Value (nth entry): 1 = 0.0463, 2 = 0.0227, 3 = 0.0020, 4 = 0.0174, 5 = 0.0200
    • Significance (nth entry): 1 = significant, 2 = significant, 3 = significant, 4 = significant, 5 = significant
  • VC0051
    • Fold Change: 1st entry = 1.92, 2nd entry = 1.89
    • P-Value: 1st entry = 0.0139, 2nd entry = 0.0160
    • Significance: 1st and 2nd entry = statistically significant
  • VC0468
    • Fold Change: -0.17
    • P-Value: 0.3350
    • Significance: not statistically significant
  • VC2350
    • Fold Change: -2.40
    • P-Value: 0.0130
    • Significance: statistically significant
  • VCA0583
    • Fold Change: 1.06
    • P-Value: 0.1011
    • Significance: not statistically significant
  • I then moved on to the part 2 page to continue the assignment, now using the programs GenMAPP and MAPPFinder. I used the text as a template.

10/22/15 GenMAPP Expression Dataset Manager Procedure

  • I launched the GenMAPP Program. I checked to make sure the correct Gene Database was loaded.
    • I looked in the lower, left-hand corner of the main GenMAPP Drafting Board window to see the name of the Gene Database that was loaded. If it was not the correct Gene Database or it said "No Gene Database", then I went to the Data > Choose Gene Database menu item and selected the Gene Database I needed to perform the analysis.
    • Remember, you and your partner are going to use different versions of the Vibrio cholerae Gene Database for this exercise.
  • I selected the Data menu from the main Drafting Board window and chose Expression Dataset Manager from the drop-down list. The Expression Dataset Manager window opened.
  • I selected New Dataset from the Expression Datasets menu. I selected the tab-delimited text file that I formatted for GenMAPP (.txt) in the procedure above from the file dialog box that appeared.
    • I needed to download my .txt file from the wiki onto my Desktop.
  • The Data Type Specification window appeared. GenMAPP was expecting that I was providing numerical data. If any of my columns had text (character) data, I checked the box next to the field (column) name.
    • The Vibrio data I had been working with did not have any text (character) data in it.
  • I allowed the Expression Dataset Manager to convert my data.
    • This took a few minutes depending on the size of the dataset and the computer’s memory and processor speed. When the process was complete, the converted dataset was active in the Expression Dataset Manager window and the file was saved in the same folder the raw data file was in, named the same except with a .gex extension; for example, MyExperiment.gex.
    • A message appeared saying that the Expression Dataset Manager could not convert one or more lines of data. Lines that generated an error during the conversion of a raw data file were not added to the Expression Dataset. Instead, an exception file was created. The exception file was given the same name as my raw data file with .EX before the extension (e.g., MyExperiment.EX.txt). The exception file contained all of my raw data, with the addition of a column named ~Error~. This column contained either error messages or, if the program found no errors, a single space character.
      • Record the number of errors. For your journal assignment, open the .EX.txt file and use the Data > Filter > Autofilter function to determine what the errors were for the rows that were not converted. Record this information in your individual journal page.
      • After the conversion, GenMAPP detected 772 errors in the raw data using the 2009 database.
      • It is likely that you will have a different number of errors than your partner who is using a different version of the Vibrio cholerae Gene Database. Which of you has more errors? Why do you think that is? Record your answers in your journal page.
      • I had many more errors using the 2009 database. This makes sense because it is an older database; the newer, updated database would be expected to produce fewer errors, as my partner saw.


Mahrad Saeedi
