Imacarae Week 8


Imacarae's User Page

Assignment Shared Entries Individual Entries
Week 1 Class Journal Week 1 ----
Week 2 Class Journal Week 2 Imacarae Week 2
Week 3 Class Journal Week 3 HSF1/YGL073W Week 3
Week 4 Class Journal Week 4 Imacarae Week 4
Week 5 Class Journal Week 5 CancerSEA Week 5
Week 6 Class Journal Week 6 Imacarae Week 6
Week 7 Class Journal Week 7 Imacarae Week 7
Week 8 Class Journal Week 8 Imacarae Week 8
Week 9 Class Journal Week 9 Imacarae Week 9
Week 10 Class Journal Week 10 Imacarae Week 10
Week 11 Sulfiknights Imacarae Week 11
Week 12/13 Sulfiknights Sulfiknights DA Week 12/13
---- Sulfiknights Sulfiknights DA Week 14

Purpose

  • To conduct the "analyze" step of the data life cycle for a DNA microarray dataset.
  • To develop an intuition about what different p-value cut-offs mean.
  • To keep a detailed electronic laboratory notebook to facilitate reproducible research.

Methods and Results

Background: Week 7 procedure

  1. Opened the Excel spreadsheet for the dCIN5 data:
    • Opened the worksheet labeled "Master_Sheet_dCIN5".
    • In this worksheet, each row contains the data for one gene (one spot on the microarray).
    • The first column contains the "MasterIndex", which numbers all of the rows sequentially in the worksheet so that we can always use it to sort the genes into the order they were in when we started.
    • The second column (labeled "ID") contains the Systematic Name (gene identifier) from the Saccharomyces Genome Database.
    • The third column contains the Standard Name for each of the genes.
    • Each subsequent column contains the log2 ratio of the red/green fluorescence from each microarray hybridized in the experiment, for each strain starting with wild type and proceeding in alphabetical order by strain deletion.
    • Each of the column headings from the data begin with the experiment name ("wt" for wild type S. cerevisiae data, "dCIN5" for the Δcin5 data, etc.). "LogFC" stands for "Log2 Fold Change" which is the Log2 red/green ratio. The timepoints are designated as "t" followed by a number in minutes. Replicates are numbered as "-0", "-1", "-2", etc. after the timepoint.
    • The timepoints are t15, t30, t60 (cold shock at 13°C) and t90 and t120 (cold shock at 13°C followed by 30 or 60 minutes of recovery at 30°C).
  2. Counting replicates
    • Strain: dCIN5
    • There are 4 replicates for each timepoint in this strain.
  3. Saving File:
    • Saved original Excel file under original name: BIOL367_F19_microarray-data_dCIN5.
    • Saved as a new name: BIOL367_F19_microarray-data_dCIN5_IM
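As a cross-check of the data layout described above, here is a minimal Python sketch (not part of the original Excel protocol) that loads the master sheet with pandas and verifies the replicate count at each timepoint. The workbook and worksheet names follow steps 1 and 3, and the column headers are assumed to follow the pattern described above (e.g., dCIN5_LogFC_t15-0).

  import pandas as pd

  # Load the master sheet; file and worksheet names follow steps 1 and 3 above.
  df = pd.read_excel("BIOL367_F19_microarray-data_dCIN5_IM.xlsx",
                     sheet_name="Master_Sheet_dCIN5")

  # Count replicate columns per timepoint, assuming headers like dCIN5_LogFC_t15-0.
  for t in (15, 30, 60, 90, 120):
      cols = [c for c in df.columns if str(c).startswith(f"dCIN5_LogFC_t{t}-")]
      print(f"t{t}: {len(cols)} replicates")   # expect 4 per timepoint, 20 total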

Week 8

Statistical Analysis Part 1: ANOVA

  1. We created a new worksheet, naming it "dCIN5_ANOVA".
  2. We copied the first three columns containing the "MasterIndex", "ID", and "Standard Name" from the "Master_Sheet" worksheet for our strain and pasted them into the new worksheet.
  3. At the top of the first column to the right of the data, we created five column headers of the form dCIN5_AvgLogFC_(TIME) where (TIME) is 15, 30, 60, 90, and 120.
  4. In the cell below the dCIN5_AvgLogFC_t15 header, we typed =AVERAGE(
  5. Then we highlighted all the data in row 2 associated with t15, pressed the closing paren key (Shift+0), and pressed Enter.
    • This cell now contains the average of the log fold change data from the first gene at t=15 minutes.
  6. We clicked on this cell, positioned the cursor at its bottom right corner, and double-clicked the fill handle to copy the formula down the entire column for the other 6188 genes.
  7. We repeated steps 4-6 with the t30, t60, t90, and the t120 data.
  8. In the first empty column to the right of the dCIN5_AvgLogFC_t120 calculation, we created the column header dCIN5_ss_HO.
  9. In the first cell below this header, we typed =SUMSQ(
  10. We highlighted all the LogFC data in row 2 (but not the AvgLogFC), pressed the closing paren key (Shift+0), and pressed Enter.
  11. In the empty column to the right of dCIN5_ss_HO, we created the column headers dCIN5_ss_(TIME) as in (3).
  12. We made a note of how many data points we have in total across all time points for our strain. For this strain, it is 20 (4 replicates at each of the 5 timepoints).
  13. In the first cell below the header dCIN5_ss_t15, we typed =SUMSQ(<range of cells for logFC_t15>)-COUNTA(<range of cells for logFC_t15>)*<AvgLogFC_t15>^2 and hit enter.
    • The COUNTA function counts the number of cells in the specified range that have data in them (i.e., does not count cells with missing values).
    • The phrase <range of cells for logFC_t15> should be replaced by the data range associated with t15.
    • The phrase <AvgLogFC_t15> should be replaced by the cell number in which you computed the AvgLogFC for t15, and the "^2" squares that value.
    • Upon completion of this single computation, we used the Step (6) trick to copy the formula throughout the column.
  14. We repeated this computation for the t30 through t120 data points, taking care to reference the correct cells, columns, and rows for each time point in each computation.
  15. In the first column to the right of dCIN5_ss_t120, we created the column header dCIN5_SS_full.
  16. In the first row below this header, we typed =sum(<range of cells containing "ss" for each timepoint>) and hit enter.
  17. In the next two columns to the right, we created the headers dCIN5_Fstat and dCIN5_p-value.
  18. Recall the total number of data points from (12): call that total n. This is n=20.
  19. In the first cell of the dCIN5_Fstat column, we typed =((20-5)/5)*(<dCIN5_ss_HO>-<dCIN5_SS_full>)/<dCIN5_SS_full> and hit enter.
    • We replaced the phrase dCIN5_ss_HO with the cell designation.
    • We replaced the phrase <dCIN5_SS_full> with the cell designation.
    • We copied it to the whole column.
  20. In the first cell below the dCIN5_p-value header, we typed =FDIST(<dCIN5_Fstat>,5,n-5), replacing the phrase <dCIN5_Fstat> with the cell designation and "n" with the total number of data points from (12). We copied it to the whole column.
  21. Before we moved on to the next step, we performed a sanity check to see if we did all of these computations correctly.
    • We clicked on cell A1, clicked on the Data tab, and selected the Filter icon. Little drop-down arrows appeared at the top of each column, enabling us to filter the data according to criteria we set.
    • We clicked on the drop-down arrow on our dCIN5_p-value column and selected "Number Filters". In the window that appeared, we set a criterion so that the p value is less than 0.05.
    • Excel then displayed only the rows that meet that filtering criterion. The number that appears in the lower left-hand corner of the window gives the number of rows that meet the criterion. We checked our results with each other to make sure that the computations were performed correctly.
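For reference, the whole ANOVA computation above can be reproduced outside of Excel. Below is a hedged Python sketch of the same formulas, assuming the df loaded in the sketch from the Week 7 section; Excel's FDIST(F, 5, n-5) is the upper tail of the F distribution, which is scipy.stats.f.sf in Python.

  import numpy as np
  from scipy import stats

  # df is the master sheet loaded in the earlier sketch.
  logfc_cols = [c for c in df.columns if "_LogFC_" in str(c)]
  n = len(logfc_cols)                                  # 20 data points for dCIN5

  ss_full = 0.0                                        # becomes dCIN5_SS_full (step 16)
  for t in (15, 30, 60, 90, 120):
      cols = [c for c in logfc_cols if f"_t{t}-" in str(c)]
      avg = df[cols].mean(axis=1)                      # dCIN5_AvgLogFC_(TIME), steps 4-7
      counta = df[cols].notna().sum(axis=1)            # COUNTA over the t(TIME) range
      ss_full = ss_full + (df[cols] ** 2).sum(axis=1) - counta * avg ** 2  # step 13
      df[f"dCIN5_AvgLogFC_t{t}"] = avg

  ss_ho = (df[logfc_cols] ** 2).sum(axis=1)            # dCIN5_ss_HO (=SUMSQ, steps 8-10)
  fstat = ((n - 5) / 5) * (ss_ho - ss_full) / ss_full  # step 19
  df["dCIN5_p-value"] = stats.f.sf(fstat, 5, n - 5)    # =FDIST(Fstat,5,n-5), step 20

  # Sanity check (step 21): number of genes with unadjusted p < 0.05.
  print((df["dCIN5_p-value"] < 0.05).sum())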

Calculate the Bonferroni p value Correction

  1. We performed adjustments to the p value to correct for the multiple testing problem. We labeled the next two columns to the right with the same label, dCIN5_Bonferroni_p-value.
  2. We typed the equation =<dCIN5_p-value>*6189. Upon completion of this single computation, we used the Step (6) trick to copy the formula throughout the column.
  3. We replaced any corrected p value that is greater than 1 with the number 1 by typing the following formula into the first cell below the second dCIN5_Bonferroni_p-value header: =IF(dCIN5_Bonferroni_p-value>1,1,dCIN5_Bonferroni_p-value), where "dCIN5_Bonferroni_p-value" refers to the cell in which the first Bonferroni p value computation was made. We used the Step (6) trick again to copy the formula throughout the column.
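For reference, a minimal Python version of this correction (again assuming the df from the ANOVA sketch above); multiplying each p value by the 6189 tests and capping at 1 collapses the two spreadsheet columns into one step.

  import numpy as np

  # =<dCIN5_p-value>*6189, then =IF(x>1,1,x), in one step.
  df["dCIN5_Bonferroni_p-value"] = np.minimum(df["dCIN5_p-value"] * 6189, 1.0)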

Calculate the Benjamini & Hochberg p value Correction

  1. We created a new worksheet named "dCIN5_ANOVA_B-H".
  2. We copied and pasted the "MasterIndex", "ID", and "Standard Name" columns from our previous worksheet into the first three columns of the new worksheet.
  3. Using Paste special > Paste values, we copied our unadjusted p values from our ANOVA worksheet and pasted them into Column D.
  4. We selected all of columns A, B, C, and D and sorted them in ascending order on Column D: we clicked the A-to-Z sort button on the toolbar and, in the window that appeared, sorted by Column D, smallest to largest.
  5. We typed the header "Rank" in cell E1. We then typed "1" into cell E2 and "2" into cell E3, selected both cells E2 and E3, and double-clicked on the plus sign at the lower right-hand corner of our selection to fill the column with a series of numbers in ascending order from 1 to 6189. This is the p value rank, smallest to largest.
  6. We calculated the Benjamini and Hochberg p value correction. We typed dCIN5_B-H_p-value in cell F1. We typed =(D2*6189)/E2 in cell F2 and pressed enter. We copied that equation to the entire column.
  7. We typed "dCIN5_B-H_p-value" into cell G1.
  8. We typed =IF(F2>1,1,F2) into G2 and pressed enter. We copied that equation to the entire column.
  9. We selected columns A through G and sorted them by our MasterIndex in Column A in ascending order.
  10. We copied column G and used Paste special > Paste values to paste it into the next column on the right of our ANOVA sheet.
  • We zipped and uploaded the .xlsx file onto the wiki.
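As a hedged Python counterpart to the spreadsheet steps above (again assuming the df from the earlier sketches): ranking the p values from smallest to largest and computing p*6189/rank, capped at 1, reproduces the worksheet's calculation directly in the original MasterIndex order, without the sort/unsort round trip. Note that this mirrors the spreadsheet formula as written; it does not add the monotonicity (step-up) adjustment some B-H implementations apply.

  import numpy as np

  p = df["dCIN5_p-value"].to_numpy()
  rank = np.empty(len(p), dtype=int)
  rank[np.argsort(p)] = np.arange(1, len(p) + 1)                 # the "Rank" column (step 5)
  df["dCIN5_B-H_p-value"] = np.minimum(p * 6189 / rank, 1.0)     # steps 6-8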

Sanity Check: Number of genes significantly changed

Before we move on to further analysis of the data, we wanted to perform a more extensive sanity check to make sure that we performed our data analysis correctly. We found out the number of genes that are significantly changed at various p value cut-offs.

  • We went to our dCIN5_ANOVA worksheet.
  • We selected row 1 and selected the menu item Data > Filter > Autofilter. Little drop-down arrows appeared at the top of each column, enabling us to filter the data according to criteria we set.
  • We clicked on the drop-down arrow for the unadjusted p value and set a criterion to filter the data so that the p value was less than 0.05.
    • How many genes have p < 0.05, and what is the percentage (out of 6189)?
    • How many genes have p < 0.01, and what is the percentage (out of 6189)?
    • How many genes have p < 0.001, and what is the percentage (out of 6189)?
    • How many genes have p < 0.0001, and what is the percentage (out of 6189)?
    • We created a new worksheet in our workbook to record the answers to these questions. Then, we wrote a formula in Excel to automatically calculate the percentage. (A scripted version of these counts is sketched at the end of this section.)
  • When we use a p value cut-off of p < 0.05, what we are saying is that we would have seen a gene expression change that deviates this far from zero by chance less than 5% of the time.
  • We have just performed 6189 hypothesis tests. Another way to state what we are seeing with p < 0.05 is that we would expect to see a gene expression change for at least one of the timepoints by chance in about 5% of our tests, or about 309 times. Since we have more than 309 genes that pass this cut-off, we know that some genes are significantly changed. However, we don't know which ones. To apply a more stringent criterion to our p values, we performed the Bonferroni and Benjamini and Hochberg corrections to these unadjusted p values. The Bonferroni correction is very stringent; the Benjamini-Hochberg correction is less stringent. To see this relationship, we filtered our data to determine the following:
    • How many genes have p < 0.05 for the Bonferroni-corrected p value, and what is the percentage (out of 6189)?
    • How many genes have p < 0.05 for the Benjamini and Hochberg-corrected p value, and what is the percentage (out of 6189)?
  • In summary, the p value cut-off should not be thought of as some magical number at which data becomes "significant". Instead, it is a moveable confidence level. If we want to be very confident of our data, we use a small p value cut-off. If we are OK with being less confident about a gene expression change and want to include more genes in our analysis, we can use a larger p value cut-off.
  • We will compare the numbers we get between the wild type strain and the other strains studied, organized as a table. We used the sample PowerPoint slide to see how the table should be formatted and uploaded our slide to the wiki.
    • Since the wild type data is being analyzed by one of the groups in the class, it will be sufficient for this week to supply just the data for your strain. We will do the comparison with wild type at a later date.
  • Comparing results with known data: the expression of the gene NSR1 (ID: YGR159C) is known to be induced by cold shock. We found NSR1 in our dataset. What are its unadjusted, Bonferroni-corrected, and B-H-corrected p values? What is its average Log fold change at each of the timepoints in the experiment? Note that the average Log fold change is what we called "(STRAIN)_AvgLogFC_(TIME)" in step 3 of the ANOVA analysis. Does NSR1 change expression due to cold shock in this experiment?
    • Unadjusted: 6.376E-08
    • Bonferroni-corrected: 0.0003946
    • B-H-corrected: 2.192E-05
    • Average Log fold change at each timepoint:
      • t=15: 4.070
      • t=30: 3.611
      • t=60: 4.298
      • t=90: -2.901
      • t=120: -0.9315
  • NSR1 did change expression due to cold shock in this experiment: it was strongly induced during the cold shock timepoints (t15-t60) and repressed during the recovery timepoints (t90 and t120).
  • For fun, we found "our favorite gene" (from the Week 3 assignment) in the dataset. What are its unadjusted, Bonferroni-corrected, and B-H-corrected p values? What is its average Log fold change at each of the timepoints in the experiment? Does our favorite gene change expression due to cold shock in this experiment?
    • My Favorite Gene: YGL073W (HSF1)
      • Unadjusted: 0.004551
      • Bonferroni-corrected: 28.16 (capped to 1 by the IF formula in the second Bonferroni column)
      • B-H-corrected: 0.025720554
      • Average Log fold change at each timepoint:
        • t=15: -1.230
        • t=30: -1.272
        • t=60: -1.500
        • t=90: -0.0757
        • t=120: 0.4227
  • YGL073W did not drastically change expression during cold shock exposure.
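The sketch promised earlier in this section: a minimal Python version of these sanity-check counts, assuming the df columns computed in the earlier sketches.

  # Counts and percentages at each unadjusted p value cut-off.
  for cut in (0.05, 0.01, 0.001, 0.0001):
      k = (df["dCIN5_p-value"] < cut).sum()
      print(f"p < {cut}: {k} genes ({100 * k / 6189:.2f}%)")

  # Expected false positives by chance at p < 0.05, and the corrected counts.
  print("expected by chance:", round(0.05 * 6189))                        # ~309
  print("Bonferroni p < 0.05:", (df["dCIN5_Bonferroni_p-value"] < 0.05).sum())
  print("B-H p < 0.05:", (df["dCIN5_B-H_p-value"] < 0.05).sum())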

Clustering and GO Term Enrichment with STEM (part 2)

  1. Preparing the microarray data file for loading into STEM.
    • We inserted a new worksheet into the Excel workbook and named it "dCIN5_stem".
    • We selected all of the data from the "dCIN5_ANOVA" worksheet and used Paste special > Paste values to paste it into our "dCIN5_stem" worksheet.
      • The leftmost column had the column header "MasterIndex"; we renamed this column to "SPOT". Column B was named "ID"; we renamed this column to "Gene Symbol". Then we deleted the column named "Standard_Name".
      • We filtered the data on the B-H corrected p value to be > 0.05 (that's greater than in this case).
        • Once the data had been filtered, we selected all of the rows (except for the header row) and deleted them by right-clicking and choosing "Delete Row" from the context menu. We then removed the filter. This ensured that we would cluster only the genes with a "significant" change in expression and not the noise.
      • We deleted all of the data columns EXCEPT for the Average Log Fold change columns for each timepoint (for example, wt_AvgLogFC_t15, etc.).
      • We renamed the data columns with just the time and units (for example, 15m, 30m, etc.).
      • Then, we saved our work. We used Save As to save this spreadsheet as Text (Tab-delimited) (*.txt), clicked OK through the warnings, and closed the file.
        • Note: we turned on the file extensions if necessary.
  2. Next, we downloaded and extracted the STEM software.
    • We clicked on the download link and downloaded the stem.zip file to the Desktop.
    • We unzipped the file.
    • This created a folder called stem.
    • Inside the folder, we double-clicked on the stem.jar to launch the STEM program.
  3. Running STEM
    1. In section 1 (Expression Data Info) of the main STEM interface window, we clicked on the Browse... button to navigate to and select our file.
      • We clicked on the radio button No normalization/add 0.
      • We checked the box next to Spot IDs included in the data file.
        dCIN5 Stem Program View
    2. In section 2 (Gene Info) of the main STEM interface window, we left the default selection for the three drop-down menus (Gene Annotation Source, Cross Reference Source, and Gene Location Source) as "User provided".
    3. Clicked the "Browse..." button to the right of the "Gene Annotation File" item. Browse to our "stem" folder and select the file "gene_association.sgd.gz" and clicked Open.
    4. In section 3 (Options) of the main STEM interface window, we made sure that the Clustering Method said "STEM Clustering Method" and did not change the defaults for Maximum Number of Model Profiles or Maximum Unit Change in Model Profiles between Time Points.
    5. In section 4 (Execute) we clicked on the yellow Execute button to run STEM.
      • If an error occurs, there are some known reasons why STEM might not work; #DIV/0! errors in the input file cause problems. We re-opened our file, opened the Find/Replace dialog, searched for #DIV/0! with nothing in the replace field, and clicked "Replace all" to remove the #DIV/0! errors. Then we saved our file and tried STEM again.
      • This is where we stopped for the Week 8 assignment.
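The file preparation in step 1 can also be scripted. Below is a hedged sketch, assuming the df from the earlier sketches; it keeps only the genes with B-H corrected p <= 0.05, the SPOT and Gene Symbol identifier columns, and the five AvgLogFC columns renamed to the time-and-units form described above, written out tab-delimited for STEM.

  avg_cols = [f"dCIN5_AvgLogFC_t{t}" for t in (15, 30, 60, 90, 120)]

  # Keep significant genes only (the filter-and-delete in step 1).
  stem = df.loc[df["dCIN5_B-H_p-value"] <= 0.05,
                ["MasterIndex", "ID"] + avg_cols].copy()

  # Rename to SPOT / Gene Symbol / 15m ... 120m and save as tab-delimited text.
  stem.columns = ["SPOT", "Gene Symbol", "15m", "30m", "60m", "90m", "120m"]
  stem.to_csv("dCIN5_stem.txt", sep="\t", index=False)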

Data and Files

Zipped Excel File

Text File

dCIN5 P-value Slide

Conclusion

The purpose of this experiment was to analyze the microarray data of a specific yeast strain. In the strain we analyzed, the CIN5 gene was deleted from the genome, and the deletion strain was subjected to cold shock. The expression of the genes was monitored over time to show variation. Analysis of the data included conducting ANOVA tests and comparing the results at different significance standards. It was determined from the multiple p-value tests that were run that our group strain, dCIN5, did not change expression over time in response to cold shock.

Acknowledgments

  • To Dr. Dahlquist for walking us through analyzing the data with Excel.
    • Procedural steps were copied from the [Week 8] assignment page and modified to fit the data.
  • To my group members, DeLisa, Mihir, and Emma. We met during class time to make sure that we had the right values and to help each other with specific steps of the procedure.
  • Except for what is noted above, this individual journal entry was completed by me and not copied from another source.

Imacarae (talk) 18:42, 23 October 2019 (PDT)

References