Cdomin12 Week 8

From LMU BioDB 2019

Revision as of 20:37, 22 October 2019


Purpose

Methods/Results

Statistical Analysis Part 1: ANOVA

  1. Created a new worksheet, naming it "wt_ANOVA"
  2. Copied the first three columns containing the "MasterIndex", "ID", and "Standard Name" from the "Master_Sheet" worksheet and pasted them into the new worksheet. Copied the columns containing the data for the wild-type strain and pasted them into the new worksheet.
  3. At the top of the first column to the right of the data, created five column headers of the form wt_AvgLogFC_(TIME), where (TIME) is 15, 30, etc.
  4. In the cell below the wt_AvgLogFC_t15 header, typed =AVERAGE(
  5. Then highlighted all the data in row 2 associated with t15, pressed the closing paren key (shift 0), and pressed the "enter" key.
  6. This cell now contains the average of the log fold change data from the first gene at t=15 minutes.
  7. Clicked on this cell and positioned the cursor at the bottom right corner. The cursor changes to a thin black plus sign (not a chubby white one). When it did, double-clicked, and the formula was copied to the entire column of 6188 other genes.
  8. Repeated steps (4) through (7) with the t30, t60, t90, and the t120 data.
  9. Now in the first empty column to the right of the wt_AvgLogFC_t120 calculation, created the column header wt_ss_HO.
  10. In the first cell below this header, typed =SUMSQ(
  11. Highlighted all the LogFC data in row 2 (but not the AvgLogFC), pressed the closing paren key (shift 0), and pressed the "enter" key.
  12. In the next empty column to the right of wt_ss_HO, created the column headers wt_ss_(TIME) as in (3).
  13. Made a note of how many data points you have at each time point for your strain. For most of the strains, it will be 4, but for dHAP4 t90 or t120, it will be "3", and for the wild type it will be "4" or "5". Counted carefully. Also, made a note of the total number of data points. Again, for most strains, this will be 20, but for example, dHAP4, this number will be 18, and for wt it should be 23 (double-check).
  14. In the first cell below the header wt_ss_t15, type =SUMSQ(D2:G2)-COUNTA(D2:G2)*AA2^2 and hit enter.
    • The COUNTA function counts the number of cells in the specified range that have data in them (i.e., does not count cells with missing values).
    • The phrase <range of cells for logFC_t15> was replaced by the data range associated with t15.
    • The phrase <AvgLogFC_t15> was replaced by the cell in which the AvgLogFC for t15 was computed, and the "^2" squares that value.
    • Upon completion of this single computation, used the Step (7) trick to copy the formula throughout the column.
  15. Repeated this computation for the t30 through t120 data points. Again, be sure to get the data for each time point, type the right number of data points, and get the average from the appropriate cell for each time point, and copy the formula to the whole column for each computation.
  16. In the first column to the right of wt_ss_t120, created the column header wt_SS_full.
  17. In the first row below this header, typed =SUM(AG2:AK2) (the sum of the five wt_ss_(TIME) cells, AG through AK) and hit enter.
  18. In the next two columns to the right, created the headers wt_Fstat and wt_p-value.
  19. Recall the number of data points from (13): call that total n.
  20. In the first cell of the wt_Fstat column, typed =((23-5)/5)*(((AF2)-(AL2))/(AL2))

and hit enter.

    • Don't actually type the n but instead use the number from (13). Also note that "5" is the number of timepoints.
    • Replaced the phrase wt_ss_HO with the cell designation.
    • Replaced the phrase <wt_SS_full> with the cell designation.
    • Copied to the whole column.
  21. In the first cell below the wt_p-value header, typed =FDIST(AM2,5,23-5), replacing the phrase <(STRAIN)_Fstat> with the cell designation (AM2) and the "n" as in (13) with the total number of data points. Copied to the whole column.
    • Clicked on cell A1 and clicked on the Data tab. Selected the Filter icon (looks like a funnel). Little drop-down arrows appeared at the top of each column. This enabled us to filter the data according to criteria we set.
    • Clicked on the drop-down arrow on the wt_p-value column. Selected "Number Filters". In the window that appeared, set a criterion that filtered the data so that the p value has to be less than 0.05.
    • Excel now only displays the rows that meet the filtering criterion. The number of rows meeting the criterion appeared in the lower left-hand corner of the window.
    • Undid any filters applied before making any additional calculations.

Results: 2,528/6189 records were found to have p < 0.05
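The spreadsheet arithmetic in steps (9) through (21) can be sketched in code. The function below mirrors the worksheet formulas (SUMSQ for the total sum of squares, the per-timepoint sums of squares, then the F statistic); the function name and the toy data in the usage note are illustrative, not from the assignment, and the p value step is left to a comment to keep the sketch dependency-free.

```python
def anova_fstat(replicates_by_time):
    """Mirror the spreadsheet ANOVA on one gene's log fold change data.

    replicates_by_time: one list of replicate logFC values per timepoint,
    e.g. five lists for t15, t30, t60, t90, t120.
    """
    all_vals = [v for reps in replicates_by_time for v in reps]
    n = len(all_vals)            # total number of data points (23 for wt)
    k = len(replicates_by_time)  # number of timepoints (5)
    # wt_ss_HO: =SUMSQ(all logFC values in the row)
    ss_ho = sum(v * v for v in all_vals)
    # wt_SS_full: sum over timepoints of
    #   =SUMSQ(range) - COUNTA(range) * AvgLogFC^2
    ss_full = 0.0
    for reps in replicates_by_time:
        avg = sum(reps) / len(reps)
        ss_full += sum(v * v for v in reps) - len(reps) * avg ** 2
    # wt_Fstat: =((n - k)/k) * ((ss_HO - SS_full) / SS_full)
    return ((n - k) / k) * (ss_ho - ss_full) / ss_full
```

In Excel the p value then comes from =FDIST(Fstat, 5, 23-5); in Python the same right-tail probability could be obtained with scipy.stats.f.sf(fstat, k, n - k).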

Calculate the Bonferroni p value Correction

Note: Be sure to undo any filters that you have applied before continuing with the next steps.

  1. Now we will perform adjustments to the p value to correct for the multiple testing problem. Labeled the next two columns to the right with the same label, wt_Bonferroni_p-value.
  2. Typed the equation =<wt_p-value>*6189. Upon completion of this single computation, used the Step (7) trick to copy the formula throughout the column.
  3. Replaced any corrected p value that is greater than 1 by the number 1 by typing the following formula into the first cell below the second wt_Bonferroni_p-value header: =IF(AO2>1,1,AO2), where AO2 refers to the cell in which the first Bonferroni p value computation was made. Used the Step (7) trick to copy the formula throughout the column.
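The two steps above reduce to a one-liner: multiply each unadjusted p value by the number of tests and cap the result at 1. A minimal sketch (the function name is mine):

```python
def bonferroni(p_values, m=6189):
    # =IF(p*6189 > 1, 1, p*6189): scale each unadjusted p value by
    # the number of tests (6189 genes here) and cap at 1
    return [min(p * m, 1.0) for p in p_values]
```

Note that 0.005415 × 6189 ≈ 33.5, so under this capped rule the favorite-gene value reported later in this page would display as 1.0 rather than 33.5.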

Calculate the Benjamini & Hochberg p value Correction

  1. Inserted a new worksheet named "wt_ANOVA_B-H".
  2. Copied and pasted the "MasterIndex", "ID", and "Standard Name" columns from the previous worksheet into the first three columns of the new worksheet.
  3. For the following, used Paste special > Paste values. Copied the unadjusted p values from the ANOVA worksheet and pasted them into Column D.
  4. Selected all of columns A, B, C, and D. Sorted by ascending values on Column D: clicked the Sort A to Z button on the toolbar; in the window that appeared, sorted by Column D, smallest to largest.
  5. Typed the header "Rank" in cell E1. Created a series of numbers in ascending order from 1 to 6189 in this column. This is the p value rank, smallest to largest. Typed "1" into cell E2 and "2" into cell E3. Selected both cells E2 and E3. Double-clicked on the plus sign on the lower right-hand corner of your selection to fill the column with a series of numbers from 1 to 6189.
  6. Calculated the Benjamini and Hochberg p value correction. Typed wt_B-H_p-value in cell F1. Typed the following formula in cell F2: =(D2*6189)/E2 and pressed enter. Copied that equation to the entire column.
  7. Typed "wt_B-H_p-value" into cell G1.
  8. Typed the following formula into cell G2: =IF(F2>1,1,F2) and pressed enter. Copied that equation to the entire column.
  9. Selected columns A through G. Sorted them by MasterIndex in Column A in ascending order.
  10. Copied column G and used Paste special > Paste values and pasted it into the next column on the right of your ANOVA sheet.
  • Zipped and uploaded the .xlsx file that was just created to the wiki.
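The rank-based recipe above can be sketched as follows. This mirrors the worksheet exactly: sort the p values ascending, rank from 1, compute (p × 6189)/rank, and cap at 1. One caveat, flagged here as a design note rather than a correction: standard implementations (e.g. R's p.adjust with method "BH") additionally enforce monotonicity with a cumulative minimum over the ranked values, a step the spreadsheet recipe omits. The function name is mine.

```python
def benjamini_hochberg_spreadsheet(p_values, m=None):
    """B-H correction as computed in the worksheet: (p * m) / rank, capped at 1."""
    if m is None:
        m = len(p_values)  # 6189 in the assignment
    # rank the p values ascending; rank 1 = smallest (the "Rank" column)
    order = sorted(range(len(p_values)), key=lambda i: p_values[i])
    adjusted = [0.0] * len(p_values)
    for rank, i in enumerate(order, start=1):
        # =(D2*6189)/E2, then =IF(F2>1,1,F2)
        adjusted[i] = min(p_values[i] * m / rank, 1.0)
    return adjusted
```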

Sanity Check: Number of genes significantly changed

  • Went to wt_ANOVA worksheet.
  • Selected row 1 (the row with your column headers) and selected the menu item Data > Filter > Autofilter (The funnel icon on the Data tab).
  • Clicked on the drop-down arrow for the unadjusted p value. Set a criterion that will filter your data so that the p value has to be less than 0.05.
    • How many genes have p < 0.05, and what is the percentage (out of 6189)?
    • How many genes have p < 0.01, and what is the percentage (out of 6189)?
    • How many genes have p < 0.001, and what is the percentage (out of 6189)?
    • How many genes have p < 0.0001, and what is the percentage (out of 6189)?
  • We have just performed 6189 hypothesis tests. Another way to state what we are seeing with p < 0.05 is that we would expect to see a gene expression change for at least one of the timepoints by chance in about 5% of our tests, or 309 times. Since we have more than 309 genes that pass this cutoff, we know that some genes are significantly changed. However, we don't know which ones. To apply a more stringent criterion to our p values, we performed the Bonferroni and Benjamini and Hochberg corrections to these unadjusted p values. The Bonferroni correction is very stringent. The Benjamini-Hochberg correction is less stringent. To see this relationship, filter your data to determine the following:
    • How many genes have p < 0.05 for the Bonferroni-corrected p value, and what is the percentage (out of 6189)?
    • How many genes have p < 0.05 for the Benjamini and Hochberg-corrected p value, and what is the percentage (out of 6189)?
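The multiple-testing arithmetic above (0.05 × 6189 ≈ 309 tests expected to pass by chance) and the autofilter counts can be sketched in code; both function names are mine, purely illustrative.

```python
def expected_by_chance(alpha, n_tests=6189):
    # under the null hypothesis, about alpha * n_tests of the tests
    # fall below alpha purely by chance
    return alpha * n_tests

def count_below(p_values, alpha):
    # mimic the Excel autofilter "less than alpha":
    # return the count and the percentage of the whole list
    hits = sum(1 for p in p_values if p < alpha)
    return hits, 100.0 * hits / len(p_values)
```

For example, round(expected_by_chance(0.05)) gives the 309 quoted above.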

Answers to the above questions are located in the p value slide (Media:P-value slideCD.pptx).

  • Compared results with known data: the expression of the gene NSR1 (ID: YGR159C) is known to be induced by cold shock. Found NSR1 in the dataset. What are its unadjusted, Bonferroni-corrected, and B-H-corrected p values? What is its average Log fold change at each of the timepoints in the experiment?
Does NSR1 change expression due to cold shock in this experiment? 
  • Found "your favorite gene" (from the Week 3 assignment) in the dataset. What are its unadjusted, Bonferroni-corrected, and B-H-corrected p values? What is its average Log fold change at each of the timepoints in the experiment?

Does your favorite gene change expression due to cold shock in this experiment?

NSR1

unadjusted: 2.86939E-10

Bonferroni-corrected: 1.77586E-06

B-H-corrected: 8.87932E-07

average log fold change 15m: 3.279225

average log fold change 30m: 3.621

average log fold change 60m: 3.526525

average log fold change 90m: -2.04985

average log fold change 120m: -0.60622

Favorite Gene: YPL153C

unadjusted: 0.005415

Bonferroni-corrected: 33.50424111

B-H-corrected: 0.023931601

average log fold change 15m: -0.57335

average log fold change 30m: -0.78184

average log fold change 60m: -0.7237

average log fold change 90m: 0.7496

average log fold change 120m: -0.25624

Data and Files

File:BIOL367 F19 microarray-data wt (1)CDupdate1.zip

Interim File

Revised Upload

text file

excel file

Conclusion

Acknowledgments

1. I worked with User:Knguye66, User:Jcowan4, and User:Mavila9 for this assignment.

2."Except for what is noted above, this individual journal entry was completed by me and not copied from another source."

Cdomin12 (talk) 21:07, 22 October 2019 (PDT)

References
