= Electronic Notebook =
*strain: dGLN3
*filename: DGLN3_ANOVA_DB
*timepoints: 15, 30, 60, 90, 120
*number of replicates: 4 replicates for each time point
*number of NA cells replaced: 6652
  
 
==== Part 1: Statistical Analysis ====
 
The purpose of the within-strain ANOVA test is to determine whether any genes had a gene expression change that was significantly different from zero at '''''any''''' timepoint.

# I created a new worksheet and named it "dGLN3_ANOVA".
# I copied the first three columns containing the "MasterIndex", "ID", and "Standard Name" from the "Master_Sheet" worksheet for my strain and pasted them into dGLN3_ANOVA. I copied the columns containing the data for dGLN3 and pasted them into dGLN3_ANOVA.
# Starting at the top of the first column to the right of my data, I created five column headers of the form dGLN3_AvgLogFC_(TIME), where (TIME) is 15, 30, 60, 90, and 120.
# In the cell below the dGLN3_AvgLogFC_t15 header, I typed <code>=AVERAGE(</code>
# Then I highlighted all the data in row 2 associated with dGLN3 and t15, pressed the closing paren key (shift 0), and pressed the "enter" key.
# This cell now contains the average of the log fold change data from the first gene at t=15 minutes.
# I clicked on this cell and positioned my cursor at the bottom right corner. The cursor changes to a thin black plus sign (not a chubby white one); when it does, double-click, and the formula will magically be copied to the entire column of 6188 other genes.
# I then repeated steps (4) through (7) with the t30, t60, t90, and t120 data, placing each average under the header for its time point, i.e., dGLN3_AvgLogFC_t30 for the t30 data, dGLN3_AvgLogFC_t60 for t60, dGLN3_AvgLogFC_t90 for t90, and dGLN3_AvgLogFC_t120 for t120.
# In the first empty column to the right of the dGLN3_AvgLogFC_t120 calculation, I created the column header dGLN3_ss_HO.
# In the first cell below this header, I typed <code>=SUMSQ(</code>
# I highlighted all the LogFC data in row 2 of dGLN3 (but not the AvgLogFC), then pressed the closing paren key (shift 0), and pressed the "enter" key.
# In the next empty column (AC) to the right of dGLN3_ss_HO, I created the column headers dGLN3_ss_(TIME), where (TIME) is 15, 30, 60, 90, and 120.
# '''There are 4 data points for each time point; the total number of data points is 20.'''
# In cell AC2, I typed <code>=SUMSQ(<range of cells for logFC_t15>)-COUNTA(<range of cells for logFC_t15>)*<AvgLogFC_t15>^2</code> and hit enter.
#* The <code>COUNTA</code> function counts the number of cells in the specified range that have data in them (i.e., it does not count cells with missing values).
#* The phrase <range of cells for logFC_t15> should be replaced by the data range associated with t15.
#* The phrase <AvgLogFC_t15> should be replaced by the cell in which I computed the AvgLogFC for t15, and the "^2" squares that value.
#* Upon completion of this single computation, I used the Step (7) trick to copy the formula throughout the column.
# I repeated this computation for the t30 through t120 data points. Again, I made sure to get the data for each time point, typed the right number of data points, got the average from the appropriate cell for each time point, and copied the formula to the whole column for each computation.
# In cell AI1, I created the column header dGLN3_SS_full.
# In cell AI2, I typed <code>=SUM(<range of cells containing "ss" for each timepoint>)</code> and hit enter.
# In cells AJ1 and AK1, I created the headers dGLN3_Fstat and dGLN3_p-value.
# In cell AJ2, I typed <code>=((20-5)/5)*(AC2-AI2)/AI2</code> and hit enter. Here 20 is the total number of data points and 5 is the number of timepoints. (A Python sketch of this whole computation appears after this list.)
#* I copied this to the whole column.
# In cell AK2, I typed <code>=FDIST(AJ2,5,20-5)</code>.
#* I copied this to the whole column.
# Before moving on to the next step, I performed a quick sanity check to see whether I had done all of these computations correctly.
#* I clicked on cell A1 and then on the Data tab, and selected the Filter icon (it looks like a funnel). Little drop-down arrows appeared at the top of each column; these let me filter the data according to criteria I set.
#* I clicked on the drop-down arrow on the dGLN3_p-value column and selected "Number Filters". In the window that appeared, I set a criterion to filter the data so that the p value has to be less than 0.05.
#* Excel now displays only the rows that meet that filtering criterion, and a number in the lower left-hand corner of the window gives the count of rows that meet it. We checked our results with each other to make sure that the computations were performed correctly.
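
Since every step above is a manual Excel operation, it is easy to mistype a range or cell reference. The following is a minimal Python sketch of the same within-strain ANOVA computation, useful for spot-checking a few genes; the DataFrame layout, column names, and stand-in data are assumptions, not the actual worksheet.

<pre>
# Minimal sketch of the within-strain ANOVA described above (assumed layout:
# one DataFrame with 4 replicate LogFC columns per timepoint; names hypothetical).
import numpy as np
import pandas as pd
from scipy import stats

timepoints = [15, 30, 60, 90, 120]
k = len(timepoints)                                  # 5 timepoints

rng = np.random.default_rng(0)                       # toy stand-in for the worksheet
df = pd.DataFrame({f"dGLN3_LogFC_t{t}_r{r}": rng.normal(size=10)
                   for t in timepoints for r in range(1, 5)})

n = df.notna().sum(axis=1)                           # data points per gene (20 when complete)

# ss_HO: sum of squares under H0 that every timepoint mean is zero
# (Excel's SUMSQ over all of the row's LogFC cells).
ss_h0 = (df ** 2).sum(axis=1)

# ss_full: residual sum of squares around each timepoint's own mean,
# i.e., the dGLN3_ss_(TIME) columns added together (dGLN3_SS_full).
ss_full = 0.0
for t in timepoints:
    cols = [c for c in df.columns if f"_t{t}_" in c]
    avg = df[cols].mean(axis=1)                      # dGLN3_AvgLogFC_(TIME)
    m = df[cols].notna().sum(axis=1)                 # COUNTA for this timepoint
    ss_full = ss_full + (df[cols] ** 2).sum(axis=1) - m * avg ** 2

fstat = ((n - k) / k) * (ss_h0 - ss_full) / ss_full  # the dGLN3_Fstat formula
pval = stats.f.sf(fstat, k, n - k)                   # Excel's FDIST(Fstat, 5, n-5)
print(pval[:5])
</pre>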
  
 
==== Calculate the Bonferroni p value Correction ====
  
# I then performed adjustments to the p value to correct for the [https://xkcd.com/882/ multiple testing problem]. I labeled the next two columns to the right with the same label, dGLN3_Bonferroni_p-value.
# I typed the equation <code>=<dGLN3_p-value>*6189</code>, replacing <dGLN3_p-value> with the cell containing the unadjusted p value. Upon completion of this single computation, I used the Step (7) trick to copy the formula throughout the column.
# I replaced any corrected p value that is greater than 1 with the number 1 by typing the following formula into cell AL2: <code>=IF(AK2>1,1,AK2)</code>. I used the Step (7) trick to copy the formula throughout the column.
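
As a cross-check on the steps above, the Bonferroni adjustment is just a multiply-and-cap. A short sketch, using the NSR1 and ADH1 unadjusted p values reported later on this page:

<pre>
import numpy as np

# Bonferroni: multiply each unadjusted p value by the number of tests
# (6189 genes), then cap anything above 1 at 1 -- the =IF(...>1,1,...) step.
pval = np.array([0.000506, 0.772])       # NSR1, ADH1
bonferroni = np.minimum(pval * 6189, 1.0)
print(bonferroni)                        # [1. 1.]; NSR1's 0.000506*6189 = 3.13 before the cap
</pre>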
  
 
==== Calculate the Benjamini & Hochberg p value Correction ====
  
# I inserted a new worksheet named "dGLN3_ANOVA_B-H".
# I copied and pasted the "MasterIndex", "ID", and "Standard Name" columns from the previous worksheet into the first three columns of the new worksheet.
# Using Paste special > Paste values, I copied the unadjusted p values from my ANOVA worksheet and pasted them into Column D.
# I selected all of columns A, B, C, and D and sorted by ascending values on Column D: I clicked the A-to-Z sort button on the toolbar and, in the window that appeared, sorted by column D, smallest to largest.
# I typed the header "Rank" in cell E1. This column holds the p value rank, a series of numbers in ascending order from 1 to 6189, smallest to largest. I typed "1" into cell E2 and "2" into cell E3, selected both cells, and double-clicked on the plus sign at the lower right-hand corner of the selection to fill the column with the series from 1 to 6189.
# I then calculated the Benjamini and Hochberg p value correction: I typed dGLN3_B-H_p-value in cell F1, typed the formula <code>=(D2*6189)/E2</code> into cell F2, pressed enter, and copied that equation to the entire column.
# I typed "dGLN3_B-H_p-value" into cell G1.
# I typed the formula <code>=IF(F2>1,1,F2)</code> into cell G2, pressed enter, and then copied that equation to the entire column.
# I selected columns A through G and sorted them by the MasterIndex in Column A in ascending order.
# I copied column G and used Paste special > Paste values to paste it into the next column on the right of my ANOVA sheet.
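
The same rank-and-scale procedure can be sketched in Python for spot-checking a few rows. This mirrors the worksheet's formula exactly; note that a textbook B-H implementation additionally enforces monotonicity down the ranked list, which the worksheet formula (and this sketch) omits.

<pre>
import numpy as np

def bh_correct(pval, m=None):
    """Benjamini & Hochberg correction as computed in the worksheet:
    rank the p values ascending, scale each by (total tests / rank),
    and cap at 1 -- i.e., =(D2*6189)/E2 followed by =IF(F2>1,1,F2)."""
    pval = np.asarray(pval, dtype=float)
    m = m if m is not None else pval.size      # 6189 in the notebook
    order = np.argsort(pval)                   # the ascending sort on Column D
    rank = np.empty(pval.size, dtype=int)
    rank[order] = np.arange(1, pval.size + 1)  # the "Rank" column E
    return np.minimum(pval * m / rank, 1.0)

print(bh_correct([0.000506, 0.772, 0.010, 0.0001]))
</pre>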
  
* '''''I zipped and uploaded the .xlsx file to the wiki, naming it DGLN3_ANOVA_DB: [[Media: DGLN3_ANOVA.zip | dGLN3 zip file]].'''''
  
 
==== Sanity Check: Number of genes significantly changed ====
  
Before moving on to further analysis of the data, I performed a more extensive sanity check to make sure that I had done the data analysis correctly: I found the number of genes that are significantly changed at various p value cut-offs.

* I went to my dGLN3_ANOVA worksheet.
* I selected row 1 (the row with my column headers) and selected the menu item Data > Filter > Autofilter (the funnel icon on the Data tab). Little drop-down arrows appeared at the top of each column, letting me filter the data according to criteria I set.
* I clicked on the drop-down arrow for the unadjusted p value and set a criterion to filter the data so that the p value has to be less than 0.05; I then repeated this for each cut-off below:
** '''Genes with p < 0.05 = 2135/6189 records, or 34.50% of the data.'''
** '''Genes with p < 0.01 = 1204/6189 records, or 19.45% of the data.'''
** '''Genes with p < 0.001 = 514/6189 records, or 8.31% of the data.'''
** '''Genes with p < 0.0001 = 180/6189 records, or 2.91% of the data.'''
* When I use a p value cut-off of p < 0.05, I am saying that I would have seen a gene expression change that deviates this far from zero by chance less than 5% of the time.
* I have just performed 6189 hypothesis tests. Another way to state what I am seeing with p < 0.05 is that I would expect a gene expression change for at least one of the timepoints by chance in about 5% of the tests, or about 309 times. Since more than 309 genes pass this cut-off, I know that some genes are significantly changed; however, I don't know ''which'' ones. To apply a more stringent criterion to the p values, I performed the Bonferroni and the Benjamini and Hochberg corrections on the unadjusted p values. The Bonferroni correction is very stringent; the Benjamini-Hochberg correction is less stringent. I filtered the data to see this relationship and found:
** '''Genes with p < 0.05 for the Bonferroni-corrected p value = 45/6189 records, or 0.73% of the data.'''
** '''Genes with p < 0.05 for the Benjamini and Hochberg-corrected p value = 1185/6189 records, or 19.15% of the data.'''
* In summary, the p value cut-off should not be thought of as some magical number at which data becomes "significant". Instead, it is a moveable confidence level. If we want to be very confident of our data, we use a small p value cut-off. If we are OK with being less confident about a gene expression change and want to include more genes in our analysis, we can use a larger p value cut-off.
* I compared the results to the wild type strain and uploaded the information to a PowerPoint linked here: [[Media: DGLN3_ppt_Dina.pptx | dGLN3 powerpoint]].
* I also looked up ''NSR1'' (ID: YGR159C), whose expression is known to be induced by cold shock, and my favorite gene, ''ADH1'', and found the following:
 
 
 
===== NSR1 =====
 
 
# Unadjusted p value = 0.000506
# Bonferroni-corrected p value = 1
# B-H-corrected p value = 0.008167
# Average log fold change:
#* @ 15 = 3.50622
#* @ 30 = 4.53189
#* @ 60 = 2.75921
#* @ 90 = -1.85027
#* @ 120 = -1.86741
# As shown above, we can infer that gene expression started off high but eventually decreased to a consistent lower level following cold shock. ''NSR1'' has an unadjusted p value < 0.05, and its B-H-corrected p value (0.008167) is also below 0.05; only the very stringent Bonferroni correction pushes it to 1. This means that we have some confidence that it is really changing in this experiment, but not as much as for some other genes.
  
===== ADH1 =====
 
# Unadjusted p value = 0.772
# Bonferroni-corrected p value = 1
# B-H-corrected p value = 0.8617
# Average log fold change:
#* @ 15 = -0.902324179
#* @ 30 = -0.692990646
#* @ 60 = 0.138781308
#* @ 90 = -0.045097454
#* @ 120 = 0.514075012
# As shown above, gene expression appears to start low, rise by the 60-minute timepoint, dip at 90 minutes, and rise again at the last timepoint. However, the ''ADH1'' p values are all > 0.05, so the average log fold changes seen here are likely just noise.
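
Looking individual genes up in the filtered worksheet can also be scripted. A sketch against a hypothetical export of the ANOVA sheet, seeded with just these two genes and the values reported above:

<pre>
import pandas as pd

# Hypothetical two-row export of the finished dGLN3_ANOVA worksheet.
df = pd.DataFrame({
    "Standard Name": ["NSR1", "ADH1"],
    "dGLN3_p-value": [0.000506, 0.772],
    "dGLN3_Bonferroni_p-value": [1.0, 1.0],
    "dGLN3_B-H_p-value": [0.008167, 0.8617],
    "dGLN3_AvgLogFC_15": [3.50622, -0.902324179],
    "dGLN3_AvgLogFC_120": [-1.86741, 0.514075012],
}).set_index("Standard Name")

print(df.loc["NSR1"])    # one gene's statistics, as looked up above

# The sanity-check counts would be reproduced the same way on the full table:
for cutoff in (0.05, 0.01, 0.001, 0.0001):
    print(cutoff, int((df["dGLN3_p-value"] < cutoff).sum()))
</pre>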
  
== Summary ==
Our strain was dGLN3, and we analyzed the data for timepoints 15, 30, 60, 90, and 120. We used that information to calculate the ANOVA p value, the Bonferroni-corrected p value, and the Benjamini & Hochberg-corrected p value for each gene. We learned how to use Excel to work with large datasets like this one, including shortcuts for copying a formula down a whole column, such as double-clicking the plus sign in the corner of a cell or dragging the cell down across multiple cells. We concluded by answering the questions related to our findings and by analyzing the information from the "our favorite gene" assignment. In summary, we hoped to find genes that showed significantly different gene expression from the start of the experiment (time 0) to the end, 120 minutes later. We determined this by performing an ANOVA and assessing the significance of the results, which let us see the effects of cold shock on gene expression. Specifically, we found that 1185 genes, or 19.15% of the data, had a Benjamini & Hochberg-corrected p value of less than 0.05, and 45 genes, or 0.73% of the data, had a Bonferroni-corrected p value of less than 0.05. Lastly, ''NSR1'' has small unadjusted and B-H-corrected p values, indicating that it is significantly affected by the cold shock. ''ADH1'', our favorite gene, was most likely not affected by cold shock because its p values are above 0.05, meaning that the average log fold changes we see are likely just noise.
  
== Acknowledgements ==
I worked with [[user:Zvanysse | Zack]] to complete this week's assignment. We worked together in class so that we could ask questions as we went, but completed the rest of the assignment separately over the weekend, consulting via text when we had questions. I also copied and modified the protocol from the [[Week 8]] assignment page and used the data from Dr. Dahlquist's research on the effect of cold shock on yeast.
  

'''While I worked with the people noted above, this individual journal entry was completed by me and not copied from another source.'''

 
[[User:Dbashour|Dbashour]] ([[User talk:Dbashour|talk]]) 23:29, 23 October 2017 (PDT)
 
== References ==

LMU BioDB 2017. (2017). Week 8. Retrieved October 23, 2017, from https://xmlpipedb.cs.lmu.edu/biodb/fall2017/index.php/Week_8

{{Template:dbashour}}