Kmill104 Week 9

Purpose

The purpose of this week's journal assignment is to analyze DNA microarray data using several statistical calculations in Excel. It is also to learn the meaning and importance of p values and the ways that we can adjust p values through the use of correction equations. It is also to keep an organized and detailed electronic notebook that makes our research replicable and reproducible.

Methods/Results

Experimental Design and Getting Ready

The data used in this exercise is publicly available at the NCBI GEO database in record GSE83656 (https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE83656).

  • I first began by downloading the Excel file for my group's strain.
  • I then changed the filename so that it contained my initials to distinguish it from other students' work.
  • I then looked at the worksheet labeled "Master_Sheet_wt" and recorded the following:
    • The strain that Dean and I will analyze is the wild type strain. The name of the original file is BIOL367_S24_microarray-data_wt.xlsx. The name of the file saved with my initials, which I will be working on, is BIOL367_S24_microarray-data_wt_KM.xlsx. For the total wt strain data, there are 23 replicate data points: 4 at time point 15, 5 at time point 30, 4 at time point 60, 5 at time point 90, and 5 at time point 120.
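
I counted the replicates above by inspecting the worksheet by hand. As an illustration only, a short Python sketch like the one below could tally the same counts automatically; it assumes the log fold change column headers contain the time point labels (t15, t30, and so on), which would need to be checked against the actual file.

import pandas as pd

# Read the master sheet from the renamed workbook (file and sheet names as above).
df = pd.read_excel("BIOL367_S24_microarray-data_wt_KM.xlsx", sheet_name="Master_Sheet_wt")

# Count the replicate columns for each time point, assuming each data column's
# header contains its time point label.
for t in ("t15", "t30", "t60", "t90", "t120"):
    replicate_columns = [c for c in df.columns if t in str(c)]
    print(t, "replicates:", len(replicate_columns))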

Statistical Analysis Part 1: ANOVA

  1. I then created a new worksheet, naming it "wt_ANOVA".
  2. I then copied all data from the "Master_Sheet" worksheet and pasted it into my new worksheet.
  3. At the top of the first column to the right of my data, I then created five column headers of the form wt_AvgLogFC_(TIME) where (TIME) is 15, 30, 60, 90, and 120.
  4. In the cell below the wt_AvgLogFC_t15 header, I then typed =AVERAGE(
  5. I then highlighted all the data in row 2 associated with t15 (D2:G2), pressed the closing paren key (shift 0), and pressed the "enter" key.
  6. The cell now contains the average of the log fold change data from the first gene at t=15 minutes.
  7. I then clicked on this cell and positioned my cursor at the bottom right corner. After the cursor changed to a thin black plus sign, I double clicked and the formula was copied to the remaining genes.
  8. I then repeated steps (4) through (7) with the t30 (H2:L2), t60 (M2:P2), t90 (Q2:U2), and the t120 data (V2:Z2).
  9. To the right of the wt_AvgLogFC_t120 calculation, I created the column header wt_ss_HO.
  10. In the first cell below this header, I typed =SUMSQ(
  11. Then, I highlighted all the LogFC data in row 2 (D2:Z2, but not the AvgLogFC columns), pressed the closing paren key (shift 0), and pressed the "enter" key.
  12. In the next empty column to the right of wt_ss_HO, I created the column headers wt_ss_(TIME) as in (3).
  13. I counted the data points for the wt strain: 4 or 5 at each time point, and 23 in total.
  14. In the first cell below the header wt_ss_t15, I then typed =SUMSQ(D2:G2)-COUNTA(D2:G2)*AA2^2 and hit enter.
    • Upon completion of this single computation, I then used the Step (7) trick to copy the formula throughout the column.
  15. I then repeated this computation for the t30 through t120 data points.
    • Below wt_ss_t30, I typed =SUMSQ(H2:L2)-COUNTA(H2:L2)*AB2^2 and hit enter, then copied the formula throughout the column.
    • Below wt_ss_t60, I typed =SUMSQ(M2:P2)-COUNTA(M2:P2)*AC2^2 and hit enter, then copied the formula throughout the column.
    • Below wt_ss_t90, I typed =SUMSQ(Q2:U2)-COUNTA(Q2:U2)*AD2^2 and hit enter, then copied the formula throughout the column.
    • Below wt_ss_t120, I typed =SUMSQ(V2:Z2)-COUNTA(V2:Z2)*AE2^2 and hit enter, then copied the formula throughout the column.
  16. In the first column to the right of wt_ss_t120, I then created the column header wt_ss_full.
  17. In the first row below this header, I typed =sum(AG2:AK2) and hit enter.
  18. In the next two columns to the right, I then created the headers wt_Fstat and wt_p-value.
  19. There are 23 data points in total across the 5 time points, which I used in the following formula.
  20. In the first cell of the wt_Fstat column, I typed =((23-5)/5)*(AF2-AL2)/AL2 and hit enter.
    • I then copied this to the whole column.
  21. In the first cell below the wt_p-value header, I typed =FDIST(AM2,5,23-5), and then copied this to the whole column.
  22. Before moving onto the next step, I performed a quick sanity check to see if the computations were done correctly.
    • First, I clicked on cell A1 and clicked on the Data tab. I then selected the Filter icon, and used the drop-down arrows that appeared to filter the data.
    • I clicked on the drop-down arrow on my wt_p-value column, then selected "Number Filters". In the window that appeared, I set a criterion that filtered my data so that the p value had to be less than 0.05.
    • Excel then only displayed the rows that corresponded to data meeting that filtering criterion. A number appeared in the lower left hand corner of the window giving me the number of rows that met that criterion. These numbers were then used to check our results with each other to make sure that the computations were performed correctly.
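
All of the calculations above were done in Excel. As a cross-check only, the F statistic and p value from steps 20 and 21 could be reproduced for a single gene with a Python sketch like the one below; the replicate values shown are placeholders, not actual numbers from the worksheet.

import numpy as np
from scipy.stats import f

# Placeholder log2 fold change replicates for one gene, grouped by time point
# (4 or 5 replicates per time point, 23 values in total, as in the wt data).
groups = {
    "t15": [3.1, 3.4, 3.2, 3.4],
    "t30": [3.5, 3.7, 3.6, 3.6, 3.7],
    "t60": [3.4, 3.6, 3.5, 3.6],
    "t90": [-2.0, -2.1, -1.9, -2.1, -2.1],
    "t120": [-0.6, -0.7, -0.5, -0.6, -0.6],
}

values = np.concatenate([np.asarray(v, dtype=float) for v in groups.values()])
n_total = len(values)       # 23 data points in total
n_timepoints = len(groups)  # 5 time points

# wt_ss_HO: sum of squares of all the log fold change values for this gene.
ss_h0 = float(np.sum(values ** 2))

# wt_ss_full: for each time point, SUMSQ(group) - COUNTA(group) * mean(group)^2, summed.
ss_full = sum(float(np.sum(np.asarray(v) ** 2) - len(v) * np.mean(v) ** 2)
              for v in groups.values())

# wt_Fstat and wt_p-value, mirroring the Excel formulas and FDIST above.
f_stat = ((n_total - n_timepoints) / n_timepoints) * (ss_h0 - ss_full) / ss_full
p_value = f.sf(f_stat, n_timepoints, n_total - n_timepoints)
print("F =", f_stat, "p =", p_value)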

Calculate the Bonferroni p value Correction

  1. Then, I performed adjustments to the p value to correct for the multiple testing problem. I labeled the next two columns to the right of wt_p-value with the same label, wt_Bonferroni_p-value.
  2. In the first cell of the first column, I typed the equation =AN2*6189, and upon completion of this single computation, copied the formula throughout the entire column.
  3. I then replaced any corrected p value that was greater than 1 with the number 1 by typing the following formula into the first cell below the second wt_Bonferroni_p-value column: =IF(AO2>1,1,AO2), and then copied the formula throughout the column.
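
Both Bonferroni columns were computed in Excel as described above. As an illustration only, the same correction (multiply each unadjusted p value by the 6189 genes tested and cap the result at 1) could be written as a small Python sketch; the p values below are placeholders.

import numpy as np

unadjusted_p = np.array([2.87e-10, 0.0123, 0.5638, 0.9100])  # placeholder values
n_genes = 6189

# Multiply by the number of genes tested, then cap anything above 1 at 1,
# matching the two wt_Bonferroni_p-value columns.
bonferroni_p = np.minimum(unadjusted_p * n_genes, 1.0)
print(bonferroni_p)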

Calculate the Benjamini & Hochberg p value Correction

  1. I then inserted a new worksheet named "wt_ANOVA_B-H".
  2. I copied and pasted the "MasterIndex", "ID", and "Standard Name" columns from my previous worksheet into the first three columns of the new worksheet.
  3. I used Paste special > Paste values for the following: I copied my unadjusted p values from my ANOVA worksheet and pasted them into Column D.
  4. I then selected all of columns A, B, C, and D. I sorted by ascending values on column D by selecting the sort button from A to Z on the toolbar, and in the window that appeared, clicked sort by column D, smallest to largest.
  5. I then typed the header "Rank" in cell E1. This column was used to create a series of numbers in ascending order from 1 to 6189. This gave me the p value rank, smallest to largest. To assign ranks, I typed "1" into cell E2 and "2" into cell E3. I selected both cells E2 and E3, and then double-clicked on the plus sign on the lower right-hand corner of my selection to fill the column with a series of numbers from 1 to 6189.
  6. I used the following to calculate the Benjamini and Hochberg p value corrections. I typed wt_B-H_p-value in cell F1, and typed the following formula in cell F2: =(D2*6189)/E2 and pressed enter. I then copied that equation to the entire column.
  7. I then typed "wt_B-H_p-value" into cell G1.
  8. In G2, I typed the following formula: =IF(F2>1,1,F2) and pressed enter, then copied that equation to the entire column.
  9. I then selected columns A through G, and sorted them by my MasterIndex in Column A in ascending order.
  10. Finally, I copied column G and used Paste special > Paste values to paste it into the next column on the right of my ANOVA sheet.
    • I then uploaded the zipped .xlsx file to the Data and Files section of this notebook.
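
As a cross-check only, the rank-based Benjamini & Hochberg recipe from steps 4 through 8 above could also be reproduced in Python; the short list of p values below is a placeholder standing in for the full column of 6189 unadjusted p values.

import numpy as np

unadjusted_p = np.array([2.87e-10, 0.0123, 0.5638, 0.9100])  # placeholder column
n_genes = len(unadjusted_p)  # 6189 in the real worksheet

# Rank the p values from smallest to largest (the "Rank" column, E).
order = np.argsort(unadjusted_p)
ranks = np.empty(n_genes, dtype=int)
ranks[order] = np.arange(1, n_genes + 1)

# wt_B-H_p-value: (p * number of genes) / rank, capped at 1 as in column G.
bh_p = np.minimum(unadjusted_p * n_genes / ranks, 1.0)
print(bh_p)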

Sanity Check: Number of genes significantly changed

I then performed another sanity check to make sure the data analysis was performed correctly by finding the number of genes that are significantly changed at various p-value cut-offs.

  • First, I went to my wt_ANOVA worksheet.
  • I then selected row 1 and selected the menu item Data > Filter > Autofilter, causing drop-down arrows to appear.
  • I clicked on the drop-down arrow for the unadjusted p value and set a criterion that filtered my data so that the p value had to be less than each cut-off in turn, which I used to answer the following questions.
    • How many genes have p < 0.05? and what is the percentage (out of 6189)? 2528 genes have p < 0.05. This is 40.85% of the total genes in the dataset.
    • How many genes have p < 0.01? and what is the percentage (out of 6189)? 1652 genes have p < 0.01. This is 26.69% of the total genes in the dataset.
    • How many genes have p < 0.001? and what is the percentage (out of 6189)? 919 genes have p < 0.001. This is 14.85% of the total genes in the dataset.
    • How many genes have p < 0.0001? and what is the percentage (out of 6189)? 496 genes have p < 0.0001. This is 8.01% of the total genes in the dataset.

We know that the Bonferroni correction is very stringent, and the Benjamini-Hochberg correction is less stringent. To see this relationship, I filtered my data in the first Bonferroni column by clicking on the drop-down arrow and setting a criterion so that the p value is less than 0.05. I then repeated this for the Benjamini-Hochberg column. I used this to answer the following questions.

  • How many genes are p < 0.05 for the Bonferroni-corrected p value? and what is the percentage (out of 6189)? 248 genes are p < 0.05. This is 4.01% of the total genes in the dataset.
  • How many genes are p < 0.05 for the Benjamini and Hochberg-corrected p value? and what is the percentage (out of 6189)? 1822 genes are p < 0.05. This is 29.44% of the total genes in the dataset.
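
I obtained all of the counts above by reading the filtered row count that Excel reports. As an illustration only, the same tallies and percentages could be computed with a sketch like the one below, assuming the three p value columns have been exported to plain text files (the file names here are hypothetical).

import numpy as np

# Hypothetical one-column exports of the unadjusted, Bonferroni, and B-H p values.
unadjusted = np.loadtxt("wt_unadjusted_p.txt")
bonferroni = np.loadtxt("wt_bonferroni_p.txt")
bh = np.loadtxt("wt_bh_p.txt")
n_genes = 6189

# Unadjusted p values at each cut-off used above.
for cutoff in (0.05, 0.01, 0.001, 0.0001):
    n = int(np.sum(unadjusted < cutoff))
    print(f"unadjusted p < {cutoff}: {n} genes ({100 * n / n_genes:.2f}%)")

# Corrected p values at the 0.05 cut-off.
for label, column in (("Bonferroni", bonferroni), ("B-H", bh)):
    n = int(np.sum(column < 0.05))
    print(f"{label} p < 0.05: {n} genes ({100 * n / n_genes:.2f}%)")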

I then organized these numbers into a table in a PowerPoint slide modeled off the sample slide posted in the Week 9 Assignment, and uploaded this slide to the wiki.

The expression of the gene NSR1 (ID: YGR159C) is known to be induced by cold shock. I found NSR1 by using the drop-down arrow on column C to search for its name, and then used this to answer the following questions.

  • What is its unadjusted, Bonferroni-corrected, and B-H-corrected p values?
    • Unadjusted: 2.86939E-10
    • Bonferroni-corrected: 1.77586E-06
    • B-H-corrected: 8.87932E-07
  • What is its average Log fold change at each of the timepoints in the experiment?
    • Time point 15: 3.279225
    • Time point 30: 3.621
    • Time point 60: 3.526525
    • Time point 90: -2.04985
    • Time point 120: -0.60622
  • Does NSR1 change expression due to cold shock in this experiment?
    • Yes, the unadjusted, Bonferroni-corrected, and B-H-corrected p-values are all less than 0.05, indicating that NSR1 changes expression due to cold shock in this experiment.
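
I located NSR1 (and MSN1 below) with the column C filter in Excel. As an illustration only, the same lookup could be done with pandas; the column names used here are the headers created earlier in this notebook and may not match the worksheet exactly.

import pandas as pd

# Read the ANOVA worksheet from the renamed workbook.
anova = pd.read_excel("BIOL367_S24_microarray-data_wt_KM.xlsx", sheet_name="wt_ANOVA")

# Pull the row for one gene by its Standard Name (column C) and report its
# unadjusted p value and average log fold change at each time point.
gene = anova[anova["Standard Name"] == "NSR1"]
print(gene[["wt_p-value",
            "wt_AvgLogFC_t15", "wt_AvgLogFC_t30", "wt_AvgLogFC_t60",
            "wt_AvgLogFC_t90", "wt_AvgLogFC_t120"]])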

I then found MSN1 by using the drop-down arrow for column C and searching its name, and then used this to answer the following questions.

  • What is its unadjusted, Bonferroni-corrected, and B-H-corrected p values?
    • Unadjusted: 0.563798852
    • Bonferroni-corrected: 3489.351093
    • B-H-corrected: 0.679258535
  • What is its average Log fold change at each of the timepoints in the experiment?
    • Time point 15: 0.1076
    • Time point 30: -0.46192
    • Time point 60: -0.47075
    • Time point 90: 0.16805
    • Time point 120: -0.18418
  • Does your favorite gene change expression due to cold shock in this experiment?
    • No, the unadjusted, Bonferroni-corrected, and B-H-corrected p values are all greater than 0.05, indicating that MSN1's expression did not change significantly due to cold shock in this experiment.

Data & Files

Miller WT ANOVA Data (BIOL367_S24_microarray-data_wt_KM.xlsx.zip)

Miller WT Slide (BIOL367_S24_sample_p-value_slide_KM.pptx)

Conclusion

During this week's journal assignment, I learned ways to analyze datasets with different equations and how to use Excel to obtain correct results efficiently. I learned that a p value tells us how often a gene expression change that deviates this far from zero would be seen by chance, and that we can apply the Bonferroni and Benjamini & Hochberg correction equations to our obtained p values. I also learned that as the p value cut-off gets smaller, fewer genes fall below it. I saw that the average log fold change differs between the NSR1 and MSN1 genes at each timepoint. Finally, during this week I followed the Week 9 assignment page to keep my own organized and detailed electronic notebook that is specific to the wild type strain.

Acknowledgements

I worked under the guidance of Dr. Dahlquist on 3-14-24 and 3-18-24. I consulted with my classmates during class time when I had a question about step 21 in the ANOVA section of this journal assignment. I first copied the general procedure from the Week 9 Assignment page, and then adjusted my individual procedure to reflect exactly what I did when following the steps.

Except for what is noted above, this individual journal entry was completed by me and not copied from another source.

Kmill104 (talk) 20:30, 20 March 2024 (PDT)

References

LMU BioDB 2024. (2024). Week 9. Retrieved March 20, 2024, from https://xmlpipedb.cs.lmu.edu/biodb/spring2024/index.php/Week_9
