Mavila9 Week 8

From LMU BioDB 2019
 
== Purpose ==

This investigation was done to analyze the microarray data to determine changes in gene expression after cold shock.
  
 
== Methods & Results ==

==== ANOVA: Part 1 ====
 
The purpose of the within-strain ANOVA test was to determine if any genes had a gene expression change that was significantly different from zero at '''''any''''' timepoint.
 
  
 
# A new worksheet was created named "wt_ANOVA".
# The first three columns containing the "MasterIndex", "ID", and "Standard Name" were copied from the "Master_Sheet" worksheet for the wt strain and pasted into the "wt_ANOVA" worksheet. The columns containing the data for the wt strain were also copied and pasted into the "wt_ANOVA" worksheet.
# At the top of the first column to the right of the data, five column headers were created of the form wt_AvgLogFC_(TIME), where (TIME) was 15, 30, etc.
# In the cell below the wt_AvgLogFC_t15 header, <code>=AVERAGE(</code> was typed.
# Then all the data in row 2 associated with t15 was highlighted, the closing parenthesis was typed, and the "enter" key was pressed.
# The equation in this cell was copied to the entire column for the 6188 other genes.
# Steps (3) through (6) were repeated with the t30, t60, t90, and t120 data.
# Next, in the first empty column to the right of the wt_AvgLogFC_t120 calculation, the column header wt_ss_HO was created.
# In the first cell below this header, <code>=SUMSQ(</code> was typed.
# Then all the LogFC data in row 2 (but not the AvgLogFC) were highlighted, the closing parenthesis was added, and the "enter" key was pressed.
# In the next empty column to the right of wt_ss_HO, the column headers wt_ss_(TIME) were created as in (3).
# The number of data points at each time point for the wt strain was noted (4 or 5 per time point), along with the total number of data points (23).
# In the first cell below the header wt_ss_t15, <code>=SUMSQ(<range of cells for logFC_t15>)-COUNTA(<range of cells for logFC_t15>)*<AvgLogFC_t15>^2</code> was typed and enter was pressed.
#* The <code>COUNTA</code> function counts the number of cells in the specified range that have data in them (i.e., it does not count cells with missing values).
#* The phrase <range of cells for logFC_t15> was replaced by the data range associated with t15.
#* The phrase <AvgLogFC_t15> was replaced by the cell in which the AvgLogFC for t15 was computed; the "^2" squares that value.
#* Upon completion of this single computation, the formula was copied throughout the column.
# This computation was repeated for the t30 through t120 data points.
# In the first column to the right of wt_ss_t120, the column header wt_SS_full was created.
# In the first row below this header, <code>=SUM(<range of cells containing "ss" for each timepoint>)</code> was typed and enter was pressed.
# In the next two columns to the right, the headers wt_Fstat and wt_p-value were created.
# The number of data points from (12) was recalled; that total was called n.
# In the first cell of the wt_Fstat column, <code>=((n-5)/5)*(<wt_ss_HO>-<wt_SS_full>)/<wt_SS_full></code> was typed and enter was pressed, where n was the actual total from (12) and "5" was the number of timepoints.
#* The phrase <wt_ss_HO> was replaced with the cell designation.
#* The phrase <wt_SS_full> was replaced with the cell designation.
#* The formula was copied throughout the column.
# In the first cell below the wt_p-value header, <code>=FDIST(<wt_Fstat>,5,n-5)</code> was typed, replacing the phrase <wt_Fstat> with the cell designation and "n" with the total number of data points from (12). The formula was copied throughout the column.
# A sanity check was performed before moving on to see if all of these computations were done correctly.
#* Cell A1 was selected, then on the Data tab the Filter icon was chosen.
#* The drop-down arrow on the wt_p-value column was clicked, then "Number Filters" was selected. In the window that appeared, a criterion was set to filter the data so that the p value had to be less than 0.05.
#* Excel then displayed only the rows that met that filtering criterion, and a number appeared in the lower left-hand corner of the window giving the number of rows that met it.
#* Any filters that were applied were removed before making any additional calculations.
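The within-strain ANOVA computed in the spreadsheet can be sketched in Python for a single gene. This is a hypothetical illustration, not part of the original assignment; the replicate log fold changes below are made-up values, and in the worksheet the final p value comes from <code>FDIST(F, 5, n-5)</code>:

```python
# Hypothetical log fold-change replicates for one gene at the five timepoints.
data = {
    "t15": [1.0, 3.0],
    "t30": [0.0, 2.0],
    "t60": [2.0, 4.0],
    "t90": [1.0, 1.0],
    "t120": [0.0, 0.0],
}

n = sum(len(reps) for reps in data.values())  # total data points (the "n" in the steps)
k = len(data)                                 # number of timepoints (5)

# ss_HO: sum of squares under the null hypothesis of zero expression change,
# the spreadsheet's =SUMSQ(all LogFC cells in the row).
ss_ho = sum(x * x for reps in data.values() for x in reps)

# SS_full: within-timepoint sums of squares, one per timepoint, each computed
# as =SUMSQ(range) - COUNTA(range) * AvgLogFC^2, then summed into wt_SS_full.
ss_full = sum(
    sum(x * x for x in reps) - len(reps) * (sum(reps) / len(reps)) ** 2
    for reps in data.values()
)

# F statistic, matching =((n-5)/5) * (ss_HO - SS_full) / SS_full.
fstat = ((n - k) / k) * (ss_ho - ss_full) / ss_full
print(fstat)  # the p value is then FDIST(fstat, 5, n - 5) in Excel
```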
 
 
 
==== Calculating the Bonferroni p-value Correction ====

# The next two columns to the right were labeled with the same label, wt_Bonferroni_p-value.
# The equation <code>=<wt_p-value>*6189</code> was typed and then copied to the entire column.
# Any corrected p-value greater than 1 was replaced by 1 by typing the following formula into the first cell below the second wt_Bonferroni_p-value header: <code>=IF(wt_Bonferroni_p-value>1,1,wt_Bonferroni_p-value)</code>, where "wt_Bonferroni_p-value" refers to the cell in which the first Bonferroni p-value computation was made; the formula was then copied throughout the column.
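The two Bonferroni formulas above can be sketched together in Python (the unadjusted p values here are hypothetical, not from the wt dataset):

```python
# Bonferroni correction: multiply each unadjusted p value by the number of
# tests (6189 genes) and cap the result at 1, mirroring =<wt_p-value>*6189
# followed by =IF(bonf>1,1,bonf).
raw_p = [0.00001, 0.0004, 0.03, 0.5]  # hypothetical unadjusted p values
bonferroni = [min(p * 6189, 1.0) for p in raw_p]
print(bonferroni)
```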
==== Calculating the Benjamini & Hochberg p-value Correction ====

# A new worksheet named "wt_ANOVA_B-H" was created.
# The "MasterIndex", "ID", and "Standard Name" columns were copied from the previous worksheet and pasted into the first three columns of the new worksheet.
# The unadjusted p-values were copied from the ANOVA worksheet and pasted into Column D.
# Columns A, B, C, and D were selected and sorted by ascending values on Column D: the sort button was clicked on the toolbar, and in the window that appeared, Column D was sorted from smallest to largest.
# "Rank" was typed into cell E1. "1" was typed into cell E2 and "2" into cell E3. Both cells E2 and E3 were selected, and the plus sign in the lower right-hand corner of the selection was double-clicked to fill the column with a series of numbers from 1 to 6189.
# The Benjamini & Hochberg p-value correction was calculated next. "wt_B-H_p-value" was typed into cell F1. The following formula was typed into cell F2: <code>=(D2*6189)/E2</code> and enter was pressed. That equation was copied throughout the entire column.
# "wt_B-H_p-value" was typed into cell G1.
# The following formula was typed into cell G2: <code>=IF(F2>1,1,F2)</code> and enter was pressed. That equation was copied throughout the entire column.
# Columns A through G were selected and then sorted by the MasterIndex in Column A in ascending order.
# Column G was copied and the values were pasted into the next column on the right of the ANOVA sheet.
# The .xlsx file was put into a .zip folder and uploaded to the wiki.
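The worksheet's Benjamini & Hochberg calculation (sort ascending, rank, multiply by N/rank, cap at 1) can be sketched as follows. The gene names and p values are hypothetical; note that, unlike some library implementations, the worksheet version does not enforce monotonicity of the adjusted values:

```python
N = 6189  # total number of genes tested

# Hypothetical unadjusted ANOVA p values, keyed by gene.
raw = {"geneA": 0.00001, "geneB": 0.0004, "geneC": 0.03, "geneD": 0.5}

# Sort ascending (the Column D sort), rank 1..len (the Rank column), then
# apply =(p * 6189) / rank followed by =IF(v>1,1,v), as in cells F2 and G2.
ranked = sorted(raw.items(), key=lambda kv: kv[1])
bh = {gene: min(p * N / rank, 1.0)
      for rank, (gene, p) in enumerate(ranked, start=1)}
print(bh)
```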
  
 
==== Sanity Check: Number of genes significantly changed ====

Before moving on to further analysis of the data, a more extensive sanity check was performed to make sure the data analysis was done correctly, by finding the number of genes significantly changed at various p-value cut-offs.

* From the wt_ANOVA worksheet, row 1 was selected, then the menu item Data > Filter > Autofilter was chosen.
* The drop-down arrow for the unadjusted p-value was selected and a criterion was set to filter the data so that the p-value had to be less than 0.05, in order to determine: how many genes have p < 0.05, and what is the percentage (out of 6189)? How many genes have p < 0.01, and what is the percentage? How many genes have p < 0.001, and what is the percentage? How many genes have p < 0.0001, and what is the percentage?
* To see the relationship between the stringent Bonferroni correction and the less stringent Benjamini-Hochberg correction, the data were filtered to determine: how many genes have p < 0.05 for the Bonferroni-corrected p value, and what is the percentage (out of 6189)? How many genes have p < 0.05 for the Benjamini and Hochberg-corrected p value, and what is the percentage (out of 6189)?
* The numbers were organized as a table using this [[Media:BIOL367_F19_sample_p-value_slide.pptx | sample PowerPoint slide]] and compared to the other strains after the slide was uploaded to the wiki.
* Comparing results with known data: the expression of the gene ''NSR1'' (ID: YGR159C) is known to be induced by cold shock, so the unadjusted, Bonferroni-corrected, and B-H-corrected p values of this gene were found, along with its average Log fold change at each of the timepoints in the experiment, to determine whether ''NSR1'' changed expression due to cold shock.
* The unadjusted, Bonferroni-corrected, and B-H-corrected p values for the gene YBR160W/CDC28 in the dataset were found, along with its average Log fold change at each of the timepoints in the experiment, to determine whether its expression changed due to cold shock.
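The cut-off counts and percentages gathered with Excel's filters can be sketched programmatically (the p values below are synthetic, not the wt results):

```python
# Synthetic unadjusted p values: 0.001, 0.002, ..., 0.100.
p_values = [i / 1000 for i in range(1, 101)]

# Count the genes passing each cut-off and the percentage of the total,
# as read off from the filtered row count in Excel's status bar.
counts = {}
for cutoff in (0.05, 0.01, 0.001, 0.0001):
    hits = sum(p < cutoff for p in p_values)
    counts[cutoff] = (hits, 100.0 * hits / len(p_values))
print(counts)
```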
 
  
 
==== Clustering and GO Term Enrichment with stem (part 2)====

# The microarray data file was prepared for loading into STEM.
#* A new worksheet was inserted into the Excel workbook and named "wt_stem".
#* All of the data from the "wt_ANOVA" worksheet was selected, copied, and pasted into the "wt_stem" worksheet.
#** The leftmost column, with the column header "Master_Index", was renamed to "SPOT". Column B, with the header "ID", was renamed to "Gene Symbol". Then the column named "Standard_Name" was deleted.
#** The data were filtered on the B-H corrected p-value to be > 0.05 (that is, '''greater than''' in this case).
#*** Once the data were filtered, all of the rows (except for the header row) were selected and deleted by right-clicking and choosing "Delete Row" from the context menu. The filter was then undone to ensure that only the genes with a "significant" change in expression, and not the noise, would be used.
#** All of the data columns except for the Average Log Fold change columns for each timepoint (for example, wt_AvgLogFC_t15, etc.) were deleted.
#** The data columns were renamed with just the time and units (for example, 15m, 30m, etc.).
#** The work was saved, and ''Save As'' was used to save the spreadsheet as Text (Tab-delimited) (*.txt).
# Next, the STEM software was downloaded and extracted from [http://www.cs.cmu.edu/~jernst/stem/ the STEM web site].
#* The [http://www.sb.cs.cmu.edu/stem/stem.zip download link] was selected and the <code>stem.zip</code> file was downloaded to the Desktop.
#* The file was unzipped, creating a folder called <code>stem</code>.
#** Next, the Gene Ontology ([https://lmu.box.com/s/t8i5s1z1munrcfxzzs7nv7q2edsktxgl "gene_ontology.obo"]) and yeast GO annotations ([https://lmu.box.com/s/zlr1s8fjogfssa1wl59d5shyybtm1d49 "gene_association.sgd.gz"]) were downloaded and placed in the <code>stem</code> folder.
#* Inside the folder, <code>stem.jar</code> was double-clicked to launch the STEM program.
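The file-preparation steps above (filter out genes with B-H corrected p > 0.05, keep only SPOT, Gene Symbol, and the renamed AvgLogFC columns, and write a tab-delimited text file) can be sketched with the Python standard library. The two miniature rows and the column name wt_BH_p are made up for illustration:

```python
import csv
import io

# Miniature stand-in for the "wt_stem" worksheet (hypothetical values).
rows = [
    {"SPOT": "1", "Gene Symbol": "NSR1", "wt_BH_p": "0.001",
     "15m": "1.2", "30m": "2.1", "60m": "1.8", "90m": "0.9", "120m": "0.3"},
    {"SPOT": "2", "Gene Symbol": "CDC28", "wt_BH_p": "0.40",
     "15m": "0.1", "30m": "0.0", "60m": "-0.1", "90m": "0.0", "120m": "0.1"},
]

keep = ["SPOT", "Gene Symbol", "15m", "30m", "60m", "90m", "120m"]
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=keep, delimiter="\t", lineterminator="\n")
writer.writeheader()
for row in rows:
    # Rows with B-H corrected p > 0.05 are the ones deleted in the worksheet,
    # so only significant genes are written to the STEM input file.
    if float(row["wt_BH_p"]) <= 0.05:
        writer.writerow({col: row[col] for col in keep})

stem_input = buf.getvalue()  # in practice this would be saved as a .txt file
print(stem_input)
```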
 
 
# '''Running STEM'''
## In section 1 (Expression Data Info) of the main STEM interface window, the ''Browse...'' button was clicked and the tab-delimited text file created above was selected.
##* The radio button ''No normalization/add 0'' was selected.
##* The box next to ''Spot IDs included in the data file'' was checked.
## In section 2 (Gene Info) of the main STEM interface window, the default selections for the three drop-down menus for Gene Annotation Source, Cross Reference Source, and Gene Location Source were left as "User provided".
## The "Browse..." button to the right of the "Gene Annotation File" item was clicked, and the "gene_association.sgd.gz" file was selected and opened.
## In section 3 (Options) of the main STEM interface window, the Clustering Method "STEM Clustering Method" was selected, and the defaults for Maximum Number of Model Profiles and Maximum Unit Change in Model Profiles between Time Points were left unchanged.
## In section 4 (Execute), the yellow Execute button was clicked to run STEM.
 
  
 
* The sanity check in step 21 of "ANOVA: Part 1" showed that 2528 out of the 6189 genes have a p-value of less than 0.05.
 
== Data ==

[[media:BIOL367 F19 microarray-data wtMAVILA9 Benjamini&Hochberg Pvalue Correction.zip | Microarray Data .zip]]

[[media:BIOL367 F19 microarray-data wtMAVILA9 Benjamini^0Hochberg Pvalue Correction.txt | Microarray Data .txt]]

[[media:BIOL367 F19 wt p-value slide mavila9.zip | Sanity Check Table]]

[[image:Stem results1.JPG]]
  
 
== Conclusion ==

In conclusion, the wild type p-values were calculated for each gene, allowing statistical significance to be determined. Next, Bonferroni and Benjamini & Hochberg p-value adjustments were performed to correct for the multiple testing problem. Finally, the microarray data was prepared and run in the STEM clustering program.
  
 
== Acknowledgements ==

Latest revision as of 15:27, 24 October 2019

Links

User Page

Template:mavila9

Assignment Page Individual Journal Entry Class Journal Entry
Week 1 Week 1 (User page) Shared Journal Week 1
Week 2 Mavila9 Week 2 Shared Journal Week 2
Week 3 Gene Page Week 3 Shared Journal Week 3
Week 4 Journal Entry Page Week 4 Shared Journal Week 4
Week 5 RNAct Database Page Week 5 Shared Journal Week 5
Week 6 Journal Entry Page Week 6 Shared Journal Week 6
Week 7 Journal Entry Page Week 7 Shared Journal Week 7
Week 8 Journal Entry Page Week 8 Shared Journal Week 8
Week 9 Journal Entry Page Week 9 Shared Journal Week 9
Week 10 Journal Entry Page Week 10 Shared Journal Week 10
Week 11 Sulfiknights Team Page Shared Journal Week 10
Journal Entry Page Week 11
Week 12/13 Journal Entry Page Week 12 Shared Journal Week 11
Week 12/13 Sulfiknights DA Week 12/13 Shared Journal Week 12
N/A Sulfiknights DA Week 14

Purpose

This investigation was done to analyze the microarray data to determine changes in gene expression after a cold shock.

Methods & Results

ANOVA: Part 1

  1. A new worksheet was created named "wt_ANOVA".
  2. The first three columns containing the "MasterIndex", "ID", and "Standard Name" were copied from the "Master_Sheet" worksheet for the wt strain and pasted it into the "wt_ANOVA" worksheet. The columns containing the data for the wt strain were copied and pasted into the "wt_ANOVA" worksheet.
  3. At the top of the first column to the right of the data, five column headers were created of the form wt_AvgLogFC_(TIME) where (TIME) was 15, 30, etc.
  4. In the cell below the wt_AvgLogFC_t15 header, =AVERAGE( was typed.
  5. Then all the data in row 2 associated with t15 was highlighted, and the closing parentheses key was pressed, followed by the "enter" key.
  6. The equation in this cell was copied to the entire column of the 6188 other genes.
  7. Steps (3) through (6) were repeated with the t30, t60, t90, and the t120 data.
  8. Next in the first empty column to the right of the wt_AvgLogFC_t120 calculation, the column header wt_ss_HO was created.
  9. In the first cell below this header, =SUMSQ( was typed.
  10. Then all the LogFC data in row 2 (but not the AvgLogFC) were highlighted, then the closing parentheses were added,and the "enter" key was pressed.
  11. In the next empty column to the right of wt_ss_HO, the column headers wt_ss_(TIME) was created as in (3).
  12. Make a note of how many data points you have at each time point for your strain.
  13. In the first cell below the header wt_ss_t15, =SUMSQ(<range of cells for logFC_t15>)-COUNTA(<range of cells for logFC_t15>)*<AvgLogFC_t15>^2 was typed and the enter was pressed.
    • The phrase <range of cells for logFC_t15> was replaced by the data range associated with t15.
    • The phrase <AvgLogFC_t15> was replaced by the cell number in which the AvgLogFC for t15 was computed.
    • Upon completion of this single computation, the formula was copied throughout the column.
  14. This computation was repeated for the t30 through t120 data points.
  15. In the first column to the right of wt_ss_t120, the column header wt_SS_full was created.
  16. In the first row below this header, =sum(<range of cells containing "ss" for each timepoint>) was typed and enter was pressed.
  17. In the next two columns to the right, the headers wt_Fstat and wt_p-value were created.
  18. The number of data points from (12) was recalled; that total is referred to as n.
  19. In the first cell of the wt_Fstat column, =((n-5)/5)*(<wt_ss_HO>-<wt_SS_full>)/<wt_SS_full> was typed and enter was pressed.
    • The phrase <wt_ss_HO> was replaced with the cell designation.
    • The phrase <wt_SS_full> was replaced with the cell designation.
    • The "n" was replaced with the total number of data points from (12).
    • The formula was copied throughout the whole column.
  20. In the first cell below the wt_p-value header, =FDIST(<wt_Fstat>,5,n-5) was typed, replacing the phrase <wt_Fstat> with the cell designation and the "n" as in (12) with the number of data points total. The whole column was copied.
  21. A sanity check was performed before moving on to see if all of these computations were done correctly.
    • Cell A1 was selected then on the Data tab, the Filter icon was chosen.
    • The drop-down arrow on the wt_p-value column was clicked, then "Number Filters" was selected. In the window that appeared, a criterion was set so that only rows with a p-value less than 0.05 would be displayed.
    • Excel then displayed only the rows that met that filtering criterion, and the number of such rows appeared in the lower left-hand corner of the window.
    • Any filters that were applied were removed before making any additional calculations.
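The spreadsheet arithmetic above (steps 9 through 20) can be sketched in Python for a single gene. This is a minimal illustration, not part of the protocol: the replicate logFC values below are made up, and the variable names are hypothetical.

```python
# Within-strain ANOVA for one gene, mirroring the worksheet formulas.
# All replicate logFC values below are hypothetical, for illustration only.
data = {
    "t15": [1.2, 0.9, 1.1, 1.0],
    "t30": [0.7, 0.8, 0.6, 0.9],
    "t60": [0.2, 0.1, 0.3],
    "t90": [-0.1, 0.0, 0.1],
    "t120": [-0.3, -0.2, -0.4],
}

n = sum(len(reps) for reps in data.values())  # total data points, as in step (12)
T = len(data)                                 # number of timepoints (5)

# wt_ss_HO: SUMSQ over all LogFC values (step 10), i.e. squared deviations from 0.
ss_h0 = sum(x * x for reps in data.values() for x in reps)

# wt_ss_(TIME): SUMSQ(range) - COUNTA(range)*AvgLogFC^2 (step 13), i.e. the
# squared deviations from that timepoint's own mean; wt_SS_full is their sum (step 16).
ss_full = 0.0
for reps in data.values():
    avg = sum(reps) / len(reps)
    ss_full += sum(x * x for x in reps) - len(reps) * avg ** 2

# wt_Fstat (step 19); in Excel the p-value then comes from =FDIST(F, 5, n-5).
f_stat = ((n - T) / T) * (ss_h0 - ss_full) / ss_full
print(round(f_stat, 3))
```

The p-value in step (20) corresponds to the upper tail of the F distribution with 5 and n-5 degrees of freedom, which is what Excel's FDIST returns.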

Calculating the Bonferroni p-value Correction

  1. The next two columns to the right were labeled with the same label, wt_Bonferroni_p-value.
  2. The equation =<wt_p-value>*6189 was typed, with <wt_p-value> replaced by the appropriate cell designation, then copied into the entire column.
  3. Any corrected p-values greater than 1 were reset to 1 by typing the following formula into the first cell below the second wt_Bonferroni_p-value header: =IF(wt_Bonferroni_p-value>1,1,wt_Bonferroni_p-value), where "wt_Bonferroni_p-value" refers to the cell containing the first Bonferroni p-value computation. The formula was then copied throughout the column.
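The two steps above amount to multiplying each unadjusted p-value by the number of genes (6189) and capping the result at 1. A minimal sketch, using made-up p-values:

```python
# Bonferroni correction as computed in the worksheet.
# The p-values below are hypothetical examples.
n_genes = 6189

p_values = [0.0000031, 0.00051, 0.0407, 0.72]

bonferroni = [p * n_genes for p in p_values]   # =<wt_p-value>*6189
bonferroni = [min(p, 1) for p in bonferroni]   # =IF(...>1,1,...)
print(bonferroni)
```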

Calculating the Benjamini & Hochberg p value Correction

  1. A new worksheet named "wt_ANOVA_B-H" was created.
  2. The "MasterIndex", "ID", and "Standard Name" columns were copied and pasted from the previous worksheet into the first three columns of the new worksheet.
  3. The unadjusted p-values were copied from the ANOVA worksheet and pasted into Column D.
  4. Columns A, B, C, and D were selected and sorted by ascending values on Column D: the sort button on the toolbar was clicked, and in the window that appeared, Column D was sorted from smallest to largest.
  5. "Rank" was typed in cell E1. "1" was typed into cell E2 and "2" into cell E3. Both cells E2 and E3 were selected, then the plus sign in the lower-right corner was double-clicked to fill the column with a series of numbers from 1 to 6189.
  6. The Benjamini & Hochberg p-value correction was calculated next. "wt_B-H_p-value" was typed into cell F1. The following formula was typed into cell F2: =(D2*6189)/E2 and enter was pressed. That equation was copied throughout the entire column.
  7. "STRAIN_B-H_p-value" was typed into cell G1.
  8. The following formula was typed into cell G2: =IF(F2>1,1,F2) and enter was pressed. That equation was copied throughout the entire column.
  9. Columns A through G were selected then sorted by the MasterIndex in Column A in ascending order.
  10. Column G was copied and the values were pasted into the next column on the right of the ANOVA sheet.
  11. The .xlsx file was put into .zip folder and uploaded to the wiki.
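The ranking-and-scaling steps above can be sketched as follows. The p-values are made up; note that the worksheet always multiplies by 6189 (the total number of genes tested), whereas this sketch uses the length of the small example list so the numbers stay readable.

```python
# Benjamini & Hochberg correction as computed in the worksheet:
# sort ascending, rank 1..n, multiply by n, divide by rank, cap at 1.
# The p-values below are hypothetical.
p_values = [0.72, 0.0000031, 0.0407, 0.00051]

n = len(p_values)  # the worksheet uses 6189 here
ranked = sorted(p_values)                                   # steps (4)-(5)
bh = [p * n / (rank + 1) for rank, p in enumerate(ranked)]  # step (6)
bh = [min(p, 1) for p in bh]                                # step (8)
print(bh)
```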

Sanity Check: Number of genes significantly changed

Before we move on to further analysis of the data, we want to perform a more extensive sanity check to make sure that we performed our data analysis correctly. We are going to find out the number of genes that are significantly changed at various p value cut-offs.

  • From the wt_ANOVA worksheet, row 1 was selected, then the menu item Data > Filter > Autofilter was chosen.
  • The drop-down arrow for the unadjusted p-value was selected, and a filtering criterion was added so that the p-value had to be less than each cut-off in turn, to determine:
    • How many genes have p < 0.05, and what percentage is that (out of 6189)?
    • How many genes have p < 0.01, and what percentage is that (out of 6189)?
    • How many genes have p < 0.001, and what percentage is that (out of 6189)?
    • How many genes have p < 0.0001, and what percentage is that (out of 6189)?
  • To compare the stringency of the two corrections, the data was also filtered to determine:
    • How many genes have p < 0.05 for the Bonferroni-corrected p-value, and what percentage is that (out of 6189)?
    • How many genes have p < 0.05 for the Benjamini & Hochberg-corrected p-value, and what percentage is that (out of 6189)?
  • The numbers were organized as a table using the sample PowerPoint slide, then compared to other strains' results after the slide was uploaded to the wiki.
  • Comparing results with known data: the expression of the gene NSR1 (ID: YGR159C) is known to be induced by cold shock, so the unadjusted, Bonferroni-corrected, and B-H-corrected p-values for this gene were found, along with its average log fold change at each of the timepoints in the experiment, to determine whether NSR1 changed expression due to cold shock.
  • The unadjusted, Bonferroni-corrected, and B-H-corrected p-values for the gene YBR160W/CDC28 were found in the same way, along with its average log fold change at each of the timepoints in the experiment, to determine whether its expression changed due to cold shock.
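The cut-off questions above reduce to counting how many p-values fall below each threshold. A sketch with a hypothetical handful of unadjusted p-values (a real run would use all 6189):

```python
# Count genes whose p-value falls below each cut-off, as in the
# filtering questions above. The p-values are hypothetical examples.
p_values = [0.0004, 0.02, 0.00009, 0.3, 0.049, 0.0009]
total = 6189  # total genes in the dataset

counts = {cutoff: sum(1 for p in p_values if p < cutoff)
          for cutoff in (0.05, 0.01, 0.001, 0.0001)}
for cutoff, count in counts.items():
    print(f"p < {cutoff}: {count} genes ({100 * count / total:.2f}% of {total})")
```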

Clustering and GO Term Enrichment with stem (part 2)

  1. The microarray data file was prepared for loading into STEM.
    • A new worksheet was inserted into the Excel workbook named "wt_stem".
    • All of the data from the "wt_ANOVA" worksheet was selected, copied, and pasted into the "wt_stem" worksheet.
      • The leftmost column with the column header "Master_Index" was renamed to "SPOT". Column B with header "ID" was renamed to "Gene Symbol". Then the column named "Standard_Name" was deleted.
      • The data was filtered so that only rows with a B-H corrected p-value > 0.05 were displayed.
        • Once the data was filtered, all of the displayed rows (except for the header row) were selected and deleted by right-clicking and choosing "Delete Row" from the context menu. The filter was then removed, leaving only the genes with a "significant" change in expression, rather than the noise, for subsequent analysis.
      • All of the data columns except for the Average Log Fold change columns for each timepoint (for example, wt_AvgLogFC_t15, etc.) were deleted.
      • The data columns were renamed with just the time and units (for example, 15m, 30m, etc.).
      • The work was saved using Save As to save this spreadsheet as Text (Tab-delimited) (*.txt).
  2. Next the STEM software was downloaded and extracted from the STEM web site.
    • The download link was selected and the stem.zip file was downloaded to the Desktop.
    • The file was unzipped creating a folder called stem.
    • Inside the folder, the stem.jar was double-clicked to launch the STEM program.
  3. Running STEM
    1. In section 1 (Expression Data Info) of the main STEM interface window, the Browse... button was clicked and the text file created earlier was selected.
      • The radio button No normalization/add 0 was selected.
      • The box next to Spot IDs included in the data file was checked.
    2. In section 2 (Gene Info) of the main STEM interface window, the default selection for the three drop-down menu selections for Gene Annotation Source, Cross Reference Source, and Gene Location Source were left as "User provided".
    3. The "Browse..." button to the right of the "Gene Annotation File" item was clicked and the "gene_association.sgd.gz" was selected and opened.
    4. In section 3 (Options) of the main STEM interface window, the Clustering Method "STEM Clustering Method" was selected.
    5. In section 4 (Execute) the yellow Execute button was selected to run STEM.
  • The sanity check in step 22 of "Statistical Analysis Part 1: ANOVA" showed that 2528 out of the 6189 genes have a p-value of less than 0.05.
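The file-preparation step (1) above can be sketched as follows. The two rows are hypothetical; the real worksheet has one row per gene, and the output is the tab-delimited text file handed to STEM.

```python
# Keep only genes with B-H p < 0.05, keep only the AvgLogFC columns,
# and write a tab-delimited file for STEM. Rows are hypothetical.
rows = [
    # (SPOT, Gene Symbol, B-H p-value, AvgLogFC at 15/30/60/90/120 min)
    (1, "YGR159C", 0.0021, [1.8, 2.1, 1.5, 0.9, 0.4]),
    (2, "YBR160W", 0.4300, [0.1, -0.2, 0.0, 0.1, -0.1]),
]

header = ["SPOT", "Gene Symbol", "15m", "30m", "60m", "90m", "120m"]
with open("wt_stem.txt", "w") as out:
    out.write("\t".join(header) + "\n")
    for spot, gene, bh_p, logfc in rows:
        if bh_p < 0.05:  # the filter-and-delete step keeps only these rows
            out.write("\t".join(str(v) for v in [spot, gene] + logfc) + "\n")
```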

Data

Microarray Data .zip

Microarray Data .txt

Sanity Check Table

Stem results1.JPG

Conclusion

In conclusion, p-values were calculated for each gene in the wild-type strain, allowing statistical significance to be determined. Next, Bonferroni and Benjamini & Hochberg adjustments were applied to correct for the multiple-testing problem. Finally, the microarray data was prepared to be run in the STEM program.

Acknowledgements

I'd like to thank User:Knguye66, User:Jcowan4, and User:Cdomin12 for helping with this investigation. Except for what is noted above, this individual journal entry was completed by me and not copied from another source. Mavila9 (talk) 14:24, 20 October 2019 (PDT)

References

Week 8. Retrieved October 20, 2019, from https://xmlpipedb.cs.lmu.edu/biodb/fall2019/index.php/Week_8