Eyoung20 journal week 8
==Purpose==
The objective of this assignment is to gain comfort with statistically analyzing large sets of data and to practice keeping an up-to-date laboratory research notebook. Another objective is to become comfortable with converting data so that they can be run through various software packages and databases.

The data used in this exercise is publicly available at the NCBI GEO database in [https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE83656 record GSE83656].

==Methods==

=== Experimental Design and Getting Ready ===
This group includes [[User:Imacarae|Ivy]], [[User:Dmadere|Delisa]], [[User:Msamdars|Mihir]], and [[User:eyoung20|Emma]].
==== Gene information ====
*Strain: dCIN5
*Filename: Master_Sheet_dCIN5
*Number of replicates: four replicates for each time point
*Time points:
**T15: 15 minutes
**T30: 30 minutes
**T60: 60 minutes
**T90: 90 minutes
**T120: 120 minutes

The data file was downloaded from the link with the group and gene assignments on the [[week 7]] assignment page. The file was renamed by adding the initials ERY, making the filename Master_Sheet_dCIN5_ERY, so that it could be distinguished from the files downloaded by the other group members.
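
As a quick cross-check of the replicate counts listed above, the master sheet can also be inspected programmatically. The sketch below is illustrative only: it assumes the workbook and worksheet names given above and that the log fold change column headers end in the timepoint and replicate number (for example, "t15-1"); the exact header format in the file may differ.

<pre>
# Illustrative sketch: count the LogFC replicate columns per timepoint.
# Assumes the workbook/worksheet names above and headers ending in, e.g., "t15-1".
import re
from collections import Counter

import pandas as pd

df = pd.read_excel("BIOL367_F19_microarray-data_dCIN5_ERY.xlsx",
                   sheet_name="Master_Sheet_dCIN5")

replicates = Counter()
for col in df.columns:
    match = re.search(r"t(\d+)-\d+$", str(col))   # timepoint minutes, then replicate number
    if match:
        replicates[int(match.group(1))] += 1

print(dict(replicates))   # expected: {15: 4, 30: 4, 60: 4, 90: 4, 120: 4}
</pre>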
  
=== Statistical Analysis Part 1: ANOVA ===

The purpose of the within-strain ANOVA test is to determine whether any genes in dCIN5 had a gene expression change that was significantly different from zero at '''''any''''' timepoint.

# A new worksheet named "dCIN5_ANOVA" was created.
# The first three columns containing the "MasterIndex", "ID", and "Standard Name" from the "Master_Sheet" worksheet in the file "BIOL367_F19_microarray-data_dCIN5_ERY" were copied and pasted into the new worksheet. The columns containing the data for dCIN5 were copied and pasted into the new worksheet as well.
# At the top of the first column to the right of the data, five column headers of the form dCIN5_AvgLogFC_(TIME) were created, where (TIME) is 15, 30, etc.
# In the cell below the dCIN5_AvgLogFC_t15 header, <code>=AVERAGE(</code> was entered.
# All the data in row 2 associated with t15 was then highlighted, the closing paren key (shift 0) was pressed, and then the "enter" key.
# The cell then contained the average of the log fold change data for the first gene at t=15 minutes.
# The cell was then clicked on and the cursor was positioned at the bottom right corner. When the cursor changed to a thin black plus sign, it was double-clicked, which copied the formula to the entire column of 6188 other genes.
# Steps (4) through (8) were repeated with the t30, t60, t90, and t120 data.
# In the first empty column to the right of the dCIN5_AvgLogFC_t120 calculation, the column header dCIN5_ss_HO was created.
# In the first cell below this header, <code>=SUMSQ(</code> was entered.
# All the LogFC data in row 2 (but not the AvgLogFC) was highlighted, the closing paren key (shift 0) was pressed, and then the "enter" key was pressed.
# In the next empty columns to the right of dCIN5_ss_HO, the column headers dCIN5_ss_(TIME), as in (3), were created.
# It was noted that there were 4 data points at each time point for dCIN5, giving a total of 20 data points.
# In the first cell below the header dCIN5_ss_t15, <code>=SUMSQ(<range of cells for logFC_t15>)-COUNTA(<range of cells for logFC_t15>)*<AvgLogFC_t15>^2</code> was entered.
#* The <code>COUNTA</code> function counts the number of cells in the specified range that have data in them (i.e., it does not count cells with missing values).
#* The phrase <range of cells for logFC_t15> was replaced by the data range associated with t15.
#* The phrase <AvgLogFC_t15> was replaced by the cell in which the AvgLogFC for t15 was computed, and the "^2" squared that value.
#* Upon completion of this single computation, the Step (7) trick was used to copy the formula throughout the column.
# This computation was repeated for the t30 through t120 data points, checking that the data range, the number of data points, and the average cell were correct for each time point. The formula was copied to the whole column for each computation.
# In the first column to the right of dCIN5_ss_t120, the column header dCIN5_SS_full was added.
# In the first row below this header, <code>=SUM(<range of cells containing "ss" for each timepoint>)</code> was entered.
# In the next two columns to the right, the headers dCIN5_Fstat and dCIN5_p-value were created.
# Recall the number of data points from (13): call that total n.
# In the first cell of the dCIN5_Fstat column, <code>=((n-5)/5)*(<dCIN5_ss_HO>-<dCIN5_SS_full>)/<dCIN5_SS_full></code> was entered.
#* Instead of "n", the number from (13) was used. Note that "5" is the number of timepoints.
#* The phrase <dCIN5_ss_HO> was replaced with the cell designation.
#* The phrase <dCIN5_SS_full> was replaced with the cell designation.
#* This was then copied to the whole column.
# In the first cell below the dCIN5_p-value header, <code>=FDIST(<dCIN5_Fstat>,5,n-5)</code> was entered, replacing the phrase <dCIN5_Fstat> with the cell designation and "n" (as in (13)) with the total number of data points. This was copied to the whole column.
# Before moving on to the next step, a quick sanity check was performed to confirm that the computations were done correctly (a programmatic cross-check of the same calculation is sketched after this list).
#* Cell A1 was clicked on, then the Data tab, and the Filter icon (the funnel) was selected. Little drop-down arrows appeared at the top of each column, allowing the data to be filtered according to set criteria.
#* The drop-down arrow on the dCIN5_p-value column was clicked and "Number Filters" was selected. In the window that appeared, a criterion was set that filtered the data so that the p value had to be less than 0.05.
#* Excel then displayed only the rows that met the filtering criterion, and the number of rows meeting it appeared in the lower left-hand corner of the window. The results were checked with a partner to make sure that the computations were performed correctly.
#* Applied filters were removed before the next steps were taken.
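
The spreadsheet columns above implement a per-gene F test of the null hypothesis that the mean log fold change is zero at every timepoint. As a cross-check only (not part of the required Excel workflow), the same quantities can be computed in a short script; the replicate values below are hypothetical and the variable names simply mirror the column headers created above.

<pre>
# Cross-check sketch of the within-strain ANOVA columns for a single gene.
# The replicate values are hypothetical; names mirror the spreadsheet headers.
import numpy as np
from scipy.stats import f

logfc = {                       # log2 fold change replicates per timepoint (minutes)
    15:  [0.9, 1.1, 1.3, 0.8],
    30:  [1.5, 1.2, 1.6, 1.4],
    60:  [0.7, 0.9, 0.6, 0.8],
    90:  [-0.2, 0.1, 0.0, -0.1],
    120: [-0.4, -0.3, -0.5, -0.2],
}

k = len(logfc)                                             # 5 timepoints
values = np.concatenate([np.asarray(v, float) for v in logfc.values()])
n = values.size                                            # 20 data points for dCIN5

avg_logfc = {t: np.mean(v) for t, v in logfc.items()}      # dCIN5_AvgLogFC_(TIME)
ss_ho = np.sum(values ** 2)                                # dCIN5_ss_HO (null model: mean 0)
ss_full = sum(np.sum(np.square(v)) - len(v) * avg_logfc[t] ** 2
              for t, v in logfc.items())                   # dCIN5_SS_full
fstat = ((n - k) / k) * (ss_ho - ss_full) / ss_full        # dCIN5_Fstat
p_value = f.sf(fstat, k, n - k)                            # same as Excel's FDIST(Fstat, 5, n-5)

print(f"F = {fstat:.3f}, p = {p_value:.4g}")
</pre>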
  
=== Calculate the Bonferroni and p value Correction ===

''Note: Be sure to undo any filters that you have applied before continuing with the next steps.''

# Adjustments to the p values were then performed to correct for the multiple testing problem. The next two columns to the right were both labeled dCIN5_Bonferroni_p-value.
# The equation <code>=<dCIN5_p-value>*6189</code> was entered in the first cell of the first of these columns, and the Step (10) trick was used to copy the formula throughout the column.
# Any corrected p value greater than 1 was replaced by the number 1 by typing the following formula into the first cell below the second dCIN5_Bonferroni_p-value header: <code>=IF(dCIN5_Bonferroni_p-value>1,1,dCIN5_Bonferroni_p-value)</code>, where "dCIN5_Bonferroni_p-value" refers to the cell in which the first Bonferroni p value computation was made. The Step (10) trick was used to copy the formula throughout the column.
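
For reference, the Bonferroni adjustment built in these two columns is just a multiplication by the number of tests, capped at 1. A minimal sketch (hypothetical p values, not part of the course workflow):

<pre>
# Minimal sketch of the Bonferroni correction used in the worksheet:
# multiply each unadjusted p value by the number of genes tested (6189)
# and cap the result at 1. The p values here are hypothetical.
import numpy as np

unadjusted_p = np.array([6.4e-8, 0.0212, 0.3, 0.9])     # hypothetical values
n_tests = 6189

bonferroni_p = np.minimum(unadjusted_p * n_tests, 1.0)  # mirrors =IF(p*6189>1,1,p*6189)
print(bonferroni_p)
</pre>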
  
=== Calculate the Benjamini & Hochberg p value Correction ===

# A new worksheet named "dCIN5_ANOVA_B-H" was created.
# The "MasterIndex", "ID", and "Standard Name" columns from the previous worksheet were copied and pasted into the first three columns of the new worksheet.
# For the next steps, Paste special > Paste values was used so that the values (not the formulas) were copied into the new sheet. The unadjusted p values from the ANOVA worksheet were copied and pasted into Column D.
# All of columns A, B, C, and D were selected and sorted by ascending values on Column D. This was done by clicking the sort button from A to Z on the toolbar and, in the window that appeared, sorting by column D, smallest to largest.
# The header "Rank" was typed into cell E1. A series of numbers in ascending order from 1 to 6189 was created in this column; this is the p value rank, smallest to largest. "1" was typed into cell E2 and "2" into cell E3. Both cells E2 and E3 were selected, and the plus sign on the lower right-hand corner of the selection was double-clicked to fill the column with a series of numbers from 1 to 6189.
# The Benjamini and Hochberg p value correction was then calculated. dCIN5_B-H_p-value was typed into cell F1. The formula <code>=(D2*6189)/E2</code> was typed into cell F2 and the enter key was pressed. The equation was copied to the entire column.
# "dCIN5_B-H_p-value" was typed into cell G1.
# The formula <code>=IF(F2>1,1,F2)</code> was typed into cell G2 and the enter key was pressed. The equation was copied to the entire column.
# Columns A through G were selected and then sorted by the MasterIndex in Column A in ascending order.
# Column G was copied, and Paste special > Paste values was used to paste it into the next available column on the dCIN5_ANOVA sheet.
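
The rank-and-scale computation in columns E through G above can be mirrored with a few lines of code. The sketch below reproduces the worksheet's formula (p × 6189 / rank, capped at 1) on hypothetical p values; it is a cross-check, not part of the required workflow.

<pre>
# Minimal sketch of the Benjamini & Hochberg columns built in the worksheet:
# rank the unadjusted p values (Column E), scale by 6189/rank (Column F),
# and cap at 1 (Column G). The p values here are hypothetical.
import numpy as np

unadjusted_p = np.array([6.4e-8, 0.0212, 0.3, 0.9])    # hypothetical values
n_tests = 6189

order = np.argsort(unadjusted_p)                       # ascending sort, as in Column D
rank = np.empty_like(order)
rank[order] = np.arange(1, unadjusted_p.size + 1)      # Column E ("Rank")

bh_p = np.minimum(unadjusted_p * n_tests / rank, 1.0)  # Columns F and G
print(bh_p)
</pre>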
  
=== Sanity Check: Number of genes significantly changed ===

A more extensive sanity check was performed to make sure that the data was analyzed correctly. The sanity check was to determine the number of genes that were significantly changed at various p value cut-offs.

* The dCIN5_ANOVA worksheet was opened.
* Row 1 was selected and the menu item Data > Filter > Autofilter was selected. Little drop-down arrows appeared at the top of each column, enabling the data to be filtered according to the criteria that were set.
* The drop-down arrow was clicked for the unadjusted p value, and a criterion was set that filtered the data so that the p value had to be less than 0.05 (the counts below can also be reproduced with the short script after this list).
** '''''How many genes had p < 0.05, and what is the percentage (out of 6189)?''''': 2290/6189 (37.00%)
** '''''How many genes had p < 0.01, and what is the percentage (out of 6189)?''''': 1380/6189 (22.29%)
** '''''How many genes had p < 0.001, and what is the percentage (out of 6189)?''''': 691/6189 (11.16%)
** '''''How many genes had p < 0.0001, and what is the percentage (out of 6189)?''''': 358/6189 (5.78%)
* This analysis performed 6189 hypothesis tests. The same filtering method was then used to determine the percentage of genes that remained significant after the Bonferroni correction and the Benjamini-Hochberg correction.
** '''''How many genes are p < 0.05 for the Bonferroni-corrected p value, and what is the percentage (out of 6189)?''''': 151/6189 (2.44%)
** '''''How many genes are p < 0.05 for the Benjamini and Hochberg-corrected p value, and what is the percentage (out of 6189)?''''': 1458/6189 (23.56%)
* In summary, the p value cut-off should not be thought of as some magical number at which data become "significant". Instead, it is a moveable confidence level. To be more confident in the results, a smaller p value cut-off is used; to accept less confidence about a gene expression change and include more genes in the analysis, a larger p value cut-off can be used.
* '''''Comparing results with known data:'''''
** '''''The expression of the gene ''NSR1'' (ID: YGR159C) is known to be induced by cold shock. Find ''NSR1'' in your dataset. What is its unadjusted, Bonferroni-corrected, and B-H-corrected p values?'''''
*** Unadjusted p-value: 6.376*10^-8
*** Bonferroni-corrected p-value: 0.0003
*** B-H corrected p-value: 2.192*10^-5
** '''''What is its average Log fold change at each of the timepoints in the experiment? Note that the average Log fold change is what we called "dCIN5_AvgLogFC_(TIME)" in step 3 of the ANOVA analysis.'''''
*** 18.5887
** '''''Does ''NSR1'' change expression due to cold shock in this experiment?'''''
*** Yes, ''NSR1'' does change expression due to cold shock in this experiment, as seen in its low unadjusted, Bonferroni-corrected, and B-H-corrected p values.
* '''''For fun, find "your favorite gene" (from your [[Week 3]] assignment) in the dataset.'''''
** '''''What is its unadjusted, Bonferroni-corrected, and B-H-corrected p values?'''''
*** Unadjusted p-value: 0.0212
*** Bonferroni-corrected p-value: 1
*** B-H corrected p-value: 0.0756
** '''''What is its average Log fold change at each of the timepoints in the experiment?'''''
*** 13.41
** '''''Does your favorite gene change expression due to cold shock in this experiment?'''''
*** Based on the unadjusted p-value the answer would be yes, because the p-value is less than 0.05; however, the Bonferroni-corrected and B-H-corrected p values are greater than 0.05, so most likely it does not change expression due to cold shock in this experiment.
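
The counts above came from Excel's Number Filters; the same tallies (and the ''NSR1'' lookup) could be reproduced with a short script such as the sketch below. It assumes the workbook and column headers created above; adjust the path, sheet name, or column names if yours differ.

<pre>
# Sketch: reproduce the sanity-check counts and the NSR1 lookup outside Excel.
# Assumes the workbook and column headers created above; adjust names as needed.
import pandas as pd

anova = pd.read_excel("BIOL367_F19_microarray-data_dCIN5_ERY.xlsx",
                      sheet_name="dCIN5_ANOVA")
total = len(anova)                       # should be 6189 genes

for cutoff in (0.05, 0.01, 0.001, 0.0001):
    n = (anova["dCIN5_p-value"] < cutoff).sum()
    print(f"unadjusted p < {cutoff}: {n}/{total} ({100 * n / total:.2f}%)")

for col in ("dCIN5_Bonferroni_p-value", "dCIN5_B-H_p-value"):
    n = (anova[col] < 0.05).sum()
    print(f"{col} < 0.05: {n}/{total} ({100 * n / total:.2f}%)")

# Look up a single gene by its systematic name, e.g. NSR1 (YGR159C).
print(anova.loc[anova["ID"] == "YGR159C",
                ["ID", "Standard Name", "dCIN5_p-value",
                 "dCIN5_Bonferroni_p-value", "dCIN5_B-H_p-value"]])
</pre>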
  
=== Clustering and GO Term Enrichment with stem (part 2) ===

# '''Prepared the microarray data file for loading into STEM''' (a scripted equivalent of this preparation is sketched after this list).
#* A new worksheet was inserted into the Excel workbook and named "dCIN5_stem".
#* All of the data from the "dCIN5_ANOVA" worksheet was selected, and Paste special > Paste values was used to paste it into the "dCIN5_stem" worksheet.
#** The leftmost column had the column header "Master_Index"; this column was renamed "SPOT". Column B was named "ID" and was renamed "Gene Symbol". The column named "Standard_Name" was deleted.
#** The data was filtered on the B-H corrected p value to be > 0.05.
#*** After the data was filtered, all of the rows (except for the header row) were selected and deleted by right-clicking and choosing "Delete Row" from the context menu. The filter was then removed. This ensured that only the genes with a "significant" change in expression were clustered, and not the noise.
#** All of the data columns '''''EXCEPT''''' for the Average Log Fold change columns for each timepoint (for example, dCIN5_AvgLogFC_t15, etc.) were deleted.
#** The data columns were renamed with just the time and units (for example, 15m, 30m, etc.).
#** The progress was saved. ''Save As'' was then used to save this spreadsheet as Text (Tab-delimited) (*.txt). The warnings were okayed and the file was closed.
# '''Downloaded and extracted the STEM software.''' [http://www.cs.cmu.edu/~jernst/stem/ Click here to go to the STEM web site].
#* The [http://www.sb.cs.cmu.edu/stem/stem.zip download link] was clicked and the file <code>stem.zip</code> was downloaded to the desktop.
#* The file was unzipped. In Seaver 120, the file icon was right-clicked and the menu item ''7-zip > Extract Here'' was selected.
#* This created a folder called <code>stem</code>.
#** The Gene Ontology and yeast GO annotation files were downloaded and placed in this folder.
#** The file [https://lmu.box.com/s/t8i5s1z1munrcfxzzs7nv7q2edsktxgl "gene_ontology.obo"] was downloaded.
#** The file [https://lmu.box.com/s/zlr1s8fjogfssa1wl59d5shyybtm1d49 "gene_association.sgd.gz"] was downloaded.
#* Inside the folder, <code>stem.jar</code> was double-clicked to launch the STEM program.
# '''Running STEM'''
## In section 1 (Expression Data Info) of the main STEM interface window, the ''Browse...'' button was used to navigate to and select the tab-delimited file.
##* The radio button ''No normalization/add 0'' was clicked.
##* The box next to ''Spot IDs included in the data file'' was checked.
## In section 2 (Gene Info) of the main STEM interface window, the default selection for the three drop-down menus for Gene Annotation Source, Cross Reference Source, and Gene Location Source was left as "User provided".
## The "Browse..." button to the right of the "Gene Annotation File" item was selected. The "stem" folder was opened, and the file "gene_association.sgd.gz" was selected and opened.
## In section 3 (Options) of the main STEM interface window, the Clustering Method was confirmed to say "STEM Clustering Method", and the defaults for Maximum Number of Model Profiles and Maximum Unit Change in Model Profiles between Time Points were not changed.
## In section 4 (Execute), the yellow Execute button was clicked to run STEM.
## A #DIV/0! error appeared on the screen.
## The Excel file was reopened and the Find/Replace dialog was opened. #DIV/0! was entered in the search field, the replace field was left empty, and "Replace all" was clicked, which removed all of the #DIV/0! errors. The file was re-saved and STEM was run again.
## STEM then attempted to run the data for 2 hours with no success.
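
For reference, the manual file preparation in step 1 above (filter on the B-H corrected p value, keep the average log fold change columns, rename them to the time-plus-units headers, and export a tab-delimited file) can also be scripted. The sketch below is illustrative only: it assumes the workbook and column names used above, writes to a placeholder filename, and blanks out any #DIV/0! error strings of the kind that stopped the first STEM run.

<pre>
# Illustrative sketch of building the STEM input file from the ANOVA worksheet.
# Assumes the workbook/column names used above; the output name is a placeholder.
import pandas as pd

anova = pd.read_excel("BIOL367_F19_microarray-data_dCIN5_ERY.xlsx",
                      sheet_name="dCIN5_ANOVA")

# Keep only genes with a B-H corrected p value <= 0.05 (the rows deleted by
# hand in Excel were those with B-H p > 0.05).
sig = anova[anova["dCIN5_B-H_p-value"] <= 0.05].copy()

# Keep the identifier columns plus the average log fold change columns,
# renamed to the "SPOT" / "Gene Symbol" / "15m" ... headers STEM expects.
rename = {"MasterIndex": "SPOT", "ID": "Gene Symbol"}
rename.update({f"dCIN5_AvgLogFC_t{t}": f"{t}m" for t in (15, 30, 60, 90, 120)})
stem_input = sig[list(rename)].rename(columns=rename)

# Blank out any Excel error strings (e.g. "#DIV/0!") so STEM does not choke on them.
stem_input = stem_input.replace("#DIV/0!", "")

stem_input.to_csv("dCIN5_stem_input.txt", sep="\t", index=False)
</pre>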
 
  
 
== Data and Files ==
[[media:BIOL367_F19_microarray-data_dCIN5_ERY.xlsx|dCIN5_ANOVA spreadsheet]]

[[media:BIOL367_F19_microarray-data_dCIN5_ERY.txt|dCIN5 STEM .txt file]]

[[media:BIOL367 F19 sample p-value slide ERY.pptx|dCIN5 p-value slide]]

[[media: Stem_ERY.zip|zipped genelist and GOlist]]

== Conclusion ==
From this week's analysis we can conclude that 2290 of the 6189 genes (37.00%) have an unadjusted p < 0.05, 1380 (22.29%) have p < 0.01, 691 (11.16%) have p < 0.001, and 358 (5.78%) have p < 0.0001. After correction for multiple testing, 151 genes (2.44%) have a Bonferroni-corrected p < 0.05 and 1458 genes (23.56%) have a Benjamini & Hochberg-corrected p < 0.05, reflecting the fact that the Bonferroni correction is the more stringent of the two.
  
 
==Acknowledgements==
I would like to acknowledge my homework partners [[User:Imacarae|Ivy]], [[User:Dmadere|Delisa]], and [[User:Msamdars|Mihir]] for their help with checking our work and answering questions about the procedure.

I would like to acknowledge that the methods were taken from the [[week 8]] assignment page and then adapted to reflect the lab methods actually performed.

I would like to acknowledge [[user:kdahlquist|Dr. Dahlquist]] for her instruction on the topic and the procedure.

"Except for what is noted above, this individual journal entry was completed by me and not copied from another source."

[[User:Eyoung20|Eyoung20]] ([[User talk:Eyoung20|talk]]) 00:08, 24 October 2019 (PDT)
 
==References==
LMU BioDB 2019. (2019). Week 8. Retrieved October 23, 2019, from https://xmlpipedb.cs.lmu.edu/biodb/fall2019/index.php/Week_8
  
 
{{Template:eyoung20}}
