Ntesfaio Week 8

From LMU BioDB 2019
 
===Purpose===

The purpose of this week's lab is to continue from Week 7 by going through the steps of the data life cycle for a DNA microarray dataset.

===Methods/Results===

'''Background'''

This is the list of steps required to analyze DNA microarray data:

# Quantitate the fluorescence signal in each spot.
# Calculate the ratio of red/green fluorescence.
# Log2 transform the ratios.
#: Steps 1-3 have been performed for you by the GenePix Pro software (which runs the microarray scanner).
# Normalize the ratios on each microarray slide.
# Normalize the ratios for a set of slides in an experiment.
#: Steps 4-5 were performed for you using a script in R, a statistics package (see: Microarray Data Analysis Workflow).
#: You will perform the following steps:
# Perform statistical analysis on the ratios.
# Compare individual genes with known data.
#: Steps 6-7 are performed in Microsoft Excel.
# Apply pattern-finding algorithms (clustering).
# Map onto biological pathways.

NOTE: Before beginning any analysis, immediately change the filename (Save As...) so that it contains your initials to distinguish it from other students' work.

In the Excel spreadsheet, there is a worksheet labeled "Master_Sheet_<STRAIN>", where <STRAIN> is replaced by the strain designation: wt, dCIN5, dGLN3, or dHAP4. In this worksheet, each row contains the data for one gene (one spot on the microarray).

* The first column contains the "MasterIndex", which numbers all of the rows sequentially in the worksheet so that we can always use it to sort the genes into the order they were in when we started.
* The second column (labeled "ID") contains the Systematic Name (gene identifier) from the Saccharomyces Genome Database.
* The third column contains the Standard Name for each of the genes.
* Each subsequent column contains the log2 ratio of the red/green fluorescence from each microarray hybridized in the experiment (steps 1-5 above having been performed for you already), for each strain starting with wild type and proceeding in alphabetical order by strain deletion.

Each of the column headings from the data begins with the experiment name ("wt" for wild-type S. cerevisiae data, "dCIN5" for the Δcin5 data, etc.). "LogFC" stands for "Log2 Fold Change", which is the log2 red/green ratio. The timepoints are designated as "t" followed by a number in minutes, and replicates are numbered as "-0", "-1", "-2", etc. after the timepoint. The timepoints are t15, t30, and t60 (cold shock at 13°C), and t90 and t120 (cold shock at 13°C followed by 30 or 60 minutes of recovery at 30°C). We will use software called STEM for the clustering and mapping.

'''Statistical Analysis Part 1: ANOVA'''

The purpose of the within-strain ANOVA test is to determine if any genes had a gene expression change that was significantly different from zero at any timepoint.

# Create a new worksheet, naming it "(STRAIN)_ANOVA" as appropriate. For example, you might call yours "wt_ANOVA" or "dHAP4_ANOVA".
# Copy the first three columns containing the "MasterIndex", "ID", and "Standard Name" from the "Master_Sheet" worksheet for your strain and paste them into your new worksheet. Copy the columns containing the data for your strain and paste them into your new worksheet.
# At the top of the first column to the right of your data, create five column headers of the form (STRAIN)_AvgLogFC_(TIME), where (STRAIN) is your strain designation and (TIME) is 15, 30, etc.
# In the cell below the (STRAIN)_AvgLogFC_t15 header, type =AVERAGE(
# Then highlight all the data in row 2 associated with t15, press the closing paren key (shift 0), and press the "enter" key.
# This cell now contains the average of the log fold change data from the first gene at t=15 minutes.
# Click on this cell and position your cursor at the bottom right corner. You should see your cursor change to a thin black plus sign (not a chubby white one). When it does, double-click, and the formula will be copied to the entire column of 6188 other genes.
# Repeat steps (4) through (8) with the t30, t60, t90, and t120 data.
# Now in the first empty column to the right of the (STRAIN)_AvgLogFC_t120 calculation, create the column header (STRAIN)_ss_HO.
# In the first cell below this header, type =SUMSQ(
# Highlight all the LogFC data in row 2 (but not the AvgLogFC), press the closing paren key (shift 0), and press the "enter" key.
# In the next empty column to the right of (STRAIN)_ss_HO, create the column headers (STRAIN)_ss_(TIME) as in step (3).
# Make a note of how many data points you have at each time point for your strain. For most of the strains, it will be 4, but for dHAP4 t90 and t120 it will be 3, and for the wild type it will be 4 or 5. Count carefully. Also make a note of the total number of data points. Again, for most strains this will be 20, but for dHAP4 it will be 18, and for wt it should be 23 (double-check).
# In the first cell below the header (STRAIN)_ss_t15, type =SUMSQ(<range of cells for logFC_t15>)-COUNTA(<range of cells for logFC_t15>)*<AvgLogFC_t15>^2 and hit enter.
#: The COUNTA function counts the number of cells in the specified range that have data in them (i.e., it does not count cells with missing values).
#: The phrase <range of cells for logFC_t15> should be replaced by the data range associated with t15.
#: The phrase <AvgLogFC_t15> should be replaced by the cell in which you computed the AvgLogFC for t15, and the "^2" squares that value.
#: Upon completion of this single computation, use the Step (7) trick to copy the formula throughout the column.
# Repeat this computation for the t30 through t120 data points. Again, be sure to get the data for each time point, type the right number of data points, get the average from the appropriate cell for each time point, and copy the formula to the whole column for each computation.
# In the first column to the right of (STRAIN)_ss_t120, create the column header (STRAIN)_SS_full.
# In the first row below this header, type =SUM(<range of cells containing "ss" for each timepoint>) and hit enter.
# In the next two columns to the right, create the headers (STRAIN)_Fstat and (STRAIN)_p-value.
# Recall the total number of data points from step (13): call that total n.
# In the first cell of the (STRAIN)_Fstat column, type =((n-5)/5)*(<(STRAIN)_ss_HO>-<(STRAIN)_SS_full>)/<(STRAIN)_SS_full> and hit enter.
#: Don't actually type the n; instead use the number from step (13). Also note that "5" is the number of timepoints.
#: Replace the phrase <(STRAIN)_ss_HO> with the cell designation.
#: Replace the phrase <(STRAIN)_SS_full> with the cell designation.
#: Copy to the whole column.
# In the first cell below the (STRAIN)_p-value header, type =FDIST(<(STRAIN)_Fstat>,5,n-5), replacing the phrase <(STRAIN)_Fstat> with the cell designation and the "n" as in step (13) with the total number of data points. Copy to the whole column.

Before we move on to the next step, we will perform a quick sanity check to see if we did all of these computations correctly.

# Click on cell A1 and click on the Data tab. Select the Filter icon (it looks like a funnel). Little drop-down arrows should appear at the top of each column. This will enable us to filter the data according to criteria we set.
# Click on the drop-down arrow on your (STRAIN)_p-value column. Select "Number Filters". In the window that appears, set a criterion that will filter your data so that the p value has to be less than 0.05.
# Excel will now only display the rows that correspond to data meeting that filtering criterion. A number will appear in the lower left-hand corner of the window giving you the number of rows that meet that criterion. We will check our results with each other to make sure that the computations were performed correctly.

Be sure to undo any filters that you have applied before making any additional calculations.

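The spreadsheet arithmetic above can be summarized in a short Python sketch (purely illustrative, not part of the assignment; the replicate values in the test data are made up):

```python
# Sketch of the within-strain ANOVA computed by the spreadsheet formulas above.

def avg_logfc(replicates):
    """=AVERAGE over one timepoint's replicates. Excel's AVERAGE skips empty
    cells, so missing replicates are passed as None and ignored here."""
    vals = [v for v in replicates if v is not None]
    return sum(vals) / len(vals)

def anova_fstat(timepoints):
    """timepoints: dict mapping a timepoint label to its list of logFC replicates.
    Returns the F statistic from =((n-k)/k)*(ss_HO - SS_full)/SS_full."""
    all_vals = [v for reps in timepoints.values() for v in reps if v is not None]
    n = len(all_vals)                      # total number of data points
    k = len(timepoints)                    # number of timepoints (5 in this lab)
    ss_ho = sum(v * v for v in all_vals)   # =SUMSQ(all logFC cells)
    ss_full = 0.0
    for reps in timepoints.values():
        vals = [v for v in reps if v is not None]
        # =SUMSQ(range) - COUNTA(range) * AvgLogFC^2
        ss_full += sum(v * v for v in vals) - len(vals) * avg_logfc(reps) ** 2
    return ((n - k) / k) * (ss_ho - ss_full) / ss_full

# The p value then comes from the upper tail of the F distribution with
# (k, n-k) degrees of freedom, which is what =FDIST(Fstat, 5, n-5) returns.
```
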
'''Calculate the Bonferroni p value Correction'''

Note: Be sure to undo any filters that you have applied before continuing with the next steps.

# Now we will perform adjustments to the p value to correct for the multiple testing problem. Label the next two columns to the right with the same label, (STRAIN)_Bonferroni_p-value.
# Type the equation =<(STRAIN)_p-value>*6189. Upon completion of this single computation, use the Step (10) trick to copy the formula throughout the column.
# Replace any corrected p value that is greater than 1 by the number 1 by typing the following formula into the first cell below the second (STRAIN)_Bonferroni_p-value header: =IF((STRAIN)_Bonferroni_p-value>1,1,(STRAIN)_Bonferroni_p-value), where "(STRAIN)_Bonferroni_p-value" refers to the cell in which the first Bonferroni p value computation was made. Use the Step (10) trick to copy the formula throughout the column.

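The two Bonferroni columns amount to the following (a minimal sketch for cross-checking; 6189 is the number of genes, i.e. tests, on the array):

```python
# Bonferroni correction as done in the two spreadsheet columns: multiply each
# unadjusted p value by the number of tests, then cap the result at 1.

N_TESTS = 6189  # number of genes on the array

def bonferroni(p_values, n_tests=N_TESTS):
    return [min(p * n_tests, 1.0) for p in p_values]
```
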
'''Calculate the Benjamini & Hochberg p value Correction'''

# Insert a new worksheet named "(STRAIN)_ANOVA_B-H".
# Copy and paste the "MasterIndex", "ID", and "Standard Name" columns from your previous worksheet into the first three columns of the new worksheet.
# For the following, use Paste special > Paste values. Copy your unadjusted p values from your ANOVA worksheet and paste them into Column D.
# Select all of columns A, B, C, and D. Sort by ascending values on Column D: click the A-to-Z sort button on the toolbar, and in the window that appears, sort by Column D, smallest to largest.
# Type the header "Rank" in cell E1. We will create a series of numbers in ascending order from 1 to 6189 in this column. This is the p value rank, smallest to largest. Type "1" into cell E2 and "2" into cell E3. Select both cells E2 and E3. Double-click on the plus sign on the lower right-hand corner of your selection to fill the column with a series of numbers from 1 to 6189.
# Now you can calculate the Benjamini and Hochberg p value correction. Type (STRAIN)_B-H_p-value in cell F1. Type the following formula in cell F2: =(D2*6189)/E2 and press enter. Copy that equation to the entire column.
# Type "(STRAIN)_B-H_p-value" into cell G1.
# Type the following formula into cell G2: =IF(F2>1,1,F2) and press enter. Copy that equation to the entire column.
# Select columns A through G. Now sort them by your MasterIndex in Column A in ascending order.
# Copy column G and use Paste special > Paste values to paste it into the next column on the right of your ANOVA sheet.

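The worksheet's B-H procedure can be sketched as follows (illustrative only; note this mirrors the spreadsheet arithmetic exactly, whereas the full Benjamini-Hochberg adjustment additionally enforces monotonicity across ranks, which the worksheet formula does not):

```python
# Benjamini-Hochberg correction as the worksheet computes it: rank the p values
# ascending, take p * N / rank, cap at 1, and return the adjusted values in the
# original (MasterIndex) order.

def benjamini_hochberg(p_values, n_tests=None):
    n = n_tests if n_tests is not None else len(p_values)
    order = sorted(range(len(p_values)), key=lambda i: p_values[i])
    adjusted = [0.0] * len(p_values)
    for rank, i in enumerate(order, start=1):
        adjusted[i] = min(p_values[i] * n / rank, 1.0)
    return adjusted
```
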
'''Sanity Check: Number of genes significantly changed'''

We went to our dCIN5_ANOVA worksheet. We selected row 1 and selected the menu item Data > Filter > Autofilter. Little drop-down arrows appeared at the top of each column, which enabled us to filter the data according to criteria we set. We clicked on the drop-down arrow for the unadjusted p value and set a criterion that filtered the data so that the p value is less than 0.05. These results are also reported in the slide.

The strain that I will be analyzing is dHAP4. The filename is File:BIOL367 F19 microarray-data dHAP4Ntesfaio.xlsx. This is where the information on strain dHAP4 was found.

There are 6,189 genes total. The timepoints are t15 (4 replicates, 1-4), t30 (4 replicates, 1-4), t60 (4 replicates, 1-4), t90 (3 replicates, 2-4), and t120 (3 replicates, 2-4).

'''Week 8 Interim deadline for excel spreadsheet'''

[[Media:BIOL367 F19 microarray-data dHAP4NtesfaioNT.xlsx| Excel Spreadsheet]]

How many genes have p < 0.05, and what is the percentage (out of 6189)?
: 2479/6189, or 40%

How many genes have p < 0.01, and what is the percentage (out of 6189)?
: 1583/6189, or 26%

How many genes have p < 0.001, and what is the percentage (out of 6189)?
: 739/6189, or 12%

How many genes have p < 0.0001, and what is the percentage (out of 6189)?
: 280/6189, or 5%

We created a new worksheet in our workbook to record the answers to these questions so that we could write a formula in Excel to automatically calculate the percentages.

When we used a p value cut-off of p < 0.05, what we are saying is that we would have seen a gene expression change that deviates this far from zero by chance less than 5% of the time. We have just performed 6189 hypothesis tests. Another way to state what we are seeing with p < 0.05 is that we would expect to see a gene expression change for at least one of the timepoints by chance in about 5% of our tests, or 309 times. Since we have more than 309 genes that pass this cut-off, we know that some genes are significantly changed; however, we don't know which ones. To apply a more stringent criterion to our p values, we performed the Bonferroni and Benjamini & Hochberg corrections on these unadjusted p values. The Bonferroni correction is very stringent; the Benjamini-Hochberg correction is less stringent. To see this relationship, we filtered the data to determine the following:

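The per-cutoff tallies above can be automated with a COUNTIF-style count (a minimal sketch; the sample p values in the test are invented):

```python
# Count how many p values fall below a cutoff and report the percentage,
# as the tally formulas on the answers worksheet would.

def count_below(p_values, cutoff):
    hits = sum(1 for p in p_values if p < cutoff)
    return hits, 100.0 * hits / len(p_values)
```
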
 
How many genes are p < 0.05 for the Bonferroni-corrected p value, and what is the percentage (out of 6189)?
: 75/6189, or 1.2%

How many genes are p < 0.05 for the Benjamini and Hochberg-corrected p value, and what is the percentage (out of 6189)?
: 1735/6189, or 28%

In summary, the p value cut-off should not be thought of as some magical number at which data becomes "significant". Instead, it is a moveable confidence level. If we want to be very confident of our data, we use a small p value cut-off. If we are OK with being less confident about a gene expression change and want to include more genes in our analysis, we can use a larger p value cut-off.

We compared the numbers we get between the wild type strain and the other strains studied, organized as a table. We used the sample PowerPoint slide to see how our table should be formatted, and we uploaded our slide to the wiki. Since the wild type data is being analyzed by one of the groups in the class, it will be sufficient for this week to supply just the data for our strain. We will do the comparison with wild type at a later date.

'''Comparing results with known data''': the expression of the gene NSR1 (ID: YGR159C) is known to be induced by cold shock.

Find NSR1 in your dataset. What are its unadjusted, Bonferroni-corrected, and B-H-corrected p values?
: Unadjusted: 0.016
: Bonferroni-corrected: 101.3
: B-H-corrected: 0.056

What is its average log fold change at each of the timepoints in the experiment? Note that the average log fold change is what we called "(STRAIN)_AvgLogFC_(TIME)" in step 3 of the ANOVA analysis.
: t15: 2.7
: t30: 3.3
: t60: 3.5
: t90: -1.1
: t120: -1.8

Does NSR1 change expression due to cold shock in this experiment?
: Yes. The average log fold change is positive during cold shock (t15 through t60) and becomes negative starting at t90, during recovery.

My favorite gene is RAD53 (systematic name YPL153C).
: Unadjusted p value: 0.512
: Bonferroni-corrected: 3172
: B-H-corrected: 0.649

Its average log fold change at each timepoint:
: t15: -0.5476
: t30: -0.5768
: t60: -0.5764
: t90: 1.050
: t120: -0.1536

'''Clustering and GO Term Enrichment with STEM (part 2)'''

Prepare your microarray data file for loading into STEM:

# Insert a new worksheet into your Excel workbook, and name it "(STRAIN)_stem".
# Select all of the data from your "(STRAIN)_ANOVA" worksheet and Paste special > Paste values into your "(STRAIN)_stem" worksheet.
# Your leftmost column should have the column header "Master_Index"; rename this column to "SPOT". Column B should be named "ID"; rename this column to "Gene Symbol". Delete the column named "Standard_Name".
# Filter the data on the B-H corrected p value to be > 0.05 (that's greater than in this case).
# Once the data has been filtered, select all of the rows (except for your header row) and delete the rows by right-clicking and choosing "Delete Row" from the context menu. Undo the filter. This ensures that we will cluster only the genes with a "significant" change in expression and not the noise.
# Delete all of the data columns EXCEPT for the average log fold change columns for each timepoint (for example, wt_AvgLogFC_t15, etc.).
# Rename the data columns with just the time and units (for example, 15m, 30m, etc.).
# Save your work. Then use Save As to save this spreadsheet as Text (Tab-delimited) (*.txt). Click OK to the warnings and close your file.
#: Note that you should turn on file extensions if you have not already done so.

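The file-preparation steps above amount to the following sketch. The SPOT/Gene Symbol column names, the timepoint labels, and the 0.05 cutoff come from the instructions; the `rows` layout is a hypothetical in-memory stand-in for the worksheet rows:

```python
import csv

# Write the tab-delimited file STEM expects: keep only genes whose B-H
# corrected p value is below the cutoff, with columns SPOT, Gene Symbol, and
# the average log fold change at each timepoint renamed to time-plus-units.

TIME_COLUMNS = ["15m", "30m", "60m", "90m", "120m"]

def write_stem_input(rows, out_path, cutoff=0.05):
    with open(out_path, "w", newline="") as fh:
        writer = csv.writer(fh, delimiter="\t")
        writer.writerow(["SPOT", "Gene Symbol"] + TIME_COLUMNS)
        for row in rows:
            if row["bh_p"] < cutoff:  # keep only "significant" genes
                writer.writerow([row["SPOT"], row["Gene Symbol"]] +
                                [row[t] for t in TIME_COLUMNS])
```
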
Now download and extract the STEM software:

# Go to the STEM web site, click on the download link, and download the stem.zip file to your Desktop.
# Unzip the file. In Seaver 120, you can right-click on the file icon and select the menu item 7-zip > Extract Here. This will create a folder called stem.
# You now need to download the Gene Ontology and yeast GO annotations and place them in this folder: the files "gene_ontology.obo" and "gene_association.sgd.gz".
# Inside the folder, double-click on stem.jar to launch the STEM program.

'''Running STEM'''

# In section 1 (Expression Data Info) of the main STEM interface window, click on the Browse... button to navigate to and select your file.
#: Click on the radio button No normalization/add 0.
#: Check the box next to Spot IDs included in the data file.
# In section 2 (Gene Info), leave the default selection for the three drop-down menus (Gene Annotation Source, Cross Reference Source, and Gene Location Source) as "User provided".
#: Click the "Browse..." button to the right of the "Gene Annotation File" item. Browse to your "stem" folder, select the file "gene_association.sgd.gz", and click Open.
# In section 3 (Options), make sure that the Clustering Method says "STEM Clustering Method" and do not change the defaults for Maximum Number of Model Profiles or Maximum Unit Change in Model Profiles between Time Points.
# In section 4 (Execute), click on the yellow Execute button to run STEM.

If you get an error, there are some known reasons why STEM might not work. If you had #DIV/0! errors in your input file, they will cause problems. Re-open your file and open the Find/Replace dialog. Search for #DIV/0!, but don't put anything in the replace field. Click "Replace all" to remove the #DIV/0! errors. Then save your file and try again with STEM.

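The same cleanup can also be done outside Excel on the saved tab-delimited text (a minimal sketch):

```python
# Remove literal #DIV/0! tokens from the exported text, leaving the affected
# cells empty, exactly as the Find/Replace-with-nothing step does in Excel.

def strip_div0(text):
    return text.replace("#DIV/0!", "")
```
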
  
 
===Data & Files===

[[Media:BIOL367 F19 microarray-data dHAP4NtesfaioNT.txt-2.txt| Text File]]

[[Media:BIOL367 F19 microarray-data dHAP4NtesfaioNT.xlsx| Excel Spreadsheet]]

[[Media:BIOL367 F19 sample p-value slideNtesfaio.pptx| Powerpoint Slide]]

[[Media:NtesfaioStem profiles pic corrected.jpg|Stem Profile]]
  
 
===Conclusion===

The purpose of this week's assignment was to go through a DNA microarray dataset and work with filtering p values that are above or below a specific cut-off. This lab also incorporated STEM, which stands for Short Time-series Expression Miner: a Java program used for clustering, comparing, and visualizing short time series gene expression data from microarray experiments. Using Excel, I was able to filter the data points based on p values and compare the percentage of data points left each time.
  
 
===Acknowledgments===

My homework partners this week are Aby [[User:Ymesfin]] and David [[User:Dramir36]]. We sat together in class to go over the assignment.

Purpose and methods were copied over from [[Week 8]].
  
 
===References===

STEM: Short Time-series Expression Miner. Retrieved October 23, 2019, from http://www.cs.cmu.edu/~jernst/stem/

LMU BioDB 2019. (2019). Week 8. Retrieved October 23, 2019, from https://xmlpipedb.cs.lmu.edu/biodb/fall2019/index.php/Week_8

Excel Spreadsheet. Retrieved October 23, 2019, from [[Media:BIOL367 F19 microarray-data dHAP4NtesfaioNT.xlsx| Excel Spreadsheet]]

[[User:Ntesfaio|Ntesfaio]] ([[User talk:Ntesfaio|talk]]) 21:13, 23 October 2019 (PDT)

{{Template:Ntesfaio}}

Latest revision as of 15:19, 24 October 2019

Electronic Workbook

Purpose

The purpose of this week's lab is to continue off of week 7 by going through the steps of the data life cycle for a DNA microarray dataset

Methods/ Results

Background

This is a list of steps required to analyze DNA microarray data.

Quantitate the fluorescence signal in each spot

Calculate the ratio of red/green fluorescence

Log2 transform the ratios

Steps 1-3 have been performed for you by the GenePix Pro software (which runs the microarray scanner).

Normalize the ratios on each microarray slide

Normalize the ratios for a set of slides in an experiment

Steps 4-5 was performed for you using a script in R, a statistics package (see: Microarray Data

Analysis Workflow)

You will perform the following steps:

Perform statistical analysis on the ratios

Compare individual genes with known data

Steps 6-7 are performed in Microsoft Excel

Pattern finding algorithms (clustering)

Map onto biological pathways

NOTE: before beginning any analysis, immediately change the filename (Save As...) so that it contains your initials to distinguish it from other students' work. In the Excel spreadsheet, there is a worksheet labeled "Master_Sheet_<STRAIN>", where <STRAIN> is replaced by the strain designation, wt, dCIN5, dGLN3, or dHAP4. In this worksheet, each row contains the data for one gene (one spot on the microarray). The first column contains the "MasterIndex", which numbers all of the rows sequentially in the worksheet so that we can always use it to sort the genes into the order they were in when we started. The second column (labeled "ID") contains the Systematic Name (gene identifier) from the Saccharomyces Genome Database. The third column contains the Standard Name for each of the genes. Each subsequent column contains the log2 ratio of the red/green fluorescence from each microarray hybridized in the experiment (steps 1-5 above having been performed for you already), for each strain starting with wild type and proceeding in alphabetical order by strain deletion. Each of the column headings from the data begin with the experiment name ("wt" for wild type S. cerevisiae data, "dCIN5" for the Δcin5 data, etc.). "LogFC" stands for "Log2 Fold Change" which is the Log2 red/green ratio. The timepoints are designated as "t" followed by a number in minutes. Replicates are numbered as "-0", "-1", "-2", etc. after the timepoint. The timepoints are t15, t30, t60 (cold shock at 13°C) and t90 and t120 (cold shock at 13°C followed by 30 or 60 minutes of recovery at 30°C).We will use software called STEM for the clustering and mapping

Statistical Analysis Part 1: ANOVA The purpose of the within-stain ANOVA test is to determine if any genes had a gene expression change that was significantly different than zero at any timepoint.

Create a new worksheet, naming it either "(STRAIN)_ANOVA" as appropriate. For example, you might call yours "wt_ANOVA" or "dHAP4_ANOVA" Copy the first three columns containing the "MasterIndex", "ID", and "Standard Name" from the "Master_Sheet" worksheet for your strain and paste it into your new worksheet. Copy the columns containing the data for your strain and paste it into your new worksheet. At the top of the first column to the right of your data, create five column headers of the form (STRAIN)_AvgLogFC_(TIME) where STRAIN is your strain designation and (TIME) is 15, 30, etc. In the cell below the (STRAIN)_AvgLogFC_t15 header, type =AVERAGE( Then highlight all the data in row 2 associated with t15, press the closing paren key (shift 0),and press the "enter" key. This cell now contains the average of the log fold change data from the first gene at t=15 minutes. Click on this cell and position your cursor at the bottom right corner. You should see your cursor change to a thin black plus sign (not a chubby white one). When it does, double click, and the formula will magically be copied to the entire column of 6188 other genes. Repeat steps (4) through (8) with the t30, t60, t90, and the t120 data. Now in the first empty column to the right of the (STRAIN)_AvgLogFC_t120 calculation, create the column header (STRAIN)_ss_HO. In the first cell below this header, type =SUMSQ( Highlight all the LogFC data in row 2 (but not the AvgLogFC), press the closing paren key (shift 0),and press the "enter" key. In the next empty column to the right of (STRAIN)_ss_HO, create the column headers (STRAIN)_ss_(TIME) as in (3). Make a note of how many data points you have at each time point for your strain. For most of the strains, it will be 4, but for dHAP4 t90 or t120, it will be "3", and for the wild type it will be "4" or "5". Count carefully. Also, make a note of the total number of data points. 
Again, for most strains, this will be 20, but for example, dHAP4, this number will be 18, and for wt it should be 23 (double-check). In the first cell below the header (STRAIN)_ss_t15, type =SUMSQ(<range of cells for logFC_t15>)-COUNTA(<range of cells for logFC_t15>)*<AvgLogFC_t15>^2 and hit enter. The COUNTA function counts the number of cells in the specified range that have data in them (i.e., does not count cells with missing values). The phrase <range of cells for logFC_t15> should be replaced by the data range associated with t15. The phrase <AvgLogFC_t15> should be replaced by the cell number in which you computed the AvgLogFC for t15, and the "^2" squares that value. Upon completion of this single computation, use the Step (7) trick to copy the formula throughout the column. Repeat this computation for the t30 through t120 data points. Again, be sure to get the data for each time point, type the right number of data points, and get the average from the appropriate cell for each time point, and copy the formula to the whole column for each computation. In the first column to the right of (STRAIN)_ss_t120, create the column header (STRAIN)_SS_full. In the first row below this header, type =sum(<range of cells containing "ss" for each timepoint>) and hit enter. In the next two columns to the right, create the headers (STRAIN)_Fstat and (STRAIN)_p-value. Recall the number of data points from (13): call that total n. In the first cell of the (STRAIN)_Fstat column, type =((n-5)/5)*(<(STRAIN)_ss_HO>-<(STRAIN)_SS_full>)/<(STRAIN)_SS_full> and hit enter. Don't actually type the n but instead use the number from (13). Also note that "5" is the number of timepoints. Replace the phrase (STRAIN)_ss_HO with the cell designation. Replace the phrase <(STRAIN)_SS_full> with the cell designation. Copy to the whole column. 
In the first cell below the (STRAIN)_p-value header, type =FDIST(<(STRAIN)_Fstat>,5,n-5) replacing the phrase <(STRAIN)_Fstat> with the cell designation and the "n" as in (13) with the number of data points total. . Copy to the whole column. Before we move on to the next step, we will perform a quick sanity check to see if we did all of these computations correctly. Click on cell A1 and click on the Data tab. Select the Filter icon (looks like a funnel). Little drop-down arrows should appear at the top of each column. This will enable us to filter the data according to criteria we set. Click on the drop-down arrow on your (STRAIN)_p-value column. Select "Number Filters". In the window that appears, set a criterion that will filter your data so that the p value has to be less than 0.05. Excel will now only display the rows that correspond to data meeting that filtering criterion. A number will appear in the lower left hand corner of the window giving you the number of rows that meet that criterion. We will check our results with each other to make sure that the computations were performed correctly. Be sure to undo any filters that you have applied before making any additional calculations. Calculate the Bonferroni and p value Correction Note: Be sure to undo any filters that you have applied before continuing with the next steps.

Now we will perform adjustments to the p value to correct for the multiple testing problem. Label the next two columns to the right with the same label, (STRAIN)_Bonferroni_p-value. Type the equation =<(STRAIN)_p-value>*6189, Upon completion of this single computation, use the Step (10) trick to copy the formula throughout the column. Replace any corrected p value that is greater than 1 by the number 1 by typing the following formula into the first cell below the second (STRAIN)_Bonferroni_p-value header: =IF((STRAIN)_Bonferroni_p-value>1,1,(STRAIN)_Bonferroni_p-value), where "(STRAIN)_Bonferroni_p-value" refers to the cell in which the first Bonferroni p value computation was made. Use the Step (10) trick to copy the formula throughout the column. Calculate the Benjamini & Hochberg p value Correction Insert a new worksheet named "(STRAIN)_ANOVA_B-H". Copy and paste the "MasterIndex", "ID", and "Standard Name" columns from your previous worksheet into the first two columns of the new worksheet. For the following, use Paste special > Paste values. Copy your unadjusted p values from your ANOVA worksheet and paste it into Column D. Select all of columns A, B, C, and D. Sort by ascending values on Column D. Click the sort button from A to Z on the toolbar, in the window that appears, sort by column D, smallest to largest. Type the header "Rank" in cell E1. We will create a series of numbers in ascending order from 1 to 6189 in this column. This is the p value rank, smallest to largest. Type "1" into cell E2 and "2" into cell E3. Select both cells E2 and E3. Double-click on the plus sign on the lower right-hand corner of your selection to fill the column with a series of numbers from 1 to 6189. Now you can calculate the Benjamini and Hochberg p value correction. Type (STRAIN)_B-H_p-value in cell F1. Type the following formula in cell F2: =(D2*6189)/E2 and press enter. Copy that equation to the entire column. Type "STRAIN_B-H_p-value" into cell G1. 
Type the following formula into cell G2: =IF(F2>1,1,F2) and press enter. Copy that equation to the entire column. Select columns A through G. Now sort them by your MasterIndex in Column A in ascending order. Copy column G and use Paste special > Paste values to paste it into the next column on the right of your ANOVA sheet.

Sanity Check: Number of genes significantly changed We went to our dCIN5_ANOVA worksheet. We selected row 1 and selected the menu item Data > Filter > Autofilter. Little drop-down arrows appeared at the top of each column. This will enabled us to filter the data according to criteria we set. We click on the drop-down arrow for the unadjusted p value and set a criterion that filtered the data so that the p value is less than 0.05. These results are also reported in the slide.

The strain that I analyzed is dHAP4. The information on strain dHAP4 is in the file File:BIOL367 F19 microarray-data dHAP4Ntesfaio.xlsx.

There are 6,189 genes (rows) in total. The timepoints and replicates are: t15 (4 replicates, 1-4), t30 (4 replicates, 1-4), t60 (4 replicates, 1-4), t90 (3 replicates, 2-4), and t120 (3 replicates, 2-4).

Week 8 interim deadline for Excel spreadsheet

Excel Spreadsheet

How many genes have p < 0.05, and what is the percentage (out of 6189)?

2479/6189, or 40%

How many genes have p < 0.01, and what is the percentage (out of 6189)?

1583/6189, or 26%

How many genes have p < 0.001, and what is the percentage (out of 6189)?

739/6189, or 12%

How many genes have p < 0.0001, and what is the percentage (out of 6189)?

280/6189, or 5%

We created a new worksheet in our workbook to record the answers to these questions so that we could write a formula in Excel to automatically calculate the percentages. When we use a p value cut-off of p < 0.05, what we are saying is that we would have seen a gene expression change that deviates this far from zero by chance less than 5% of the time. We have just performed 6189 hypothesis tests. Another way to state what we are seeing with p < 0.05 is that we would expect to see a gene expression change for at least one of the timepoints by chance in about 5% of our tests, or 309 times. Since we have more than 309 genes that pass this cut-off, we know that some genes are significantly changed; however, we don't know which ones. To apply a more stringent criterion to our p values, we performed the Bonferroni and the Benjamini & Hochberg corrections on these unadjusted p values. The Bonferroni correction is very stringent; the Benjamini-Hochberg correction is less stringent. To see this relationship, we filtered the data to determine the following:

How many genes have p < 0.05 for the Bonferroni-corrected p value, and what is the percentage (out of 6189)?

75/6189, or 1.2%

How many genes have p < 0.05 for the Benjamini & Hochberg-corrected p value, and what is the percentage (out of 6189)?

1735/6189, or 28%
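The percentage bookkeeping above can be checked with a short Python sketch. The gene counts are the ones reported for this dHAP4 dataset; the `percent` helper is a hypothetical name, and 6189 is the total number of genes tested.

```python
# Sketch of the percentage calculations reported above.
TOTAL = 6189  # total genes (hypothesis tests) on the microarray

def percent(count, total=TOTAL):
    """Percentage of genes passing a cut-off, rounded to a whole number."""
    return round(100 * count / total)

# Counts of genes passing each unadjusted p value cut-off (from the dHAP4 data):
counts = {0.05: 2479, 0.01: 1583, 0.001: 739, 0.0001: 280}
percentages = {cutoff: percent(n) for cutoff, n in counts.items()}

# With 6189 tests, a 0.05 cut-off alone predicts about this many
# significant-looking genes by chance:
expected_false_positives = round(0.05 * TOTAL)
```

This is the arithmetic behind the "309 times" figure in the text: 5% of 6189 tests is about 309 chance hits.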

In summary, the p value cut-off should not be thought of as some magical number at which data become "significant". Instead, it is a moveable confidence level. If we want to be very confident of our data, we use a small p value cut-off. If we are OK with being less confident about a gene expression change and want to include more genes in our analysis, we can use a larger p value cut-off. We compared the numbers we got between the wild type strain and the other strains studied, organized as a table, using the sample PowerPoint slide to see how our table should be formatted, and we uploaded our slide to the wiki. Since the wild type data is being analyzed by one of the groups in the class, it is sufficient for this week to supply just the data for our strain; we will do the comparison with wild type at a later date.

Comparing results with known data: the expression of the gene NSR1 (ID: YGR159C) is known to be induced by cold shock.

Find NSR1 in your dataset. What are its unadjusted, Bonferroni-corrected, and B-H-corrected p values?

The unadjusted p value is 0.016.

The Bonferroni-corrected p value is 101.3 (which would be capped at 1 in the worksheet).

The B-H-corrected p value is 0.056.

What is its average log fold change at each of the timepoints in the experiment? Note that the average log fold change is what we called "(STRAIN)_AvgLogFC_(TIME)" in step 3 of the ANOVA analysis.

For t15 the average log fold change was 2.7.

For t30 the average log fold change was 3.3.

For t60 the average log fold change was 3.5.

For t90 the average log fold change was -1.1.

For t120 the average log fold change was -1.8.

Does NSR1 change expression due to cold shock in this experiment?

Yes. NSR1 was strongly induced at the earlier timepoints (positive average log fold change at t15 through t60), and its expression dropped below zero starting at t90.

My favorite gene is RAD53 (systematic name YPL153C).

The unadjusted p value was 0.512.

The Bonferroni-corrected p value is 3172 (which would be capped at 1 in the worksheet).

The B-H-corrected p value is 0.649.

For t15 the average log fold change was -0.5476.

For t30 the average log fold change was -0.5768.

For t60 the average log fold change was -0.5764.

For t90 the average log fold change was 1.050.

For t120 the average log fold change was -0.1536.

Clustering and GO Term Enrichment with STEM (part 2)

Prepare your microarray data file for loading into STEM:

1. Insert a new worksheet into your Excel workbook and name it "(STRAIN)_stem".
2. Select all of the data from your "(STRAIN)_ANOVA" worksheet and use Paste special > Paste values to paste it into your "(STRAIN)_stem" worksheet.
3. Your leftmost column should have the column header "Master_Index"; rename this column "SPOT". Column B should be named "ID"; rename this column "Gene Symbol". Delete the column named "Standard_Name".
4. Filter the data on the B-H corrected p value to be > 0.05 (that's greater than, in this case). Once the data has been filtered, select all of the rows (except for your header row) and delete them by right-clicking and choosing "Delete Row" from the context menu. Undo the filter. This ensures that we cluster only the genes with a "significant" change in expression, not the noise.
5. Delete all of the data columns EXCEPT the average log fold change columns for each timepoint (for example, wt_AvgLogFC_t15, etc.).
6. Rename the data columns with just the time and units (for example, 15m, 30m, etc.).
7. Save your work, then use Save As to save the spreadsheet as Text (Tab-delimited) (*.txt). Click OK through the warnings and close your file. Note that you should turn on file extensions if you have not already done so.

Now download and extract the STEM software:

1. Click here to go to the STEM web site, click on the download link, and download the stem.zip file to your Desktop.
2. Unzip the file. In Seaver 120, you can right-click on the file icon and select the menu item 7-zip > Extract Here. This will create a folder called stem.
3. You now need to download the Gene Ontology and yeast GO annotations and place them in this folder: download the files "gene_ontology.obo" and "gene_association.sgd.gz".
4. Inside the folder, double-click on stem.jar to launch the STEM program.
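The file-preparation steps above can be sketched with pandas instead of Excel. This is a minimal sketch under assumptions: the column names ("dHAP4_BH_pvalue", "dHAP4_AvgLogFC_t15", and so on) and the output filename are stand-ins for whatever your ANOVA worksheet actually uses.

```python
# Sketch of preparing the STEM input file: keep only significant genes,
# keep only the average log fold change columns, and write tab-delimited text.
import pandas as pd

def make_stem_input(df, strain="dHAP4", out_path="dHAP4_stem.txt"):
    # Keep only genes whose B-H corrected p value is <= 0.05
    # (the Excel steps filter to > 0.05 and delete those rows).
    kept = df[df[f"{strain}_BH_pvalue"] <= 0.05]

    # SPOT and Gene Symbol columns, renamed as described above.
    out = pd.DataFrame({"SPOT": kept["MasterIndex"],
                        "Gene Symbol": kept["ID"]})

    # Keep only the average log fold change columns, renamed "15m", "30m", ...
    for t in (15, 30, 60, 90, 120):
        out[f"{t}m"] = kept[f"{strain}_AvgLogFC_t{t}"]

    # STEM reads tab-delimited text.
    out.to_csv(out_path, sep="\t", index=False)
    return out
```

The same filtering and renaming can of course be done entirely in Excel, as the steps above describe; the sketch just makes the transformation explicit.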
Running STEM

1. In section 1 (Expression Data Info) of the main STEM interface window, click on the Browse... button to navigate to and select your file. Click on the radio button No normalization/add 0. Check the box next to Spot IDs included in the data file.
2. In section 2 (Gene Info), leave the default selection for the three drop-down menus (Gene Annotation Source, Cross Reference Source, and Gene Location Source) as "User provided". Click the "Browse..." button to the right of the "Gene Annotation File" item, browse to your "stem" folder, select the file "gene_association.sgd.gz", and click Open.
3. In section 3 (Options), make sure that the Clustering Method says "STEM Clustering Method" and do not change the defaults for Maximum Number of Model Profiles or Maximum Unit Change in Model Profiles between Time Points.
4. In section 4 (Execute), click on the yellow Execute button to run STEM.

If you get an error, there are some known reasons why STEM might not work. If you had #DIV/0! errors in your input file, they will cause problems. Re-open your file and open the Find/Replace dialog. Search for #DIV/0!, but leave the replace field empty. Click "Replace all" to remove the #DIV/0! errors, then save your file and try STEM again.
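The #DIV/0! cleanup can also be done outside Excel. This is a minimal sketch: it blanks out the error text everywhere in the tab-delimited input file, and the path argument is a placeholder for your own file.

```python
# Sketch of the #DIV/0! cleanup described above: remove the Excel error
# text from the tab-delimited STEM input file so STEM can parse it.
def scrub_div0(path):
    with open(path) as f:
        text = f.read()
    with open(path, "w") as f:
        f.write(text.replace("#DIV/0!", ""))
```

This mirrors the Find/Replace-with-empty-field step, just applied to the saved text file instead of the open spreadsheet.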

Data & Files

Text File

Excel Spreadsheet

PowerPoint Slide

Stem Profile

Conclusion

The purpose of this week's assignment was to work through a DNA microarray dataset, filtering out p values that are above or below specific cut-offs. This lab also incorporated STEM, which stands for Short Time-series Expression Miner: a Java program used for clustering, comparing, and visualizing short time-series gene expression data from microarray experiments. Using Excel, I was able to filter the data points based on p values and compare the percentage of data points remaining each time.

Acknowledgments

My homework partners this week were Aby (User:Ymesfin) and David (User:Dramir36). We sat together in class to go over the assignment.

The Purpose and Methods sections were copied over from the Week 8 assignment page.

References

STEM: Short Time-series Expression Miner. Retrieved October 23, 2019, from http://www.cs.cmu.edu/~jernst/stem/

LMU BioDB 2019. (2019). Week 8. Retrieved October 23, 2019, from https://xmlpipedb.cs.lmu.edu/biodb/fall2019/index.php/Week_8

Excel Spreadsheet. Retrieved on October 23, 2019 from Excel Spreadsheet

Ntesfaio (talk) 21:13, 23 October 2019 (PDT)
