QLanners Week 8
General Information and Metadata
- Strain analyzed: dASH1
- Filename: DASH1 analysis QL.zip
- Time points: 4 (15, 30, 60, and 90 min)
- Number of replicates: 15 total (4 at 15 min, 4 at 30 min, 4 at 60 min, 3 at 90 min)
- Number of NAs in data: 10,854
Electronic Journal
Part 1: Statistical Analysis
The purpose of the within-strain ANOVA test is to determine whether any genes had a gene expression change that was significantly different from zero at any timepoint.
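For orientation, here is the test in symbols. This uses generic notation rather than the spreadsheet's column names: write x_tj for the log fold change of a gene at timepoint t in replicate j, with k timepoints and n data points in total for the strain. The columns built in the steps below then compute

```latex
\mathrm{ss}_{H_0} = \sum_{t}\sum_{j} x_{tj}^{2}, \qquad
\mathrm{SS}_{\mathrm{full}} = \sum_{t}\sum_{j} \bigl(x_{tj} - \bar{x}_{t}\bigr)^{2}, \qquad
F = \frac{n-k}{k}\cdot\frac{\mathrm{ss}_{H_0} - \mathrm{SS}_{\mathrm{full}}}{\mathrm{SS}_{\mathrm{full}}}
```

and the p value returned by FDIST is the upper-tail probability that an F(k, n-k) random variable exceeds this F. A small p value therefore means that at least one timepoint's mean log fold change is unlikely to be zero.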
- Download the Excel spreadsheet "BIOL367_Fall2017_Dahlquist-microarray-data-master_20171017.xlsx" from the BIOL367_Fall2017 DropBox.
- Before beginning any analysis, immediately change the filename to the format dASH1_analysis_[Your initials].
- Create a new worksheet, naming it "dASH1_ANOVA".
- Copy the first three columns containing the "MasterIndex", "ID", and "Standard Name" from the "Master_Sheet" worksheet and paste them into your new worksheet. Copy the columns containing the data for dASH1 and paste them into your new worksheet to the right of the first three columns.
- Next you will replace cells that have "NA" in them (which indicates missing data) with an empty cell.
- Use the keyboard shortcut Control+F to open the "Find" dialog box and select the "Replace" tab.
- Type "NA" in the Search field and don't type anything in the "Replace" field.
- Click the button "Replace all". There should be 10,854 NA cells replaced, matching the NA count noted in the metadata above.
- At the top of the first column to the right of your data, create four column headers of the form dASH1_AvgLogFC_(TIME), where (TIME) is 15, 30, 60, and 90. The dASH1 sample data has no 120 min data, so there is no t120 column.
- In the cell below the dASH1_AvgLogFC_t15 header, type =AVERAGE(
- Then highlight all the data in row 2 associated with dASH1 and t15, press the closing paren key (shift 0), and press the "enter" key.
- This cell now contains the average of the log fold change data from the first gene at t=15 minutes.
- Click on this cell and position your cursor at the bottom right corner. You should see your cursor change to a thin black plus sign (not a chubby white one). When it does, double click, and the formula will magically be copied to the entire column of 6188 other genes.
- Repeat the AVERAGE-and-fill-down steps above with the t30, t60, and t90 data.
- Now in the first empty column to the right of the last (STRAIN)_AvgLogFC calculation (t90 for dASH1, since there is no t120), create the column header (STRAIN)_ss_HO.
- In the first cell below this header, type =SUMSQ(
- Highlight all the LogFC data in row 2 for your (STRAIN) (but not the AvgLogFC), press the closing paren key (shift 0), and press the "enter" key.
- In the next empty column to the right of (STRAIN)_ss_HO, create the column headers (STRAIN)_ss_(TIME), one for each timepoint, as you did for the AvgLogFC headers.
- Make a note of how many data points you have at each time point for your strain. For most of the strains it will be 4, but for dHAP4 at t90 or t120 it will be 3, and for the wild type it will be 4 or 5. Count carefully. Also make a note of the total number of data points. Again, for most strains this will be 20, but for dHAP4, for example, it will be 18, and for wt it should be 23 (double-check). For dASH1, the counts are 4, 4, 4, and 3 (at t90), for a total of 15.
- In the first cell below the header (STRAIN)_ss_t15, type =SUMSQ(<range of cells for logFC_t15>)-COUNTA(<range of cells for logFC_t15>)*<AvgLogFC_t15>^2 and hit enter.
- The COUNTA function counts the number of cells in the specified range that have data in them (i.e., it does not count cells with missing values).
- The phrase <range of cells for logFC_t15> should be replaced by the data range associated with t15.
- The phrase <AvgLogFC_t15> should be replaced by the cell in which you computed the AvgLogFC for t15, and the "^2" squares that value.
- Upon completion of this single computation, use the double-click fill-down trick to copy the formula throughout the column.
- Repeat this computation for the remaining timepoints (t30 through t90 for dASH1, which has no t120 data). Again, be sure to get the data for each time point, use the right number of data points, get the average from the appropriate cell for each time point, and copy the formula to the whole column for each computation.
- In the first column to the right of the last (STRAIN)_ss_(TIME) column (t90 for dASH1), create the column header (STRAIN)_SS_full.
- In the first row below this header, type =SUM(<range of cells containing "ss" for each timepoint>) and hit enter.
- In the next two columns to the right, create the headers (STRAIN)_Fstat and (STRAIN)_p-value.
- Recall the total number of data points you noted earlier (15 for dASH1): call that total n.
- In the first cell of the (STRAIN)_Fstat column, type =((n-5)/5)*(<(STRAIN)_ss_HO>-<(STRAIN)_SS_full>)/<(STRAIN)_SS_full> and hit enter.
- Don't actually type the n; instead use the total number of data points noted earlier. Also note that "5" is the number of timepoints in the full design; the dSWI4 strain has only 4 timepoints (it is missing t15), as does dASH1 (no t120 data), so those strains use "4" in place of "5".
- Replace the phrase (STRAIN)_ss_HO with the cell designation.
- Replace the phrase <(STRAIN)_SS_full> with the cell designation.
- Copy to the whole column.
- In the first cell below the (STRAIN)_p-value header, type =FDIST(<(STRAIN)_Fstat>,5,n-5), replacing the phrase <(STRAIN)_Fstat> with the cell designation and "n" with the total number of data points. (Again, the number of timepoints is "4" rather than "5" for strains such as dSWI4 and dASH1.) Copy to the whole column. A scripted sketch of this whole F-statistic and p-value calculation appears after this list.
- Before we move on to the next step, we will perform a quick sanity check to see if we did all of these computations correctly.
- Click on cell A1 and click on the Data tab. Select the Filter icon (looks like a funnel). Little drop-down arrows should appear at the top of each column. This will enable us to filter the data according to criteria we set.
- Click on the drop-down arrow on your (STRAIN)_p-value column. Select "Number Filters". In the window that appears, set a criterion that will filter your data so that the p value has to be less than 0.05.
- Excel will now only display the rows that correspond to data meeting that filtering criterion. A number will appear in the lower left hand corner of the window giving you the number of rows that meet that criterion. We will check our results with each other to make sure that the computations were performed correctly.
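As referenced in the list above, here is a minimal Python sketch of the same within-strain calculation for a single gene, useful as a cross-check of the spreadsheet arithmetic. The replicate values are made-up examples rather than numbers from the dASH1 file; only the shape (4 timepoints with 4, 4, 4, and 3 replicates) mirrors the dASH1 design.

```python
import numpy as np
from scipy import stats

# Hypothetical log fold changes for ONE gene, grouped by timepoint (minutes).
# Example values only; the layout mirrors dASH1: 4 timepoints with 4, 4, 4, 3 replicates.
data = {
    15: [1.2, 0.9, 1.4, 1.1],
    30: [2.0, 1.7, 2.2, 1.9],
    60: [1.1, 1.3, 0.8, 1.0],
    90: [-1.5, -1.8, -1.2],
}

k = len(data)                            # number of timepoints
n = sum(len(v) for v in data.values())   # total number of data points

# ss_HO: SUMSQ over every replicate at every timepoint (squared deviations from zero).
ss_HO = sum(x ** 2 for reps in data.values() for x in reps)

# SS_full: sum over timepoints of SUMSQ(range) - COUNTA(range)*AvgLogFC^2, which
# equals the sum of squared deviations from each timepoint's own mean.
SS_full = 0.0
for reps in data.values():
    reps = np.asarray(reps, dtype=float)
    SS_full += np.sum(reps ** 2) - len(reps) * reps.mean() ** 2

# F statistic and p value, mirroring the =((n-5)/5)*... and =FDIST(F,5,n-5) cells
# (with 5 replaced by k, the strain's actual number of timepoints).
F = ((n - k) / k) * (ss_HO - SS_full) / SS_full
p = stats.f.sf(F, k, n - k)              # upper-tail F probability, like Excel's FDIST

print(f"n = {n}, k = {k}, F = {F:.3f}, p = {p:.4g}")
```

Running a few genes through a script like this and comparing them against the corresponding spreadsheet rows is a quick way to catch a mis-dragged formula.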
Calculate the Bonferroni p value Correction
- Now we will perform adjustments to the p value to correct for the multiple testing problem. Label the next two columns to the right with the same label, (STRAIN)_Bonferroni_p-value.
- Type the equation =<(STRAIN)_p-value>*6189. Upon completion of this single computation, use the double-click fill-down trick to copy the formula throughout the column.
- Replace any corrected p value that is greater than 1 with the number 1 by typing the following formula into the first cell below the second (STRAIN)_Bonferroni_p-value header: =IF(STRAIN_Bonferroni_p-value>1,1,STRAIN_Bonferroni_p-value), where "STRAIN_Bonferroni_p-value" refers to the cell in which the first Bonferroni p value computation was made. Use the fill-down trick to copy the formula throughout the column. (A small scripted sketch of this correction appears after this list.)
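As noted in the list above, the Bonferroni step is easy to cross-check in a couple of lines. This is only a sketch with made-up p values; 6189 is the number of genes tested in this dataset.

```python
import numpy as np

p_values = np.array([2.92e-06, 4.3e-04, 0.03, 0.785])  # hypothetical unadjusted p values
n_genes = 6189                                          # number of hypothesis tests

# Bonferroni: multiply each p value by the number of tests, then cap at 1
# (the =IF(...>1,1,...) step in the spreadsheet).
bonferroni = np.minimum(p_values * n_genes, 1.0)
print(bonferroni)
```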
Calculate the Benjamini & Hochberg p value Correction
- Insert a new worksheet named "(STRAIN)_ANOVA_B-H".
- Copy and paste the "MasterIndex", "ID", and "Standard Name" columns from your previous worksheet into the first three columns of the new worksheet.
- For the following, use Paste special > Paste values. Copy your unadjusted p values from your ANOVA worksheet and paste them into Column D.
- Select all of columns A, B, C, and D. Sort by ascending values on Column D: click the A-to-Z sort button on the toolbar and, in the window that appears, sort by Column D, smallest to largest.
- Type the header "Rank" in cell E1. We will create a series of numbers in ascending order from 1 to 6189 in this column. This is the p value rank, smallest to largest. Type "1" into cell E2 and "2" into cell E3. Select both cells E2 and E3. Double-click on the plus sign on the lower right-hand corner of your selection to fill the column with a series of numbers from 1 to 6189.
- Now you can calculate the Benjamini and Hochberg p value correction. Type (STRAIN)_B-H_p-value in cell F1. Type the following formula in cell F2: =(D2*6189)/E2 and press enter. Copy that equation to the entire column. (A scripted sketch of this rank-based correction appears after this list.)
- Type "(STRAIN)_B-H_p-value" into cell G1.
- Type the following formula into cell G2: =IF(F2>1,1,F2) and press enter. Copy that equation to the entire column.
- Select columns A through G. Now sort them by your MasterIndex in Column A in ascending order.
- Copy column G and use Paste special > Paste values to paste it into the next column on the right of your ANOVA sheet.
- Zip and upload the .xlsx file that you have just created to the wiki.
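As referenced in the list above, the rank-based Benjamini and Hochberg arithmetic can also be sketched in a few lines. The p values here are hypothetical, and the steps mirror the sort, Rank, (p x 6189)/rank, and cap-at-1 columns. Note that library implementations (for example, statsmodels' multipletests with method="fdr_bh") additionally enforce monotonicity with a running minimum over the sorted values, a refinement this spreadsheet recipe omits.

```python
import numpy as np

p_values = np.array([0.03, 2.92e-06, 0.785, 4.3e-04])  # hypothetical unadjusted p values
n = len(p_values)   # the protocol uses all 6189 genes; this is a tiny example

order = np.argsort(p_values)          # sort ascending, as in Column D
ranks = np.empty(n, dtype=int)
ranks[order] = np.arange(1, n + 1)    # rank 1 = smallest p, as in the Rank column

# (p * number of genes) / rank, capped at 1 (the =IF(F2>1,1,F2) step).
bh = np.minimum(p_values * n / ranks, 1.0)
print(bh)
```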
Sanity Check: Number of genes significantly changed
Before we move on to further analysis of the data, we want to perform a more extensive sanity check to make sure that we performed our data analysis correctly. We are going to find out the number of genes that are significantly changed at various p value cut-offs.
- Go to your (STRAIN)_ANOVA worksheet.
- Select row 1 (the row with your column headers) and select the menu item Data > Filter > Autofilter (The funnel icon on the Data tab). Little drop-down arrows should appear at the top of each column. This will enable us to filter the data according to criteria we set.
- Click on the drop-down arrow for the unadjusted p value. Set a criterion that will filter your data so that the p value has to be less than 0.05.
- How many genes have p < 0.05, and what is the percentage (out of 6189)?
- How many genes have p < 0.01, and what is the percentage (out of 6189)?
- How many genes have p < 0.001, and what is the percentage (out of 6189)?
- How many genes have p < 0.0001, and what is the percentage (out of 6189)?
- When we use a p value cut-off of p < 0.05, what we are saying is that we would have seen a gene expression change that deviates this far from zero by chance less than 5% of the time.
- We have just performed 6189 hypothesis tests. Another way to state what we are seeing with p < 0.05 is that we would expect to see a gene expression change of this size for at least one of the timepoints by chance in about 5% of our tests, or about 309 times. Since we have more than 309 genes that pass this cut-off, we know that some genes are significantly changed. However, we don't know which ones. To apply a more stringent criterion to our p values, we performed the Bonferroni and Benjamini and Hochberg corrections on these unadjusted p values. The Bonferroni correction is very stringent; the Benjamini-Hochberg correction is less stringent. To see this relationship, filter your data to determine the following (a small scripted cross-check of these counts appears after this list):
- How many genes have p < 0.05 for the Bonferroni-corrected p value, and what is the percentage (out of 6189)?
- How many genes have p < 0.05 for the Benjamini and Hochberg-corrected p value, and what is the percentage (out of 6189)?
- In summary, the p value cut-off should not be thought of as some magical number at which data becomes "significant". Instead, it is a moveable confidence level. If we want to be very confident of our data, use a small p value cut-off. If we are OK with being less confident about a gene expression change and want to include more genes in our analysis, we can use a larger p value cut-off.
- We will compare the numbers we get between the wild type strain and the other strains studied, organized as a table. Use this sample PowerPoint slide to see how your table should be formatted. Upload your slide to the wiki.
- Note that since the wild type data is being analyzed by one of the groups in the class, it will be sufficient for this week to supply just the data for your strain. We will do the comparison with wild type at a later date.
- Comparing results with known data: the expression of the gene NSR1 (ID: YGR159C) is known to be induced by cold shock. Find NSR1 in your dataset. What are its unadjusted, Bonferroni-corrected, and B-H-corrected p values? What is its average Log fold change at each of the timepoints in the experiment? Note that the average Log fold change is what we called (STRAIN)_AvgLogFC_(TIME) in the ANOVA analysis above. Does NSR1 change expression due to cold shock in this experiment?
- For fun, find "your favorite gene" (from your web page) in the dataset. What are its unadjusted, Bonferroni-corrected, and B-H-corrected p values? What is its average Log fold change at each of the timepoints in the experiment? Does your favorite gene change expression due to cold shock in this experiment?
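For the filter counts above, a short script can reproduce what the Excel filters report. The CSV filename and column name below are hypothetical placeholders for however you export the ANOVA sheet; the same pattern applies to the Bonferroni- and B-H-corrected columns.

```python
import pandas as pd

# Hypothetical export of the (STRAIN)_ANOVA worksheet; adjust the names to your file.
df = pd.read_csv("dASH1_ANOVA.csv")
pcol = "dASH1_p-value"   # assumed header for the unadjusted p value column

for cutoff in (0.05, 0.01, 0.001, 0.0001):
    count = int((df[pcol] < cutoff).sum())
    print(f"p < {cutoff}: {count} genes ({100 * count / len(df):.2f}%)")
```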
Summary Paragraph
- Write a summary paragraph that gives the conclusions from this week's analysis.
Unadjusted p value
- Less than 0.05: 1630 genes (26.34%)
- Less than 0.01: 880 genes (14.22%)
- Less than 0.001: 356 genes (5.75%)
- Less than 0.0001: 142 genes (2.29%)

Bonferroni-corrected p value
- Less than 0.05: 53 genes (0.856%)

Benjamini and Hochberg-corrected p value
- Less than 0.05: 730 genes (11.80%)
NSR1 gene
- p values: unadjusted = 2.92024E-06; Bonferroni-corrected = 0.018073394; Benjamini and Hochberg-corrected = 0.000430319
- Average Log fold changes: 1.886696138 (15 min), 2.611946831 (30 min), 1.394879511 (60 min), -2.673801047 (90 min)
The NSR1 gene clearly changes expression in response to cold shock in this experiment. This is apparent from the large average Log fold changes at every timepoint (a Log fold change of 0 would mean no change in expression, so large non-zero values indicate large changes in expression), and the very small p values indicate that these expression changes are unlikely to be due to chance alone.
ADA2 gene
- p values: unadjusted = 0.78502135; Bonferroni-corrected = 1; Benjamini and Hochberg-corrected = 0.945600843
- Average Log fold changes: -0.021259798 (15 min), -0.172685741 (30 min), -0.622498949 (60 min), -0.347491051 (90 min)
The ADA2 gene did not show a significant change in expression in response to cold shock in this experiment. This is evident from its relatively small average Log fold changes and its large, non-significant p values.

Media:DASH1 analysis QL.zip