QLanners Week 3

Code used to retrieve info

# POST the DNA sequence and form options to the ExPASy Translate tool and print the returned HTML
curl -X POST -d "pre_text=cgatggtacatggagtccagtagccgtagtgatgagatcgatgagctagc&output=Verbose&code=Standard&submit=Submit" http://web.expasy.org/cgi-bin/translate/dna_aa
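
The same request can also be redirected into a file so the returned HTML can be searched later instead of scrolled through in the terminal. This is only a sketch; the filename translate_output.html is an assumption for illustration, not part of the original work.

# Save the returned HTML to a file for later parsing (-s silences the progress meter)
curl -s -X POST -d "pre_text=cgatggtacatggagtccagtagccgtagtgatgagatcgatgagctagc&output=Verbose&code=Standard&submit=Submit" http://web.expasy.org/cgi-bin/translate/dna_aa > translate_output.html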

Links

  1. http://www.w3.org/TR/html4/loose.dtd : An HTML document type definition "which includes presentation attributes and elements that W3C expects to phase out as support for style sheets matures"
  2. http://web.expasy.org/favicon.ico : The icon (favicon) displayed on the page's browser tab
  3. /css/sib_css/sib.css : A stylesheet laying out how the page should be formatted
  4. /css/sib_css/sib_print.css : A stylesheet laying out how the page should be formatted for printing
  5. /css/base.css : Another stylesheet laying out the format of the page
  6. http://www.isb-sib.ch : Link to the Swiss Institute of Bioinformatics homepage
  7. http://www.expasy.org : Link to the ExPASy Bioinformatics Resource Portal home page
  8. http://web.expasy.org/translate : Link to the Translate tool page (without any input)
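
As a rough sketch of how links like these can be pulled out on the command line (assuming the returned HTML was saved to translate_output.html as above; the filename is illustrative):

# List every href and src attribute in the saved page
grep -o 'href="[^"]*"' translate_output.html
grep -o 'src="[^"]*"' translate_output.html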


Identifiers

  1. sib_top : The very top of the page
  2. sib_container : The container for the whole page returned
  3. sib_header_small : The small bar header at the top of the page
  4. sib_expasy_logo : The logo in the top left corner of the page
  5. resource_header : Not obvious, but possibly another formatting section for the header of the page
  6. sib_header_nav : The top right of the page with navigational buttons to home and contact
  7. sib_body : The portion of the page including the text and reading frames returned
  8. sib_footer : The footer at the bottom of the page
  9. sib_footer_content : The text/content included in the footer at the bottom of the page
  10. sib_footer_right : The text/content in the bottom right footer of the page
  11. sb_footer_gototop : The button going to the top of the page included in the footer
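
Similarly, a minimal sketch for listing the id attributes behind these identifiers (again assuming the saved file translate_output.html from above):

# List every id attribute in the saved page
grep -o 'id="[^"]*"' translate_output.html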

Laboratory Notebook

In the second part of this week's assignment, the pages Introduction to the Command Line, Dynamic Text Processing, and The Web from the Command Line were utilized to complete the tasks. In order to access the webpage, my partner and I followed the format on The Web from the Command Line page for how to post and submit data to a webpage and return the HTML. By looking at the developer tools for the http://web.expasy.org/translate/ page, we were able to determine the arguments that needed to be included in the command. We compared the HTML returned in the terminal with that in the returned page's developer tools to ensure that our code was correct.
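
One way to make that comparison without eyeballing it, assuming the developer-tools source was saved from the browser to a file (browser_source.html is a hypothetical name for illustration), is to diff the two files:

# Compare the curl output against the page source saved from the browser
diff translate_output.html browser_source.html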

Once the webpage command was perfected in the command terminal, we parsed through the links and identifiers in the returned HTML. By using the browser to look up each of the elements included in the HTML of the returned page, we came up with the lists above, along with a description of each element. For this section, my partner Dbashour and I worked together in person, splitting up the work in order to maximize efficiency.