How can I get HTML content written by JavaScript with Selenium/Python?



I'm doing web crawling with Selenium in Python, and I want to get an element (such as a link) that is written by JavaScript after Selenium simulates clicking on a fake link.

I tried get_html_source(), but it doesn't include the content written by JavaScript.

The code I've written:

    def test_comment_url_fetch(self):
        sel = self.selenium
        sel.open("/rmrb")
        url = sel.get_location()
        #print url
        if url.startswith('http://login'):
            sel.open("/rmrb")
        i = 1
        while True:
            try:
                if i == 1:
                    sel.click("//div[@class='wb_feed_type sw_fun s_line2']/div/div/div[3]/div/a[4]")
                    print "click"
                else:
                    xpath = "//div[@class='wb_feed_type sw_fun s_line2'][%d]/div/div/div[3]/div/a[4]" % i
                    sel.click(xpath)
                    print "click"
            except Exception, e:
                print e
                break
            i += 1
        html = sel.get_html_source()
        html_file = open("tmp\\foo.html", 'w')
        html_file.write(html.encode('utf-8'))
        html_file.close()

I use the while loop to click a series of fake links that trigger JS actions to show the content, and that content is what I want. But sel.get_html_source() didn't give me what I want.

Can anybody help? Thanks a lot.
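For context, content injected by JavaScript only appears in the page source after the browser has finished running the scripts, so reading the source too early returns the pre-JS page. A minimal sketch of that polling idea, with a hypothetical `wait_for` helper (not part of Selenium's API) standing in for a real wait; with Selenium, the callable would wrap something like an element-presence check before reading the source:

```python
import time

def wait_for(condition, timeout=10.0, interval=0.5):
    """Poll condition() until it returns a truthy value, or give up
    after `timeout` seconds. With Selenium, condition() would wrap a
    check that the JS-written element is now present on the page."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise RuntimeError("condition not met within %.1f seconds" % timeout)

# Usage with a plain callable standing in for the Selenium check:
value = wait_for(lambda: "ready", timeout=1.0, interval=0.1)
```

Only after the wait succeeds would a call like get_html_source() pick up the injected nodes.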

Instead of post-processing the fetched source, you can run JavaScript directly in the browser with execute_script. An example for a-tags:

    js_code = "return document.getElementsByTagName('a')"
    your_elements = sel.execute_script(js_code)

Edit: execute_script and get_eval are equivalent, except that get_eval performs an implicit return, while in execute_script the return has to be stated explicitly.
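As a complement: once the page source is back in Python, the links can also be extracted without any further browser calls. A minimal sketch using the standard-library HTML parser (Python 3 syntax, unlike the Python 2 code in the question; the sample HTML is made up):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect the href attribute of every <a> tag seen in the input."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

# Feed it the page source, e.g. what get_html_source() returned.
parser = LinkCollector()
parser.feed('<div><a href="/rmrb">feed</a> <a href="/login">login</a></div>')
print(parser.links)  # -> ['/rmrb', '/login']
```

This keeps the extraction logic out of the browser, which is easier to unit-test than JavaScript round-trips.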
