python - Scraping: scraped links - now unable to scrape and dump html files into a folder


Using Python, Selenium, Sublime, and Firefox: I am scraping links from this website and would like to save the scraped pages (as html files) into a folder. However, I have been working for days trying to dump the body of these html files into a dropbox folder. The problems are 1) saving the html files and 2) saving them to a dropbox folder (or any folder).
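In other words, for each scraped link the goal is just this (a minimal sketch, assuming an existing Selenium browser and one already-scraped article_url; the url is a placeholder, and the path is the Dropbox folder mentioned above):

import io

article_url = 'http://www.usprwire.com/example.shtml'  # placeholder for one scraped link
browser.get(article_url)
with io.open('/Users/My/Dropbox/MainFile/articlesdata/article.html', 'w',
             encoding='utf-8') as f:
    f.write(browser.page_source)  # page_source is unicode; io.open handles the encoding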

I have successfully written code that performs a search and then scrapes links from a series of web pages. The following code works for that part.

from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
import re
import csv
import pickle
import signal
import time

def handler(signum, frame):
    raise Exception('Last Resort!')

signal.signal(signal.SIGALRM, handler)

def isReady(browser):
    return browser.execute_script("return document.readyState") == "complete"

def waitUntilReady(browser):
    if not isReady(browser):
        waitUntilReady(browser)

def waitUntilReadyBreak(browser_b, url, counter):
    try:
        signal.alarm(counter)
        waitUntilReady(browser_b)
        signal.alarm(0)
    except Exception, e:
        print e
        signal.alarm(0)
        browser_b.close()
        browser_b = webdriver.Firefox()
        browser_b.get(url)
        waitUntilReadyBreak(browser_b, url, counter)
    return browser_b

browser = webdriver.Firefox()
thisurl = 'http://www.usprwire.com/cgi-bin/news/search.cgi'
browser.get(thisurl)
waitUntilReady(browser)
numarticles = 0
elem = WebDriverWait(browser, 60).until(EC.presence_of_element_located((By.NAME, "query")))
elem = browser.find_element_by_name("query")
elem.send_keys('"test"')
form = browser.find_element_by_xpath("/html/body/center/table/tbody/tr/td/table/tbody/tr[3]/td/table/tbody/tr[3]/td[2]/table/tbody/tr[3]/td/table/tbody/tr[1]/td/font/input[2]").click()

nextpage = False
all_newproduct_links = []
npages = 200

for page in range(1, npages + 1):

    if page == 1:

        elems = browser.find_elements_by_tag_name('a')
        article_url = [elems.get_attribute("href")
                       for elems in browser.find_elements_by_class_name('category_links')]
        print page
        print article_url
        print "END_A_PAGE"

        elem = browser.find_element_by_link_text('[>>]').click()
        waitUntilReady(browser)

    if page >= 2 <= 200:
        # click the dots
        print page
        print page
        print "B4 LastLoop"
        elems = WebDriverWait(browser, 60).until(EC.presence_of_element_located((By.CLASS_NAME, "category_links")))
        elems = browser.find_elements_by_tag_name('a')
        article_url = [elems.get_attribute("href")
                       for elems in browser.find_elements_by_class_name('category_links')]
        print page
        print article_url
        print "END_C_PAGE"

    # This is the part that will not work :(
    for e in elems:
        numarticles = numarticles + 1
        numpages = 0
        numpages = numpages + 1000
        article_url = e.get_attribute('href')
        print 'waiting'
        bodyelem.send_keys(Keys.COMMAND + "2")
        browser.get(article_url)
        waitUntilReady(browser)
        fw = open('/Users/My/Dropbox/MainFile/articlesdata/' + str(page) + str(numpages) + str(numarticles) + '.html', 'w')
        fw.write(browser.page_source.encode('utf-8'))
        fw.close()
        bodyelem2 = browser.find_elements_by_xpath("//body")[0]
        bodyelem2.send_keys(Keys.COMMAND + "1")

The part above (for e in elems:) is meant to click through the pages and create an html file containing the body of each scraped page. I seem to be missing something fundamental.

Any guidance would be greatly appreciated.

Best Answer

I think you are overcomplicating it.

There is at least one problem inside this block:

elems = browser.find_elements_by_tag_name('a')
article_url = [elems.get_attribute("href")
               for elems in browser.find_elements_by_class_name('category_links')]

elems would contain the list of elements found by find_elements_by_tag_name(), but then you use the same elems variable name inside the list comprehension. As a result, when you iterate over elems later, you get an error, because elems now refers to a single element and not a list.
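To make the clobbering concrete, here is a minimal sketch (variable names taken from the question; in Python 2 a list comprehension's loop variable leaks into the enclosing scope, which is what rebinds elems):

elems = browser.find_elements_by_tag_name('a')  # elems is a list here
article_url = [elems.get_attribute("href")
               for elems in browser.find_elements_by_class_name('category_links')]
# elems now refers to the last 'category_links' element, so the later
# `for e in elems:` fails: a single WebElement is not iterable.

# A distinct comprehension variable leaves the original list intact:
links = browser.find_elements_by_class_name('category_links')
article_url = [link.get_attribute("href") for link in links]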

In any case, here is the approach I would take:

  • gather all of the article urls first
  • iterate over the urls one by one and save the HTML source, using the page url name as the filename. E.g. _Iran_Shipping_Report_Q4_2014_is_now_available_at_Fast_Market_Research_326303.shtml would be the article filename

The code:

from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC


def isReady(browser):
    return browser.execute_script("return document.readyState") == "complete"


def waitUntilReady(browser):
    if not isReady(browser):
        waitUntilReady(browser)


browser = webdriver.Firefox()
browser.get('http://www.usprwire.com/cgi-bin/news/search.cgi')

# make a search
query = WebDriverWait(browser, 60).until(EC.presence_of_element_located((By.NAME, "query")))
query.send_keys('"test"')
submit = browser.find_element_by_xpath("//input[@value='Search']")
submit.click()

# grab article urls
npages = 4
article_urls = []
for page in range(1, npages + 1):
    article_urls += [elm.get_attribute("href") for elm in browser.find_elements_by_class_name('category_links')]
    browser.find_element_by_link_text('[>>]').click()

# iterate over urls and save the HTML source
for url in article_urls:
    browser.get(url)
    waitUntilReady(browser)

    title = browser.current_url.split("/")[-1]
    with open('/Users/My/Dropbox/MainFile/articlesdata/' + title, 'w') as fw:
        fw.write(browser.page_source.encode('utf-8'))
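As a side note, this code is Python 2. Under Python 3, page_source is already text, so the save step would drop the encode() call and pass an encoding to open() instead; a minimal sketch of the same loop under that assumption:

import os

OUTDIR = '/Users/My/Dropbox/MainFile/articlesdata/'  # output folder from the question

for url in article_urls:
    browser.get(url)
    waitUntilReady(browser)

    # derive the filename from the last url segment, as above
    title = browser.current_url.split("/")[-1] or 'index.html'
    with open(os.path.join(OUTDIR, title), 'w', encoding='utf-8') as fw:
        fw.write(browser.page_source)  # already unicode, no encode() needed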

Regarding python - Scraping: scraped links - now unable to scrape and dump html files into a folder, a similar question was found on Stack Overflow: https://stackoverflow.com/questions/30365315/
