http - How to crawl a POST-dependent website with Scrapy


I am trying to scrape the insurance website www.ehealthinsurance.com. Its home page has a POST-dependent form that takes specific values and generates the next page. I am trying to pass those values, but I cannot see the HTML source for the tags I need. Any suggestions would be a great help.

Here is the Scrapy code:

from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from scrapy.http import FormRequest

class ehealthSpider(BaseSpider):
    name = "ehealth"
    allowed_domains = ["ehealthinsurance.com/"]
    start_urls = ["http://www.ehealthinsurance.com/individual-health-insurance"]

    def parse(self, response):
        yield FormRequest.from_response(response,
                                        formname='main',
                                        formdata={'census.zipCode': '48341',
                                                  'census.requestEffectiveDate': '06/01/2013',
                                                  'census.primary.gender': 'MALE',
                                                  'census.primary.month': '12',
                                                  'census.primary.day': '01',
                                                  'census.primary.year': '1971',
                                                  'census.primary.tobacco': 'No',
                                                  'census.primary.student': 'No'},
                                        callback=self.parseAnnonces)

    def parseAnnonces(self, response):
        hxs = HtmlXPathSelector(response)
        data = hxs.select('//div[@class="main-wrap"]').extract()
        #print encoding
        print data

This is the crawler output in the terminal:

2013-04-30 16:34:16+0530 [elyse] DEBUG: Crawled (200) <GET http://www.ehealthinsurance.com/individual-health-insurance> (referer: None)
2013-04-30 16:34:17+0530 [elyse] DEBUG: Filtered offsite request to 'www.ehealthinsurance.com': <POST http://www.ehealthinsurance.com/individual-health-insurance;jsessionid=F5A1123CE731FDDDC1A7A31CD46CC132.prfo23a>
2013-04-30 16:34:17+0530 [elyse] INFO: Closing spider (finished)
2013-04-30 16:34:17+0530 [elyse] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 257,
'downloader/request_count': 1,
'downloader/request_method_count/GET': 1,
'downloader/response_bytes': 32561,
'downloader/response_count': 1,
'downloader/response_status_count/200': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2013, 4, 30, 11, 4, 17, 22000),
'log_count/DEBUG': 8,
'log_count/INFO': 4,
'request_depth_max': 1,
'response_received_count': 1,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2013, 4, 30, 11, 4, 10, 494000)}

Can you help me get the desired data?

Best Answer

A little trick with an intermediate request does the job. The form name is also corrected (it is 'form-census', not 'main'), and the trailing slash is removed from allowed_domains; that slash is why your POST was dropped as an offsite request in the log above. A great Scrapy debugging tool is inspect_response(response):

from scrapy.selector import Selector
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.contrib.spiders import CrawlSpider, Rule
from healthinsspider.items import HealthinsspiderItem
from scrapy.shell import inspect_response
from scrapy.http import FormRequest
from scrapy.http import Request
import time

class EhealthspiderSpider(CrawlSpider):
    name = 'ehealthSpider'
    allowed_domains = ['ehealthinsurance.com']
    start_urls = ["http://www.ehealthinsurance.com/individual-health-insurance"]

    def parse(self, response):
        yield FormRequest.from_response(response,
                                        formname='form-census',
                                        formdata={'census.zipCode': '48341',
                                                  'census.requestEffectiveDate': '06/01/2013',
                                                  'census.primary.gender': 'MALE',
                                                  'census.primary.month': '12',
                                                  'census.primary.day': '01',
                                                  'census.primary.year': '1971',
                                                  'census.primary.tobacco': 'No',
                                                  'census.primary.student': 'No'},
                                        callback=self.InterRequest,
                                        dont_filter=True)

    def InterRequest(self, response):
        # sleep so that our request can be processed by the server, then go to the results
        time.sleep(10)
        return Request(url='https://www.ehealthinsurance.com/ehi/ifp/individual-family-health-insurance!goToScreen?referer=https%3A%2F%2Fwww.ehealthinsurance.com%2Fehi%2Fifp%2Findividual-health-insurance%3FredirectFormHTTP&sourcePage=&edit=false&ajax=false&screenName=best-sellers',
                       dont_filter=True, callback=self.parseAnnonces)

    def parseAnnonces(self, response):
        inspect_response(response)
        hxs = Selector(response)
        data = hxs.select('//div[@class="main-wrap"]').extract()
        #print encoding
        print data
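
A note on the debugging flow: inspect_response(response) pauses the crawl and drops you into an interactive Scrapy shell bound to the received response, so you can test XPath expressions before hardcoding them; exiting the shell resumes the spider. A rough sketch of such a session, assuming the spider name above (the exact objects the shell exposes vary by Scrapy version):

$ scrapy crawl ehealthSpider
...
>>> from scrapy.selector import Selector
>>> Selector(response).select('//div[@class="main-wrap"]').extract()
>>> # exit the shell (Ctrl-D, or Ctrl-Z on Windows) to let the spider continue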

P.S. Cookies should be enabled in settings.py: COOKIES_ENABLED=True
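
For completeness, a minimal settings.py sketch; the project name is assumed from the healthinsspider import above, and COOKIES_ENABLED is actually on by default in Scrapy, but the jsessionid in the log shows the session cookie must survive between the POST and the follow-up GET, so it is worth stating explicitly:

# settings.py (minimal sketch; project name assumed from the imports above)
BOT_NAME = 'healthinsspider'
SPIDER_MODULES = ['healthinsspider.spiders']
COOKIES_ENABLED = True  # keep the jsessionid cookie across requests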

Regarding "http - How to crawl a POST-dependent website with Scrapy", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/16299054/
