
Python Scrapy login redirect problem

Reposted. Author: 行者123. Updated: 2023-11-30 23:10:19

I am trying to scrape a website with Scrapy, but I cannot log in to my account through Scrapy. Here is the spider code:

from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from images.items import ImagesItem
from scrapy.http import Request
from scrapy.http import FormRequest
from loginform import fill_login_form
import requests
import os
import scrapy
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.shell import inspect_response

class ImageSpider(BaseSpider):
    counter = 0
    name = "images"
    start_urls = ['https://poshmark.com/login']
    f = open('poshmark.txt', 'wb')
    if not os.path.exists('./Image'):
        os.mkdir('./Image')

    def parse(self, response):
        return [FormRequest("https://www.poshmark.com/login",
                            formdata={
                                'login_form[username_email]': 'Oliver1234',
                                'login_form[password]': 'password'},
                            callback=self.real_parse)]

    def real_parse(self, response):
        print 'you are here'
        rq = []
        mainsites = response.xpath("//body[@class='two-col feed one-col']/div[@class='body-con']/div[@class='main-con clear-fix']/div[@class='right-col']/div[@id='tiles']/div[@class='listing-con shopping-tile masonry-brick']/a/@href").extract()
        for mainsite in mainsites:
            r = Request(mainsite, callback=self.get_image)
            rq.append(r)
        return rq

    def get_image(self, response):
        req = []
        sites = response.xpath("//body[@class='two-col small fixed']/div[@class='body-con']/div[@class='main-con']/div[@class='right-col']/div[@class='listing-wrapper']/div[@class='listing']/div[@class='img-con']/img/@src").extract()
        for site in sites:
            r = Request('http:' + site, callback=self.DownLload)
            req.append(r)
        return req

    def DownLload(self, response):
        str = response.url[0:-3]
        self.counter = self.counter + 1
        str = str.split('/')
        print '----------------Image Get----------------', self.counter, str[-1], 'jpg'
        imgfile = open('./Image/' + str[-1] + "jpg", 'wb')
        imgfile.write(response.body)
        imgfile.close()

I get the following output in the command window:

C:\Python27\Scripts\tutorial\images>scrapy crawl images
C:\Python27\Scripts\tutorial\images\images\spiders\images_spider.py:14: ScrapyDeprecationWarning: images.spiders.images_spider.ImageSpider inherits from deprecated class scrapy.spider.BaseSpider, please inherit from scrapy.spider.Spider. (warning only on first subclass, there may be others)
  class ImageSpider(BaseSpider):
2015-06-09 23:43:29-0400 [scrapy] INFO: Scrapy 0.24.6 started (bot: images)
2015-06-09 23:43:29-0400 [scrapy] INFO: Optional features available: ssl, http11
2015-06-09 23:43:29-0400 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'images.spiders', 'SPIDER_MODULES': ['images.spiders'], 'BOT_NAME': 'images'}
2015-06-09 23:43:29-0400 [scrapy] INFO: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2015-06-09 23:43:30-0400 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2015-06-09 23:43:30-0400 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2015-06-09 23:43:30-0400 [scrapy] INFO: Enabled item pipelines:
2015-06-09 23:43:30-0400 [images] INFO: Spider opened
2015-06-09 23:43:30-0400 [images] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2015-06-09 23:43:30-0400 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2015-06-09 23:43:30-0400 [scrapy] DEBUG: Web service listening on 127.0.0.1:6080
2015-06-09 23:43:33-0400 [images] DEBUG: Crawled (200) <GET https://poshmark.com/login> (referer: None)
2015-06-09 23:43:35-0400 [images] DEBUG: Redirecting (302) to <GET https://www.poshmark.com/feed> from <POST https://www.poshmark.com/login>
2015-06-09 23:43:35-0400 [images] DEBUG: Redirecting (301) to <GET https://poshmark.com/feed> from <GET https://www.poshmark.com/feed>
2015-06-09 23:43:36-0400 [images] DEBUG: Redirecting (302) to <GET https://poshmark.com/login?pmrd%5Burl%5D=%2Ffeed> from <GET https://poshmark.com/feed>
2015-06-09 23:43:36-0400 [images] DEBUG: Redirecting (301) to <GET https://poshmark.com/login> from <GET https://poshmark.com/login?pmrd%5Burl%5D=%2Ffeed>
2015-06-09 23:43:37-0400 [images] DEBUG: Crawled (200) <GET https://poshmark.com/login> (referer: https://poshmark.com/login)
you are here
2015-06-09 23:43:37-0400 [images] INFO: Closing spider (finished)
2015-06-09 23:43:37-0400 [images] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 4213,
'downloader/request_count': 6,
'downloader/request_method_count/GET': 5,
'downloader/request_method_count/POST': 1,
'downloader/response_bytes': 9535,
'downloader/response_count': 6,
'downloader/response_status_count/200': 2,
'downloader/response_status_count/301': 2,
'downloader/response_status_count/302': 2,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2015, 6, 10, 3, 43, 37, 213000),
'log_count/DEBUG': 8,
'log_count/INFO': 7,
'request_depth_max': 1,
'response_received_count': 2,
'scheduler/dequeued': 6,
'scheduler/dequeued/memory': 6,
'scheduler/enqueued': 6,
'scheduler/enqueued/memory': 6,
'start_time': datetime.datetime(2015, 6, 10, 3, 43, 30, 788000)}
2015-06-09 23:43:37-0400 [images] INFO: Spider closed (finished)

As you can see, at the beginning it redirects from ./login to ./feed, which looks like a successful login, but in the end it redirects back to ./login. Any ideas about what might be causing this?

Best Answer

When you log in to a website, it stores some kind of token in the user session (exactly what depends on the authentication method). The problem you are running into is that even though you authenticate correctly, your session data (the way the browser can tell the server that you are logged in and who you are) is not being kept.
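As a rough illustration of what "session data" means here: the server typically sets a session cookie on the login response, and every later request must send that cookie back (Scrapy's CookiesMiddleware normally does this for you within one spider run). A minimal Python 3 stdlib sketch, with a made-up cookie name and token value:

```python
import http.cookiejar
import urllib.request
from email.message import Message


class FakeResponse:
    """Stand-in for an HTTP response object carrying a Set-Cookie header,
    just enough for CookieJar.extract_cookies()."""
    def __init__(self, headers):
        self._headers = headers

    def info(self):
        return self._headers


# Pretend the login response set a session cookie (name/value are made up).
headers = Message()
headers['Set-Cookie'] = '_web_session=abc123; Path=/'

login_req = urllib.request.Request('https://poshmark.com/login')
jar = http.cookiejar.CookieJar()
jar.extract_cookies(FakeResponse(headers), login_req)  # remember the token

# A later request to the same host gets the cookie attached automatically.
feed_req = urllib.request.Request('https://poshmark.com/feed')
jar.add_cookie_header(feed_req)
print(feed_req.get_header('Cookie'))  # _web_session=abc123
```

If the site never sees that cookie on the follow-up request (or if the login POST was rejected so no cookie was ever set), it bounces you back to ./login exactly as in your log.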

The people in this thread seem to have successfully done what you are trying to do here:

Crawling with an authenticated session in Scrapy

And here:

Using Scrapy with authenticated (logged in) user session
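The approach in those threads boils down to using Scrapy's FormRequest.from_response on the downloaded login page, which pre-populates hidden form fields (such as a CSRF/authenticity token) before merging in your credentials, instead of hand-building the POST against a fixed URL. A rough Python 3 stdlib illustration of what that pre-population does; the field names and markup below are invented, the real login form may differ:

```python
from html.parser import HTMLParser


class HiddenFieldParser(HTMLParser):
    """Collects name/value pairs of <input type="hidden"> elements."""
    def __init__(self):
        super().__init__()
        self.fields = {}

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == 'input' and a.get('type') == 'hidden':
            self.fields[a.get('name')] = a.get('value', '')


# Hypothetical login-page markup standing in for the real response body.
login_html = '''
<form action="/login" method="post">
  <input type="hidden" name="authenticity_token" value="abc123"/>
  <input type="text" name="login_form[username_email]"/>
  <input type="password" name="login_form[password]"/>
</form>
'''

parser = HiddenFieldParser()
parser.feed(login_html)

# Start from the hidden fields, then merge in the credentials --
# this mirrors what FormRequest.from_response(response, formdata=...) does.
formdata = dict(parser.fields)
formdata['login_form[username_email]'] = 'Oliver1234'
formdata['login_form[password]'] = 'password'
print(sorted(formdata))
```

If the server validates a hidden token like this and your hand-built POST omits it, the login can appear to succeed (302 to ./feed) yet leave you with an unauthenticated session, matching the redirect chain in your log.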

Regarding the Python Scrapy login redirect problem, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/30746820/
