My items.py classes:
import scrapy
from scrapy.item import Item, Field
import json


class Attributes(scrapy.Item):
    description = Field()
    pages = Field()
    author = Field()


class Vendor(scrapy.Item):
    title = Field()
    order_url = Field()


class bookItem(scrapy.Item):
    title = Field()
    url = Field()
    marketprice = Field()
    images = Field()
    price = Field()
    attributes = Field()
    vendor = Field()
    time_scraped = Field()
My spider:
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from scrapy.item import Item
from scrapy.spider import BaseSpider
from scrapy import log
from scrapper.items import bookItem, Attributes, Vendor
import couchdb
import logging
import json
import time
from couchdb import Server


class libertySpider(CrawlSpider):

    couch = couchdb.Server()
    db = couch['python-tests']

    name = "libertybooks"
    allowed_domains = ["libertybooks.com"]
    unvisited_urls = []
    visited_urls = []
    start_urls = [
        "http://www.libertybooks.com"
    ]
    url = ["http://www.kaymu.pk"]
    rules = [Rule(SgmlLinkExtractor(), callback='parse_item', follow=True)]
    total = 0
    productpages = 0
    exceptionnum = 0

    def parse_item(self, response):
        if response.url.find("pid") != -1:
            with open("number.html", "w") as w:
                self.total = self.total + 1
                w.write(str(self.total) + "," + str(self.productpages))
            itm = bookItem()
            attrib = Attributes()
            ven = Vendor()
            images = []
            try:
                name = response.xpath('//span[@id="pagecontent_lblbookName"]/text()').extract()[0]
                name = name.encode('utf-8')
            except:
                name = "name not found"
            try:
                price = response.xpath('//span[@id="pagecontent_lblPrice"]/text()').extract()[0]
                price = price.encode('utf-8')
            except:
                price = -1
            try:
                marketprice = response.xpath('//span[@id="pagecontent_lblmarketprice"]/text()').extract()[0]
                marketprice = marketprice.encode('utf-8')
            except:
                marketprice = -1
            try:
                pages = response.xpath('//span[@id="pagecontent_spanpages"]/text()').extract()[0]
                pages = pages.encode('utf-8')
            except:
                pages = -1
            try:
                author = response.xpath('//span[@id="pagecontent_lblAuthor"]/text()').extract()[0]
                author = author.encode('utf-8')
            except:
                author = "author not found"
            try:
                description = response.xpath('//span[@id="pagecontent_lblbookdetail"]/text()').extract()[0]
                description = description.encode('utf-8')
            except:
                description = "des: not found"
            try:
                image = response.xpath('//img[@id="pagecontent_imgProduct"]/@src').extract()[0]
                image = image.encode('utf-8')
            except:
                image = "#"

            ven['title'] = 'libertybooks'
            ven['order_url'] = response.url

            itm['vendor'] = ven
            itm['time_scraped'] = time.ctime()
            itm['title'] = name
            itm['url'] = response.url
            itm['price'] = price
            itm['marketprice'] = marketprice
            itm['images'] = images
            attrib['pages'] = pages
            attrib['author'] = author
            attrib['description'] = description
            itm['attributes'] = attrib

            self.saveindb(itm)
            return itm

    def saveindb(self, obj):
        logging.debug(obj)
        self.db.save(obj)
Stack trace:
2014-12-09 13:57:37-0800 [libertybooks] ERROR: Spider error processing <GET http://www.libertybooks.com/bookdetail.aspx?pid=16532>
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/twisted/internet/base.py", line 824, in runUntilCurrent
call.func(*call.args, **call.kw)
File "/usr/lib/python2.7/dist-packages/twisted/internet/task.py", line 638, in _tick
taskObj._oneWorkUnit()
File "/usr/lib/python2.7/dist-packages/twisted/internet/task.py", line 484, in _oneWorkUnit
result = next(self._iterator)
File "/usr/local/lib/python2.7/dist-packages/scrapy/utils/defer.py", line 57, in <genexpr>
work = (callable(elem, *args, **named) for elem in iterable)
--- <exception caught here> ---
File "/usr/local/lib/python2.7/dist-packages/scrapy/utils/defer.py", line 96, in iter_errback
yield next(it)
File "/usr/local/lib/python2.7/dist-packages/scrapy/contrib/spidermiddleware/offsite.py", line 26, in process_spider_output
for x in result:
File "/usr/local/lib/python2.7/dist-packages/scrapy/contrib/spidermiddleware/referer.py", line 22, in <genexpr>
return (_set_referer(r) for r in result or ())
File "/usr/local/lib/python2.7/dist-packages/scrapy/contrib/spidermiddleware/urllength.py", line 33, in <genexpr>
return (r for r in result or () if _filter(r))
File "/usr/local/lib/python2.7/dist-packages/scrapy/contrib/spidermiddleware/depth.py", line 50, in <genexpr>
return (r for r in result or () if _filter(r))
File "/usr/local/lib/python2.7/dist-packages/scrapy/contrib/spiders/crawl.py", line 67, in _parse_response
cb_res = callback(response, **cb_kwargs) or ()
File "/home/asad/Desktop/scraper/scraper/spiders/liberty_spider.py", line 107, in parse_item
self.saveindb(itm)
File "/home/asad/Desktop/scraper/scraper/spiders/liberty_spider.py", line 112, in saveindb
self.db.save(obj)
File "/usr/local/lib/python2.7/dist-packages/couchdb/client.py", line 431, in save
_, _, data = func(body=doc, **options)
File "/usr/local/lib/python2.7/dist-packages/couchdb/http.py", line 514, in post_json
**params)
File "/usr/local/lib/python2.7/dist-packages/couchdb/http.py", line 533, in _request_json
headers=headers, **params)
File "/usr/local/lib/python2.7/dist-packages/couchdb/http.py", line 529, in _request
credentials=self.credentials)
File "/usr/local/lib/python2.7/dist-packages/couchdb/http.py", line 244, in request
body = json.encode(body).encode('utf-8')
File "/usr/local/lib/python2.7/dist-packages/couchdb/json.py", line 69, in encode
return _encode(obj)
File "/usr/local/lib/python2.7/dist-packages/couchdb/json.py", line 135, in <lambda>
dumps(obj, allow_nan=False, ensure_ascii=False)
File "/usr/lib/python2.7/json/__init__.py", line 250, in dumps
sort_keys=sort_keys, **kw).encode(obj)
File "/usr/lib/python2.7/json/encoder.py", line 207, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/usr/lib/python2.7/json/encoder.py", line 270, in iterencode
return _iterencode(o, 0)
File "/usr/lib/python2.7/json/encoder.py", line 184, in default
raise TypeError(repr(o) + " is not JSON serializable")
exceptions.TypeError: {'attributes': {'author': 'Tina Fey',
'description': "Once in a generation a woman comes along who changes everything. Tina Fey is not that woman, but she met that woman once and acted weird around her.\r\n\r\nBefore 30 Rock, Mean Girls and 'Sarah Palin', Tina Fey was just a young girl with a dream: a recurring stress dream that she was being chased through a local airport by her middle-school gym teacher.\r\n\r\nShe also had a dream that one day she would be a comedian on TV. She has seen both these dreams come true.\r\n\r\nAt last, Tina Fey's story can be told. From her youthful days as a vicious nerd to her tour of duty on Saturday Night Live; from her passionately halfhearted pursuit of physical beauty to her life as a mother eating things off the floor; from her one-sided college romance to her nearly fatal honeymoon - from the beginning of this paragraph to this final sentence.\r\n\r\nTina Fey reveals all, and proves what we've all suspected: you're no one until someone calls you bossy.",
'pages': '304 Pages'},
'images': [],
'marketprice': '1,095',
'price': '986',
'time_scraped': 'Tue Dec 9 13:57:37 2014',
'title': 'Bossypants',
'url': 'http://www.libertybooks.com/bookdetail.aspx?pid=16532',
'vendor': {'order_url': 'http://www.libertybooks.com/bookdetail.aspx?pid=16532',
'title': 'libertybooks'}} is not JSON serializable
I am a beginner with scrapy and couchdb. I also tried converting the item object to JSON with "json.dumps(itm, default=lambda o: o.__dict__, sort_keys=True, indent=4)", but got the same error. So please tell me: is there a way to make my classes JSON serializable so they can be stored in couchdb?
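For context, the root of that TypeError is that the nested Attributes and Vendor objects are scrapy Items themselves, and the standard library's json encoder only understands plain dicts, lists and scalars. A rough sketch of the problem (it reuses the items.py classes above; the field values are made up):

import json

from scrapy.item import Item
from scrapper.items import bookItem, Vendor

itm = bookItem(title='Bossypants', url='http://example.com')
itm['vendor'] = Vendor(title='libertybooks', order_url='http://example.com')

try:
    json.dumps(dict(itm))          # the top level converts, but the nested
except TypeError as e:             # Vendor item is still not JSON serializable
    print(e)

# Converting every nested Item by hand works, but gets tedious quickly:
plain = {k: dict(v) if isinstance(v, Item) else v for k, v in itm.items()}
print(json.dumps(plain))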
Best answer
Well, the short answer is just to use ScrapyJSONEncoder:
from scrapy.utils.serialize import ScrapyJSONEncoder

_encoder = ScrapyJSONEncoder()

...

    def saveindb(self, obj):
        logging.debug(obj)
        self.db.save(_encoder.encode(obj))
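One detail worth double-checking: ScrapyJSONEncoder.encode() returns a JSON string, while python-couchdb's Database.save() expects a dict-like document that it can attach _id/_rev to. If passing the encoded string gives trouble, a round-trip through json.loads should hand CouchDB a plain dict. A sketch in the same shape as the snippet above (untested, assuming the same _encoder):

import json

from scrapy.utils.serialize import ScrapyJSONEncoder

_encoder = ScrapyJSONEncoder()

...

    def saveindb(self, obj):
        logging.debug(obj)
        # round-trip through the encoder: nested Items become plain dicts
        doc = json.loads(_encoder.encode(obj))
        self.db.save(doc)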
The longer version is: if you want this spider to grow (if it isn't meant to be a one-off), you probably want to use a pipeline to store the items in CouchDB and keep a separation of concerns (scraping/crawling in the spider code, database storage in the pipeline code).
This may look like over-engineering at first, but it really helps once the project starts to grow, and it makes testing easier.
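For the record, a minimal CouchDB pipeline along those lines could look like the sketch below. The class name, module path and database name are made up, and it assumes the same ScrapyJSONEncoder round-trip as above:

# pipelines.py -- hypothetical sketch of the pipeline approach
import json

import couchdb
from scrapy.utils.serialize import ScrapyJSONEncoder


class CouchDBPipeline(object):
    """Stores every scraped item as a CouchDB document."""

    def __init__(self):
        self.encoder = ScrapyJSONEncoder()

    def open_spider(self, spider):
        # connect once per crawl instead of at class-definition time
        self.db = couchdb.Server()['python-tests']

    def process_item(self, item, spider):
        # nested Items (attributes, vendor) are handled by ScrapyJSONEncoder
        doc = json.loads(self.encoder.encode(item))
        self.db.save(doc)
        return item

To wire it in, one would register it in settings.py, e.g. ITEM_PIPELINES = {'scrapper.pipelines.CouchDBPipeline': 300} (the module path is hypothetical), and drop the saveindb() call from the spider so parse_item only returns the item.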
Regarding "python - scrapy items are not JSON serializable when storing them to couchdb", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/27389925/