
hadoop - Suggestions on using the Nutch content limit


I am using Nutch 2.1 to crawl an entire domain (e.g., company.com). I have run into the problem that, because of the content limit set in Apache Nutch, I do not get all the links I want to crawl. Usually, when I inspect the content, only the top half of a page has been stored in the database, so the links in the bottom half are never extracted.

To work around this, I changed nutch-site.xml so that the content limit looks like this:

<property>
  <name>http.content.limit</name>
  <value>-1</value>
  <description>The length limit for downloaded content using the http
  protocol, in bytes. If this value is nonnegative (>=0), content longer
  than it will be truncated; otherwise, no truncation at all. Do not
  confuse this setting with the file.content.limit setting.
  </description>
</property>
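(For comparison, a less drastic variant of this property keeps truncation enabled but raises the cap to a large, finite value, so a single oversized page cannot grow without bound in memory. The 10 MB value below is only an assumed illustration, not something from the original post:)

<property>
  <name>http.content.limit</name>
  <!-- 10485760 bytes = 10 MB; an assumed example value, tune it to the largest page you expect -->
  <value>10485760</value>
  <description>Cap downloaded http content at 10 MB instead of disabling
  truncation entirely, so one oversized page cannot exhaust parser memory.
  </description>
</property>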

Doing this fixed the truncation problem, but at some point I ran into an OutOfMemory error, as the output of the parse step shows:
ParserJob: starting
ParserJob: resuming: false
ParserJob: forced reparse: false
ParserJob: parsing all
Exception in thread "main" java.lang.RuntimeException: job failed: name=parse, jobid=job_local_0001
at org.apache.nutch.util.NutchJob.waitForCompletion(NutchJob.java:54)
at org.apache.nutch.parse.ParserJob.run(ParserJob.java:251)
at org.apache.nutch.parse.ParserJob.parse(ParserJob.java:259)
at org.apache.nutch.parse.ParserJob.run(ParserJob.java:302)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.nutch.parse.ParserJob.main(ParserJob.java:306)

Here is my hadoop.log (the part around the error):
2016-01-22 02:02:35,898 INFO  crawl.SignatureFactory - Using Signature impl: org.apache.nutch.crawl.MD5Signature
2016-01-22 02:02:37,255 WARN util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2016-01-22 02:02:39,130 INFO mapreduce.GoraRecordReader - gora.buffer.read.limit = 10000
2016-01-22 02:02:39,255 INFO mapreduce.GoraRecordWriter - gora.buffer.write.limit = 10000
2016-01-22 02:02:39,322 INFO crawl.SignatureFactory - Using Signature impl: org.apache.nutch.crawl.MD5Signature
2016-01-22 02:02:53,018 WARN mapred.FileOutputCommitter - Output path is null in cleanup
2016-01-22 02:02:53,031 WARN mapred.LocalJobRunner - job_local_0001
java.lang.OutOfMemoryError: Java heap space
at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3051)
at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:2991)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3532)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:943)
at com.mysql.jdbc.MysqlIO.nextRow(MysqlIO.java:1441)
at com.mysql.jdbc.MysqlIO.readSingleRowSet(MysqlIO.java:2936)
at com.mysql.jdbc.MysqlIO.getResultSet(MysqlIO.java:477)
at com.mysql.jdbc.MysqlIO.readResultsForQueryOrUpdate(MysqlIO.java:2631)
at com.mysql.jdbc.MysqlIO.readAllResults(MysqlIO.java:1800)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2221)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2624)
at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:2127)
at com.mysql.jdbc.PreparedStatement.executeQuery(PreparedStatement.java:2293)
at org.apache.gora.sql.store.SqlStore.execute(SqlStore.java:423)
at org.apache.gora.query.impl.QueryBase.execute(QueryBase.java:71)
at org.apache.gora.mapreduce.GoraRecordReader.executeQuery(GoraRecordReader.java:66)
at org.apache.gora.mapreduce.GoraRecordReader.nextKeyValue(GoraRecordReader.java:102)
at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:532)
at org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
at org.apache.hadoop.map

I only hit this problem when I set the content limit to -1. But if I don't, I will probably miss some of the links I want to crawl. Any suggestions on how to use the content limit? Is setting it to -1 really unwise? If so, what alternatives can I use? Thanks!

Best Answer

The problem is that you set the content limit to unlimited (-1). When your crawler visits heavy URLs such as https://en.wikipedia.org, https://wikipedia.org and https://en.wikibooks.org, your system can run out of memory during parsing. You should increase Nutch's memory by setting the NUTCH_HEAPSIZE environment variable, e.g., export NUTCH_HEAPSIZE=4000 (see the nutch script for details). Note that this value is equivalent to Hadoop's HADOOP_HEAPSIZE. If that still doesn't work, you should increase the physical memory of your machine ^^
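A minimal sketch of this suggestion (assuming the standard bin/nutch launcher; the -all flag is inferred from the "ParserJob: parsing all" line in the question's output):

# Give the Nutch JVM more heap before re-running the parse step.
# NUTCH_HEAPSIZE is read by bin/nutch, is given in megabytes, and plays
# the same role as Hadoop's HADOOP_HEAPSIZE.
export NUTCH_HEAPSIZE=4000   # ~4 GB; adjust to the RAM actually available

# Re-run the parse job on all fetched batches (the -all flag is assumed
# from the "ParserJob: parsing all" output above).
bin/nutch parse -all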

Hope this helps,

Le Quoc Do

Regarding "hadoop - Suggestions on using the Nutch content limit", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/34938020/
