python - Hadoop Streaming job failed error in Python


Following this guide, I have successfully run the sample exercise. But when running my MapReduce job, I get the following error:
ERROR streaming.StreamJob: Job not Successful!
10/12/16 17:13:38 INFO streaming.StreamJob: killJob...
Streaming Job Failed!

Error from the log file:

java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 2
    at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:311)
    at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:545)
    at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:132)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57)
    at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:36)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:358)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
    at org.apache.hadoop.mapred.Child.main(Child.java:170)

mapper.py

import sys

# Emit one "word<TAB>line_number:count" record for every distinct word on each input line.
i = 0
for line in sys.stdin:
    i += 1
    count = {}
    for word in line.strip().split():
        count[word] = count.get(word, 0) + 1
    for word, weight in count.items():
        print '%s\t%s:%s' % (word, str(i), str(weight))
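
Run on its own, the mapper tags every word with the line number it appeared on plus a per-line count, so a single input line produces output along these lines (the order of words may vary, since it comes from a dict):

$ echo "to be or not to be" | python mapper.py
to	1:2
be	1:2
or	1:1
not	1:1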

reducer.py

import sys

# Input arrives sorted by word; for each word, emit a summed weight for every
# pair of line ids that share that word.
keymap = {}
o_tweet = "2323"  # initial sentinel value for the previous key
id_list = []
for line in sys.stdin:
    tweet, tw = line.strip().split()
    #print tweet, o_tweet, tweet_id, id_list
    tweet_id, w = tw.split(':')
    w = int(w)
    if tweet.__eq__(o_tweet):
        for i, wt in id_list:
            print '%s:%s\t%s' % (tweet_id, i, str(w + wt))
        id_list.append((tweet_id, w))
    else:
        id_list = [(tweet_id, w)]
        o_tweet = tweet
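
A quick way to test both scripts together before submitting the job is to chain them through a local pipe, since streaming does essentially cat | mapper | sort | reducer. A minimal sketch, assuming a small sample file sample.txt and that python is Python 2, which these print statements require:

cat sample.txt | python mapper.py | sort | python reducer.py

If either script crashes here, it will also crash inside the streaming job, and the Python traceback is printed directly instead of being buried in the task logs.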

[edit] Command to run the job:

hadoop@ubuntu:/usr/local/hadoop$ bin/hadoop jar contrib/streaming/hadoop-0.20.0-streaming.jar -file /home/hadoop/mapper.py -mapper /home/hadoop/mapper.py -file /home/hadoop/reducer.py -reducer /home/hadoop/reducer.py -input my-input/* -output my-output

The input is an arbitrary random sequence of sentences.

Thanks,

Best Answer

Your -mapper and -reducer should just be the script name:

hadoop@ubuntu:/usr/local/hadoop$ bin/hadoop jar contrib/streaming/hadoop-0.20.0-streaming.jar -file /home/hadoop/mapper.py -mapper mapper.py -file /home/hadoop/reducer.py -reducer reducer.py -input my-input/* -output my-output

When the job runs, your scripts are shipped into the job's working folder on HDFS, and the attempt task executes with that folder as ".", so the mapper and reducer must be referenced by bare name rather than by local path. (FYI, if you ever want to add another file, such as a lookup table, you can ship it with -file and then open it in Python as though it were in the same directory as your script while the script runs in the M/R job; see the sketch below.)
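
A minimal sketch of that lookup-table pattern, assuming a hypothetical lookup.txt shipped with an extra -file /home/hadoop/lookup.txt option:

lookup = {}
# lookup.txt is copied into the task's working directory by -file,
# so it can be opened by bare name, just like the scripts themselves.
for line in open('lookup.txt'):
    key, value = line.strip().split('\t')
    lookup[key] = value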

Also make sure that you have chmod a+x mapper.py and chmod a+x reducer.py.
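
One more thing worth checking (an assumption on my part, not something your logs confirm): because -mapper mapper.py makes the task execute the script directly, the execute bit alone is typically not enough; each script also needs an interpreter line as its very first line:

#!/usr/bin/env python

Without it, the subprocess can fail to launch even though the script runs fine under python mapper.py.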

For more on python - Hadoop Streaming job failed error in Python, see the similar question we found on Stack Overflow: https://stackoverflow.com/questions/4460522/
