
python - Getting the word with the maximum count from a Hadoop MapReduce word count

Reposted — Author: 可可西里 · Updated: 2023-11-01 16:39:59

So I have been following the MapReduce Python code from this site (http://www.michael-noll.com/tutorials/writing-an-hadoop-mapreduce-program-in-python/), which returns a word count from a text file (i.e. each word together with the number of times it appears in the text). However, I would like to know how to return the word that occurs the most. The mapper and reducer are as follows:

#Mapper

import sys

# input comes from STDIN (standard input)
for line in sys.stdin:
    # remove leading and trailing whitespace
    line = line.strip()
    # split the line into words
    words = line.split()
    # increase counters
    for word in words:
        # write the results to STDOUT (standard output);
        # what we output here will be the input for the
        # Reduce step, i.e. the input for reducer.py
        #
        # tab-delimited; the trivial word count is 1
        print('%s\t%s' % (word, 1))

#Reducer

from operator import itemgetter
import sys

current_word = None
current_count = 0
word = None

# input comes from STDIN
for line in sys.stdin:
    # remove leading and trailing whitespace
    line = line.strip()

    # parse the input we got from mapper.py
    word, count = line.split('\t', 1)

    # convert count (currently a string) to int
    try:
        count = int(count)
    except ValueError:
        # count was not a number, so silently
        # ignore/discard this line
        continue

    # this IF-switch only works because Hadoop sorts map output
    # by key (here: word) before it is passed to the reducer
    if current_word == word:
        current_count += count
    else:
        if current_word:
            # write result to STDOUT
            print('%s\t%s' % (current_word, current_count))
        current_count = count
        current_word = word

# do not forget to output the last word if needed!
if current_word == word:
    print('%s\t%s' % (current_word, current_count))
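To check the pipeline locally before submitting it to Hadoop, the shuffle phase (Hadoop sorting the map output by key) can be simulated in plain Python. The following is a minimal sketch; the helper names `map_words` and `reduce_counts` are illustrative, not part of the tutorial:

```python
def map_words(lines):
    # emit (word, 1) for every word, as mapper.py does
    pairs = []
    for line in lines:
        for word in line.strip().split():
            pairs.append((word, 1))
    return pairs

def reduce_counts(sorted_pairs):
    # sum counts per word; assumes pairs are sorted by word,
    # which is what Hadoop guarantees between map and reduce
    totals = {}
    current_word, current_count = None, 0
    for word, count in sorted_pairs:
        if word == current_word:
            current_count += count
        else:
            if current_word is not None:
                totals[current_word] = current_count
            current_word, current_count = word, count
    if current_word is not None:
        totals[current_word] = current_count
    return totals

lines = ["foo bar foo", "bar foo baz"]
counts = reduce_counts(sorted(map_words(lines)))
print(counts)  # {'bar': 2, 'baz': 1, 'foo': 3}
```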

So I know I need to add something at the end of the reducer, but I'm not quite sure what.

Best Answer

You just need to configure a single reducer so that it aggregates all of the values (-numReduceTasks 1).

Your reducer should look something like this:

import sys

current_word = None
current_count = 0
max_count = 0
max_word = None

for line in sys.stdin:
    # remove leading and trailing whitespace
    line = line.strip()

    # parse the input we got from mapper.py
    word, count = line.split('\t', 1)

    # convert count (currently a string) to int
    try:
        count = int(count)
    except ValueError:
        # count was not a number, so silently
        # ignore/discard this line
        continue

    # this IF-switch only works because Hadoop sorts map output
    # by key (here: word) before it is passed to the reducer
    if current_word == word:
        current_count += count
    else:
        # check if the word we just finished beats the current maximum
        if current_count > max_count:
            max_count = current_count
            max_word = current_word
        current_count = count
        current_word = word

# do not forget to check the last word as well!
if current_count > max_count:
    max_count = current_count
    max_word = current_word

print('%s\t%s' % (max_word, max_count))
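The max-finding logic above can be exercised on a small sorted sample without Hadoop. This sketch wraps it in a function (`find_max` is an illustrative name) and feeds it pre-sorted (word, count) pairs, just as the reducer would receive them:

```python
def find_max(sorted_pairs):
    # assumes input is sorted by word, like Hadoop's reducer input
    max_word, max_count = None, 0
    current_word, current_count = None, 0
    for word, count in sorted_pairs:
        if word == current_word:
            current_count += count
        else:
            if current_count > max_count:
                max_word, max_count = current_word, current_count
            current_word, current_count = word, count
    # the last word may be the maximum, so check it too
    if current_count > max_count:
        max_word, max_count = current_word, current_count
    return max_word, max_count

pairs = sorted([("foo", 1), ("bar", 1), ("foo", 1),
                ("foo", 1), ("bar", 1), ("baz", 1)])
print(find_max(pairs))  # ('foo', 3)
```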

But with only one reducer you lose parallelism, so it may be faster to run this as a second job after the first one, rather than instead of it. That way, your mapper will be the same as the reducer.
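When the first job's output is small (one line per distinct word), an alternative to a second MapReduce job is to pull that output to the client and take the maximum locally with the standard library. A sketch, assuming the hypothetical `job_output` list holds the first job's tab-delimited "word\tcount" lines:

```python
from collections import Counter

# hypothetical output lines from the word-count job
job_output = ["foo\t3", "bar\t2", "baz\t1"]

counts = Counter()
for line in job_output:
    word, count = line.strip().split('\t', 1)
    counts[word] = int(count)

# most_common(1) returns the single (word, count) pair with the largest count
word, count = counts.most_common(1)[0]
print('%s\t%s' % (word, count))  # foo	3
```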

Regarding python - Getting the word with the maximum count from a Hadoop MapReduce word count, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/43057596/
