
python - Parsing HTML .txt files in Hadoop with MapReduce using Python

Reposted · Author: 行者123 · Updated: 2023-12-02 20:59:09

I'm very new to the Hadoop platform and to defining MapReduce functions, and I'm having trouble understanding why the mapper in my MapReduce script doesn't work. I'm trying to parse a collection of pages written as strings in a .txt file, where each "line" represents one <page>...</page>. What is wrong with this script? Thanks for your help!

from mrjob.job import MRJob
from mrjob.step import MRStep
from mrjob.compat import jobconf_from_env
from lxml import etree
import mwparserfromhell
import heapq
import re

class MRParser(MRJob):
    def mapper(self, _, line):
        bigString = ''.join(re.findall(r'(<text xml:space="preserve">.*</text>)', line))
        root = etree.fromstring(bigString.decode('utf-8'))
        if not (bigString == ''):
            bigString = etree.tostring(root, method='text', encoding="UTF-8")
            wikicode = mwparserfromhell.parse(bigString)
            bigString = wikicode.strip_code()
            yield None, bigString

    def steps(self):
        return [
            MRStep(mapper=self.mapper)
        ]

Best Answer

You're missing the reducer. You need to pass the line from the mapper to the reducer as the "key" (with no value). Try this:

    def mapper(self, _, line):
        bigString = ''.join(re.findall(r'(<text xml:space="preserve">.*</text>)', line))
        root = etree.fromstring(bigString.decode('utf-8'))
        if not (bigString == ''):
            bigString = etree.tostring(root, method='text', encoding="UTF-8")
            wikicode = mwparserfromhell.parse(bigString)
            bigString = wikicode.strip_code()
            yield bigString, None

    def reducer(self, key, values):
        yield key, None

    def steps(self):
        return [
            MRStep(mapper=self.mapper, reducer=self.reducer)
        ]
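The core extraction step in the mapper (pulling the <text xml:space="preserve">...</text> payload out of each input line and flattening it to plain text) can be sketched outside of mrjob using only the standard library. This is an illustrative assumption, not the exact pipeline: it substitutes xml.etree.ElementTree for lxml and omits mwparserfromhell's wiki-markup stripping, and the sample input line is made up:

```python
import re
import xml.etree.ElementTree as ET

def extract_text(line):
    """Pull the <text xml:space="preserve">...</text> fragment out of one
    input line and return its plain-text content, or None if absent."""
    match = re.search(r'<text xml:space="preserve">.*</text>', line)
    if match is None:
        return None
    # Parse the matched XML fragment and join all text content, mirroring
    # etree.tostring(root, method='text') in the mrjob mapper above.
    root = ET.fromstring(match.group(0))
    return ''.join(root.itertext())

# Hypothetical sample line in the format the question describes.
line = '<page><text xml:space="preserve">Hello <b>Hadoop</b> world</text></page>'
print(extract_text(line))  # -> Hello Hadoop world
```

Testing the extraction on one line like this, before wiring it into a mapper, makes it much easier to tell whether a failure comes from the parsing logic or from the MapReduce plumbing.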

Regarding "python - Parsing HTML .txt files in Hadoop with MapReduce using Python", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/43691302/
