
python - Running a MapReduce program on Hadoop only outputs half of the data


I am running a simple MapReduce program on Hadoop that computes the minimum, maximum, median, and standard deviation of the values in one column of a dataset. When I run the program locally on my machine, the final output is computed from all of the values in the dataset. However, when I run it on Hadoop, the output corresponds to almost exactly half of the values in the dataset's column. The code is below:

mapper.py

#!/usr/bin/env python3
import sys
import csv

# Load data
data = csv.DictReader(sys.stdin)

# Prints/Passes key-value to reducer.py
for row in data:
    for col, value in row.items():
        if col == sys.argv[1]:
            print('%s\t%s' % (col, value))

reducer.py

#!/usr/bin/env python3
import sys
import statistics

key = None
current_key = None
num_list = []
for line in sys.stdin:
    # Remove leading and trailing whitespace
    line = line.strip()

    # Parse input
    key, value = line.split('\t', 1)

    # Convert string to float
    try:
        value = float(value)
    except ValueError:
        # Skip the value
        continue

    if current_key == key:
        num_list.append(value)
    else:
        if current_key:
            print("Num. of Data Points %s\t --> Max: %s\t Min: %s\t Median: %s\t Standard Deviation: %s" \
                % (len(num_list), max(num_list), min(num_list), statistics.median(num_list), statistics.pstdev(num_list)))
            num_list.clear()
        num_list.append(value)
        current_key = key

# Output last value if needed
if current_key == key:
    print("Num. of Data Points %s\t --> Max: %s\t Min: %s\t Median: %s\t Standard Deviation: %s" \
        % (len(num_list), max(num_list), min(num_list), statistics.median(num_list), statistics.pstdev(num_list)))
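As an aside, one way to reproduce the "local run" is to emulate Hadoop Streaming's map -> sort -> reduce pipeline on a single machine. A minimal sketch, assuming the dataset is in a file named data.csv, the column of interest is called col3 (both names are placeholders), and both scripts are executable:

#!/usr/bin/env python3
import subprocess

# Emulate Hadoop Streaming locally: map the input, sort the key-value pairs,
# then feed them to the reducer. File and column names below are assumptions.
with open("data.csv", "rb") as src:
    mapped = subprocess.run(["./mapper.py", "col3"], stdin=src,
                            capture_output=True, check=True)

sorted_pairs = subprocess.run(["sort"], input=mapped.stdout,
                              capture_output=True, check=True)

reduced = subprocess.run(["./reducer.py"], input=sorted_pairs.stdout,
                         capture_output=True, check=True)

print(reduced.stdout.decode())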

Hadoop logs:

2019-12-02 23:54:40,705 INFO mapreduce.Job: Running job: job_1575141442909_0026
2019-12-02 23:54:47,903 INFO mapreduce.Job: Job job_1575141442909_0026 running in uber mode : false
2019-12-02 23:54:47,906 INFO mapreduce.Job: map 0% reduce 0%
2019-12-02 23:54:54,019 INFO mapreduce.Job: map 100% reduce 0%
2019-12-02 23:54:59,076 INFO mapreduce.Job: map 100% reduce 100%
2019-12-02 23:55:00,115 INFO mapreduce.Job: Job job_1575141442909_0026 completed successfully
2019-12-02 23:55:00,253 INFO mapreduce.Job: Counters: 54
File System Counters
FILE: Number of bytes read=139868
FILE: Number of bytes written=968967
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=501097
HDFS: Number of bytes written=114
HDFS: Number of read operations=11
HDFS: Number of large read operations=0
HDFS: Number of write operations=2
HDFS: Number of bytes read erasure-coded=0
Job Counters
Launched map tasks=2
Launched reduce tasks=1
Data-local map tasks=2
Total time spent by all maps in occupied slots (ms)=7492
Total time spent by all reduces in occupied slots (ms)=2767
Total time spent by all map tasks (ms)=7492
Total time spent by all reduce tasks (ms)=2767
Total vcore-milliseconds taken by all map tasks=7492
Total vcore-milliseconds taken by all reduce tasks=2767
Total megabyte-milliseconds taken by all map tasks=7671808
Total megabyte-milliseconds taken by all reduce tasks=2833408
Map-Reduce Framework
Map input records=10408
Map output records=5203
Map output bytes=129456
Map output materialized bytes=139874
Input split bytes=220
Combine input records=0
Combine output records=0
Reduce input groups=1
Reduce shuffle bytes=139874
Reduce input records=5203
Reduce output records=1
Spilled Records=10406
Shuffled Maps =2
Failed Shuffles=0
Merged Map outputs=2
GC time elapsed (ms)=80
CPU time spent (ms)=2790
Physical memory (bytes) snapshot=676896768
Virtual memory (bytes) snapshot=8266964992
Total committed heap usage (bytes)=482344960
Peak Map Physical memory (bytes)=253210624
Peak Map Virtual memory (bytes)=2755108864
Peak Reduce Physical memory (bytes)=173010944
Peak Reduce Virtual memory (bytes)=2758103040
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=500877
File Output Format Counters
Bytes Written=114
2019-12-02 23:55:00,254 INFO streaming.StreamJob: Output directory: data/output

Local output:

# Data Points: 10407     Max: 89.77682042        Min: 13.87331897        Median: 46.44807153     Standard Deviation: 11.156280347146872

Hadoop output:

# Data Points: 5203      Max: 89.77682042        Min: 13.87331897        Median: 46.202181       Standard Deviation: 11.28118280525746

As you can see, the number of data points in the Hadoop output is almost exactly half the number in the local output. I have tried different datasets of different sizes, and it is still always half... Am I doing something wrong or missing something?

Best Answer

I have figured out why I was getting this output. The reason is that, as I suspected, Hadoop split my input data into two parts for two separate mappers. However, only the first half of the data contained the dataset's column headers, so when a mapper read the second half of the dataset it never found the specified column.
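To illustrate the failure mode, here is a minimal sketch with made-up column names col1 and col2: csv.DictReader called without fieldnames treats the first line it is given as the header, so for a headerless split the keys become data values and never match sys.argv[1].

#!/usr/bin/env python3
import csv
import io

# First split: contains the real header line.
with_header = io.StringIO("col1,col2\n1.0,2.0\n")
# Second split: no header, so DictReader mistakes the first data row for the header.
without_header = io.StringIO("3.0,4.0\n5.0,6.0\n")

first = next(csv.DictReader(with_header))
print(list(first.keys()))   # ['col1', 'col2'] -- real column names

first = next(csv.DictReader(without_header))
print(list(first.keys()))   # ['3.0', '4.0'] -- data values mistaken for column names

# In the second mapper the keys are data values, so `col == sys.argv[1]` never
# matches and that half of the dataset is silently dropped.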

I removed the existing header from the dataset and set the field names explicitly when reading the data, which solved my problem:

data = csv.DictReader(sys.stdin, fieldnames=("col1", "col2", "col3", "col4", "col5"))
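For context, here is the full mapper with that change applied; the field names col1 through col5 are placeholders, not necessarily the dataset's actual column names:

#!/usr/bin/env python3
import sys
import csv

# Field names are supplied explicitly, so every input split is parsed the same
# way, whether or not it contains the (now removed) header line.
data = csv.DictReader(sys.stdin, fieldnames=("col1", "col2", "col3", "col4", "col5"))

# Emit key-value pairs for the requested column to reducer.py
for row in data:
    for col, value in row.items():
        if col == sys.argv[1]:
            print('%s\t%s' % (col, value))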

Regarding "python - Running a MapReduce program on Hadoop only outputs half of the data", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/59150406/
