
java - MapReduce canopy clustering centers


I'm trying to understand code for canopy clustering. The purpose of these two classes (one map, one reduce) is to find the canopy centers. My problem is that I don't understand the difference between the map and the reduce function. They are nearly identical.

So is there a difference, or am I just repeating the same process over again in the reducer?

My guess is that the map and reduce functions process the data differently: even though the code is similar, they perform different operations on it.

So could someone explain the map and the reduce process when we are trying to find the canopy centers?

For example, I know the map output might look something like this: (joe, 1) (dave, 1) (joe, 1) (joe, 1)

And then the reduce would look like this: (joe, 3) (dave, 1)
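
For reference, a minimal word-count pair along those lines might look like the sketch below (the class and variable names are illustrative only, not part of the canopy code):

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        for (String token : value.toString().split("\\s+")) {
            word.set(token);
            context.write(word, ONE);   // emits (joe, 1), (dave, 1), (joe, 1), ...
        }
    }
}

class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable v : values) {
            sum += v.get();              // aggregates to (joe, 3), (dave, 1)
        }
        context.write(key, new IntWritable(sum));
    }
}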

Does the same kind of thing happen here?

Or am I performing the same task twice?

Thanks very much.

The map function:

package nasdaq.hadoop;

import java.io.*;
import java.util.*;

import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.*;
import org.apache.hadoop.util.*;

public class CanopyCentersMapper extends Mapper<LongWritable, Text, Text, Text> {
    //A list holding the canopy centers found so far
    private ArrayList<ArrayList<String>> canopyCenters;

    @Override
    public void setup(Context context) {
        this.canopyCenters = new ArrayList<ArrayList<String>>();
    }

    @Override
    public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        //Separate the stock name from the values to create a key of the stock and a list of values - what is the list of values?
        //What exactly are we splitting here?
        ArrayList<String> stockData = new ArrayList<String>(Arrays.asList(value.toString().split(",")));

        //Remove the stock name; the remaining values are the candidate canopy center
        String stockKey = stockData.remove(0);

        //Re-join the remaining values into a comma-separated string for output
        String stockValue = StringUtils.join(",", stockData);

        //Check whether the stock is available for use as a new canopy center
        boolean isClose = false;

        for (ArrayList<String> center : canopyCenters) { //Run over the centers

            //I think...let's say at this point we have a few centers. Then we have our next point to check.
            //We have to compare that point with EVERY center already created. If the distance is larger than T1 for
            //EVERY center, then that point becomes a new center! But the more canopies we have, the better the chance
            //it falls within the radius of one of the canopies...

            //Measure the distance between this point and the current center
            if (ClusterJob.measureDistance(center, stockData) <= ClusterJob.T1) {
                //Center is too close
                isClose = true;
                break;
            }
        }

        //The point is not within T1 of any existing center, so add it as a new canopy center
        if (!isClose) {
            canopyCenters.add(stockData);

            //Prepare hadoop data for output
            Text outputKey = new Text();
            Text outputValue = new Text();

            outputKey.set(stockKey);
            outputValue.set(stockValue);

            //Output the stock key and values to the reducer
            context.write(outputKey, outputValue);
        }
    }
}

The reduce function:

package nasdaq.hadoop;

import java.io.*;
import java.util.*;

import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.*;

public class CanopyCentersReducer extends Reducer<Text, Text, Text, Text> {
    //The canopy centers list
    private ArrayList<ArrayList<String>> canopyCenters;

    @Override
    public void setup(Context context) {
        //Create a new list for the canopy centers
        this.canopyCenters = new ArrayList<ArrayList<String>>();
    }

    @Override
    public void reduce(Text key, Iterable<Text> values, Context context) throws IOException, InterruptedException {
        for (Text value : values) {
            //Parse the key and value back into stock data
            String stockValue = value.toString();
            ArrayList<String> stockData = new ArrayList<String>(Arrays.asList(stockValue.split(",")));
            String stockKey = key.toString();

            //Check whether the stock is available for use as a new canopy center
            boolean isClose = false;
            for (ArrayList<String> center : canopyCenters) { //Run over the centers
                //Measure the distance between this point and the current center
                if (ClusterJob.measureDistance(center, stockData) <= ClusterJob.T1) {
                    //Center is too close
                    isClose = true;
                    break;
                }
            }

            //The point is not within T1 of any existing center, so add it as a new canopy center
            if (!isClose) {
                canopyCenters.add(stockData);

                //Prepare hadoop data for output
                Text outputKey = new Text();
                Text outputValue = new Text();

                outputKey.set(stockKey);
                outputValue.set(stockValue);

                //Output the stock key and values
                context.write(outputKey, outputValue);
            }
        }
    }
}

**Edit -- more code and explanation**

stockKey is the key identifying a stock (NASDAQ symbols and the like).

ClusterJob.measureDistance():

public static double measureDistance(ArrayList<String> origin, ArrayList<String> destination)
{
    double deltaSum = 0.0;
    //Run over all points in the origin vector and accumulate the sum of the squared deltas
    for (int i = 0; i < origin.size(); i++) {
        if (destination.size() > i) //Only add to the sum if there is a destination coordinate to compare to
        {
            deltaSum = deltaSum + Math.pow(Math.abs(Double.valueOf(origin.get(i)) - Double.valueOf(destination.get(i))), 2);
        }
    }
    //Return the square root of the sum
    return Math.sqrt(deltaSum);
}

Best Answer

OK, a plain reading of the code:

- Each mapper walks over some (possibly random) subset of the data and generates canopy centers that are all at least T1 apart from one another. These centers are emitted.
- The reducer then walks over all the canopy centers belonging to each particular stock key (e.g., MSFT, GOOG, etc.) from all the mappers, and ensures that for each stock key there are no two canopy centers within T1 of each other (e.g., no two centers in GOOG are within T1 of each other, although a center in MSFT and a center in GOOG might be close together).

The goal of the code is unclear, and personally I think there must be a bug somewhere. The reducer essentially solves the problem as if you were trying to generate centers for each stock key independently (i.e., computing canopy centers over all the data points for GOOG), while the mapper appears to solve the problem of generating centers over all stocks. Put together like this, the two contradict each other, so neither problem is actually solved.

If you want centers over ALL stock keys: the map output must send everything to a single reducer. Set the map output key to something trivial, such as a NullWritable. Then the reducer will perform the correct operation without changes.
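
A minimal sketch of that change, assuming the rest of the mapper stays the same (the class name GlobalCanopyMapper is hypothetical, and the T1 screening is elided):

package nasdaq.hadoop;

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

//Hypothetical variant: every surviving candidate center is emitted under the
//same (null) key, so a single reduce group sees all of them and can enforce
//the T1 rule globally.
public class GlobalCanopyMapper extends Mapper<LongWritable, Text, NullWritable, Text> {
    @Override
    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        //... run the same T1 screening against the local canopy list as before ...
        //If the point survives, emit the whole record under the null key:
        context.write(NullWritable.get(), value);
    }
}

The reducer's generics would have to change to match (Reducer<NullWritable, Text, Text, Text>), and the job setup would call job.setMapOutputKeyClass(NullWritable.class).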

If you want centers per stock key: then the mapper needs to change so that it effectively keeps a separate canopy list for each stock key. You can do that either by keeping a separate ArrayList for each stock key (preferred, since it will be faster), or by changing the distance metric so that points belonging to different stock keys are infinitely far apart (so they never interact).
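
A sketch of the per-key bookkeeping, written as a fragment of the existing mapper class (the canopiesByStock field is an assumption, not in the original code; the existing java.util.* and Hadoop imports cover it):

//Hypothetical replacement for the single canopyCenters list: one canopy
//list per stock key.
private HashMap<String, ArrayList<ArrayList<String>>> canopiesByStock;

@Override
public void setup(Context context) {
    this.canopiesByStock = new HashMap<String, ArrayList<ArrayList<String>>>();
}

@Override
public void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
    ArrayList<String> stockData = new ArrayList<String>(Arrays.asList(value.toString().split(",")));
    String stockKey = stockData.remove(0);

    //Fetch (or lazily create) the canopy list for this particular stock key
    ArrayList<ArrayList<String>> centers = canopiesByStock.get(stockKey);
    if (centers == null) {
        centers = new ArrayList<ArrayList<String>>();
        canopiesByStock.put(stockKey, centers);
    }

    //The T1 screening now runs only against centers of the same stock
    boolean isClose = false;
    for (ArrayList<String> center : centers) {
        if (ClusterJob.measureDistance(center, stockData) <= ClusterJob.T1) {
            isClose = true;
            break;
        }
    }
    if (!isClose) {
        centers.add(stockData);
        context.write(new Text(stockKey), new Text(StringUtils.join(",", stockData)));
    }
}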

P.S. Incidentally, your distance metric also has some unrelated problems. First, you parse the data with Double.valueOf but never catch NumberFormatException. Since you feed it stockData, which contains a non-numeric string such as "GOOG" in the first field, it will crash as soon as you run it. Second, the distance metric ignores any field with a missing value. That is a broken implementation of the L2 (Pythagorean) distance metric. To see why, consider that the string "," is at distance 0 from every other point, so if it is chosen as a canopy center, no other centers can ever be chosen. Instead of setting the delta for a missing dimension to zero, you might consider setting it to something reasonable, such as the population mean for that attribute, or (to play it safe) simply discarding the row from the data set for clustering purposes.
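
A defensive rewrite along those lines might look like the sketch below (the skip-the-pair policy and the name safeMeasureDistance are assumptions, not the author's code):

//Hypothetical hardened distance: returns Double.POSITIVE_INFINITY when a
//record cannot be compared, so a malformed row can never become (or join)
//a canopy center by accident.
public static double safeMeasureDistance(ArrayList<String> origin, ArrayList<String> destination) {
    //Vectors of different lengths mean missing dimensions; treat the pair
    //as incomparable rather than silently skipping coordinates.
    if (origin.size() != destination.size()) {
        return Double.POSITIVE_INFINITY;
    }
    double deltaSum = 0.0;
    for (int i = 0; i < origin.size(); i++) {
        try {
            double delta = Double.parseDouble(origin.get(i)) - Double.parseDouble(destination.get(i));
            deltaSum += delta * delta;
        } catch (NumberFormatException e) {
            //Non-numeric field (e.g., a stray ticker symbol): give up on
            //this pair instead of pretending the delta is zero.
            return Double.POSITIVE_INFINITY;
        }
    }
    return Math.sqrt(deltaSum);
}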

Regarding java - MapReduce canopy clustering centers, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/20931868/
