I am trying to run a clustering program using Mahout. Below is the Java code I am using:
package com;

import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;
import org.apache.mahout.clustering.WeightedVectorWritable;
import org.apache.mahout.clustering.kmeans.Cluster;
import org.apache.mahout.clustering.kmeans.KMeansDriver;
import org.apache.mahout.common.distance.EuclideanDistanceMeasure;
import org.apache.mahout.math.RandomAccessSparseVector;
import org.apache.mahout.math.Vector;
import org.apache.mahout.math.VectorWritable;

public class ClusteringDemo {

    public static final double[][] points = { { 1, 1 }, { 2, 1 }, { 1, 2 },
            { 2, 2 }, { 3, 3 }, { 8, 8 }, { 9, 8 }, { 8, 9 }, { 9, 9 } };

    public static void writePointsToFile(List<Vector> points, String fileName,
            FileSystem fs, Configuration conf) throws IOException {
        Path path = new Path(fileName);
        SequenceFile.Writer writer = new SequenceFile.Writer(fs, conf, path,
                LongWritable.class, VectorWritable.class);
        long recNum = 0;
        VectorWritable vec = new VectorWritable();
        for (Vector point : points) {
            vec.set(point);
            writer.append(new LongWritable(recNum++), vec);
        }
        writer.close();
    }

    public static List<Vector> getPoints(double[][] raw) {
        List<Vector> points = new ArrayList<Vector>();
        for (int i = 0; i < raw.length; i++) {
            double[] fr = raw[i];
            Vector vec = new RandomAccessSparseVector(fr.length);
            vec.assign(fr);
            points.add(vec);
        }
        return points;
    }

    public static void main(String args[]) throws Exception {
        int k = 3;
        List<Vector> vectors = getPoints(points);
        File testData = new File("/home/vishal/testdata");
        if (!testData.exists()) {
            testData.mkdir();
        }
        testData = new File("/home/vishal/testdata/points");
        if (!testData.exists()) {
            testData.mkdir();
        }
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        writePointsToFile(vectors, "/home/vishal/testdata/points/file1", fs,
                conf);
        Path path = new Path("/home/vishal/testdata/clusters/part-00000");
        SequenceFile.Writer writer = new SequenceFile.Writer(fs, conf, path,
                Text.class, Cluster.class);
        for (int i = 0; i < k; i++) {
            Vector vec = vectors.get(i);
            Cluster cluster = new Cluster(vec, i,
                    new EuclideanDistanceMeasure());
            writer.append(new Text(cluster.getIdentifier()), cluster);
        }
        writer.close();
        KMeansDriver.run(conf, new Path("/home/vishal/testdata/points"),
                new Path("/home/vishal/testdata/clusters"), new Path(
                        "/home/vishal/output"), new EuclideanDistanceMeasure(),
                0.001, 10, true, false);
        SequenceFile.Reader reader = new SequenceFile.Reader(fs, new Path(
                "/home/vishal/output/" + Cluster.CLUSTERED_POINTS_DIR
                        + "/part-m-00000"), conf);
        IntWritable key = new IntWritable();
        WeightedVectorWritable value = new WeightedVectorWritable();
        while (reader.next(key, value)) {
            System.out.println(value.toString() + " belongs to cluster "
                    + key.toString());
        }
        reader.close();
    }
}
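For orientation, here is what the job is expected to compute on these nine points, sketched as a self-contained plain-Java k-means with no Hadoop or Mahout dependencies. The value k=2 and the centroid seeds (1,1) and (9,9) are illustrative choices for this sketch, not what KMeansDriver actually uses above:

```java
import java.util.Arrays;

public class MiniKMeans {

    // Lloyd's algorithm: alternate assignment and centroid-update steps.
    public static int[] cluster(double[][] pts, double[][] centroids, int iters) {
        int[] assign = new int[pts.length];
        for (int it = 0; it < iters; it++) {
            // Assignment step: each point goes to the nearest centroid
            // by squared Euclidean distance.
            for (int i = 0; i < pts.length; i++) {
                int best = 0;
                double bestD = Double.MAX_VALUE;
                for (int c = 0; c < centroids.length; c++) {
                    double d = 0;
                    for (int j = 0; j < pts[i].length; j++) {
                        double diff = pts[i][j] - centroids[c][j];
                        d += diff * diff;
                    }
                    if (d < bestD) {
                        bestD = d;
                        best = c;
                    }
                }
                assign[i] = best;
            }
            // Update step: move each centroid to the mean of its members.
            for (int c = 0; c < centroids.length; c++) {
                double[] sum = new double[pts[0].length];
                int n = 0;
                for (int i = 0; i < pts.length; i++) {
                    if (assign[i] == c) {
                        n++;
                        for (int j = 0; j < sum.length; j++) sum[j] += pts[i][j];
                    }
                }
                if (n > 0) {
                    for (int j = 0; j < sum.length; j++) centroids[c][j] = sum[j] / n;
                }
            }
        }
        return assign;
    }

    public static void main(String[] args) {
        double[][] pts = { { 1, 1 }, { 2, 1 }, { 1, 2 }, { 2, 2 }, { 3, 3 },
                { 8, 8 }, { 9, 8 }, { 8, 9 }, { 9, 9 } };
        int[] assign = cluster(pts, new double[][] { { 1, 1 }, { 9, 9 } }, 10);
        System.out.println(Arrays.toString(assign)); // [0, 0, 0, 0, 0, 1, 1, 1, 1]
    }
}
```

The first five points land in one cluster and the last four in the other, which is the grouping the Mahout job should also converge to.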
But when I run it, it starts executing normally and then fails at the end. The following is the stack trace I get:
13/05/30 09:49:22 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
13/05/30 09:49:22 INFO kmeans.KMeansDriver: Input: /home/vishal/testdata/points Clusters In: /home/vishal/testdata/clusters Out: /home/vishal/output Distance: org.apache.mahout.common.distance.EuclideanDistanceMeasure
13/05/30 09:49:22 INFO kmeans.KMeansDriver: convergence: 0.0010 max Iterations: 10 num Reduce Tasks: org.apache.mahout.math.VectorWritable Input Vectors: {}
13/05/30 09:49:22 INFO kmeans.KMeansDriver: K-Means Iteration 1
13/05/30 09:49:22 INFO common.HadoopUtil: Deleting /home/vishal/output/clusters-1
13/05/30 09:49:23 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
13/05/30 09:49:23 INFO input.FileInputFormat: Total input paths to process : 1
13/05/30 09:49:23 INFO mapred.JobClient: Running job: job_local_0001
13/05/30 09:49:23 INFO util.ProcessTree: setsid exited with exit code 0
13/05/30 09:49:23 INFO mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@15fc40c
13/05/30 09:49:23 INFO mapred.MapTask: io.sort.mb = 100
13/05/30 09:49:23 INFO mapred.MapTask: data buffer = 79691776/99614720
13/05/30 09:49:23 INFO mapred.MapTask: record buffer = 262144/327680
13/05/30 09:49:23 INFO mapred.MapTask: Starting flush of map output
13/05/30 09:49:23 INFO mapred.MapTask: Finished spill 0
13/05/30 09:49:23 INFO mapred.Task: Task:attempt_local_0001_m_000000_0 is done. And is in the process of commiting
13/05/30 09:49:24 INFO mapred.JobClient: map 0% reduce 0%
13/05/30 09:49:26 INFO mapred.LocalJobRunner:
13/05/30 09:49:26 INFO mapred.Task: Task 'attempt_local_0001_m_000000_0' done.
13/05/30 09:49:26 INFO mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@15ed659
13/05/30 09:49:26 INFO mapred.LocalJobRunner:
13/05/30 09:49:26 INFO mapred.Merger: Merging 1 sorted segments
13/05/30 09:49:26 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 185 bytes
13/05/30 09:49:26 INFO mapred.LocalJobRunner:
13/05/30 09:49:26 INFO mapred.Task: Task:attempt_local_0001_r_000000_0 is done. And is in the process of commiting
13/05/30 09:49:26 INFO mapred.LocalJobRunner:
13/05/30 09:49:26 INFO mapred.Task: Task attempt_local_0001_r_000000_0 is allowed to commit now
13/05/30 09:49:26 INFO output.FileOutputCommitter: Saved output of task 'attempt_local_0001_r_000000_0' to /home/vishal/output/clusters-1
13/05/30 09:49:27 INFO mapred.JobClient: map 100% reduce 0%
13/05/30 09:49:29 INFO mapred.LocalJobRunner: reduce > reduce
13/05/30 09:49:29 INFO mapred.Task: Task 'attempt_local_0001_r_000000_0' done.
13/05/30 09:49:30 INFO mapred.JobClient: map 100% reduce 100%
13/05/30 09:49:30 INFO mapred.JobClient: Job complete: job_local_0001
13/05/30 09:49:30 INFO mapred.JobClient: Counters: 21
13/05/30 09:49:30 INFO mapred.JobClient: File Output Format Counters
13/05/30 09:49:30 INFO mapred.JobClient: Bytes Written=474
13/05/30 09:49:30 INFO mapred.JobClient: Clustering
13/05/30 09:49:30 INFO mapred.JobClient: Converged Clusters=1
13/05/30 09:49:30 INFO mapred.JobClient: FileSystemCounters
13/05/30 09:49:30 INFO mapred.JobClient: FILE_BYTES_READ=3328461
13/05/30 09:49:30 INFO mapred.JobClient: FILE_BYTES_WRITTEN=3422872
13/05/30 09:49:30 INFO mapred.JobClient: File Input Format Counters
13/05/30 09:49:30 INFO mapred.JobClient: Bytes Read=443
13/05/30 09:49:30 INFO mapred.JobClient: Map-Reduce Framework
13/05/30 09:49:30 INFO mapred.JobClient: Map output materialized bytes=189
13/05/30 09:49:30 INFO mapred.JobClient: Map input records=9
13/05/30 09:49:30 INFO mapred.JobClient: Reduce shuffle bytes=0
13/05/30 09:49:30 INFO mapred.JobClient: Spilled Records=6
13/05/30 09:49:30 INFO mapred.JobClient: Map output bytes=531
13/05/30 09:49:30 INFO mapred.JobClient: Total committed heap usage (bytes)=325713920
13/05/30 09:49:30 INFO mapred.JobClient: CPU time spent (ms)=0
13/05/30 09:49:30 INFO mapred.JobClient: SPLIT_RAW_BYTES=104
13/05/30 09:49:30 INFO mapred.JobClient: Combine input records=9
13/05/30 09:49:30 INFO mapred.JobClient: Reduce input records=3
13/05/30 09:49:30 INFO mapred.JobClient: Reduce input groups=3
13/05/30 09:49:30 INFO mapred.JobClient: Combine output records=3
13/05/30 09:49:30 INFO mapred.JobClient: Physical memory (bytes) snapshot=0
13/05/30 09:49:30 INFO mapred.JobClient: Reduce output records=3
13/05/30 09:49:30 INFO mapred.JobClient: Virtual memory (bytes) snapshot=0
13/05/30 09:49:30 INFO mapred.JobClient: Map output records=9
13/05/30 09:49:30 INFO kmeans.KMeansDriver: K-Means Iteration 2
13/05/30 09:49:30 INFO common.HadoopUtil: Deleting /home/vishal/output/clusters-2
13/05/30 09:49:30 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
13/05/30 09:49:30 INFO input.FileInputFormat: Total input paths to process : 1
13/05/30 09:49:30 INFO mapred.JobClient: Running job: job_local_0002
13/05/30 09:49:30 INFO mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@13f136e
13/05/30 09:49:30 INFO mapred.MapTask: io.sort.mb = 100
13/05/30 09:49:30 INFO mapred.MapTask: data buffer = 79691776/99614720
13/05/30 09:49:30 INFO mapred.MapTask: record buffer = 262144/327680
13/05/30 09:49:30 INFO mapred.MapTask: Starting flush of map output
13/05/30 09:49:30 INFO mapred.MapTask: Finished spill 0
13/05/30 09:49:30 INFO mapred.Task: Task:attempt_local_0002_m_000000_0 is done. And is in the process of commiting
13/05/30 09:49:31 INFO mapred.JobClient: map 0% reduce 0%
13/05/30 09:49:33 INFO mapred.LocalJobRunner:
13/05/30 09:49:33 INFO mapred.Task: Task 'attempt_local_0002_m_000000_0' done.
13/05/30 09:49:33 INFO mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@d6b059
13/05/30 09:49:33 INFO mapred.LocalJobRunner:
13/05/30 09:49:33 INFO mapred.Merger: Merging 1 sorted segments
13/05/30 09:49:33 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 124 bytes
13/05/30 09:49:33 INFO mapred.LocalJobRunner:
13/05/30 09:49:33 INFO mapred.Task: Task:attempt_local_0002_r_000000_0 is done. And is in the process of commiting
13/05/30 09:49:33 INFO mapred.LocalJobRunner:
13/05/30 09:49:33 INFO mapred.Task: Task attempt_local_0002_r_000000_0 is allowed to commit now
13/05/30 09:49:33 INFO output.FileOutputCommitter: Saved output of task 'attempt_local_0002_r_000000_0' to /home/vishal/output/clusters-2
13/05/30 09:49:34 INFO mapred.JobClient: map 100% reduce 0%
13/05/30 09:49:36 INFO mapred.LocalJobRunner: reduce > reduce
13/05/30 09:49:36 INFO mapred.Task: Task 'attempt_local_0002_r_000000_0' done.
13/05/30 09:49:37 INFO mapred.JobClient: map 100% reduce 100%
13/05/30 09:49:37 INFO mapred.JobClient: Job complete: job_local_0002
13/05/30 09:49:37 INFO mapred.JobClient: Counters: 20
13/05/30 09:49:37 INFO mapred.JobClient: File Output Format Counters
13/05/30 09:49:37 INFO mapred.JobClient: Bytes Written=364
13/05/30 09:49:37 INFO mapred.JobClient: FileSystemCounters
13/05/30 09:49:37 INFO mapred.JobClient: FILE_BYTES_READ=6658544
13/05/30 09:49:37 INFO mapred.JobClient: FILE_BYTES_WRITTEN=6844248
13/05/30 09:49:37 INFO mapred.JobClient: File Input Format Counters
13/05/30 09:49:37 INFO mapred.JobClient: Bytes Read=443
13/05/30 09:49:37 INFO mapred.JobClient: Map-Reduce Framework
13/05/30 09:49:37 INFO mapred.JobClient: Map output materialized bytes=128
13/05/30 09:49:37 INFO mapred.JobClient: Map input records=9
13/05/30 09:49:37 INFO mapred.JobClient: Reduce shuffle bytes=0
13/05/30 09:49:37 INFO mapred.JobClient: Spilled Records=4
13/05/30 09:49:37 INFO mapred.JobClient: Map output bytes=531
13/05/30 09:49:37 INFO mapred.JobClient: Total committed heap usage (bytes)=525074432
13/05/30 09:49:37 INFO mapred.JobClient: CPU time spent (ms)=0
13/05/30 09:49:37 INFO mapred.JobClient: SPLIT_RAW_BYTES=104
13/05/30 09:49:37 INFO mapred.JobClient: Combine input records=9
13/05/30 09:49:37 INFO mapred.JobClient: Reduce input records=2
13/05/30 09:49:37 INFO mapred.JobClient: Reduce input groups=2
13/05/30 09:49:37 INFO mapred.JobClient: Combine output records=2
13/05/30 09:49:37 INFO mapred.JobClient: Physical memory (bytes) snapshot=0
13/05/30 09:49:37 INFO mapred.JobClient: Reduce output records=2
13/05/30 09:49:37 INFO mapred.JobClient: Virtual memory (bytes) snapshot=0
13/05/30 09:49:37 INFO mapred.JobClient: Map output records=9
13/05/30 09:49:37 INFO kmeans.KMeansDriver: K-Means Iteration 3
13/05/30 09:49:37 INFO common.HadoopUtil: Deleting /home/vishal/output/clusters-3
13/05/30 09:49:37 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
13/05/30 09:49:37 INFO input.FileInputFormat: Total input paths to process : 1
13/05/30 09:49:37 INFO mapred.JobClient: Running job: job_local_0003
13/05/30 09:49:37 INFO mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@988707
13/05/30 09:49:37 INFO mapred.MapTask: io.sort.mb = 100
13/05/30 09:49:37 INFO mapred.MapTask: data buffer = 79691776/99614720
13/05/30 09:49:37 INFO mapred.MapTask: record buffer = 262144/327680
13/05/30 09:49:37 INFO mapred.MapTask: Starting flush of map output
13/05/30 09:49:37 INFO mapred.MapTask: Finished spill 0
13/05/30 09:49:37 INFO mapred.Task: Task:attempt_local_0003_m_000000_0 is done. And is in the process of commiting
13/05/30 09:49:38 INFO mapred.JobClient: map 0% reduce 0%
13/05/30 09:49:40 INFO mapred.LocalJobRunner:
13/05/30 09:49:40 INFO mapred.Task: Task 'attempt_local_0003_m_000000_0' done.
13/05/30 09:49:40 INFO mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@6214f5
13/05/30 09:49:40 INFO mapred.LocalJobRunner:
13/05/30 09:49:40 INFO mapred.Merger: Merging 1 sorted segments
13/05/30 09:49:40 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 124 bytes
13/05/30 09:49:40 INFO mapred.LocalJobRunner:
13/05/30 09:49:40 INFO mapred.Task: Task:attempt_local_0003_r_000000_0 is done. And is in the process of commiting
13/05/30 09:49:40 INFO mapred.LocalJobRunner:
13/05/30 09:49:40 INFO mapred.Task: Task attempt_local_0003_r_000000_0 is allowed to commit now
13/05/30 09:49:40 INFO output.FileOutputCommitter: Saved output of task 'attempt_local_0003_r_000000_0' to /home/vishal/output/clusters-3
13/05/30 09:49:41 INFO mapred.JobClient: map 100% reduce 0%
13/05/30 09:49:43 INFO mapred.LocalJobRunner: reduce > reduce
13/05/30 09:49:43 INFO mapred.Task: Task 'attempt_local_0003_r_000000_0' done.
13/05/30 09:49:44 INFO mapred.JobClient: map 100% reduce 100%
13/05/30 09:49:44 INFO mapred.JobClient: Job complete: job_local_0003
13/05/30 09:49:44 INFO mapred.JobClient: Counters: 21
13/05/30 09:49:44 INFO mapred.JobClient: File Output Format Counters
13/05/30 09:49:44 INFO mapred.JobClient: Bytes Written=364
13/05/30 09:49:44 INFO mapred.JobClient: Clustering
13/05/30 09:49:44 INFO mapred.JobClient: Converged Clusters=2
13/05/30 09:49:44 INFO mapred.JobClient: FileSystemCounters
13/05/30 09:49:44 INFO mapred.JobClient: FILE_BYTES_READ=9988052
13/05/30 09:49:44 INFO mapred.JobClient: FILE_BYTES_WRITTEN=10265506
13/05/30 09:49:44 INFO mapred.JobClient: File Input Format Counters
13/05/30 09:49:44 INFO mapred.JobClient: Bytes Read=443
13/05/30 09:49:44 INFO mapred.JobClient: Map-Reduce Framework
13/05/30 09:49:44 INFO mapred.JobClient: Map output materialized bytes=128
13/05/30 09:49:44 INFO mapred.JobClient: Map input records=9
13/05/30 09:49:44 INFO mapred.JobClient: Reduce shuffle bytes=0
13/05/30 09:49:44 INFO mapred.JobClient: Spilled Records=4
13/05/30 09:49:44 INFO mapred.JobClient: Map output bytes=531
13/05/30 09:49:44 INFO mapred.JobClient: Total committed heap usage (bytes)=724434944
13/05/30 09:49:44 INFO mapred.JobClient: CPU time spent (ms)=0
13/05/30 09:49:44 INFO mapred.JobClient: SPLIT_RAW_BYTES=104
13/05/30 09:49:44 INFO mapred.JobClient: Combine input records=9
13/05/30 09:49:44 INFO mapred.JobClient: Reduce input records=2
13/05/30 09:49:44 INFO mapred.JobClient: Reduce input groups=2
13/05/30 09:49:44 INFO mapred.JobClient: Combine output records=2
13/05/30 09:49:44 INFO mapred.JobClient: Physical memory (bytes) snapshot=0
13/05/30 09:49:44 INFO mapred.JobClient: Reduce output records=2
13/05/30 09:49:44 INFO mapred.JobClient: Virtual memory (bytes) snapshot=0
13/05/30 09:49:44 INFO mapred.JobClient: Map output records=9
Exception in thread "main" java.io.IOException: Target /home/vishal/output/clusters-3-final/clusters-3 is a directory
at org.apache.hadoop.fs.FileUtil.checkDest(FileUtil.java:359)
at org.apache.hadoop.fs.FileUtil.checkDest(FileUtil.java:361)
at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:211)
at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:163)
at org.apache.hadoop.fs.RawLocalFileSystem.rename(RawLocalFileSystem.java:287)
at org.apache.hadoop.fs.ChecksumFileSystem.rename(ChecksumFileSystem.java:425)
at org.apache.mahout.clustering.kmeans.KMeansDriver.buildClustersMR(KMeansDriver.java:322)
at org.apache.mahout.clustering.kmeans.KMeansDriver.buildClusters(KMeansDriver.java:239)
at org.apache.mahout.clustering.kmeans.KMeansDriver.run(KMeansDriver.java:154)
at com.ClusteringDemo.main(ClusteringDemo.java:80)
What could be the cause?

Thanks.
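One observation, offered as a guess from the error rather than a confirmed diagnosis: the rename inside KMeansDriver.buildClustersMR fails because a clusters-3-final directory already exists at the target, presumably left over from an earlier run, and Hadoop's local-filesystem rename refuses to overwrite an existing directory. Deleting the output directory before each run would avoid that; with the Hadoop API that would be fs.delete(new Path("/home/vishal/output"), true). The same idea in plain java.io (the class and path names here are illustrative) looks like:

```java
import java.io.File;

public class CleanOutput {

    // Recursively delete a local directory tree so a rerun never
    // finds a stale clusters-*-final directory at the rename target.
    static void deleteRecursively(File f) {
        if (f.isDirectory()) {
            File[] children = f.listFiles();
            if (children != null) {
                for (File c : children) deleteRecursively(c);
            }
        }
        f.delete();
    }

    public static void main(String[] args) {
        File out = new File("/tmp/kmeans-demo-output");
        // Simulate stale output from a previous run.
        new File(out, "clusters-3-final/clusters-3").mkdirs();
        deleteRecursively(out);
        System.out.println(out.exists()); // false after cleanup
    }
}
```

Calling this (or the fs.delete equivalent) on the output path before KMeansDriver.run makes the job idempotent across reruns.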