
java - Running the HIPI MapReduce program


I am trying to run the HIPI MapReduce example (the Downloader program). I have added the HIPI jar to the build path, but I get the following error when I run it.

My command looks like this:

hadoop jar Downloader.jar Downloader  ./hipi/hipi.txt ./hipi/output.hib 1

My input file hipi.txt contains three URLs.
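To show the layout I used (these are hypothetical placeholder URLs, not my real ones; the Downloader example appears to expect a plain-text list with one image URL per line), the file was created and uploaded roughly like this:

# hypothetical placeholders -- my real hipi.txt just lists three actual image URLs, one per line
cat > hipi.txt <<'EOF'
http://example.com/images/one.jpg
http://example.com/images/two.jpg
http://example.com/images/three.jpg
EOF
hadoop fs -mkdir ./hipi                     # create the HDFS directory the job reads from
hadoop fs -put hipi.txt ./hipi/hipi.txt     # upload the URL list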

Error log:

> Output HIB: ./hipi/
> 14/01/12 02:39:08 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
> Found host successfully: 0
> Tried to get 1 nodes, got 1
> 14/01/12 02:39:09 INFO input.FileInputFormat: Total input paths to process : 1
> First n-1 nodes responsible for 3 images
> Last node responsible for 3 images
> 14/01/12 02:39:10 INFO mapred.JobClient: Running job: job_201401050058_0010
> 14/01/12 02:39:12 INFO mapred.JobClient:  map 0% reduce 0%
> 14/01/12 02:40:10 INFO mapred.JobClient: Task Id : attempt_201401050058_0010_m_000000_0, Status : FAILED
> Error: java.lang.ClassNotFoundException: hipi.imagebundle.HipiImageBundle
>     at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
>     at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
>     at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
>     at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
>     at Downloader$DownloaderMapper.map(Downloader.java:61)
>     at Downloader$DownloaderMapper.map(Downloader.java:1)
>     at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:140)
>     at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:672)
>     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:330)
>     at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:396)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>     at org.apache.hadoop
> attempt_201401050058_0010_m_000000_0: Temp path: ./hipi/0.hib.tmp
> 14/01/12 02:40:18 INFO mapred.JobClient: Task Id : attempt_201401050058_0010_m_000000_1, Status : FAILED
> Error: java.lang.ClassNotFoundException: hipi.imagebundle.HipiImageBundle
>     at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
>     at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
>     at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
>     at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
>     at Downloader$DownloaderMapper.map(Downloader.java:61)
>     at Downloader$DownloaderMapper.map(Downloader.java:1)
>     at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:140)
>     at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:672)
>     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:330)
>     at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:396)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>     at org.apache.hadoop
> attempt_201401050058_0010_m_000000_1: Temp path: ./hipi/0.hib.tmp
> 14/01/12 02:40:27 INFO mapred.JobClient: Task Id : attempt_201401050058_0010_m_000000_2, Status : FAILED
> Error: java.lang.ClassNotFoundException: hipi.imagebundle.HipiImageBundle
>     at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
>     at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
>     at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
>     at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
>     at Downloader$DownloaderMapper.map(Downloader.java:61)
>     at Downloader$DownloaderMapper.map(Downloader.java:1)
>     at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:140)
>     at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:672)
>     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:330)
>     at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:396)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>     at org.apache.hadoop
> attempt_201401050058_0010_m_000000_2: Temp path: ./hipi/0.hib.tmp
> 14/01/12 02:40:44 INFO mapred.JobClient: Job complete: job_201401050058_0010
> 14/01/12 02:40:44 INFO mapred.JobClient: Counters: 7
> 14/01/12 02:40:44 INFO mapred.JobClient:   Job Counters
> 14/01/12 02:40:44 INFO mapred.JobClient:     Failed map tasks=1
> 14/01/12 02:40:44 INFO mapred.JobClient:     Launched map tasks=4
> 14/01/12 02:40:44 INFO mapred.JobClient:     Data-local map tasks=4
> 14/01/12 02:40:44 INFO mapred.JobClient:     Total time spent by all maps in occupied slots (ms)=61598
> 14/01/12 02:40:44 INFO mapred.JobClient:     Total time spent by all reduces in occupied slots (ms)=0
> 14/01/12 02:40:44 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
> 14/01/12 02:40:44 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0

Best answer

It is better to use the command provided on the HIPI website; click HERE to visit the site. This is the relevant command:

./runDownloader.sh /hdfs/path/to/list.txt /hdfs/path/to/output.hib 100

Depending on which version of Hadoop you have, the path names in the build.xml file that ships with the HIPI package (used to build the jar file) will be incorrect. For example, I downloaded hadoop-1.2.1, but the HIPI library assumes a much older version. For that setup, the HIPI team provides this snippet:

<project basedir="." default="all">

  <target name="setup">
    <property name="hadoop.home" value="/hadoop/hadoop-0.20.1" />
    <property name="hadoop.version" value="0.20.1" />
    <property name="hadoop.classpath" value="${hadoop.home}/hadoop-${hadoop.version}-core.jar" />
    <property name="metadata.jar" value="3rdparty/metadata-extractor-2.3.1.jar" />
  </target>
...

For me, hadoop.classpath was incorrect. I had to change it to:

<target name="setup">
  <property name="hadoop.home" value="/your/path/hadoop-1.2.1" />
  <property name="hadoop.version" value="1.2.1" />
  <property name="hadoop.classpath" value="${hadoop.home}/hadoop-core-${hadoop.version}.jar" />
  <property name="metadata.jar" value="3rdparty/metadata-extractor-2.3.1.jar" />
</target>

All I had to do was move the "core" part of the jar name.
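A quick way to sanity-check the edit (just a sketch; the path below is the placeholder from the snippet above, so substitute your own hadoop.home) is to confirm that the jar the new hadoop.classpath resolves to actually exists on disk:

# Hadoop 1.x ships hadoop-core-<version>.jar, not hadoop-<version>-core.jar as the stock HIPI build.xml assumes
ls /your/path/hadoop-1.2.1/hadoop-core-1.2.1.jar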

After that, you should be able to run:

./runDownloader.sh /hdfs/path/to/list.txt /hdfs/path/to/output.hib 100

Assuming you know the paths on your Hadoop file system, this should compile successfully from the build.xml file and run the program.
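For completeness, here is a minimal end-to-end sketch, assuming the HIPI examples were rebuilt with ant against the corrected build.xml and reusing the same placeholder HDFS paths as above:

hadoop fs -mkdir /hdfs/path/to                    # create the input directory in HDFS
hadoop fs -put list.txt /hdfs/path/to/list.txt    # upload the list of image URLs
./runDownloader.sh /hdfs/path/to/list.txt /hdfs/path/to/output.hib 100
hadoop fs -ls /hdfs/path/to                       # the generated output.hib should show up here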

About java - Running the HIPI MapReduce program: we found a similar question on Stack Overflow: https://stackoverflow.com/questions/21078522/
