
hadoop - Why do I need to keep the hbase/lib folder in HDFS?


I have a primary cluster with some data in HBase that I want to replicate. I have created a backup cluster and taken a snapshot of the table I want to replicate. I am trying to export the snapshot from the source cluster to the destination one, but I am getting errors. I am running

./hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot mySnap -copy-to hdfs://198.58.88.11:9000/hbase

and as a result of the execution I get:

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/vagrant/hbase/lib/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/vagrant/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2015-03-05 10:58:43,155 INFO [main] snapshot.ExportSnapshot: Copy Snapshot Manifest
2015-03-05 10:58:43,596 INFO [main] Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
2015-03-05 10:58:43,597 INFO [main] jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
2015-03-05 10:58:43,890 INFO [main] mapreduce.JobSubmitter: Cleaning up the staging area file:/home/vagrant/hadoop/hadoop-datastore/mapred/staging/vagrant1489762780/.staging/job_local1489762780_0001
2015-03-05 10:58:43,892 ERROR [main] snapshot.ExportSnapshot: Snapshot export failed
java.io.FileNotFoundException: File does not exist: hdfs://namenode:9000/home/vagrant/hbase/lib/hbase-client-1.0.0.jar
at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1072)
at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1064)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1064)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:288)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:224)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:93)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(ClientDistributedCacheManager.java:57)
at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:265)
at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:301)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:389)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
at org.apache.hadoop.hbase.snapshot.ExportSnapshot.runCopyJob(ExportSnapshot.java:775)
at org.apache.hadoop.hbase.snapshot.ExportSnapshot.run(ExportSnapshot.java:934)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.hbase.snapshot.ExportSnapshot.innerMain(ExportSnapshot.java:1008)
at org.apache.hadoop.hbase.snapshot.ExportSnapshot.main(ExportSnapshot.java:1012)

So, as far as I understand, it is trying to find hbase-client-1.0.0.jar, but it is looking at hdfs://namenode:9000/home/vagrant/hbase/lib/hbase-client-1.0.0.jar instead of the local storage. Any idea why this happens?

Best Answer

In my case, the cause of the problem was misconfigured YARN and MapReduce settings. Judging by the log, the job was being submitted to the local job runner rather than to YARN (note the job_local1489762780_0001 job id and the file:/ staging path), and the scheme-less jar paths on the job's classpath ended up being qualified against fs.defaultFS, so the client looked for them in HDFS. After configuring YARN and MapReduce correctly, I was able to export the snapshot without any problems.
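As an illustration only (these commands are not part of the fix), the hadoop fs shell shows the same path-qualification behavior: a path without a scheme is resolved against fs.defaultFS, while an explicit file:// URI reads the local disk:

hadoop fs -ls /home/vagrant/hbase/lib/hbase-client-1.0.0.jar          # resolved against hdfs://namenode:9000
hadoop fs -ls file:///home/vagrant/hbase/lib/hbase-client-1.0.0.jar   # reads the local filesystem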

Make your mapred-site.xml look like this (mapreduce.framework.name=yarn is what sends the job to YARN instead of the local job runner):

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobtracker.address</name>
    <value>cluster2.master:8021</value>
  </property>
</configuration>

And in yarn-site.xml:

<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>cluster2.master</value>
    <description>The hostname of the RM.</description>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
    <description>Shuffle service that needs to be set for MapReduce to run.</description>
  </property>
</configuration>

cluster2.master should be changed according to your setup.
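After restarting YARN so that the new settings take effect, the export runs as a real MapReduce job. The restart paths below assume a tarball install under ~/hadoop (as in the question), and -mappers is an optional ExportSnapshot flag:

~/hadoop/sbin/stop-yarn.sh
~/hadoop/sbin/start-yarn.sh

./hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot mySnap -copy-to hdfs://198.58.88.11:9000/hbase -mappers 2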

Regarding hadoop - Why do I need to keep the hbase/lib folder in HDFS?, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/28877600/
