My setup is as follows:
For this Hadoop experiment I used two machines, pc720 (10.10.1.1) and pc719 (10.10.1.2).
The JDK (version 1.8.0_181) was installed via apt-get. Hadoop 2.7.1 was downloaded from https://archive.apache.org/dist/hadoop/common/hadoop-2.7.1/ and unpacked into /opt/.
Step 1:
I edited /etc/bash.bashrc and added:
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export PATH=${JAVA_HOME}/bin:${PATH}
export HADOOP_HOME=/opt/hadoop-2.7.1
export PATH=${HADOOP_HOME}/bin:${PATH}
export PATH=${HADOOP_HOME}/sbin:${PATH}
Then I ran "source /etc/profile".
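Since the exports above were added to /etc/bash.bashrc while the command sourced /etc/profile, it is worth confirming in the current shell that the variables actually took effect. A minimal, self-contained sketch (the paths simply mirror the exports in the question):

```shell
# Hypothetical paths, copied from the exports above.
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export HADOOP_HOME=/opt/hadoop-2.7.1
export PATH=${JAVA_HOME}/bin:${PATH}
export PATH=${HADOOP_HOME}/bin:${PATH}
export PATH=${HADOOP_HOME}/sbin:${PATH}

# After these exports, the Hadoop sbin dir should be first on PATH.
case "$PATH" in
  "${HADOOP_HOME}/sbin:"*) echo "PATH ok" ;;
  *) echo "PATH misconfigured" ;;
esac
```

If this prints "PATH misconfigured" in a fresh shell, the exports were not picked up, which would explain commands resolving to the wrong binaries.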
Step 2:
I configured the XML files.
slaves:
10.10.1.2
core-site.xml:
<property>
<name>fs.defautFS</name>
<value>hdfs://10.10.1.1:9000</value>
</property>
<property>
<name>io.file.buffer.size</name>
<value>131072</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>file:/root/hadoop_store/tmp</value>
</property>
hdfs-site.xml:
<property>
<name>dfs:replication</name>
<value>2</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/root/hadoop_store/hdfs/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/root/hadoop_store/hdfs/datanode</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>10.10.1.1:9001</value>
</property>
<property>
<name>dfs.namenode.rpc-address</name>
<value>10.10.1.1:8080</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
mapred-site.xml:
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobtracker.address</name>
<value>10.10.1.1:9002</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>10.10.1.1:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>10.10.1.1:19888</value>
</property>
<property>
<name>mapred.acls.enabled</name>
<value>true</value>
</property>
yarn-site.xml:
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>10.10.1.1</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>${yarn.resourcemanager.hostname}:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>${yarn.resourcemanager.hostname}:8030</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>${yarn.resourcemanager.hostname}:8088</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.https.address</name>
<value>${yarn.resourcemanager.hostname}:8090</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>${yarn.resourcemanager.hostname}:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>${yarn.resourcemanager.hostname}:8033</value>
</property>
<property>
<name>yarn.scheduler.maximum-allocation-mb</name>
<value>8182</value>
</property>
<property>
<name>yarn.nodemanager.vmem-pmem-ratio</name>
<value>2.1</value>
</property>
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>2048</value>
</property>
<property>
<name>yarn.nodemanager.vmem-check-enabled</name>
<value>false</value>
</property>
Step 3:
Under /root/ I created several directories:
mkdir hadoop_store
mkdir hadoop_store/hdfs
mkdir hadoop_store/tmp
mkdir hadoop_store/hdfs/datanode
mkdir hadoop_store/hdfs/namenode
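The same layout can be created in one command with mkdir -p. A small sketch, using a throwaway base directory as a stand-in for the /root used in the question:

```shell
# BASE is a hypothetical stand-in for /root from the question.
BASE=./hadoop_store_demo
mkdir -p "$BASE/tmp" "$BASE/hdfs/datanode" "$BASE/hdfs/namenode"

# List what was created under hdfs/.
ls "$BASE/hdfs"
```

mkdir -p also makes the step idempotent: re-running it on an existing tree is not an error.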
Then I changed into /opt/hadoop-2.7.1/bin and ran:
./hdfs namenode -format
cd ..
cd sbin/
./start-all.sh
./mr-jobhistory-daemon.sh start historyserver
After running jps, pc720 showed [screenshot omitted]
and pc719 showed [screenshot omitted].
At this point I thought my Hadoop 2.7.1 installation and configuration had succeeded. But then a problem appeared.
I listed the contents of /opt/hadoop-2.7.1/share/hadoop/mapreduce/ [listing omitted],
then ran hadoop jar hadoop-mapreduce-examples-2.7.1.jar pi 2 2.
The log is below:
Number of Maps = 2
Samples per Map = 2
Wrote input for Map #0
Wrote input for Map #1
Starting Job
18/11/02 08:14:48 INFO client.RMProxy: Connecting to ResourceManager at /10.10.1.1:8032
18/11/02 08:14:48 INFO input.FileInputFormat: Total input paths to process : 2
18/11/02 08:14:48 INFO mapreduce.JobSubmitter: number of splits:2
18/11/02 08:14:48 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1541144755485_0002
18/11/02 08:14:49 INFO impl.YarnClientImpl: Submitted application application_1541144755485_0002
18/11/02 08:14:49 INFO mapreduce.Job: The url to track the job: http://node-0-link-0:8088/proxy/application_1541144755485_0002/
18/11/02 08:14:49 INFO mapreduce.Job: Running job: job_1541144755485_0002
18/11/02 08:14:53 INFO mapreduce.Job: Job job_1541144755485_0002 running in uber mode : false
18/11/02 08:14:53 INFO mapreduce.Job: map 0% reduce 0%
18/11/02 08:14:53 INFO mapreduce.Job: Job job_1541144755485_0002 failed with state FAILED due to: Application application_1541144755485_0002 failed 2 times due to AM Container for appattempt_1541144755485_0002_000002 exited with exitCode: -1000
For more detailed output, check application tracking page:http://node-0-link-0:8088/cluster/app/application_1541144755485_0002Then, click on links to logs of each attempt.
Diagnostics: File file:/tmp/hadoop-yarn/staging/root/.staging/job_1541144755485_0002/job.splitmetainfo does not exist
java.io.FileNotFoundException: File file:/tmp/hadoop-yarn/staging/root/.staging/job_1541144755485_0002/job.splitmetainfo does not exist
at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:606)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:819)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:596)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:421)
at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:253)
at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:63)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:361)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:359)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:358)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:62)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Failing this attempt. Failing the application.
18/11/02 08:14:53 INFO mapreduce.Job: Counters: 0
Job Finished in 5.08 seconds
java.io.FileNotFoundException: File file:/opt/hadoop-2.7.1/share/hadoop/mapreduce/QuasiMonteCarlo_1541168087724_1532373667/out/reduce-out does not exist
at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:606)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:819)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:596)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:421)
at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1752)
at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1776)
at org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:314)
at org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:354)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.examples.QuasiMonteCarlo.main(QuasiMonteCarlo.java:363)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
I have tried many solutions, but none of them seem to work. This problem has puzzled me for about a week. Is there any mistake in my configuration? What can I do? Please help me.
Thanks!
Best answer
I believe you missed a trailing / on fs.defaultFS; the value would be <value>hdfs://10.10.1.1:9000/</value>.
Note that /tmp/hadoop-yarn/staging/root/.staging/job_1541144755485_0002/job.splitmetainfo, which lives under the directory configured by yarn.app.mapreduce.am.staging-dir, is normally kept in HDFS. That gives us a hint that this is an HDFS problem.
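Putting that together, the relevant core-site.xml entry would look like the following sketch. Note also that the question's snippet spells the key fs.defautFS; the canonical property name is fs.defaultFS, and a misspelled key is silently ignored, leaving the default local file:/// filesystem, which is consistent with the file:/tmp/... paths in the stack traces above:

```xml
<property>
<name>fs.defaultFS</name>
<value>hdfs://10.10.1.1:9000/</value>
</property>
```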
来源:https://www.cloudera.com/documentation/enterprise/5-15-x/topics/cdh_ig_hdfs_cluster_deploy.html
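A quick way to catch key-name typos like this is to grep the config for the canonical name. A throwaway sketch that writes a local copy of the question's snippet rather than touching a real $HADOOP_HOME (the demo file name is hypothetical):

```shell
# Write a local copy of the snippet from the question (note the key spelling).
cat > core-site-demo.xml <<'EOF'
<property>
<name>fs.defautFS</name>
<value>hdfs://10.10.1.1:9000</value>
</property>
EOF

# Check for the canonical key name, fs.defaultFS.
if grep -q '<name>fs.defaultFS</name>' core-site-demo.xml; then
  echo "fs.defaultFS is set"
else
  echo "fs.defaultFS missing or misspelled"
fi
```

Run against the question's snippet, this reports the key as missing or misspelled, which is exactly the silent failure mode described above.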
About: hadoop - running a Hadoop example and hitting ".staging/job_1541144755485_0002/job.splitmetainfo does not exist" - what should I do? We found a similar question on Stack Overflow: https://stackoverflow.com/questions/53147682/