
docker - ERROR: Cannot set priority of journalnode process 6520


I have three physical nodes with Docker installed, and I have configured a high-availability Hadoop cluster across them. The configuration is as follows:
core-site.xml:

    <property>
      <name>fs.defaultFS</name>
      <value>hdfs://mycluster</value>
    </property>

    <property>
      <name>dfs.journalnode.edits.dir</name>
      <value>/tmp/hadoop/dfs/jn</value>
    </property>

    <property>
      <name>fs.default.name</name>
      <value>hdfs://mycluster</value>
    </property>

    <property>
      <name>ha.zookeeper.quorum</name>
      <value>10.32.0.1:2181,10.32.0.2:2181,10.32.0.3:2181</value>
    </property>

hdfs-site.xml:

    <property>
      <name>dfs.nameservices</name>
      <value>mycluster</value>
    </property>
    <property>
      <name>dfs.ha.namenodes.mycluster</name>
      <value>nn1,nn2</value>
    </property>
    <property>
      <name>dfs.namenode.rpc-address.mycluster.nn1</name>
      <value>10.32.0.1:8020</value>
    </property>
    <property>
      <name>dfs.namenode.rpc-address.mycluster.nn2</name>
      <value>10.32.0.2:8020</value>
    </property>
    <property>
      <name>dfs.namenode.http-address.mycluster.nn1</name>
      <value>10.32.0.1:50070</value>
    </property>
    <property>
      <name>dfs.namenode.http-address.mycluster.nn2</name>
      <value>10.32.0.2:50070</value>
    </property>
    <property>
      <name>dfs.client.failover.proxy.provider.mycluster</name>
      <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <property>
      <name>dfs.namenode.shared.edits.dir</name>
      <value>qjournal://10.32.0.1:8485;10.32.0.2:8485;10.32.0.3:8485/mycluster</value>
    </property>
    <property>
      <name>dfs.permissions.enable</name>
      <value>false</value>
    </property>
    <property>
      <name>dfs.ha.fencing.methods</name>
      <value>sshfence</value>
    </property>
    <property>
      <name>dfs.ha.fencing.ssh.private-key-files</name>
      <value>/home/hdfs/.ssh/id_rsa</value>
    </property>
    <property>
      <name>dfs.ha.fencing.ssh.connect-timeout</name>
      <value>30000</value>
    </property>
    <property>
      <name>dfs.permissions.superusergroup</name>
      <value>hdfs</value>
    </property>
    <property>
      <name>dfs.replication</name>
      <value>1</value>
    </property>
    <property>
      <name>dfs.namenode.name.dir</name>
      <value>file:///usr/local/hadoop_store/hdfs/namenode</value>
    </property>
    <property>
      <name>dfs.datanode.data.dir</name>
      <value>file:///usr/local/hadoop_store/hdfs/datanode</value>
    </property>
    <property>
      <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
      <value>false</value>
    </property>
    <property>
      <name>dfs.ha.automatic-failover.enabled</name>
      <value>true</value>
    </property>

I have set up an hdfs user and passwordless SSH. When I try to start the journalnode (in order to format the namenode) with the following command:
    sudo /opt/hadoop/bin/hdfs --daemon start journalnode

I get this error:

ERROR: Cannot set priority of journalnode process 6520



What is wrong with my configuration that causes this error?

Thanks in advance.

Best Answer

The problem is solved. I checked the logs under /opt/hadoop/logs/*.log and saw this line:

Cannot make directory of /tmp/hadoop/dfs/journalnode.
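
The "ERROR: Cannot set priority of journalnode process ..." message printed on the console is a generic one from the daemon launcher; the actual cause usually only shows up in the journalnode's own log file, as it did here. A minimal sketch of how to dig it out, assuming Hadoop's default log file naming (hadoop-<user>-journalnode-<host>.log) under /opt/hadoop/logs:

    # Assumption: default Hadoop log file naming; adjust the glob to your installation.
    ls -l /opt/hadoop/logs/
    # Show the most recent errors/exceptions recorded by the journalnode daemon
    grep -iE "error|exception|cannot" /opt/hadoop/logs/hadoop-*-journalnode-*.log | tail -n 20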



First, I put the journalnode directory configuration into hdfs-site.xml and created the journalnode directory. Then I started the journalnode again and ran into this error:

The directory was not writable, so I ran these commands to make it writable:


    chmod 777 /tmp/hadoop/dfs/journalnode
    chown -R root /tmp/hadoop/dfs/journalnode

After that, I was able to start the journalnode.
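
For reference, a minimal sketch of what the relocated journalnode entry could look like in hdfs-site.xml, using the property name from the question and the directory path reported in the log (adjust the path to your own layout). The directory must exist and be writable by the user running the journalnode on every host listed in dfs.namenode.shared.edits.dir:

    <property>
      <name>dfs.journalnode.edits.dir</name>
      <!-- Path taken from the log message above; a persistent location is preferable
           in production, since /tmp may be cleared on reboot. -->
      <value>/tmp/hadoop/dfs/journalnode</value>
    </property>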

Regarding "docker - ERROR: Cannot set priority of journalnode process 6520", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/56052827/
