hadoop - Hadoop: ResourceManager not running on localhost

Reposted. Author: 行者123. Updated: 2023-12-02 19:16:40

I cannot access http://localhost:8088/ on Hadoop 3.1.1.
Here is what I did:

  • bin/hdfs namenode -format
  • sbin/start-dfs.sh
  • bin/hdfs dfs -mkdir /user
  • bin/hdfs dfs -mkdir /user/username

  • The NameNode web UI works, but the ResourceManager web UI does not.
  • core-site.xml:

    <configuration>
      <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
      </property>
    </configuration>
  • hdfs-site.xml:

    <configuration>
      <property>
        <name>dfs.replication</name>
        <value>1</value>
      </property>
    </configuration>
  • mapred-site.xml:

    <configuration>
      <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
      </property>
    </configuration>

    <configuration>
      <property>
        <name>mapreduce.application.classpath</name>
        <value>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*</value>
      </property>
    </configuration>
  • yarn-site.xml:

    <configuration>
      <property>
        <name>yarn.resourcemanager.address</name>
        <value>127.0.0.1:8032</value>
      </property>
      <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>127.0.0.1:8030</value>
      </property>
      <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>127.0.0.1:8031</value>
      </property>
      <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
      </property>
      <property>
        <name>yarn.nodemanager.env-whitelist</name>
        <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
      </property>
    </configuration>
  • .bash_profile:

    export PATH="/usr/local/sbin:$PATH"

    export SCALA_HOME=/usr/local/scala
    export JAVA_HOME=/Library/Java/JavaVirtualMachines/openjdk-11.0.1.jdk/Contents/Home
    export SPARK_HOME=/usr/local/spark

    export HADOOP_HOME=/usr/local/Cellar/hadoop/3.1.1/libexec/
    export HADOOP_CONF_DIR=/usr/local/Cellar/hadoop/3.1.1/libexec/etc/hadoop

    export PATH=$PATH:/usr/local/hadoop/bin
    export PATH=$PATH:/usr/local/spark/bin
    export PATH=$PATH:/usr/local/scala/bin

    export HADOOP_MAPRED_HOME=$HADOOP_HOME
    export HADOOP_COMMON_HOME=$HADOOP_HOME
    export HADOOP_HDFS_HOME=$HADOOP_HOME
    export YARN_HOME=$HADOOP_HOME
    export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
    export HADOOP_OPTS=-Djava.library.path=$HADOOP_HOME/lib

    export PATH=$HADOOP_HOME/bin:$PATH
    export PATH=$HADOOP_HOME/sbin:$PATH
    The problem is that when I run sbin/start-yarn.sh, this is the output:

        Starting resourcemanager on []
        Starting nodemanagers

    Shouldn't it say: Starting resourcemanagers on [localhost]?
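    As a quick sanity check, the ResourceManager address actually configured can be read back out of yarn-site.xml with plain grep/sed. A minimal sketch; it writes a throwaway sample file in the current directory purely for illustration, so in practice point the path at your real $HADOOP_CONF_DIR/yarn-site.xml instead:

    ```shell
    # Sketch: extract the value of yarn.resourcemanager.address from a yarn-site.xml.
    # The sample file below is illustrative; use your own $HADOOP_CONF_DIR/yarn-site.xml.
    CONF=./yarn-site.xml
    cat > "$CONF" <<'EOF'
    <configuration>
      <property>
        <name>yarn.resourcemanager.address</name>
        <value>127.0.0.1:8032</value>
      </property>
    </configuration>
    EOF

    # Match the property name, take the following <value> line, strip the tags.
    grep -A1 'yarn.resourcemanager.address' "$CONF" \
      | sed -n 's/.*<value>\(.*\)<\/value>.*/\1/p'
    # → 127.0.0.1:8032
    ```

    Run against the question's yarn-site.xml this prints 127.0.0.1:8032, i.e. the addresses were set explicitly rather than derived from yarn.resourcemanager.hostname.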

    Best Answer

    By default, the hostname in the documentation is 0.0.0.0, not localhost. The properties below are listed with their default values; if you want to change them, configure them explicitly to override the defaults.

    yarn.resourcemanager.hostname
        Default: 0.0.0.0
        The hostname of the RM.

    yarn.resourcemanager.address
        Default: ${yarn.resourcemanager.hostname}:8032
        The address of the applications manager interface in the RM.
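    Following that answer, one way to have the ResourceManager bind to and report [localhost] is to set the hostname property once and let the per-service addresses derive from it with their standard ports. A sketch of the yarn-site.xml change (not the asker's confirmed fix):

    ```xml
    <!-- Sketch: pin the ResourceManager host once; the address, scheduler, and
         resource-tracker addresses then default to this host, so the explicit
         127.0.0.1 overrides in the question's config could be removed. -->
    <property>
      <name>yarn.resourcemanager.hostname</name>
      <value>localhost</value>
    </property>
    ```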

    Regarding "hadoop - Hadoop: ResourceManager not running on localhost", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/54188062/
