hadoop - No Namenode or Datanode or Secondary NameNode to stop


I installed Hadoop on my Ubuntu 12.04 machine by following the steps in the link below.

http://www.bogotobogo.com/Hadoop/BigData_hadoop_Install_on_ubuntu_single_node_cluster.php

Everything installed successfully, but when I run start-all.sh, only some of the services are running.

wanderer@wanderer-Lenovo-IdeaPad-S510p:~$ su - hduse
Password:

hduse@wanderer-Lenovo-IdeaPad-S510p:~$ cd /usr/local/hadoop/sbin

hduse@wanderer-Lenovo-IdeaPad-S510p:/usr/local/hadoop/sbin$ start-all.sh

This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [localhost]
hduse@localhost's password:
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hduse-namenode-wanderer-Lenovo-IdeaPad-S510p.out
hduse@localhost's password:
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hduse-datanode-wanderer-Lenovo-IdeaPad-S510p.out
Starting secondary namenodes [0.0.0.0]
hduse@0.0.0.0's password:
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hduse-secondarynamenode-wanderer-Lenovo-IdeaPad-S510p.out
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hduse-resourcemanager-wanderer-Lenovo-IdeaPad-S510p.out
hduse@localhost's password:
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hduse-nodemanager-wanderer-Lenovo-IdeaPad-S510p.out

hduse@wanderer-Lenovo-IdeaPad-S510p:/usr/local/hadoop/sbin$ jps
7940 Jps
7545 ResourceManager
7885 NodeManager

And when I stop the services by running the stop-all.sh script:

hduse@wanderer-Lenovo-IdeaPad-S510p:/usr/local/hadoop/sbin$ stop-all.sh
This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh
Stopping namenodes on [localhost]
hduse@localhost's password:
localhost: no namenode to stop
hduse@localhost's password:
localhost: no datanode to stop
Stopping secondary namenodes [0.0.0.0]
hduse@0.0.0.0's password:
0.0.0.0: no secondarynamenode to stop
stopping yarn daemons
stopping resourcemanager
hduse@localhost's password:
localhost: stopping nodemanager
no proxyserver to stop

My configuration files:

  1. Edit the .bashrc file

    vi ~/.bashrc

    #HADOOP VARIABLES START
    export JAVA_HOME=/usr/lib/jvm/java-8-oracle/
    export HADOOP_INSTALL=/usr/local/hadoop
    export PATH=$PATH:$HADOOP_INSTALL/bin
    export PATH=$PATH:$HADOOP_INSTALL/sbin
    export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
    export HADOOP_COMMON_HOME=$HADOOP_INSTALL
    export HADOOP_HDFS_HOME=$HADOOP_INSTALL
    export YARN_HOME=$HADOOP_INSTALL
    export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_INSTALL/lib/native
    export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib"
    export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
    export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
    #HADOOP VARIABLES END
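
    A new shell will not see these variables until .bashrc is re-read. A quick sanity check (a sketch; the expected values assume the settings above):

    source ~/.bashrc
    echo $HADOOP_INSTALL   # should print /usr/local/hadoop
    which hadoop           # should resolve to /usr/local/hadoop/bin/hadoop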
  2. hdfs-site.xml

    vi /usr/local/hadoop/etc/hadoop/hdfs-site.xml

    <configuration>
      <property>
        <name>dfs.replication</name>
        <value>1</value>
        <description>Default block replication.
        The actual number of replications can be specified when the file is created.
        The default is used if replication is not specified in create time.
        </description>
      </property>
      <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/usr/local/hadoop_store/hdfs/namenode</value>
      </property>
      <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/usr/local/hadoop_store/hdfs/datanode</value>
      </property>
    </configuration>
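
    The directories named in dfs.namenode.name.dir and dfs.datanode.data.dir must exist and be writable by the user that starts the daemons, or the NameNode and DataNode exit right after launch. A minimal sketch, assuming hduse is that user and hadoop is its group (the group name is an assumption):

    sudo mkdir -p /usr/local/hadoop_store/hdfs/namenode
    sudo mkdir -p /usr/local/hadoop_store/hdfs/datanode
    sudo chown -R hduse:hadoop /usr/local/hadoop_store   # user:group assumed
    ls -ld /usr/local/hadoop_store/hdfs/namenode         # verify owner and mode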
  3. hadoop-env.sh

    vi /usr/local/hadoop/etc/hadoop/hadoop-env.sh

    export JAVA_HOME=/usr/lib/jvm/java-8-oracle/
    export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/etc/hadoop"}

    for f in $HADOOP_HOME/contrib/capacity-scheduler/*.jar; do
      if [ "$HADOOP_CLASSPATH" ]; then
        export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$f
      else
        export HADOOP_CLASSPATH=$f
      fi
    done

    export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true"
    export HADOOP_NAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_NAMENODE_OPTS"
    export HADOOP_DATANODE_OPTS="-Dhadoop.security.logger=ERROR,RFAS $HADOOP_DATANODE_OPTS"

    export HADOOP_SECONDARYNAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_SECONDARYNAMENODE_OPTS"

    export HADOOP_NFS3_OPTS="$HADOOP_NFS3_OPTS"
    export HADOOP_PORTMAP_OPTS="-Xmx512m $HADOOP_PORTMAP_OPTS"

    # The following applies to multiple commands (fs, dfs, fsck, distcp etc)
    export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS"
    export HADOOP_SECURE_DN_USER=${HADOOP_SECURE_DN_USER}

    export HADOOP_SECURE_DN_LOG_DIR=${HADOOP_LOG_DIR}/${HADOOP_HDFS_USER}
    export HADOOP_PID_DIR=${HADOOP_PID_DIR}
    export HADOOP_SECURE_DN_PID_DIR=${HADOOP_PID_DIR}

    # A string representing this instance of hadoop. $USER by default.
    export HADOOP_IDENT_STRING=$USER
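
    If JAVA_HOME here does not point at a real JDK, every daemon dies on startup. A quick check against the path used above:

    /usr/lib/jvm/java-8-oracle/bin/java -version   # should print 1.8.0_66
    hadoop version                                 # confirms Hadoop can launch a JVM at all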
  4. core-site.xml

    vi /usr/local/hadoop/etc/hadoop/core-site.xml
    <configuration>
      <property>
        <name>hadoop.tmp.dir</name>
        <value>/app/hadoop/tmp</value>
        <description>A base for other temporary directories.</description>
      </property>

      <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:54310</value>
        <description>The name of the default file system. A URI whose
        scheme and authority determine the FileSystem implementation. The
        uri's scheme determines the config property (fs.SCHEME.impl) naming
        the FileSystem implementation class. The uri's authority is used to
        determine the host, port, etc. for a filesystem.</description>
      </property>
    </configuration>
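
    hadoop.tmp.dir is the base directory HDFS falls back to for its data, so /app/hadoop/tmp must also exist and be writable. A sketch, again assuming hduse:hadoop ownership:

    sudo mkdir -p /app/hadoop/tmp
    sudo chown hduse:hadoop /app/hadoop/tmp   # user:group assumed
    sudo chmod 750 /app/hadoop/tmp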
  5. mapred-site.xml

    vi /usr/local/hadoop/etc/hadoop/mapred-site.xml
    <configuration>
      <property>
        <name>mapred.job.tracker</name>
        <value>localhost:54311</value>
        <description>The host and port that the MapReduce job tracker runs
        at. If "local", then jobs are run in-process as a single map
        and reduce task.
        </description>
      </property>
    </configuration>

    $ javac -version

    javac 1.8.0_66

    $ java -version

    java version "1.8.0_66"  
    Java(TM) SE Runtime Environment (build 1.8.0_66-b17)
    Java HotSpot(TM) 64-Bit Server VM (build 25.66-b17, mixed mode)

I am new to Hadoop and cannot figure out where the problem is. Where can I find the log files for the JobTracker and NameNode so that I can trace the services?
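
For reference, the startup output above already shows where the logs go: each daemon writes a hadoop-<user>-<daemon>-<host>.out file and a matching .log file under /usr/local/hadoop/logs. One way to see why the NameNode died (a sketch, using the paths from this setup):

    cd /usr/local/hadoop/logs
    ls -lt | head                            # newest log files first
    tail -n 50 hadoop-hduse-namenode-*.log   # look for the fatal exception near the end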

Best Answer

If it is not an ssh problem, try the following:

  1. Delete everything from the temporary directory (rm -Rf /app/hadoop/tmp) and format the namenode server with bin/hadoop namenode -format. Start the namenode and datanode with bin/start-dfs.sh. Type jps at the command line to check whether the nodes are running (the combined commands are sketched after this list).

  2. Check with ls -ld directory whether hduser has permission to write to the hadoop_store/hdfs/namenode and datanode directories.

    You can change the permissions with sudo chmod 777 /hadoop_store/hdfs/namenode/
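
Putting the answer's steps together, a minimal recovery sequence might look like this (a sketch: the paths come from the config above, hduse:hadoop ownership is an assumption, and formatting the namenode erases HDFS metadata, so only do it on a fresh install):

    stop-dfs.sh && stop-yarn.sh              # ignore any "no ... to stop" messages
    sudo rm -rf /app/hadoop/tmp              # clear hadoop.tmp.dir
    sudo mkdir -p /app/hadoop/tmp
    sudo chown hduse:hadoop /app/hadoop/tmp  # user:group assumed
    hadoop namenode -format                  # deprecated form used in the answer; hdfs namenode -format also works
    start-dfs.sh
    jps                                      # NameNode, DataNode, SecondaryNameNode should now be listed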

Regarding "hadoop - No Namenode or Datanode or Secondary NameNode to stop", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/33772495/
