
hadoop - How do I start a datanode? (start-dfs.sh script not found)

Reposted. Author: 可可西里. Updated: 2023-11-01 16:50:58

We are setting up automated deployments on a headless system, so using a GUI is not an option here.

Where is the start-dfs.sh script for hdfs in the Hortonworks Data Platform? CDH/Cloudera packages these files under the hadoop/sbin directory. However, when we search for these scripts under HDP, they are nowhere to be found:

$ pwd
/usr/hdp/current

Which scripts do exist in HDP?

[stack@s1-639016 current]$ find -L . -name \*.sh
./hadoop-hdfs-client/sbin/refresh-namenodes.sh
./hadoop-hdfs-client/sbin/distribute-exclude.sh
./hadoop-hdfs-datanode/sbin/refresh-namenodes.sh
./hadoop-hdfs-datanode/sbin/distribute-exclude.sh
./hadoop-hdfs-nfs3/sbin/refresh-namenodes.sh
./hadoop-hdfs-nfs3/sbin/distribute-exclude.sh
./hadoop-hdfs-secondarynamenode/sbin/refresh-namenodes.sh
./hadoop-hdfs-secondarynamenode/sbin/distribute-exclude.sh
./hadoop-hdfs-namenode/sbin/refresh-namenodes.sh
./hadoop-hdfs-namenode/sbin/distribute-exclude.sh
./hadoop-hdfs-journalnode/sbin/refresh-namenodes.sh
./hadoop-hdfs-journalnode/sbin/distribute-exclude.sh
./hadoop-hdfs-portmap/sbin/refresh-namenodes.sh
./hadoop-hdfs-portmap/sbin/distribute-exclude.sh
./hadoop-client/sbin/hadoop-daemon.sh
./hadoop-client/sbin/slaves.sh
./hadoop-client/sbin/hadoop-daemons.sh
./hadoop-client/etc/hadoop/hadoop-env.sh
./hadoop-client/etc/hadoop/kms-env.sh
./hadoop-client/etc/hadoop/mapred-env.sh
./hadoop-client/conf/hadoop-env.sh
./hadoop-client/conf/kms-env.sh
./hadoop-client/conf/mapred-env.sh
./hadoop-client/libexec/kms-config.sh
./hadoop-client/libexec/init-hdfs.sh
./hadoop-client/libexec/hadoop-layout.sh
./hadoop-client/libexec/hadoop-config.sh
./hadoop-client/libexec/hdfs-config.sh
./zookeeper-client/conf/zookeeper-env.sh
./zookeeper-client/bin/zkCli.sh
./zookeeper-client/bin/zkCleanup.sh
./zookeeper-client/bin/zkServer-initialize.sh
./zookeeper-client/bin/zkEnv.sh
./zookeeper-client/bin/zkServer.sh

Note: there are zero start/stop shell scripts.

I am particularly interested in the start-dfs.sh script that starts the namenode(s), journalnode, and datanodes.

Best Answer

How to start a DataNode

su - hdfs -c "/usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start datanode"

Github - Hortonworks Start Scripts
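The same hadoop-daemon.sh invocation can be repeated for the other HDFS daemons the question asks about. A minimal dry-run sketch, assuming the /usr/lib/hadoop and /etc/hadoop/conf paths from the answer above (it only prints the commands; remove the echo to actually run them as root):

```shell
HADOOP_BIN="/usr/lib/hadoop/bin/hadoop-daemon.sh"  # assumed HDP layout
HADOOP_CONF="/etc/hadoop/conf"                     # assumed config dir

CMDS=""
for daemon in namenode journalnode datanode; do
  CMD="su - hdfs -c \"$HADOOP_BIN --config $HADOOP_CONF start $daemon\""
  echo "$CMD"    # dry run: print the command instead of executing it
  CMDS="$CMDS$CMD
"
done
```

On a real cluster the daemons would need to start in dependency order (journalnodes before the namenode in an HA setup), which is exactly the ordering start-dfs.sh normally handles.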

Update

Decided to go looking for it myself.

  1. Used Ambari to install a single node with HDP 2.2 (a) and HDP 2.3 (b)
  2. sudo find / -name \*.sh | grep start
  3. Found:

    (a) /usr/hdp/2.2.8.0-3150/hadoop/src/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/start-dfs.sh

    Oddly, it is not present under /usr/hdp/current, which should be the symbolic link.

    (b) /hadoop/yarn/local/filecache/10/mapreduce.tar.gz/hadoop/sbin/start-dfs.sh
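In case (b) the start scripts sit inside a tarball rather than on the filesystem, so they have to be listed and extracted with tar. A sketch of those two commands, using a simulated archive with the same hadoop/sbin layout (the real /hadoop/yarn/local/filecache path only exists on a live cluster):

```shell
# Build a stand-in mapreduce.tar.gz with the layout found in (b).
workdir="$(mktemp -d)"
mkdir -p "$workdir/hadoop/sbin"
printf '#!/usr/bin/env bash\n' > "$workdir/hadoop/sbin/start-dfs.sh"
tar -czf "$workdir/mapreduce.tar.gz" -C "$workdir" hadoop

# The same two commands apply to the real tarball:
tar -tzf "$workdir/mapreduce.tar.gz" | grep start-dfs.sh   # locate the script
outdir="$(mktemp -d)"
tar -xzf "$workdir/mapreduce.tar.gz" -C "$outdir" hadoop/sbin/start-dfs.sh
```

Note that scripts extracted this way still expect a matching hadoop-config.sh and environment, so running them outside the original installation may need extra setup.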

Regarding "hadoop - How do I start a datanode? (start-dfs.sh script not found)", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/33293631/
