
hadoop - Syntax error in the hadoop-env.sh file


I decided to use Hadoop 2.5.0. I set HADOOP_PREFIX, but when I try to check the version or format the namenode, this error occurs:

[hdfs@master1 bin]$ ./hadoop version
: command not found.5.0/etc/hadoop/hadoop-env.sh: line 16:
: command not found.5.0/etc/hadoop/hadoop-env.sh: line 18:
: command not found.5.0/etc/hadoop/hadoop-env.sh: line 23:
: command not found.5.0/etc/hadoop/hadoop-env.sh: line 29:
: command not found.5.0/etc/hadoop/hadoop-env.sh: line 30:
: command not found.5.0/etc/hadoop/hadoop-env.sh: line 32:
'usr/local/hadoop-2.5.0/etc/hadoop/hadoop-env.sh: line 34: syntax error near unexpected token `do
'usr/local/hadoop-2.5.0/etc/hadoop/hadoop-env.sh: line 34: `for f in $HADOOP_HOME/contrib/capacity-scheduler/*.jar; do
Error: Could not find or load main class org.apache.hadoop.util.VersionInfo

OS: CentOS 6.5.

Mode: fully distributed, 4 nodes: 1 master + 3 slaves.
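
The ": command not found" lines with the mangled path are the shape of error that stray carriage returns in a shell script typically produce, e.g. after the file was edited on Windows. One quick way to inspect the lines the shell complains about (the path is taken from the error output above):

sed -n '14,35p' /usr/local/hadoop-2.5.0/etc/hadoop/hadoop-env.sh | cat -A   # CR characters show up as ^M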

Best Answer

Please verify the steps below:

Step-1 Create a dedicated user (hduser) for Hadoop on all three machines from the terminal

Command-1 sudo addgroup hadoop
Command-2 sudo adduser --ingroup hadoop hduser
Command-3 sudo adduser hduser sudo
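
Note that addgroup, adduser and (in Step-4 below) apt-get are Debian/Ubuntu commands. On CentOS 6.5, which the asker is running, the rough equivalents would be:

sudo groupadd hadoop
sudo useradd -m -g hadoop hduser
sudo passwd hduser
sudo usermod -aG wheel hduser   # grants sudo, assuming wheel is enabled in /etc/sudoers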



Step-2 Log in as the user (hduser) on all three machines from the terminal

Command-1 su hduser



Step-3 Create passwordless ssh between all the machines

Command-1 ssh-keygen -t rsa -P ""

Command-2 cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys

Command-3 ssh-copy-id -i $HOME/.ssh/id_rsa.pub hduser@slave
hduser@slave means hduser@<IP of the slave> (e.g. hduser@192.168.213.25)
Execute Command-3 from the master machine for each of the slave machines
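
To verify the keys were copied correctly, an ssh from the master to each slave should now succeed without a password prompt, e.g.:

ssh hduser@192.168.213.25 hostname   # should print the slave's hostname without asking for a password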



Step-4 Set up the Java JDK on all the machines

Command-1 sudo apt-get install sun-java6-jdk

Command-2 sudo update-java-alternatives -s java-6-sun
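
The sun-java6-jdk package is Ubuntu-specific and has long been withdrawn; any Java 6 or 7 JDK works for Hadoop 2.x. On CentOS 6.5 (the asker's OS) an equivalent install would be:

sudo yum install -y java-1.7.0-openjdk-devel   # OpenJDK 7 from the stock repositories
java -version                                  # confirm the JDK is on the PATH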



Step-5 Download the hadoop-2.x tarball and extract it on all machines

Command-1 cd /usr/local

Command-2 sudo tar xzf hadoop-2.x.tar.gz

Command-3 sudo mv hadoop-2.x hadoop

Command-4 sudo chown -R hduser:hadoop hadoop
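
A quick check that the extraction and ownership change took effect on each machine:

ls -ld /usr/local/hadoop   # should list hduser:hadoop as owner and group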


Step-6 Open $HOME/.bashrc on all machines

Command-1 vi $HOME/.bashrc

Step-7 Add the following lines to the end of the opened .bashrc file on all machines
(Find the location of JAVA_HOME on each machine; it should be set accordingly on each one)
export JAVA_HOME=/usr/local/java/jdk1.6.0_20
export HADOOP_INSTALL=/usr/local/hadoop
export PATH=$JAVA_HOME/bin:$HADOOP_INSTALL/bin:$PATH
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export YARN_HOME=$HADOOP_INSTALL
export HADOOP_CONF_DIR=$HADOOP_INSTALL/etc/hadoop
export YARN_CONF_DIR=$HADOOP_INSTALL/etc/hadoop
Press Esc and type :wq! to save the file

Now execute this command

Command-1 source .bashrc
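
To confirm the new environment took effect in the current shell:

echo $HADOOP_INSTALL   # should print /usr/local/hadoop
which hadoop           # should resolve to /usr/local/hadoop/bin/hadoop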


Step-8 Update the /etc/hosts file on all machines

I. Add the IP and hostname of every machine to the /etc/hosts file on each machine,
e.g.:

192.168.213.25 N337
192.168.213.94 N336
192.168.213.47 UBUNTU
II. Comment out all other entries (see the example layout below)
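
For the asker's 1-master/3-slave layout, the file would look something like this on every node (the IPs and slave names below are placeholders, not from the original post; master1 matches the shell prompt in the question):

192.168.213.10 master1
192.168.213.11 slave1
192.168.213.12 slave2
192.168.213.13 slave3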



Step-9 Tweak Config Files on all machines

I.hadoop-config.sh

Command-1 cd $HADOOP_INSTALL
Command-2 vi libexec/hadoop-config.sh

Now add the following line at the start of hadoop-config.sh (use the appropriate JAVA_HOME location for each machine)
export JAVA_HOME=/usr/local/java/jdk1.6.0_20

II.yarn-env.sh

Command-1 cd $HADOOP_INSTALL/etc/hadoop
Command-2 vi yarn-env.sh

#Now add the following lines

export JAVA_HOME=/usr/local/java/jdk1.6.0_20
export HADOOP_INSTALL=/usr/local/hadoop
export PATH=$JAVA_HOME/bin:$HADOOP_INSTALL/bin:$PATH
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export YARN_HOME=$HADOOP_INSTALL
export HADOOP_CONF_DIR=$HADOOP_INSTALL/etc/hadoop
export YARN_CONF_DIR=$HADOOP_INSTALL/etc/hadoop


#Press Esc and type :wq! to save the file

III.core-site.xml

Command-1 vi $HADOOP_INSTALL/etc/hadoop/core-site.xml

Add the following lines (replace N364U with your master's hostname):

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://N364U:9000</value>
  </property>
</configuration>
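
A side note: in Hadoop 2.x, fs.default.name is deprecated in favor of fs.defaultFS. The old key still works, but the current spelling of the same setting would be:

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://N364U:9000</value>
</property>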

IV.yarn-site.xml

Command-1 vi $HADOOP_INSTALL/etc/hadoop/yarn-site.xml

Add the following lines (replace N337 with your master's hostname):


<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>N337:8025</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>N337:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>N337:8040</value>
  </property>
</configuration>

V.hdfs-site.xml
Add the following lines
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>
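
Besides the replication factor, it is common to pin the HDFS storage directories with the two standard Hadoop 2.x keys below, since they otherwise default to directories under /tmp and are lost on reboot. The paths shown are illustrative, not from the original answer; the entries go inside the same <configuration> element:

<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:///usr/local/hadoop/hdfs/namenode</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:///usr/local/hadoop/hdfs/datanode</value>
</property>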

VI.mapred-site.xml
Add the following lines

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
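
Note that the Hadoop 2.x distribution ships only a template for this file, so it may have to be created first:

cd $HADOOP_INSTALL/etc/hadoop
cp mapred-site.xml.template mapred-site.xml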

Don't forget to save all the configuration files, and cross-check them.


Step-10 Add the slaves file on the master machine only

Command-1 vi $HADOOP_INSTALL/etc/hadoop/slaves

Add the IPs of the two slave machines in this file on the master machine

eg:

192.168.213.94
192.168.213.47



Step-11 Format namenode once

Command-1 cd $HADOOP_INSTALL

Command-2 bin/hdfs namenode -format


Step-12 Now start hadoop

Command-1 cd $HADOOP_INSTALL
Command-2 sbin/start-all.sh
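
start-all.sh still works in Hadoop 2.x but is deprecated in favor of starting HDFS and YARN separately; either way, jps on each node gives a quick health check:

sbin/start-dfs.sh    # preferred in Hadoop 2.x instead of start-all.sh
sbin/start-yarn.sh
jps   # master: NameNode, SecondaryNameNode, ResourceManager; slaves: DataNode, NodeManager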

Regarding hadoop - syntax error in the hadoop-env.sh file, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/26016521/
