
Hadoop error: unable to start with start-all.sh

Reposted. Author: 可可西里. Updated: 2023-11-01 16:34:18

I set up Hadoop in single-node mode on my laptop. Environment: Ubuntu 12.10, Oracle JDK 1.7, Hadoop installed from a .deb file. Locations: /etc/hadoop, /usr/share/hadoop

My configuration is in /usr/share/hadoop/templates/conf/core-site.xml, where I added two properties:

    <property>
      <name>hadoop.tmp.dir</name>
      <value>/app/hadoop/tmp</value>
      <description>A base for other temporary directories.</description>
    </property>

    <property>
      <name>fs.default.name</name>
      <value>hdfs://localhost:9000</value>
      <description>The name of the default file system. A URI whose
      scheme and authority determine the FileSystem implementation. The
      uri's scheme determines the config property (fs.SCHEME.impl) naming
      the FileSystem implementation class. The uri's authority is used to
      determine the host, port, etc. for a filesystem.</description>
    </property>

In hdfs-site.xml:

    <property>
      <name>dfs.replication</name>
      <value>1</value>
      <description>Default block replication.
      The actual number of replications can be specified when the file is created.
      The default is used if replication is not specified in create time.
      </description>
    </property>

In mapred-site.xml:

    <property>
      <name>mapred.job.tracker</name>
      <value>localhost:9001</value>
      <description>The host and port that the MapReduce job tracker runs
      at. If "local", then jobs are run in-process as a single map
      and reduce task.
      </description>
    </property>

When I start everything with the command hduser@sepdau:~$ start-all.sh :

starting namenode, logging to /var/log/hadoop/hduser/hadoop-hduser-namenode-sepdau.com.out
localhost: starting datanode, logging to /var/log/hadoop/hduser/hadoop-hduser-datanode-sepdau.com.out
localhost: starting secondarynamenode, logging to /var/log/hadoop/hduser/hadoop-hduser-secondarynamenode-sepdau.com.out
starting jobtracker, logging to /var/log/hadoop/hduser/hadoop-hduser-jobtracker-sepdau.com.out
localhost: starting tasktracker, logging to /var/log/hadoop/hduser/hadoop-hduser-tasktracker-sepdau.com.out

But when I check the processes with jps:

hduser@sepdau:~$ jps
13725 Jps

More:

 root@sepdau:/home/sepdau# netstat -plten | grep java
tcp6 0 0 :::8080 :::* LISTEN 117 9953 1316/java
tcp6 0 0 :::53976 :::* LISTEN 117 16755 1316/java
tcp6 0 0 127.0.0.1:8700 :::* LISTEN 1000 786271 8323/java
tcp6 0 0 :::59012 :::* LISTEN 117 16756 1316/java

When I run stop-all.sh:

    hduser@sepdau:~$ stop-all.sh
no jobtracker to stop
localhost: no tasktracker to stop
no namenode to stop
localhost: no datanode to stop
localhost: no secondarynamenode to stop

In my hosts file:

hduser@sepdau:~$ cat /etc/hosts

127.0.0.1 localhost
127.0.1.1 sepdau.com



# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

My slaves file contains: localhost. My masters file contains: localhost.

Here are some logs:

    hduser@sepdau:/home/sepdau$ start-all.sh
mkdir: cannot create directory `/var/run/hadoop': Permission denied
starting namenode, logging to /var/log/hadoop/hduser/hadoop-hduser-namenode-sepdau.com.out
/usr/sbin/hadoop-daemon.sh: line 136: /var/run/hadoop/hadoop-hduser-namenode.pid: No such file or directory
localhost: mkdir: cannot create directory `/var/run/hadoop': Permission denied
localhost: starting datanode, logging to /var/log/hadoop/hduser/hadoop-hduser-datanode-sepdau.com.out
localhost: /usr/sbin/hadoop-daemon.sh: line 136: /var/run/hadoop/hadoop-hduser-datanode.pid: No such file or directory
localhost: mkdir: cannot create directory `/var/run/hadoop': Permission denied
localhost: starting secondarynamenode, logging to /var/log/hadoop/hduser/hadoop-hduser-secondarynamenode-sepdau.com.out
localhost: /usr/sbin/hadoop-daemon.sh: line 136: /var/run/hadoop/hadoop-hduser-secondarynamenode.pid: No such file or directory
mkdir: cannot create directory `/var/run/hadoop': Permission denied
starting jobtracker, logging to /var/log/hadoop/hduser/hadoop-hduser-jobtracker-sepdau.com.out
/usr/sbin/hadoop-daemon.sh: line 136: /var/run/hadoop/hadoop-hduser-jobtracker.pid: No such file or directory
localhost: mkdir: cannot create directory `/var/run/hadoop': Permission denied
localhost: starting tasktracker, logging to /var/log/hadoop/hduser/hadoop-hduser-tasktracker-sepdau.com.out
localhost: /usr/sbin/hadoop-daemon.sh: line 136: /var/run/hadoop/hadoop-hduser-tasktracker.pid: No such file or directory
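The repeated "mkdir: cannot create directory `/var/run/hadoop': Permission denied" lines in this log point at the likely root cause: the daemon scripts cannot create their PID files, so start-all.sh reports "starting ..." but the daemons never stay registered. One common workaround is to point HADOOP_PID_DIR at a directory the hduser account owns. A minimal sketch, assuming the .deb layout above (the exact hadoop-env.sh path and the chosen PID directory are assumptions; adjust them to your install):

```shell
# Create a PID directory the hduser account can write to
# (hypothetical location; any writable directory works).
PID_DIR="$HOME/hadoop-pids"
mkdir -p "$PID_DIR"

# The daemon scripts honor HADOOP_PID_DIR; this line belongs in your
# hadoop-env.sh. Here we only print it for illustration.
echo "export HADOOP_PID_DIR=$PID_DIR"

# Alternative: keep the default location but hand it to hduser
# (run as root; commented out here):
#   mkdir -p /var/run/hadoop
#   chown hduser:hadoop /var/run/hadoop
```

After either change, re-run start-all.sh and confirm with jps that NameNode, DataNode, SecondaryNameNode, JobTracker, and TaskTracker all appear.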

I also tried as the root user, but had the same problem.

Where am I going wrong here? Also, how do I connect to Eclipse using the Hadoop plugin? Thanks in advance.

Best Answer

Try adding

    <property>
      <name>dfs.name.dir</name>
      <value>/home/abhinav/hdfs</value>
    </property>

to hdfs-site.xml, and make sure that directory exists.
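If dfs.name.dir does not exist or is not writable, the NameNode exits right after startup, which matches the empty jps output above. A sketch of the check, assuming a directory under the user's home (the path is illustrative; use whatever value you put in hdfs-site.xml):

```shell
# Illustrative path -- substitute your own dfs.name.dir value.
NAME_DIR="$HOME/hdfs/name"

# The directory must exist and be writable by the user running Hadoop
# before the daemons start.
mkdir -p "$NAME_DIR"
ls -ld "$NAME_DIR"

# A brand-new name directory also needs formatting once before first use
# (this erases any existing HDFS metadata, so it is commented out here):
#   hadoop namenode -format
```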

I wrote a small tutorial about this; see if it helps: http://blog.abhinavmathur.net/2013/01/experience-with-setting-multinode.html

Regarding this Hadoop error with start-all.sh, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/13183164/
