I deployed a test Hadoop cluster. When I start it, all the logs look fine, but no hadoop command works, and I found that port 9000 is not being listened on.
Running a hadoop command (ERROR; every command fails the same way):
hadoop-2.5.0/bin$ ./hdfs dfs -ls /
14/08/15 10:19:52 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
ls: Call From master-hadoop/172.17.65.225 to master-hadoop:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
The NameNode is not listening on port 9000:
hadoop-2.5.0/bin$ sudo netstat -ntap | grep 9000
The command prints nothing.
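Besides netstat, it helps to reproduce the refusal directly and to check the neighboring port as well. The sketch below is an assumption-laden diagnostic, not part of the original post: the hostname and ports are taken from this cluster's config, and `probe` only distinguishes an open port from a refused or unreachable one.

```python
import socket

def probe(host, port, timeout=2.0):
    """Try a TCP connect; return 'open', 'refused', or 'unreachable'."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "open"
    except ConnectionRefusedError:
        return "refused"       # nothing is listening on this port
    except (socket.timeout, OSError):
        return "unreachable"   # e.g. hostname does not resolve

# Clients dial 9000 (fs.defaultFS), but the NameNode may be bound elsewhere.
for port in (9000, 9001):
    print(port, probe("master-hadoop", port))
```

On the NameNode host itself, `sudo netstat -ntlp | grep java` (or `ss -ntlp`) lists every port the Java daemons actually bound, which is the quickest way to see where the RPC server ended up.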
Hadoop configuration:
core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://master-hadoop:9000</value>
</property>
<property>
<name>io.file.buffer.size</name>
<value>131072</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>file:/home/hadoop/hadoop/tmp</value>
</property>
</configuration>
hdfs-site.xml
<configuration>
<property>
<name>dfs.namenode.rpc-address</name>
<value>master-hadoop:9001</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>secondary-hadoop:50090</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/home/hadoop/hadoop/dfs/name</value>
</property>
<property>
<name>dfs.namenode.data.dir</name>
<value>file:/home/hadoop/hadoop/dfs/data</value>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
</configuration>
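A detail worth noticing in the two files above: `fs.defaultFS` tells clients to dial master-hadoop:9000, while `dfs.namenode.rpc-address` tells the NameNode to bind its RPC server to master-hadoop:9001, and when both are set the explicit rpc-address takes precedence (on a live cluster, `hdfs getconf -confKey dfs.namenode.rpc-address` prints the effective value). A minimal sketch of that mismatch check, using abbreviated copies of the two snippets above:

```python
import xml.etree.ElementTree as ET

def get_prop(xml_text, name):
    """Return the <value> of the named <property> in a Hadoop config file."""
    root = ET.fromstring(xml_text)
    for prop in root.findall("property"):
        if prop.findtext("name") == name:
            return prop.findtext("value")
    return None

core_site = """<configuration>
  <property><name>fs.defaultFS</name><value>hdfs://master-hadoop:9000</value></property>
</configuration>"""

hdfs_site = """<configuration>
  <property><name>dfs.namenode.rpc-address</name><value>master-hadoop:9001</value></property>
</configuration>"""

client_port = get_prop(core_site, "fs.defaultFS").rsplit(":", 1)[1]
rpc_port = get_prop(hdfs_site, "dfs.namenode.rpc-address").rsplit(":", 1)[1]

if rpc_port != client_port:
    print(f"mismatch: clients dial {client_port}, NameNode binds {rpc_port}")
```

With these inputs it prints `mismatch: clients dial 9000, NameNode binds 9001`, which matches the "RPC server is binding to master-hadoop:9001" line in the NameNode log further down.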
mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>master-hadoop:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>master-hadoop:19888</value>
</property>
</configuration>
yarn-site.xml
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce-shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>master-hadoop:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>master-hadoop:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>master-hadoop:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>master-hadoop:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>master-hadoop:8088</value>
</property>
</configuration>
NameNode /etc/hosts:
172.17.65.225 master-hadoop
127.0.0.1 master-hadoop
::1 master-hadoop localhost
172.17.65.151 slave1-hadoop
172.17.65.14 slave2-hadoop
172.17.65.117 secondary-hadoop
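The hosts file above maps master-hadoop to three addresses: the real IP, 127.0.0.1, and ::1. The Hadoop ConnectionRefused wiki linked in the error message specifically warns against mapping the service hostname to a loopback address, because a daemon may then bind only to loopback while remote clients dial the real IP. A quick sanity check over these exact lines (an illustrative sketch, not a tool from the original post):

```python
LOOPBACKS = {"127.0.0.1", "::1"}

hosts = """\
172.17.65.225 master-hadoop
127.0.0.1 master-hadoop
::1 master-hadoop localhost
172.17.65.151 slave1-hadoop
172.17.65.14 slave2-hadoop
172.17.65.117 secondary-hadoop
"""

def resolve_in_hosts(hosts_text, hostname):
    """Return all addresses mapped to hostname, plus any that are loopbacks."""
    addrs = [line.split()[0] for line in hosts_text.splitlines()
             if hostname in line.split()[1:]]
    return addrs, [a for a in addrs if a in LOOPBACKS]

addrs, loopbacks = resolve_in_hosts(hosts, "master-hadoop")
print("master-hadoop ->", addrs)
if loopbacks:
    print("warning: also mapped to loopback:", loopbacks)
```

Here it flags both 127.0.0.1 and ::1; removing master-hadoop from the loopback lines (leaving only `172.17.65.225 master-hadoop`) is the fix that wiki page suggests checking first.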
NameNode format log:
hadoop-2.5.0/bin$ ./hdfs namenode -format
14/08/15 10:16:16 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = master-hadoop/172.17.65.225
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.5.0
STARTUP_MSG: classpath = /home/hadoop/hadoop/hadoop-2.5.0/etc/hadoop:/home/hadoop/hadoop/hadoop-2.5.0/share/hadoop/common/lib/jersey-json-1.9.jar:
...
:/home/hadoop/hadoop/hadoop-2.5.0/contrib/capacity-scheduler/*.jar
STARTUP_MSG: build = http://svn.apache.org/repos/asf/hadoop/common -r 1616291; compiled by 'jenkins' on 2014-08-06T17:31Z
STARTUP_MSG: java = 1.7.0_21
************************************************************/
14/08/15 10:16:16 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
14/08/15 10:16:16 INFO namenode.NameNode: createNameNode [-format]
14/08/15 10:16:16 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Formatting using clusterid: CID-4d27991c-4852-407c-9c6b-70df76994d13
14/08/15 10:16:16 INFO namenode.FSNamesystem: fsLock is fair:true
14/08/15 10:16:16 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
14/08/15 10:16:16 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
14/08/15 10:16:16 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
14/08/15 10:16:16 INFO blockmanagement.BlockManager: The block deletion will start around 2014 Aug 15 10:16:16
14/08/15 10:16:16 INFO util.GSet: Computing capacity for map BlocksMap
14/08/15 10:16:16 INFO util.GSet: VM type = 32-bit
14/08/15 10:16:16 INFO util.GSet: 2.0% max memory 888.9 MB = 17.8 MB
14/08/15 10:16:16 INFO util.GSet: capacity = 2^22 = 4194304 entries
14/08/15 10:16:16 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
14/08/15 10:16:16 INFO blockmanagement.BlockManager: defaultReplication = 2
14/08/15 10:16:16 INFO blockmanagement.BlockManager: maxReplication = 512
14/08/15 10:16:16 INFO blockmanagement.BlockManager: minReplication = 1
14/08/15 10:16:16 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
14/08/15 10:16:16 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
14/08/15 10:16:16 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
14/08/15 10:16:16 INFO blockmanagement.BlockManager: encryptDataTransfer = false
14/08/15 10:16:16 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
14/08/15 10:16:16 INFO namenode.FSNamesystem: fsOwner = hadoop (auth:SIMPLE)
14/08/15 10:16:16 INFO namenode.FSNamesystem: supergroup = supergroup
14/08/15 10:16:16 INFO namenode.FSNamesystem: isPermissionEnabled = true
14/08/15 10:16:16 INFO namenode.FSNamesystem: HA Enabled: false
14/08/15 10:16:16 INFO namenode.FSNamesystem: Append Enabled: true
14/08/15 10:16:17 INFO util.GSet: Computing capacity for map INodeMap
14/08/15 10:16:17 INFO util.GSet: VM type = 32-bit
14/08/15 10:16:17 INFO util.GSet: 1.0% max memory 888.9 MB = 8.9 MB
14/08/15 10:16:17 INFO util.GSet: capacity = 2^21 = 2097152 entries
14/08/15 10:16:17 INFO namenode.NameNode: Caching file names occuring more than 10 times
14/08/15 10:16:17 INFO util.GSet: Computing capacity for map cachedBlocks
14/08/15 10:16:17 INFO util.GSet: VM type = 32-bit
14/08/15 10:16:17 INFO util.GSet: 0.25% max memory 888.9 MB = 2.2 MB
14/08/15 10:16:17 INFO util.GSet: capacity = 2^19 = 524288 entries
14/08/15 10:16:17 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
14/08/15 10:16:17 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
14/08/15 10:16:17 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
14/08/15 10:16:17 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
14/08/15 10:16:17 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
14/08/15 10:16:17 INFO util.GSet: Computing capacity for map NameNodeRetryCache
14/08/15 10:16:17 INFO util.GSet: VM type = 32-bit
14/08/15 10:16:17 INFO util.GSet: 0.029999999329447746% max memory 888.9 MB = 273.1 KB
14/08/15 10:16:17 INFO util.GSet: capacity = 2^16 = 65536 entries
14/08/15 10:16:17 INFO namenode.NNConf: ACLs enabled? false
14/08/15 10:16:17 INFO namenode.NNConf: XAttrs enabled? true
14/08/15 10:16:17 INFO namenode.NNConf: Maximum size of an xattr: 16384
14/08/15 10:16:17 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1935486596-172.17.65.225-1408068977173
14/08/15 10:16:17 INFO common.Storage: Storage directory /home/hadoop/hadoop/dfs/name has been successfully formatted.
14/08/15 10:16:17 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
14/08/15 10:16:17 INFO util.ExitUtil: Exiting with status 0
14/08/15 10:16:17 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master-hadoop/172.17.65.225
************************************************************/
hadoop-hadoop-namenode-master-hadoop.log
2014-08-15 10:17:48,855 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = master-hadoop/172.17.65.225
STARTUP_MSG: args = []
STARTUP_MSG: version = 2.5.0
STARTUP_MSG: classpath = /home/hadoop/hadoop/hadoop-2.5.0/etc/hadoop:/home/hadoop/hadoop/hadoop-2.5.0/share/hadoop/common/lib/jersey-json-1.9.jar:
...
...
:/contrib/capacity-scheduler/*.jar
STARTUP_MSG: build = http://svn.apache.org/repos/asf/hadoop/common -r 1616291; compiled by 'jenkins' on 2014-08-06T17:31Z
STARTUP_MSG: java = 1.7.0_21
************************************************************/
2014-08-15 10:17:48,870 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2014-08-15 10:17:48,880 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode []
2014-08-15 10:17:49,117 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2014-08-15 10:17:49,209 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2014-08-15 10:17:49,209 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2014-08-15 10:17:49,211 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: fs.defaultFS is hdfs://master-hadoop:9000
2014-08-15 10:17:49,211 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Clients are to use master-hadoop:9000 to access this namenode/service.
2014-08-15 10:17:49,389 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2014-08-15 10:17:54,555 INFO org.apache.hadoop.hdfs.DFSUtil: Starting web server as: ${dfs.web.authentication.kerberos.principal}
2014-08-15 10:17:54,556 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://0.0.0.0:50070
2014-08-15 10:17:54,605 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2014-08-15 10:17:54,609 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined
2014-08-15 10:17:54,620 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2014-08-15 10:17:54,622 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2014-08-15 10:17:54,622 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2014-08-15 10:17:54,623 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2014-08-15 10:17:54,653 INFO org.apache.hadoop.http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2014-08-15 10:17:54,655 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2014-08-15 10:17:54,676 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 50070
2014-08-15 10:17:54,676 INFO org.mortbay.log: jetty-6.1.26
2014-08-15 10:17:54,883 WARN org.apache.hadoop.security.authentication.server.AuthenticationFilter: 'signature.secret' configuration not set, using a random value as secret
2014-08-15 10:17:54,948 INFO org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
2014-08-15 10:17:59,984 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
2014-08-15 10:17:59,984 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage directories!
2014-08-15 10:18:00,023 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsLock is fair:true
2014-08-15 10:18:00,062 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
2014-08-15 10:18:00,062 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2014-08-15 10:18:00,065 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2014-08-15 10:18:00,066 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: The block deletion will start around 2014 Aug 15 10:18:00
2014-08-15 10:18:00,068 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap
2014-08-15 10:18:00,068 INFO org.apache.hadoop.util.GSet: VM type = 32-bit
2014-08-15 10:18:00,069 INFO org.apache.hadoop.util.GSet: 2.0% max memory 888.9 MB = 17.8 MB
2014-08-15 10:18:00,069 INFO org.apache.hadoop.util.GSet: capacity = 2^22 = 4194304 entries
2014-08-15 10:18:00,087 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false
2014-08-15 10:18:00,087 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication = 2
2014-08-15 10:18:00,087 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication = 512
2014-08-15 10:18:00,087 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication = 1
2014-08-15 10:18:00,087 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams = 2
2014-08-15 10:18:00,087 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
2014-08-15 10:18:00,087 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
2014-08-15 10:18:00,087 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer = false
2014-08-15 10:18:00,087 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog = 1000
2014-08-15 10:18:00,092 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner = hadoop (auth:SIMPLE)
2014-08-15 10:18:00,092 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup = supergroup
2014-08-15 10:18:00,092 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
2014-08-15 10:18:00,092 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
2014-08-15 10:18:00,094 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2014-08-15 10:18:00,279 INFO org.apache.hadoop.util.GSet: Computing capacity for map INodeMap
2014-08-15 10:18:00,279 INFO org.apache.hadoop.util.GSet: VM type = 32-bit
2014-08-15 10:18:00,280 INFO org.apache.hadoop.util.GSet: 1.0% max memory 888.9 MB = 8.9 MB
2014-08-15 10:18:00,280 INFO org.apache.hadoop.util.GSet: capacity = 2^21 = 2097152 entries
2014-08-15 10:18:00,297 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2014-08-15 10:18:00,305 INFO org.apache.hadoop.util.GSet: Computing capacity for map cachedBlocks
2014-08-15 10:18:00,305 INFO org.apache.hadoop.util.GSet: VM type = 32-bit
2014-08-15 10:18:00,306 INFO org.apache.hadoop.util.GSet: 0.25% max memory 888.9 MB = 2.2 MB
2014-08-15 10:18:00,306 INFO org.apache.hadoop.util.GSet: capacity = 2^19 = 524288 entries
2014-08-15 10:18:00,308 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2014-08-15 10:18:00,308 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2014-08-15 10:18:00,308 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
2014-08-15 10:18:00,310 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on namenode is enabled
2014-08-15 10:18:00,310 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2014-08-15 10:18:00,312 INFO org.apache.hadoop.util.GSet: Computing capacity for map NameNodeRetryCache
2014-08-15 10:18:00,312 INFO org.apache.hadoop.util.GSet: VM type = 32-bit
2014-08-15 10:18:00,312 INFO org.apache.hadoop.util.GSet: 0.029999999329447746% max memory 888.9 MB = 273.1 KB
2014-08-15 10:18:00,312 INFO org.apache.hadoop.util.GSet: capacity = 2^16 = 65536 entries
2014-08-15 10:18:00,316 INFO org.apache.hadoop.hdfs.server.namenode.NNConf: ACLs enabled? false
2014-08-15 10:18:00,316 INFO org.apache.hadoop.hdfs.server.namenode.NNConf: XAttrs enabled? true
2014-08-15 10:18:00,317 INFO org.apache.hadoop.hdfs.server.namenode.NNConf: Maximum size of an xattr: 16384
2014-08-15 10:18:00,355 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /home/hadoop/hadoop/dfs/name/in_use.lock acquired by nodename 21145@master-hadoop
2014-08-15 10:18:00,433 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Recovering unfinalized segments in /home/hadoop/hadoop/dfs/name/current
2014-08-15 10:18:00,433 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: No edit log streams selected.
2014-08-15 10:18:00,488 INFO org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode: Loading 1 INodes.
2014-08-15 10:18:00,534 INFO org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf: Loaded FSImage in 0 seconds.
2014-08-15 10:18:00,534 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Loaded image for txid 0 from /home/hadoop/hadoop/dfs/name/current/fsimage_0000000000000000000
2014-08-15 10:18:00,542 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Need to save fs image? false (staleImage=false, haEnabled=false, isRollingUpgrade=false)
2014-08-15 10:18:00,543 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 1
2014-08-15 10:18:00,689 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
2014-08-15 10:18:00,689 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 372 msecs
2014-08-15 10:18:00,902 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: RPC server is binding to master-hadoop:9001
2014-08-15 10:18:00,909 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2014-08-15 10:18:00,923 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 9001
2014-08-15 10:18:00,954 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemState MBean
2014-08-15 10:18:00,963 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of blocks under construction: 0
2014-08-15 10:18:00,963 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of blocks under construction: 0
2014-08-15 10:18:00,963 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: initializing replication queues
2014-08-15 10:18:00,963 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 0 secs
2014-08-15 10:18:00,963 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
2014-08-15 10:18:00,964 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
2014-08-15 10:18:00,982 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Total number of blocks = 0
2014-08-15 10:18:00,994 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of invalid blocks = 0
2014-08-15 10:18:00,994 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of under-replicated blocks = 0
2014-08-15 10:18:00,994 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of over-replicated blocks = 0
2014-08-15 10:18:00,994 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of blocks being written = 0
2014-08-15 10:18:00,994 INFO org.apache.hadoop.hdfs.StateChange: STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 31 msec
2014-08-15 10:18:01,010 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2014-08-15 10:18:01,011 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 9001: starting
2014-08-15 10:18:01,131 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: NameNode RPC up at: master-hadoop/172.17.65.225:9001
2014-08-15 10:18:01,132 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Starting services required for active state
2014-08-15 10:18:01,142 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Starting CacheReplicationMonitor with interval 30000 milliseconds
2014-08-15 10:18:01,142 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning because of pending operations
2014-08-15 10:18:01,147 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 5 millisecond(s).
2014-08-15 10:18:02,566 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* registerDatanode: from DatanodeRegistration(172.17.65.14, datanodeUuid=e3b6ade5-3534-4f5f-99fa-959bbbd9dce9, infoPort=50075, ipcPort=50020, storageInfo=lv=-55;cid=CID-4d27991c-4852-407c-9c6b-70df76994d13;nsid=995055688;c=0) storage e3b6ade5-3534-4f5f-99fa-959bbbd9dce9
2014-08-15 10:18:02,571 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/172.17.65.14:50010
2014-08-15 10:18:02,648 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Adding new storage ID DS-8dee85fa-82b5-40a6-98e4-db44cca23371 for DN 172.17.65.14:50010
2014-08-15 10:18:02,698 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* processReport: Received first block report from DatanodeStorage[DS-8dee85fa-82b5-40a6-98e4-db44cca23371,DISK,NORMAL] after starting up or becoming active. Its block contents are no longer considered stale
2014-08-15 10:18:02,698 INFO BlockStateChange: BLOCK* processReport: from storage DS-8dee85fa-82b5-40a6-98e4-db44cca23371 node DatanodeRegistration(172.17.65.14, datanodeUuid=e3b6ade5-3534-4f5f-99fa-959bbbd9dce9, infoPort=50075, ipcPort=50020, storageInfo=lv=-55;cid=CID-4d27991c-4852-407c-9c6b-70df76994d13;nsid=995055688;c=0), blocks: 0, hasStaleStorages: false, processing time: 3 msecs
2014-08-15 10:18:05,783 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2014-08-15 10:18:09,235 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2014-08-15 10:18:14,099 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2014-08-15 10:18:14,578 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* registerDatanode: from DatanodeRegistration(172.17.65.151, datanodeUuid=43bc6f34-b8ad-4355-9fe4-9951f40e982a, infoPort=50075, ipcPort=50020, storageInfo=lv=-55;cid=CID-4d27991c-4852-407c-9c6b-70df76994d13;nsid=995055688;c=0) storage 43bc6f34-b8ad-4355-9fe4-9951f40e982a
2014-08-15 10:18:14,578 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/172.17.65.151:50010
2014-08-15 10:18:14,628 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Adding new storage ID DS-1e42ed67-0da7-476a-9e67-d778cd56b2b1 for DN 172.17.65.151:50010
2014-08-15 10:18:14,660 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* processReport: Received first block report from DatanodeStorage[DS-1e42ed67-0da7-476a-9e67-d778cd56b2b1,DISK,NORMAL] after starting up or becoming active. Its block contents are no longer considered stale
2014-08-15 10:18:14,660 INFO BlockStateChange: BLOCK* processReport: from storage DS-1e42ed67-0da7-476a-9e67-d778cd56b2b1 node DatanodeRegistration(172.17.65.151, datanodeUuid=43bc6f34-b8ad-4355-9fe4-9951f40e982a, infoPort=50075, ipcPort=50020, storageInfo=lv=-55;cid=CID-4d27991c-4852-407c-9c6b-70df76994d13;nsid=995055688;c=0), blocks: 0, hasStaleStorages: false, processing time: 1 msecs
2014-08-15 10:18:19,074 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2014-08-15 10:18:31,143 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30000 milliseconds
2014-08-15 10:18:31,144 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 1 millisecond(s).
2014-08-15 10:19:01,143 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30000 milliseconds
2014-08-15 10:19:01,144 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 1 millisecond(s).
2014-08-15 10:19:01,521 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 172.17.65.117
2014-08-15 10:19:01,521 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Rolling edit logs
2014-08-15 10:19:01,521 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 1
2014-08-15 10:19:01,522 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 55
2014-08-15 10:19:01,536 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 69
2014-08-15 10:19:01,538 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /home/hadoop/hadoop/dfs/name/current/edits_inprogress_0000000000000000001 -> /home/hadoop/hadoop/dfs/name/current/edits_0000000000000000001-0000000000000000002
2014-08-15 10:19:01,542 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 3
2014-08-15 10:19:02,094 WARN org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory: The property 'ssl.client.truststore.location' has not been set, no TrustStore will be loaded
2014-08-15 10:19:02,969 INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Transfer took 0.05s at 0.00 KB/s
2014-08-15 10:19:02,969 INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Downloaded file fsimage.ckpt_0000000000000000002 size 353 bytes.
2014-08-15 10:19:03,014 INFO org.apache.hadoop.hdfs.server.namenode.NNStorageRetentionManager: Going to retain 2 images with txid >= 0
2014-08-15 10:19:31,143 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30001 milliseconds
2014-08-15 10:19:31,145 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 1 millisecond(s).
2014-08-15 10:20:01,144 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30000 milliseconds
2014-08-15 10:20:01,145 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 1 millisecond(s).
NameNode jps:
/hadoop-2.5.0/logs$ jps
21145 NameNode
21409 ResourceManager
Secondary NameNode jps:
hadoop-2.5.0$ jps
15534 SecondaryNameNode
Datanode1 jps:
/hadoop-2.5.0$ jps
7350 DataNode
Datanode2 jps:
/hadoop-2.5.0$ jps
11784 DataNode
Best Answer
I ran into the same problem.
When you run `hdfs namenode -format`, check which host/IP the NameNode reports in its SHUTDOWN_MSG banner (here: `Shutting down NameNode at master-hadoop/172.17.65.225`).
I solved it by changing the IP in the `fs.defaultFS` value in core-site.xml to match.
I don't think this is a great solution, though; there may well be a better way.
Regarding java - Call From <hostname>/<ip> to <hostname>:9000 failed on connection exception: java.net.ConnectException: Connection refused, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/25320696/