Hi, I'm new to Hadoop and I'm trying to get it running on my local machine. Every time I run hdfs namenode -format followed by start-dfs.sh and start-yarn.sh (or start-all.sh), it produces the log below and I can't figure out what is going wrong.
Please see the log file below.
2016-11-27 21:02:37,468 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host =
STARTUP_MSG: args = []
STARTUP_MSG: version = 2.7.3
STARTUP_MSG: classpath = /usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/usr/local/hadoop/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/local/hadoop/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/usr/local/hadoop/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/usr/local/hadoop/share/hadoop/common/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/common/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-collections-3.2.2.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-digester-1.8.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-httpclient-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-math3-3.1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-net-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-client-2.7.1.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-framework-2.7.1.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/usr/local/hadoop/share/hadoop/common/lib/gson-2.2.4.jar:/usr/local/hadoop/share/hadoop/common/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-annotations-2.7.3.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-auth-2.7.3.jar:/usr/local/hadoop/share/hadoop/common/lib/hamcrest-core-1.3.jar:/usr/local/hadoop/share/hadoop/common/lib/htrace-core-3.1.0-incubating.jar:/usr/local/hadoop/share/hadoop/common/lib/httpclient-4.2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/httpcore-4.2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jets3t-0.9.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/jsch-0.1.42.jar:/usr/local/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jsr305-3.0.0.jar:/usr/local/hadoop/share/hadoop/common/lib/junit-4.11.jar:/usr/local/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/common/lib/mockito-all-1.8.5.jar:/usr/local/hadoop/share/hadoop/common/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/usr/local/had
oop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/common/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/usr/local/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/common/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/common/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/common/lib/zookeeper-3.4.6.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.7.3-tests.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.7.3.jar:/usr/local/hadoop/share/hadoop/common/hadoop-nfs-2.7.3.jar:/usr/local/hadoop/share/hadoop/hdfs:/usr/local/hadoop/share/hadoop/hdfs/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/htrace-core-3.1.0-incubating.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.7.3-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.7.3.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-2.7.3.jar:/usr/local/hadoop/share/hadoop/yarn/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/usr/l
ocal/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-client-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/usr/local/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.7.3.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.3.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.7.3.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-client-2.7.3.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.7.3.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-registry-2.7.3.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.7.3.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.7.3.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.7.3.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.7.3.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.7.3.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.7.3.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.7.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.7.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/junit-4.11.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/paranamer-2.3
.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.7.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.7.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.7.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.7.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.7.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.3-tests.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.7.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar
STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r baa91f7c6bc9cb92be5982de4719c1c8af91ccff; compiled by 'root' on 2016-08-18T01:41Z
STARTUP_MSG: java = 1.8.0_111
************************************************************/
2016-11-27 21:02:37,479 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2016-11-27 21:02:37,483 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode []
2016-11-27 21:02:37,755 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2016-11-27 21:02:37,841 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2016-11-27 21:02:37,841 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2016-11-27 21:02:37,843 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: fs.defaultFS is hdfs://localhost:9000
2016-11-27 21:02:37,844 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Clients are to use localhost:9000 to access this namenode/service.
2016-11-27 21:02:37,910 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2016-11-27 21:02:38,035 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://0.0.0.0:50070
2016-11-27 21:02:38,090 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2016-11-27 21:02:38,118 INFO org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2016-11-27 21:02:38,124 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined
2016-11-27 21:02:38,136 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2016-11-27 21:02:38,138 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2016-11-27 21:02:38,138 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2016-11-27 21:02:38,138 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2016-11-27 21:02:38,247 INFO org.apache.hadoop.http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2016-11-27 21:02:38,249 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2016-11-27 21:02:38,271 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 50070
2016-11-27 21:02:38,271 INFO org.mortbay.log: jetty-6.1.26
2016-11-27 21:02:38,412 INFO org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
2016-11-27 21:02:38,439 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
2016-11-27 21:02:38,439 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage directories!
2016-11-27 21:02:38,470 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No KeyProvider found.
2016-11-27 21:02:38,470 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsLock is fair:true
2016-11-27 21:02:38,507 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
2016-11-27 21:02:38,507 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2016-11-27 21:02:38,508 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2016-11-27 21:02:38,508 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: The block deletion will start around 2016 Nov 27 21:02:38
2016-11-27 21:02:38,509 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap
2016-11-27 21:02:38,509 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2016-11-27 21:02:38,510 INFO org.apache.hadoop.util.GSet: 2.0% max memory 889 MB = 17.8 MB
2016-11-27 21:02:38,511 INFO org.apache.hadoop.util.GSet: capacity = 2^21 = 2097152 entries
2016-11-27 21:02:38,523 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false
2016-11-27 21:02:38,523 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication = 1
2016-11-27 21:02:38,523 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication = 512
2016-11-27 21:02:38,523 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication = 1
2016-11-27 21:02:38,523 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams = 2
2016-11-27 21:02:38,523 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
2016-11-27 21:02:38,523 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer = false
2016-11-27 21:02:38,523 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog = 1000
2016-11-27 21:02:38,530 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner = hadoop (auth:SIMPLE)
2016-11-27 21:02:38,530 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup = supergroup
2016-11-27 21:02:38,530 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
2016-11-27 21:02:38,530 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
2016-11-27 21:02:38,531 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2016-11-27 21:02:38,684 INFO org.apache.hadoop.util.GSet: Computing capacity for map INodeMap
2016-11-27 21:02:38,684 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2016-11-27 21:02:38,684 INFO org.apache.hadoop.util.GSet: 1.0% max memory 889 MB = 8.9 MB
2016-11-27 21:02:38,684 INFO org.apache.hadoop.util.GSet: capacity = 2^20 = 1048576 entries
2016-11-27 21:02:38,685 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: ACLs enabled? false
2016-11-27 21:02:38,685 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: XAttrs enabled? true
2016-11-27 21:02:38,685 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: Maximum size of an xattr: 16384
2016-11-27 21:02:38,685 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2016-11-27 21:02:38,691 INFO org.apache.hadoop.util.GSet: Computing capacity for map cachedBlocks
2016-11-27 21:02:38,691 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2016-11-27 21:02:38,691 INFO org.apache.hadoop.util.GSet: 0.25% max memory 889 MB = 2.2 MB
2016-11-27 21:02:38,691 INFO org.apache.hadoop.util.GSet: capacity = 2^18 = 262144 entries
2016-11-27 21:02:38,693 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2016-11-27 21:02:38,693 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2016-11-27 21:02:38,693 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
2016-11-27 21:02:38,696 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2016-11-27 21:02:38,696 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2016-11-27 21:02:38,696 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2016-11-27 21:02:38,697 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on namenode is enabled
2016-11-27 21:02:38,697 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2016-11-27 21:02:38,699 INFO org.apache.hadoop.util.GSet: Computing capacity for map NameNodeRetryCache
2016-11-27 21:02:38,699 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2016-11-27 21:02:38,699 INFO org.apache.hadoop.util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
2016-11-27 21:02:38,699 INFO org.apache.hadoop.util.GSet: capacity = 2^15 = 32768 entries
2016-11-27 21:02:38,723 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /usr/local/hadoop/hadoopinfra/hdfs/namenode/in_use.lock acquired by nodename 14736@mybook-macbook-pro.local
2016-11-27 21:02:38,777 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Recovering unfinalized segments in /usr/local/hadoop/hadoopinfra/hdfs/namenode/current
2016-11-27 21:02:38,777 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: No edit log streams selected.
2016-11-27 21:02:38,777 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Planning to load image: FSImageFile(file=/usr/local/hadoop/hadoopinfra/hdfs/namenode/current/fsimage_0000000000000000000, cpktTxId=0000000000000000000)
2016-11-27 21:02:38,875 INFO org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode: Loading 1 INodes.
2016-11-27 21:02:38,898 INFO org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf: Loaded FSImage in 0 seconds.
2016-11-27 21:02:38,898 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Loaded image for txid 0 from /usr/local/hadoop/hadoopinfra/hdfs/namenode/current/fsimage_0000000000000000000
2016-11-27 21:02:38,907 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Need to save fs image? false (staleImage=false, haEnabled=false, isRollingUpgrade=false)
2016-11-27 21:02:38,908 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 1
2016-11-27 21:02:39,041 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
2016-11-27 21:02:39,041 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 339 msecs
2016-11-27 21:02:39,192 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: RPC server is binding to localhost:9000
2016-11-27 21:02:39,199 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2016-11-27 21:02:39,212 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 9000
2016-11-27 21:02:39,237 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemState MBean
2016-11-27 21:02:39,249 INFO org.apache.hadoop.hdfs.server.namenode.LeaseManager: Number of blocks under construction: 0
2016-11-27 21:02:39,250 INFO org.apache.hadoop.hdfs.server.namenode.LeaseManager: Number of blocks under construction: 0
2016-11-27 21:02:39,250 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: initializing replication queues
2016-11-27 21:02:39,250 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 0 secs
2016-11-27 21:02:39,250 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
2016-11-27 21:02:39,250 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
2016-11-27 21:02:39,255 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Number of failed storage changes from 0 to 0
2016-11-27 21:02:39,260 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Total number of blocks = 0
2016-11-27 21:02:39,260 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of invalid blocks = 0
2016-11-27 21:02:39,260 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of under-replicated blocks = 0
2016-11-27 21:02:39,260 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of over-replicated blocks = 0
2016-11-27 21:02:39,260 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of blocks being written = 0
2016-11-27 21:02:39,261 INFO org.apache.hadoop.hdfs.StateChange: STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 10 msec
2016-11-27 21:02:39,281 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2016-11-27 21:02:39,282 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 9000: starting
2016-11-27 21:02:39,284 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: NameNode RPC up at: localhost/127.0.0.1:9000
2016-11-27 21:02:39,284 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Starting services required for active state
2016-11-27 21:02:39,289 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Starting CacheReplicationMonitor with interval 30000 milliseconds
2016-11-27 21:04:07,310 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 127.0.0.1
2016-11-27 21:04:07,310 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Rolling edit logs
2016-11-27 21:04:07,310 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 1
2016-11-27 21:04:07,310 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 10
2016-11-27 21:04:07,319 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 18
2016-11-27 21:04:07,322 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /usr/local/hadoop/hadoopinfra/hdfs/namenode/current/edits_inprogress_0000000000000000001 -> /usr/local/hadoop/hadoopinfra/hdfs/namenode/current/edits_0000000000000000001-0000000000000000002
2016-11-27 21:04:07,323 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 3
2016-11-27 21:05:07,419 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 127.0.0.1
2016-11-27 21:05:07,420 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Rolling edit logs
2016-11-27 21:05:07,420 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 3
2016-11-27 21:05:07,420 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 2
2016-11-27 21:05:07,427 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 10
2016-11-27 21:05:07,428 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /usr/local/hadoop/hadoopinfra/hdfs/namenode/current/edits_inprogress_0000000000000000003 -> /usr/local/hadoop/hadoopinfra/hdfs/namenode/current/edits_0000000000000000003-0000000000000000004
2016-11-27 21:05:07,428 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 5
2016-11-27 21:06:07,445 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 127.0.0.1
2016-11-27 21:06:07,445 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Rolling edit logs
2016-11-27 21:06:07,445 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 5
2016-11-27 21:06:07,446 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 3
2016-11-27 21:06:07,454 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 11
2016-11-27 21:06:07,455 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /usr/local/hadoop/hadoopinfra/hdfs/namenode/current/edits_inprogress_0000000000000000005 -> /usr/local/hadoop/hadoopinfra/hdfs/namenode/current/edits_0000000000000000005-0000000000000000006
2016-11-27 21:06:07,455 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 7
Best answer
Which messages in this log concern you?
It looks like a typical namenode startup log; I don't see anything fatal in it.
There are some messages about under-replicated blocks, but those should only be warnings. It looks like your default block replication is set to 1, where it is normally 3. Can you run any hadoop command at all? For example:
hadoop fs -ls /
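If the single replica is intentional (it usually is on a single-node install), you can confirm what the namenode actually picked up. A minimal check, assuming a standard Hadoop 2.x install with hdfs on the PATH:

hdfs getconf -confKey dfs.replication   # effective replication factor; 1 is normal for
                                        # a local single-node setup, 3 is the shipped default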
You can check which ports are actually being listened on with:
sudo lsof -i tcp | grep -i LISTEN
If the namenode is up you should see it listening on 9000 (the port from your fs.defaultFS; it may also be 8020, the stock default). By the way, these ports are different from the ones the datanodes will use.
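If you only care about the namenode port, you can also probe it directly; a quick sketch using standard lsof/nc flags, with localhost and port 9000 taken from the fs.defaultFS line in the log above:

sudo lsof -iTCP:9000 -sTCP:LISTEN        # show the process bound to the namenode RPC port
nc -z localhost 9000 && echo "9000 open" # prints only if something is listening there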
You should also have a web UI, which should give you basic status:
http://hostname.example.com:50070/
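The same HTTP endpoint can be checked from the shell; the /jmx path below is the namenode's standard metrics servlet in Hadoop 2.x (hostname assumed to be localhost here):

curl -s -o /dev/null -w "%{http_code}\n" http://localhost:50070/   # expect 200
curl -s http://localhost:50070/jmx | head -n 5                     # start of the JSON metrics dump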
You can also query the configuration the daemons actually see with hdfs getconf:
hdfs getconf -namenodes
hdfs getconf -secondaryNameNodes
hdfs getconf -backupNodes
hdfs getconf -includeFile
hdfs getconf -excludeFile
hdfs getconf -nnRpcAddresses
hdfs getconf -confKey [key]
hdfs dfs -ls /
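Finally, the startup log above ends with "Network topology has 0 racks and 0 datanodes", so it is worth confirming that the datanode process actually came up and registered with the namenode. Both commands below are standard Hadoop 2.x tooling, not specific to this setup:

jps                    # should list NameNode, DataNode and SecondaryNameNode (plus the YARN daemons)
hdfs dfsadmin -report  # should report at least one live datanode once registration succeeds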
For the question "linux - Hadoop (NameNode, DataNode and SecondaryNameNode) not starting", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/40833953/