I run Hadoop on my laptop. To start it I execute start-all.cmd, which launches 4 daemons, but only 3 of the 4 processes stay up. The NameNode window ends with:

SHUTDOWN_MSG: Shutting down NameNode at DESKTOP-T7R9JV1/192.168.1.100

How can I avoid this? The full NameNode log follows:
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = DESKTOP-T7R9JV1/192.168.1.101
STARTUP_MSG: args = []
STARTUP_MSG: version = 2.9.1
19/09/08 22:03:13 INFO namenode.NameNode: createNameNode []
19/09/08 22:03:14 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
19/09/08 22:03:14 INFO impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).
19/09/08 22:03:14 INFO impl.MetricsSystemImpl: NameNode metrics system started
19/09/08 22:03:14 INFO namenode.NameNode: fs.defaultFS is hdfs://0.0.0.0:19000
19/09/08 22:03:14 INFO namenode.NameNode: Clients are to use 0.0.0.0:19000 to access this namenode/service.
19/09/08 22:03:14 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
19/09/08 22:03:15 INFO util.JvmPauseMonitor: Starting JVM pause monitor
19/09/08 22:03:15 INFO hdfs.DFSUtil: Starting Web-server for hdfs at: http://0.0.0.0:50070
19/09/08 22:03:15 INFO mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
19/09/08 22:03:15 INFO server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
19/09/08 22:03:15 INFO http.HttpRequestLog: Http request log for http.requests.namenode is not defined
19/09/08 22:03:15 INFO http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
19/09/08 22:03:15 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
19/09/08 22:03:15 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
19/09/08 22:03:15 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
19/09/08 22:03:16 INFO http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
19/09/08 22:03:16 INFO http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
19/09/08 22:03:16 INFO http.HttpServer2: Jetty bound to port 50070
19/09/08 22:03:16 INFO mortbay.log: jetty-6.1.26
19/09/08 22:03:16 INFO mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
19/09/08 22:03:17 ERROR common.Util: Syntax error in URI C:\BigData\hadoop-2.9.1\data\namenode. Please check hdfs configuration.
java.net.URISyntaxException: Illegal character in opaque part at index 2: C:\BigData\hadoop-2.9.1\data\namenode
at java.net.URI$Parser.fail(URI.java:2848)
at java.net.URI$Parser.checkChars(URI.java:3021)
at java.net.URI$Parser.parse(URI.java:3058)
at java.net.URI.<init>(URI.java:588)
at org.apache.hadoop.hdfs.server.common.Util.stringAsURI(Util.java:49)
at org.apache.hadoop.hdfs.server.common.Util.stringCollectionAsURIs(Util.java:99)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getStorageDirs(FSNamesystem.java:1462)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNamespaceDirs(FSNamesystem.java:1417)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkConfiguration(FSNamesystem.java:617)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:669)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:666)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:728)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:953)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:932)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1673)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1741)
19/09/08 22:03:17 WARN common.Util: Path C:\BigData\hadoop-2.9.1\data\namenode should be specified as a URI in configuration files. Please update hdfs configuration.
19/09/08 22:03:17 ERROR common.Util: Syntax error in URI C:\BigData\hadoop-2.9.1\data\namenode. Please check hdfs configuration.
java.net.URISyntaxException: Illegal character in opaque part at index 2: C:\BigData\hadoop-2.9.1\data\namenode
at java.net.URI$Parser.fail(URI.java:2848)
at java.net.URI$Parser.checkChars(URI.java:3021)
at java.net.URI$Parser.parse(URI.java:3058)
at java.net.URI.<init>(URI.java:588)
at org.apache.hadoop.hdfs.server.common.Util.stringAsURI(Util.java:49)
at org.apache.hadoop.hdfs.server.common.Util.stringCollectionAsURIs(Util.java:99)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getStorageDirs(FSNamesystem.java:1462)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNamespaceEditsDirs(FSNamesystem.java:1507)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNamespaceEditsDirs(FSNamesystem.java:1476)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkConfiguration(FSNamesystem.java:619)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:669)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:666)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:728)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:953)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:932)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1673)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1741)
19/09/08 22:03:17 WARN common.Util: Path C:\BigData\hadoop-2.9.1\data\namenode should be specified as a URI in configuration files. Please update hdfs configuration.
19/09/08 22:03:17 WARN namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
19/09/08 22:03:17 WARN namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage directories!
19/09/08 22:03:17 ERROR common.Util: Syntax error in URI C:\BigData\hadoop-2.9.1\data\namenode. Please check hdfs configuration.
java.net.URISyntaxException: Illegal character in opaque part at index 2: C:\BigData\hadoop-2.9.1\data\namenode
at java.net.URI$Parser.fail(URI.java:2848)
at java.net.URI$Parser.checkChars(URI.java:3021)
at java.net.URI$Parser.parse(URI.java:3058)
at java.net.URI.<init>(URI.java:588)
at org.apache.hadoop.hdfs.server.common.Util.stringAsURI(Util.java:49)
at org.apache.hadoop.hdfs.server.common.Util.stringCollectionAsURIs(Util.java:99)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getStorageDirs(FSNamesystem.java:1462)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNamespaceDirs(FSNamesystem.java:1417)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:670)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:666)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:728)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:953)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:932)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1673)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1741)
19/09/08 22:03:17 WARN common.Util: Path C:\BigData\hadoop-2.9.1\data\namenode should be specified as a URI in configuration files. Please update hdfs configuration.
19/09/08 22:03:17 ERROR common.Util: Syntax error in URI C:\BigData\hadoop-2.9.1\data\namenode. Please check hdfs configuration.
java.net.URISyntaxException: Illegal character in opaque part at index 2: C:\BigData\hadoop-2.9.1\data\namenode
at java.net.URI$Parser.fail(URI.java:2848)
at java.net.URI$Parser.checkChars(URI.java:3021)
at java.net.URI$Parser.parse(URI.java:3058)
at java.net.URI.<init>(URI.java:588)
at org.apache.hadoop.hdfs.server.common.Util.stringAsURI(Util.java:49)
at org.apache.hadoop.hdfs.server.common.Util.stringCollectionAsURIs(Util.java:99)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getStorageDirs(FSNamesystem.java:1462)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNamespaceEditsDirs(FSNamesystem.java:1507)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNamespaceEditsDirs(FSNamesystem.java:1476)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:670)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:666)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:728)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:953)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:932)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1673)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1741)
19/09/08 22:03:17 WARN common.Util: Path C:\BigData\hadoop-2.9.1\data\namenode should be specified as a URI in configuration files. Please update hdfs configuration.
19/09/08 22:03:17 INFO namenode.FSEditLog: Edit logging is async:true
19/09/08 22:03:17 INFO namenode.FSNamesystem: KeyProvider: null
19/09/08 22:03:17 INFO namenode.FSNamesystem: fsLock is fair: true
19/09/08 22:03:17 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
19/09/08 22:03:17 INFO namenode.FSNamesystem: fsOwner = User (auth:SIMPLE)
19/09/08 22:03:17 INFO namenode.FSNamesystem: supergroup = supergroup
19/09/08 22:03:17 INFO namenode.FSNamesystem: isPermissionEnabled = true
19/09/08 22:03:17 INFO namenode.FSNamesystem: HA Enabled: false
19/09/08 22:03:17 INFO common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
19/09/08 22:03:17 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000
19/09/08 22:03:17 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
19/09/08 22:03:17 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
19/09/08 22:03:17 INFO blockmanagement.BlockManager: The block deletion will start around 2019 Sep 08 22:03:17
19/09/08 22:03:17 INFO util.GSet: Computing capacity for map BlocksMap
19/09/08 22:03:17 INFO util.GSet: VM type = 32-bit
19/09/08 22:03:17 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
19/09/08 22:03:17 INFO util.GSet: capacity = 2^22 = 4194304 entries
19/09/08 22:03:17 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
19/09/08 22:03:17 WARN conf.Configuration: No unit for dfs.heartbeat.interval(3) assuming SECONDS
19/09/08 22:03:17 WARN conf.Configuration: No unit for dfs.namenode.safemode.extension(30000) assuming MILLISECONDS
19/09/08 22:03:17 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
19/09/08 22:03:17 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.min.datanodes = 0
19/09/08 22:03:17 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.extension = 30000
19/09/08 22:03:17 INFO blockmanagement.BlockManager: defaultReplication = 1
19/09/08 22:03:17 INFO blockmanagement.BlockManager: maxReplication = 512
19/09/08 22:03:17 INFO blockmanagement.BlockManager: minReplication = 1
19/09/08 22:03:17 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
19/09/08 22:03:17 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
19/09/08 22:03:17 INFO blockmanagement.BlockManager: encryptDataTransfer = false
19/09/08 22:03:17 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
19/09/08 22:03:17 INFO namenode.FSNamesystem: Append Enabled: true
19/09/08 22:03:17 INFO util.GSet: Computing capacity for map INodeMap
19/09/08 22:03:17 INFO util.GSet: VM type = 32-bit
19/09/08 22:03:17 INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
19/09/08 22:03:17 INFO util.GSet: capacity = 2^21 = 2097152 entries
19/09/08 22:03:17 INFO namenode.FSDirectory: ACLs enabled? false
19/09/08 22:03:17 INFO namenode.FSDirectory: XAttrs enabled? true
19/09/08 22:03:17 INFO namenode.NameNode: Caching file names occurring more than 10 times
19/09/08 22:03:17 INFO snapshot.SnapshotManager: Loaded config captureOpenFiles: falseskipCaptureAccessTimeOnlyChange: false
19/09/08 22:03:17 INFO util.GSet: Computing capacity for map cachedBlocks
19/09/08 22:03:17 INFO util.GSet: VM type = 32-bit
19/09/08 22:03:17 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
19/09/08 22:03:17 INFO util.GSet: capacity = 2^19 = 524288 entries
19/09/08 22:03:17 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
19/09/08 22:03:17 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
19/09/08 22:03:17 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
19/09/08 22:03:17 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
19/09/08 22:03:17 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
19/09/08 22:03:17 INFO util.GSet: Computing capacity for map NameNodeRetryCache
19/09/08 22:03:17 INFO util.GSet: VM type = 32-bit
19/09/08 22:03:17 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
19/09/08 22:03:17 INFO util.GSet: capacity = 2^16 = 65536 entries
19/09/08 22:03:17 ERROR namenode.NameNode: Failed to start namenode.
java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)
at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:606)
at org.apache.hadoop.fs.FileUtil.canWrite(FileUtil.java:1006)
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:558)
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:518)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:370)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:226)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1048)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:681)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:666)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:728)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:953)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:932)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1673)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1741)
19/09/08 22:03:17 INFO util.ExitUtil: Exiting with status 1: java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
19/09/08 22:03:17 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at DESKTOP-T7R9JV1/192.168.1.101
************************************************************/
Best answer

The key error is:

Illegal character in opaque part at index 2

The value of dfs.namenode.name.dir (and, by default, dfs.namenode.edits.dir) is a bare Windows path:

C:\BigData\hadoop-2.9.1\data\namenode

Java's URI parser treats "C:" as a URI scheme and then rejects the backslash that follows it, which is exactly what the repeated URISyntaxException stack traces show. As the log itself warns ("Path ... should be specified as a URI in configuration files"), write the directory in file: form instead:

file:/C:/BigData/hadoop-2.9.1/data/namenode

Update hdfs-site.xml accordingly and restart the daemons. Note also that the final fatal error, java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0, typically means the Hadoop native Windows binaries (hadoop.dll and winutils.exe) are missing from %HADOOP_HOME%\bin or were built for a different Hadoop version or JVM bitness; the URI fix alone will not clear that error.
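A minimal sketch of the corrected configuration, using the storage directory that appears in the log (only dfs.namenode.name.dir is shown in the log; other properties are unchanged):

```xml
<!-- hdfs-site.xml: storage directories must be file: URIs, not bare Windows paths -->
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/C:/BigData/hadoop-2.9.1/data/namenode</value>
  </property>
</configuration>
```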
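The parse failure can be reproduced outside Hadoop. This standalone snippet (class name is mine) feeds both forms of the path to java.net.URI, the same class the stack traces point at:

```java
import java.net.URI;
import java.net.URISyntaxException;

public class UriCheck {
    public static void main(String[] args) {
        // A bare Windows path: "C:" parses as a URI scheme, and the
        // backslash after the colon is illegal in the opaque part.
        try {
            new URI("C:\\BigData\\hadoop-2.9.1\\data\\namenode");
        } catch (URISyntaxException e) {
            // prints "rejected: Illegal character in opaque part"
            System.out.println("rejected: " + e.getReason());
        }

        // The same location written as a file: URI parses cleanly.
        try {
            URI ok = new URI("file:/C:/BigData/hadoop-2.9.1/data/namenode");
            // prints "accepted: file"
            System.out.println("accepted: " + ok.getScheme());
        } catch (URISyntaxException e) {
            System.out.println("unexpected: " + e.getReason());
        }
    }
}
```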
Regarding "hadoop - NodeManager shuts down at startup", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/57836103/