I have installed CDH 5.5.2, and everything looks fine in Cloudera Manager until I click the Spark History Server UI link or the YARN History Server UI link. Those do not work. By "not working" I mean they simply cannot be reached from a browser.
I added the following lines to spark-defaults.conf:
spark.eventLog.dir=hdfs://name-node-1:8020/user/spark/applicationHistory
spark.eventLog.enabled=true
spark.yarn.historyServer.address=http://name-node-1:18088
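A quick way to confirm the symptom is a connectivity check from the machine running the browser (a sketch; the host name and port list are assumptions taken from the configuration above, so adjust them for your cluster):

```shell
#!/bin/sh
# Probe the History Server UI ports from outside the cluster.
# HOST and PORTS are assumptions based on the configuration above.
HOST="${1:-name-node-1}"
PORTS="18088 19888"   # Spark History Server UI, YARN JobHistory UI

for p in $PORTS; do
  if curl -s --connect-timeout 3 -o /dev/null "http://$HOST:$p/"; then
    echo "$HOST:$p reachable"
  else
    echo "$HOST:$p NOT reachable"
  fi
done
```

If the ports are listening on the host itself (e.g. `sudo netstat -tlnp | grep 18088` on name-node-1) but unreachable from outside, the problem is at the network level rather than a failed service.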
I also cannot start the service with the command:
sudo service spark-history-server start
When I go to Cloudera Manager -> Spark -> History Server, it shows as running on name-node-1, and I can start it from Cloudera Manager.
Here is the output of the Spark, YARN, HDFS, SCM, and Cloudera Manager logs:
name-node-1 INFO May 24, 2016 10:29 PM JobHistory
Starting scan to move intermediate done files
name-node-1 INFO May 24, 2016 10:29 PM StateChange
BLOCK* allocateBlock: /tmp/.cloudera_health_monitoring_canary_files/.canary_file_2016_05_24-22_29_59. BP-1451272641-10.128.0.2-1459245660194 blk_1073747330_6799{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-50e1e66a-ef5c-469e-ba0e-df1c259cbbae:NORMAL:10.128.0.3:50010|RBW], ReplicaUnderConstruction[[DISK]DS-4cacdc34-99a8-4d21-8744-40b5f5bd9919:NORMAL:10.128.0.4:50010|RBW], ReplicaUnderConstruction[[DISK]DS-09b4e549-2fcd-4ee4-8ccd-e5c15bdb3d7d:NORMAL:10.128.0.5:50010|RBW]]}
data-node-1 INFO May 24, 2016 10:29 PM DataNode
Receiving BP-1451272641-10.128.0.2-1459245660194:blk_1073747330_6799 src: /10.128.0.2:38325 dest: /10.128.0.3:50010
data-node-2 INFO May 24, 2016 10:29 PM DataNode
Receiving BP-1451272641-10.128.0.2-1459245660194:blk_1073747330_6799 src: /10.128.0.3:49410 dest: /10.128.0.4:50010
data-node-3 INFO May 24, 2016 10:29 PM DataNode
Receiving BP-1451272641-10.128.0.2-1459245660194:blk_1073747330_6799 src: /10.128.0.4:53572 dest: /10.128.0.5:50010
data-node-3 INFO May 24, 2016 10:29 PM DataNode
PacketResponder: BP-1451272641-10.128.0.2-1459245660194:blk_1073747330_6799, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
data-node-3 INFO May 24, 2016 10:29 PM clienttrace
src: /10.128.0.4:53572, dest: /10.128.0.5:50010, bytes: 56, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_375545611_68, offset: 0, srvID: 2690c629-9322-4b95-b70e-20270682fe5e, blockid: BP-1451272641-10.128.0.2-1459245660194:blk_1073747330_6799, duration: 8712883
data-node-2 INFO May 24, 2016 10:29 PM clienttrace
src: /10.128.0.3:49410, dest: /10.128.0.4:50010, bytes: 56, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_375545611_68, offset: 0, srvID: 9a9d8417-9b4e-482b-80c8-133eeb679c68, blockid: BP-1451272641-10.128.0.2-1459245660194:blk_1073747330_6799, duration: 9771398
name-node-1 INFO May 24, 2016 10:29 PM BlockStateChange
BLOCK* addStoredBlock: blockMap updated: 10.128.0.5:50010 is added to blk_1073747330_6799{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-50e1e66a-ef5c-469e-ba0e-df1c259cbbae:NORMAL:10.128.0.3:50010|RBW], ReplicaUnderConstruction[[DISK]DS-4cacdc34-99a8-4d21-8744-40b5f5bd9919:NORMAL:10.128.0.4:50010|RBW], ReplicaUnderConstruction[[DISK]DS-09b4e549-2fcd-4ee4-8ccd-e5c15bdb3d7d:NORMAL:10.128.0.5:50010|RBW]]} size 0
data-node-2 INFO May 24, 2016 10:29 PM DataNode
PacketResponder: BP-1451272641-10.128.0.2-1459245660194:blk_1073747330_6799, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
data-node-1 INFO May 24, 2016 10:29 PM DataNode
PacketResponder: BP-1451272641-10.128.0.2-1459245660194:blk_1073747330_6799, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
data-node-1 INFO May 24, 2016 10:29 PM clienttrace
src: /10.128.0.2:38325, dest: /10.128.0.3:50010, bytes: 56, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_375545611_68, offset: 0, srvID: a5a064ce-0710-462a-b8b2-489493fd7d8f, blockid: BP-1451272641-10.128.0.2-1459245660194:blk_1073747330_6799, duration: 10857807
name-node-1 INFO May 24, 2016 10:29 PM BlockStateChange
BLOCK* addStoredBlock: blockMap updated: 10.128.0.4:50010 is added to blk_1073747330_6799{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-50e1e66a-ef5c-469e-ba0e-df1c259cbbae:NORMAL:10.128.0.3:50010|RBW], ReplicaUnderConstruction[[DISK]DS-4cacdc34-99a8-4d21-8744-40b5f5bd9919:NORMAL:10.128.0.4:50010|RBW], ReplicaUnderConstruction[[DISK]DS-09b4e549-2fcd-4ee4-8ccd-e5c15bdb3d7d:NORMAL:10.128.0.5:50010|RBW]]} size 0
name-node-1 INFO May 24, 2016 10:29 PM BlockStateChange
BLOCK* addStoredBlock: blockMap updated: 10.128.0.3:50010 is added to blk_1073747330_6799{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-50e1e66a-ef5c-469e-ba0e-df1c259cbbae:NORMAL:10.128.0.3:50010|RBW], ReplicaUnderConstruction[[DISK]DS-4cacdc34-99a8-4d21-8744-40b5f5bd9919:NORMAL:10.128.0.4:50010|RBW], ReplicaUnderConstruction[[DISK]DS-09b4e549-2fcd-4ee4-8ccd-e5c15bdb3d7d:NORMAL:10.128.0.5:50010|RBW]]} size 0
name-node-1 INFO May 24, 2016 10:29 PM StateChange
DIR* completeFile: /tmp/.cloudera_health_monitoring_canary_files/.canary_file_2016_05_24-22_29_59 is closed by DFSClient_NONMAPREDUCE_375545611_68
name-node-1 INFO May 24, 2016 10:29 PM BlockStateChange
BLOCK* addToInvalidates: blk_1073747330_6799 10.128.0.3:50010 10.128.0.4:50010 10.128.0.5:50010
name-node-1 INFO May 24, 2016 10:30 PM BlockStateChange
BLOCK* BlockManager: ask 10.128.0.5:50010 to delete [blk_1073747330_6799]
data-node-3 INFO May 24, 2016 10:30 PM FsDatasetAsyncDiskService
Deleted BP-1451272641-10.128.0.2-1459245660194 blk_1073747330_6799 file /data/data01/dfs/dn/current/BP-1451272641-10.128.0.2-1459245660194/current/finalized/subdir0/subdir21/blk_1073747330
data-node-3 INFO May 24, 2016 10:30 PM FsDatasetAsyncDiskService
Scheduling blk_1073747330_6799 file /data/data01/dfs/dn/current/BP-1451272641-10.128.0.2-1459245660194/current/finalized/subdir0/subdir21/blk_1073747330 for deletion
name-node-1 INFO May 24, 2016 10:30 PM BlockStateChange
BLOCK* BlockManager: ask 10.128.0.4:50010 to delete [blk_1073747330_6799]
data-node-2 INFO May 24, 2016 10:30 PM FsDatasetAsyncDiskService
Scheduling blk_1073747330_6799 file /data/data01/dfs/dn/current/BP-1451272641-10.128.0.2-1459245660194/current/finalized/subdir0/subdir21/blk_1073747330 for deletion
data-node-2 INFO May 24, 2016 10:30 PM FsDatasetAsyncDiskService
Deleted BP-1451272641-10.128.0.2-1459245660194 blk_1073747330_6799 file /data/data01/dfs/dn/current/BP-1451272641-10.128.0.2-1459245660194/current/finalized/subdir0/subdir21/blk_1073747330
name-node-1 INFO May 24, 2016 10:30 PM BlockStateChange
BLOCK* BlockManager: ask 10.128.0.3:50010 to delete [blk_1073747330_6799]
data-node-1 INFO May 24, 2016 10:30 PM FsDatasetAsyncDiskService
Deleted BP-1451272641-10.128.0.2-1459245660194 blk_1073747330_6799 file /data/data01/dfs/dn/current/BP-1451272641-10.128.0.2-1459245660194/current/finalized/subdir0/subdir21/blk_1073747330
data-node-1 INFO May 24, 2016 10:30 PM FsDatasetAsyncDiskService
Scheduling blk_1073747330_6799 file /data/data01/dfs/dn/current/BP-1451272641-10.128.0.2-1459245660194/current/finalized/subdir0/subdir21/blk_1073747330 for deletion
name-node-1 INFO May 24, 2016 10:30 PM FsHistoryProvider
Replaying log path: hdfs://name-node-1:8020/user/spark/applicationHistory/application_1464057137814_0006
name-node-1 INFO May 24, 2016 10:30 PM LDBTimeSeriesRollupManager
Running the LDBTimeSeriesRollupManager at 2016-05-24T22:30:15.155Z, forMigratedData=false
name-node-1 INFO May 24, 2016 10:30 PM LDBTimeSeriesRollupManager
Starting rollup from raw to rollup=TEN_MINUTELY for rollupTimestamp=2016-05-24T22:30:00.000Z
name-node-1 INFO May 24, 2016 10:30 PM LDBTimeSeriesRollupManager
Finished rollup: duration=PT0.729S, numStreamsChecked=38563, numStreamsRolledUp=1295
name-node-1 INFO May 24, 2016 10:30 PM FsHistoryProvider
Replaying log path: hdfs://name-node-1:8020/user/spark/applicationHistory/application_1464057137814_0006
name-node-1 INFO May 24, 2016 10:30 PM LDBTimeSeriesRollupManager
Running the LDBTimeSeriesRollupManager at 2016-05-24T22:30:19.235Z, forMigratedData=false
name-node-1 INFO May 24, 2016 10:30 PM LDBTimeSeriesRollupManager
Starting rollup from raw to rollup=TEN_MINUTELY for rollupTimestamp=2016-05-24T22:30:00.000Z
name-node-1 INFO May 24, 2016 10:30 PM CacheReplicationMonitor
Rescanning after 30000 milliseconds
name-node-1 INFO May 24, 2016 10:30 PM CacheReplicationMonitor
Scanned 0 directive(s) and 0 block(s) in 2 millisecond(s).
name-node-1 INFO May 24, 2016 10:30 PM LDBTimeSeriesRollupManager
Finished rollup: duration=PT5.328S, numStreamsChecked=63547, numStreamsRolledUp=23639
name-node-1 INFO May 24, 2016 10:30 PM metastore
Opened a connection to metastore, current connections: 1
name-node-1 INFO May 24, 2016 10:30 PM metastore
Trying to connect to metastore with URI thrift://name-node-1:9083
name-node-1 INFO May 24, 2016 10:30 PM metastore
Connected to metastore.
name-node-1 INFO May 24, 2016 10:30 PM metastore
Closed a connection to metastore, current connections: 0
name-node-1 INFO May 24, 2016 10:30 PM SearcherManager
Warming up the FieldCache
name-node-1 INFO May 24, 2016 10:30 PM SearcherManager
FieldCache built for 192 docs using 0.00 MB of space.
name-node-1 INFO May 24, 2016 10:30 PM FsHistoryProvider
Replaying log path: hdfs://name-node-1:8020/user/spark/applicationHistory/application_1464057137814_0006
Best Answer
This turned out to be an issue with my cluster running in the cloud. I needed to create a firewall rule for each port:
gcloud compute firewall-rules create allow-http --description "Incoming http allowed." --allow tcp:80 --format json
gcloud compute firewall-rules create allow-spark-ui --description "allow-spark-ui." --allow tcp:18088 --format json
gcloud compute firewall-rules create allow-hue --description "allow-hue." --allow tcp:8888 --format json
gcloud compute firewall-rules create allow-spark-rpc-master --description "allow-spark-rpc-master." --allow tcp:7077 --format json
gcloud compute firewall-rules create allow-spark-rpc-worker --description "allow-spark-rpc-worker." --allow tcp:7078 --format json
gcloud compute firewall-rules create allow-spark-webui-master --description "allow-spark-webui-master." --allow tcp:18080 --format json
gcloud compute firewall-rules create allow-spark-webui-worker --description "allow-spark-webui-worker." --allow tcp:18081 --format json
gcloud compute firewall-rules create allow-yarn-resourcemanager-address --description "yarn-resourcemanager-address." --allow tcp:8032 --format json
gcloud compute firewall-rules create allow-yarn-resourcemanager-scheduler-address --description "yarn-resourcemanager-scheduler-address." --allow tcp:8030 --format json
gcloud compute firewall-rules create allow-yarn-resourcemanager-resource-tracker-address --description "allow-yarn-resourcemanager-resource-tracker-address." --allow tcp:8031 --format json
gcloud compute firewall-rules create allow-yarn-resourcemanager-admin-address --description "allow-yarn-resourcemanager-admin-address." --allow tcp:8033 --format json
gcloud compute firewall-rules create allow-yarn-resourcemanager-webapp-address --description "allow-yarn-resourcemanager-webapp-address." --allow tcp:8088 --format json
gcloud compute firewall-rules create allow-yarn-resourcemanager-webapp-https-address --description "allow-yarn-resourcemanager-webapp-https-address." --allow tcp:8090 --format json
gcloud compute firewall-rules create allow-yarn-historyserver --description "allow-yarn-historyserver." --allow tcp:19888 --format json
gcloud compute firewall-rules create allow-oozie-webui --description "Allow Oozie Web UI." --allow tcp:11000 --format json
gcloud compute firewall-rules create zeppelin-webui --description "Zeppelin UI." --allow tcp:8080 --format json
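The repeated create commands above can also be generated from a simple name/port table. This is a sketch, not part of the original answer; only a few sample rules are listed, so extend the table with the rest before running it:

```shell
#!/bin/sh
# Build the gcloud commands from a name/port list instead of typing each one.
# Sample entries mirror a few of the rules above; add the remaining ports.
CMDS=$(while read -r name port; do
  printf 'gcloud compute firewall-rules create %s --description "%s." --allow tcp:%s --format json\n' \
    "$name" "$name" "$port"
done <<'EOF'
allow-hue 8888
allow-spark-webui-master 18080
allow-yarn-historyserver 19888
EOF
)
echo "$CMDS"
# Review the generated commands, then run them, e.g.: eval "$CMDS"
```

After creating the rules, `gcloud compute firewall-rules list` should show them. Note that rules like these open the ports to any source address; consider adding `--source-ranges` to restrict access to known IPs.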
Regarding "hadoop - Cloudera Manager Yarn and Spark UI not working", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/37399802/