I have installed CDH 5.5.2, and everything looks fine in Cloudera Manager until I click the Spark History Server UI link or the YARN History Server UI link. Neither works; by "not working" I mean they cannot be reached from a browser at all.
I added the following lines to spark-defaults.conf:
spark.eventLog.dir=hdfs://name-node-1:8020/user/spark/applicationHistory
spark.eventLog.enabled=true
spark.yarn.historyServer.address=http://name-node-1:18088
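For reference, spark.eventLog.dir must point at a directory that already exists in HDFS and is writable by the spark user. A quick way to check this (standard HDFS shell commands; the path is taken from the config above):
hdfs dfs -ls hdfs://name-node-1:8020/user/spark/applicationHistory
sudo -u hdfs hdfs dfs -chown spark:spark /user/spark/applicationHistory   # only if ownership turns out to be wrong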
I am also unable to start the service with the command
sudo service spark-history-server start
When I go to Cloudera Manager -> Spark -> History Server, it shows as running on name-node-1, and I can start it from Cloudera Manager.
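For what it's worth, the symptom can be narrowed down from outside the cluster by probing the ports directly (a minimal sketch, assuming the default ports; a timeout rather than "connection refused" usually points at a firewall dropping the traffic):
# Probe the Spark History Server and YARN History Server web UIs from a machine outside the cluster.
curl --connect-timeout 5 -sI http://name-node-1:18088 || echo "Spark History Server UI unreachable"
curl --connect-timeout 5 -sI http://name-node-1:19888 || echo "YARN History Server UI unreachable"
nc -zv -w 5 name-node-1 18088    # equivalent check with netcat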
Here is the output of the Spark, YARN, HDFS, SCM, and Cloudera Manager logs:
name-node-1 INFO May 24, 2016 10:29 PM JobHistory
Starting scan to move intermediate done files
name-node-1 INFO May 24, 2016 10:29 PM StateChange
BLOCK* allocateBlock: /tmp/.cloudera_health_monitoring_canary_files/.canary_file_2016_05_24-22_29_59. BP-1451272641-10.128.0.2-1459245660194 blk_1073747330_6799{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-50e1e66a-ef5c-469e-ba0e-df1c259cbbae:NORMAL:10.128.0.3:50010|RBW], ReplicaUnderConstruction[[DISK]DS-4cacdc34-99a8-4d21-8744-40b5f5bd9919:NORMAL:10.128.0.4:50010|RBW], ReplicaUnderConstruction[[DISK]DS-09b4e549-2fcd-4ee4-8ccd-e5c15bdb3d7d:NORMAL:10.128.0.5:50010|RBW]]}
data-node-1 INFO May 24, 2016 10:29 PM DataNode
Receiving BP-1451272641-10.128.0.2-1459245660194:blk_1073747330_6799 src: /10.128.0.2:38325 dest: /10.128.0.3:50010
data-node-2 INFO May 24, 2016 10:29 PM DataNode
Receiving BP-1451272641-10.128.0.2-1459245660194:blk_1073747330_6799 src: /10.128.0.3:49410 dest: /10.128.0.4:50010
data-node-3 INFO May 24, 2016 10:29 PM DataNode
Receiving BP-1451272641-10.128.0.2-1459245660194:blk_1073747330_6799 src: /10.128.0.4:53572 dest: /10.128.0.5:50010
data-node-3 INFO May 24, 2016 10:29 PM DataNode
PacketResponder: BP-1451272641-10.128.0.2-1459245660194:blk_1073747330_6799, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
data-node-3 INFO May 24, 2016 10:29 PM clienttrace
src: /10.128.0.4:53572, dest: /10.128.0.5:50010, bytes: 56, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_375545611_68, offset: 0, srvID: 2690c629-9322-4b95-b70e-20270682fe5e, blockid: BP-1451272641-10.128.0.2-1459245660194:blk_1073747330_6799, duration: 8712883
data-node-2 INFO May 24, 2016 10:29 PM clienttrace
src: /10.128.0.3:49410, dest: /10.128.0.4:50010, bytes: 56, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_375545611_68, offset: 0, srvID: 9a9d8417-9b4e-482b-80c8-133eeb679c68, blockid: BP-1451272641-10.128.0.2-1459245660194:blk_1073747330_6799, duration: 9771398
name-node-1 INFO May 24, 2016 10:29 PM BlockStateChange
BLOCK* addStoredBlock: blockMap updated: 10.128.0.5:50010 is added to blk_1073747330_6799{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-50e1e66a-ef5c-469e-ba0e-df1c259cbbae:NORMAL:10.128.0.3:50010|RBW], ReplicaUnderConstruction[[DISK]DS-4cacdc34-99a8-4d21-8744-40b5f5bd9919:NORMAL:10.128.0.4:50010|RBW], ReplicaUnderConstruction[[DISK]DS-09b4e549-2fcd-4ee4-8ccd-e5c15bdb3d7d:NORMAL:10.128.0.5:50010|RBW]]} size 0
data-node-2 INFO May 24, 2016 10:29 PM DataNode
PacketResponder: BP-1451272641-10.128.0.2-1459245660194:blk_1073747330_6799, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
data-node-1 INFO May 24, 2016 10:29 PM DataNode
PacketResponder: BP-1451272641-10.128.0.2-1459245660194:blk_1073747330_6799, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
data-node-1 INFO May 24, 2016 10:29 PM clienttrace
src: /10.128.0.2:38325, dest: /10.128.0.3:50010, bytes: 56, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_375545611_68, offset: 0, srvID: a5a064ce-0710-462a-b8b2-489493fd7d8f, blockid: BP-1451272641-10.128.0.2-1459245660194:blk_1073747330_6799, duration: 10857807
name-node-1 INFO May 24, 2016 10:29 PM BlockStateChange
BLOCK* addStoredBlock: blockMap updated: 10.128.0.4:50010 is added to blk_1073747330_6799{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-50e1e66a-ef5c-469e-ba0e-df1c259cbbae:NORMAL:10.128.0.3:50010|RBW], ReplicaUnderConstruction[[DISK]DS-4cacdc34-99a8-4d21-8744-40b5f5bd9919:NORMAL:10.128.0.4:50010|RBW], ReplicaUnderConstruction[[DISK]DS-09b4e549-2fcd-4ee4-8ccd-e5c15bdb3d7d:NORMAL:10.128.0.5:50010|RBW]]} size 0
name-node-1 INFO May 24, 2016 10:29 PM BlockStateChange
BLOCK* addStoredBlock: blockMap updated: 10.128.0.3:50010 is added to blk_1073747330_6799{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-50e1e66a-ef5c-469e-ba0e-df1c259cbbae:NORMAL:10.128.0.3:50010|RBW], ReplicaUnderConstruction[[DISK]DS-4cacdc34-99a8-4d21-8744-40b5f5bd9919:NORMAL:10.128.0.4:50010|RBW], ReplicaUnderConstruction[[DISK]DS-09b4e549-2fcd-4ee4-8ccd-e5c15bdb3d7d:NORMAL:10.128.0.5:50010|RBW]]} size 0
name-node-1 INFO May 24, 2016 10:29 PM StateChange
DIR* completeFile: /tmp/.cloudera_health_monitoring_canary_files/.canary_file_2016_05_24-22_29_59 is closed by DFSClient_NONMAPREDUCE_375545611_68
name-node-1 INFO May 24, 2016 10:29 PM BlockStateChange
BLOCK* addToInvalidates: blk_1073747330_6799 10.128.0.3:50010 10.128.0.4:50010 10.128.0.5:50010
name-node-1 INFO May 24, 2016 10:30 PM BlockStateChange
BLOCK* BlockManager: ask 10.128.0.5:50010 to delete [blk_1073747330_6799]
data-node-3 INFO May 24, 2016 10:30 PM FsDatasetAsyncDiskService
Deleted BP-1451272641-10.128.0.2-1459245660194 blk_1073747330_6799 file /data/data01/dfs/dn/current/BP-1451272641-10.128.0.2-1459245660194/current/finalized/subdir0/subdir21/blk_1073747330
data-node-3 INFO May 24, 2016 10:30 PM FsDatasetAsyncDiskService
Scheduling blk_1073747330_6799 file /data/data01/dfs/dn/current/BP-1451272641-10.128.0.2-1459245660194/current/finalized/subdir0/subdir21/blk_1073747330 for deletion
name-node-1 INFO May 24, 2016 10:30 PM BlockStateChange
BLOCK* BlockManager: ask 10.128.0.4:50010 to delete [blk_1073747330_6799]
data-node-2 INFO May 24, 2016 10:30 PM FsDatasetAsyncDiskService
Scheduling blk_1073747330_6799 file /data/data01/dfs/dn/current/BP-1451272641-10.128.0.2-1459245660194/current/finalized/subdir0/subdir21/blk_1073747330 for deletion
data-node-2 INFO May 24, 2016 10:30 PM FsDatasetAsyncDiskService
Deleted BP-1451272641-10.128.0.2-1459245660194 blk_1073747330_6799 file /data/data01/dfs/dn/current/BP-1451272641-10.128.0.2-1459245660194/current/finalized/subdir0/subdir21/blk_1073747330
name-node-1 INFO May 24, 2016 10:30 PM BlockStateChange
BLOCK* BlockManager: ask 10.128.0.3:50010 to delete [blk_1073747330_6799]
data-node-1 INFO May 24, 2016 10:30 PM FsDatasetAsyncDiskService
Deleted BP-1451272641-10.128.0.2-1459245660194 blk_1073747330_6799 file /data/data01/dfs/dn/current/BP-1451272641-10.128.0.2-1459245660194/current/finalized/subdir0/subdir21/blk_1073747330
data-node-1 INFO May 24, 2016 10:30 PM FsDatasetAsyncDiskService
Scheduling blk_1073747330_6799 file /data/data01/dfs/dn/current/BP-1451272641-10.128.0.2-1459245660194/current/finalized/subdir0/subdir21/blk_1073747330 for deletion
name-node-1 INFO May 24, 2016 10:30 PM FsHistoryProvider
Replaying log path: hdfs://name-node-1:8020/user/spark/applicationHistory/application_1464057137814_0006
name-node-1 INFO May 24, 2016 10:30 PM LDBTimeSeriesRollupManager
Running the LDBTimeSeriesRollupManager at 2016-05-24T22:30:15.155Z, forMigratedData=false
name-node-1 INFO May 24, 2016 10:30 PM LDBTimeSeriesRollupManager
Starting rollup from raw to rollup=TEN_MINUTELY for rollupTimestamp=2016-05-24T22:30:00.000Z
name-node-1 INFO May 24, 2016 10:30 PM LDBTimeSeriesRollupManager
Finished rollup: duration=PT0.729S, numStreamsChecked=38563, numStreamsRolledUp=1295
name-node-1 INFO May 24, 2016 10:30 PM FsHistoryProvider
Replaying log path: hdfs://name-node-1:8020/user/spark/applicationHistory/application_1464057137814_0006
name-node-1 INFO May 24, 2016 10:30 PM LDBTimeSeriesRollupManager
Running the LDBTimeSeriesRollupManager at 2016-05-24T22:30:19.235Z, forMigratedData=false
name-node-1 INFO May 24, 2016 10:30 PM LDBTimeSeriesRollupManager
Starting rollup from raw to rollup=TEN_MINUTELY for rollupTimestamp=2016-05-24T22:30:00.000Z
name-node-1 INFO May 24, 2016 10:30 PM CacheReplicationMonitor
Rescanning after 30000 milliseconds
name-node-1 INFO May 24, 2016 10:30 PM CacheReplicationMonitor
Scanned 0 directive(s) and 0 block(s) in 2 millisecond(s).
name-node-1 INFO May 24, 2016 10:30 PM LDBTimeSeriesRollupManager
Finished rollup: duration=PT5.328S, numStreamsChecked=63547, numStreamsRolledUp=23639
name-node-1 INFO May 24, 2016 10:30 PM metastore
Opened a connection to metastore, current connections: 1
name-node-1 INFO May 24, 2016 10:30 PM metastore
Trying to connect to metastore with URI thrift://name-node-1:9083
name-node-1 INFO May 24, 2016 10:30 PM metastore
Connected to metastore.
name-node-1 INFO May 24, 2016 10:30 PM metastore
Closed a connection to metastore, current connections: 0
name-node-1 INFO May 24, 2016 10:30 PM SearcherManager
Warming up the FieldCache
name-node-1 INFO May 24, 2016 10:30 PM SearcherManager
FieldCache built for 192 docs using 0.00 MB of space.
name-node-1 INFO May 24, 2016 10:30 PM FsHistoryProvider
Replaying log path: hdfs://name-node-1:8020/user/spark/applicationHistory/application_1464057137814_0006
Best Answer
This was a problem with my cluster running in the cloud. I needed to create a firewall rule for each port:
gcloud compute firewall-rules create allow-http --description "Incoming http allowed." --allow tcp:80 --format json
gcloud compute firewall-rules create allow-http --description "allow-spark-ui." --allow tcp:18088 --format json
gcloud compute firewall-rules create allow-hue --description "allow-hue." --allow tcp:8888 --format json
gcloud compute firewall-rules create allow-spark-rpc-master --description "allow-spark-rpc-master." --allow tcp:7077 --format json
gcloud compute firewall-rules create allow-spark-rpc-worker --description "allow-spark-rpc-worker." --allow tcp:7078 --format json
gcloud compute firewall-rules create allow-spark-webui-master --description "allow-spark-webui-master." --allow tcp:18080 --format json
gcloud compute firewall-rules create allow-spark-webui-worker --description "allow-spark-webui-worker." --allow tcp:18081 --format json
gcloud compute firewall-rules create allow-yarn-resourcemanager-address --description "yarn-resourcemanager-address." --allow tcp:8032 --format json
gcloud compute firewall-rules create allow-yarn-resourcemanager-scheduler-address --description "yarn-resourcemanager-scheduler-address." --allow tcp:8030 --format json
gcloud compute firewall-rules create allow-yarn-resourcemanager-resource-tracker-address --description "allow-yarn-resourcemanager-resource-tracker-address." --allow tcp:8031 --format json
gcloud compute firewall-rules create allow-yarn-resourcemanager-admin-address --description "allow-yarn-resourcemanager-admin-address." --allow tcp:8033 --format json
gcloud compute firewall-rules create allow-yarn-resourcemanager-webapp-address --description "allow-yarn-resourcemanager-webapp-address." --allow tcp:8088 --format json
gcloud compute firewall-rules create allow-yarn-resourcemanager-webapp-https-address --description "allow-yarn-resourcemanager-webapp-https-address." --allow tcp:8090 --format json
gcloud compute firewall-rules create allow-yarn-historyserver --description "allow-yarn-historyserver." --allow tcp:19888 --format json
gcloud compute firewall-rules create allow-oozie-webui --description "Allow Oozie Web UI." --allow tcp:11000 --format json
gcloud compute firewall-rules create zeppelin-webui --description "Zeppelin UI." --allow tcp:8080 --format json
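After creating the rules, it is easy to verify that they took effect (a short sanity check; the rule name is the one from the commands above):
# List the rules and confirm which ports they open.
gcloud compute firewall-rules list --format json
gcloud compute firewall-rules describe allow-spark-ui --format json
# Re-test one of the UIs from outside the network; it should now return HTTP headers.
curl -sI http://name-node-1:18088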
Regarding "hadoop - Cloudera Manager Yarn and Spark UI not working", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/37399802/