
sql - Hive: java.lang.OutOfMemoryError: Java heap space and Job running in-process (local Hadoop)


My setup: a 4-node cluster (1 master, 3 workers) on Google Cloud Platform, running NixOS Linux.

I have been using the TPC-DS toolkit to generate the data, and the queries are the standard ones as well. On smaller datasets / simpler queries they work just fine. I took the queries from here: https://github.com/hortonworks/hive-testbench/tree/hdp3/sample-queries-tpcds

This is the first one, query1.sql:

WITH customer_total_return AS 
(
SELECT sr_customer_sk AS ctr_customer_sk ,
sr_store_sk AS ctr_store_sk ,
Sum(sr_fee) AS ctr_total_return
FROM store_returns ,
date_dim
WHERE sr_returned_date_sk = d_date_sk
AND d_year =2000
GROUP BY sr_customer_sk ,
sr_store_sk)
SELECT c_customer_id
FROM customer_total_return ctr1 ,
store ,
customer
WHERE ctr1.ctr_total_return >
(
SELECT Avg(ctr_total_return)*1.2
FROM customer_total_return ctr2
WHERE ctr1.ctr_store_sk = ctr2.ctr_store_sk)
AND s_store_sk = ctr1.ctr_store_sk
AND s_state = 'NM'
AND ctr1.ctr_customer_sk = c_customer_sk
ORDER BY c_customer_id limit 100;

At first the problem was that I could not get it to run successfully at all; it kept hitting java.lang.OutOfMemoryError: Java heap space.

What I did was:

  1. Increased the capacity of the GCP nodes (up to 7.5 GB of RAM and dual-core CPUs)
  2. Set these variables in the Hive CLI (a sizing note follows after this list):
set mapreduce.map.memory.mb=2048;
set mapreduce.map.java.opts=-Xmx1024m;
set mapreduce.reduce.memory.mb=4096;
set mapreduce.reduce.java.opts=-Xmx3072m;
set mapred.child.java.opts=-Xmx1024m;

  3. Restarted Hive
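For reference, a common sizing convention (an assumption here, not something from the original settings) is to keep each JVM's -Xmx at roughly 75-80% of its YARN container size, leaving headroom for off-heap overhead. With the container sizes above, that would look something like:

-- roughly 80% of the 2048 MB map container
set mapreduce.map.memory.mb=2048;
set mapreduce.map.java.opts=-Xmx1638m;
-- roughly 80% of the 4096 MB reduce container
set mapreduce.reduce.memory.mb=4096;
set mapreduce.reduce.java.opts=-Xmx3276m;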

This query (along with other similar ones) works when it comes to a 1 GB dataset. I monitored the situation with htop: memory usage did not exceed 2 GB, while both CPU cores were at close to 100% almost the whole time.

The problem now is that with more complex queries on a larger dataset, the errors start again:

The query runs for a good minute or so, but ends in failure. The full stack trace:

hive> with customer_total_return as
> (select sr_customer_sk as ctr_customer_sk
> ,sr_store_sk as ctr_store_sk
> ,sum(SR_FEE) as ctr_total_return
> from store_returns
> ,date_dim
> where sr_returned_date_sk = d_date_sk
> and d_year =2000
> group by sr_customer_sk
> ,sr_store_sk)
> select c_customer_id
> from customer_total_return ctr1
> ,store
> ,customer
> where ctr1.ctr_total_return > (select avg(ctr_total_return)*1.2
> from customer_total_return ctr2
> where ctr1.ctr_store_sk = ctr2.ctr_store_sk)
> and s_store_sk = ctr1.ctr_store_sk
> and s_state = 'TN'
> and ctr1.ctr_customer_sk = c_customer_sk
> order by c_customer_id
> limit 100;
No Stats for default@store_returns, Columns: sr_returned_date_sk, sr_fee, sr_store_sk, sr_customer_sk
No Stats for default@date_dim, Columns: d_date_sk, d_year
No Stats for default@store, Columns: s_state, s_store_sk
No Stats for default@customer, Columns: c_customer_sk, c_customer_id
Query ID = root_20190811164854_c253c67c-ef94-4351-b4d3-74ede4c5d990
Total jobs = 14
Stage-29 is selected by condition resolver.
Stage-1 is filtered out by condition resolver.
Stage-30 is selected by condition resolver.
Stage-10 is filtered out by condition resolver.
SLF4J: Found binding in [jar:file:/nix/store/jjm6636r99r0irqa03dc1za9gs2b4fx6-source/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/nix/store/q9jpwzbqbg8k8322q785xfavg0p0v18i-hadoop-3.1.1/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
Execution completed successfully
MapredLocal task succeeded
SLF4J: Found binding in [jar:file:/nix/store/jjm6636r99r0irqa03dc1za9gs2b4fx6-source/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/nix/store/q9jpwzbqbg8k8322q785xfavg0p0v18i-hadoop-3.1.1/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Execution completed successfully
MapredLocal task succeeded
Launching Job 3 out of 14
Number of reduce tasks is set to 0 since there's no reduce operator
Job running in-process (local Hadoop)
2019-08-11 16:49:19,415 Stage-20 map = 0%, reduce = 0%
2019-08-11 16:49:22,418 Stage-20 map = 100%, reduce = 0%
Ended Job = job_local404291246_0005
Launching Job 4 out of 14
Number of reduce tasks is set to 0 since there's no reduce operator
Job running in-process (local Hadoop)
2019-08-11 16:49:24,718 Stage-22 map = 0%, reduce = 0%
2019-08-11 16:49:27,721 Stage-22 map = 100%, reduce = 0%
Ended Job = job_local566999875_0006
Launching Job 5 out of 14
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
Job running in-process (local Hadoop)
2019-08-11 16:49:29,958 Stage-2 map = 0%, reduce = 0%
2019-08-11 16:49:33,970 Stage-2 map = 100%, reduce = 0%
2019-08-11 16:49:35,974 Stage-2 map = 100%, reduce = 100%
Ended Job = job_local1440279093_0007
Launching Job 6 out of 14
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
Job running in-process (local Hadoop)
2019-08-11 16:49:37,235 Stage-11 map = 0%, reduce = 0%
2019-08-11 16:49:40,421 Stage-11 map = 100%, reduce = 0%
2019-08-11 16:49:42,424 Stage-11 map = 100%, reduce = 100%
Ended Job = job_local1508103541_0008
SLF4J: Found binding in [jar:file:/nix/store/jjm6636r99r0irqa03dc1za9gs2b4fx6-source/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/nix/store/q9jpwzbqbg8k8322q785xfavg0p0v18i-hadoop-3.1.1/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]

2019-08-11 16:49:51 Dump the side-table for tag: 1 with group count: 21 into file: file:/tmp/root/3ab30b3b-380d-40f5-9f72-68788d998013/hive_2019-08-11_16-48-54_393_105456265244058313-1/-local-10019/HashTable-Stage-19/MapJoin-mapfile71--.hashtable
Execution completed successfully
MapredLocal task succeeded
Launching Job 7 out of 14
Number of reduce tasks is set to 0 since there's no reduce operator
Job running in-process (local Hadoop)
2019-08-11 16:49:53,956 Stage-19 map = 100%, reduce = 0%
Ended Job = job_local2121921517_0009
Stage-26 is filtered out by condition resolver.
Stage-27 is selected by condition resolver.
Stage-4 is filtered out by condition resolver.

2019-08-11 16:50:01 Dump the side-table for tag: 0 with group count: 99162 into file: file:/tmp/root/3ab30b3b-380d-40f5-9f72-68788d998013/hive_2019-08-11_16-48-54_393_105456265244058313-1/-local-10017/HashTable-Stage-17/MapJoin-mapfile60--.hashtable
2019-08-11 16:50:02 Uploaded 1 File to: file:/tmp/root/3ab30b3b-380d-40f5-9f72-68788d998013/hive_2019-08-11_16-48-54_393_105456265244058313-1/-local-10017/HashTable-Stage-17/MapJoin-mapfile60--.hashtable (2832042 bytes)
Execution completed successfully
MapredLocal task succeeded
Launching Job 9 out of 14
Number of reduce tasks is set to 0 since there's no reduce operator
Job running in-process (local Hadoop)
2019-08-11 16:50:04,004 Stage-17 map = 0%, reduce = 0%
2019-08-11 16:50:05,005 Stage-17 map = 100%, reduce = 0%
Ended Job = job_local694362009_0010
Stage-24 is selected by condition resolver.
Stage-25 is filtered out by condition resolver.
Stage-5 is filtered out by condition resolver.

SLF4J: Found binding in [jar:file:/nix/store/q9jpwzbqbg8k8322q785xfavg0p0v18i-hadoop-3.1.1/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
2019-08-11 16:50:12 Starting to launch local task to process map join; maximum memory = 239075328
Execution completed successfully
MapredLocal task succeeded
Launching Job 11 out of 14
Number of reduce tasks is set to 0 since there's no reduce operator
Job running in-process (local Hadoop)
2019-08-11 16:50:14,254 Stage-13 map = 100%, reduce = 0%
Ended Job = job_local1812693452_0011
Launching Job 12 out of 14
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
Job running in-process (local Hadoop)
2019-08-11 16:50:15,481 Stage-6 map = 0%, reduce = 0%
Ended Job = job_local920309638_0012 with errors
Error during job, obtaining debugging information...
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Stage-Stage-20: HDFS Read: 8662606197 HDFS Write: 0 SUCCESS
Stage-Stage-22: HDFS Read: 9339349675 HDFS Write: 0 SUCCESS
Stage-Stage-2: HDFS Read: 9409277766 HDFS Write: 0 SUCCESS
Stage-Stage-11: HDFS Read: 9409277766 HDFS Write: 0 SUCCESS
Stage-Stage-19: HDFS Read: 4704638883 HDFS Write: 0 SUCCESS
Stage-Stage-17: HDFS Read: 4771516428 HDFS Write: 0 SUCCESS
Stage-Stage-13: HDFS Read: 4771516428 HDFS Write: 0 SUCCESS
Stage-Stage-6: HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec

The problem in the hive.log file is still the same:

java.lang.Exception: java.lang.OutOfMemoryError: Java heap space

I realized that my worker nodes are not actually doing anything at all (htop showed them idle while only the master node was working), and even the stack trace says:

Job running in-process (local Hadoop)

How do I make Hive use HDFS and not just local Hadoop? Running hdfs dfs -df -h hdfs:<redacted>:9000/ returns

Filesystem              Size    Used    Available  Use%
hdfs://<redacted>:9000  88.5 G  34.3 G  35.2 G     39%

which is correct: I have 3 worker nodes with 30 GB of disk each.
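For context, the "Job running in-process (local Hadoop)" lines mean the MapReduce stages are being executed inside the Hive CLI's own JVM (Hadoop's local job runner) instead of being submitted to the cluster, which is also why only the master node shows any load. A minimal check from the Hive CLI, assuming YARN is the intended execution framework, might look like:

set mapreduce.framework.name;           -- should report 'yarn' if jobs are to go to the cluster
set hive.exec.mode.local.auto;          -- if true, Hive may choose local mode for small jobs
set hive.exec.mode.local.auto=false;    -- disable automatic local-mode selection
set mapreduce.framework.name=yarn;      -- submit jobs to YARN rather than running them in-process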

Best Answer

java.lang.OutOfMemoryError: Java heap space happens when you try to push too much data through a single machine.

Based on the query provided, there are a few things you can try:

  1. Change your join conditions to explicit ones (remove them from the WHERE clause and use INNER/LEFT JOIN), for example (a sketch of the fully rewritten query appears near the end of this answer):
FROM     customer_total_return ctr1 
INNER JOIN store s
ON ctr1.ctr_store_sk = s.s_store_sk
AND s_state = 'NM'
INNER JOIN customer c
ON ctr1.ctr_customer_sk = c.c_customer_sk
  2. Check whether the data in one of the following fields is skewed:
    1. store_returns -> sr_returned_date_sk
    2. store_returns -> sr_store_sk
    3. store_returns -> sr_customer_sk
    4. customer -> c_customer_sk
    5. store -> s_store_sk

One of these keys may have a single value accounting for a high percentage of the rows, which can overload one node (when the data volume is large).
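A quick way to look for that kind of skew, sketched here for sr_store_sk (the same pattern applies to the other keys), is to count rows per key value and see whether one value dominates:

SELECT sr_store_sk, COUNT(*) AS cnt
FROM store_returns
GROUP BY sr_store_sk
ORDER BY cnt DESC
LIMIT 10;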

Basically, you are trying to rule out the possible causes of a single node being overloaded.
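For completeness, applying the explicit-join form from point 1 to the whole of query1.sql might look like the sketch below (same tables, columns and filters as the original query; only the join syntax changes):

WITH customer_total_return AS
(
  SELECT sr_customer_sk AS ctr_customer_sk,
         sr_store_sk    AS ctr_store_sk,
         Sum(sr_fee)    AS ctr_total_return
  FROM store_returns
  INNER JOIN date_dim
          ON sr_returned_date_sk = d_date_sk
  WHERE d_year = 2000
  GROUP BY sr_customer_sk, sr_store_sk
)
SELECT c_customer_id
FROM customer_total_return ctr1
INNER JOIN store s
        ON s.s_store_sk = ctr1.ctr_store_sk
       AND s.s_state = 'NM'
INNER JOIN customer c
        ON ctr1.ctr_customer_sk = c.c_customer_sk
WHERE ctr1.ctr_total_return >
      (SELECT Avg(ctr_total_return) * 1.2
       FROM customer_total_return ctr2
       WHERE ctr1.ctr_store_sk = ctr2.ctr_store_sk)
ORDER BY c_customer_id
LIMIT 100;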

Let me know if it helps.

Regarding "sql - Hive: java.lang.OutOfMemoryError: Java heap space and Job running in-process (local Hadoop)", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/57433675/
