
java - Hive: applying lowercase to an array


In Hive, how can I apply the lower() UDF to an array of strings? Or, more generally, any UDF? I can't figure out how to apply a "map" in a SELECT query.

Best answer

If your use case is to transform an array on its own (rather than as part of a table), a combination of explode, lower, and collect_list should do the trick. For example (please forgive the terrible execution times, I am running on an underpowered VM):

hive> SELECT collect_list(lower(val))
> FROM (SELECT explode(array('AN', 'EXAMPLE', 'ARRAY')) AS val) t;
...
... Lots of MapReduce spam
...
MapReduce Total cumulative CPU time: 4 seconds 10 msec
Ended Job = job_1422453239049_0017
MapReduce Jobs Launched:
Job 0: Map: 1 Reduce: 1 Cumulative CPU: 4.01 sec HDFS Read: 283 HDFS Write: 17 SUCCESS
Total MapReduce CPU Time Spent: 4 seconds 10 msec
OK
["an","example","array"]
Time taken: 33.05 seconds, Fetched: 1 row(s)

(Note: replace array('AN', 'EXAMPLE', 'ARRAY') in the query above with whatever expression you are using to generate your array.)
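
For instance, if your array happened to come from Hive's built-in split() function (used here purely as a hypothetical stand-in for whatever expression produces your array), the same pattern might look like this:

-- split() is just one example source of an array<string>;
-- substitute your own array-producing expression.
SELECT collect_list(lower(val))
FROM (SELECT explode(split('AN,EXAMPLE,ARRAY', ',')) AS val) t;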

If your use case is instead that your array is stored in a column of a Hive table and you need to apply the lowercase transformation to it, then as far as I know you have two main options:

Approach #1: Use explode together with LATERAL VIEW to break the array apart. Transform the individual elements with lower, then glue them back together with collect_list. A quick example with some silly made-up data:

hive> DESCRIBE foo;
OK
id int
data array<string>
Time taken: 0.774 seconds, Fetched: 2 row(s)
hive> SELECT * FROM foo;
OK
1001 ["ONE","TWO","THREE"]
1002 ["FOUR","FIVE","SIX","SEVEN"]
Time taken: 0.434 seconds, Fetched: 2 row(s)

hive> SELECT
> id, collect_list(lower(exploded))
> FROM
> foo LATERAL VIEW explode(data) exploded_table AS exploded
> GROUP BY id;
...
... Lots of MapReduce spam
...
MapReduce Total cumulative CPU time: 3 seconds 310 msec
Ended Job = job_1422453239049_0014
MapReduce Jobs Launched:
Job 0: Map: 1 Reduce: 1 Cumulative CPU: 3.31 sec HDFS Read: 358 HDFS Write: 44 SUCCESS
Total MapReduce CPU Time Spent: 3 seconds 310 msec
OK
1001 ["one","two","three"]
1002 ["four","five","six","seven"]
Time taken: 34.268 seconds, Fetched: 2 row(s)

Approach #2: Write a simple UDF to apply the transformation. Something like:

package my.package_name;

import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.io.Text;

public class LowerArray extends UDF {
    // Lower-cases every element of an array<string> column and
    // returns the result as a new array.
    public List<Text> evaluate(List<Text> input) {
        if (input == null) {
            return null; // pass NULL arrays through unchanged
        }
        List<Text> output = new ArrayList<Text>();
        for (Text element : input) {
            output.add(new Text(element.toString().toLowerCase()));
        }
        return output;
    }
}

Then call the UDF directly on the data:

hive> ADD JAR my_jar.jar;
Added my_jar.jar to class path
Added resource: my_jar.jar
hive> CREATE TEMPORARY FUNCTION lower_array AS 'my.package_name.LowerArray';
OK
Time taken: 2.803 seconds
hive> SELECT id, lower_array(data) FROM foo;
...
... Lots of MapReduce spam
...
MapReduce Total cumulative CPU time: 2 seconds 760 msec
Ended Job = job_1422453239049_0015
MapReduce Jobs Launched:
Job 0: Map: 1 Cumulative CPU: 2.76 sec HDFS Read: 358 HDFS Write: 44 SUCCESS
Total MapReduce CPU Time Spent: 2 seconds 760 msec
OK
1001 ["one","two","three"]
1002 ["four","five","six","seven"]
Time taken: 27.243 seconds, Fetched: 2 row(s)

There are some trade-offs between the two approaches. In general, #2 will probably be more efficient at runtime than #1, because the GROUP BY clause in #1 forces a reduce phase while the UDF approach does not. On the other hand, #1 does everything in HiveQL and is easier to generalize (you can swap lower for some other kind of string transformation in the query if you need to). With the UDF approach of #2, you will probably have to write a new UDF for every different kind of transformation you want to apply.
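
To illustrate that generalization point, here is a sketch (reusing the made-up foo table from above) that simply swaps lower for other built-in Hive string functions; nothing else in the query needs to change:

-- same shape as Approach #1, only the per-element function differs
SELECT id, collect_list(upper(exploded))
FROM foo LATERAL VIEW explode(data) exploded_table AS exploded
GROUP BY id;

-- built-in functions can also be composed per element
SELECT id, collect_list(trim(lower(exploded)))
FROM foo LATERAL VIEW explode(data) exploded_table AS exploded
GROUP BY id;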

Regarding java - Hive: applying lowercase to an array, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/28194623/
