
java - How to convert a Dataset of array type to string type in Apache Spark Java


I have an array-type column in my dataset that I need to convert to string type. I have tried the conventional way, but I feel this can be done in a better way. Can you guide me?

Input Dataset 1

    +-------------------+-----------+-------------------------------------------------------------------------------------------------+
    |ManufacturerSource |upcSource  |productDescriptionSource                                                                         |
    +-------------------+-----------+-------------------------------------------------------------------------------------------------+
    |3M                 |51115665883|[c, gdg, whl, t27, 5, x, 1, 4, x, 7, 8, grindig, flap, wheels, 36, grit, 12, 250, rpm]           |
    |3M                 |51115665937|[c, gdg, whl, t27, q, c, 6, x, 1, 4, x, 5, 8, 11, grinding, flap, wheels, 36, grit, 10, 200, rpm]|
    |3M                 |0          |[3mite, rb, cloth, 3, x, 2, wd]                                                                  |
    |3M                 |0          |[trizact, disc, cloth, 237aaa16x5, hole]                                                         |
    +-------------------+-----------+-------------------------------------------------------------------------------------------------+
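
For reference, here is a minimal sketch (not from the original question) of how a dataset with this shape can be built: an ArrayType(StringType) description column alongside two string columns. It assumes an existing SparkSession named spark, and the sample rows are abbreviated from the table above:

    import java.util.Arrays;
    import java.util.List;

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.RowFactory;
    import org.apache.spark.sql.types.DataTypes;
    import org.apache.spark.sql.types.StructType;

    // Schema with an array-of-strings column, mirroring the input above.
    StructType schema = new StructType()
            .add("ManufacturerSource", DataTypes.StringType)
            .add("upcSource", DataTypes.StringType)
            .add("productDescriptionSource",
                    DataTypes.createArrayType(DataTypes.StringType));

    // Abbreviated sample rows; 'spark' is an assumed, pre-built SparkSession.
    List<Row> rows = Arrays.asList(
            RowFactory.create("3M", "51115665883",
                    Arrays.asList("c", "gdg", "whl", "t27")),
            RowFactory.create("3M", "0",
                    Arrays.asList("3mite", "rb", "cloth", "3", "x", "2", "wd")));

    Dataset<Row> source = spark.createDataFrame(rows, schema);
    source.printSchema();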

Expected output dataset

    +-------------------+-----------+--------------------------------------------------------------------------+
    |ManufacturerSource |upcSource  |productDescriptionSource                                                  |
    +-------------------+-----------+--------------------------------------------------------------------------+
    |3M                 |51115665883|c gdg whl t27 5 x 1 4 x 7 8 grinding flap wheels 36 grit 12 250 rpm       |
    |3M                 |51115665937|c gdg whl t27 q c 6 x 1 4 x 5 8 11 grinding flap wheels 36 grit 10 200 rpm|
    |3M                 |0          |3mite rb cloth 3 x 2 wd                                                   |
    |3M                 |0          |trizact disc cloth 237aaa16x5 hole                                        |
    +-------------------+-----------+--------------------------------------------------------------------------+

Conventional approach 1

    Dataset<Row> afterstopwordsRemoved =
            stopwordsRemoved.select("productDescriptionSource");
    stopwordsRemoved.show();

    // Collect every row to the driver, then join the tokens of each row.
    List<Row> individualRows = afterstopwordsRemoved.collectAsList();

    System.out.println("After flatmap\n");
    for (Row individualRow : individualRows) {
        List<String> temp = individualRow.getList(0);
        System.out.println(String.join(" ", temp));
    }
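
As a side note (not part of the original post), the same space-join can be done on the executors with Dataset.map instead of collecting every row to the driver; a sketch, assuming productDescriptionSource is the only selected column:

    import org.apache.spark.api.java.function.MapFunction;
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Encoders;
    import org.apache.spark.sql.Row;

    // Join the tokens of each row on the executors; only what show() prints
    // reaches the driver.
    Dataset<String> joined = afterstopwordsRemoved.map(
            (MapFunction<Row, String>) row -> String.join(" ", row.<String>getList(0)),
            Encoders.STRING());
    joined.show(false);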

Approach 2 (does not produce a result)

Exception: Failed to execute user defined function ($anonfun$27: (array) => string)

    UDF1 untoken = new UDF1<String, String[]>() {
        public String call(String[] token) throws Exception {
            // return types.replaceAll("[^a-zA-Z0-9\\s+]", "");
            return Arrays.toString(token);
        }

        @Override
        public String[] call(String t1) throws Exception {
            // TODO Auto-generated method stub
            return null;
        }
    };

    sqlContext.udf().register("unTokenize", untoken, DataTypes.StringType);

    source.createOrReplaceTempView("DataSetOfTokenize");
    Dataset<Row> newDF = sqlContext.sql("select *, unTokenize(productDescriptionSource) FROM DataSetOfTokenize");
    newDF.show(4000, false);
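
For context (an observation, not from the original post): in Spark's Java API an ArrayType column usually reaches a UDF1 as a Scala Seq (a WrappedArray) rather than a String[], and the UDF1 type parameters are input first, result second, which is why the call above fails. A hedged sketch of a UDF that accepts the array column, assuming Spark 2.x and reusing the question's names:

    import org.apache.spark.sql.api.java.UDF1;
    import org.apache.spark.sql.types.DataTypes;

    import scala.collection.JavaConverters;
    import scala.collection.Seq;

    // The array column arrives as a Scala Seq<String>; convert it to a
    // java.util.List and join the tokens with spaces.
    UDF1<Seq<String>, String> unTokenize = tokens ->
            String.join(" ", JavaConverters.seqAsJavaListConverter(tokens).asJava());

    sqlContext.udf().register("unTokenize", unTokenize, DataTypes.StringType);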

Best Answer

I would use concat_ws:

sqlContext.sql("select *, concat_ws(' ', productDescriptionSource) FROM DataSetOfTokenize");

Or:

import static org.apache.spark.sql.functions.*;

df.withColumn("foo", concat_ws(" ", col("productDescriptionSource")));
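
For completeness, a minimal sketch that overwrites the array column in place (reusing the question's source dataset and column name), so the result matches the expected output table:

    import static org.apache.spark.sql.functions.col;
    import static org.apache.spark.sql.functions.concat_ws;

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;

    // Replace the array-of-tokens column with its space-joined string form.
    Dataset<Row> result = source.withColumn(
            "productDescriptionSource",
            concat_ws(" ", col("productDescriptionSource")));
    result.show(false);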

Regarding "java - How to convert a Dataset of array type to string type in Apache Spark Java", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/45717135/
