scala - Spark RDD not fetching all source fields from Elasticsearch

Reposted — Author: 行者123, updated 2023-12-02 23:31:39

I have the following data in Elasticsearch (a local single-node server).

Search command: curl -XPOST 'localhost:9200/sparkdemo/_search?pretty' -d '{ "query": { "match_all": {} } }'
Output:

{
  "took" : 4,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 10,
    "max_score" : 1.0,
    "hits" : [ {
      "_index" : "sparkdemo",
      "_type" : "hrinfo",
      "_id" : "AVNAY_H0lYe0cQl--Bin",
      "_score" : 1.0,
      "_source" : {
        "date" : "9/Mar/2016",
        "pid" : "1",
        "propName" : "HEARTRATE",
        "var" : null,
        "propValue" : 86,
        "avg" : 86,
        "stage" : "S1"
      }
    }, {
      "_index" : "sparkdemo",
      "_type" : "hrinfo",
      "_id" : "AVNAY_KklYe0cQl--Bir",
      "_score" : 1.0,
      "_source" : {
        "date" : "13/Mar/2016",
        "pid" : "1",
        "propName" : "HEARTRATE",
        "var" : null,
        "propValue" : 86,
        "avg" : 87,
        "stage" : "S1"
      }
    }, {
      "_index" : "sparkdemo",
      "_type" : "hrinfo",
      "_id" : "AVNAY-TolYe0cQl--Bii",
      "_score" : 1.0,
      "_source" : {
        "date" : "4/Mar/2016",
        "pid" : "1",
        "propName" : "HEARTRATE",
        "var" : null,
        "propValue" : 82,
        "avg" : 82,
        "stage" : "S0"
      }
    },
    ... a few more records ...
    }, {
      "_index" : "sparkdemo",
      "_type" : "hrinfo",
      "_id" : "AVNAY_KklYe0cQl--Biq",
      "_score" : 1.0,
      "_source" : {
        "date" : "12/Mar/2016",
        "pid" : "1",
        "propName" : "HEARTRATE",
        "var" : null,
        "propValue" : 91,
        "avg" : 89,
        "stage" : "S1"
      }
    } ]
  }
}

I am trying to fetch all of this data in a Spark program (a local standalone program run from Eclipse).
import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import org.elasticsearch.spark._

object Test1 {

  def main(args: Array[String]) {
    val conf = new SparkConf().setMaster("local[2]").setAppName("HRInfo")
    val sc = new SparkContext(conf)

    val esRdd = sc.esRDD("sparkdemo/hrinfo", "?q=*")

    val searchResultRDD = esRdd.map(t => {
      println("id:" + t._1 + ", map:" + t._2)
      t._2
    })

    searchResultRDD.collect().foreach(map => {
      val stage = map.get("stage")
      val pid = map.get("pid")
      val date = map.get("date")
      val propName = map.get("propName")
      val propValue = map.get("propValue")
      val avg = map.get("avg")
      val variation = map.get("var")

      println("Info(" + stage + "," + pid + "," + date + "," + propName + "," + propValue + "," + avg + "," + variation + ")")
    })
  }
}
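The `Some(...)`/`None` mix in the output below follows directly from `Map#get` returning an `Option`: any field that did not make it into the document map comes back as `None`. A minimal plain-Scala sketch (no Spark or Elasticsearch required, using a hypothetical document map shaped like the records above) of that behavior:

```scala
// Plain-Scala sketch of why the output shows Some(...) for fields that made
// it into the map and None for fields that did not: Map#get returns an
// Option, so a key dropped during reading simply yields None.
object OptionFieldsDemo {
  def main(args: Array[String]): Unit = {
    // Simulates one document as returned by esRDD after the trailing
    // fields were dropped (hypothetical data mirroring the question).
    val doc: Map[String, Any] = Map(
      "date" -> "9/Mar/2016",
      "pid" -> "1",
      "propName" -> "HEARTRATE"
    )

    println(doc.get("pid"))   // present key -> Some("1")
    println(doc.get("stage")) // missing key -> None

    // getOrElse supplies a defensive default for dropped fields.
    println(doc.getOrElse("stage", "UNKNOWN")) // UNKNOWN
  }
}
```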

But the program does not fetch all the fields of the records stored in Elasticsearch.

Program output:
id:AVNAY_H0lYe0cQl--Bin, map:Map(date -> 9/Mar/2016, pid -> 1, propName -> HEARTRATE)
id:AVNAY_KklYe0cQl--Bir, map:Map(date -> 13/Mar/2016, pid -> 1, propName -> HEARTRATE)
id:AVNAY-TolYe0cQl--Bii, map:Map(date -> 4/Mar/2016, pid -> 1, propName -> HEARTRATE)
id:AVNAY_H0lYe0cQl--Bio, map:Map(date -> 10/Mar/2016, pid -> 1, propName -> HEARTRATE)
id:AVNAY_KklYe0cQl--Bip, map:Map(date -> 11/Mar/2016, pid -> 1, propName -> HEARTRATE)
id:AVNAY-TolYe0cQl--Bij, map:Map(date -> 5/Mar/2016, pid -> 1, propName -> HEARTRATE)
id:AVNAY-Y9lYe0cQl--Bil, map:Map(date -> 7/Mar/2016, pid -> 1, propName -> HEARTRATE)
id:AVNAY-Y9lYe0cQl--Bim, map:Map(date -> 8/Mar/2016, pid -> 1, propName -> HEARTRATE)
id:AVNAY-Y9lYe0cQl--Bik, map:Map(date -> 6/Mar/2016, pid -> 1, propName -> HEARTRATE)
id:AVNAY_KklYe0cQl--Biq, map:Map(date -> 12/Mar/2016, pid -> 1, propName -> HEARTRATE)
Info(None,Some(1),Some(9/Mar/2016),Some(HEARTRATE),None,None,None)
Info(None,Some(1),Some(13/Mar/2016),Some(HEARTRATE),None,None,None)
Info(None,Some(1),Some(4/Mar/2016),Some(HEARTRATE),None,None,None)
Info(None,Some(1),Some(10/Mar/2016),Some(HEARTRATE),None,None,None)
Info(None,Some(1),Some(11/Mar/2016),Some(HEARTRATE),None,None,None)
Info(None,Some(1),Some(5/Mar/2016),Some(HEARTRATE),None,None,None)
Info(None,Some(1),Some(7/Mar/2016),Some(HEARTRATE),None,None,None)
Info(None,Some(1),Some(8/Mar/2016),Some(HEARTRATE),None,None,None)
Info(None,Some(1),Some(6/Mar/2016),Some(HEARTRATE),None,None,None)
Info(None,Some(1),Some(12/Mar/2016),Some(HEARTRATE),None,None,None)

The program fetches all the records, but within each record it does not fetch the remaining fields (i.e. stage, propValue, avg, and var). Why?
Thanks a lot.

Best Answer

This happens because of the "var": null value in the documents. In each document, "var": null and every field after it fail to make it into the Scala map.

You can demonstrate this by replacing one of the "var": null values with a non-null value (e.g. "var": "test"); all values for that document are then returned correctly, as expected. Alternatively, you can put a null value at the very beginning of a document, e.g.

curl -X POST 'http://localhost:9200/sparkdemo/hrinfo/5' -d '{"test":null,"date": "9/Mar/2016","pid": "1","propName": "HEARTRATE","propValue": 86,"avg": 86,"stage": "S1"}'

and the map for that document will be empty:
id:5, map:Map()
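A further workaround on the write side is to strip null-valued fields from each document before indexing, so the reader never encounters a null that truncates the map. A hedged plain-Scala sketch (the document below mirrors the first record in the question; no Spark required to run it):

```scala
// Sketch: drop null-valued fields from a document map before indexing,
// so a null never truncates the fields read back later.
object StripNulls {
  def main(args: Array[String]): Unit = {
    // Hypothetical document mirroring the first record in the question.
    val doc: Map[String, Any] = Map(
      "date" -> "9/Mar/2016",
      "pid" -> "1",
      "propName" -> "HEARTRATE",
      "var" -> null,
      "propValue" -> 86,
      "avg" -> 86,
      "stage" -> "S1"
    )

    // Keep only (key, value) pairs with a non-null value; "var" is dropped.
    val cleaned = doc.filter { case (_, v) => v != null }
    println(cleaned.keySet)
  }
}
```

Such a cleaned map could then be written back with the connector's `saveToEs` (e.g. `sc.makeRDD(Seq(cleaned)).saveToEs("sparkdemo/hrinfo")`), though the exact behavior depends on the elasticsearch-spark version in use.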

Regarding "scala - Spark RDD not fetching all source fields from Elasticsearch", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/35790168/
