apache-spark - How to maintain the order of a word-to-index mapping for token feature arrays in pyspark?

Reposted. Author: 行者123. Updated: 2023-12-04 05:15:46

Here is the pyspark use case I'm working on. I currently have a DataFrame of word tokens and want to build a vocabulary, then replace each word with its index in that vocabulary. Here is my DataFrame:

>>> wordDataFrame.show(10, False)
+---+-------------------------------------------------+
|id |words |
+---+-------------------------------------------------+
|0 |[hi, i, heard, about, spark] |
|1 |[i, wish, java, could, use, case, spark, classes]|
+---+-------------------------------------------------+

When I use CountVectorizer:

from pyspark.ml.feature import CountVectorizer

cv = CountVectorizer(binary=True)\
    .setInputCol("words")\
    .setOutputCol("features")\
    .setMinTF(1)\
    .setMinDF(1)
fittedCV = cv.fit(wordDataFrame)
fittedCV.transform(wordDataFrame).show(2, False)
+---+-------------------------------------------------+---------------------------------------------------------+
|id |words |features |
+---+-------------------------------------------------+---------------------------------------------------------+
|0 |[hi, i, heard, about, spark] |(11,[0,1,6,8,9],[1.0,1.0,1.0,1.0,1.0]) |
|1 |[i, wish, java, could, use, case, spark, classes]|(11,[0,1,2,3,4,5,7,10],[1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0])|
+---+-------------------------------------------------+---------------------------------------------------------+

Next, here is what my vocabulary looks like:
>>> from pprint import pprint
>>> pprint(dict([(i, x) for i,x in enumerate(fittedCV.vocabulary)]))
{0: 'i',
1: 'spark',
2: 'wish',
3: 'use',
4: 'case',
5: 'java',
6: 'hi',
7: 'could',
8: 'about',
9: 'heard',
10: 'classes'}

What I'm looking for is this:

[hi, i, heard, about, spark] -> [6, 0, 9, 8, 1] instead of [0, 1, 6, 8, 9]

Basically, I want to keep the order of the tokens. I tried looking through the docs, but it seems all the vectorizers lose positional information. In my case I need to preserve position, because these features feed into an LSTM layer further downstream.
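The difference between what CountVectorizer produces and what is wanted can be sketched in plain Python (no Spark required). The vocabulary list below mirrors the `fittedCV.vocabulary` output shown above; in Spark the per-token lookup would have to run inside something like a UDF.

```python
# Plain-Python sketch of bag-of-words indices vs. an ordered index sequence.
# The vocabulary mirrors fittedCV.vocabulary from the example above.
vocab = ['i', 'spark', 'wish', 'use', 'case', 'java',
         'hi', 'could', 'about', 'heard', 'classes']
word_to_index = {w: i for i, w in enumerate(vocab)}

tokens = ['hi', 'i', 'heard', 'about', 'spark']

# CountVectorizer keeps only the sorted set of indices that occur:
bag_of_words = sorted(word_to_index[t] for t in tokens)
print(bag_of_words)  # [0, 1, 6, 8, 9] -- token order lost

# The desired output looks each token up in order instead:
sequence = [word_to_index[t] for t in tokens]
print(sequence)      # [6, 0, 9, 8, 1] -- one index per token, order kept
```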

Best Answer

I recently had a use case similar to yours. I ended up using StringIndexer:

l = [
    (0, ["hi", "i", "heard", "about", "spark"]),
    (1, ["i", "wish", "java", "could", "use", "case", "spark", "classes"])
]
wordDataFrame = spark.createDataFrame(l, ['id', 'words'])
wordDataFrame.show()

+---+--------------------+
| id| words|
+---+--------------------+
| 0|[hi, i, heard, ab...|
| 1|[i, wish, java, c...|
+---+--------------------+

import pyspark.sql.functions as F
from pyspark.ml.feature import StringIndexer

# define indexer
indexer = StringIndexer(inputCol="word_strings", outputCol="word_index")

# use explode to map col<array<string>> => col<string>
# fit indexer on col<string>
indexer = indexer.fit(
    wordDataFrame
    .select(F.explode(F.col("words")).alias("word_strings"))
)

print(indexer.labels)

['i', 'spark', 'heard', 'classes', 'java', 'could', 'use', 'hi', 'case', 'about', 'wish']

indexedWordDataFrame = (
    indexer
    .transform(
        # use explode to map col<array<string>> => col<string>
        # use indexer to transform col<string> => col<double>
        wordDataFrame
        .withColumn("word_strings", F.explode(F.col("words")))
    )
    # use groupby + collect_list to map col<double> => col<array<double>>
    .groupby("id", "words")
    .agg(F.collect_list("word_index").alias("word_index_array"))
)

indexedWordDataFrame.orderBy("id").show()


+---+--------------------+--------------------+
| id| words| word_index_array|
+---+--------------------+--------------------+
| 0|[hi, i, heard, ab...|[7.0, 0.0, 2.0, 9...|
| 1|[i, wish, java, c...|[0.0, 10.0, 4.0, ...|
+---+--------------------+--------------------+
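As a sanity check, the same mapping can be reproduced in plain Python from `indexer.labels`: a label's position in that list is the (double) index StringIndexer assigns it, which matches the `word_index_array` rows above.

```python
# Plain-Python check of the StringIndexer mapping: a label's position in
# indexer.labels is the index it is assigned. The list below is the
# labels output printed above.
labels = ['i', 'spark', 'heard', 'classes', 'java', 'could',
          'use', 'hi', 'case', 'about', 'wish']
label_to_index = {w: float(i) for i, w in enumerate(labels)}

row0 = ['hi', 'i', 'heard', 'about', 'spark']
row0_indices = [label_to_index[w] for w in row0]
print(row0_indices)  # [7.0, 0.0, 2.0, 9.0, 1.0]
```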

HTH

Regarding "apache-spark - How to maintain the order of a word-to-index mapping for token feature arrays in pyspark?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/51756720/
