
python-3.x - Removing trailing whitespace from elements in a list


I have a Spark dataframe where a given column contains some text. I am trying to clean the text and split it on commas, which outputs a new column containing a list of words.

The problem I am running into is that some of the elements in that list contain trailing whitespace that I want to remove.

Code:

# Libraries
# Standard Libraries
from typing import Dict, List, Tuple

# Third Party Libraries
import pyspark
from pyspark.ml.feature import Tokenizer
from pyspark.sql import SparkSession
import pyspark.sql.functions as s_function


def tokenize(sdf, input_col="text", output_col="tokens"):
    # Remove email
    sdf_temp = sdf.withColumn(
        colName=input_col,
        col=s_function.regexp_replace(s_function.col(input_col), "[\w\.-]+@[\w\.-]+\.\w+", ""))
    # Remove digits
    sdf_temp = sdf_temp.withColumn(
        colName=input_col,
        col=s_function.regexp_replace(s_function.col(input_col), "\d", ""))
    # Remove one(1) character that is not a word character except for
    # commas(,), since we still want to split on commas(,)
    sdf_temp = sdf_temp.withColumn(
        colName=input_col,
        col=s_function.regexp_replace(s_function.col(input_col), "[^a-zA-Z0-9,]+", " "))
    # Split the affiliation string based on a comma
    sdf_temp = sdf_temp.withColumn(
        colName=output_col,
        col=s_function.split(sdf_temp[input_col], ", "))

    return sdf_temp


if __name__ == "__main__":
    # Sample data
    a_1 = "Department of Bone and Joint Surgery, Ehime University Graduate"\
          " School of Medicine, Shitsukawa, Toon 791-0295, Ehime, Japan."\
          " shinyama@m.ehime-u.ac.jp."
    a_2 = "Stroke Pharmacogenomics and Genetics, Fundació Docència i Recerca"\
          " Mútua Terrassa, Hospital Mútua de Terrassa, 08221 Terrassa, Spain."
    a_3 = "Neurovascular Research Laboratory, Vall d'Hebron Institute of Research,"\
          " Hospital Vall d'Hebron, 08035 Barcelona, Spain;catycarrerav@gmail.com"\
          " (C.C.). catycarrerav@gmail.com."

    data = [(1, a_1), (2, a_2), (3, a_3)]

    spark = SparkSession\
        .builder\
        .master("local[*]")\
        .appName("My_test")\
        .config("spark.ui.port", "37822")\
        .getOrCreate()
    sc = spark.sparkContext
    sc.setLogLevel("WARN")

    af_data = spark.createDataFrame(data, ["index", "text"])
    sdf_tokens = tokenize(af_data)
    sdf_tokens.select("tokens").show(truncate=False)

Output:

|[Department of Bone and Joint Surgery, Ehime University Graduate School of Medicine, Shitsukawa, Toon , Ehime, Japan ]                                                |
|[Stroke Pharmacogenomics and Genetics, Fundaci Doc ncia i Recerca M tua Terrassa, Hospital M tua de Terrassa, Terrassa, Spain ] |
|[Neurovascular Research Laboratory, Vall d Hebron Institute of Research, Hospital Vall d Hebron, Barcelona, Spain C C ]

Desired output:

|[Department of Bone and Joint Surgery, Ehime University Graduate School of Medicine, Shitsukawa, Toon, Ehime, Japan]                                                |
|[Stroke Pharmacogenomics and Genetics, Fundaci Doc ncia i Recerca M tua Terrassa, Hospital M tua de Terrassa, Terrassa, Spain] |
|[Neurovascular Research Laboratory, Vall d Hebron Institute of Research, Hospital Vall d Hebron, Barcelona, Spain C C]

So, on the

  1. 1st row: 'Toon ' -> 'Toon', 'Japan ' -> 'Japan'
  2. 2nd row: 'Spain ' -> 'Spain'
  3. 3rd row: 'Spain C C ' -> 'Spain C C'

Note

The trailing whitespace does not only appear in the last element of the list; it can appear in any element.

Best Answer

Update

The original solution will not work, because trim only operates on the beginning and end of the whole string, whereas you need it to operate on each token.
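
For reference, here is a minimal sketch of that per-token trimming, assuming Spark 2.4 or later: it applies trim to every element of the tokens array through the transform higher-order SQL function. This is only an illustration of the idea, not @PatrickArtner's exact code:

# Assumes Spark >= 2.4 and the tokens column produced by the original tokenize().
# trim() is applied to each array element rather than to the whole string.
sdf_temp = sdf_temp.withColumn(
    "tokens",
    s_function.expr("transform(tokens, t -> trim(t))"))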

@PatrickArtner's solution works, but another approach is to use RegexTokenizer.

Here is an example of how you could modify your tokenize() function:

from pyspark.ml.feature import RegexTokenizer

def tokenize(sdf, input_col="text", output_col="tokens"):

    # Remove email
    sdf_temp = sdf.withColumn(
        colName=input_col,
        col=s_function.regexp_replace(s_function.col(input_col), "[\w\.-]+@[\w\.-]+\.\w+", ""))
    # Remove digits
    sdf_temp = sdf_temp.withColumn(
        colName=input_col,
        col=s_function.regexp_replace(s_function.col(input_col), "\d", ""))
    # Remove one(1) character that is not a word character except for
    # commas(,), since we still want to split on commas(,)
    sdf_temp = sdf_temp.withColumn(
        colName=input_col,
        col=s_function.regexp_replace(s_function.col(input_col), "[^a-zA-Z0-9,]+", " "))

    # call trim to remove any trailing (or leading) spaces
    sdf_temp = sdf_temp.withColumn(
        colName=input_col,
        col=s_function.trim(sdf_temp[input_col]))

    # use RegexTokenizer to split on commas optionally surrounded by whitespace
    myTokenizer = RegexTokenizer(
        inputCol=input_col,
        outputCol=output_col,
        pattern="( +)?, ?")

    sdf_temp = myTokenizer.transform(sdf_temp)

    return sdf_temp

Essentially, call trim on your string to handle any leading or trailing whitespace, then use RegexTokenizer to split on the pattern "( +)?, ?":

  • ( +)?: matches zero or more spaces (an optional run of one or more spaces)
  • ,: matches exactly one comma
  •  ?: matches an optional space after the comma
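
As a quick local sanity check of that pattern with Python's re module (the group is written as non-capturing here, since re.split would otherwise also return the captured spaces, whereas the Java Pattern.split that RegexTokenizer relies on only returns the pieces between matches):

import re

# Same splitting behaviour as "( +)?, ?", with the group made non-capturing
print(re.split(r"(?: +)?, ?", "Toon , Ehime, Japan"))
# ['Toon', 'Ehime', 'Japan']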

Here is the output of

sdf_tokens.select('tokens', s_function.size('tokens').alias('size')).show(truncate=False)

You can see that the length of each array (the number of tokens) is correct, but all of the tokens are lowercase (because that is what Tokenizer and RegexTokenizer do).

+------------------------------------------------------------------------------------------------------------------------------+----+
|tokens |size|
+------------------------------------------------------------------------------------------------------------------------------+----+
|[department of bone and joint surgery, ehime university graduate school of medicine, shitsukawa, toon, ehime, japan] |6 |
|[stroke pharmacogenomics and genetics, fundaci doc ncia i recerca m tua terrassa, hospital m tua de terrassa, terrassa, spain]|5 |
|[neurovascular research laboratory, vall d hebron institute of research, hospital vall d hebron, barcelona, spain c c] |5 |
+------------------------------------------------------------------------------------------------------------------------------+----+
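
If the lowercasing is unwanted, RegexTokenizer also has a toLowercase parameter that can be set to False. Alternatively, here is a minimal sketch (my own variation, not part of the original answer) that stays entirely in pyspark.sql.functions and preserves case, splitting on commas while letting the regex consume any surrounding whitespace; input_col and output_col are assumed to be the same arguments as in tokenize():

# Case-preserving alternative: trim the whole string, then split on a comma
# together with any whitespace around it.
sdf_temp = sdf_temp.withColumn(
    colName=input_col,
    col=s_function.trim(s_function.col(input_col)))
sdf_temp = sdf_temp.withColumn(
    colName=output_col,
    col=s_function.split(s_function.col(input_col), r"\s*,\s*"))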

Original Answer

As long as you are using Spark 1.5 or later, you can use pyspark.sql.functions.trim(), which will:

Trim the spaces from both ends for the specified string column.

So one approach would be to add:

sdf_temp = sdf_temp.withColumn(
    colName=input_col,
    col=s_function.trim(sdf_temp[input_col]))

to the end of your tokenize() function.

But you may want to look into pyspark.ml.feature.Tokenizer and pyspark.ml.feature.RegexTokenizer. One idea could be to use your function to clean your strings and then use Tokenizer to create the tokens. (I see you have already imported it, but it does not seem to be used.)

Regarding "python-3.x - Removing trailing whitespace from elements in a list", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/50971773/
