
hadoop - Wordcount NoneType error pyspark


I am trying to do some text analysis:

import re
import string

from pyspark.sql.functions import udf, explode, split
from pyspark.sql.types import StringType

def cleaning_text(sentence):
    sentence = sentence.lower()
    sentence = re.sub('\'', '', sentence.strip())
    sentence = re.sub('^\d+\/\d+|\s\d+\/\d+|\d+\-\d+\-\d+|\d+\-\w+\-\d+\s\d+\:\d+|\d+\-\w+\-\d+|\d+\/\d+\/\d+\s\d+\:\d+', ' ', sentence.strip())  # dates removed
    sentence = re.sub(r'(.)(\/)(.)', r'\1\3', sentence.strip())
    sentence = re.sub("(.*?\//)|(.*?\\\\)|(.*?\\\)|(.*?\/)", ' ', sentence.strip())
    sentence = re.sub('^\d+', '', sentence.strip())
    sentence = re.sub('[%s]' % re.escape(string.punctuation), '', sentence.strip())
    cleaned = ' '.join([w for w in sentence.split() if not len(w) < 2 and w not in ('no', 'sc', 'ln')])
    cleaned = cleaned.strip()
    if len(cleaned) <= 1:
        return "NA"
    else:
        return cleaned

org_val = udf(cleaning_text, StringType())
df_new = df.withColumn("cleaned_short_desc", org_val(df["symptom_short_description_"]))
df_new = df_new.withColumn("cleaned_long_desc", org_val(df_new["long_description"]))
longWordsDF = df_new.select(explode(split('cleaned_long_desc', ' ')).alias('word'))
longWordsDF.count()

I am getting the following error:

File "<stdin>", line 2, in cleaning_text
AttributeError: 'NoneType' object has no attribute 'lower'

I want to perform a word count, but any kind of aggregation function gives me an error.

I tried the following:

sentence=sentence.encode("ascii", "ignore")

I added this statement inside the cleaning_text function. I also tried:

df.dropna()

But I still get the same problem and don't know how to fix it.

Best Answer

It looks like you have null values in some of the columns. Add an if check at the beginning of the cleaning_text function and the error will go away:

if sentence is None:
    return "NA"

Regarding "hadoop - Wordcount NoneType error pyspark", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/46183491/
