
python - How many copies of the environment does Spark make?


I have a PySpark application that has to process about 5 GB of compressed data (strings). I am using a small server with 12 cores (24 threads) and 72 GB of RAM. My PySpark program consists of just 2 map operations, helped by 3 very large regular expressions (3 GB each once compiled) that are loaded with pickle. Spark runs in standalone mode, with the worker and the master on the same machine.

My question is: does Spark replicate each variable for each executor core? It uses all of the available memory and then a large amount of swap space. Or does it load all of the partitions into RAM? The RDD contains about 10 million strings that have to be searched by the 3 regexes, and it has about 1000 partitions. I am struggling to finish this job because after a few minutes the memory fills up and Spark starts using swap space, becoming very, very slow. I noticed that the situation is the same without the regexes.

This is my code. It strips all the useless fields from Twitter tweets and scans the tweets' text and description for specific words:

import json
import re
import twitter_util as twu
import pickle

from pyspark import SparkContext
sc = SparkContext()

prefix = '/home/lucadiliello'

source = prefix + '/data/tweets'
dest = prefix + '/data/complete_tweets'

#Regex's path
companies_names_regex = prefix + '/data/comp_names_regex'
companies_names_dict = prefix + '/data/comp_names_dict'
companies_names_dict_to_legal = prefix + '/data/comp_names_dict_to_legal'

#Loading the regex's (opened in binary mode, as pickle expects)
comp_regex = pickle.load(open(companies_names_regex, 'rb'))
comp_dict = pickle.load(open(companies_names_dict, 'rb'))
comp_dict_legal = pickle.load(open(companies_names_dict_to_legal, 'rb'))

#Loading the RDD from textfile
tx = sc.textFile(source).map(lambda a: json.loads(a))


def get_device(input_text):
    output_text = re.sub('<[^>]*>', '', input_text)
    return output_text

def filter_data(a):
    res = {}
    try:
        res['mentions'] = a['entities']['user_mentions']
        res['hashtags'] = a['entities']['hashtags']
        res['created_at'] = a['created_at']
        res['id'] = a['id']

        res['lang'] = a['lang']
        if 'place' in a and a['place'] is not None:
            res['place'] = {}
            res['place']['country_code'] = a['place']['country_code']
            res['place']['place_type'] = a['place']['place_type']
            res['place']['name'] = a['place']['name']
            res['place']['full_name'] = a['place']['full_name']

        res['source'] = get_device(a['source'])
        res['text'] = a['text']
        res['timestamp_ms'] = a['timestamp_ms']

        res['user'] = {}
        res['user']['created_at'] = a['user']['created_at']
        res['user']['description'] = a['user']['description']
        res['user']['followers_count'] = a['user']['followers_count']
        res['user']['friends_count'] = a['user']['friends_count']
        res['user']['screen_name'] = a['user']['screen_name']
        res['user']['lang'] = a['user']['lang']
        res['user']['name'] = a['user']['name']
        res['user']['location'] = a['user']['location']
        res['user']['statuses_count'] = a['user']['statuses_count']
        res['user']['verified'] = a['user']['verified']
        res['user']['url'] = a['user']['url']
    except KeyError:
        return []

    return [res]


results = tx.flatMap(filter_data)


def setting_tweet(tweet):

    text = tweet['text'] if tweet['text'] is not None else ''
    descr = tweet['user']['description'] if tweet['user']['description'] is not None else ''
    del tweet['text']
    del tweet['user']['description']

    tweet['text'] = {}
    tweet['user']['description'] = {}
    del tweet['mentions']

    # tweet
    tweet['text']['original_text'] = text
    tweet['text']['mentions'] = twu.find_retweet(text)
    tweet['text']['links'] = []
    for j in twu.find_links(text):
        tmp = {}
        try:
            tmp['host'] = twu.get_host(j)
            tmp['link'] = j
            tweet['text']['links'].append(tmp)
        except ValueError:
            pass

    tweet['text']['companies'] = []
    for x in comp_regex.findall(text.lower()):
        tmp = {}
        tmp['id'] = comp_dict[x.lower()]
        tmp['name'] = x
        tmp['legalName'] = comp_dict_legal[x.lower()]
        tweet['text']['companies'].append(tmp)

    # descr
    tweet['user']['description']['original_text'] = descr
    tweet['user']['description']['mentions'] = twu.find_retweet(descr)
    tweet['user']['description']['links'] = []
    for j in twu.find_links(descr):
        tmp = {}
        try:
            tmp['host'] = twu.get_host(j)
            tmp['link'] = j
            tweet['user']['description']['links'].append(tmp)
        except ValueError:
            pass

    tweet['user']['description']['companies'] = []
    for x in comp_regex.findall(descr.lower()):
        tmp = {}
        tmp['id'] = comp_dict[x.lower()]
        tmp['name'] = x
        tmp['legalName'] = comp_dict_legal[x.lower()]
        tweet['user']['description']['companies'].append(tmp)

    return tweet


res = results.map(setting_tweet)

res.map(lambda a: json.dumps(a)).saveAsTextFile(dest, compressionCodecClass="org.apache.hadoop.io.compress.BZip2Codec")

UPDATE: After about 1 hour, the memory (72 GB) and the swap space (72 GB) are completely full. In my case, using broadcast is not a solution.

UPDATE 2: Without loading the 3 variables with pickle, the job finishes without any problem using at most 10 GB of RAM, instead of 144 GB (72 GB RAM + 72 GB swap)!

Best answer

My question is: does spark replicate each variable for each executor core?

Yes!

The number of copies of each (local) variable is equal to the number of threads you assign to the Python workers.
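
To make that per-thread copying concrete, here is a minimal sketch of the usual PySpark pattern for shipping a single read-only copy of a large object to each executor via sc.broadcast, instead of capturing it in every task closure. The asker notes above that broadcasting did not solve the memory problem in their particular case; this snippet only illustrates the API, and the lookup dictionary in it is a made-up stand-in.

from pyspark import SparkContext

sc = SparkContext()

big_lookup = {'acme': 42}              # stand-in for a large dictionary
bc_lookup = sc.broadcast(big_lookup)   # shipped to each executor once

def tag(word):
    # read the shared object inside the task through .value
    return bc_lookup.value.get(word)

print(sc.parallelize(['acme', 'foo']).map(tag).collect())   # [42, None]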


As for your problem, try loading comp_regex, comp_dict and comp_dict_legal without using pickle.
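
One possible way to follow this advice, as a sketch only: keep the raw pattern and the name mapping in plain text / JSON files and rebuild them inside mapPartitions, so each partition compiles the pattern on the worker instead of receiving a pickled multi-gigabyte object in the task closure. The file names and formats below (comp_names_regex.txt holding the pattern string, comp_names_dict.json holding the id mapping) are hypothetical stand-ins, not the asker's actual data layout.

import json
import re

prefix = '/home/lucadiliello'

def tag_partition(texts):
    # compile the pattern and load the mapping once per partition, on the worker
    with open(prefix + '/data/comp_names_regex.txt') as f:   # hypothetical plain-text pattern
        comp_regex = re.compile(f.read().strip())
    with open(prefix + '/data/comp_names_dict.json') as f:   # hypothetical JSON mapping
        comp_dict = json.load(f)
    for text in texts:
        yield [comp_dict.get(m.lower()) for m in comp_regex.findall(text.lower())]

# usage on an RDD of tweet texts, for example:
# company_ids = results.map(lambda t: t['text']).mapPartitions(tag_partition)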

Regarding python - How many copies of the environment does Spark make?, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/43955326/
