Multithreading within a recordset in Python


I have a database recordset (roughly 1,000 rows) that I'm currently iterating over, running an extra database query for each record to enrich it with more data.

Doing that pushes the total processing time up to about 100 seconds.

What I'd like to do is share the work across 2-4 processes.

I'm using Python 2.7 for AWS Lambda compatibility.

def handler(event, context):
    try:
        records = connection.get_users()
        mandrill_client = open_mandrill_connection()
        mandrill_messages = get_mandrill_messages()
        mandrill_template = 'POINTS weekly-report-to-user'

        start_time = time.time()

        messages = build_messages(mandrill_messages, records)

        print("OVERALL: %s seconds ---" % (time.time() - start_time))

        send_mandrill_message(mandrill_client, mandrill_template, messages)

        connection.close_database_connection()

        return "Process Completed"

    except Exception as e:
        print(e)

Here is the function I want to put into threads:

def build_messages(messages, records):
    for record in records:
        record = dict(record)

        stream = get_user_stream(record)
        data = compile_loyalty_stream(stream)

        messages['to'].append({
            'email': record['email'],
            'type': 'to'
        })

        messages['merge_vars'].append({
            'rcpt': record['email'],
            'vars': [
                {
                    'name': 'total_points',
                    'content': record['total_points']
                },
                {
                    'name': 'total_week',
                    'content': record['week_points']
                },
                {
                    'name': 'stream_greek',
                    'content': data['el']
                },
                {
                    'name': 'stream_english',
                    'content': data['en']
                }
            ]
        })

    return messages

I have tried importing the multiprocessing library:

from multiprocessing.pool import ThreadPool

created a pool inside the try block and mapped the function onto that pool:

pool = ThreadPool(4)

messages = pool.map(build_messages_in,
                    itertools.izip(itertools.repeat(mandrill_messages), records))


def build_messages_in(a_b):
    build_msg(*a_b)


def build_msg(a, b):
    return build_messages(a, b)

def get_user_stream(record):
    response = []
    i = 0

    for mod, mod_id, act, p, act_created in izip(record['models'], record['model_ids'],
                                                 record['actions'], record['points'],
                                                 record['action_creation']):

        information = get_reference(mod, mod_id)

        if information:
            response.append({
                'action': act,
                'points': p,
                'created': act_created,
                'info': information
            })

            if (act == 'invite_friend') \
                    or (act == 'donate') \
                    or (act == 'bonus_500_general') \
                    or (act == 'bonus_1000_general') \
                    or (act == 'bonus_500_cancel') \
                    or (act == 'bonus_1000_cancel'):

                response[i]['info']['date_ref'] = act_created
                response[i]['info']['slug'] = 'attiki'

            if (act == 'bonus_500_general') \
                    or (act == 'bonus_1000_general') \
                    or (act == 'bonus_500_cancel') \
                    or (act == 'bonus_1000_cancel'):

                response[i]['info']['title'] = ''

            i += 1

    return response

Finally, I removed the for loop from the build_messages function.

What I get is: 'NoneType' object is not iterable.

Is this the correct way to do it?

Best Answer

Your code looks fairly deep, so you can't be sure that multithreading will bring any performance improvement when applied at a high level. It is therefore worth digging down to the point that gives you the largest latency and considering how to approach that specific bottleneck. See here for more discussion on the limitations of threading.
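One quick way to find that point is to accumulate the time spent in each step of your loop. This is only a rough sketch; it reuses the get_user_stream and compile_loyalty_stream helpers from your question, so those names are assumed from your code rather than defined here:

import time

def build_messages_timed(messages, records):
    # Same loop as build_messages, but accumulates per-step timings so you can
    # see whether the per-record DB query or the stream compilation dominates.
    stream_secs = 0.0
    compile_secs = 0.0

    for record in records:
        record = dict(record)

        t0 = time.time()
        stream = get_user_stream(record)      # the extra DB query per record
        stream_secs += time.time() - t0

        t0 = time.time()
        data = compile_loyalty_stream(stream)
        compile_secs += time.time() - t0

        # ... append to messages['to'] and messages['merge_vars'] as before ...

    print("get_user_stream: %.1fs, compile_loyalty_stream: %.1fs" % (stream_secs, compile_secs))
    return messages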

For example, as we discussed in the comments, if you can pinpoint a single long-running task, then you could try to parallelise it using multiprocessing instead, to make better use of your CPU power. Here is a generic example that is hopefully simple enough to understand and mirrors your Postgres queries without going into your own code base, which I think would be an infeasible amount of effort.

import multiprocessing as mp
import time
import random
import datetime as dt

MAILCHIMP_RESPONSE = [x for x in range(1000)]

def chunks(l, n):
    n = max(1, n)
    return [l[i:i + n] for i in range(0, len(l), n)]


def db_query():
    ''' Delayed response from database '''
    time.sleep(0.01)
    return random.random()


def do_queries(query_list):
    ''' The function that takes all your query ids and executes them
    sequentially for each id '''
    results = []
    for item in query_list:
        query = db_query()
        # Your super-quick processing of the Postgres response
        processing_result = query * 2
        results.append([item, processing_result])
    return results


def single_processing():
    ''' As you do now - equivalent to get_reference '''
    result_of_process = do_queries(MAILCHIMP_RESPONSE)
    return result_of_process


def multi_process(chunked_data, queue):
    ''' Same as single_processing, except we put our results in queue rather
    than returning them '''
    result_of_process = do_queries(chunked_data)
    queue.put(result_of_process)


def multiprocess_handler():
    ''' Divide and conquer on our db requests. We split the mailchimp response
    into a series of chunks and fire our queries simultaneously. Thus, each
    concurrent process has a smaller number of queries to make '''

    num_processes = 4  # depending on cores/resources
    size_chunk = len(MAILCHIMP_RESPONSE) / num_processes
    chunked_queries = chunks(MAILCHIMP_RESPONSE, size_chunk)

    queue = mp.Queue()  # This is going to combine all the results

    processes = [mp.Process(target=multi_process,
                            args=(chunked_queries[x], queue)) for x in range(num_processes)]

    for p in processes: p.start()

    divide_and_conquer_result = []
    for p in processes:
        divide_and_conquer_result.extend(queue.get())

    return divide_and_conquer_result


if __name__ == '__main__':
    start_single = dt.datetime.now()

    single_process = single_processing()

    print "Single process took {}".format(dt.datetime.now() - start_single)
    print "Number of records processed = {}".format(len(single_process))

    start_multi = dt.datetime.now()

    multi = multiprocess_handler()

    print "Multi process took {}".format(dt.datetime.now() - start_multi)
    print "Number of records processed = {}".format(len(multi))

A similar question about multithreading within a recordset in Python can be found on Stack Overflow: https://stackoverflow.com/questions/39750873/
