
python - Cassandra multiprocessing can't pickle _thread.lock objects


I am trying to use Cassandra with multiprocessing to insert rows (dummy data) concurrently, based on the example in

http://www.datastax.com/dev/blog/datastax-python-driver-multiprocessing-example-for-improved-bulk-data-throughput

Here is my code:

import itertools
import logging
import time
from multiprocessing import Pool

from cassandra.cluster import Cluster
from cassandra.concurrent import execute_concurrent_with_args

log = logging.getLogger(__name__)


class QueryManager(object):

    concurrency = 100  # chosen to match the default in execute_concurrent_with_args

    def __init__(self, session, process_count=None):
        self.pool = Pool(processes=process_count, initializer=self._setup, initargs=(session,))

    @classmethod
    def _setup(cls, session):
        cls.session = session
        cls.prepared = cls.session.prepare("""
            INSERT INTO test_table (key1, key2, key3, key4, key5) VALUES (?, ?, ?, ?, ?)
        """)

    def close_pool(self):
        self.pool.close()
        self.pool.join()

    def get_results(self, params):
        # split params into chunks of `concurrency` rows and fan them out to the pool
        results = self.pool.map(
            _multiprocess_write,
            (params[n:n + self.concurrency] for n in range(0, len(params), self.concurrency)))
        return list(itertools.chain(*results))

    @classmethod
    def _results_from_concurrent(cls, params):
        return [result[1] for result in execute_concurrent_with_args(cls.session, cls.prepared, params)]


def _multiprocess_write(params):
    return QueryManager._results_from_concurrent(params)


if __name__ == '__main__':

    processes = 2

    # connect to the cluster
    cluster = Cluster(contact_points=['127.0.0.1'], port=9042)
    session = cluster.connect()

    # database name is a concatenation of client_id and system_id
    keyspace_name = 'unit_test_0'

    # drop the keyspace if it already exists in the cluster
    session.execute("DROP KEYSPACE IF EXISTS " + keyspace_name)

    create_keyspace_query = "CREATE KEYSPACE " + keyspace_name \
        + " WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '1'};"
    session.execute(create_keyspace_query)

    # use the keyspace for this session
    session.set_keyspace(keyspace_name)

    # drop the table if it already exists in the keyspace
    session.execute("DROP TABLE IF EXISTS test_table")

    # create a table for the dummy data in the keyspace
    create_test_table = "CREATE TABLE test_table("

    keys = "key1 text,\n" \
           "key2 text,\n" \
           "key3 text,\n" \
           "key4 text,\n" \
           "key5 text,\n"

    create_test_table += keys
    create_test_table += "PRIMARY KEY (key1))"
    session.execute(create_test_table)

    qm = QueryManager(session, processes)

    params = list()
    for row in range(100000):
        key = 'test' + str(row)
        params.append([key, 'test', 'test', 'test', 'test'])

    start = time.time()
    rows = qm.get_results(params)
    delta = time.time() - start
    log.info('Cassandra inserted 100k dummy rows in %s secs', delta)

When I execute the code, I get the following error:

TypeError: can't pickle _thread.lock objects

pointing to

self.pool = Pool(processes=process_count, initializer=self._setup, initargs=(session,))

Best Answer

This indicates that you are trying to serialize a lock across the IPC boundary. I think it is probably because you are supplying the Session object as a parameter to the worker initialization function. Instead, make the init function create a new session in each worker process (see the "Session per Process" section of the blog post you cited).
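For reference, here is a minimal sketch of that fix, adapted from the question's code. Nothing unpicklable crosses the process boundary: the initializer receives no arguments, and each worker opens its own connection. The contact points and keyspace name (127.0.0.1:9042 and unit_test_0) are assumptions carried over from the question's setup.

from multiprocessing import Pool

from cassandra.cluster import Cluster


class QueryManager(object):

    concurrency = 100

    def __init__(self, process_count=None):
        # no Session is passed through initargs; workers build their own
        self.pool = Pool(processes=process_count, initializer=self._setup)

    @classmethod
    def _setup(cls):
        # runs once in every worker process; the Cluster, Session, and
        # prepared statement are all local to that process
        cls.cluster = Cluster(contact_points=['127.0.0.1'], port=9042)
        cls.session = cls.cluster.connect('unit_test_0')
        cls.prepared = cls.session.prepare("""
            INSERT INTO test_table (key1, key2, key3, key4, key5)
            VALUES (?, ?, ?, ?, ?)
        """)

With this change, the main block would construct the manager as qm = QueryManager(processes); the parent process keeps its own session only for creating the keyspace and table.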

Regarding "python - Cassandra multiprocessing can't pickle _thread.lock objects", see the similar question on Stack Overflow: https://stackoverflow.com/questions/37942249/
