
python - cx_Oracle executemany with CLOBs


I am trying to parse several CSVs and insert their data into a table using cx_Oracle. I have no problem inserting into the table with execute, but when I attempt the same procedure with executemany I get an error. My code using execute is

with open(key, 'r') as file:
    for line in file:
        data = line.split(",")
        query = "INSERT INTO " + tables[key] + " VALUES ("
        for col in range(len(data)):
            query += ":" + str(col) + ","
        query = query[:-1] + ")"
        cursor.execute(query, data)

But when I replace it with
with open(key, 'r') as file:
    rows = []  # renamed from "list" to avoid shadowing the builtin
    for line in file:
        data = line.split(",")
        rows.append(data)
    if len(rows) > 0:
        query = "INSERT INTO " + tables[key] + " VALUES ("
        for col in range(len(data)):
            query += ":" + str(col) + ","
        query = query[:-1] + ")"
        cursor.prepare(query)
        cursor.executemany(None, rows)

when I try to insert into a table that has a CLOB column and the data exceeds 4000 bytes, I get "ValueError: string data too large". executemany works fine when the table has no CLOB columns. Is there any way to get cx_Oracle to treat the appropriate columns as CLOBs when performing executemany?

Best Answer

Try setting the input size for the large columns to cx_Oracle.CLOB. It may not work if you have binary data, but it should work for any text in a CSV. The 2K cutoff used in the helper below may be lower than it needs to be.
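
For instance, with positional binds like those built in the question, a minimal sketch might look like this (the connection string, table name, and three-column layout are assumptions for illustration):

import cx_Oracle

connection = cx_Oracle.connect("user/password@dsn")  # hypothetical credentials
cursor = connection.cursor()

# Bind the third column as a CLOB; None leaves the other columns at their defaults.
cursor.setinputsizes(None, None, cx_Oracle.CLOB)

rows = [
    ("a", "b", "x" * 5000),  # third field exceeds the 4000-byte limit
    ("c", "d", "y" * 6000),
]
cursor.executemany("INSERT INTO my_table VALUES (:1, :2, :3)", rows)
connection.commit()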

Note that executemany seems to be quite a bit slower when CLOB columns are involved, but it is still better than repeated calls to execute:

def _executemany(cursor, sql, data):
    '''
    Run the parameterized sql with the given dataset using cursor.executemany.
    If any column contains string values longer than 2K, use CLOBs to avoid
    "string too large" errors.

    @param sql parameterized sql, with parameters named according to the field names in data
    @param data array of dicts, one per row to execute; each dict must have fields
        corresponding to the parameter names in sql
    '''
    input_sizes = {}
    for row in data:
        for k, v in row.items():
            # basestring in the original (Python 2); str is its Python 3 equivalent
            if isinstance(v, str) and len(v) > 2000:
                input_sizes[k] = cx_Oracle.CLOB
    cursor.setinputsizes(**input_sizes)
    cursor.executemany(sql, data)
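
A hypothetical call, assuming a documents table with an id column and a body CLOB column (these names are illustrative, not from the original question):

rows = [
    {"id": 1, "body": "short text"},
    {"id": 2, "body": "x" * 10000},  # long value triggers the CLOB input size
]
_executemany(cursor, "INSERT INTO documents (id, body) VALUES (:id, :body)", rows)
connection.commit()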

Regarding python - cx_Oracle executemany with CLOBs, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/6624991/
