
Python multithreading performance - switch to C++?


So, I have a Python script that basically writes out a file of 80GB+. Currently it just runs serially, and the one time I actually ran it on the server it took about 13 hours.

I intend to parallelize it so that it writes to several files rather than just one.

It would be somewhat easier to keep what I already have in Python and just add multiple threads (there is a shared map of data the threads would need to access, but nothing will ever write to it, so it doesn't need protection).

However, is it dumb to keep it in Python? I also know C++, so do you think I should rewrite it in C++ instead? I figure the program is disk-bound more than anything else (there isn't much logic involved in writing the file), so maybe it wouldn't make much of a difference. I'm not sure how long C++ would take to write the same 80GB file (serially).
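For reference, here is a minimal sketch of that threaded layout, not the real code: TABLES, SIZES, and openDb() are hypothetical stand-ins for the script's own helpers, and writeEntriesToSql is the function shown below (Python 2.7, matching the rest of the code).

    import threading

    def worker(myTables, outputPath):
        out = open(outputPath, 'w')
        try:
            db = openDb()  # hypothetical: one DB connection per thread
            for table in myTables:
                # the shared map is only read, never written, so no lock is needed
                writeEntriesToSql(db, table, SIZES[table], out)
        finally:
            out.close()

    threads = []
    for k in range(4):  # split the ~30 tables across 4 threads and 4 files
        t = threading.Thread(target=worker,
                             args=(TABLES[k::4], "out_%d.sql" % k))
        threads.append(t)
        t.start()
    for t in threads:
        t.join()

Because of the GIL, this only helps if the threads spend most of their time blocked on disk or database I/O; the answer below suggests multiprocessing as the way around that.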


Update, June 6 2014, 16:40 PST: I'm posting my code below to establish whether there is a bottleneck in the code itself, as opposed to it being purely disk-bound.

I call writeEntriesToSql() once per table, and there are about 30 tables. "size" is the number of inserts to write for the table; the cumulative size across all tables is about 200,000,000. (Over the 13-hour run that works out to roughly 4,300 inserts per second, so even small per-row costs multiply.)

I did notice that I'm compiling my regular expression over and over again, which probably wastes a lot of time, although I'm not sure how much.

# imports needed by this excerpt; queryDatabaseMultipleRows,
# queryDatabaseSingleRowCol, foreignKeys, and perfTestDir are defined
# elsewhere in the full script
import math
import random
import re
import string
import sys
from datetime import datetime, timedelta


def writeEntriesToSql(db, table, size, outputFile):

    # get a description of the table
    rows = queryDatabaseMultipleRows(db, 'DESC ' + table)

    fieldNameCol = 0  # no enums in python 2.7 :(
    typeCol = 1
    nullCol = 2
    keyCol = 3
    defaultCol = 4
    extraCol = 5

    fieldNamesToTypes = {}

    for row in rows:
        if row[extraCol].find("auto_increment") == -1:
            # insert this one
            fieldNamesToTypes[row[fieldNameCol]] = row[typeCol]

    for i in range(size):
        fieldNames = ""
        fieldVals = ""
        count = 0

        # go through the fields
        for fieldName, type in fieldNamesToTypes.iteritems():
            # build a string of field names to be used in the INSERT statement
            fieldNames += table + "." + fieldName

            if fieldName in foreignKeys[table]:
                otherTable = foreignKeys[table][fieldName][0]
                otherTableKey = foreignKeys[table][fieldName][1]
                if len(foreignKeys[table][fieldName]) == 3:
                    # we already got the value so we don't have to get it again
                    val = foreignKeys[table][fieldName][2]
                else:
                    # get the value from the other table and store it
                    #### I plan for this to be an infrequent query - unless something is broken here!
                    query = "SELECT " + otherTableKey + " FROM " + otherTable + " LIMIT 1"
                    val = queryDatabaseSingleRowCol(db, query)
                    foreignKeys[table][fieldName].append(val)
                fieldVals += val
            else:
                fieldVals += getDefaultFieldVal(type)
            count = count + 1
            if count != len(fieldNamesToTypes):
                fieldNames += ","
                fieldVals += ","
        # (writing the assembled INSERT to outputFile happens after this
        #  point; that part of the function is not shown in this excerpt)

# return the default field value for a given field type which will be used to prepopulate our tables
def getDefaultFieldVal(type):
    global insertTime  # declared up front: the date/time branches below reassign it

    if 'insertTime' not in globals():
        insertTime = datetime.utcnow()
        # store this time in a file so that it can be retrieved by SkyReporterTest.perfoutput.py
        try:
            timeFileName = perfTestDir + "/dbTime.txt"
            timeFile = open(timeFileName, 'w')
            timeFile.write(str(insertTime))
            timeFile.close()
        except IOError:  # was a bare except calling os.exit(), which doesn't exist
            print "!!! cannot open file " + timeFileName + " for writing. Please make sure this is run where you have write permissions\n"
            sys.exit(1)

    # many of the types are formatted with a typename, followed by a size in parentheses
    ##### Looking at this more closely, I suppose I could be compiling this once instead of over and over - a bit of a bottleneck here?
    p = re.compile(r"(.*)\(([0-9]+).*")

    size = 0
    if p.match(type):
        size = int(p.sub(r"\2", type))
        type = p.sub(r"\1", type)

    # upper bounds were math.pow(2, n), i.e. one past the signed maximum;
    # 2 ** n - 1 keeps the value in range for signed columns
    if type == "tinyint":
        return str(random.randint(1, 2 ** 7 - 1))
    elif type == "smallint":
        return str(random.randint(1, 2 ** 15 - 1))
    elif type == "mediumint":
        return str(random.randint(1, 2 ** 23 - 1))
    elif type == "int" or type == "integer":
        return str(random.randint(1, 2 ** 31 - 1))
    elif type == "bigint":
        return str(random.randint(1, 2 ** 63 - 1))
    elif type in ("float", "double", "doubleprecision", "decimal", "realdecimal", "numeric"):
        return str(random.random() * 100000000)  # random endpoints for this random
    elif type == "date":
        insertTime = insertTime - timedelta(seconds=1)
        return "'" + insertTime.strftime("%Y-%m-%d") + "'"
    elif type == "datetime":
        insertTime = insertTime - timedelta(seconds=1)
        return "'" + insertTime.strftime("%Y-%m-%d %H:%M:%S") + "'"
    elif type == "timestamp":
        insertTime = insertTime - timedelta(seconds=1)
        return "'" + insertTime.strftime("%Y%m%d%H%M%S") + "'"
    elif type == "time":
        insertTime = insertTime - timedelta(seconds=1)
        return "'" + insertTime.strftime("%H:%M:%S") + "'"
    elif type == "year":
        insertTime = insertTime - timedelta(seconds=1)
        return "'" + insertTime.strftime("%Y") + "'"
    elif type in ("char", "varchar", "tinyblob", "tinytext", "blob", "text", "mediumblob",
                  "mediumtext", "longblob", "longtext"):  # "tinyblog" was a typo for "tinyblob"
        if size == 0:  # not specified
            return "'a'"
        else:
            lst = [random.choice(string.ascii_letters + string.digits) for n in xrange(size)]
            # quote the value like the other branches do, or the INSERT is malformed
            return "'" + "".join(lst) + "'"
    elif type == "enum":
        return "NULL"  # TBD if needed
    elif type == "set":
        return "NULL"  # TBD if needed
    else:
        print "!!! Unrecognized mysql type: " + type + "\n"
        sys.exit(1)
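On the re.compile question flagged in the comment above: hoisting the pattern to module scope removes the per-call work entirely. A minimal sketch (splitTypeAndSize is a hypothetical helper name, not from the original script):

    # compiled once at import time instead of once per getDefaultFieldVal() call
    TYPE_WITH_SIZE = re.compile(r"(.*)\(([0-9]+).*")

    def splitTypeAndSize(fieldType):
        # "varchar(32)" -> ("varchar", 32); "int" -> ("int", 0)
        m = TYPE_WITH_SIZE.match(fieldType)
        if m:
            return m.group(1), int(m.group(2))
        return fieldType, 0

That said, CPython's re module keeps an internal cache of compiled patterns, so the repeated re.compile is probably cheaper than it looks; hoisting it still takes the cache lookup out of a path that runs for every field of every row.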

Best Answer

Python's I/O is not much slower than that of other languages. The interpreter can be slow to start up, but writing a file this large amortizes that effect.

I would recommend looking into the multiprocessing module, which gives you true parallelism by running multiple Python instances, sidestepping the GIL. These do carry some overhead, but again, for an 80GB file it won't matter much. Keep in mind that each process is a full process, meaning it will use more compute resources.
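A minimal sketch of that suggestion, with one process per chunk of tables and one output file each; TABLES, SIZES, and openDb() are hypothetical stand-ins for the script's own helpers:

    import multiprocessing

    def worker(args):
        myTables, outputPath = args
        out = open(outputPath, 'w')
        try:
            db = openDb()  # hypothetical: each process opens its own connection
            for table in myTables:
                writeEntriesToSql(db, table, SIZES[table], out)
        finally:
            out.close()

    if __name__ == '__main__':
        # split the ~30 tables across 4 worker processes, one output file each
        chunks = [(TABLES[k::4], "out_%d.sql" % k) for k in range(4)]
        pool = multiprocessing.Pool(processes=4)
        pool.map(worker, chunks)
        pool.close()
        pool.join()

Each worker is a separate interpreter, so the read-only map has to be rebuilt in each process or inherited via fork (cheap on Linux); the output files can be concatenated afterwards if a single file is needed.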

Regarding "Python multithreading performance - switch to C++?", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/24086491/
