
Python - Avoiding memory errors with a huge dataset


I have a Python program that connects to a PostgreSQL database. The database holds quite a lot of data (around 1.2 billion rows). Luckily, I don't have to analyse all of those rows at the same time.

Those 1.2 billion rows are spread over several tables (around 30). Currently I am working with a table called table_3, in which I want to access all rows with a specific "did" value (as the column is called).

I have counted the rows with this SQL command:

SELECT count(*) FROM table_3 WHERE did='356002062376054';

which returns 157 million rows.

I will perform some "analysis" on all of these rows (extract two specific values), run some calculations on those values, write the results to a dictionary, and then save them back into a different table in PostgreSQL.

The problem is that I am creating a lot of lists and dictionaries to manage all of this, and I end up running out of memory even though I am using 64-bit Python 3 and have 64 GB of RAM.

Some of the code:

import psycopg2
from datetime import datetime

CONNECTION = psycopg2.connect('<psycopg2 formatted string>')
CURSOR = CONNECTION.cursor()

DID_LIST = ["357139052424715",
            "353224061929963",
            "356002064810514",
            "356002064810183",
            "358188051768472",
            "358188050598029",
            "356002061925067",
            "358188056470108",
            "356002062376054",
            "357460064130045"]

SENSOR_LIST = [1, 2, 3, 4, 5, 6, 7, 8, 9,
               10, 11, 12, 13, 801, 900, 901,
               902, 903, 904, 905, 906, 907,
               908, 909, 910, 911]

for did in DID_LIST:
    table_name = did
    for sensor_id in SENSOR_LIST:
        rows = get_data(did, sensor_id)
        list_object = create_standard_list(sensor_id, rows)    # MemoryError happens here
        formatted_list = format_table_dictionary(list_object)  # ... or here
        pushed_rows = write_to_table(table_name, formatted_list)  # write_to_table method is omitted as that is not my problem

def get_data(did, table_id):
    """Getting data from postgresql."""
    table_name = "table_{0}".format(table_id)
    query = """SELECT * FROM {0} WHERE did='{1}'
               ORDER BY timestamp""".format(table_name, did)

    CURSOR.execute(query)
    CONNECTION.commit()

    return CURSOR

def create_standard_list(sensor_id, data):
    """Formats DB data to dictionary"""
    list_object = []

    print("Create standard list")
    for row in data:  # data is the psycopg2 CURSOR
        row_timestamp = row[2]
        row_data = row[3]

        temp_object = {"sensor_id": sensor_id, "timestamp": row_timestamp,
                       "data": row_data}

        list_object.append(temp_object)

    return list_object


def format_table_dictionary(list_dict):
    """Formats dictionary to simple data
    table_name = (dates, data_count, first row)"""
    print("Formatting dict to DB")
    temp_today = 0
    dict_list = []
    first_row = {}
    count = 1

    for elem in list_dict:
        # convert from milliseconds to seconds
        date = datetime.fromtimestamp(elem['timestamp'] / 1000)
        today = int(date.strftime('%d'))
        if temp_today != today:  # compare ints with !=, not 'is not'
            if not first_row:
                first_row = elem['data']
            first_row_str = str(first_row)
            dict_object = {"sensor_id": elem['sensor_id'],
                           "date": date.strftime('%d/%m-%Y'),
                           "reading_count": count,
                           # approximate size of the data in kB
                           "approx_data_size": (count * len(first_row_str) / 1000),
                           "time": date.strftime('%H:%M:%S'),
                           "first_row": first_row}

            dict_list.append(dict_object)
            first_row = {}
            temp_today = today
            count = 0
        else:
            count += 1

    return dict_list

My error occurs while creating either of the two lists marked with comments in the code above. It manifests as my computer freezing and eventually logging me out. I am running Windows 10, if that matters.

I know that the first list, built in the "create_standard_list" method, could be eliminated and that its code could run inside the "format_table_dictionary" code instead, thereby avoiding a list with 157 million elements in memory (see the sketch below). But I think some of the other tables I will run into will have similar problems and may be even bigger, so I would like to optimise this now. What can I do?
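For illustration, a minimal sketch of the merge described above. The combined function is hypothetical (not from the question), and it assumes rows carry the timestamp and data at indices 2 and 3, as in the code above:

def format_rows_directly(sensor_id, data):
    """Build the per-day summaries straight from the cursor,
    skipping the intermediate 157-million-element list."""
    dict_list = []
    temp_today = 0
    count = 1

    for row in data:  # data is the psycopg2 CURSOR, iterated row by row
        date = datetime.fromtimestamp(row[2] / 1000)  # ms -> s
        today = int(date.strftime('%d'))
        if temp_today != today:
            dict_list.append({"sensor_id": sensor_id,
                              "date": date.strftime('%d/%m-%Y'),
                              "reading_count": count,
                              "time": date.strftime('%H:%M:%S'),
                              "first_row": row[3]})
            temp_today = today
            count = 0
        else:
            count += 1

    return dict_list

Note that dict_list itself stays small (one entry per day), so only the per-row list disappears.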

I imagine writing to a file wouldn't help much either, since I would have to read that file back into memory again afterwards anyway?

Minimal example

I have a table:

---------------------------------------------------------------
| Row 1 | did | timestamp | data | unused value | unused value |
| Row 2 | did | timestamp | data | unused value | unused value |
| ...                                                          |
---------------------------------------------------------------

table = [{ values from above row1 }, { values from above row2},...]

connection = psycopg2.connect(<connection string>)
cursor = connection.cursor()

cursor.execute("""SELECT * FROM table_3 WHERE did='356002062376054'
                  ORDER BY timestamp""")
table = cursor  # psycopg2's execute() returns None; iterate the cursor itself

extracted_list = extract(table)
calculated_list = calculate(extracted_list)
... write to db ...

def extract(table):
    """extract all but unused values"""
    new_list = []
    for row in table:
        did = row[0]
        timestamp = row[1]
        data = row[2]

        a_dict = {'did': did, 'timestamp': timestamp, 'data': data}
        new_list.append(a_dict)

    return new_list


def calculate(a_list):
    """perform calculations on values"""
    dict_list = []
    temp_today = 0
    count = 0
    for row in a_list:
        date = datetime.fromtimestamp(row['timestamp'] / 1000)  # from ms to sec
        today = int(date.strftime('%d'))
        if temp_today != today:
            new_dict = {'date': date.strftime('%d/%m-%Y'),
                        'reading_count': count,
                        'time': date.strftime('%H:%M:%S')}
            dict_list.append(new_dict)
            temp_today = today  # remember the day so each day is only emitted once
            count = 0
        else:
            count += 1

    return dict_list


Best answer

create_standard_list() and format_table_dictionary() could be built as generators (yielding each item instead of returning a complete list). This stops the whole list from being held in memory at once and so should solve your problem, for example:

def create_standard_list(sensor_id, data):
    for row in data:
        row_timestamp = row[2]
        row_data = row[3]

        temp_object = {"sensor_id": sensor_id, "timestamp": row_timestamp,
                       "data": row_data}
        yield temp_object
        # ^ yield each item instead of appending to a list
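The answer shows only the first function; a sketch of the same change applied to format_table_dictionary (my own adaptation of the question's code, not from the answer), together with how the chain would then be consumed, might look like this:

def format_table_dictionary(list_dict):
    temp_today = 0
    first_row = {}
    count = 1

    for elem in list_dict:  # list_dict can now be a generator
        date = datetime.fromtimestamp(elem['timestamp'] / 1000)
        today = int(date.strftime('%d'))
        if temp_today != today:
            if not first_row:
                first_row = elem['data']
            yield {"sensor_id": elem['sensor_id'],
                   "date": date.strftime('%d/%m-%Y'),
                   "reading_count": count,
                   "time": date.strftime('%H:%M:%S'),
                   "first_row": first_row}
            first_row = {}
            temp_today = today
            count = 0
        else:
            count += 1

# consuming the chain one summary at a time; nothing holds the full result
rows = get_data(did, sensor_id)
for summary in format_table_dictionary(create_standard_list(sensor_id, rows)):
    write_to_table(table_name, [summary])  # assumes write_to_table accepts a list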

For more information, see the Python documentation on generators and the yield keyword.
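One caveat worth adding (my own note, not part of the accepted answer): by default psycopg2 fetches the entire result set to the client when execute() runs, so generators alone may not remove that first allocation. A named (server-side) cursor streams rows in batches instead; a minimal sketch:

# a named cursor is server-side: rows arrive in batches, not all at once
stream_cursor = CONNECTION.cursor(name='table_3_stream')
stream_cursor.itersize = 2000  # rows per network round trip (2000 is the default)
stream_cursor.execute("SELECT * FROM table_3 WHERE did = %s ORDER BY timestamp",
                      ('356002062376054',))

for row in stream_cursor:  # iterating pulls each batch lazily
    ...  # feed each row into the generator pipeline above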

Regarding "Python - avoiding memory errors with a huge dataset", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/41890945/
