flask - SQLAlchemy - How to correctly bulk insert data into the database when the data has relationships


I have a file with rows of data. Each item has a gene listed in it. This is a one-to-many relationship, since each gene can be part of many things, but each thing can only have one gene.

Imagine models roughly like this:

# db is the app's flask_sqlalchemy.SQLAlchemy instance

class Gene(db.Model):

    __tablename__ = "gene"

    id = db.Column(db.Integer, primary_key=True)

    name1 = db.Column(db.Integer, index=True, unique=True, nullable=False)  # nullable might not be right
    name2 = db.Column(db.String(120), index=True, unique=True)

    things = db.relationship("Thing", back_populates='gene')

    def __init__(self, name1, name2=None):
        self.name1 = name1
        self.name2 = name2

    @classmethod
    def find_or_create(cls, name1, name2=None):
        record = cls.query.filter_by(name1=name1).first()
        if record is not None:
            if record.name2 is None and name2 is not None:
                record.name2 = name2
        else:
            record = cls(name1, name2)
            db.session.add(record)
        return record


class Thing(db.Model):

    __tablename__ = "thing"

    id = db.Column(db.Integer, primary_key=True)

    gene_id = db.Column(db.Integer, db.ForeignKey("gene.id"), nullable=False, index=True)
    gene = db.relationship("Gene", back_populates='things')

    data = db.Column(db.Integer)
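
For reference, a single Thing would normally be linked through the ORM relationship attribute, roughly like this (just a sketch; the numeric values are placeholders):

gene = Gene.find_or_create(1234)     # 1234 is a placeholder gene identifier (name1 is an Integer column)
thing = Thing(gene=gene, data=42)    # the ORM fills in thing.gene_id when the session is flushed
db.session.add(thing)
db.session.commit()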

I want to bulk insert a lot of Things, but I'm afraid that by using

    db.engine.execute(Thing.__table__.insert(), things)

I won't have the relationships in the database. Is there some way to keep the relationships while bulk adding, or to somehow add the rows one after the other and then establish the relationships later? All the documentation about bulk adding seems to assume you want to insert extremely simple models, and I'm somewhat confused about how to do this when the models are more complex (the example above is a simplified version).
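
To make the second idea concrete, here is an untested sketch of what I mean by adding the genes first and then bulk inserting the things with their foreign keys already set (rows and its keys "gene_name"/"value" are made-up placeholders for my input file):

# rows is assumed to be an iterable of dicts like {"gene_name": 1234, "value": 5}
gene_cache = {}
for row in rows:
    if row["gene_name"] not in gene_cache:
        gene_cache[row["gene_name"]] = Gene.find_or_create(row["gene_name"])

db.session.flush()  # assigns primary keys to the newly created Gene rows

thing_mappings = [
    {"gene_id": gene_cache[row["gene_name"]].id, "data": row["value"]}
    for row in rows
]
db.session.bulk_insert_mappings(Thing, thing_mappings)  # plain dicts, no ORM relationship handling
db.session.commit()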

-- Update 1 --

This answer seems to indicate that there is no real solution.

This answer seems to confirm that.

Best Answer

I have actually changed my code quite a bit since then; I think it is an improvement, so I have also updated my answer here.

I defined the following two tables, Sets and Data; for each set in Sets there are many rows of data in Data.

# imports assumed from the aliases used below
import sqlalchemy as sa
import sqlalchemy.orm as sa_orm
from sqlalchemy.ext.declarative import declarative_base

sa_dec_base = declarative_base()


class Sets(sa_dec_base):
    __tablename__ = 'Sets'
    id = sa.Column(sa.Integer, primary_key=True)
    FileName = sa.Column(sa.String(250), nullable=False)
    Channel = sa.Column(sa.Integer, nullable=False)
    Loop = sa.Column(sa.Integer, nullable=False)
    Frequencies = sa.Column(sa.Integer, nullable=False)
    Date = sa.Column(sa.String(250), nullable=False)
    Time = sa.Column(sa.String(250), nullable=False)
    Instrument = sa.Column(sa.String(250), nullable=False)
    Set_Data = sa_orm.relationship('Data')
    Set_RTD_spectra = sa_orm.relationship('RTD_spectra')  # RTD_spectra is another table, not shown here
    Set_RTD_info = sa_orm.relationship('RTD_info')        # RTD_info is another table, not shown here
    __table_args__ = (sa.UniqueConstraint('FileName', 'Channel', 'Loop'),)


class Data(sa_dec_base):
    __tablename__ = 'Data'
    id = sa.Column(sa.Integer, primary_key=True)
    Frequency = sa.Column(sa.Float, nullable=False)
    Magnitude = sa.Column(sa.Float, nullable=False)
    Phase = sa.Column(sa.Float, nullable=False)
    Set_ID = sa.Column(sa.Integer, sa.ForeignKey('Sets.id'))
    Data_Set = sa_orm.relationship('Sets', foreign_keys=[Set_ID])

Then I wrote the following function to bulk insert the data while keeping the relationship.
def insert_set_data(session, set2insert, data2insert, Data):
    """
    Insert a set and its related data, with a unique-constraint check on the set.
    set2insert is the prepared Sets object without an id; a correct and unique id will be given by the db itself.
    data2insert is a big pandas DataFrame, so a bulk insert is used.
    """
    session.add(set2insert)
    try:
        session.flush()
    except sa.exc.IntegrityError:  # catch the unique-constraint error if the set is already in the db
        session.rollback()
        print('already inserted ', set2insert.FileName, 'loop ', set2insert.Loop, 'channel ', set2insert.Channel)
    else:  # if there was no error, flush has given the set its id ("Sets.id")
        data2insert['Set_ID'] = set2insert.id  # pass Sets.id to data2insert as the foreign key to keep the relationship
        data2insert = data2insert.to_dict(orient='records')  # convert the df to records for the bulk insert
        session.bulk_insert_mappings(Data, data2insert)  # bulk insert
        session.commit()  # commit only once, so it happens only if both the set and the data were inserted correctly
        print('inserting ', set2insert.FileName, 'loop ', set2insert.Loop, 'channel ', set2insert.Channel)
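
A sketch of how the function might be called; the engine URL, the Sets field values and the DataFrame contents are made up for illustration, and it assumes the RTD_spectra and RTD_info models referenced above (not shown) are defined as well:

import pandas as pd

engine = sa.create_engine('sqlite:///measurements.db')  # placeholder database URL
sa_dec_base.metadata.create_all(engine)                 # create the tables if they do not exist
session = sa_orm.Session(bind=engine)

new_set = Sets(FileName='file_001.dat', Channel=1, Loop=1, Frequencies=3,
               Date='2016-05-27', Time='12:00:00', Instrument='analyzer_A')
df = pd.DataFrame({'Frequency': [1e3, 2e3, 3e3],
                   'Magnitude': [0.50, 0.71, 0.66],
                   'Phase': [10.0, 12.5, 15.1]})

insert_set_data(session, new_set, df, Data)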

In any case, other and possibly better solutions may exist.

Regarding "flask - SQLAlchemy - How to correctly bulk insert data into the database when the data has relationships", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/37472433/
