
java - Why does Hibernate split my batch insert into 3 queries

Reposted. Author: 行者123. Updated: 2023-12-02 08:44:55

I am currently trying to implement batch inserts with Hibernate. Here is part of what I have implemented:

1. The entity

@Entity
@Table(name = "my_bean_table")
@Data
@NoArgsConstructor // JPA requires a no-arg constructor; with an explicit constructor present, @Data alone does not generate one
public class MyBean {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "seqGen")
    @SequenceGenerator(name = "seqGen", sequenceName = "bean_c_seq", allocationSize = 50)
    @Column(name = "my_bean_id")
    private Long id;

    @Column(name = "my_bean_name")
    private String name;

    @Column(name = "my_bean_age")
    private int age;

    public MyBean(String name, int age) {
        this.name = name;
        this.age = age;
    }
}

2. application.properties

Hibernate and the datasource are configured as follows:

spring.datasource.url=jdbc:postgresql://{ip}:{port}/${db}?reWriteBatchedInserts=true&loggerLevel=TRACE&loggerFile=pgjdbc.log
spring.jpa.show-sql=true
spring.jpa.properties.hibernate.jdbc.batch_size=50
spring.jpa.properties.hibernate.order_inserts=true

Note: &loggerLevel=TRACE&loggerFile=pgjdbc.log is there for debugging purposes.

3. The elements in my PostgreSQL database

CREATE TABLE my_bean_table
(
    my_bean_id bigint NOT NULL DEFAULT nextval('my_bean_seq'::regclass),
    my_bean_name char(100) NOT NULL,
    my_bean_age smallint NOT NULL,
    CONSTRAINT bean_c_table_pkey PRIMARY KEY (my_bean_id)
);

CREATE SEQUENCE my_bean_seq
    INCREMENT 50
    START 1
    MINVALUE 1
    MAXVALUE 9223372036854775807
    CACHE 1;

Edit: added the ItemWriter

public class MyBeanWriter implements ItemWriter<MyBean> {

    private final Logger logger = LoggerFactory.getLogger(MyBeanWriter.class);

    @Autowired
    MyBeanRepository repository;

    @Override
    public void write(List<? extends MyBean> items) throws Exception {
        repository.saveAll(items);
    }
}

The commit-interval is also set to 50.

In the log file produced by the JDBC driver, I get the following lines:

avr. 10, 2020 7:26:48 PM org.postgresql.core.v3.QueryExecutorImpl execute
FINEST: batch execute 3 queries, handler=org.postgresql.jdbc.BatchResultHandler@1317ac2c, maxRows=0, fetchSize=0, flags=5
avr. 10, 2020 7:26:48 PM org.postgresql.core.v3.QueryExecutorImpl sendParse
FINEST: FE=> Parse(stmt=null,query="insert into my_bean_table (my_bean_age, my_bean_name, my_bean_id) values ($1, $2, $3),($4, $5, $6),($7, $8, $9),($10, $11, $12),($13, $14, $15),($16, $17, $18),($19, $20, $21),($22, $23, $24),($25, $26, $27),($28, $29, $30),($31, $32, $33),($34, $35, $36),($37, $38, $39),($40, $41, $42),($43, $44, $45),($46, $47, $48),($49, $50, $51),($52, $53, $54),($55, $56, $57),($58, $59, $60),($61, $62, $63),($64, $65, $66),($67, $68, $69),($70, $71, $72),($73, $74, $75),($76, $77, $78),($79, $80, $81),($82, $83, $84),($85, $86, $87),($88, $89, $90),($91, $92, $93),($94, $95, $96)",oids={23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20})
...
FINEST: FE=> Execute(portal=null,limit=1)
avr. 10, 2020 7:26:48 PM org.postgresql.core.v3.QueryExecutorImpl sendParse
FINEST: FE=> Parse(stmt=null,query="insert into my_bean_table (my_bean_age, my_bean_name, my_bean_id) values ($1, $2, $3),($4, $5, $6),($7, $8, $9),($10, $11, $12),($13, $14, $15),($16, $17, $18),($19, $20, $21),($22, $23, $24),($25, $26, $27),($28, $29, $30),($31, $32, $33),($34, $35, $36),($37, $38, $39),($40, $41, $42),($43, $44, $45),($46, $47, $48)",oids={23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20})
...
avr. 10, 2020 7:26:48 PM org.postgresql.core.v3.QueryExecutorImpl sendParse
FINEST: FE=> Parse(stmt=null,query="insert into my_bean_table (my_bean_age, my_bean_name, my_bean_id) values ($1, $2, $3),($4, $5, $6)",oids={23,1043,20,23,1043,20})

Here is my question: why is the batch insert split into 3 queries:

  • first query: 32 elements
  • second query: 16 elements
  • third query: 2 elements

Note: I tried setting the batch size to 100 and 200, but I still got 3 distinct queries.

Best answer

While debugging, I found the PgPreparedStatement class and its transformQueriesAndParameters() method:

@Override
protected void transformQueriesAndParameters() throws SQLException {
    ...
    BatchedQuery originalQuery = (BatchedQuery) preparedQuery.query;
    // Single query cannot have more than {@link Short#MAX_VALUE} binds, thus
    // the number of multi-values blocks should be capped.
    // Typically, it does not make much sense to batch more than 128 rows: performance
    // does not improve much after updating 128 statements with 1 multi-valued one, thus
    // we cap maximum batch size and split there.
    ...
    final int highestBlockCount = 128;
    final int maxValueBlocks = bindCount == 0 ? 1024 /* if no binds, use 1024 rows */
        : Integer.highestOneBit( // deriveForMultiBatch supports powers of two only
            Math.min(Math.max(1, (Short.MAX_VALUE - 1) / bindCount), highestBlockCount));
}
  • a single multi-value query of a batched insert can contain at most 128 rows
  • every emitted query holds a power-of-two number of rows

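Those two rules explain the log: with 3 binds per MyBean row (age, name, id), a 50-row flush is decomposed greedily into power-of-two chunks of at most 128 rows, giving 32 + 16 + 2. The following standalone sketch reproduces the driver's formula and the split; it is not pgjdbc itself, only an illustration of the arithmetic quoted above (the class and method names are my own):

```java
import java.util.ArrayList;
import java.util.List;

public class BatchSplitDemo {

    // Mirrors the pgjdbc formula: the largest power-of-two row count
    // allowed per query, capped at 128 rows and at Short.MAX_VALUE binds.
    static int maxValueBlocks(int bindCount) {
        final int highestBlockCount = 128;
        return bindCount == 0 ? 1024
                : Integer.highestOneBit(
                        Math.min(Math.max(1, (Short.MAX_VALUE - 1) / bindCount), highestBlockCount));
    }

    // Greedily split `rows` into power-of-two chunks no larger than the cap,
    // which is how a 50-row batch becomes 32 + 16 + 2.
    static List<Integer> split(int rows, int bindCount) {
        int cap = maxValueBlocks(bindCount);
        List<Integer> chunks = new ArrayList<>();
        while (rows > 0) {
            int chunk = Math.min(Integer.highestOneBit(rows), cap);
            chunks.add(chunk);
            rows -= chunk;
        }
        return chunks;
    }

    public static void main(String[] args) {
        System.out.println(split(50, 3));  // prints [32, 16, 2]
        System.out.println(split(100, 3)); // prints [64, 32, 4]
        System.out.println(split(200, 3)); // prints [128, 64, 8]
    }
}
```

Note that 100 and 200 rows also decompose into exactly three power-of-two chunks, which is why raising the batch size still produced 3 distinct queries.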
I now use 128 both as the sequence increment in the database and as the batch size parameter on the client side, and it works like a charm.

Regarding "java - Why does Hibernate split my batch insert into 3 queries", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/61145660/
