
java - 82 seconds to extract a single row from the database, java.lang.OutOfMemoryError: GC overhead limit exceeded on a large database


I added 100,000 rows to a table in my database (localhost), and since then I get this error:

java.lang.OutOfMemoryError: GC overhead limit exceeded

I worked around the problem by entering this in the console:

javaw -XX:-UseConcMarkSweepGC

The console output is (see the code below for context):

2015-08-02T02:57:22.779+0200|Info: 5
2015-08-02T02:57:22.779+0200|Info: end, time taken: 82755

Extracting a single row from the database takes 82 seconds (see the code at the end). It worked fine when I had fewer rows, so I would like to know:

  • Why does it take so much time to extract one row? Surely JPA isn't extracting every row into an object? Or is it? Just wow.
  • Is there a way around this? I mean, extracting a single row in 80 seconds is borderline slow.
  • Do I really need to pass the -XX:-UseConcMarkSweepGC flag? What does it do? From the doc:

Use concurrent mark-sweep collection for the old generation. (Introduced in 1.4.1)

Here is my code:

@EJB
private ThreadLookUpInterface ts;

@Schedule(hour = "*", minute = "*/1", second = "0", persistent = false)
@Override
public void makeTopThreadList() {
    System.out.println("" + ts.getThread(5).getIdthread());
}

My service EJB looks like this:

@Stateless
public class ThreadLookUpService implements ThreadLookUpInterface {

    @PersistenceContext(unitName = "my-pu")
    private EntityManager em;

    private static final String FIND_THREAD_BY_ID = "SELECT t FROM Thethread t WHERE t.idthread=:id";

    @Override
    public Thethread getThread(int threadId) {
        Query query = em.createQuery(FIND_THREAD_BY_ID);
        query.setParameter("id", threadId);
        try {
            Thethread thread = (Thethread) query.getSingleResult();
            return thread;
        } catch (NoResultException e) {
            return null;
        } catch (Exception e) {
            throw new DAOException(e);
        }
    }
}

And my entity:

@Entity
@Table(name = "thethreads")
@NamedQuery(name = "Thethread.findAll", query = "SELECT t FROM Thethread t")
public class Thethread implements Serializable {
    private static final long serialVersionUID = 1L;

    @Id
    private int idthread;

    private String content;

    @Temporal(TemporalType.TIMESTAMP)
    @Column(name = "date_posted")
    private Date datePosted;

    private int downvotes;

    @Column(name = "hot_score")
    private int hotScore;

    @Column(name = "is_pol")
    private String isPol;

    private String title;

    private String type;

    private int upvotes;

    // bi-directional many-to-one association to Category
    @OneToMany(mappedBy = "thethread", fetch = FetchType.EAGER)
    private List<Category> categories;

    // bi-directional many-to-one association to Post
    @OneToMany(mappedBy = "thethread", fetch = FetchType.EAGER)
    private List<Post> posts;

    // bi-directional many-to-one association to Category
    @ManyToOne
    @JoinColumn(name = "categories_idcategory")
    private Category category;

    // bi-directional many-to-one association to Post
    @ManyToOne
    @JoinColumn(name = "last_post")
    private Post post;

    // bi-directional many-to-one association to User
    @ManyToOne
    @JoinColumn(name = "posted_by")
    private User user;

    // bi-directional many-to-one association to ThreadVote
    @OneToMany(mappedBy = "thethread", fetch = FetchType.EAGER)
    private List<ThreadVote> threadVotes;

    public Thethread() {
    }
}

Best Answer

Do I really need to pass the -XX:-UseConcMarkSweepGC flag? What does it do?

It selects the garbage collector: ConcurrentMarkSweep (CMS). The Java heap (where objects live during their lifetime) is split mainly into two parts: the young generation and the old generation. The garbage collector is responsible for applying algorithms and policies over this partition to clean up the heap. Unlike the default GC, which stops the application threads during a full garbage collection, CMS uses one or more background threads to periodically scan the old generation and discard unused objects. That can reduce the overhead situation for you, but the underlying problem does not go away.
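If you want to confirm which collectors the JVM actually ended up with after passing a GC flag, a minimal sketch using the standard java.lang.management API could look like this (the class name GcCheck is just a placeholder, not part of the original code):

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Prints the garbage collectors the running JVM selected, typically
// "ParNew" / "ConcurrentMarkSweep" when CMS is active, or the defaults otherwise.
public class GcCheck {
    public static void main(String[] args) {
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName() + " - collections so far: " + gc.getCollectionCount());
        }
    }
}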

Surely JPA isn't extracting every row into an object? Or is it? Just wow. Is there a way around this? I mean, extracting a single row in 80 seconds is borderline slow.

The answer depends first of all on your entity model. A common problem, for example, is overuse of the EAGER fetch type, which causes a lot of unnecessary objects to be fetched and several extra query statements to be executed for them. It also depends on how your JPA implementation and your database resolve the task, but I suggest you start by reviewing the entity model. If you post the Thethread entity, perhaps someone can spot what might be wrong.
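As an illustration of that point (a sketch based on the Thethread entity shown above, not code from the original answer): @OneToMany associations default to LAZY in JPA, so removing fetch = FetchType.EAGER, or stating LAZY explicitly, stops a single-row lookup from dragging in every related Category, Post and ThreadVote:

import java.io.Serializable;
import java.util.List;
import javax.persistence.*;

@Entity
@Table(name = "thethreads")
public class Thethread implements Serializable {

    @Id
    private int idthread;

    // Lazy collections are only loaded when they are actually accessed,
    // not as part of every query that returns a Thethread.
    @OneToMany(mappedBy = "thethread", fetch = FetchType.LAZY)
    private List<Category> categories;

    @OneToMany(mappedBy = "thethread", fetch = FetchType.LAZY)
    private List<Post> posts;

    @OneToMany(mappedBy = "thethread", fetch = FetchType.LAZY)
    private List<ThreadVote> threadVotes;

    // remaining fields and @ManyToOne associations unchanged
}

When one of these collections really is needed for a particular use case, it can still be fetched in that query only (for example with a JOIN FETCH clause) instead of eagerly everywhere.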

Regarding java - 82 seconds to extract a single row from the database, java.lang.OutOfMemoryError: GC overhead limit exceeded on a large database, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/31767706/
