
mysql - Can't isolate JDBC memory leak in Scala bulk data loading application


I wrote a sensor network web service in Scala (it's my first Scala application; yes, I know it uses Spring dependency injection, and I'm in the process of pulling a lot of that out, so don't judge). Anyway, the company sponsoring it went under, and I extracted all the data from the web service as raw XML packets. Now I'm trying to load that data back into a database. The service supports Postgres, MySQL and Microsoft SQL (the original database we were using). I can get Postgres to load just fine, but MySQL and Microsoft SQL keep running into OutOfMemory problems, even when I give them an 8GB or 12GB heap.

I suspect the Postgres driver may simply be more efficient with its internal data structures, because under Postgres I also see memory grow without being released, just not nearly as much. I'm careful to close my ResultSet and Connection objects, but I'm still missing something.

Here is my Scala bulk loader. It takes a tar.bz2 of XML files and loads them into the database:

val BUFFER_SIZE = 4096
val PACKAGE_CHUNK_SIZE = 10000

def main(args : Array[String]) {

  if(args.length != 1) {
    System.err.println("Usage: %s [bzip2 file]".format(this.getClass))
    System.exit(1)
  }

  val loader = MySpring.getObject("FormatAGRA.XML").asInstanceOf[FormatTrait]
  val db = MySpring.getObject("serviceDataHandler").asInstanceOf[ServiceDataHandlerTrait]

  val bzin = new TarArchiveInputStream(new BZip2CompressorInputStream(new BufferedInputStream(new FileInputStream(args(0)))))
  val models = new ListBuffer[ModelTrait]()
  var chunks = 0

  Stream.continually(bzin.getNextEntry()).takeWhile(_ != null) foreach {
    entry => {
      if(entry.asInstanceOf[TarArchiveEntry].isFile()) {

        val xmlfile = new ByteArrayOutputStream()
        IOUtils.copy(bzin,xmlfile)
        //val models = new ListBuffer[ModelTrait]()
        models.appendAll( loader.loadModels(new String(xmlfile.toByteArray())) )
        System.out.println(String.format("Processing Entry %s",entry.getName));

        chunks = chunks + 1
        if( chunks % PACKAGE_CHUNK_SIZE == 0) {
          System.out.println("Sending batch of %d to database".format(PACKAGE_CHUNK_SIZE))
          db.loadData(models.toList.asInstanceOf[List[DataModel]])
          models.clear()
        }
      }
    }
  }
}
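
One thing worth flagging in the loader above (my observation, not something from the original post): batches are only flushed when chunks is an exact multiple of PACKAGE_CHUNK_SIZE, so any models left over after the last tar entry never reach the database. A minimal sketch of a trailing flush, reusing the db handler and models buffer declared in main():

// Hypothetical trailing flush after the tar loop finishes; assumes the
// `models` buffer and `db` handler from main() above.
if (models.nonEmpty) {
  System.out.println("Sending final batch of %d to database".format(models.length))
  db.loadData(models.toList.asInstanceOf[List[DataModel]])
  models.clear()
}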

Now for those pesky Spring details. Here are my beans:

<bean id="serviceDataHandler" parent="baseDataHandler" class="io.bigsense.db.ServiceDataHandler">
<property name="ds" ref="serviceDataSource" />
</bean>

<!-- Database configurations -->
<bean id="baseDataSource" abstract="true" class="com.jolbox.bonecp.BoneCPDataSource" destroy-method="close">
<property name="driverClass" value="dbDriver" />
<property name="jdbcUrl" value="connectionString" />
<property name="idleConnectionTestPeriod" value="60"/>
<property name="idleMaxAge" value="240"/>
<property name="maxConnectionsPerPartition" value="dbPoolMaxPerPart"/>
<property name="minConnectionsPerPartition" value="dbPoolMinPerPart"/>
<property name="partitionCount" value="dbPoolPartitions"/>
<property name="acquireIncrement" value="5"/>
<property name="statementsCacheSize" value="100"/>
<property name="releaseHelperThreads" value="3"/>
</bean>

<bean id="serviceDataSource" parent="baseDataSource" >
<property name="username" value="dbUser"/>
<property name="password" value="dbPass"/>
</bean>

Things like dbUser/dbPass/connectionString/dbDriver are substituted at compile time (a later release will use a runtime properties file instead, so you don't have to recompile the war for different configurations). But you get the basic idea.
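
For readers who would rather see the wiring without Spring, here is a rough programmatic equivalent of the two beans above. This is my own sketch, not code from the question; the setter names mirror the bean properties, while the driver class, JDBC URL, credentials and pool sizes are invented placeholders standing in for the compile-time tokens.

import com.jolbox.bonecp.BoneCPDataSource

// Programmatic equivalent of serviceDataSource/baseDataSource (sketch only).
// Driver, URL, credentials and pool sizes are hypothetical stand-ins for the
// dbDriver/connectionString/dbUser/dbPass/dbPool* tokens.
val ds = new BoneCPDataSource()
ds.setDriverClass("com.mysql.jdbc.Driver")              // dbDriver
ds.setJdbcUrl("jdbc:mysql://localhost:3306/bigsense")   // connectionString
ds.setUsername("dbUser")
ds.setPassword("dbPass")
ds.setIdleConnectionTestPeriod(60)
ds.setIdleMaxAge(240)
ds.setMaxConnectionsPerPartition(10)                    // dbPoolMaxPerPart
ds.setMinConnectionsPerPartition(2)                     // dbPoolMinPerPart
ds.setPartitionCount(3)                                 // dbPoolPartitions
ds.setAcquireIncrement(5)
ds.setStatementsCacheSize(100)
ds.setReleaseHelperThreads(3)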

The model loaded as FormatAGRA.XML just reads the XML into an object (yes, I know it's awful... the XML goes away in the next release, JSON only!):

class AgraDataXMLFormat extends FormatTrait {

  def renderModels(model : List[ModelTrait]) : String = {

    if(model.length > 0) {
      model.head match {
        case x:DataModel => {
          return <AgraData>{
            for( pack <- model.asInstanceOf[List[DataModel]]) yield {
              <package id={pack.uniqueId} timestamp={pack.timestamp}>
                <sensors>{
                  for( sensor <- pack.sensors) yield {
                    <sensor id={sensor.uniqueId} type={sensor.stype} units={sensor.units} timestamp={sensor.timestamp}>
                      <data>{sensor.data}</data></sensor>
                  }
                }</sensors><errors>{ for(error <- pack.errors) yield {
                  <error>{error}</error>
                }}
                </errors></package>
            }
          }</AgraData>.toString()
        }
        case x:RelayModel => {
          return <AgraRelays>{
            for( r <- model.asInstanceOf[List[RelayModel]]) yield {
              /* TODO Get this working */
              /* <relay id={r.id} identifier={r.identifier} publicKey={r.publicKey} />*/
            }
          }</AgraRelays>.toString()
        }
        case _ => {
          //TODO: This needs to be an exception to generate a 400 BAD RESPONSE
          "Format not implemented for given model Type"
        }
      }
    }
    //TODO throw exception? No...hmm
    ""
  }


  def loadModels(data : String) : List[ModelTrait] = {

    var xml : Elem = XML.loadString(data)

    var log : Logger = Logger.getLogger(this.getClass())

    var models = new ListBuffer[DataModel]

    for( pack <- xml \\ "package") yield {

      var model = new DataModel()

      var sensors = pack \ "sensors"
      var errors = pack \ "errors"
      model.timestamp = (pack \"@timestamp").text.trim()
      model.uniqueId = (pack \"@id" ).text.trim()

      var sbList = new ListBuffer[SensorModel]()
      var sbErr = new ListBuffer[String]()

      for( node <- sensors \"sensor") yield {
        var sensorData = new SensorModel()
        sensorData.uniqueId = (node\"@id").text.trim()
        sensorData.stype = (node\"@type").text.trim()
        sensorData.units = (node\"@units").text.trim()
        sensorData.timestamp = (node\"@timestamp").text.trim()
        sensorData.data = (node\"data").text.trim()
        sbList += sensorData
      }
      for( err <- errors \"error") yield {
        model.errors.append(err.text.trim())
      }
      model.sensors = sbList.toList
      models += model
    }
    models.toList
  }

}
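
To make the expected XML shape concrete, here is a small usage sketch of loadModels(). The example packet is mine, not from the question: the element and attribute names follow what loadModels() reads above, while the concrete ids, values and timestamp format are made up.

// Hypothetical sample packet matching the structure loadModels() parses.
val sampleXml =
  """<AgraData>
    |  <package id="relay-001" timestamp="2014-01-01 00:00:00">
    |    <sensors>
    |      <sensor id="temp-1" type="Temperature" units="C" timestamp="2014-01-01 00:00:00">
    |        <data>21.5</data>
    |      </sensor>
    |    </sensors>
    |    <errors>
    |      <error>low battery</error>
    |    </errors>
    |  </package>
    |</AgraData>""".stripMargin

val format = new AgraDataXMLFormat()
val parsed = format.loadModels(sampleXml)   // List[ModelTrait] containing one DataModel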

And finally, the fun part: the database stuff. There is a base trait with some boilerplate for closing connections and running queries. Every query goes through this runQuery(). I am closing both connections and result sets, and I can't quite see where the leak is. I have a feeling it's something about how I'm handling JDBC, because even though there also seems to be a leak under PostgreSQL (I can watch memory usage climb), it still finishes the load without exhausting an 8GB heap. The same data set fails at around 100,000 records on MS SQL and MySQL.

trait DataHandlerTrait {

  @BeanProperty
  var ds : DataSource = _

  @BeanProperty
  var converters : scala.collection.mutable.Map[String,ConverterTrait] = _

  @BeanProperty
  var sqlCommands : EProperties = _

  @BeanProperty
  var dbDialect : String = _

  val DB_MSSQL = "mssql"
  val DB_MYSQL = "mysql"
  val DB_PGSQL = "pgsql"

  protected var log = Logger.getLogger(getClass())

  //Taken From: http://zcox.wordpress.com/2009/08/17/simple-jdbc-queries-in-scala/
  protected def using[Closeable <: {def close(): Unit}, B](closeable: Closeable)(getB: Closeable => B): B =
    try {
      getB(closeable)
    } finally {
      try { closeable.close() } catch { case e:Exception => {} }
    }


  protected def runQuery(req: DBRequest): DBResult = {

    val retval = new DBResult()

    val consBuilder = new StringBuilder(sqlCommands.getProperty(req.queryName))

    val paramList: ListBuffer[Any] = new ListBuffer()
    paramList.appendAll(req.args)

    //constraints
    // (we can't use mkString because we need to deal with the
    // complex case of whether something is an actual constraint (handled in the query)
    // or a conversion (handled row by row))
    var whereAnd = " WHERE "
    for ((para, list) <- req.constraints) {
      val con = sqlCommands.getProperty("constraint" + para)
      if ((!converters.contains(para)) && (con == null || con == "")) {
        throw new DatabaseException("Unknown Constraint: %s".format(para))
      }
      else if (!converters.contains(para)) {
        for (l <- list) {
          consBuilder.append(whereAnd)
          consBuilder.append(con)
          paramList.append(l)
          whereAnd = " AND "
        }
      }
    }

    //group by and order by
    for (i <- List(req.group, req.order)) yield {
      i match {
        case Some(i: String) => consBuilder.append(sqlCommands.getProperty(i))
        case None => {}
      }
    }

    //prepare statement
    log.debug("SQL Statement: %s".format(consBuilder.toString()))

    /* PostgreSQL driver quirk. If you use RETURN_GENERATED_KEYS, it adds RETURNING
       to the end of every statement! Meanwhile, certain MySQL SELECT statements need RETURN_GENERATED_KEYS.
    */
    var keys = Statement.RETURN_GENERATED_KEYS
    if (dbDialect == DB_PGSQL) {
      keys = if (consBuilder.toString().toUpperCase().startsWith("INSERT")) Statement.RETURN_GENERATED_KEYS else Statement.NO_GENERATED_KEYS
    }

    using(req.conn.prepareStatement(consBuilder.toString(), keys)) {
      stmt =>

        //row limit
        if (req.maxRows > 0) {
          stmt.setMaxRows(req.maxRows)
        }

        var x = 1
        paramList.foreach(a => {
          log.debug("Parameter %s: %s".format(x, a))
          a.asInstanceOf[AnyRef] match {
            case s: java.lang.Integer => {
              stmt.setInt(x, s)
            }
            case s: String => {
              stmt.setString(x, s)
            }
            case s: Date => {
              stmt.setDate(x, s, Calendar.getInstance(TimeZone.getTimeZone("UTC")))
            }
            case s: Time => {
              stmt.setTime(x, s)
            }
            case s: Timestamp => {
              stmt.setTimestamp(x, s, Calendar.getInstance(TimeZone.getTimeZone("UTC")))
            }
            case s: ByteArrayInputStream => {
              stmt.setBinaryStream(x, s, s.asInstanceOf[ByteArrayInputStream].available())
            }
            case s => {
              stmt.setObject(x, s)
            }
          }
          x += 1
        })


        //run statement
        stmt.execute()
        log.debug("Statement Executed")

        //get auto-insert keys
        val keys = stmt.getGeneratedKeys()
        if (keys != null) {
          var keybuf = new ListBuffer[Any]();
          while (keys.next()) {
            keybuf += keys.getInt(1)
          }
          retval.generatedKeys = keybuf.toList
        }

        //pull results
        log.debug("Pulling Results")
        using(stmt.getResultSet()) {
          ret =>
            if (ret != null) {

              val meta = ret.getMetaData()

              var retbuf = new ListBuffer[Map[String, Any]]()
              while (ret.next) {
                val rMap = scala.collection.mutable.Map[String, Any]()

                for (i <- 1 to meta.getColumnCount()) {
                  rMap += (meta.getColumnLabel(i) -> ret.getObject(i))
                }

                //conversion
                for ((para, arg) <- req.constraints) {
                  if (converters.contains(para)) {
                    for (a <- arg) {
                      log.debug("Running Conversion %s=%s".format(para, a))
                      converters(para).convertRow(rMap, a.toString)
                    }
                  }
                }


                retbuf += Map(rMap.toSeq: _*)
              }

              retval.results = retbuf.toList
              ret.close()
            }
        }
        log.debug("Result Pull Complete")
    }
    retval
  }

}

And finally, the actual data loading function:

def loadData(sets : List[DataModel]) : List[Int] = {

  val log = Logger.getLogger(this.getClass())
  var generatedIds : ListBuffer[Int] = new ListBuffer()


  using(ds.getConnection()) { conn =>
    //Start Transaction
    conn.setAutoCommit(false)
    try{
      var req : DBRequest = null

      sets.foreach( set => {
        req = new DBRequest(conn,"getRelayId")
        req.args = List(set.uniqueId)
        val rid : DBResult = runQuery(req)

        var relayId : java.lang.Integer = null
        if(rid.results.length == 0) {
          req = new DBRequest(conn,"registerRelay")
          req.args = List(set.uniqueId)
          relayId = runQuery(req).generatedKeys(0).asInstanceOf[Int]
        }
        else {
          relayId = rid.results(0)("id").toString().toInt;
        }

        req = new DBRequest(conn,"addDataPackage")


        req.args = List(TimeHelper.timestampToDate(set.timestamp),relayId)
        val packageId = runQuery(req)
                          .generatedKeys(0)
                          .asInstanceOf[Int]
        generatedIds += packageId //We will pull data in GET via packageId

        var sensorId : java.lang.Integer = -1
        set.sensors.foreach( sensor => {
          req = new DBRequest(conn,"getSensorRecord")
          req.args = List(relayId,sensor.uniqueId)
          val sid : DBResult = runQuery(req)
          if(sid.results.length == 0) {
            req = new DBRequest(conn,"addSensorRecord")
            req.args = List(sensor.uniqueId,relayId,sensor.stype,sensor.units)
            sensorId = runQuery(req).generatedKeys(0).toString().toInt
          }
          else {
            sensorId = sid.results(0)("id").toString().toInt;
          }
          req = new DBRequest(conn,"addSensorData")
          req.args = List(packageId,sensorId,sensor.data)
          runQuery(req)
        })
        set.processed.foreach( pro => {
          pro.units match {
            case "NImageU" => {
              req = new DBRequest(conn,"addImage")
              req.args = List(packageId, sensorId, new ByteArrayInputStream(Base64.decodeBase64(pro.data)))
              runQuery(req)
            }
            case "NCounterU" => { /*TODO: Implement Me*/ }
            case _ => { set.errors.append("Unknown Processed Unit Type %s for Sensor %s With Data %s at Time %s"
              .format(pro.units,pro.uniqueId,pro.data,pro.timestamp)) }
          }
        })
        set.errors.foreach( error => {
          req = new DBRequest(conn,"addError")
          req.args = List(packageId,error)
          runQuery(req)
        })

      })
      conn.commit()
    }
    catch {
      case e:Exception => {
        //make sure we unlock the transaction but pass the exception onward
        conn.rollback()
        throw e
      }
    }
    conn.setAutoCommit(true)
    generatedIds.toList
  }
}

I've tried pumping it through a monitor and looking at heap dumps, but I'm not sure where to go from there. I know I could split the bzip2 archive into smaller batches, but if there really is a memory leak I need to hunt it down; I don't want to have to restart cluster members in production every month or so.

Here is the commit of the current copy of the source I'm working with:

https://github.com/sumdog/BigSense/tree/fbd026124e09785bfecc834af6932b9952945fc6

Best Answer

Found the problem. Running everything through VisualVM, I noticed that the thread count stayed constant, but there were tons of JDBC4ResultSet objects hanging around. I thought I was closing all of them, but then I looked closer and spotted this:

//get auto-insert keys
val keys = stmt.getGeneratedKeys()
if (keys != null) {
  var keybuf = new ListBuffer[Any]();
  while (keys.next()) {
    keybuf += keys.getInt(1)
  }
  retval.generatedKeys = keybuf.toList
}

I hadn't realized that stmt.getGeneratedKeys() actually returns a ResultSet! Changing it to go through the Closeable using wrapper fixed the problem:

//get auto-insert keys
using(stmt.getGeneratedKeys()) { keys =>
  if (keys != null) {
    var keybuf = new ListBuffer[Any]();
    while (keys.next()) {
      keybuf += keys.getInt(1)
    }
    retval.generatedKeys = keybuf.toList
  }
}
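
As a side note that is not part of the original answer: on Scala 2.13 and later, the standard library ships the same loan pattern as scala.util.Using, so the fix could also be written as below (the project's own using helper is what the answer actually used). JDBC specifies that getGeneratedKeys() returns an empty ResultSet rather than null, which is why the null check is dropped here.

import scala.collection.mutable.ListBuffer
import scala.util.Using

// Sketch of the same fix using the standard library's loan pattern (Scala 2.13+).
// `stmt` and `retval` are assumed to be the same values as in the code above.
Using.resource(stmt.getGeneratedKeys()) { keys =>
  val keybuf = new ListBuffer[Any]()
  while (keys.next()) {
    keybuf += keys.getInt(1)
  }
  retval.generatedKeys = keybuf.toList
}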

Before:

[Image: Memory profile of the leak]

[Image: Leaking ResultSet objects]

And after:

[Image: Memory usage after the leak fix]

This question and its accepted answer, "mysql - Can't isolate JDBC memory leak in Scala bulk data loading application", are from Stack Overflow: https://stackoverflow.com/questions/20967077/
