ios - App crashes when blurring an image with `CIFilter`


I am regularly receiving crash reports from CoreImage that are caused by blurring an image with this function:

    // Code exactly as in app
    extension UserImage {

        func blurImage(_ radius: CGFloat) -> UIImage? {

            guard let ciImage = CIImage(image: self) else {
                return nil
            }

            let clampedImage = ciImage.clampedToExtent()

            let blurFilter = CIFilter(name: "CIGaussianBlur", parameters: [
                kCIInputImageKey: clampedImage,
                kCIInputRadiusKey: radius])

            var filterImage = blurFilter?.outputImage

            filterImage = filterImage?.cropped(to: ciImage.extent)

            guard let finalImage = filterImage else {
                return nil
            }

            return UIImage(ciImage: finalImage)
        }
    }

    // Code stripped down, contains more in app
    class MyImage {

        var blurredImage: UIImage?

        func setBlurredImage() {
            DispatchQueue.global(qos: DispatchQoS.QoSClass.userInitiated).async {

                let blurredImage = self.getImage().blurImage(100)

                DispatchQueue.main.async {

                    guard let blurredImage = blurredImage else { return }

                    self.blurredImage = blurredImage
                }
            }
        }
    }

According to Crashlytics:
  • The crash only occurs in a small fraction of sessions
  • The crash occurs on various iOS versions, from 11.x to 12.x
  • 0% of the devices were in the background when the crash occurred

I cannot reproduce the crash. The process is:
  • A MyImageView object (a subclass of UIImageView) receives a Notification
  • Sometimes (depending on other logic), a blurred version of the UIImage is created on a background queue via DispatchQueue.global(qos: DispatchQoS.QoSClass.userInitiated).async
  • On the main thread, the object sets the image: self.image = ...

According to the crash log (UIImageView setImage), the app appears to crash after step 3. On the other hand, the CIImage frames in the crash log suggest that the problem lies somewhere in step 2, where CIFilter is used to create the blurred version of the image. (Both observations may fit together: a UIImage created from a CIImage is only rendered when the UIImageView displays it, so the filter work from step 2 actually executes on the main thread during step 3; see frames 15–19 below.) Note: MyImageView is sometimes used inside a UICollectionViewCell.

Crash log:

    EXC_BAD_ACCESS KERN_INVALID_ADDRESS 0x0000000000000000
    Crashed: com.apple.main-thread
    0 CoreImage 0x1c18128c0 CI::Context::recursive_render(CI::TileTask*, CI::Node*, CGRect const&, CI::Node*, bool) + 2388
    1 CoreImage 0x1c18128c0 CI::Context::recursive_render(CI::TileTask*, CI::Node*, CGRect const&, CI::Node*, bool) + 2388
    2 CoreImage 0x1c18122e8 CI::Context::recursive_render(CI::TileTask*, CI::Node*, CGRect const&, CI::Node*, bool) + 892
    3 CoreImage 0x1c18122e8 CI::Context::recursive_render(CI::TileTask*, CI::Node*, CGRect const&, CI::Node*, bool) + 892
    4 CoreImage 0x1c18122e8 CI::Context::recursive_render(CI::TileTask*, CI::Node*, CGRect const&, CI::Node*, bool) + 892
    5 CoreImage 0x1c18122e8 CI::Context::recursive_render(CI::TileTask*, CI::Node*, CGRect const&, CI::Node*, bool) + 892
    6 CoreImage 0x1c18122e8 CI::Context::recursive_render(CI::TileTask*, CI::Node*, CGRect const&, CI::Node*, bool) + 892
    7 CoreImage 0x1c18122e8 CI::Context::recursive_render(CI::TileTask*, CI::Node*, CGRect const&, CI::Node*, bool) + 892
    8 CoreImage 0x1c18122e8 CI::Context::recursive_render(CI::TileTask*, CI::Node*, CGRect const&, CI::Node*, bool) + 892
    9 CoreImage 0x1c18122e8 CI::Context::recursive_render(CI::TileTask*, CI::Node*, CGRect const&, CI::Node*, bool) + 892
    10 CoreImage 0x1c18122e8 CI::Context::recursive_render(CI::TileTask*, CI::Node*, CGRect const&, CI::Node*, bool) + 892
    11 CoreImage 0x1c1812f04 CI::Context::render(CI::ProgramNode*, CGRect const&) + 116
    12 CoreImage 0x1c182ca3c invocation function for block in CI::image_render_to_surface(CI::Context*, CI::Image*, CGRect, CGColorSpace*, __IOSurface*, CGPoint, CI::PixelFormat, CI::RenderDestination const*) + 40
    13 CoreImage 0x1c18300bc CI::recursive_tile(CI::RenderTask*, CI::Context*, CI::RenderDestination const*, char const*, CI::Node*, CGRect const&, CI::PixelFormat, CI::swizzle_info const&, CI::TileTask* (CI::ProgramNode*, CGRect) block_pointer) + 608
    14 CoreImage 0x1c182b740 CI::tile_node_graph(CI::Context*, CI::RenderDestination const*, char const*, CI::Node*, CGRect const&, CI::PixelFormat, CI::swizzle_info const&, CI::TileTask* (CI::ProgramNode*, CGRect) block_pointer) + 396
    15 CoreImage 0x1c182c308 CI::image_render_to_surface(CI::Context*, CI::Image*, CGRect, CGColorSpace*, __IOSurface*, CGPoint, CI::PixelFormat, CI::RenderDestination const*) + 1340
    16 CoreImage 0x1c18781c0 -[CIContext(CIRenderDestination) _startTaskToRender:toDestination:forPrepareRender:error:] + 2488
    17 CoreImage 0x1c18777ec -[CIContext(CIRenderDestination) startTaskToRender:fromRect:toDestination:atPoint:error:] + 140
    18 CoreImage 0x1c17c9e4c -[CIContext render:toIOSurface:bounds:colorSpace:] + 268
    19 UIKitCore 0x1e8f41244 -[UIImageView _updateLayerContentsForCIImageBackedImage:] + 880
    20 UIKitCore 0x1e8f38968 -[UIImageView _setImageViewContents:] + 872
    21 UIKitCore 0x1e8f39fd8 -[UIImageView _updateState] + 664
    22 UIKitCore 0x1e8f79650 +[UIView(Animation) performWithoutAnimation:] + 104
    23 UIKitCore 0x1e8f3ff28 -[UIImageView _updateImageViewForOldImage:newImage:] + 504
    24 UIKitCore 0x1e8f3b0ac -[UIImageView setImage:] + 340
    25 App 0x100482434 MyImageView.updateImageView() (<compiler-generated>)
    26 App 0x10048343c closure #1 in MyImageView.handleNotification(_:) + 281 (MyImageView.swift:281)
    27 App 0x1004f1870 thunk for @escaping @callee_guaranteed () -> () (<compiler-generated>)
    28 libdispatch.dylib 0x1bbbf4a38 _dispatch_call_block_and_release + 24
    29 libdispatch.dylib 0x1bbbf57d4 _dispatch_client_callout + 16
    30 libdispatch.dylib 0x1bbbd59e4 _dispatch_main_queue_callback_4CF$VARIANT$armv81 + 1008
    31 CoreFoundation 0x1bc146c1c __CFRUNLOOP_IS_SERVICING_THE_MAIN_DISPATCH_QUEUE__ + 12
    32 CoreFoundation 0x1bc141b54 __CFRunLoopRun + 1924
    33 CoreFoundation 0x1bc1410b0 CFRunLoopRunSpecific + 436
    34 GraphicsServices 0x1be34179c GSEventRunModal + 104
    35 UIKitCore 0x1e8aef978 UIApplicationMain + 212
    36 App 0x1002a3544 main + 18 (AppDelegate.swift:18)
    37 libdyld.dylib 0x1bbc068e0 start + 4

What could be causing the crash?

Update

This may be related to CIImage memory leak. While profiling, I can see a lot of CIImage memory leaks with the same stack trace as in the crash log (Instruments screenshot omitted).

This may be related to Core Image and memory leak, swift 3.0. I just found out that the images were stored in an in-memory array and onReceiveMemoryWarning was not handled correctly: the array was never cleared. So in some situations the app crashed due to memory pressure. Maybe this solves the problem; I will post an update here.
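
As a reference, a minimal sketch of what clearing such a cache on a memory warning could look like (the ImageStore type and its imageCache array are hypothetical, not from the original app):

    import UIKit

    final class ImageStore {

        // Hypothetical in-memory cache of blurred images.
        private var imageCache = [UIImage]()
        private var observer: NSObjectProtocol?

        init() {
            // Drop cached images whenever the system reports memory pressure.
            observer = NotificationCenter.default.addObserver(
                forName: UIApplication.didReceiveMemoryWarningNotification,
                object: nil,
                queue: .main) { [weak self] _ in
                    self?.imageCache.removeAll()
            }
        }

        deinit {
            if let observer = observer {
                NotificationCenter.default.removeObserver(observer)
            }
        }
    }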

Update 2

It seems I was able to reproduce the crash. Testing on a physical device (iPhone Xs Max) with a 5 MB JPEG image:
  • When the unblurred image is displayed full screen, the app's memory usage is 160 MB.
  • When the blurred image is displayed at 1/4 of the screen size, memory usage is 380 MB.
  • When the blurred image is displayed full screen, memory usage jumps to >1.6 GB, and then the app crashes most of the time with:

    Message from debugger: Terminated due to memory issue



I am surprised that a 5 MB image can cause memory usage of over 1.6 GB for a "simple" blur. Do I have to manually release anything here (CIContext, CIImage, etc.), or is this normal and I have to manually resize the image down to a ~kB size before blurring?
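
A note on the manual-release part of the question: a CIContext is expensive to create and is intended to be reused across renders, so rather than releasing contexts it usually pays to keep a single shared one and render through it explicitly. A minimal sketch of that idea, with a type name of my choosing:

    import CoreImage

    enum BlurRenderer {
        // One shared CIContext for all renders; contexts are expensive
        // to create and can safely be reused.
        static let shared = CIContext()
    }

For comparison, the crash log in update #4 below shows an implicit context being created inside UIImage.draw (+[CIContext contextWithOptions:]); rendering through an explicit shared context (e.g. via createCGImage) avoids that.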

Update 3

Adding multiple image views that display the blurred image increased memory usage by hundreds of MB per added view until the view was removed, even though only one image was visible at a time. Perhaps CIFilter is not meant to be used for displaying images this way, since it occupies far more memory than the rendered image itself would.

So I changed the blur function to render the image in a context, and sure enough, memory now only spikes briefly while the image is being rendered and then falls back to the pre-blur level.

Here is the updated method:
    func blurImage(_ radius: CGFloat) -> UIImage? {

        guard let ciImage = CIImage(image: self) else {
            return nil
        }

        let clampedImage = ciImage.clampedToExtent()

        let blurFilter = CIFilter(name: "CIGaussianBlur", parameters: [
            kCIInputImageKey: clampedImage,
            kCIInputRadiusKey: radius])

        var filteredImage = blurFilter?.outputImage

        filteredImage = filteredImage?.cropped(to: ciImage.extent)

        guard let blurredCiImage = filteredImage else {
            return nil
        }

        // Draw the CIImage-backed UIImage into a bitmap context so the returned
        // image is a plain bitmap instead of a lazily rendered CIImage wrapper.
        let rect = CGRect(origin: CGPoint.zero, size: size)

        UIGraphicsBeginImageContext(rect.size)
        UIImage(ciImage: blurredCiImage).draw(in: rect)
        let blurredImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()

        return blurredImage
    }

Also, thanks to @matt and @FrankSchlegel, who suggested in the comments mitigating the high memory consumption by downsampling the image before blurring; I will do that as well. Surprisingly, even a 300x300 pixel image caused a memory spike of ~500 MB, considering that 2 GB is the limit at which the app gets terminated. I will post an update once the app ships with these changes.

Update 4

I added this code to downsample the image to at most 300x300 px before blurring it:
    func resizeImageWithAspectFit(_ boundSize: CGSize) -> UIImage {

        let ratio = self.size.width / self.size.height
        let maxRatio = boundSize.width / boundSize.height

        var scaleFactor: CGFloat

        if ratio > maxRatio {
            scaleFactor = boundSize.width / self.size.width
        } else {
            scaleFactor = boundSize.height / self.size.height
        }

        let newWidth = self.size.width * scaleFactor
        let newHeight = self.size.height * scaleFactor

        let rect = CGRect(x: 0.0, y: 0.0, width: newWidth, height: newHeight)

        UIGraphicsBeginImageContext(rect.size)
        self.draw(in: rect)
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()

        return newImage!
    }

The crash now looks different, but I am not sure whether it happens during downsampling or while drawing the blurred image as described in update #3, since both use a UIGraphicsImageContext:
    EXC_BAD_ACCESS KERN_INVALID_ADDRESS 0x0000000000000010
    Crashed: com.apple.root.user-initiated-qos
    0 libobjc.A.dylib 0x1ce457530 objc_msgSend + 16
    1 CoreImage 0x1d48773dc -[CIContext initWithOptions:] + 96
    2 CoreImage 0x1d4877358 +[CIContext contextWithOptions:] + 52
    3 UIKitCore 0x1fb7ea794 -[UIImage drawInRect:blendMode:alpha:] + 984
    4 MyApp 0x1005bb478 UIImage.blurImage(_:) (<compiler-generated>)
    5 MyApp 0x100449f58 closure #1 in MyImage.getBlurredImage() + 153 (UIImage+Extension.swift:153)
    6 MyApp 0x1005cda48 thunk for @escaping @callee_guaranteed () -> () (<compiler-generated>)
    7 libdispatch.dylib 0x1ceca4a38 _dispatch_call_block_and_release + 24
    8 libdispatch.dylib 0x1ceca57d4 _dispatch_client_callout + 16
    9 libdispatch.dylib 0x1cec88afc _dispatch_root_queue_drain + 636
    10 libdispatch.dylib 0x1cec89248 _dispatch_worker_thread2 + 116
    11 libsystem_pthread.dylib 0x1cee851b4 _pthread_wqthread + 464
    12 libsystem_pthread.dylib 0x1cee87cd4 start_wqthread + 4
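
As a side note (my suggestion, not part of the original post): on iOS 10+ the UIGraphicsBeginImageContext/UIGraphicsEndImageContext pairs in both methods can be replaced with UIGraphicsImageRenderer, which manages the bitmap context's lifetime itself. A sketch of the resize helper on that basis, keeping the original scale-1 behavior and assuming it lives in the same UIImage extension:

    func resizeImageWithAspectFit(_ boundSize: CGSize) -> UIImage {
        let ratio = size.width / size.height
        let maxRatio = boundSize.width / boundSize.height
        let scaleFactor = ratio > maxRatio
            ? boundSize.width / size.width
            : boundSize.height / size.height
        let newSize = CGSize(width: size.width * scaleFactor,
                             height: size.height * scaleFactor)

        // Match UIGraphicsBeginImageContext's scale-1 behavior; the renderer
        // would otherwise use the device's screen scale by default.
        let format = UIGraphicsImageRendererFormat()
        format.scale = 1

        // UIGraphicsImageRenderer creates and tears down the context itself,
        // so there is no begin/end pair to mismatch.
        return UIGraphicsImageRenderer(size: newSize, format: format).image { _ in
            self.draw(in: CGRect(origin: .zero, size: newSize))
        }
    }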

Here are the threads used to resize and blur the image (blurImage() is the method described in update #3):
    class MyImage {

        var originalImage: UIImage?
        var blurredImage: UIImage?

        // Called on the main thread
        func getBlurredImage() {

            DispatchQueue.global(qos: DispatchQoS.QoSClass.userInitiated).async {

                // Create resized image
                guard let smallImage = self.originalImage?.resizeImageWithAspectFit(CGSize(width: 1000, height: 1000)) else { return }

                // Create blurred image
                let blurredImage = smallImage.blurImage(100)

                DispatchQueue.main.async {

                    self.blurredImage = blurredImage

                    // Notify observers to display `blurredImage` in UIImageView on the main thread
                    NotificationCenter.default.post(name: BlurredImageIsReady, object: nil, userInfo: nil)
                }
            }
        }
    }
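
For completeness, the displaying side might look roughly like this sketch (the question does not show MyImageView; the myImage property is an assumption, though handleNotification(_:) does appear in the first crash log above):

    class MyImageView: UIImageView {

        var myImage: MyImage?

        // Assumed to be registered elsewhere as the observer for BlurredImageIsReady.
        @objc func handleNotification(_ notification: Notification) {
            // The notification is posted on the main thread in getBlurredImage(),
            // so touching UIKit here is safe.
            self.image = myImage?.blurredImage
        }
    }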

Best Answer

I did some benchmarking and found that blurring and displaying very large images is doable when rendering directly into an MTKView, even when the processing happens at the original input size. Here is the whole test code:

    import CoreImage
    import MetalKit
    import UIKit

    class ViewController: UIViewController {

        var device: MTLDevice!
        var commandQueue: MTLCommandQueue!
        var context: CIContext!
        let filter = CIFilter(name: "CIGaussianBlur")!
        let testImage = UIImage(named: "test10")! // 10 MB, 40 MP image
        @IBOutlet weak var metalView: MTKView!

        override func viewDidLoad() {
            super.viewDidLoad()

            self.device = MTLCreateSystemDefaultDevice()
            self.commandQueue = self.device.makeCommandQueue()

            self.context = CIContext(mtlDevice: self.device)

            self.metalView.delegate = self
            self.metalView.device = self.device
            self.metalView.isPaused = true
            self.metalView.enableSetNeedsDisplay = true
            self.metalView.framebufferOnly = false
        }

    }

    extension ViewController: MTKViewDelegate {

        func draw(in view: MTKView) {
            guard let currentDrawable = view.currentDrawable,
                  let commandBuffer = self.commandQueue.makeCommandBuffer() else { return }

            let input = CIImage(image: self.testImage)!

            self.filter.setValue(input.clampedToExtent(), forKey: kCIInputImageKey)
            self.filter.setValue(100.0, forKey: kCIInputRadiusKey)
            let output = self.filter.outputImage!.cropped(to: input.extent)

            let drawableSize = view.drawableSize

            // Scale image to aspect-fit view.
            // NOTE: This is a benchmark scenario. Usually you would scale the image to a reasonable processing size
            // (i.e. close to your output size) _before_ applying expensive filters.
            let scaleX = drawableSize.width / output.extent.width
            let scaleY = drawableSize.height / output.extent.height
            let scale = min(scaleX, scaleY)
            let scaledOutput = output.transformed(by: CGAffineTransform(scaleX: scale, y: scale))

            let destination = CIRenderDestination(mtlTexture: currentDrawable.texture, commandBuffer: commandBuffer)
            // BONUS: You can Quick Look the `task` in Xcode to see what Core Image is actually going to do on the GPU.
            let task = try! self.context.startTask(toRender: scaledOutput, to: destination)

            commandBuffer.present(currentDrawable)
            commandBuffer.commit()

            // BONUS: No need to wait, but you can Quick Look the `info` to see what was actually done during rendering
            // and to get performance metrics, like the actual number of pixels processed.
            DispatchQueue.global(qos: .background).async {
                let info = try! task.waitUntilCompleted()
            }
        }

        func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {}

    }

For the 10 MB test image (40 megapixels!), memory spiked very briefly to 800 MB during rendering, which is to be expected. I even tried a 30 MB (~74 megapixel!!) image, and it went through without problems, using 1.3 GB of memory.

When I scaled the image down to the target size before applying the filter, memory stayed at ~60 MB the whole time. So this is really something you should do in any case. Note, though, that you need to change the radius of the Gaussian blur in that case to achieve the same result.
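
In code, that "scale first, then blur" approach might look like the following sketch; scaling the radius by the same factor (my interpretation of "change the radius") keeps the blur visually comparable to blurring the full-size image:

    import CoreImage

    func scaledThenBlurred(_ input: CIImage, targetWidth: CGFloat, radius: CGFloat) -> CIImage? {
        // Downscale before filtering so the blur runs on far fewer pixels.
        let scale = targetWidth / input.extent.width
        let scaled = input.transformed(by: CGAffineTransform(scaleX: scale, y: scale))

        let filter = CIFilter(name: "CIGaussianBlur", parameters: [
            kCIInputImageKey: scaled.clampedToExtent(),
            // Scale the radius too, so the result matches a blur of the original.
            kCIInputRadiusKey: radius * scale])

        return filter?.outputImage?.cropped(to: scaled.extent)
    }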

If you need the rendered result for more than just display, I guess you could use the createCGImage API of CIContext instead of rendering into the MTKView's drawable and get the same memory usage.
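
A sketch of that variant, reusing one context (createCGImage renders eagerly, so the returned UIImage is a plain bitmap rather than a lazy CIImage-backed one):

    import CoreImage
    import UIKit

    func renderToUIImage(_ image: CIImage, with context: CIContext) -> UIImage? {
        // createCGImage performs the actual render; the result no longer
        // carries the filter graph around.
        guard let cgImage = context.createCGImage(image, from: image.extent) else { return nil }
        return UIImage(cgImage: cgImage)
    }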

I hope this works for your scenario.

The original question, ios - App crashes when blurring an image with `CIFilter`, can be found on Stack Overflow: https://stackoverflow.com/questions/57295035/
