
ios - Dropped frames when filtering video with CIFilters in AVCaptureVideoDataOutputSampleBufferDelegate

Reposted · Author: 行者123 · Updated: 2023-12-04 10:03:11

I have a very strange situation where AVCaptureVideoDataOutputSampleBufferDelegate drops frames if I use 13 different filter chains. Let me explain:

I have a CameraController set up, nothing special. Here is my delegate method:

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    if !paused {
        if connection.output?.connection(with: .audio) == nil {
            // capture video

            // my try to avoid the "Out of buffers" error, no luck ;(
            lastCapturedBuffer = nil
            let err = CMSampleBufferCreateCopy(allocator: kCFAllocatorDefault, sampleBuffer: sampleBuffer, sampleBufferOut: &lastCapturedBuffer)
            if err == noErr {

            }

            connection.videoOrientation = .portrait

            // getting image
            let pixelBuffer = CMSampleBufferGetImageBuffer(lastCapturedBuffer!)
            // remove if any
            CVPixelBufferLockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))

            // captured - is just a CIImage property
            captured = CIImage(cvPixelBuffer: pixelBuffer!)
            // remove if any
            CVPixelBufferUnlockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
            //CVPixelBufferUnlockBaseAddress(pixelBuffer!, .readOnly)

            // transform image to target resolution
            let srcWidth = CGFloat(captured.extent.width)
            let srcHeight = CGFloat(captured.extent.height)

            let dstWidth: CGFloat = ConstantsManager.shared.k_video_width
            let dstHeight: CGFloat = ConstantsManager.shared.k_video_height

            let scaleX = dstWidth / srcWidth
            let scaleY = dstHeight / srcHeight

            let transform = CGAffineTransform(scaleX: scaleX, y: scaleY)
            captured = captured.transformed(by: transform).cropped(to: CGRect(x: 0, y: 0, width: dstWidth, height: dstHeight))

            // mirror for front camera
            if front {
                var t = CGAffineTransform(scaleX: -1, y: 1)
                t = t.translatedBy(x: -ConstantsManager.shared.k_video_width, y: 0)
                captured = captured.transformed(by: t)
            }

            // video capture logic
            let writable = canWrite()

            if writable, sessionAtSourceTime == nil {
                sessionAtSourceTime = CMSampleBufferGetPresentationTimeStamp(lastCapturedBuffer!)
                videoWriter.startSession(atSourceTime: sessionAtSourceTime!)
            }

            if writable, videoWriterInput.isReadyForMoreMediaData {
                videoWriterInput.append(lastCapturedBuffer!)
            }

            // apply effect in realtime <- here is the problem. If I comment out
            // the next line, the drops stop, but the effect won't be applied
            captured = FilterManager.shared.applyFilterForCamera(inputImage: captured)

            // current frame in case the user wants to save the image as a photo
            self.capturedPhoto = captured

            // send frame to Camcoder view controller
            self.delegate?.didCapturedFrame(frame: captured)
        } else {
            // capture sound
            let writable = canWrite()
            if writable, audioWriterInput.isReadyForMoreMediaData {
                //print("write audio buffer")
                audioWriterInput?.append(lastCapturedBuffer!)
            }
        }
    } else {
        // paused
    }
}

I also implemented the didDrop delegate method; that is how I found out why frames were being dropped:
func captureOutput(_ output: AVCaptureOutput, didDrop sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    print("did drop")
    var mode: CMAttachmentMode = 0
    let reason = CMGetAttachment(sampleBuffer, key: kCMSampleBufferAttachmentKey_DroppedFrameReason, attachmentModeOut: &mode)
    print("reason \(String(describing: reason))") // Optional(OutOfBuffers)
}

So I did it like a pro and just commented out parts of the code to find the problem. Here it is:
captured = FilterManager.shared.applyFilterForCamera(inputImage: captured)

FilterManager is a singleton; here is the function being called:
func applyFilterForCamera(inputImage: CIImage) -> CIImage {
    return currentVsFilter!.apply(sourceImage: inputImage)
}

currentVsFilter is an object of type VSFilter - here is an example:
import Foundation
import AVKit

class TestFilter: CustomFilter {

    let _name = "Тестовый Фильтр"
    let _displayName = "Test Filter"

    var tempImage: CIImage?
    var final: CGImage?

    override func name() -> String {
        return _name
    }

    override func displayName() -> String {
        return _displayName
    }

    override init() {
        super.init()
        print("Test Filter init")

        // setup my custom kernel filter
        self.noise.type = GlitchFilter.GlitchType.allCases[2]
    }

    // this returns a composition for playback using AVPlayer
    override func composition(asset: AVAsset) -> AVMutableVideoComposition {
        let composition = AVMutableVideoComposition(asset: asset, applyingCIFiltersWithHandler: { request in
            let inputImage = request.sourceImage.cropped(to: request.sourceImage.extent)
            DispatchQueue.global(qos: .userInitiated).async {
                let output = self.apply(sourceImage: inputImage, forComposition: true)
                request.finish(with: output, context: nil)
            }
        })
        let size = FilterManager.shared.cropRectForOrientation().size

        composition.renderSize = size
        return composition
    }

    // this returns the actual filtered CIImage, used for both the AVPlayer composition and the realtime camera
    override func apply(sourceImage: CIImage, forComposition: Bool = false) -> CIImage {

        // rendered text
        tempImage = FilterManager.shared.textRenderedImage()

        // some filters chained one by one
        self.screenBlend?.setValue(tempImage, forKey: kCIInputImageKey)
        self.screenBlend?.setValue(sourceImage, forKey: kCIInputBackgroundImageKey)

        self.noise.inputImage = self.screenBlend?.outputImage
        self.noise.inputAmount = CGFloat.random(in: 1.0...3.0)

        // result
        tempImage = self.noise.outputImage

        // correct crop
        let rect = forComposition ? FilterManager.shared.cropRectForOrientation() : FilterManager.shared.cropRect
        final = self.context.createCGImage(tempImage!, from: rect!)

        return CIImage(cgImage: final!)
    }

}

Now, the strangest part: I have 30 VSFilters, and as soon as I reach the 13th one (switching one by one via a UIButton), I get the "Out of buffers" error, namely:
kCMSampleBufferDroppedFrameReason_OutOfBuffers

What I tested:
  • I changed the order of the vsFilters array in the FilterManager singleton - same result
  • I tried switching from the first filter up to the 12th and back - works; but as soon as I switch to a 13th one (any of the 0th through 30th) - error
  • It looks like it can handle only 12 VSFilter objects, as if it retains them somehow, or maybe it's thread-related, I don't know.

The app is made for iOS devices and was tested on an iPhone X running iOS 13.3.1.
It is a video editor app that applies different effects both to the live stream from the camera and to video files from the camera roll.

Maybe someone has experience with this?

Have a nice day

Best, Victor

Edit 1. If I re-initialize the cameraController (AVCaptureSession, input/output devices), it works, but that is an ugly option and it adds a delay when switching filters.

Best Answer

OK, so I finally won this battle. In case anyone else runs into this "OutOfBuffers" issue, here is my solution.

As far as I can tell, CIFilter grabs the CVPixelBuffer and doesn't release it while filtering the image. The strange thing: it doesn't cause a memory leak, so I guess it isn't copying the buffer's contents but holding a strong reference to it. AVCaptureVideoDataOutput vends its frames from a fixed-size buffer pool, so every buffer pinned this way is one the pool can't reuse. As rumor (me) has it, it can only handle 12 such references before the pool runs dry.
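The suspected mechanism can be illustrated outside the app with a deliberately tiny CVPixelBufferPool (this sketch is my own illustration, not code from the app, and uses an artificial 3-buffer pool; the capture output's internal pool behaves analogously): each CIImage created from a vended buffer keeps that buffer alive, so allocation fails once every buffer is pinned.

```swift
import CoreImage
import CoreVideo

// A pool of 64x64 BGRA buffers, capped at 3 via the allocation threshold.
let poolAttrs = [kCVPixelBufferPoolMinimumBufferCountKey as String: 3] as CFDictionary
let bufferAttrs = [
    kCVPixelBufferWidthKey as String: 64,
    kCVPixelBufferHeightKey as String: 64,
    kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
] as CFDictionary

var pool: CVPixelBufferPool?
CVPixelBufferPoolCreate(kCFAllocatorDefault, poolAttrs, bufferAttrs, &pool)

// Pin each vended buffer with a CIImage - the same thing that happens when
// the delegate wraps a capture buffer and the filter chain retains the image.
var pinned: [CIImage] = []
let auxAttrs = [kCVPixelBufferPoolAllocationThresholdKey as String: 3] as CFDictionary
while true {
    var pb: CVPixelBuffer?
    let status = CVPixelBufferPoolCreatePixelBufferWithAuxAttributes(kCFAllocatorDefault, pool!, auxAttrs, &pb)
    // Fails with kCVReturnWouldExceedAllocationThreshold once all buffers are pinned.
    guard status == kCVReturnSuccess, let buffer = pb else { break }
    pinned.append(CIImage(cvPixelBuffer: buffer))
}
print("buffers pinned before exhaustion: \(pinned.count)")
```

Dropping the CIImages (e.g. `pinned.removeAll()`) returns the buffers to the pool, which is why copying the pixels out, as below, fixes the camera case.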

So my approach was to copy the CVPixelBuffer and then use that copy instead of the buffer I got from the AVCaptureVideoDataOutputSampleBufferDelegate didOutput func.

Here is my new code:

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    if !paused {
        //print("camera controller \(id) got frame")

        if connection.output?.connection(with: .audio) == nil {
            // capture video

            connection.videoOrientation = .portrait

            // getting image
            guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

            // this works!
            let copyBuffer = pixelBuffer.copy()

            // captured - is just a CIImage property
            captured = CIImage(cvPixelBuffer: copyBuffer)

            // transform image to target resolution
            let srcWidth = CGFloat(captured.extent.width)
            let srcHeight = CGFloat(captured.extent.height)

            let dstWidth: CGFloat = ConstantsManager.shared.k_video_width
            let dstHeight: CGFloat = ConstantsManager.shared.k_video_height

            let scaleX = dstWidth / srcWidth
            let scaleY = dstHeight / srcHeight

            let transform = CGAffineTransform(scaleX: scaleX, y: scaleY)
            captured = captured.transformed(by: transform).cropped(to: CGRect(x: 0, y: 0, width: dstWidth, height: dstHeight))

            // mirror for front camera
            if front {
                var t = CGAffineTransform(scaleX: -1, y: 1)
                t = t.translatedBy(x: -ConstantsManager.shared.k_video_width, y: 0)
                captured = captured.transformed(by: t)
            }

            // video capture logic
            let writable = canWrite()

            if writable, sessionAtSourceTime == nil {
                sessionAtSourceTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
                videoWriter.startSession(atSourceTime: sessionAtSourceTime!)
            }

            if writable, videoWriterInput.isReadyForMoreMediaData {
                videoWriterInput.append(sampleBuffer)
            }

            self.captured = FilterManager.shared.applyFilterForCamera(inputImage: self.captured)

            // current frame in case the user wants to save the image as a photo
            self.capturedPhoto = captured

            // send frame to Camcoder view controller
            self.delegate?.didCapturedFrame(frame: captured)
        } else {
            // capture sound
            let writable = canWrite()
            if writable, audioWriterInput.isReadyForMoreMediaData {
                //print("write audio buffer")
                audioWriterInput?.append(sampleBuffer)
            }
        }
    } else {
        // paused
        //print("paused camera controller \(id)")
    }
}

And here is the function that copies the buffer:
func copy() -> CVPixelBuffer {
    precondition(CFGetTypeID(self) == CVPixelBufferGetTypeID(), "copy() cannot be called on a non-CVPixelBuffer")

    var _copy: CVPixelBuffer?
    CVPixelBufferCreate(
        kCFAllocatorDefault,
        CVPixelBufferGetWidth(self),
        CVPixelBufferGetHeight(self),
        CVPixelBufferGetPixelFormatType(self),
        nil,
        &_copy)

    guard let copy = _copy else { fatalError() }

    CVPixelBufferLockBaseAddress(self, CVPixelBufferLockFlags.readOnly)
    CVPixelBufferLockBaseAddress(copy, CVPixelBufferLockFlags(rawValue: 0))

    let copyBaseAddress = CVPixelBufferGetBaseAddress(copy)
    let currBaseAddress = CVPixelBufferGetBaseAddress(self)

    print("copy data size: \(CVPixelBufferGetDataSize(copy))")
    print("self data size: \(CVPixelBufferGetDataSize(self))")

    memcpy(copyBaseAddress, currBaseAddress, CVPixelBufferGetDataSize(copy))
    //memcpy(copyBaseAddress, currBaseAddress, CVPixelBufferGetDataSize(self) * 2)

    CVPixelBufferUnlockBaseAddress(copy, CVPixelBufferLockFlags(rawValue: 0))
    CVPixelBufferUnlockBaseAddress(self, CVPixelBufferLockFlags.readOnly)

    return copy
}

I use it as an extension on CVPixelBuffer.
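One caveat about the single memcpy above: CVPixelBufferGetDataSize can differ between the source and the freshly created copy when their bytes-per-row values don't match, so the copy may read too much or skew rows. A safer sketch (my own variant, not from the answer; it assumes a packed single-plane format such as BGRA, while planar formats like the camera's default 420v would need a per-plane loop) copies row by row:

```swift
import CoreVideo

// Row-wise copy of a packed (non-planar) CVPixelBuffer.
// Copies only min(srcStride, dstStride) bytes per row, so differing
// row padding between the two buffers can't cause an over-read.
func copyPixelBufferRowwise(_ src: CVPixelBuffer) -> CVPixelBuffer? {
    var dst: CVPixelBuffer?
    CVPixelBufferCreate(kCFAllocatorDefault,
                        CVPixelBufferGetWidth(src),
                        CVPixelBufferGetHeight(src),
                        CVPixelBufferGetPixelFormatType(src),
                        nil,
                        &dst)
    guard let copy = dst else { return nil }

    CVPixelBufferLockBaseAddress(src, .readOnly)
    CVPixelBufferLockBaseAddress(copy, [])
    defer {
        CVPixelBufferUnlockBaseAddress(copy, [])
        CVPixelBufferUnlockBaseAddress(src, .readOnly)
    }

    guard let srcBase = CVPixelBufferGetBaseAddress(src),
          let dstBase = CVPixelBufferGetBaseAddress(copy) else { return nil }

    let srcStride = CVPixelBufferGetBytesPerRow(src)
    let dstStride = CVPixelBufferGetBytesPerRow(copy)
    let rowBytes = min(srcStride, dstStride) // only the valid bytes of each row

    for row in 0..<CVPixelBufferGetHeight(src) {
        memcpy(dstBase + row * dstStride, srcBase + row * srcStride, rowBytes)
    }
    return copy
}
```

In practice the copy created from the same width/height/format usually gets the same stride, which is why the memcpy version works for the camera's buffers; the row-wise form just removes that assumption.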

I hope this helps anyone with a similar problem.

Best, Victor

Regarding ios - Dropped frames when filtering video with CIFilters in AVCaptureVideoDataOutputSampleBufferDelegate, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/61730352/
