
swift - Recording of a Metal View is slow because of the texture.getBytes function - Swift

Reposted · Author: 行者123 · Updated: 2023-12-04 03:31:15

I am using this post to record a custom Metal View, but I am running into some problems. When I start recording, I drop from 60 fps to ~20 fps on an iPhone 12 Pro Max. After profiling, the function that slows everything down is texture.getBytes, because it copies the buffer from the GPU back to the CPU.

Another problem, and I'm not sure whether it is a consequence of the first, is that the video and audio are out of sync. I'm not sure whether I should go down the semaphore route to solve this, or whether there is any other potential workaround.

In my case, the texture is as large as the screen, since I create it from the camera stream and then process it through a couple of CIFilters. I'm not sure whether the problem is that it is simply too big, so that getBytes cannot keep up with textures of this size in real time.

If I had to set priorities, my first priority would be to fix the desynchronization between audio and video. Any ideas would be very helpful.
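On the sync question specifically, one common cause of A/V drift is timestamping the video frames with CACurrentMediaTime() (as the code below does) while the audio samples carry the capture device's own presentation timestamps. A hedged sketch of driving the writer from the camera's CMSampleBuffer timestamps instead; the type and method names here (SyncedWriter, appendFrame) are illustrative, not part of the original code:

```swift
import AVFoundation
import CoreMedia

// Pure helper: express a time in seconds as the same (value, timescale)
// rational representation that CMTime uses.
func rationalTime(seconds: Double, timescale: Int32) -> (value: Int64, timescale: Int32) {
    (Int64((seconds * Double(timescale)).rounded()), timescale)
}

// Sketch: timestamp recorded frames with the capture pipeline's own clock
// so audio and video share a single timebase.
final class SyncedWriter {
    private var firstVideoPTS: CMTime?
    private let assetWriter: AVAssetWriter
    private let pixelBufferInput: AVAssetWriterInputPixelBufferAdaptor

    init(writer: AVAssetWriter, input: AVAssetWriterInputPixelBufferAdaptor) {
        self.assetWriter = writer
        self.pixelBufferInput = input
    }

    // Called from captureOutput(_:didOutput:from:) with the video sample buffer.
    func appendFrame(_ pixelBuffer: CVPixelBuffer, sampleBuffer: CMSampleBuffer) {
        // Use the sample buffer's own PTS, not CACurrentMediaTime().
        let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        if firstVideoPTS == nil {
            firstVideoPTS = pts
            // Start the session at the first frame's PTS so audio written
            // against the same clock lines up automatically.
            assetWriter.startSession(atSourceTime: pts)
        }
        pixelBufferInput.append(pixelBuffer, withPresentationTime: pts)
    }
}
```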

Here is the code:

import AVFoundation
import Metal
import QuartzCore

class MetalVideoRecorder {
    var isRecording = false
    var recordingStartTime = TimeInterval(0)

    private var assetWriter: AVAssetWriter
    private var assetWriterVideoInput: AVAssetWriterInput
    private var assetWriterPixelBufferInput: AVAssetWriterInputPixelBufferAdaptor

    init?(outputURL url: URL, size: CGSize) {
        do {
            assetWriter = try AVAssetWriter(outputURL: url, fileType: AVFileType.m4v)
        } catch {
            return nil
        }

        let outputSettings: [String: Any] = [ AVVideoCodecKey : AVVideoCodecType.h264,
                                              AVVideoWidthKey : size.width,
                                              AVVideoHeightKey : size.height ]

        assetWriterVideoInput = AVAssetWriterInput(mediaType: AVMediaType.video, outputSettings: outputSettings)
        assetWriterVideoInput.expectsMediaDataInRealTime = true

        let sourcePixelBufferAttributes: [String: Any] = [
            kCVPixelBufferPixelFormatTypeKey as String : kCVPixelFormatType_32BGRA,
            kCVPixelBufferWidthKey as String : size.width,
            kCVPixelBufferHeightKey as String : size.height ]

        assetWriterPixelBufferInput = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: assetWriterVideoInput,
                                                                           sourcePixelBufferAttributes: sourcePixelBufferAttributes)

        assetWriter.add(assetWriterVideoInput)
    }

    func startRecording() {
        assetWriter.startWriting()
        assetWriter.startSession(atSourceTime: CMTime.zero)

        recordingStartTime = CACurrentMediaTime()
        isRecording = true
    }

    func endRecording(_ completionHandler: @escaping () -> ()) {
        isRecording = false

        assetWriterVideoInput.markAsFinished()
        assetWriter.finishWriting(completionHandler: completionHandler)
    }

    func writeFrame(forTexture texture: MTLTexture) {
        if !isRecording {
            return
        }

        // Busy-wait until the input can accept another frame; this blocks the
        // calling thread.
        while !assetWriterVideoInput.isReadyForMoreMediaData {}

        guard let pixelBufferPool = assetWriterPixelBufferInput.pixelBufferPool else {
            print("Pixel buffer asset writer input did not have a pixel buffer pool available; cannot retrieve frame")
            return
        }

        var maybePixelBuffer: CVPixelBuffer? = nil
        let status = CVPixelBufferPoolCreatePixelBuffer(nil, pixelBufferPool, &maybePixelBuffer)
        if status != kCVReturnSuccess {
            print("Could not get pixel buffer from asset writer input; dropping frame...")
            return
        }

        guard let pixelBuffer = maybePixelBuffer else { return }

        CVPixelBufferLockBaseAddress(pixelBuffer, [])
        let pixelBufferBytes = CVPixelBufferGetBaseAddress(pixelBuffer)!

        // Use the bytes per row value from the pixel buffer since its stride may
        // be rounded up to be 16-byte aligned
        let bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer)
        let region = MTLRegionMake2D(0, 0, texture.width, texture.height)

        // This synchronous GPU-to-CPU copy is the profiled hot spot.
        texture.getBytes(pixelBufferBytes, bytesPerRow: bytesPerRow, from: region, mipmapLevel: 0)

        let frameTime = CACurrentMediaTime() - recordingStartTime
        let presentationTime = CMTimeMakeWithSeconds(frameTime, preferredTimescale: 240)
        assetWriterPixelBufferInput.append(pixelBuffer, withPresentationTime: presentationTime)

        CVPixelBufferUnlockBaseAddress(pixelBuffer, [])
    }
}
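One way to take texture.getBytes off the frame's critical path is to blit the texture into a CPU-accessible MTLBuffer on the GPU timeline and copy the bytes out from the command buffer's completion handler, instead of stalling the render loop with a synchronous readback. A minimal sketch of that pattern, assuming a 4-byte-per-pixel BGRA texture; asyncWriteFrame, deliver, and the 256-byte row alignment are assumptions for illustration, not part of the original code:

```swift
import Metal

// Round bytes-per-row up to a given alignment, as required when blitting
// a texture into a linear buffer.
func alignedBytesPerRow(width: Int, bytesPerPixel: Int, alignment: Int) -> Int {
    let raw = width * bytesPerPixel
    return ((raw + alignment - 1) / alignment) * alignment
}

// Sketch: copy the texture into a shared buffer on the GPU, then hand the
// bytes to the writer once the GPU has finished, without blocking the
// render loop.
func asyncWriteFrame(texture: MTLTexture,
                     commandBuffer: MTLCommandBuffer,
                     device: MTLDevice,
                     deliver: @escaping (UnsafeRawPointer, Int) -> Void) {
    // 256 is a common (device-dependent) blit row-alignment requirement.
    let bytesPerRow = alignedBytesPerRow(width: texture.width,
                                         bytesPerPixel: 4, alignment: 256)
    guard let readback = device.makeBuffer(length: bytesPerRow * texture.height,
                                           options: .storageModeShared),
          let blit = commandBuffer.makeBlitCommandEncoder() else { return }

    blit.copy(from: texture,
              sourceSlice: 0, sourceLevel: 0,
              sourceOrigin: MTLOrigin(x: 0, y: 0, z: 0),
              sourceSize: MTLSize(width: texture.width, height: texture.height, depth: 1),
              to: readback,
              destinationOffset: 0,
              destinationBytesPerRow: bytesPerRow,
              destinationBytesPerImage: bytesPerRow * texture.height)
    blit.endEncoding()

    // Runs after the GPU finishes; copy into the CVPixelBuffer here instead
    // of calling texture.getBytes on the draw thread.
    commandBuffer.addCompletedHandler { _ in
        deliver(readback.contents(), bytesPerRow)
    }
}
```

Note that the delivered rows are padded to the aligned stride, so the writer side must copy row by row if the CVPixelBuffer's own bytes-per-row differs.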

Best answer

Unlike OpenGL, Metal has no concept of a default framebuffer. Instead, it uses a technique called a swap chain: a collection of buffers used for displaying frames to the user. Each time the app presents a new frame for display, the first buffer in the swap chain takes the place of the displayed buffer.


When a command queue schedules a command buffer for execution, the drawable tracks all render or write requests on itself in that command buffer. The operating system doesn't present the drawable onscreen until the commands have finished executing. By asking the command buffer to present the drawable, you guarantee that presentation happens after the command queue has scheduled this command buffer. Don't wait for the command buffer to finish executing before registering the drawable's presentation.

A layer reuses a drawable only when it is not onscreen and there are no strong references to it. Drawables live in a limited, reusable resource pool, and a drawable may or may not be available when you request one. If none is available, Core Animation blocks your calling thread until a new drawable becomes available, usually at the next display refresh interval.

In your case, the frame recorder holds a reference to your drawable for too long, and that is what causes the dropped frames. To avoid this, you should implement a triple-buffering model. Considering processor idle time, memory overhead, and frame latency, adding a third dynamic data buffer is the ideal solution.
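The triple-buffering model above is usually driven by a counting semaphore: the CPU may get at most three frames ahead, cycling through a ring of dynamic buffers so it never writes a buffer the GPU is still reading. A minimal sketch of the bookkeeping; beginFrame and endFrame are illustrative names, and in Metal endFrame would be called from commandBuffer.addCompletedHandler:

```swift
import Dispatch

// At most three frames in flight at once.
let maxFramesInFlight = 3
let frameSemaphore = DispatchSemaphore(value: maxFramesInFlight)
var bufferIndex = 0

// Advance to the next slot in the ring of dynamic buffers.
func nextBufferIndex(_ current: Int) -> Int {
    (current + 1) % maxFramesInFlight
}

// Call at the top of the draw loop: blocks only when all three buffers
// are still owned by the GPU, then claims the next ring slot.
func beginFrame() -> Int {
    frameSemaphore.wait()
    bufferIndex = nextBufferIndex(bufferIndex)
    return bufferIndex
}

// Call when the GPU has finished with the frame
// (e.g. from commandBuffer.addCompletedHandler), releasing its slot.
func endFrame() {
    frameSemaphore.signal()
}
```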


Regarding swift - Recording of a Metal View is slow because of the texture.getBytes function - Swift, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/66769266/
