
ios - How do I get video frames from the CMSampleBuffer in RPScreenRecorder.shared().startCapture?


I am recording the screen with RPScreenRecorder.shared().startCapture and encoding it to an H.264 video file with AVAssetWriterInput, but that gives me a finished .mp4 directly. While the screen is being recorded I want the H.264 video frame by frame, so that I can stream it. Is there a way to access the sample buffer data coming from RPScreenRecorder.shared().startCapture? Here is the code; with it I get the whole .mp4 file, but I only want the video frames.

import Foundation
import ReplayKit
import AVKit
import AVFoundation   // AVAssetWriter / AVAssetWriterInput
import UIKit          // UIScreen


class ScreenRecorder
{
    var assetWriter: AVAssetWriter!
    var videoInput: AVAssetWriterInput!

    let viewOverlay = WindowUtil()

    let fileNameTxt = "Test"
    let dir = try? FileManager.default.url(for: .documentDirectory,
                                           in: .userDomainMask, appropriateFor: nil, create: true)
    var sampleFileBuffer: String = ""

    // MARK: Screen Recording
    func startRecording(withFileName fileName: String, recordingHandler: @escaping (Error?) -> Void)
    {
        if #available(iOS 11.0, *)
        {
            let fileURL = URL(fileURLWithPath: ReplayFileUtil.filePath(fileName))
            assetWriter = try! AVAssetWriter(outputURL: fileURL, fileType: AVFileType.mp4)
            let videoOutputSettings: [String: Any] = [
                AVVideoCodecKey: AVVideoCodecType.h264,
                AVVideoWidthKey: UIScreen.main.bounds.size.width,
                AVVideoHeightKey: UIScreen.main.bounds.size.height
            ]

            videoInput = AVAssetWriterInput(mediaType: AVMediaType.video, outputSettings: videoOutputSettings)
            videoInput.expectsMediaDataInRealTime = true
            assetWriter.add(videoInput)

            // If the directory was found, we write a file to it and read it back
            let fileURLTxt = dir?.appendingPathComponent(fileNameTxt).appendingPathExtension("txt")


            RPScreenRecorder.shared().startCapture(handler: { (sample, bufferType, error) in
                //print(sample, bufferType, error)

                recordingHandler(error)

                if CMSampleBufferDataIsReady(sample)
                {
                    if self.assetWriter.status == AVAssetWriterStatus.unknown
                    {
                        self.assetWriter.startWriting()
                        self.assetWriter.startSession(atSourceTime: CMSampleBufferGetPresentationTimeStamp(sample))
                    }

                    if self.assetWriter.status == AVAssetWriterStatus.failed {
                        print("Error occurred, status = \(self.assetWriter.status.rawValue), \(String(describing: self.assetWriter.error))")
                        return
                    }

                    if bufferType == .video
                    {
                        if self.videoInput.isReadyForMoreMediaData
                        {
                            self.videoInput.append(sample)
                            // Neither of these casts can work: a CMSampleBuffer is not a
                            // String, and both force casts crash at runtime.
                            // self.sampleFileBuffer = self.videoInput as! String
                            // self.sampleFileBuffer = String(sample as! String)
                            do {
                                try self.sampleFileBuffer.write(to: fileURLTxt!, atomically: true, encoding: .utf8)
                            } catch {
                                print("Failed writing to URL: \(String(describing: fileURLTxt)), Error: " + error.localizedDescription)
                            }
                        }
                    }
                    self.sampleFileBuffer = ""
                }
            }) { (error) in
                recordingHandler(error)
            }
        } else {
            // Fallback on earlier versions
        }
    }

    func stopRecording(handler: @escaping (Error?) -> Void)
    {
        if #available(iOS 11.0, *)
        {
            RPScreenRecorder.shared().stopCapture { (error) in
                handler(error)
                self.assetWriter.finishWriting {
                    print(ReplayFileUtil.fetchAllReplays())
                }
            }
        }
    }
}

Best Answer

In your code, sample is a CMSampleBuffer. Call CMSampleBufferGetImageBuffer() on it to get a CVImageBuffer. To lock the frame buffer, call CVPixelBufferLockBaseAddress(imageBuffer). In my case the image buffer had two planes, Y and UV. Call CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0) to get the address of the Y plane, and call the same API with planeIndex = 1 to get the address of the UV plane.

Once you have the base address of a plane, you can read it as a uint8*. Call the CVPixelBufferGetXXX APIs to get the width, height, and bytes per row of each plane. Don't forget to call CVPixelBufferUnlockBaseAddress when you are finished.
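A minimal sketch of that approach in Swift, assuming the capture delivers a bi-planar YUV (NV12-style) pixel buffer with two planes as described above; copyPlanes is a hypothetical helper name, not a ReplayKit or Core Video API:

import CoreMedia
import CoreVideo
import Foundation

// Copy the Y and UV planes out of one captured frame.
// Assumes a bi-planar YUV pixel buffer (two planes); returns nil otherwise.
func copyPlanes(from sample: CMSampleBuffer) -> (y: Data, uv: Data)? {
    guard let imageBuffer = CMSampleBufferGetImageBuffer(sample) else { return nil }

    // Lock before touching the base addresses; unlock when done.
    CVPixelBufferLockBaseAddress(imageBuffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(imageBuffer, .readOnly) }

    guard CVPixelBufferGetPlaneCount(imageBuffer) >= 2,
          let yBase = CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0),   // Y plane
          let uvBase = CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 1)   // UV plane
    else { return nil }

    // Size each copy from bytes-per-row, which may include row padding.
    let yCount = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0)
        * CVPixelBufferGetHeightOfPlane(imageBuffer, 0)
    let uvCount = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 1)
        * CVPixelBufferGetHeightOfPlane(imageBuffer, 1)

    return (Data(bytes: yBase, count: yCount), Data(bytes: uvBase, count: uvCount))
}

You could call copyPlanes from the startCapture handler whenever bufferType == .video and hand the returned plane data to a streaming pipeline; note that bytes per row can be larger than the plane width because of row padding.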

Regarding "ios - How do I get video frames from the CMSampleBuffer in RPScreenRecorder.shared().startCapture?", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/52383474/
