iOS 11 Objective-C - Processing image buffers from ReplayKit with AVAssetWriterInputPixelBufferAdaptor


I'm trying to record my app's screen with ReplayKit, cropping out certain parts of it while the video is being recorded. It isn't going well.

ReplayKit captures the entire screen, so I decided to receive every frame from ReplayKit (as a CMSampleBuffer via startCaptureWithHandler), crop it there, and feed it to the video writer through an AVAssetWriterInputPixelBufferAdaptor. But I'm having trouble making a hard copy of the image buffer before cropping.

Here is my working code that records the whole screen:

// Starts recording with a completion/error handler.
- (void)startRecordingWithHandler:(RPHandler)handler
{
    // Sets up the AVAssetWriter that will generate a video file from the recording.
    self.writer = [AVAssetWriter assetWriterWithURL:self.outputFileURL
                                           fileType:AVFileTypeQuickTimeMovie
                                              error:nil];

    NSDictionary* outputSettings =
    @{
        AVVideoWidthKey  : @(screen.size.width),  // The whole width of the entire screen.
        AVVideoHeightKey : @(screen.size.height), // The whole height of the entire screen.
        AVVideoCodecKey  : AVVideoCodecTypeH264,
    };

    // Sets up the AVAssetWriterInput that will feed ReplayKit's frame buffers to the writer.
    self.videoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                         outputSettings:outputSettings];

    // Lets it know that the input will be real time, coming from ReplayKit.
    [self.videoInput setExpectsMediaDataInRealTime:YES];

    // (Used later when creating the pixel buffer adaptor; see below.)
    NSDictionary* sourcePixelBufferAttributes =
    @{
        (NSString*)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA),
        (NSString*)kCVPixelBufferWidthKey           : @(screen.size.width),
        (NSString*)kCVPixelBufferHeightKey          : @(screen.size.height),
    };

    // Adds the video input to the writer.
    [self.writer addInput:self.videoInput];

    // Sets up ReplayKit itself.
    self.recorder = [RPScreenRecorder sharedRecorder];

    // Arranges the pipeline from ReplayKit to the input.
    RPBufferHandler bufferHandler = ^(CMSampleBufferRef sampleBuffer, RPSampleBufferType bufferType, NSError* error) {
        [self captureSampleBuffer:sampleBuffer withBufferType:bufferType];
    };

    RPHandler errorHandler = ^(NSError* error) {
        if (error) handler(error);
    };

    // Starts ReplayKit's recording session.
    // Sample buffers will be delivered to the `captureSampleBuffer` method.
    [self.recorder startCaptureWithHandler:bufferHandler completionHandler:errorHandler];
}

// Receives a sample buffer from ReplayKit every frame.
- (void)captureSampleBuffer:(CMSampleBufferRef)sampleBuffer withBufferType:(RPSampleBufferType)bufferType
{
    // Uses a queue synchronously so that the writer-starting logic won't be invoked twice.
    dispatch_sync(dispatch_get_main_queue(), ^{
        // Starts the writer if it hasn't started yet. We do this here in order to get the proper source time later.
        if (self.writer.status == AVAssetWriterStatusUnknown) {
            [self.writer startWriting];
            return;
        }

        // Handles a sample buffer received from ReplayKit.
        switch (bufferType) {
            case RPSampleBufferTypeVideo: {
                // Initializes the source time when a video frame buffer is received for the first time.
                // This prevents the output video from starting with blank frames.
                if (!self.startedWriting) {
                    NSLog(@"self.writer startSessionAtSourceTime");
                    [self.writer startSessionAtSourceTime:CMSampleBufferGetPresentationTimeStamp(sampleBuffer)];
                    self.startedWriting = YES;
                }

                // Appends the received video frame buffer to the writer.
                [self.videoInput appendSampleBuffer:sampleBuffer];
                break;
            }
        }
    });
}

// Stops the current recording session and saves the output file to the user's photo album.
- (void)stopRecordingWithHandler:(RPHandler)handler
{
    // Closes the input.
    [self.videoInput markAsFinished];

    // Finishes up the writer.
    [self.writer finishWritingWithCompletionHandler:^{
        handler(self.writer.error);

        // Saves the output video to the user's photo album.
        [[PHPhotoLibrary sharedPhotoLibrary] performChanges:^{
            [PHAssetChangeRequest creationRequestForAssetFromVideoAtFileURL:self.outputFileURL];
        } completionHandler:^(BOOL s, NSError* e) {}];
    }];

    // Stops ReplayKit's recording.
    [self.recorder stopCaptureWithHandler:nil];
}

Every sample buffer from ReplayKit is fed directly to the writer (in the captureSampleBuffer method), so this records the entire screen.

Then I replaced that part with the same logic routed through an AVAssetWriterInputPixelBufferAdaptor, which worked fine:

...
case RPSampleBufferTypeVideo: {
    ... // Initializes the source time.

    // Gets the timestamp of the sample buffer.
    CMTime time = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);

    // Extracts the pixel image buffer from the sample buffer.
    CVPixelBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    // Appends the received sample buffer's image buffer to the writer via the adaptor.
    [self.videoAdaptor appendPixelBuffer:imageBuffer withPresentationTime:time];
    break;
}
...

The adaptor is set up as follows:

NSDictionary* sourcePixelBufferAttributes =
@{
    (NSString*)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA),
    (NSString*)kCVPixelBufferWidthKey           : @(screen.size.width),
    (NSString*)kCVPixelBufferHeightKey          : @(screen.size.height),
};

self.videoAdaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:self.videoInput
                                                                                      sourcePixelBufferAttributes:sourcePixelBufferAttributes];

So the pipeline is running.
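One caveat worth noting: an AVAssetWriterInput can only accept data while it is ready, so each append is normally guarded by its isReadyForMoreMediaData flag; for a real-time source like ReplayKit, dropping the frame when the input is busy is the usual fallback. A minimal sketch, reusing the names from the snippet above:

// Appends only when the input can accept more data; otherwise drops
// the frame, which is acceptable for a real-time source like ReplayKit.
if (self.videoInput.isReadyForMoreMediaData) {
    [self.videoAdaptor appendPixelBuffer:imageBuffer withPresentationTime:time];
}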

Then I made a hard copy of the image buffer in main memory and fed that to the adaptor:

...
case RPSampleBufferTypeVideo: {
    ... // Initializes the source time.

    // Gets the timestamp of the sample buffer.
    CMTime time = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);

    // Extracts the pixel image buffer from the sample buffer.
    CVPixelBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    // Hard-copies the image buffer.
    CVPixelBufferRef copiedImageBuffer = [self copy:imageBuffer];

    // Appends the copied video frame buffer to the writer via the adaptor.
    [self.videoAdaptor appendPixelBuffer:copiedImageBuffer withPresentationTime:time];
    break;
}
...

// Hard-copies the pixel buffer.
- (CVPixelBufferRef)copy:(CVPixelBufferRef)inputBuffer
{
    // Locks the base address of the buffer
    // so that the GPU won't change the data until it is unlocked later.
    CVPixelBufferLockBaseAddress(inputBuffer, 0);

    char* baseAddress = (char*)CVPixelBufferGetBaseAddress(inputBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(inputBuffer);
    size_t width = CVPixelBufferGetWidth(inputBuffer);
    size_t height = CVPixelBufferGetHeight(inputBuffer);
    size_t length = bytesPerRow * height;

    // Mallocs a block of the same length as the input buffer for copying.
    char* outputAddress = (char*)malloc(length);

    // Copies the input buffer's data into the malloced space.
    for (int i = 0; i < length; i++) {
        outputAddress[i] = baseAddress[i];
    }

    // Creates a new image buffer that wraps the copied data.
    CVPixelBufferRef outputBuffer;
    CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                                 width,
                                 height,
                                 kCVPixelFormatType_32BGRA,
                                 outputAddress,
                                 bytesPerRow,
                                 &releaseCallback, // Releases the malloced space.
                                 NULL,
                                 NULL,
                                 &outputBuffer);

    // Unlocks the base address of the input buffer
    // so that the GPU can resume using the data.
    CVPixelBufferUnlockBaseAddress(inputBuffer, 0);

    return outputBuffer;
}

// Releases the malloced space.
void releaseCallback(void *releaseRefCon, const void *baseAddress)
{
    free((void *)baseAddress);
}

This doesn't work — the saved video looks like the screenshot below:

(screenshot of the broken output video)

The bytes per row and the color format appear to be wrong. I've researched and experimented with the following, to no avail:

  • Hard-coding 4 * width as the bytes per row -> "bad access".
  • Using int or double instead of char -> some strange debugger-terminating exceptions.
  • Using other image formats -> "not supported" or access errors.

Moreover, releaseCallback never gets called — memory runs out after about 10 seconds of recording.
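For comparison, a common way to hard-copy a CVPixelBuffer — sketched below under assumptions the question does not confirm — is to let CVPixelBufferCreate allocate the destination, so that Core Video chooses a compatible (possibly padded) bytes-per-row, and to reuse the source buffer's reported pixel format instead of assuming BGRA. The sketch assumes a packed, non-planar format; if ReplayKit actually delivers a planar buffer (e.g. a 420 bi-planar format), each plane has to be copied separately.

// A sketch of a row-by-row copy into a buffer allocated by Core Video.
// Assumes a packed (non-planar) pixel format.
- (CVPixelBufferRef)copyPixelBuffer:(CVPixelBufferRef)inputBuffer
{
    CVPixelBufferLockBaseAddress(inputBuffer, kCVPixelBufferLock_ReadOnly);

    size_t width  = CVPixelBufferGetWidth(inputBuffer);
    size_t height = CVPixelBufferGetHeight(inputBuffer);
    OSType format = CVPixelBufferGetPixelFormatType(inputBuffer); // Don't assume BGRA.

    // Lets Core Video allocate the destination with whatever row stride it needs.
    CVPixelBufferRef outputBuffer = NULL;
    CVPixelBufferCreate(kCFAllocatorDefault, width, height, format, NULL, &outputBuffer);

    CVPixelBufferLockBaseAddress(outputBuffer, 0);

    char*  src            = (char*)CVPixelBufferGetBaseAddress(inputBuffer);
    char*  dst            = (char*)CVPixelBufferGetBaseAddress(outputBuffer);
    size_t srcBytesPerRow = CVPixelBufferGetBytesPerRow(inputBuffer);
    size_t dstBytesPerRow = CVPixelBufferGetBytesPerRow(outputBuffer);

    // The two strides may differ, so copy one row at a time.
    size_t bytesToCopy = MIN(srcBytesPerRow, dstBytesPerRow);
    for (size_t row = 0; row < height; row++) {
        memcpy(dst + row * dstBytesPerRow, src + row * srcBytesPerRow, bytesToCopy);
    }

    CVPixelBufferUnlockBaseAddress(outputBuffer, 0);
    CVPixelBufferUnlockBaseAddress(inputBuffer, kCVPixelBufferLock_ReadOnly);

    // Ownership note: this buffer comes back with a +1 retain count,
    // so the caller must CVPixelBufferRelease it after appending.
    return outputBuffer;
}

Note also that buffers returned by Create-style Core Video functions follow the +1 ownership rule: if the copy handed to appendPixelBuffer:withPresentationTime: is never released with CVPixelBufferRelease, its release callback never fires and memory keeps growing, which matches the symptom described above.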

Judging by the look of this output, what could be the underlying cause?

Best Answer

You can save the video as-is first. Then, using the AVMutableComposition class, you can crop the video by adding instructions and layer instructions to it.
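A minimal sketch of that idea (the instructions and layer instructions live on an AVMutableVideoComposition; sourceURL, cropRect, and exportURL are placeholders, and error handling is omitted): set the composition's render size to the crop rectangle's size and translate the track so the desired region lands at the origin, then export.

// Sketch: cropping an already-recorded video with AVMutableVideoComposition.
AVAsset* asset = [AVAsset assetWithURL:sourceURL];
AVAssetTrack* videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] firstObject];

// Shifts the frame so that cropRect.origin moves to (0, 0).
AVMutableVideoCompositionLayerInstruction* layerInstruction =
    [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoTrack];
[layerInstruction setTransform:CGAffineTransformMakeTranslation(-cropRect.origin.x, -cropRect.origin.y)
                        atTime:kCMTimeZero];

// Applies the layer instruction over the whole duration.
AVMutableVideoCompositionInstruction* instruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, asset.duration);
instruction.layerInstructions = @[ layerInstruction ];

// The render size is what actually crops: pixels outside it are discarded.
AVMutableVideoComposition* videoComposition = [AVMutableVideoComposition videoComposition];
videoComposition.renderSize = cropRect.size;
videoComposition.frameDuration = CMTimeMake(1, 30); // Assumed frame rate.
videoComposition.instructions = @[ instruction ];

// Exports the cropped result to a new file.
AVAssetExportSession* export = [[AVAssetExportSession alloc] initWithAsset:asset
                                                                presetName:AVAssetExportPresetHighestQuality];
export.videoComposition = videoComposition;
export.outputURL = exportURL;
export.outputFileType = AVFileTypeQuickTimeMovie;
[export exportAsynchronouslyWithCompletionHandler:^{ /* Check export.status here. */ }];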

This question and answer are based on a Stack Overflow thread: https://stackoverflow.com/questions/46849528/
