
ios - Screen capture including an AVCaptureVideoPreviewLayer with overlay buttons


I am using a screen recorder to record the screen. It works fine when the iPhone screen is filled with ordinary views. But when an AVCaptureVideoPreviewLayer with overlay buttons is shown, the saved screen-capture video shows the overlay buttons without the AVCaptureVideoPreviewLayer content. I used this tutorial to add the overlay. How can I fix this?

Best Answer

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    @autoreleasepool {

        if ([connection isVideoOrientationSupported])
            [connection setVideoOrientation:[self cameraOrientation]];

        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        /* Lock the image buffer */
        CVPixelBufferLockBaseAddress(imageBuffer, 0);
        /* Get information about the image */
        uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);

        size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
        size_t width = CVPixelBufferGetWidth(imageBuffer);
        size_t height = CVPixelBufferGetHeight(imageBuffer);

        /* Create a CGImageRef from the CVImageBufferRef */
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace,
                                                        kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
        CGImageRef newImage = CGBitmapContextCreateImage(newContext);

        /* Unlock the buffer and release the context and color space */
        CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
        CGContextRelease(newContext);
        CGColorSpaceRelease(colorSpace);

        /* Oriented copy (not used below); image1 is the frame that gets displayed */
        UIImage *image = [UIImage imageWithCGImage:newImage scale:1.0 orientation:UIImageOrientationRight];
        image1 = [UIImage imageWithCGImage:newImage];

        /* Release the CGImageRef */
        CGImageRelease(newImage);

        /* Show the camera frame in an image view that sits behind the overlay buttons,
           so a later renderInContext: snapshot captures both */
        dispatch_sync(dispatch_get_main_queue(), ^{
            [self.imageView setImage:image1];
        });
    }
}
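
The delegate method above only fires if the capture session has an AVCaptureVideoDataOutput attached with this object as its sample-buffer delegate. The answer does not show that setup, so the following is a minimal sketch of how it might look; the captureSession property, the queue label, and the setupCaptureSession method name are assumptions, not taken from the original code.

// Minimal sketch (assumed setup, not part of the original answer):
// wire the camera to the didOutputSampleBuffer: delegate above.
- (void)setupCaptureSession
{
    self.captureSession = [[AVCaptureSession alloc] init];
    self.captureSession.sessionPreset = AVCaptureSessionPresetMedium;

    AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSError *error = nil;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
    if (input) {
        [self.captureSession addInput:input];
    }

    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    // BGRA matches the CGBitmapContextCreate flags used in the delegate above
    output.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
    [output setSampleBufferDelegate:self
                              queue:dispatch_queue_create("video.frames", DISPATCH_QUEUE_SERIAL)];
    [self.captureSession addOutput:output];

    [self.captureSession startRunning];
}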

Run writeSample: from an NSTimer:

- (void)writeSample:(NSTimer *)_timer {

    if (assetWriterInput.readyForMoreMediaData) {
        @autoreleasepool {
            CVReturn cvErr = kCVReturnSuccess;

            // Grab a screenshot of the container view (the image view showing the
            // camera frames plus the overlay buttons on top of it)
            UIGraphicsBeginImageContext(baseViewOne.frame.size);
            [[baseViewOne layer] renderInContext:UIGraphicsGetCurrentContext()];
            screenshota = UIGraphicsGetImageFromCurrentImageContext();
            UIGraphicsEndImageContext();

            CGImageRef image = (CGImageRef)[screenshota CGImage];

            // Prepare the pixel buffer from the screenshot's raw bytes
            CVPixelBufferRef pixelBuffer = NULL;
            CFDataRef imageData = CGDataProviderCopyData(CGImageGetDataProvider(image));
            cvErr = CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                                                 baseViewOne.frame.size.width, baseViewOne.frame.size.height,
                                                 kCVPixelFormatType_32BGRA,
                                                 (void *)CFDataGetBytePtr(imageData),
                                                 CGImageGetBytesPerRow(image),
                                                 NULL,
                                                 NULL,
                                                 NULL,
                                                 &pixelBuffer);

            // Calculate the presentation time from the wall clock
            CMTime presentationTime;
            CFAbsoluteTime thisFrameWallClockTime = CFAbsoluteTimeGetCurrent();
            elapsedTime = thisFrameWallClockTime - (firstFrameWallClockTime + pausedFrameTime);
            presentationTime = CMTimeMake(elapsedTime * TIME_SCALE, TIME_SCALE);

            BOOL appended = [assetWriterPixelBufferAdaptor appendPixelBuffer:pixelBuffer
                                                        withPresentationTime:presentationTime];

            // Release the buffer and the copied image data whether or not the
            // append succeeded, to avoid leaking them
            CVPixelBufferRelease(pixelBuffer);
            CFRelease(imageData);

            if (!appended) {
                [self stopRecording];
            }
        }
    }
}
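
The answer also does not show how the NSTimer or the writer objects referenced in writeSample: (assetWriterInput, assetWriterPixelBufferAdaptor, firstFrameWallClockTime) are created. A minimal sketch of that setup is below; the assetWriter and writeTimer properties, the output URL, the 30 fps interval, the H.264 settings, and the TIME_SCALE value are assumptions.

// Minimal sketch (assumed setup, not part of the original answer):
// create the asset writer and drive writeSample: with an NSTimer.
// #define TIME_SCALE 600   // assumed; must match the value used in writeSample:
- (void)startRecording
{
    NSURL *outputURL = [NSURL fileURLWithPath:
        [NSTemporaryDirectory() stringByAppendingPathComponent:@"capture.mov"]];

    NSError *error = nil;
    self.assetWriter = [[AVAssetWriter alloc] initWithURL:outputURL
                                                 fileType:AVFileTypeQuickTimeMovie
                                                    error:&error];

    NSDictionary *settings = @{ AVVideoCodecKey  : AVVideoCodecH264,
                                AVVideoWidthKey  : @(baseViewOne.frame.size.width),
                                AVVideoHeightKey : @(baseViewOne.frame.size.height) };
    assetWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                          outputSettings:settings];
    assetWriterInput.expectsMediaDataInRealTime = YES;
    [self.assetWriter addInput:assetWriterInput];

    assetWriterPixelBufferAdaptor =
        [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:assetWriterInput
                                                                          sourcePixelBufferAttributes:nil];

    [self.assetWriter startWriting];
    [self.assetWriter startSessionAtSourceTime:kCMTimeZero];

    firstFrameWallClockTime = CFAbsoluteTimeGetCurrent();
    pausedFrameTime = 0;

    // Fire writeSample: roughly 30 times per second on the main run loop
    self.writeTimer = [NSTimer scheduledTimerWithTimeInterval:1.0 / 30.0
                                                       target:self
                                                     selector:@selector(writeSample:)
                                                     userInfo:nil
                                                      repeats:YES];
}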

Regarding ios - Screen capture including an AVCaptureVideoPreviewLayer with overlay buttons, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/19785745/
