ios - AVCaptureVideoDataOutput and the display size of the image on the screen

I am using AVCaptureVideoDataOutput to retrieve images from the camera and display them on the iPhone screen. When I run the code on an iPhone 6 Plus with iOS 8.4 it works fine: the image fills the whole screen. But on an iPhone 4 with iOS 7.1 and an iPad mini with iOS 8.3, the image does not fill the screen, leaving blank (image-free) strips on the left and right. What could be causing this? My code is shown below.

- (void)viewDidLoad {
    [super viewDidLoad]; // UIViewController subclasses must call super

    dispatch_async(sessionQueue, ^{
        [self setBackgroundRecordingID:UIBackgroundTaskInvalid];

        NSError *error = nil;

        AVCaptureDevice *videoDevice = [RecordViewController deviceWithMediaType:AVMediaTypeVideo preferringPosition:AVCaptureDevicePositionBack];
        AVCaptureDeviceInput *videoDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];

        if (error)
        {
            NSLog(@"%@", error);
        }

        if ([session canAddInput:videoDeviceInput])
        {
            [session addInput:videoDeviceInput];
            [self setVideoDeviceInput:videoDeviceInput];

            dispatch_async(dispatch_get_main_queue(), ^{
                // Why are we dispatching this to the main queue?
                // Because AVCaptureVideoPreviewLayer is the backing layer for AVCamPreviewView and UIView can only be manipulated on the main thread.
                // Note: As an exception to the above rule, it is not necessary to serialize video orientation changes on the AVCaptureVideoPreviewLayer's connection with other session manipulation.
                [[(AVCaptureVideoPreviewLayer *)[[self previewView] layer] connection] setVideoOrientation:(AVCaptureVideoOrientation)[[UIApplication sharedApplication] statusBarOrientation]];
            });
        }

        AVCaptureDevice *audioDevice = [[AVCaptureDevice devicesWithMediaType:AVMediaTypeAudio] firstObject];
        AVCaptureDeviceInput *audioDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:audioDevice error:&error];

        if (error)
        {
            NSLog(@"%@", error);
        }

        if ([session canAddInput:audioDeviceInput])
        {
            [session addInput:audioDeviceInput];
        }

        AVCaptureVideoDataOutput *vid_Output = [[AVCaptureVideoDataOutput alloc] init];
        [vid_Output setSampleBufferDelegate:self queue:im_processingQueue];
        vid_Output.alwaysDiscardsLateVideoFrames = YES;

        // Set the video output to store frames as BGRA (it is supposed to be faster).
        NSDictionary *videoSettings = @{(__bridge NSString *)kCVPixelBufferPixelFormatTypeKey: [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA]};
        [vid_Output setVideoSettings:videoSettings];

        if ([session canAddOutput:vid_Output])
        {
            [session addOutput:vid_Output];
            AVCaptureConnection *connection = [vid_Output connectionWithMediaType:AVMediaTypeVideo];
            if ([connection isVideoStabilizationSupported])
            {
                connection.preferredVideoStabilizationMode = AVCaptureVideoStabilizationModeAuto;
            }
            [self setVid_Output:vid_Output];
        }
    });
}
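
Note that casting [[self previewView] layer] to AVCaptureVideoPreviewLayer only works if the view's backing layer actually is one. A minimal sketch of such a view, following the pattern from Apple's AVCam sample; the class name PreviewView and the session property are assumptions, not shown in the original post:

// PreviewView.h -- hypothetical view whose backing layer is an AVCaptureVideoPreviewLayer.
#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>

@interface PreviewView : UIView
@property (nonatomic) AVCaptureSession *session;
@end

// PreviewView.m
@implementation PreviewView

// Make AVCaptureVideoPreviewLayer the backing layer of this view,
// so casting self.layer to AVCaptureVideoPreviewLayer is safe.
+ (Class)layerClass
{
    return [AVCaptureVideoPreviewLayer class];
}

- (AVCaptureSession *)session
{
    return [(AVCaptureVideoPreviewLayer *)[self layer] session];
}

- (void)setSession:(AVCaptureSession *)session
{
    [(AVCaptureVideoPreviewLayer *)[self layer] setSession:session];
}

@end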

- (void)viewWillAppear:(BOOL)animated
{
    [super viewWillAppear:animated];

    dispatch_async([self sessionQueue], ^{
        [self addObserver:self forKeyPath:@"sessionRunningAndDeviceAuthorized" options:(NSKeyValueObservingOptionOld | NSKeyValueObservingOptionNew) context:SessionRunningAndDeviceAuthorizedContext];
        [self addObserver:self forKeyPath:@"vid_Output.recording" options:(NSKeyValueObservingOptionOld | NSKeyValueObservingOptionNew) context:RecordingContext];
        [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(subjectAreaDidChange:) name:AVCaptureDeviceSubjectAreaDidChangeNotification object:[[self videoDeviceInput] device]];

        __weak RecordViewController *weakSelf = self;
        [self setRuntimeErrorHandlingObserver:[[NSNotificationCenter defaultCenter] addObserverForName:AVCaptureSessionRuntimeErrorNotification object:[self session] queue:nil usingBlock:^(NSNotification *note) {
            RecordViewController *strongSelf = weakSelf;
            dispatch_async([strongSelf sessionQueue], ^{
                // Manually restart the session, since it must have been stopped due to an error.
                [[strongSelf session] startRunning];
            });
        }]];
        [[self session] startRunning];
    });
}
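The observers registered above need a matching teardown, otherwise KVO and the notification center can end up messaging a deallocated controller. A hedged sketch of the counterpart method, mirroring the AVCam pattern and the property names used above (runtimeErrorHandlingObserver is assumed to be the getter for the property set via setRuntimeErrorHandlingObserver:):

- (void)viewDidDisappear:(BOOL)animated
{
    [super viewDidDisappear:animated];

    dispatch_async([self sessionQueue], ^{
        [[self session] stopRunning];

        // Undo the notification registrations from viewWillAppear:.
        [[NSNotificationCenter defaultCenter] removeObserver:self name:AVCaptureDeviceSubjectAreaDidChangeNotification object:[[self videoDeviceInput] device]];
        [[NSNotificationCenter defaultCenter] removeObserver:[self runtimeErrorHandlingObserver]];

        // Undo the KVO registrations from viewWillAppear:.
        [self removeObserver:self forKeyPath:@"sessionRunningAndDeviceAuthorized" context:SessionRunningAndDeviceAuthorizedContext];
        [self removeObserver:self forKeyPath:@"vid_Output.recording" context:RecordingContext];
    });
}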
// Create a UIImage from sample buffer data.
- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0); // Lock the image buffer while reading its pixels

    // Get information about the image.
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // BGRA pixels map to a little-endian 32-bit context with premultiplied alpha first.
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);
    UIImage *image = [UIImage imageWithCGImage:newImage scale:1.0 orientation:UIImageOrientationUp];
    self.videoOrientation = UIImageOrientationUp;

    CGContextRelease(newContext);
    CGImageRelease(newImage);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    // Do not release imageBuffer here: CMSampleBufferGetImageBuffer does not transfer ownership.

    return image;
}
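
For completeness, imageFromSampleBuffer: is presumably called from the video data output's delegate callback, which fires on im_processingQueue. A minimal sketch of that callback; the imageView outlet is a hypothetical stand-in for however the post actually displays the frame:

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Called on im_processingQueue for every captured frame.
    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];

    // UIKit is main-thread only, so dispatch the display work back to the main queue.
    dispatch_async(dispatch_get_main_queue(), ^{
        self.imageView.image = image; // hypothetical UIImageView outlet
    });
}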

Best Answer

Try adding:

[(AVCaptureVideoPreviewLayer *)[[self previewView] layer] setVideoGravity:AVLayerVideoGravityResizeAspectFill];

This makes the preview layer fill the entire screen. The blank strips appear because the capture format on these devices is typically 16:9, while the iPhone 4 screen is 3:2 and the iPad mini screen is 4:3; the layer's default gravity, AVLayerVideoGravityResizeAspect, letterboxes the video to preserve its aspect ratio. (The iPhone 6 Plus screen is itself 16:9, which is why the video happened to fill it.) AVLayerVideoGravityResizeAspectFill instead scales the video until it covers the layer and crops the overflow.
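
A natural place for that call is the existing main-queue dispatch in viewDidLoad, next to the orientation setup, so the gravity is in effect before the session starts delivering frames. A minimal sketch reusing the names from the question's code:

dispatch_async(dispatch_get_main_queue(), ^{
    AVCaptureVideoPreviewLayer *previewLayer = (AVCaptureVideoPreviewLayer *)[[self previewView] layer];

    // Fill the layer and crop the overflow instead of letterboxing (the default AVLayerVideoGravityResizeAspect).
    [previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
    [[previewLayer connection] setVideoOrientation:(AVCaptureVideoOrientation)[[UIApplication sharedApplication] statusBarOrientation]];
});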

Regarding "ios - AVCaptureVideoDataOutput and the display size of the image on the screen", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/33503575/
