
ios - AVFoundation - Detect face and crop the face region?


As the title says, I want to detect a face and then crop just the face region. This is what I have so far:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputMetadataObjects:(NSArray *)metadataObjects fromConnection:(AVCaptureConnection *)connection {

    for (AVMetadataObject *face in metadataObjects) {
        if ([face.type isEqualToString:AVMetadataObjectTypeFace]) {

            AVCaptureConnection *stillConnection = [_stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
            stillConnection.videoOrientation = [self videoOrientationFromCurrentDeviceOrientation];
            [_stillImageOutput captureStillImageAsynchronouslyFromConnection:stillConnection completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
                if (error) {
                    NSLog(@"There was a problem");
                    return;
                }

                NSData *jpegData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
                UIImage *stillImage = [UIImage imageWithData:jpegData];

                CIDetector *faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace context:[CIContext contextWithOptions:nil] options:nil];
                CIImage *ciimage = [CIImage imageWithData:jpegData];

                NSArray *features = [faceDetector featuresInImage:ciimage];
                self.captureImageView.image = stillImage;

                for (CIFeature *feature in features) {
                    if ([feature isKindOfClass:[CIFaceFeature class]]) {
                        CIFaceFeature *faceFeature = (CIFaceFeature *)feature;

                        CGImageRef imageRef = CGImageCreateWithImageInRect([stillImage CGImage], faceFeature.bounds);
                        self.detectedFaceImageView.image = [UIImage imageWithCGImage:imageRef];
                        CGImageRelease(imageRef);
                    }
                }
                //[_session stopRunning];
            }];
        }
    }
}

This code partially works: it detects the face, but it does not crop the face portion. It always crops the wrong region (it does crop something, just not the face). I've been searching Stack Overflow for answers and trying one thing after another, to no avail.
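
A likely reason the crop lands in the wrong place: CIFaceFeature.bounds is reported in Core Image coordinates, whose origin is the bottom-left corner of the image, while CGImageCreateWithImageInRect expects a rect with a top-left origin, so the Y axis has to be flipped before cropping. Below is a minimal sketch of that conversion as a hypothetical helper, assuming the captured CGImage is already stored upright (a rotated JPEG would need additional remapping):

// Hypothetical helper: flip the face rect from Core Image coordinates
// (origin bottom-left) to CGImage coordinates (origin top-left) before cropping.
// Assumes the CGImage is stored upright; an orientation-rotated JPEG needs extra work.
- (UIImage *)croppedFaceFromImage:(UIImage *)stillImage faceBounds:(CGRect)faceBounds {
    CGImageRef cgImage = stillImage.CGImage;
    CGFloat imageHeight = (CGFloat)CGImageGetHeight(cgImage);

    CGRect cropRect = faceBounds;
    cropRect.origin.y = imageHeight - CGRectGetMaxY(faceBounds); // flip the Y axis

    CGImageRef faceRef = CGImageCreateWithImageInRect(cgImage, cropRect);
    UIImage *faceImage = [UIImage imageWithCGImage:faceRef];
    CGImageRelease(faceRef);
    return faceImage;
}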

Best answer

Here is the answer:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {

    // when do we start face detection
    if (!_canStartDetection) return;

    CIImage *ciimage = [CIImage imageWithCVPixelBuffer:CMSampleBufferGetImageBuffer(sampleBuffer)];
    NSArray *features = [_faceDetector featuresInImage:ciimage options:nil];

    // find face feature
    for (CIFeature *feature in features) {

        // if not face feature ignore
        if (![feature isKindOfClass:[CIFaceFeature class]]) continue;

        // face detected
        _canStartDetection = NO;
        CIFaceFeature *faceFeature = (CIFaceFeature *)feature;

        // crop detected face
        CIVector *cropRect = [CIVector vectorWithCGRect:faceFeature.bounds];
        CIFilter *cropFilter = [CIFilter filterWithName:@"CICrop"];
        [cropFilter setValue:ciimage forKey:@"inputImage"];
        [cropFilter setValue:cropRect forKey:@"inputRectangle"];
        CIImage *croppedImage = [cropFilter valueForKey:@"outputImage"];
        UIImage *stillImage = [UIImage imageWithCIImage:croppedImage];
    }
}
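
One caveat about the last line: a UIImage created with imageWithCIImage: has no backing CGImage, so assigning it directly to a UIImageView can behave inconsistently. Below is a small sketch of rendering the crop through a CIContext first; the detectedFaceImageView property is carried over from the question, and creating the context ad hoc here is an assumption (in real code you would reuse a single CIContext):

// Sketch: render the cropped CIImage into a CGImage-backed UIImage and display it.
CIContext *ciContext = [CIContext contextWithOptions:nil]; // assumption: created ad hoc, reuse one in practice
CGImageRef cgFace = [ciContext createCGImage:croppedImage fromRect:croppedImage.extent];
UIImage *faceImage = [UIImage imageWithCGImage:cgFace];
CGImageRelease(cgFace);

dispatch_async(dispatch_get_main_queue(), ^{
    self.detectedFaceImageView.image = faceImage; // UI updates on the main queue
});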

Note that this time I used AVCaptureVideoDataOutput; here is the setup code:

// set output for face frames
AVCaptureVideoDataOutput *output2 = [[AVCaptureVideoDataOutput alloc] init];
[_session addOutput:output2];
output2.videoSettings = @{(NSString*)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)};
output2.alwaysDiscardsLateVideoFrames = YES;
dispatch_queue_t queue = dispatch_queue_create("com.myapp.faceDetectionQueueSerial", DISPATCH_QUEUE_SERIAL);
[output2 setSampleBufferDelegate:self queue:queue];
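
The delegate method above also relies on a _faceDetector ivar and a _canStartDetection flag whose creation the answer does not show. A minimal sketch of one possible setup follows (the names match the answer; the accuracy option is an assumption), assuming the class adopts AVCaptureVideoDataOutputSampleBufferDelegate:

// Assumed setup for the ivars used in captureOutput:didOutputSampleBuffer:fromConnection:
_faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace
                                   context:[CIContext contextWithOptions:nil]
                                   options:@{CIDetectorAccuracy : CIDetectorAccuracyLow}];
_canStartDetection = YES; // flip this whenever the next detected face should be captured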

Regarding "ios - AVFoundation - Detect face and crop the face region?", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/24083110/
