
ios - Face detection and scaling using an ImageView not quite working due to scaling?


Once again, I'm close, but no banana.

I'm working through some tutorials on face detection. With the code below I'm almost there, but I think I'm missing something about the scaling and about placing the bordered views around the faces on the UIImageView.

The photos in my photo library are all different sizes (for some inexplicable reason), so I believe CIDetector is finding the faces, I'm applying the CGAffineTransforms and so on, and then trying to place the rectangles on the UIImageView. But, as you can see from the image below, they aren't being drawn in the right place.

The UIImageView is 280x500 and set to Scale to Fill.
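To make the scaling concrete, here is a rough sketch of the numbers involved (the photo size below is just an assumed example, not one of my actual photos):

    // Rough sketch of the Scale to Fill math with an assumed 1000 x 1500 px photo
    // shown in the 280 x 500 pt image view (the numbers are examples only).
    CGSize imageSize = CGSizeMake(1000.0, 1500.0);   // assumed photo size in pixels
    CGSize viewSize  = CGSizeMake(280.0, 500.0);     // the image view's size in points

    CGFloat scaleWidth  = viewSize.width  / imageSize.width;   // 0.28
    CGFloat scaleHeight = viewSize.height / imageSize.height;  // ~0.333

    // Because the two factors differ, a face rect of (200, 350, 250, 250) in image
    // coordinates ends up at roughly (56, 117, 70, 83) in the image view.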

Any help figuring out what's going on would be great!

-(void)detectFaces {

    CIContext *context = [CIContext contextWithOptions:nil];
    CIImage *image = [CIImage imageWithCGImage:_imagePhotoChosen.image.CGImage options:nil];
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace context:context options:@{CIDetectorAccuracy : CIDetectorAccuracyHigh}];

    // Flip from Core Image's bottom-left origin to UIKit's top-left origin.
    CGAffineTransform transform = CGAffineTransformMakeScale(1, -1);
    transform = CGAffineTransformTranslate(transform, 0, -_imagePhotoChosen.image.size.height);

    NSArray *features = [detector featuresInImage:image];
    NSLog(@"I have found %lu faces", (long unsigned)features.count);

    for (CIFaceFeature *faceFeature in features)
    {
        const CGRect faceRect = CGRectApplyAffineTransform(faceFeature.bounds, transform);
        NSLog(@"I have the original frame as: %@", NSStringFromCGRect(faceRect));

        // Scale from image pixels to the image view's points.
        const CGFloat scaleWidth = _imagePhotoChosen.frame.size.width / _imagePhotoChosen.image.size.width;
        const CGFloat scaleHeight = _imagePhotoChosen.frame.size.height / _imagePhotoChosen.image.size.height;

        CGRect faceFrame = CGRectMake(faceRect.origin.x * scaleWidth, faceRect.origin.y * scaleHeight, faceRect.size.width * scaleWidth, faceRect.size.height * scaleHeight);

        UIView *faceView = [[UIView alloc] initWithFrame:faceFrame];
        NSLog(@"I have the bounds as: %@", NSStringFromCGRect(faceFrame));
        faceView.layer.borderColor = [[UIColor redColor] CGColor];
        faceView.layer.borderWidth = 1.0f;

        [self.view addSubview:faceView];
    }
}

-(void) imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {

    _imagePhotoChosen.image = info[UIImagePickerControllerOriginalImage];
    //[_imagePhotoChosen sizeToFit];

    [self.view addSubview:_viewChosenPhoto];
    [picker dismissViewControllerAnimated:YES completion:nil];
    [self detectFaces];
}

I've left the NSLog statements in because I've been trying to work out whether the math is wrong, but I can't seem to tell whether it's right or not! And I'm a math teacher, too... sigh...
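As a sanity check on the math, the affine transform in detectFaces is just the usual flip between Core Image's bottom-left origin and UIKit's top-left origin. With assumed numbers (not from an actual photo):

    // Sanity check of the flip with assumed example values.
    CGFloat imageHeight = 1500.0;                              // assumed image height in pixels
    CGRect ciBounds = CGRectMake(200.0, 900.0, 250.0, 250.0);  // assumed CIFaceFeature bounds (bottom-left origin)

    CGAffineTransform flip = CGAffineTransformMakeScale(1, -1);
    flip = CGAffineTransformTranslate(flip, 0, -imageHeight);

    CGRect uikitBounds = CGRectApplyAffineTransform(ciBounds, flip);
    // uikitBounds == {200, 350, 250, 250}; in general, y becomes imageHeight - y - height.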

[Screenshot of the result: the red rectangles are drawn offset from the faces.] Not quite right now, is it?

Thanks again for anything that points me in the right direction.

UPDATE

In response to anyone wondering how I solved it... it really was a silly mistake on my part.

I was adding the subviews to the main view rather than to the UIImageView. So I removed this line:

[self.view addSubview:faceView];

and replaced it with:

[_imagePhotoChosen addSubview:faceView];

and that places the frames in the correct positions. The accepted solution gave me the clue! So the updated code (I've made a few more changes since then) became:

-(void)detectFaces:(UIImage *)selectedImage {

    _imagePhotoChosen.image = selectedImage;

    CIImage *image = [CIImage imageWithCGImage:selectedImage.CGImage options:nil];

    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:@{CIDetectorAccuracy : CIDetectorAccuracyHigh}];

    // Flip from Core Image's bottom-left origin to UIKit's top-left origin.
    CGAffineTransform transform = CGAffineTransformMakeScale(1, -1);
    transform = CGAffineTransformTranslate(transform, 0, -selectedImage.size.height);

    NSArray *features = [detector featuresInImage:image];
    int i = 0;

    for (CIFaceFeature *faceFeature in features)
    {
        const CGRect faceRect = CGRectApplyAffineTransform(faceFeature.bounds, transform);

        // Scale from image pixels to the image view's points.
        const CGFloat scaleWidth = _imagePhotoChosen.frame.size.width / _imagePhotoChosen.image.size.width;
        const CGFloat scaleHeight = _imagePhotoChosen.frame.size.height / _imagePhotoChosen.image.size.height;

        CGRect faceFrame = CGRectMake(faceRect.origin.x * scaleWidth, faceRect.origin.y * scaleHeight, faceRect.size.width * scaleWidth, faceRect.size.height * scaleHeight);

        UIView *faceView = [[UIView alloc] initWithFrame:faceFrame];
        faceView.layer.borderColor = [[UIColor redColor] CGColor];
        faceView.layer.borderWidth = 1.0f;
        faceView.tag = i;  // tag each face view so the tap handler can tell them apart

        // Let the user tap a detected face.
        UITapGestureRecognizer *selectPhotoTap = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(selectPhoto)];
        selectPhotoTap.numberOfTapsRequired = 1;
        selectPhotoTap.numberOfTouchesRequired = 1;
        [faceView addGestureRecognizer:selectPhotoTap];

        // Add the overlay to the image view (not self.view) so its frame is in the right coordinate space.
        [_imagePhotoChosen addSubview:faceView];
        i++;
    }
}
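An equivalent approach, if you would rather keep the overlays on the main view, is to convert the frame out of the image view's coordinate space first. This is just a sketch (untested here), assuming _imagePhotoChosen is a subview of self.view:

    // Alternative sketch: keep adding faceView to self.view, but convert the rect
    // from the image view's coordinate space into self.view's first.
    CGRect frameInMainView = [self.view convertRect:faceFrame fromView:_imagePhotoChosen];
    UIView *faceView = [[UIView alloc] initWithFrame:frameInMainView];
    [self.view addSubview:faceView];

Adding the overlay directly to the image view simply avoids that conversion altogether.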

Best Answer

Actually, what you're doing is completely correct; just replace that line with the one below. Because the face view is added to self.view rather than to the image view, the frame also needs to be offset by the image view's origin:

    CGRect faceFrame = CGRectMake(_imagePhotoChosen.frame.origin.x + faceRect.origin.x * scaleWidth, _imagePhotoChosen.frame.origin.y + faceRect.origin.y * scaleHeight, faceRect.size.width * scaleWidth, faceRect.size.height * scaleHeight);

Regarding "ios - Face detection and scaling using an ImageView not quite working due to scaling?", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/21507406/
