
ios - How to adjust a captured image with an overlay

Reposted · Author: 行者123 · Updated: 2023-11-29 13:15:13

I want to capture an image together with an overlay in real time, much like the CubeDog app does. The complete code is below, following http://www.musicalgeometry.com/?p=1681

I know how to overlay an image on the preview layer and capture a picture. I have looked at Apple's sample code, which saves a red square to the camera roll when a face is detected.

Edit:

I want to save it at 1920 × 1080 from the back camera, and at 1280 × 960. The code below saves the overlay and the image in real time, but the alignment is off and I can't work out why. Can anyone help?

Cheers

[image] This is the preview layer

[image] This is after capture

- (id)init {
    if ((self = [super init])) {
        [self setCaptureSession:[[AVCaptureSession alloc] init]];
        [self.captureSession setSessionPreset:AVCaptureSessionPresetHigh];
    }
    NSLog(@"init called");
    return self;
}

- (void)takePictureWithOverlay:(UIImage *)overlay andRect:(CGRect)overlayRect
{
    // Find out the current orientation and tell the still image output.
    AVCaptureConnection *stillImageConnection = [self.stillImageOutput connectionWithMediaType:AVMediaTypeVideo];

    //UIDeviceOrientation curDeviceOrientation = [[UIDevice currentDevice] orientation];
    //AVCaptureVideoOrientation avcaptureOrientation = [self avOrientationForDeviceOrientation:curDeviceOrientation];

    [stillImageConnection setVideoOrientation:AVCaptureVideoOrientationPortrait];
    [stillImageConnection setVideoScaleAndCropFactor:self.effectiveScale];

    [self.stillImageOutput captureStillImageAsynchronouslyFromConnection:stillImageConnection
        completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
            if (error) {
                [self displayErrorOnMainQueue:error withMessage:@"Take picture failed"];
            }
            else {
                // trivial simple JPEG case
                NSData *jpegData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
                UIImage *image = [[UIImage alloc] initWithData:jpegData];

                CGSize imageSize = [image size];
                CGSize overlaySize = [overlay size];

                UIGraphicsBeginImageContext(imageSize);
                [image drawInRect:CGRectMake(0, 0, imageSize.width, imageSize.height)];

                NSLog(@"aaa %f", [UIScreen mainScreen].applicationFrame.size.width);
                NSLog(@"aaa %f", [UIScreen mainScreen].applicationFrame.size.height);
                NSLog(@"aaa %f", [[UIScreen mainScreen] bounds].size.height);

                CGFloat xScaleFactor = imageSize.width / 320;
                CGFloat yScaleFactor = imageSize.height / 568; // 480 on 3.5-inch screens

                NSLog(@"xScaleFactor size %f", xScaleFactor);
                NSLog(@"yScaleFactor size %f", yScaleFactor);

                [overlay drawInRect:CGRectMake(overlayRect.origin.x * xScaleFactor,
                                               overlayRect.origin.y * yScaleFactor,
                                               overlaySize.width * xScaleFactor,
                                               overlaySize.height * yScaleFactor)]; // rect used in AROverlayViewController was (30,100,260,200)

                UIImage *combinedImage = UIGraphicsGetImageFromCurrentImageContext();
                [self setStillImage:combinedImage];
                UIGraphicsEndImageContext();
            }
            [[NSNotificationCenter defaultCenter] postNotificationName:kImageCapturedSuccessfully object:nil];
        }];
}

Best Answer

I found my answer here: http://developer.apple.com/library/ios/#qa/qa1714/_index.html

// Render the UIView into the CGContextRef using the
// CALayer/-renderInContext: method
- (void)renderView:(UIView *)view inContext:(CGContextRef)context
{
    // -renderInContext: renders in the coordinate space of the layer,
    // so we must first apply the layer's geometry to the graphics context
    CGContextSaveGState(context);
    // Center the context around the view's anchor point
    CGContextTranslateCTM(context, [view center].x, [view center].y);
    // Apply the view's transform about the anchor point
    CGContextConcatCTM(context, [view transform]);
    // Offset by the portion of the bounds left of and above the anchor point
    CGContextTranslateCTM(context,
                          -[view bounds].size.width * [[view layer] anchorPoint].x,
                          -[view bounds].size.height * [[view layer] anchorPoint].y);

    // Render the layer hierarchy to the current context
    [[view layer] renderInContext:context];

    // Restore the context
    CGContextRestoreGState(context);
}

- (void)takePictureWithOverlay:(UIView *)overlay andRect:(CGRect)overlayRect
{
    // Find out the current orientation and tell the still image output.
    self.videoConnection = [self.stillImageOutput connectionWithMediaType:AVMediaTypeVideo];

    //UIDeviceOrientation curDeviceOrientation = [[UIDevice currentDevice] orientation];
    //AVCaptureVideoOrientation avcaptureOrientation = [self avOrientationForDeviceOrientation:curDeviceOrientation];

    [self.videoConnection setVideoOrientation:AVCaptureVideoOrientationPortrait];
    [self.videoConnection setVideoScaleAndCropFactor:self.effectiveScale];

    [self.stillImageOutput captureStillImageAsynchronouslyFromConnection:self.videoConnection
        completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
            if (error) {
                [self displayErrorOnMainQueue:error withMessage:@"Take picture failed"];
            }
            else {
                // trivial simple JPEG case
                NSData *jpegData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
                UIImage *image = [[UIImage alloc] initWithData:jpegData];
                NSLog(@"cgsize of image %@", NSStringFromCGSize(image.size));

                CGSize imageSize = [[UIScreen mainScreen] bounds].size;
                NSLog(@"cgsize %@", NSStringFromCGSize(imageSize));

                UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
                CGContextRef context = UIGraphicsGetCurrentContext();

                // Draw the image returned by the camera sample buffer into the context,
                // into the same sized rectangle as the view that is displayed on screen.
                UIGraphicsPushContext(context);
                [image drawInRect:CGRectMake(0, 0, imageSize.width, imageSize.height)];
                UIGraphicsPopContext();

                // Render the camera overlay view into the graphics context created above.
                [self renderView:overlay inContext:context];

                // Retrieve the screenshot image containing both the camera content and the overlay view.
                UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
                [self setStillImage:screenshot];
                UIGraphicsEndImageContext();
            }
            [[NSNotificationCenter defaultCenter] postNotificationName:kImageCapturedSuccessfully object:nil];
        }];
}

Regarding "ios - How to adjust a captured image with an overlay", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/16007080/
