
ios - How to capture an image using the AVFoundation framework?


I have the following code that opens the camera inside a UIView, and it currently works fine.

But I have two buttons, as in this screenshot: one for taking a photo and the other for uploading a photo from the library.

How can I take the photo without using the native camera UI?

Here is the code in my .h file:

#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>

@interface bgCameraController : UIViewController<AVCaptureMetadataOutputObjectsDelegate>

@property (weak, nonatomic) IBOutlet UIView *cam;
@property (strong, nonatomic) IBOutlet UIImageView *imageView;

- (IBAction)takePhoto: (UIButton *)sender;
- (IBAction)selectPhoto:(UIButton *)sender;
@end

And here is the code in my .m file:

#import "bgCameraController.h"

@interface bgCameraController ()
@property (nonatomic, strong) AVCaptureSession *captureSession;
@property (nonatomic, strong) AVCaptureVideoPreviewLayer *videoPreviewLayer;
@property (nonatomic, strong) AVAudioPlayer *audioPlayer;
@property (nonatomic) BOOL isReading;

-(BOOL)startReading;
-(void)stopReading;
-(void)loadBeepSound;
@end

@implementation bgCameraController

- (void)viewDidLoad {
    [super viewDidLoad];
    [self loadBeepSound];
    [self startReading];

    // Do any additional setup after loading the view.
}

- (void)didReceiveMemoryWarning {
    [super didReceiveMemoryWarning];
    // Dispose of any resources that can be recreated.
}

- (BOOL)startReading {
    NSError *error;

    // Get an instance of the AVCaptureDevice class to initialize a device object,
    // using video as the media type parameter.
    AVCaptureDevice *captureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    // Get an instance of the AVCaptureDeviceInput class using the previous device object.
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:captureDevice error:&error];

    if (!input) {
        // If any error occurs, log its description and don't continue.
        NSLog(@"%@", [error localizedDescription]);
        return NO;
    }

    // Initialize the captureSession object.
    _captureSession = [[AVCaptureSession alloc] init];
    // Set the input device on the capture session.
    [_captureSession addInput:input];

    // Initialize an AVCaptureMetadataOutput object and set it as the output of the capture session.
    AVCaptureMetadataOutput *captureMetadataOutput = [[AVCaptureMetadataOutput alloc] init];
    [_captureSession addOutput:captureMetadataOutput];

    // Create a new serial dispatch queue for metadata delivery.
    dispatch_queue_t dispatchQueue;
    dispatchQueue = dispatch_queue_create("myQueue", NULL);
    [captureMetadataOutput setMetadataObjectsDelegate:self queue:dispatchQueue];
    [captureMetadataOutput setMetadataObjectTypes:[NSArray arrayWithObject:AVMetadataObjectTypeQRCode]];

    // Initialize the video preview layer and add it as a sublayer to the cam view's layer.
    _videoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:_captureSession];
    [_videoPreviewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
    [_videoPreviewLayer setFrame:_cam.layer.bounds];
    [_cam.layer addSublayer:_videoPreviewLayer];

    // Start video capture.
    [_captureSession startRunning];

    return YES;
}


- (void)stopReading {
    // Stop video capture and make the capture session object nil.
    [_captureSession stopRunning];
    _captureSession = nil;

    // Remove the video preview layer from the cam view's layer.
    //[_videoPreviewLayer removeFromSuperlayer];
}


- (void)loadBeepSound {
    // Get the path to the beep.mp3 file and convert it to an NSURL object.
    // Note: fileURLWithPath: is required here; URLWithString: does not produce a valid file URL from a plain path.
    NSString *beepFilePath = [[NSBundle mainBundle] pathForResource:@"beep" ofType:@"mp3"];
    NSURL *beepURL = [NSURL fileURLWithPath:beepFilePath];

    NSError *error;

    // Initialize the audio player object using the NSURL object previously set.
    _audioPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:beepURL error:&error];
    if (error) {
        // If the audio player cannot be initialized, log a message.
        NSLog(@"Could not load beep file: %@", [error localizedDescription]);
    }
    else {
        // If the audio player was successfully initialized, preload it into memory.
        [_audioPlayer prepareToPlay];
    }
}
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputMetadataObjects:(NSArray *)metadataObjects fromConnection:(AVCaptureConnection *)connection {

    // Check that the metadataObjects array is non-nil and contains at least one object.
    if (metadataObjects != nil && [metadataObjects count] > 0) {
        // Get the metadata object.
        AVMetadataMachineReadableCodeObject *metadataObj = [metadataObjects objectAtIndex:0];
        if ([[metadataObj type] isEqualToString:AVMetadataObjectTypeQRCode]) {
            // If the found metadata is QR code metadata, pass the decoded string to the UI,
            // stop reading, and clear the flag. All UI work is done on the main thread.
            NSString *result = [metadataObj stringValue];
            // setQRcodeValues: is assumed to be implemented elsewhere in this class.
            [self performSelectorOnMainThread:@selector(setQRcodeValues:) withObject:result waitUntilDone:NO];

            [self performSelectorOnMainThread:@selector(stopReading) withObject:nil waitUntilDone:NO];

            _isReading = NO;

            // If the audio player is not nil, play the sound effect.
            if (_audioPlayer) {
                [_audioPlayer play];
            }
        }
    }
}


@end

Please help me take a photo when that button is tapped (see the linked image).

Best Answer

I have captured an image while scanning a QR code, like this:

1) First, add a property for the AVCaptureStillImageOutput:

@property (strong, nonatomic) AVCaptureStillImageOutput *stillImageOutput;

2) After initializing the AVCaptureSession (referred to as self.session below), set a session preset:

[self.session setSessionPreset:AVCaptureSessionPreset640x480];

3) Now add the AVCaptureStillImageOutput as an output of the AVCaptureSession:

// Prepare an output for snapshotting
self.stillImageOutput = [AVCaptureStillImageOutput new];
[self.session addOutput:self.stillImageOutput];
self.stillImageOutput.outputSettings = @{AVVideoCodecKey: AVVideoCodecJPEG};

4) Add the following code to capture the scanned image inside the captureOutput:didOutputMetadataObjects:fromConnection: delegate method:

__block UIImage *scannedImg = nil;
// Grab a still frame from the session and decode it into a UIImage.
AVCaptureConnection *stillConnection = [self.stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
[self.stillImageOutput captureStillImageAsynchronouslyFromConnection:stillConnection completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
    if (error) {
        NSLog(@"There was a problem capturing the still image: %@", error);
        return;
    }

    NSData *jpegData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];

    scannedImg = [UIImage imageWithData:jpegData];
    NSLog(@"scannedImg : %@", scannedImg);
}];
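Note that the completion handler runs asynchronously on a background queue, so scannedImg is only populated once the handler fires. If the goal is the question's takePhoto: button rather than capture-on-scan, the same call can be moved into the button action. A minimal sketch, assuming stillImageOutput has been configured on the running session as in steps 1)-3) and that the result should be shown in the question's imageView:

- (IBAction)takePhoto:(UIButton *)sender {
    AVCaptureConnection *stillConnection = [self.stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
    [self.stillImageOutput captureStillImageAsynchronouslyFromConnection:stillConnection completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
        if (error) {
            NSLog(@"Still image capture failed: %@", error);
            return;
        }
        NSData *jpegData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
        UIImage *photo = [UIImage imageWithData:jpegData];
        // UIKit must be updated on the main thread; the handler runs on a background queue.
        dispatch_async(dispatch_get_main_queue(), ^{
            self.imageView.image = photo;
        });
    }];
}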

For reference, see CodeScanViewController.
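For the question's second button (picking an existing photo from the library), AVFoundation isn't involved at all; UIImagePickerController handles it. A minimal sketch, assuming bgCameraController also adopts UIImagePickerControllerDelegate and UINavigationControllerDelegate:

- (IBAction)selectPhoto:(UIButton *)sender {
    UIImagePickerController *picker = [[UIImagePickerController alloc] init];
    picker.sourceType = UIImagePickerControllerSourceTypePhotoLibrary;
    picker.delegate = self;
    [self presentViewController:picker animated:YES completion:nil];
}

- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
    // Show the chosen photo in the same image view used for captured photos.
    self.imageView.image = info[UIImagePickerControllerOriginalImage];
    [picker dismissViewControllerAnimated:YES completion:nil];
}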

That's it. Enjoy!

Regarding "ios - How to capture an image using the AVFoundation framework?", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/26382254/
