ios - Capturing video with AVCaptureSession, no visible output using EAGLContext

I'm capturing live video from the back camera of an iPhone with AVCaptureSession, applying some filters with CoreImage, and then trying to display the resulting video with OpenGL ES. Most of the code comes from the example in the WWDC 2012 session "Core Image Techniques".

Displaying the output of the filter chain with [UIImage imageWithCIImage:...], or by creating a CGImageRef for every frame, works fine. However, when I try to display it with OpenGL ES, all I get is a black screen.
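For reference, the per-frame CGImageRef path that does work looks roughly like this (a minimal sketch only; _cpuContext is a hypothetical CPU-backed CIContext created once with [CIContext contextWithOptions:nil], and imageView is an ordinary UIImageView, neither of which appears in the code below):

// Working fallback: render each filtered frame to a CGImage and hand it to a UIImageView
CIImage *filtered = _bumpDistortion.outputImage;
CGImageRef cgImage = [_cpuContext createCGImage:filtered fromRect:[filtered extent]];
dispatch_async(dispatch_get_main_queue(), ^{
    self.imageView.image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
});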

In the session they use a custom view class to display the output, but the code for that class isn't available. My view controller class extends GLKViewController, and the class of its view is set to GLKView.

I've searched for and downloaded every GLKit tutorial and example I could find, but nothing helps. In particular, I can't get any video output at all when I try to run the example from here. Can anyone point me in the right direction?

#import "VideoViewController.h"

@interface VideoViewController ()
{
    AVCaptureSession *_session;

    EAGLContext *_eaglContext;
    CIContext *_ciContext;

    CIFilter *_sepia;
    CIFilter *_bumpDistortion;
}

- (void)setupCamera;
- (void)setupFilters;

@end

@implementation VideoViewController

- (void)viewDidLoad
{
    [super viewDidLoad];

    GLKView *view = (GLKView *)self.view;

    _eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES3];
    [EAGLContext setCurrentContext:_eaglContext];

    view.context = _eaglContext;

    // Configure renderbuffers created by the view
    view.drawableColorFormat = GLKViewDrawableColorFormatRGBA8888;
    view.drawableDepthFormat = GLKViewDrawableDepthFormat24;
    view.drawableStencilFormat = GLKViewDrawableStencilFormat8;

    [self setupCamera];
    [self setupFilters];
}

- (void)setupCamera {
    _session = [AVCaptureSession new];
    [_session beginConfiguration];

    [_session setSessionPreset:AVCaptureSessionPreset640x480];

    // Use the default (back) camera as the capture input
    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:nil];
    [_session addInput:input];

    // Deliver frames as bi-planar YCbCr pixel buffers to the delegate on the main queue
    AVCaptureVideoDataOutput *dataOutput = [AVCaptureVideoDataOutput new];
    [dataOutput setAlwaysDiscardsLateVideoFrames:YES];

    NSDictionary *options;
    options = @{ (id)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarFullRange] };

    [dataOutput setVideoSettings:options];

    [dataOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];

    [_session addOutput:dataOutput];
    [_session commitConfiguration];
}

#pragma mark Setup Filters
- (void)setupFilters {
    // Sepia tone feeding into a bump distortion centered on the 480x640 frame
    _sepia = [CIFilter filterWithName:@"CISepiaTone"];
    [_sepia setValue:@0.7 forKey:@"inputIntensity"];

    _bumpDistortion = [CIFilter filterWithName:@"CIBumpDistortion"];
    [_bumpDistortion setValue:[CIVector vectorWithX:240 Y:320] forKey:@"inputCenter"];
    [_bumpDistortion setValue:[NSNumber numberWithFloat:200] forKey:@"inputRadius"];
    [_bumpDistortion setValue:[NSNumber numberWithFloat:3.0] forKey:@"inputScale"];
}

#pragma mark Main Loop
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    // Grab the pixel buffer
    CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)CMSampleBufferGetImageBuffer(sampleBuffer);

    // null colorspace to avoid colormatching
    NSDictionary *options = @{ (id)kCIImageColorSpace : (id)kCFNull };
    CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer options:options];

    image = [image imageByApplyingTransform:CGAffineTransformMakeRotation(-M_PI/2.0)];
    CGPoint origin = [image extent].origin;
    image = [image imageByApplyingTransform:CGAffineTransformMakeTranslation(-origin.x, -origin.y)];

    // Pass it through the filter chain
    [_sepia setValue:image forKey:@"inputImage"];
    [_bumpDistortion setValue:_sepia.outputImage forKey:@"inputImage"];

    // Grab the final output image
    image = _bumpDistortion.outputImage;

    // draw to GLES context
    [_ciContext drawImage:image inRect:CGRectMake(0, 0, 480, 640) fromRect:[image extent]];

    // and present to screen
    [_eaglContext presentRenderbuffer:GL_RENDERBUFFER];

    NSLog(@"frame hatched");

    [_sepia setValue:nil forKey:@"inputImage"];
}

- (void)loadView {
    [super loadView];

    // Initialize the CIContext with a null working space
    NSDictionary *options = @{ (id)kCIContextWorkingColorSpace : (id)kCFNull };
    _ciContext = [CIContext contextWithEAGLContext:_eaglContext options:options];
}

- (void)viewWillAppear:(BOOL)animated {
    [super viewWillAppear:animated];

    [_session startRunning];
}

- (void)didReceiveMemoryWarning
{
    [super didReceiveMemoryWarning];
    // Dispose of any resources that can be recreated.
}

@end

Best Answer

Wow, I actually figured it out myself. Maybe this line of work suits me after all ;)

First, for whatever reason, this code only works with OpenGL ES 2, not 3. I haven't figured out why yet.

Second, I was setting up the CIContext in the loadView method, which runs before viewDidLoad and therefore uses an EAGLContext that has not been initialized yet.
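A minimal sketch of how the corrected setup might look (only the changed lines; everything else stays as in the question, and the loadView override that created the CIContext is no longer needed):

- (void)viewDidLoad
{
    [super viewDidLoad];

    GLKView *view = (GLKView *)self.view;

    // Use the OpenGL ES 2 API; the ES 3 context produced no output here
    _eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    [EAGLContext setCurrentContext:_eaglContext];
    view.context = _eaglContext;

    view.drawableColorFormat = GLKViewDrawableColorFormatRGBA8888;
    view.drawableDepthFormat = GLKViewDrawableDepthFormat24;
    view.drawableStencilFormat = GLKViewDrawableStencilFormat8;

    // Create the CIContext here, after the EAGLContext exists,
    // rather than in loadView (which runs before viewDidLoad)
    NSDictionary *options = @{ (id)kCIContextWorkingColorSpace : (id)kCFNull };
    _ciContext = [CIContext contextWithEAGLContext:_eaglContext options:options];

    [self setupCamera];
    [self setupFilters];
}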

Regarding ios - Capturing video with AVCaptureSession, no visible output using EAGLContext, there is a similar question on Stack Overflow: https://stackoverflow.com/questions/22642140/
