I am trying to implement a real-time camera app using AVFoundation, GLKit and Core Image (without using GPUImage).
I found this tutorial: http://altitudelabs.com/blog/real-time-filter/
It is written in Objective-C, so I rewrote the code in Swift 4.0 with Xcode 9. It seems to work fine, but sometimes (rarely) it crashes with the following error when GLKView's display method is called:
EXC_BAD_ACCESS (code=1, address=0x********)
At the moment of the crash, the GLKView exists (non-nil), the EAGLContext exists, and the CIContext exists. My code is as follows:
import UIKit
import AVFoundation
import GLKit
import OpenGLES

class ViewController: UIViewController {
    var videoDevice: AVCaptureDevice!
    var captureSession: AVCaptureSession!
    var captureSessionQueue: DispatchQueue!
    var videoPreviewView: GLKView!
    var ciContext: CIContext!
    var eaglContext: EAGLContext!
    var videoPreviewViewBounds: CGRect = CGRect.zero

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.

        // remove the view's background color; this allows us not to use the opaque property
        // (self.view.opaque = NO) since we remove the background color drawing altogether
        self.view.backgroundColor = UIColor.clear

        // setup the GLKView for video/image preview
        let window: UIView = UIApplication.shared.delegate!.window!!
        eaglContext = EAGLContext(api: .openGLES2)
        videoPreviewView = GLKView(frame: videoPreviewViewBounds, context: eaglContext)
        videoPreviewView.enableSetNeedsDisplay = false

        // because the native video image from the back camera is in UIDeviceOrientationLandscapeLeft
        // (i.e. the home button is on the right), we need to apply a clockwise 90 degree transform
        // so that we can draw the video preview as if we were in a landscape-oriented view;
        // if you're using the front camera and you want to have a mirrored preview (so that the user
        // is seeing themselves in the mirror), you need to apply an additional horizontal flip
        // (by concatenating CGAffineTransformMakeScale(-1.0, 1.0) to the rotation transform)
        videoPreviewView.transform = CGAffineTransform(rotationAngle: CGFloat.pi/2.0)
        videoPreviewView.frame = window.bounds

        // we make our video preview view a subview of the window, and send it to the back;
        // this makes ViewController's view (and its UI elements) on top of the video preview,
        // and also makes video preview unaffected by device rotation
        window.addSubview(videoPreviewView)
        window.sendSubview(toBack: videoPreviewView)

        // bind the frame buffer to get the frame buffer width and height;
        // the bounds used by CIContext when drawing to a GLKView are in pixels (not points),
        // hence the need to read from the frame buffer's width and height;
        // in addition, since we will be accessing the bounds in another queue (_captureSessionQueue),
        // we want to obtain this piece of information so that we won't be
        // accessing _videoPreviewView's properties from another thread/queue
        videoPreviewView.bindDrawable()
        videoPreviewViewBounds = CGRect.zero
        videoPreviewViewBounds.size.width = CGFloat(videoPreviewView.drawableWidth)
        videoPreviewViewBounds.size.height = CGFloat(videoPreviewView.drawableHeight)

        // create the CIContext instance, note that this must be done after _videoPreviewView is properly set up
        ciContext = CIContext(eaglContext: eaglContext, options: [kCIContextWorkingColorSpace: NSNull()])

        if AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInDualCamera, .builtInTelephotoCamera, .builtInWideAngleCamera],
                                            mediaType: AVMediaType.video,
                                            position: .back).devices.count > 0 {
            start()
        } else {
            print("No device with AVMediaTypeVideo")
        }
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }

    func start() {
        let videoDevices = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera],
                                                            mediaType: AVMediaType.video,
                                                            position: .back).devices
        videoDevice = videoDevices.first

        var videoDeviceInput: AVCaptureInput!
        do {
            videoDeviceInput = try AVCaptureDeviceInput(device: videoDevice)
        } catch let error {
            print("Unable to obtain video device input, error: \(error)")
            return
        }

        let preset = AVCaptureSession.Preset.high
        captureSession = AVCaptureSession()
        captureSession.sessionPreset = preset

        // core image wants bgra pixel format
        let outputSetting = [String(kCVPixelBufferPixelFormatTypeKey): kCVPixelFormatType_32BGRA]

        // create and configure video data output
        let videoDataOutput = AVCaptureVideoDataOutput()
        videoDataOutput.videoSettings = outputSetting

        // create the dispatch queue for handling capture session delegate method calls
        captureSessionQueue = DispatchQueue(label: "capture_session_queue")
        videoDataOutput.setSampleBufferDelegate(self, queue: captureSessionQueue)
        videoDataOutput.alwaysDiscardsLateVideoFrames = true

        captureSession.beginConfiguration()
        if !captureSession.canAddOutput(videoDataOutput) {
            print("Cannot add video data output")
            captureSession = nil
            return
        }
        captureSession.addInput(videoDeviceInput)
        captureSession.addOutput(videoDataOutput)
        captureSession.commitConfiguration()

        captureSession.startRunning()
    }
}

extension ViewController: AVCaptureVideoDataOutputSampleBufferDelegate {
    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        let imageBuffer: CVImageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
        let sourceImage = CIImage(cvImageBuffer: imageBuffer, options: nil)
        let sourceExtent = sourceImage.extent

        let vignetteFilter = CIFilter(name: "CIVignetteEffect", withInputParameters: nil)
        vignetteFilter?.setValue(sourceImage, forKey: kCIInputImageKey)
        vignetteFilter?.setValue(CIVector(x: sourceExtent.size.width/2.0, y: sourceExtent.size.height/2.0), forKey: kCIInputCenterKey)
        vignetteFilter?.setValue(sourceExtent.width/2.0, forKey: kCIInputRadiusKey)

        let filteredImage = vignetteFilter?.outputImage

        let sourceAspect = sourceExtent.width/sourceExtent.height
        let previewAspect = videoPreviewViewBounds.width/videoPreviewViewBounds.height

        // we want to maintain the aspect ratio of the screen size, so we clip the video image
        var drawRect = sourceExtent
        if sourceAspect > previewAspect {
            // use full height of the video image, and center crop the width
            drawRect.origin.x += (drawRect.size.width - drawRect.size.height * previewAspect) / 2.0
            drawRect.size.width = drawRect.size.height * previewAspect
        } else {
            // use full width of the video image, and center crop the height
            drawRect.origin.y += (drawRect.size.height - drawRect.size.width / previewAspect) / 2.0
            drawRect.size.height = drawRect.size.width / previewAspect
        }

        videoPreviewView.bindDrawable()

        if eaglContext != EAGLContext.current() {
            EAGLContext.setCurrent(eaglContext)
        }

        print("current thread \(Thread.current)")

        // clear eagl view to grey
        glClearColor(0.5, 0.5, 0.5, 1.0)
        glClear(GLbitfield(GL_COLOR_BUFFER_BIT))

        // set the blend mode to "source over" so that CI will use that
        glEnable(GLenum(GL_BLEND))
        glBlendFunc(GLenum(GL_ONE), GLenum(GL_ONE_MINUS_SRC_ALPHA))

        if let filteredImage = filteredImage {
            ciContext.draw(filteredImage, in: videoPreviewViewBounds, from: drawRect)
        }

        videoPreviewView.display()
    }
}
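For reference, here is a minimal defensive-teardown sketch of my own; it is not part of the original question or the accepted answer. The posted code never stops the capture session or detaches the delegate, so sample-buffer callbacks can in principle still arrive while the GLKView/EAGLContext are being released. The property names match the code above; putting this in deinit is only an assumption about where teardown would happen in a real app.

    // Hedged sketch: stop frame delivery before the GL objects go away.
    deinit {
        captureSession?.stopRunning()
        // Wait for any in-flight captureOutput(_:didOutput:from:) call to finish
        // before the GLKView / EAGLContext are released.
        captureSessionQueue?.sync { }
        if EAGLContext.current() === eaglContext {
            EAGLContext.setCurrent(nil)
        }
    }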
The stack trace at the time of the crash is:
* thread #5, queue = 'com.apple.avfoundation.videodataoutput.bufferqueue', stop reason = EXC_BAD_ACCESS (code=1, address=0x8000000000000000)
    frame #0: 0x00000001a496f098 AGXGLDriver`___lldb_unnamed_symbol149$$AGXGLDriver + 332
    frame #1: 0x00000001923c029c OpenGLES`-[EAGLContext getParameter:to:] + 80
    frame #2: 0x000000010038bca4 libglInterpose.dylib`__clang_call_terminate + 1976832
    frame #3: 0x00000001001ab75c libglInterpose.dylib`__clang_call_terminate + 9400
    frame #4: 0x000000010038b8b4 libglInterpose.dylib`__clang_call_terminate + 1975824
    frame #5: 0x00000001001af098 libglInterpose.dylib`__clang_call_terminate + 24052
    frame #6: 0x00000001001abe5c libglInterpose.dylib`__clang_call_terminate + 11192
    frame #7: 0x000000010038f9dc libglInterpose.dylib`__clang_call_terminate + 1992504
    frame #8: 0x000000010038d5b8 libglInterpose.dylib`__clang_call_terminate + 1983252
    frame #9: 0x000000019a1e2a20 GLKit`-[GLKView _display:] + 308
  * frame #10: 0x0000000100065e78 RealTimeCameraPractice`ViewController.captureOutput(output=0x0000000174034820, sampleBuffer=0x0000000119e25e70, connection=0x0000000174008850, self=0x0000000119d032d0) at ViewController.swift:160
    frame #11: 0x00000001000662dc RealTimeCameraPractice`@objc ViewController.captureOutput(_:didOutput:from:) at ViewController.swift:0
    frame #12: 0x00000001977ec310 AVFoundation`-[AVCaptureVideoDataOutput _handleRemoteQueueOperation:] + 308
    frame #13: 0x00000001977ec14c AVFoundation`__47-[AVCaptureVideoDataOutput _updateRemoteQueue:]_block_invoke + 100
    frame #14: 0x00000001926bdf38 CoreMedia`__FigRemoteOperationReceiverCreateMessageReceiver_block_invoke + 260
    frame #15: 0x00000001926dce9c CoreMedia`__FigRemoteQueueReceiverSetHandler_block_invoke.2 + 224
    frame #16: 0x000000010111da10 libdispatch.dylib`_dispatch_client_callout + 16
    frame #17: 0x0000000101129a84 libdispatch.dylib`_dispatch_continuation_pop + 552
    frame #18: 0x00000001011381f8 libdispatch.dylib`_dispatch_source_latch_and_call + 204
    frame #19: 0x000000010111fa60 libdispatch.dylib`_dispatch_source_invoke + 828
    frame #20: 0x000000010112b128 libdispatch.dylib`_dispatch_queue_serial_drain + 692
    frame #21: 0x0000000101121634 libdispatch.dylib`_dispatch_queue_invoke + 852
    frame #22: 0x000000010112b128 libdispatch.dylib`_dispatch_queue_serial_drain + 692
    frame #23: 0x0000000101121634 libdispatch.dylib`_dispatch_queue_invoke + 852
    frame #24: 0x000000010112c358 libdispatch.dylib`_dispatch_root_queue_drain_deferred_item + 276
    frame #25: 0x000000010113457c libdispatch.dylib`_dispatch_kevent_worker_thread + 764
    frame #26: 0x000000018ee56fbc libsystem_pthread.dylib`_pthread_wqthread + 772
    frame #27: 0x000000018ee56cac libsystem_pthread.dylib`start_wqthread + 4
My project is on GitHub: https://github.com/hegrecom/iOS-RealTimeCameraPractice
Best Answer
The solution is here: iOS 11 beta 4 presentRenderbuffer crash
Go to the scheme settings (Product > Scheme > Edit Scheme… > Options), set GPU Frame Capture to Disabled.
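Side note (my own addition, not part of the answer): the crashing frames run through libglInterpose.dylib, the GL interposer Xcode injects when GPU Frame Capture is enabled. If you want to confirm at runtime whether that library is still being injected after changing the scheme setting, a quick check like the following sketch can be used; the function name is hypothetical and the dyld image-list APIs are the standard ones from mach-o/dyld.h.

import MachO

// Hedged sketch: returns true if the GL interposer injected by Xcode's
// GPU Frame Capture (libglInterpose.dylib, visible in the crash log above)
// is currently loaded into the process.
func isGLInterposerLoaded() -> Bool {
    for i in 0..<_dyld_image_count() {
        if let cName = _dyld_get_image_name(i),
           String(cString: cName).hasSuffix("libglInterpose.dylib") {
            return true
        }
    }
    return false
}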
Regarding "ios - GLKView.display() method sometimes causes crash. EXC_BAD_ACCESS", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/46722455/