
ios - AVCaptureAudioDataOutputSampleBufferDelegate not calling captureOutput


I have an app that records video, but I also need it to show the user, in real time, the pitch level of the sound being captured by the microphone. I have been able to record audio and video to MP4 successfully using an AVCaptureSession. However, when I add an AVCaptureAudioDataOutput to the session and assign the AVCaptureAudioDataOutputSampleBufferDelegate, I get no errors, yet the captureOutput function is never called once the session starts.

Here is the code:

import UIKit
import AVFoundation
import CoreLocation

class ViewController: UIViewController,
    AVCaptureVideoDataOutputSampleBufferDelegate,
    AVCaptureFileOutputRecordingDelegate, CLLocationManagerDelegate,
    AVCaptureAudioDataOutputSampleBufferDelegate {

    // Connected to the camera preview view in the storyboard
    @IBOutlet weak var cameraView: UIView!

    var videoFileOutput: AVCaptureMovieFileOutput!
    let session = AVCaptureSession()
    var outputURL: URL!
    var timer: Timer!
    var locationManager: CLLocationManager!
    var currentMagnitudeValue: CGFloat!
    var defaultMagnitudeValue: CGFloat!
    var visualMagnitudeValue: CGFloat!
    var soundLiveOutput: AVCaptureAudioDataOutput!

    override func viewDidLoad() {
        super.viewDidLoad()
        self.setupAVCapture()
    }

    func setupAVCapture() {
        session.beginConfiguration()

        // Add the camera INPUT to the session
        guard
            let videoDevice = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                      for: .video, position: .front),
            let videoDeviceInput = try? AVCaptureDeviceInput(device: videoDevice),
            session.canAddInput(videoDeviceInput)
        else { return }
        session.addInput(videoDeviceInput)

        // Add the microphone INPUT to the session
        guard
            let microphoneDevice = AVCaptureDevice.default(.builtInMicrophone,
                                                           for: .audio, position: .unspecified),
            let audioDeviceInput = try? AVCaptureDeviceInput(device: microphoneDevice),
            session.canAddInput(audioDeviceInput)
        else { return }
        session.addInput(audioDeviceInput)

        // Add the video file OUTPUT to the session
        videoFileOutput = AVCaptureMovieFileOutput()
        guard session.canAddOutput(videoFileOutput) else { return }
        session.addOutput(videoFileOutput)

        // Add the audio output so we can get the PITCH of the sounds
        // AND assign the SampleBufferDelegate
        soundLiveOutput = AVCaptureAudioDataOutput()
        soundLiveOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "test"))
        if session.canAddOutput(soundLiveOutput) {
            session.addOutput(soundLiveOutput)
            print("Live AudioDataOutput added")
        } else {
            print("Could not add AudioDataOutput")
        }

        // Preview layer
        let previewLayer = AVCaptureVideoPreviewLayer(session: session)
        let rootLayer: CALayer = self.cameraView.layer
        rootLayer.masksToBounds = true
        previewLayer.frame = rootLayer.bounds
        rootLayer.addSublayer(previewLayer)
        previewLayer.videoGravity = .resizeAspectFill

        // Finalize the configuration
        session.commitConfiguration()

        // Begin the session
        session.startRunning()
    }

    func captureOutput(_: AVCaptureOutput, didOutput: CMSampleBuffer, from: AVCaptureConnection) {
        print("Bingo")
    }

}

Expected output:

Bingo
Bingo
Bingo
...

I have already read:

StackOverflow: captureOutput not being called - the user was not declaring the captureOutput method correctly.

StackOverflow: AVCaptureVideoDataOutput captureOutput not being called - the user was not declaring the captureOutput method at all.

Apple - AVCaptureAudioDataOutputSampleBufferDelegate - Apple's documentation on the delegate and its method - the method matches the one I have declared.

Other common mistakes I have come across online:

  • Using the declaration from an older version of Swift (I am using v4.1)
  • Apparently, according to one post, after Swift 4.0 AVCaptureMetadataOutput replaced AVCaptureAudioDataOutput - although I could not find this anywhere in Apple's documentation, I tried it anyway; similarly, the metadataOutput function is never called.

I am out of ideas. Am I missing something obvious?

Best Answer

OK, nobody got back to me, but after playing around with it I worked out the correct way to declare the captureOutput method for Swift 4, which is:

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
//Do your stuff here
}
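
Once the signature matches, the delegate starts firing. As a rough sketch of what you could then do with each buffer: averaging averagePowerLevel across the connection's audio channels gives a live loudness reading in decibels. Note this is level, not true pitch - pitch estimation would mean pulling the PCM samples out of the CMSampleBuffer and running something like an FFT:

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    // Only react to buffers coming from the audio data output
    guard output is AVCaptureAudioDataOutput else { return }

    // averagePowerLevel is in decibels, roughly -160 (silence) up to 0 (full scale)
    let channels = connection.audioChannels
    guard !channels.isEmpty else { return }
    let level = channels.map { $0.averagePowerLevel }.reduce(0, +) / Float(channels.count)

    // This callback runs on the queue passed to setSampleBufferDelegate,
    // so switch to the main queue before updating any UI
    DispatchQueue.main.async {
        print("Average power: \(level) dB")
    }
}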

Unfortunately, the documentation online for this is very poor. I guess you just have to get it exactly right - no error is thrown if you misspell or misname the method, because it is an optional protocol function.
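
One way to catch this silent failure is to ask the runtime whether your class actually responds to the protocol's selector, e.g. in viewDidLoad:

// Prints false if the method's signature does not match the optional
// protocol requirement (and so the delegate will never be called)
let sel = #selector(AVCaptureAudioDataOutputSampleBufferDelegate.captureOutput(_:didOutput:from:))
print(responds(to: sel))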

Regarding ios - AVCaptureAudioDataOutputSampleBufferDelegate not calling captureOutput, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/51573039/
