
ios - Synchronize AVAudioPlayerNode with the start of AVAudioEngine recording


I'm using AVAudioEngine to play and record audio. For my use case, I need a sound to start playing at exactly the moment the audio recording begins. At the moment, the recording seems to start before the sound is played. How can I make the sound and the recording start at the same time? Ideally, I'd like the recording and the playback to begin together, rather than having to align them in post-processing.
Here is my current code:

import AVFoundation

class Recorder {
    enum RecordingState {
        case recording, paused, stopped
    }

    private var engine: AVAudioEngine!
    private var mixerNode: AVAudioMixerNode!
    private var state: RecordingState = .stopped

    private var audioPlayer = AVAudioPlayerNode()

    init() {
        setupSession()
        setupEngine()
    }

    fileprivate func setupSession() {
        let session = AVAudioSession.sharedInstance()
        try? session.setCategory(.playAndRecord, options: [.mixWithOthers, .defaultToSpeaker])
        try? session.setActive(true, options: .notifyOthersOnDeactivation)
    }

    fileprivate func setupEngine() {
        engine = AVAudioEngine()
        mixerNode = AVAudioMixerNode()

        // Set volume to 0 to avoid audio feedback while recording.
        mixerNode.volume = 0

        engine.attach(mixerNode)
        engine.attach(audioPlayer)

        makeConnections()

        // Prepare the engine in advance, in order for the system to allocate the necessary resources.
        engine.prepare()
    }

    fileprivate func makeConnections() {
        let inputNode = engine.inputNode
        let inputFormat = inputNode.outputFormat(forBus: 0)
        print("Input Sample Rate: \(inputFormat.sampleRate)")
        engine.connect(inputNode, to: mixerNode, format: inputFormat)

        let mainMixerNode = engine.mainMixerNode
        let mixerFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: inputFormat.sampleRate, channels: 1, interleaved: false)
        engine.connect(mixerNode, to: mainMixerNode, format: mixerFormat)

        // Schedule the sound effect on the player node and route it to the main mixer.
        let path = Bundle.main.path(forResource: "effect1.wav", ofType: nil)!
        let url = URL(fileURLWithPath: path)
        let file = try! AVAudioFile(forReading: url)
        audioPlayer.scheduleFile(file, at: nil)
        engine.connect(audioPlayer, to: mainMixerNode, format: nil)
    }

    // MARK: Start Recording Function
    func startRecording() throws {
        print("Start Recording!")
        let tapNode: AVAudioNode = mixerNode
        let format = tapNode.outputFormat(forBus: 0)

        let documentURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]

        // AVAudioFile uses the Core Audio Format (CAF) to write to disk,
        // so we're using the caf file extension.
        let file = try AVAudioFile(forWriting: documentURL.appendingPathComponent("recording.caf"), settings: format.settings)

        tapNode.installTap(onBus: 0, bufferSize: 4096, format: format, block: { (buffer, time) in
            try? file.write(from: buffer)
            print(buffer.description)
            print(buffer.stride)

            // Do Stuff
            print("Doing Stuff")
        })

        try engine.start()
        audioPlayer.play()
        state = .recording
    }

    // MARK: Other recording functions
    func resumeRecording() throws {
        try engine.start()
        state = .recording
    }

    func pauseRecording() {
        engine.pause()
        state = .paused
    }

    func stopRecording() {
        // Remove existing taps on nodes
        mixerNode.removeTap(onBus: 0)

        engine.stop()
        state = .stopped
    }
}
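
For context, a minimal usage sketch of the Recorder class above might look like the following; the call site and the error handling are my own assumptions, not part of the original question.

// Hypothetical call site for the Recorder class above.
let recorder = Recorder()

do {
    // Starts the engine, begins writing recording.caf, and plays effect1.wav.
    try recorder.startRecording()
} catch {
    print("Failed to start recording: \(error)")
}

// Later, e.g. from a stop button handler:
recorder.stopRecording()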

Best Answer

Have you tried starting the player before installing the tap?

// Stop the player to be sure the engine.start calls the prepare function
audioPlayer.stop()
try engine.start()
audioPlayer.play()
state = .recording

tapNode.installTap(onBus: 0, bufferSize: 4096, format: format, block: {
    (buffer, time) in
    try? file.write(from: buffer)
})
If your recording is still a little late in that case, you can try to compensate using player.outputPresentationLatency.
According to the documentation, this is a maximum value, so the timing may still be slightly off, but I think it's worth a try.
print(player.outputPresentationLatency)
// 0.009999999776482582

let nanoseconds = Int(player.outputPresentationLatency * pow(10,9))
let dispatchTimeInterval = DispatchTimeInterval.nanoseconds(nanoseconds)

player.play()
DispatchQueue.main.asyncAfter(deadline: DispatchTime.now() + dispatchTimeInterval) {
    self.installTap()
    self.state = .recording
}
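
The snippet above calls a self.installTap() helper that isn't shown in the answer. A minimal sketch of what it might contain, assuming it simply factors out the tap code from the question's startRecording() and that the destination AVAudioFile is kept in an optional file property, could look like this:

// Hypothetical helper: assumes `file` (AVAudioFile?) and `mixerNode`
// are stored properties of the class; mirrors the tap code from the question.
func installTap() {
    let tapNode: AVAudioNode = mixerNode
    let format = tapNode.outputFormat(forBus: 0)

    tapNode.installTap(onBus: 0, bufferSize: 4096, format: format) { [weak self] buffer, _ in
        // Write each captured buffer to the recording file.
        try? self?.file?.write(from: buffer)
    }
}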

Regarding "ios - Synchronize AVAudioPlayerNode with the start of AVAudioEngine recording", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/66909530/
