ios - Corrupted video when capturing audio and video with AVAssetWriter

Reposted · Author: 行者123 · Updated: 2023-11-28 06:19:52

I'm using an AVCaptureSession to capture video and audio input, and an AVAssetWriter to encode H.264 video.

If I don't write the audio, the video is encoded as expected. But if I write the audio, I get a corrupted video.

If I inspect the audio CMSampleBuffer being supplied to the AVAssetWriter, it shows the following:

invalid = NO
dataReady = YES
makeDataReadyCallback = 0x0
makeDataReadyRefcon = 0x0
formatDescription = <CMAudioFormatDescription 0x17410ba30 [0x1b3a70bb8]> {
    mediaType:'soun'
    mediaSubType:'lpcm'
    mediaSpecific: {
        ASBD: {
            mSampleRate: 44100.000000
            mFormatID: 'lpcm'
            mFormatFlags: 0xc
            mBytesPerPacket: 2
            mFramesPerPacket: 1
            mBytesPerFrame: 2
            mChannelsPerFrame: 1
            mBitsPerChannel: 16 }
        cookie: {(null)}
        ACL: {(null)}
        FormatList Array: {(null)}
    }
    extensions: {(null)}
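(For reference, a dump like the one above can be obtained by pulling the format description off the incoming sample buffer. A minimal sketch, assuming `sampleBuffer` is whatever the capture delegate hands you:)

```swift
import CoreMedia

// Sketch: inspect the audio format of an incoming CMSampleBuffer.
func logAudioFormat(of sampleBuffer: CMSampleBuffer) {
    guard let desc = CMSampleBufferGetFormatDescription(sampleBuffer),
          let asbd = CMAudioFormatDescriptionGetStreamBasicDescription(desc)?.pointee else {
        return
    }
    print("sample rate: \(asbd.mSampleRate), channels: \(asbd.mChannelsPerFrame), bits: \(asbd.mBitsPerChannel)")
}
```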

Since it's delivering lpcm audio, I've configured the AVAssetWriterInput with these sound settings (I've tried both one and two channels):

var channelLayout = AudioChannelLayout()
memset(&channelLayout, 0, MemoryLayout<AudioChannelLayout>.size)
channelLayout.mChannelLayoutTag = kAudioChannelLayoutTag_Mono

let audioOutputSettings: [String: Any] = [
    AVFormatIDKey: UInt(kAudioFormatLinearPCM),
    AVNumberOfChannelsKey: 1,
    AVSampleRateKey: 44100.0,
    AVLinearPCMIsBigEndianKey: false,
    AVLinearPCMIsFloatKey: false,
    AVLinearPCMBitDepthKey: 16,
    AVLinearPCMIsNonInterleaved: false,
    AVChannelLayoutKey: NSData(bytes: &channelLayout, length: MemoryLayout<AudioChannelLayout>.size)
]

self.assetWriterAudioInput = AVAssetWriterInput(mediaType: AVMediaTypeAudio, outputSettings: audioOutputSettings)
self.assetWriter.add(self.assetWriterAudioInput)

With the lpcm settings above, I can't open the video with any application. I've also tried kAudioFormatMPEG4AAC and kAudioFormatAppleLossless, and I still get a corrupted video, although with those I can watch the video with QuickTime Player 8 (but not QuickTime Player 7). It's confused about the video's duration, though, and no sound plays.
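(For the AAC route, the writer input settings are simpler than the LPCM dictionary above. A minimal sketch, assuming a mono 44.1 kHz source; the 64 kbps bitrate is an arbitrary example value, not from the question:)

```swift
import AVFoundation

// Sketch: AAC output settings for the audio AVAssetWriterInput.
let aacSettings: [String: Any] = [
    AVFormatIDKey: kAudioFormatMPEG4AAC,
    AVNumberOfChannelsKey: 1,
    AVSampleRateKey: 44100.0,
    AVEncoderBitRateKey: 64000  // example value, tune for your app
]
let audioInput = AVAssetWriterInput(mediaType: AVMediaTypeAudio, outputSettings: aacSettings)
// Capture sources feed data in real time; this hint helps the writer schedule.
audioInput.expectsMediaDataInRealTime = true
```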

When recording finishes I call:

func endRecording(_ completionHandler: @escaping () -> ()) {
    isRecording = false
    assetWriterVideoInput.markAsFinished()
    assetWriterAudioInput.markAsFinished()
    assetWriter.finishWriting(completionHandler: completionHandler)
}

Here is how the AVCaptureSession is configured:

func setupCapture() {

    captureSession = AVCaptureSession()

    if (captureSession == nil) {
        fatalError("ERROR: Couldn't create a capture session")
    }

    captureSession?.beginConfiguration()
    captureSession?.sessionPreset = AVCaptureSessionPreset1280x720

    let frontDevices = AVCaptureDevice.devices().filter{ ($0 as AnyObject).hasMediaType(AVMediaTypeVideo) && ($0 as AnyObject).position == AVCaptureDevicePosition.front }

    if let captureDevice = frontDevices.first as? AVCaptureDevice {
        do {
            let videoDeviceInput: AVCaptureDeviceInput
            do {
                videoDeviceInput = try AVCaptureDeviceInput(device: captureDevice)
            }
            catch {
                fatalError("Could not create AVCaptureDeviceInput instance with error: \(error).")
            }
            guard (captureSession?.canAddInput(videoDeviceInput))! else {
                fatalError()
            }
            captureSession?.addInput(videoDeviceInput)
        }
    }

    do {
        let audioDevice = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeAudio)
        let audioDeviceInput: AVCaptureDeviceInput
        do {
            audioDeviceInput = try AVCaptureDeviceInput(device: audioDevice)
        }
        catch {
            fatalError("Could not create AVCaptureDeviceInput instance with error: \(error).")
        }
        guard (captureSession?.canAddInput(audioDeviceInput))! else {
            fatalError()
        }
        captureSession?.addInput(audioDeviceInput)
    }

    do {
        let dataOutput = AVCaptureVideoDataOutput()
        dataOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String : kCVPixelFormatType_32BGRA]
        dataOutput.alwaysDiscardsLateVideoFrames = true
        let queue = DispatchQueue(label: "com.3DTOPO.videosamplequeue")
        dataOutput.setSampleBufferDelegate(self, queue: queue)
        guard (captureSession?.canAddOutput(dataOutput))! else {
            fatalError()
        }
        captureSession?.addOutput(dataOutput)

        videoConnection = dataOutput.connection(withMediaType: AVMediaTypeVideo)
    }

    do {
        let audioDataOutput = AVCaptureAudioDataOutput()
        let queue = DispatchQueue(label: "com.3DTOPO.audiosamplequeue")
        audioDataOutput.setSampleBufferDelegate(self, queue: queue)
        guard (captureSession?.canAddOutput(audioDataOutput))! else {
            fatalError()
        }
        captureSession?.addOutput(audioDataOutput)

        audioConnection = audioDataOutput.connection(withMediaType: AVMediaTypeAudio)
    }

    captureSession?.commitConfiguration()

    // this will trigger capture on its own queue
    captureSession?.startRunning()
}

The AVCaptureVideoDataOutput delegate method:

func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {

    var error: CVReturn

    if (connection == audioConnection) {
        delegate?.audioSampleUpdated(sampleBuffer: sampleBuffer)
        return
    }

    // ... Write video buffer ...//
}
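(The video branch is elided in the question; a hedged sketch of what it typically looks like — `videoConnection`, `isRecording`, and `assetWriterVideoInput` follow the names in the question, the rest is an assumption:)

```swift
// Sketch of the elided video branch: append the buffer when the writer can take it.
if connection == videoConnection, isRecording {
    if assetWriterVideoInput.isReadyForMoreMediaData {
        if !assetWriterVideoInput.append(sampleBuffer) {
            print("Unable to write to video input")
        }
    }
}
```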

which calls:

func audioSampleUpdated(sampleBuffer: CMSampleBuffer) {
    if (isRecording) {
        while !assetWriterAudioInput.isReadyForMoreMediaData {}
        if (!assetWriterAudioInput.append(sampleBuffer)) {
            print("Unable to write to audio input")
        }
    }
}
```
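(As an aside, the `while` loop above spins on the capture queue while the writer catches up. A sketch of an alternative, not from the question: drop buffers the input isn't ready for instead of blocking:)

```swift
func audioSampleUpdated(sampleBuffer: CMSampleBuffer) {
    guard isRecording else { return }
    // If the input can't accept data yet, skipping one capture buffer is
    // usually preferable to busy-waiting and stalling the capture queue.
    guard assetWriterAudioInput.isReadyForMoreMediaData else { return }
    if !assetWriterAudioInput.append(sampleBuffer) {
        print("Unable to write to audio input")
    }
}
```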

If I disable the assetWriterAudioInput.append() call above, the video isn't corrupted, but then of course I have no encoded audio. How can I get both video and audio encoding to work?

Best Answer

I figured it out. I was setting the assetWriter.startSession source time to 0 and then subtracting the start time from the current CACurrentMediaTime() when writing the pixel data.

I changed the assetWriter.startSession source time to CACurrentMediaTime(), and I no longer subtract the current time when writing the video frames.

Old session start code:

assetWriter.startWriting()
assetWriter.startSession(atSourceTime: kCMTimeZero)

New code that works:

let presentationStartTime = CMTimeMakeWithSeconds(CACurrentMediaTime(), 240)

assetWriter.startWriting()
assetWriter.startSession(atSourceTime: presentationStartTime)
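(An equivalent and arguably more robust variant — a sketch, not from the answer — is to start the session at the presentation timestamp of the first sample buffer you receive, so that audio and video are both stamped on the capture device's clock:)

```swift
import CoreMedia

// Sketch: lazily start the writer session at the first buffer's timestamp.
var sessionStarted = false

func write(_ sampleBuffer: CMSampleBuffer, to input: AVAssetWriterInput) {
    if !sessionStarted {
        let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        assetWriter.startSession(atSourceTime: pts)
        sessionStarted = true
    }
    if input.isReadyForMoreMediaData {
        input.append(sampleBuffer)
    }
}
```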

Regarding ios - corrupted video when capturing audio and video with AVAssetWriter, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/43959376/
