
ios - How to write video and audio simultaneously using AVCaptureVideoDataOutput and AVCaptureAudioDataOutput?

Reposted · Author: 可可西里 · Updated: 2023-11-01 05:00:29

I am trying to make a Vine-like video app using AVFoundation. I can now save video through AVCaptureVideoDataOutput and play it back, but for some reason the audio does not work and I don't know why. I'm a beginner at iOS apps, so my explanation may not be clear. I hope you understand what I mean and can give me some hints.

Here is the code I am using.

Setting up AVCaptureVideoDataOutput and AVCaptureAudioDataOutput:

AVCaptureVideoDataOutput *videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
[CaptureSession addOutput:videoDataOutput];

videoDataOutput.videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], kCVPixelBufferPixelFormatTypeKey,
    nil];

dispatch_queue_t videoQueue = dispatch_queue_create("VideoQueue", NULL);
[videoDataOutput setSampleBufferDelegate:self queue:videoQueue];

AVCaptureAudioDataOutput *audioDataOutput = [[AVCaptureAudioDataOutput alloc] init];
[CaptureSession addOutput:audioDataOutput];

dispatch_queue_t audioQueue = dispatch_queue_create("AudioQueue", NULL);
[audioDataOutput setSampleBufferDelegate:self queue:audioQueue];

Setting up AVAssetWriter and AVAssetWriterInput:

- (void)makeWriter {
    pathString = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents/capture.mov"];
    exportURL = [NSURL fileURLWithPath:pathString];

    if ([[NSFileManager defaultManager] fileExistsAtPath:exportURL.path]) {
        [[NSFileManager defaultManager] removeItemAtPath:exportURL.path error:nil];
    }

    NSError *error;
    writer = [[AVAssetWriter alloc] initWithURL:exportURL
                                       fileType:AVFileTypeQuickTimeMovie
                                          error:&error];

    NSDictionary *videoSetting = [NSDictionary dictionaryWithObjectsAndKeys:
        AVVideoCodecH264, AVVideoCodecKey,
        [NSNumber numberWithInt:1280], AVVideoWidthKey,
        [NSNumber numberWithInt:720], AVVideoHeightKey,
        nil];

    videoWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                          outputSettings:videoSetting];
    videoWriterInput.expectsMediaDataInRealTime = YES;

    // Add the audio input
    AudioChannelLayout acl;
    bzero(&acl, sizeof(acl));
    acl.mChannelLayoutTag = kAudioChannelLayoutTag_Mono;

    NSDictionary *audioOutputSettings = nil;
    // Both types of audio settings cause the output video file to be corrupted.
    if (NO) {
        // Should work from the iPhone 3GS and 3rd-generation iPod touch on.
        audioOutputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
            [NSNumber numberWithInt:kAudioFormatMPEG4AAC], AVFormatIDKey,
            [NSNumber numberWithInt:1], AVNumberOfChannelsKey,
            [NSNumber numberWithFloat:44100.0], AVSampleRateKey,
            [NSNumber numberWithInt:64000], AVEncoderBitRateKey,
            [NSData dataWithBytes:&acl length:sizeof(acl)], AVChannelLayoutKey,
            nil];
    } else {
        // Should work on any device, but requires more space.
        audioOutputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
            [NSNumber numberWithInt:kAudioFormatAppleLossless], AVFormatIDKey,
            [NSNumber numberWithInt:16], AVEncoderBitDepthHintKey,
            [NSNumber numberWithFloat:44100.0], AVSampleRateKey,
            [NSNumber numberWithInt:1], AVNumberOfChannelsKey,
            [NSData dataWithBytes:&acl length:sizeof(acl)], AVChannelLayoutKey,
            nil];
    }

    audioWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio
                                                          outputSettings:audioOutputSettings];
    audioWriterInput.expectsMediaDataInRealTime = YES;

    // Add the inputs to the writer.
    [writer addInput:videoWriterInput];
    [writer addInput:audioWriterInput];
}

And finally, the captureOutput code:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    if (isPause && isRecording) { return; }
    if (!CMSampleBufferDataIsReady(sampleBuffer)) { return; }

    if (isRecording == YES) {
        isWritting = YES;
        if (writer.status != AVAssetWriterStatusWriting) {
            [writer startWriting];
            [writer startSessionAtSourceTime:kCMTimeZero];
        }

        if ([videoWriterInput isReadyForMoreMediaData]) {
            CFRetain(sampleBuffer);
            CMSampleBufferRef newSampleBuffer = [self offsetTimmingWithSampleBufferForVideo:sampleBuffer];
            [videoWriterInput appendSampleBuffer:newSampleBuffer];
            CFRelease(sampleBuffer);
            CFRelease(newSampleBuffer);
        }
        writeFrames++;
    }
}

- (CMSampleBufferRef)offsetTimmingWithSampleBufferForVideo:(CMSampleBufferRef)sampleBuffer
{
    CMSampleBufferRef newSampleBuffer;
    CMSampleTimingInfo sampleTimingInfo;
    sampleTimingInfo.duration = CMTimeMake(1, 30);
    sampleTimingInfo.presentationTimeStamp = CMTimeMake(writeFrames, 30);
    sampleTimingInfo.decodeTimeStamp = kCMTimeInvalid;

    CMSampleBufferCreateCopyWithNewTiming(kCFAllocatorDefault,
                                          sampleBuffer,
                                          1,
                                          &sampleTimingInfo,
                                          &newSampleBuffer);
    return newSampleBuffer;
}

Best Answer

At least one problem is that you are appending every sample buffer to the video writer input. You need to append the samples coming from the audio buffers to the audio writer input.

You should look at this SO question and its answer!

performance-issues-when-using-avcapturevideodataoutput-and-avcaptureaudiodataout
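Because both data outputs share the same delegate method, the callback has to tell audio buffers apart from video buffers before appending. A minimal sketch of that routing, assuming the `videoDataOutput`/`audioDataOutput` objects from the setup code above (and the two writer inputs) are kept in instance variables; the timestamp-rewriting and pause logic are omitted for clarity:

```
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    if (!CMSampleBufferDataIsReady(sampleBuffer)) { return; }

    if (captureOutput == videoDataOutput) {
        // Video frames go to the video writer input.
        if ([videoWriterInput isReadyForMoreMediaData]) {
            [videoWriterInput appendSampleBuffer:sampleBuffer];
        }
    } else if (captureOutput == audioDataOutput) {
        // Audio samples go to the audio writer input.
        if ([audioWriterInput isReadyForMoreMediaData]) {
            [audioWriterInput appendSampleBuffer:sampleBuffer];
        }
    }
}
```

Alternatively, you can inspect the buffer itself: `CMFormatDescriptionGetMediaType(CMSampleBufferGetFormatDescription(sampleBuffer))` returns `kCMMediaType_Audio` for audio buffers and `kCMMediaType_Video` for video frames.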

Regarding "ios - How to write video and audio simultaneously using AVCaptureVideoDataOutput and AVCaptureAudioDataOutput?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/23776820/
