
iphone - Performance issues when using AVCaptureVideoDataOutput and AVCaptureAudioDataOutput


I'm having latency problems while recording audio + video with AVCaptureVideoDataOutput and AVCaptureAudioDataOutput. Sometimes the video freezes for a few milliseconds, and sometimes the audio ends up out of sync with the video.

I added some logging and noticed that I first receive a lot of video buffers in the captureOutput callback, and only after a while do I get audio buffers (sometimes I receive no audio buffers at all, and the resulting video has no sound). If I comment out the code that processes the video buffers, I get the audio buffers without any problem.

Here is the code I'm using:

-(void)initMovieOutput:(AVCaptureSession *)captureSessionLocal
{
    AVCaptureVideoDataOutput *dataOutput = [[AVCaptureVideoDataOutput alloc] init];
    self._videoOutput = dataOutput;
    [dataOutput release];

    self._videoOutput.alwaysDiscardsLateVideoFrames = NO;
    self._videoOutput.videoSettings = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange]
                                                                   forKey:(id)kCVPixelBufferPixelFormatTypeKey];

    AVCaptureAudioDataOutput *audioOutput = [[AVCaptureAudioDataOutput alloc] init];
    self._audioOutput = audioOutput;
    [audioOutput release];

    [captureSessionLocal addOutput:self._videoOutput];
    [captureSessionLocal addOutput:self._audioOutput];

    // Setup the queue
    dispatch_queue_t queue = dispatch_queue_create("MyQueue", NULL);
    [self._videoOutput setSampleBufferDelegate:self queue:queue];
    [self._audioOutput setSampleBufferDelegate:self queue:queue];
    dispatch_release(queue);
}

Here is where I set up the writer:

-(BOOL) setupWriter:(NSURL *)videoURL session:(AVCaptureSession *)captureSessionLocal
{
    NSError *error = nil;
    self._videoWriter = [[AVAssetWriter alloc] initWithURL:videoURL
                                                  fileType:AVFileTypeQuickTimeMovie
                                                     error:&error];
    NSParameterAssert(self._videoWriter);

    // Add the video input
    NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                   AVVideoCodecH264, AVVideoCodecKey,
                                   [NSNumber numberWithInt:640], AVVideoWidthKey,
                                   [NSNumber numberWithInt:480], AVVideoHeightKey,
                                   nil];

    self._videoWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                                outputSettings:videoSettings];

    NSParameterAssert(self._videoWriterInput);
    self._videoWriterInput.expectsMediaDataInRealTime = YES;
    self._videoWriterInput.transform = [self returnOrientation];

    // Add the audio input
    AudioChannelLayout acl;
    bzero(&acl, sizeof(acl));
    acl.mChannelLayoutTag = kAudioChannelLayoutTag_Mono;

    NSDictionary *audioOutputSettings = nil;
    // Both types of audio settings cause the output video file to be corrupted.

    // should work on any device, requires more space
    audioOutputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                           [NSNumber numberWithInt:kAudioFormatAppleLossless], AVFormatIDKey,
                           [NSNumber numberWithInt:16], AVEncoderBitDepthHintKey,
                           [NSNumber numberWithFloat:44100.0], AVSampleRateKey,
                           [NSNumber numberWithInt:1], AVNumberOfChannelsKey,
                           [NSData dataWithBytes:&acl length:sizeof(acl)], AVChannelLayoutKey,
                           nil];

    self._audioWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio
                                                                outputSettings:audioOutputSettings];

    self._audioWriterInput.expectsMediaDataInRealTime = YES;

    // add the inputs
    [self._videoWriter addInput:_videoWriterInput];
    [self._videoWriter addInput:_audioWriterInput];

    return YES;
}

And here is the callback:

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    if( !CMSampleBufferDataIsReady(sampleBuffer) )
    {
        NSLog( @"sample buffer is not ready. Skipping sample" );
        return;
    }

    if( _videoWriter.status != AVAssetWriterStatusCompleted )
    {
        if( _videoWriter.status != AVAssetWriterStatusWriting )
        {
            CMTime lastSampleTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
            [_videoWriter startWriting];
            [_videoWriter startSessionAtSourceTime:lastSampleTime];
        }

        if( captureOutput == _videoOutput )
        {
            if( [self._videoWriterInput isReadyForMoreMediaData] )
            {
                [self newVideoSample:sampleBuffer];
            }
        }
        else if( captureOutput == _audioOutput )
        {
            if( [self._audioWriterInput isReadyForMoreMediaData] )
            {
                [self newAudioSample:sampleBuffer];
            }
        }
    }
}

-(void) newAudioSample:(CMSampleBufferRef)sampleBuffer
{
    if( _videoWriter.status > AVAssetWriterStatusWriting )
    {
        [self NSLogPrint:[NSString stringWithFormat:@"Audio:Warning: writer status is %d", _videoWriter.status]];
        if( _videoWriter.status == AVAssetWriterStatusFailed )
            [self NSLogPrint:[NSString stringWithFormat:@"Audio:Error: %@", _videoWriter.error]];
        return;
    }

    if( ![_audioWriterInput appendSampleBuffer:sampleBuffer] )
        [self NSLogPrint:[NSString stringWithFormat:@"Unable to write to audio input"]];
}

-(void) newVideoSample:(CMSampleBufferRef)sampleBuffer
{
    if( _videoWriter.status > AVAssetWriterStatusWriting )
    {
        [self NSLogPrint:[NSString stringWithFormat:@"Video:Warning: writer status is %d", _videoWriter.status]];
        if( _videoWriter.status == AVAssetWriterStatusFailed )
            [self NSLogPrint:[NSString stringWithFormat:@"Video:Error: %@", _videoWriter.error]];
        return;
    }

    if( ![_videoWriterInput appendSampleBuffer:sampleBuffer] )
        [self NSLogPrint:[NSString stringWithFormat:@"Unable to write to video input"]];
}

Is there something wrong with my code, and why does the video stutter? (I'm testing on an iPhone 4 running iOS 4.2.1.)

Best Answer

It looks like you are using a single serial queue, so the audio output's callbacks end up queued right behind the video output's callbacks. Consider using concurrent queues.
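One caveat: the queue passed to setSampleBufferDelegate:queue: is expected to be a serial queue so that buffers are delivered in order, so the usual way to apply this advice is to give the video and audio outputs their own separate queues instead of sharing one. Below is a minimal sketch under that assumption, reusing the _videoOutput and _audioOutput properties from the question; the method name and queue labels are illustrative only.

-(void)attachSampleBufferDelegates
{
    // Dedicated serial queue for video frames.
    dispatch_queue_t videoQueue = dispatch_queue_create("com.example.capture.video", NULL);
    [self._videoOutput setSampleBufferDelegate:self queue:videoQueue];
    dispatch_release(videoQueue);

    // Separate serial queue for audio buffers, so a slow video callback
    // cannot starve audio delivery.
    dispatch_queue_t audioQueue = dispatch_queue_create("com.example.capture.audio", NULL);
    [self._audioOutput setSampleBufferDelegate:self queue:audioQueue];
    dispatch_release(audioQueue);
}

With separate queues the two delegate callbacks can run at the same time, so any shared state touched in captureOutput:didOutputSampleBuffer:fromConnection: (for example the writer status check and the startWriting call) needs its own synchronization. Keeping the per-frame work in the video callback as light as possible, or setting alwaysDiscardsLateVideoFrames to YES, also helps the capture session keep up.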

Regarding "iphone - Performance issues when using AVCaptureVideoDataOutput and AVCaptureAudioDataOutput", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/11274652/
