
ios - Appending audio samples from a stream to AVAssetWriter

Reposted · Author: 行者123 · Updated: 2023-12-02 23:43:25

I am working on a project that records video from the camera while the audio comes from a stream. The audio frames are clearly out of sync with the video frames.
If I record only the audio frames from the stream with AVAssetWriter, without video, it works fine. But as soon as I append both video and audio frames, I can't hear any audio at all.

This is the method that converts the audio data from the stream into a CMSampleBuffer:

AudioStreamBasicDescription monoStreamFormat = [self getAudioDescription];

// Build a format description for the incoming audio.
CMFormatDescriptionRef format = NULL;
OSStatus status = CMAudioFormatDescriptionCreate(kCFAllocatorDefault, &monoStreamFormat, 0, NULL, 0, NULL, NULL, &format);
if (status != noErr) {
    // really shouldn't happen
    return nil;
}

// duration = 1 sample at 44.1 kHz; presentation time starts at zero
CMSampleTimingInfo timing = { CMTimeMake(1, 44100), kCMTimeZero, kCMTimeInvalid };

// Create an empty sample buffer that will hold numSamples samples.
CMSampleBufferRef sampleBuffer = NULL;
status = CMSampleBufferCreate(kCFAllocatorDefault, NULL, false, NULL, NULL, format, numSamples, 1, &timing, 0, NULL, &sampleBuffer);
if (status != noErr) {
    // couldn't create the sample buffer
    NSLog(@"Failed to create sample buffer");
    CFRelease(format);
    return nil;
}

// Attach the samples (an AudioBufferList) to the buffer.
status = CMSampleBufferSetDataBufferFromAudioBufferList(sampleBuffer,
                                                        kCFAllocatorDefault,
                                                        kCFAllocatorDefault,
                                                        0,
                                                        samples);
if (status != noErr) {
    NSLog(@"Failed to add samples to sample buffer");
    CFRelease(sampleBuffer);
    CFRelease(format);
    return nil;
}
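One likely cause of the silence is that the timing struct above stamps every buffer with `kCMTimeZero` as its presentation time, so all audio buffers collide at t = 0. Each appended buffer needs a monotonically increasing presentation timestamp, which can be derived from the running count of samples already written. A minimal sketch of that arithmetic in plain C (the type and function names are hypothetical, not part of the original code):

```c
#include <stdint.h>

/* Presentation time expressed as value/timescale, mirroring CMTime. */
typedef struct {
    int64_t value;     /* sample count */
    int32_t timescale; /* samples per second */
} SampleTime;

/* Presentation time of the next buffer, given how many samples
 * have already been appended at the given sample rate. */
static SampleTime next_presentation_time(int64_t samples_written,
                                         int32_t sample_rate) {
    SampleTime t = { samples_written, sample_rate };
    return t;
}

/* Same instant expressed in seconds, for sanity checking. */
static double sample_time_seconds(SampleTime t) {
    return (double)t.value / (double)t.timescale;
}
```

With CMTime this would correspond to using `CMTimeMake(samplesWritten, 44100)` as the `presentationTimeStamp` field, incrementing `samplesWritten += numSamples` after each successful append.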

I don't know whether this is a timing problem, but I would like to append the audio frames starting from the first second of the video.

Is that possible?

Thanks.

Best Answer

In the end I got it working like this:

// Timebase ratio from mach_timebase_info(); scales host-time ticks to ns.
mach_timebase_info_data_t info;
mach_timebase_info(&info);

double _hostTimeToNSFactor = (double)info.numer / (double)info.denom;

uint64_t timeNS = (uint64_t)(hostTime * _hostTimeToNSFactor);
CMTime presentationTime = self.initialiseTime; // CMTimeMake(timeNS, 1000000000);
CMSampleTimingInfo timing = { CMTimeMake(1, 44100), presentationTime, kCMTimeInvalid };
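The host-time conversion used here is the standard mach timebase scaling: nanoseconds = hostTime × numer / denom, where numer and denom are the values reported by `mach_timebase_info()`. The scaling itself is plain integer arithmetic and can be sketched in isolation (C; the 125/3 ratio in the usage below is just an example value, typical of some Apple devices, while Intel Macs report 1/1):

```c
#include <stdint.h>

/* Convert a mach host-time tick count to nanoseconds using the
 * timebase ratio numer/denom from mach_timebase_info().
 * Multiply before dividing so the fractional part is not lost. */
static uint64_t host_time_to_ns(uint64_t host_time,
                                uint32_t numer, uint32_t denom) {
    return host_time * (uint64_t)numer / denom;
}
```

For example, with a 125/3 timebase, 24,000,000 ticks scale to exactly 1,000,000,000 ns, i.e. one second.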

Regarding "ios - Appending audio samples from a stream to AVAssetWriter", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/35059852/
