
iphone - Extracting an audio channel from linear PCM


I want to extract channel audio from a raw LPCM file, i.e. extract the left and right channels of a stereo LPCM file. The LPCM is 16-bit depth, interleaved, 2-channel, little-endian. From what I gather, the byte order is {LeftChannel, RightChannel, LeftChannel, RightChannel, ...}, and since it is 16-bit depth, there will be 2 bytes of sample per channel, right?
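For reference, a sketch of how that interleaved little-endian layout maps to byte offsets (illustration only, not code from the question):

    /* One stereo frame = 4 bytes:
     *   bytes 0-1: left sample  (low byte first, little-endian)
     *   bytes 2-3: right sample (low byte first, little-endian)
     * Frame n therefore starts at byte offset n * 4.
     */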

So my question is, if I want to extract the left channel, would I take the bytes at addresses 0, 2, 4, 6, ..., n*2, with the right channel at 1, 3, 5, ..., (n*2 + 1)?

Also, after extracting the audio channel, should I set the format of the extracted channel to 16-bit depth, 1 channel?

Thanks in advance

This is the code I currently use to extract PCM audio from an AssetReader. This code works fine for writing a music file without extracting its channels, so the problem may be caused by the format or something else...

    NSURL *assetURL = [song valueForProperty:MPMediaItemPropertyAssetURL];
    AVURLAsset *songAsset = [AVURLAsset URLAssetWithURL:assetURL options:nil];
    NSDictionary *outputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
        [NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
        [NSNumber numberWithFloat:44100.0], AVSampleRateKey,
        [NSNumber numberWithInt:2], AVNumberOfChannelsKey,
        // [NSData dataWithBytes:&channelLayout length:sizeof(AudioChannelLayout)], AVChannelLayoutKey,
        [NSNumber numberWithInt:16], AVLinearPCMBitDepthKey,
        [NSNumber numberWithBool:NO], AVLinearPCMIsNonInterleaved,
        [NSNumber numberWithBool:NO], AVLinearPCMIsFloatKey,
        [NSNumber numberWithBool:NO], AVLinearPCMIsBigEndianKey,
        nil];
    NSError *assetError = nil;
    AVAssetReader *assetReader = [[AVAssetReader assetReaderWithAsset:songAsset
                                                                error:&assetError]
                                  retain];
    if (assetError) {
        NSLog(@"error: %@", assetError);
        return;
    }

    AVAssetReaderOutput *assetReaderOutput = [[AVAssetReaderAudioMixOutput
                                               assetReaderAudioMixOutputWithAudioTracks:songAsset.tracks
                                                                           audioSettings:outputSettings]
                                              retain];
    if (![assetReader canAddOutput:assetReaderOutput]) {
        NSLog(@"can't add reader output... die!");
        return;
    }
    [assetReader addOutput:assetReaderOutput];


    NSArray *dirs = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *documentsDirectoryPath = [dirs objectAtIndex:0];

    // CODE TO SPLIT STEREO
    [self setupAudioWithFormatMono:kAudioFormatLinearPCM];
    NSString *splitExportPath = [[documentsDirectoryPath stringByAppendingPathComponent:@"monoleft.caf"] retain];
    if ([[NSFileManager defaultManager] fileExistsAtPath:splitExportPath]) {
        [[NSFileManager defaultManager] removeItemAtPath:splitExportPath error:nil];
    }

    AudioFileID mRecordFile;
    NSURL *splitExportURL = [NSURL fileURLWithPath:splitExportPath];

    OSStatus status = AudioFileCreateWithURL(splitExportURL, kAudioFileCAFType, &_streamFormat,
                                             kAudioFileFlags_EraseFile, &mRecordFile);

    NSLog(@"status os %d", status);

    [assetReader startReading];

    CMSampleBufferRef sampBuffer = [assetReaderOutput copyNextSampleBuffer];
    UInt32 countsamp = CMSampleBufferGetNumSamples(sampBuffer);
    NSLog(@"number of samples %d", countsamp);

    SInt64 countByteBuf = 0;
    SInt64 countPacketBuf = 0;
    UInt32 numBytesIO = 0;
    UInt32 numPacketsIO = 0;
    NSMutableData *bufferMono = [NSMutableData new];
NSMutableData * bufferMono = [NSMutableData new];
    while (sampBuffer) {

        AudioBufferList audioBufferList;
        CMBlockBufferRef blockBuffer;
        CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampBuffer, NULL, &audioBufferList,
                                                                sizeof(audioBufferList), NULL, NULL,
                                                                0, &blockBuffer);
        for (int y = 0; y < audioBufferList.mNumberBuffers; y++) {
            AudioBuffer audioBuffer = audioBufferList.mBuffers[y];
            //frames = audioBuffer.mData;
            NSLog(@"the number of channel for buffer number %d is %d", y, audioBuffer.mNumberChannels);
            NSLog(@"The buffer size is %d", audioBuffer.mDataByteSize);

            // Append the left (mono) channel to the buffer data: each interleaved
            // stereo frame is 4 bytes, and the left sample is its first 2 bytes.
            for (int i = 0; i < audioBuffer.mDataByteSize; i = i + 4) {
                [bufferMono appendBytes:(audioBuffer.mData + i) length:2];
            }

            // The number of bytes in the mutable data containing the mono audio
            numBytesIO = [bufferMono length];
            numPacketsIO = numBytesIO / 2;
            NSLog(@"numpacketsIO %d", numPacketsIO);
            // Write the de-interleaved mono data, not the raw stereo buffer
            // (for constant-bitrate PCM the packet-description argument could be NULL)
            status = AudioFileWritePackets(mRecordFile, NO, numBytesIO, &_packetFormat, countPacketBuf,
                                           &numPacketsIO, [bufferMono bytes]);
            NSLog(@"status for writebyte %d, packets written %d", status, numPacketsIO);
            if (numPacketsIO != (numBytesIO / 2)) {
                NSLog(@"Something wrong");
                assert(0);
            }

            countPacketBuf = countPacketBuf + numPacketsIO;
            [bufferMono setLength:0];
        }

        CFRelease(blockBuffer); // release the block buffer retained on our behalf
        CFRelease(sampBuffer);  // copyNextSampleBuffer returns a retained buffer
        sampBuffer = [assetReaderOutput copyNextSampleBuffer];
        if (sampBuffer) { // guard: sampBuffer is NULL once the track is exhausted
            countsamp = CMSampleBufferGetNumSamples(sampBuffer);
            NSLog(@"number of samples %d", countsamp);
        }
    }
    AudioFileClose(mRecordFile);
    [assetReader cancelReading];
    [self performSelectorOnMainThread:@selector(updateCompletedSizeLabel:)
                           withObject:0
                        waitUntilDone:NO];

The output format for Audio File Services is as follows:

    _streamFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
    _streamFormat.mBitsPerChannel = 16;
    _streamFormat.mChannelsPerFrame = 1;
    _streamFormat.mBytesPerPacket = 2;
    _streamFormat.mBytesPerFrame = 2; // (_streamFormat.mBitsPerChannel / 8) * _streamFormat.mChannelsPerFrame
    _streamFormat.mFramesPerPacket = 1;
    _streamFormat.mSampleRate = 44100.0;

    _packetFormat.mStartOffset = 0;
    _packetFormat.mVariableFramesInPacket = 0;
    _packetFormat.mDataByteSize = 2;
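Note that the snippet above never sets mFormatID, which AudioFileCreateWithURL needs; presumably setupAudioWithFormatMono: stores the constant passed to it. A minimal sketch of what that method might look like (the method name and the _streamFormat/_packetFormat ivars come from the question; the body is an assumption):

    #import <AudioToolbox/AudioToolbox.h>

    // Hypothetical body of -setupAudioWithFormatMono:, assuming _streamFormat is an
    // AudioStreamBasicDescription ivar and _packetFormat is an
    // AudioStreamPacketDescription ivar.
    - (void)setupAudioWithFormatMono:(UInt32)formatID {
        // Zero the struct first so unused fields do not contain garbage.
        memset(&_streamFormat, 0, sizeof(_streamFormat));
        _streamFormat.mFormatID = formatID; // e.g. kAudioFormatLinearPCM
        _streamFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
        _streamFormat.mBitsPerChannel = 16;
        _streamFormat.mChannelsPerFrame = 1; // mono output
        _streamFormat.mBytesPerPacket = 2;
        _streamFormat.mBytesPerFrame = 2;
        _streamFormat.mFramesPerPacket = 1;
        _streamFormat.mSampleRate = 44100.0;

        _packetFormat.mStartOffset = 0;
        _packetFormat.mVariableFramesInPacket = 0;
        _packetFormat.mDataByteSize = 2;
    }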

Best Answer

Sounds almost right - you have 16-bit depth, so that means each sample will take 2 bytes. That means the left channel data will be in bytes {0,1}, {4,5}, {8,9} and so on. Interleaved means the samples are interleaved, not the bytes. Other than that, I would try it out and see if you have any problems with your code.
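As a concrete illustration of sample-wise (rather than byte-wise) extraction, here is a minimal C sketch that pulls one channel out of an interleaved 16-bit stereo buffer (the function and variable names are illustrative, not from the question's code):

    #include <stdint.h>
    #include <stddef.h>

    // Copy the left channel of interleaved 16-bit stereo into a mono buffer.
    // frameCount is the number of stereo frames (4 bytes each) in `stereo`.
    static void extractLeftChannel(const int16_t *stereo, int16_t *monoLeft, size_t frameCount)
    {
        for (size_t n = 0; n < frameCount; n++) {
            monoLeft[n] = stereo[2 * n];   // left sample of frame n
            // the right channel would be stereo[2 * n + 1]
        }
    }

Working on int16_t pointers keeps each 2-byte sample intact, which is exactly the "samples, not bytes" point above.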

Also after extracting the audio channel, should i set the format of the extracted channel as 16 bit depth ,1 channel?

After extraction, only one of the two channels remains, so yes, that is correct.

Regarding iphone - extracting an audio channel from linear PCM, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/4630969/
