I am recording a float stream from the microphone (it should be silence).
I have set up an audio queue whose buffers each hold 256 floats.
A typical buffer looks like this:
PACKET 0.004791, 0.012512,0.008423,0.000122,-0.000519,-0.002991,-0.000031,0.001801,-0.000641, 0.001190,-0.003143,-0.001587,0.001587,-0.015503,-0.019440,-0.015167,-0.017670, -0.018158,-0.019928,-0.019409,-0.024017,-0.019684,-0.024719,-0.044128,-0.043579, -0.043152,-0.046417,-0.045380,-0.050079,-0.050262,-0.049164,-0.040710,-0.036713, -0.051056,-0.045868,-0.035034,-0.033722,-0.028534,-0.027161,-0.022186,-0.018036, -0.012207,0.004303,-0.000824,-0.000610,0.014496,0.018005,0.019745,0.019226, 0.016144,0.013184,0.009003,0.014557,0.003357,-0.011353,-0.007751,-0.007660, -0.006409,-0.003357,-0.003510,-0.001038,-0.000092,0.007690,0.002655,0.000000, 0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000, 0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000, 0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000, 0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000, 0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000, 0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000, 0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000, 0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000, 0.000000,0.000000,0.000000,-0.006897,-0.000549,0.003174,0.003540,0.003632, 0.004578,0.005280,0.001831,0.014771,0.014954,0.001801,0.009247,0.011139, 0.005249,0.008087,0.008636,0.007385,0.007263,0.016571,0.020264,0.010590, 0.014801,0.023132,0.027039,0.031128,0.031799,0.037109,0.038757,0.049438, 0.057098,0.042786,0.045593,0.052032,0.045380,0.045227,0.045837,0.043793, 0.041931,0.043976,0.046570,0.030182,0.024475,0.029877,0.026184,0.026001, 0.026611,0.031921,0.035736,0.040710,0.053070,0.042572,0.039917,0.051636, 0.053009,0.053528,0.053009,0.054962,0.055603,0.053833,0.060638,0.050171, 0.041779,0.049194,0.046356,0.000000,0.000000,0.000000,0.000000,0.000000, 0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000, 0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000, 0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000, 0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.041931, 0.038879,0.034515,0.031494,0.026337,0.034576,0.028992,0.014038,0.018127, 0.017822,0.015137,0.015778,0.013519,0.015564,0.014832,0.023285,0.022034, 0.006317,0.010254,0.010742,0.004303,0.003784,-0.000153,-0.002502, ~
I don't understand why there appear to be random runs of zeros in the input signal. Something seems discontinuous.
My first thought was that perhaps I had a left and a right channel, and the right channel was always recording zeros. But looking at my code, I have clearly set it up as a single channel.
Then I thought maybe these are just quiet spots in the signal. But that doesn't make sense: if I genuinely had a dozen samples of near-silence, the 0.000000 runs would surely be bordered by very small values such as 0.000007 or 0.000014, yet the non-zero values sit around 0.01.
I just tried switching the audio input to an external USB microphone, which improved the resolution; the non-zero values now sit around 0.001. But the obvious discontinuities are still there...
Is the chip doing some computation that rounds to zero, and if so, can it be calibrated? What exactly is going on?
The second very strange problem I've noticed is rogue values.
Here is a sample packet containing some of them (this time with the USB microphone; you can see how the resolution has improved):
~ PACKET -0.001343, -0.001190,-0.001526,-0.001373,-0.000946,-0.001526,-0.001221,-0.001190,-0.001221, -0.001251,-0.001373,-0.001190,-0.001312,-0.001312,-0.001434,-0.001282,-0.001312, -0.001099,-0.001007,-0.001221,-0.001160,-0.001312,-0.001343,-0.001221,-0.001007, -0.001099,-0.001404,-0.001068,-0.001038,-0.001404,-0.001038,-0.001190,-0.001404, -0.001099,-0.001282,-0.001221,-0.001007,-0.001007,-0.001099,-0.001221,-0.001160, -0.001038,-0.001038,-0.001007,-0.000946,-0.001129,-0.000916,-0.000946,-0.000946, -0.000946,-0.000824,-0.000824,-0.001007,-0.000763,-0.001038,-0.000854,-0.000977, -0.000916,-0.000641,-0.000977,-0.000916,-0.000946,-0.000732,-0.000824,0.000000, 0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000, 0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000, 0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000, 0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000, 0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000, 0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000, 0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000, 0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000, 0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000, 0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000, 0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000, 0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000, 0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000, 0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000, -0.000000,2.000000,-2.000000,0.000000,-0.000000,36893488147419103232.000000,-36893488147419103232.000000,0.000000, -0.000000,8589934592.000000,-8589934592.000000,0.000000,-0.000000,158456325028528675187087900672.000000,-158456325028528675187087900672.000000,0.000000, -0.000000,131072.000000,-131072.000000,0.000000,-0.000000,2417851639229258349412352.000000,-2417851639229258349412352.000000,0.000000, -0.000000,562949953421312.000000,-562949953421312.000000,0.000031,-0.000031,10384593717069655257060992658440192.000000,-10384593717069655257060992658440192.000000,0.000000, 0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000, 0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000, 0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000, 0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000, 0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000, 0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,
This has me thoroughly confused. The glitches occur at a low rate: fewer than one frame in ten.
Does this mean I have to pre-process my audio stream (along the lines of the sketch below)?
The last time I worked with an audio device I never did that; I just fed the audio straight into a pitch-detection routine and it was fine. So now I wonder whether I was getting glitches back then too...
The glitches show up both with my built-in MacBook microphone and with the external USB microphone.
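By pre-processing I just mean a pass over each incoming buffer that throws away samples that cannot be genuine microphone input. A minimal sketch in plain C, assuming valid float PCM samples stay within [-1.0, 1.0] (the function name and threshold are my own, not part of the code further down):

    #include <math.h>
    #include <stddef.h>

    // Hypothetical sanitising pass over one audio-queue buffer:
    // zero out anything that is NaN, infinite, or far above full scale.
    // Assumes valid microphone samples lie in [-1.0, 1.0].
    static void sanitize_samples(float *buf, size_t n)
    {
        for (size_t i = 0; i < n; i++)
        {
            float v = buf[i];
            if (!isfinite(v) || fabsf(v) > 1.0f)
                buf[i] = 0.0f;   // treat a rogue value as silence
        }
    }

Calling something like this on inBuffer->mAudioData (with step samples) at the top of the callback would hide the rogue values, but it only masks the symptom; it doesn't explain the runs of zeros.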
Here is my code:
//
//  MicRecord.m
//  PitchDetect
//
//  Created by Pi on 05/01/2011.
//

#import "MicRecord.h"

void AudioInputCallback(
    void * inUserData,
    AudioQueueRef inAQ,
    AudioQueueBufferRef inBuffer,
    const AudioTimeStamp * inStartTime,
    UInt32 inNumberPacketDescriptions,
    const AudioStreamPacketDescription * inPacketDescs );

@implementation MicRecord

@synthesize fftFrame;

/*
- (id) init
{
    if (self = [super init])
    {
        [self setupWithSampleRate: 44100
                          buffers: 12
                           bufLen: 512 ];
    }
    return self;
}
*/

// - - - - - - - -

- (void) setupWithSampleRate: (int) in_sampRate
                     buffers: (int) in_nBuffers
                        step: (int) in_step
                   frameSize: (int) in_frameSize
                      target: (id)  in_target
                         sel: (SEL) in_sel
{
    sampRate  = in_sampRate;
    nBuffers  = in_nBuffers;
    bufLen    = in_step;
    frameSize = in_frameSize;
    targ      = in_target;
    sel       = in_sel;

    audioBuffer = calloc(nBuffers, sizeof(AudioQueueBufferRef *) );

    [self setupAudioFormat];
    [self setupAudioQueue];

    fftFrame = calloc(frameSize, sizeof(float) );
}

// - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

- (void) setupAudioFormat
{
    // Set the format to 32 bit, single channel, floating point, linear PCM
    const int four_bytes_per_float = 4;
    const int eight_bits_per_byte  = 8;

    memset(& dataFormat,
           (int) 0x00,
           sizeof(dataFormat) );

    dataFormat.mSampleRate       = sampRate;
    dataFormat.mFormatID         = kAudioFormatLinearPCM;
    dataFormat.mFormatFlags      = kAudioFormatFlagsNativeFloatPacked | kAudioFormatFlagIsNonInterleaved;
    dataFormat.mBytesPerPacket   = four_bytes_per_float;
    dataFormat.mFramesPerPacket  = 1;
    dataFormat.mBytesPerFrame    = four_bytes_per_float;
    dataFormat.mChannelsPerFrame = 1;
    dataFormat.mBitsPerChannel   = four_bytes_per_float * eight_bits_per_byte;
}

// - - - - - - - - - - - - - - - - -

- (void) setupAudioQueue
{
    currentPacket = 0;

    OSStatus status;
    status = AudioQueueNewInput(& dataFormat,
                                AudioInputCallback,
                                self,
                                CFRunLoopGetCurrent(),
                                kCFRunLoopCommonModes,
                                0,
                                & queue);

    for (int i = 0; i < nBuffers; i++)
    {
        status = AudioQueueAllocateBuffer(queue,
                                          bufLen,
                                          & audioBuffer[i]);

        status = AudioQueueEnqueueBuffer(queue,
                                         audioBuffer[i], 0, NULL);
    }

    status = AudioQueueFlush(queue);

    printf("Status: %d", (int) status);
}

// - - - - - - - - - - - - - - - - -

- (void) start
{
    OSStatus status = AudioQueueStart(queue, NULL);
    printf("Status: %d", (int) status);
}

// - - - - - - - - - - - - - - -

- (void) stop
{
    AudioQueueStop(queue, true);

    for (int i = 0; i < nBuffers; i++)
        AudioQueueFreeBuffer(queue, audioBuffer[i]);

    AudioQueueDispose(queue, true);
}

// - - - - - - - - - -

- (void) dealloc
{
    [self stop];
    free(audioBuffer);
    [super dealloc];
}

@end
// = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
void AudioInputCallback(
    void * inUserData,
    AudioQueueRef inAQ,
    AudioQueueBufferRef inBuffer,
    const AudioTimeStamp * inStartTime,
    UInt32 inNumberPacketDescriptions,
    const AudioStreamPacketDescription * inPacketDescs
    )
{
    MicRecord * x = (MicRecord *) inUserData;

    //if (inNumberPacketDescriptions == 0 && recordState->dataFormat.mBytesPerPacket != 0)
    //{
    //    inNumberPacketDescriptions = inBuffer->mAudioDataByteSize / recordState->dataFormat.mBytesPerPacket;
    //}

    if (0)
        printf("Handling buffer %d\n", (int) x->currentPacket);

    int step = x->bufLen;

    if (inBuffer->mAudioDataBytesCapacity != step)
    {
        printf("---");
    }

    static int k = -1;
    k++;

    static float lastVal = 0;
    static int count = 0;

    if (k < 32) {
        if (k == 0)
            printf("\nfloat buf[32*%d=%d] = {", step, 32*step);

        float * in_buf = (float *) inBuffer->mAudioData;

        printf("\n ~\nPACKET\n");

        for (int i = 0; i < step; i++)
        {
            /*
            if (fabs(in_buf[i]) < .0001 && fabs(lastVal) > .0001)
            {
                printf("%d Nonzeros\n", count);
                count = 0;
            }
            if (fabs(in_buf[i]) > .0001 && fabs(lastVal) < .0001)
            {
                printf("%d Zeros\n", count);
                count = 0;
            }
            count++;
            lastVal = in_buf[i]; */

            printf("%f,", in_buf[i] );

            if (i % 8 == 0)
                printf("\n");

            //if (count % (8 * 64) == 0)
            //    printf("\n");

            count++;
        }

        if (k == 31)
            printf("}\n");
    }

    // shift frame data down by 'step' elements
    // to make room for new data
    // imagine cutting out elts [0] thru [step-1] (ie 'step' of them)
    // first new elt at pos [0] will be [step]
    memmove(& x->fftFrame[0],     // dest first
            & x->fftFrame[step],  // src
            x->frameSize - step
            );

    memcpy(& x->fftFrame[x->frameSize - step],
           inBuffer->mAudioData,
           step * sizeof(float)
           );

    x->currentPacket += inNumberPacketDescriptions;
    // }

    AudioQueueEnqueueBuffer(x->queue, inBuffer, 0, NULL);

    [x->targ performSelector: x->sel];
}
Best answer
My first suggestion would be to move all of the printf calls out of the lowest-level callback. If they are slow, it is entirely possible that you are missing buffers here and there. I don't know whether that would show up as the blocks of zeros or the rogue samples you are seeing, but it could.
What happens if the queue is being filled faster than it is being emptied?
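As an illustration of that suggestion (my own sketch, not code from the answer): keep the callback down to a plain copy into a pre-allocated log and do the actual printing from the main run loop. The type and function names here (SampleLog, sample_log_push, sample_log_drain) and the capacity are hypothetical, and the ring buffer has no overflow protection.

    #include <stdatomic.h>
    #include <stddef.h>
    #include <stdio.h>

    // Hypothetical single-producer / single-consumer sample log.
    // The audio callback only copies floats; printing happens elsewhere.
    #define LOG_CAPACITY (64 * 1024)      // in floats; purely illustrative

    typedef struct {
        float         data[LOG_CAPACITY];
        atomic_size_t writeIndex;         // advanced by the audio callback
        atomic_size_t readIndex;          // advanced by the printing side
    } SampleLog;

    // Called from AudioInputCallback: cheap, no I/O, no allocation.
    static void sample_log_push(SampleLog *log, const float *samples, size_t n)
    {
        size_t w = atomic_load_explicit(&log->writeIndex, memory_order_relaxed);
        for (size_t i = 0; i < n; i++)
            log->data[(w + i) % LOG_CAPACITY] = samples[i];
        atomic_store_explicit(&log->writeIndex, w + n, memory_order_release);
    }

    // Called from the main run loop (e.g. on a timer): does the slow printf work.
    static void sample_log_drain(SampleLog *log)
    {
        size_t w = atomic_load_explicit(&log->writeIndex, memory_order_acquire);
        size_t r = atomic_load_explicit(&log->readIndex, memory_order_relaxed);
        while (r < w)
        {
            printf("%f,", log->data[r % LOG_CAPACITY]);
            r++;
        }
        atomic_store_explicit(&log->readIndex, r, memory_order_relaxed);
    }

In the MicRecord callback above, something like sample_log_push would replace the per-sample printf loop; the 32-packet dump could then be produced by calling sample_log_drain outside the audio path, so slow console I/O can no longer starve the audio queue of buffers.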
Regarding "iOS audio queues: glitches in audio float-stream", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/4654791/