I have two USB cameras. One is a low-cost webcam and the other is a low-cost USB microscope; both were bought on eBay. The microscope is really just another webcam.
I want to use the USB microscope on Mac OS X 10.5 with QTKit. MyRecorder works fine with my low-cost webcam, but when I connect the microscope it only shows black video.
If I open QuickTime Player and create a movie recording, I get the error message: "Recording failed because no data was received. Make sure that the media input source is turned on and playing."
The Sequence Grabber demo works with both cameras.
miXscope also works with both cameras (it appears to use the Sequence Grabber).
Here is MyRecorder, stripped down for a better overview:
- (void)awakeFromNib
{
    NSError *error;
    mCaptureSession = [[QTCaptureSession alloc] init];

    QTCaptureDevice *videoDevice = [QTCaptureDevice defaultInputDeviceWithMediaType:QTMediaTypeVideo];
    BOOL success = [videoDevice open:&error];
    if (!success)
    {
        videoDevice = [QTCaptureDevice defaultInputDeviceWithMediaType:QTMediaTypeMuxed];
        success = [videoDevice open:&error];
    }
    if (!success) return;

    mCaptureVideoDeviceInput = [[QTCaptureDeviceInput alloc] initWithDevice:videoDevice];
    success = [mCaptureSession addInput:mCaptureVideoDeviceInput error:&error];
    if (!success) return;

    if (![videoDevice hasMediaType:QTMediaTypeSound] && ![videoDevice hasMediaType:QTMediaTypeMuxed])
    {
        QTCaptureDevice *audioDevice = [QTCaptureDevice defaultInputDeviceWithMediaType:QTMediaTypeSound];
        success = audioDevice && [audioDevice open:&error];
        if (success)
        {
            mCaptureAudioDeviceInput = [[QTCaptureDeviceInput alloc] initWithDevice:audioDevice];
            success = [mCaptureSession addInput:mCaptureAudioDeviceInput error:&error];
        }
    }

    mCaptureMovieFileOutput = [[QTCaptureMovieFileOutput alloc] init];
    success = [mCaptureSession addOutput:mCaptureMovieFileOutput error:&error];
    if (!success) return;

    [mCaptureMovieFileOutput setDelegate:self];
    [mCaptureView setCaptureSession:mCaptureSession];
    [mCaptureSession startRunning];
}
What do I need to add or change to get my microscope working with MyRecorder? (I have tried logging everything I could think of, but none of the QTKit methods I call report an error.)
Note: I have gone through every StackOverflow question I could find on this topic; two come close, but they do not solve this problem.
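One thing worth checking first (a minimal sketch, not from the original post; the name matching is illustrative): -[QTCaptureDevice defaultInputDeviceWithMediaType:] always returns whichever device the system currently considers the default, so the code above may never hand the session the microscope at all. Enumerating the devices and picking the microscope explicitly rules that out:

- (QTCaptureDevice *)microscopeDevice
{
    /* List every video-capable capture device QTKit can see. */
    NSArray *devices = [[QTCaptureDevice inputDevicesWithMediaType:QTMediaTypeVideo]
        arrayByAddingObjectsFromArray:[QTCaptureDevice inputDevicesWithMediaType:QTMediaTypeMuxed]];

    for (QTCaptureDevice *device in devices) {
        NSLog(@"Capture device: %@ (uniqueID %@)", [device localizedDisplayName], [device uniqueID]);

        /* "Microscope" is a placeholder; match whatever name the log above shows. */
        if ([[device localizedDisplayName] rangeOfString:@"Microscope"].location != NSNotFound)
            return device;
    }
    return nil;
}

The returned device can then be opened and passed to QTCaptureDeviceInput in place of the defaultInputDeviceWithMediaType: result in awakeFromNib above.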
Best Answer
Check the following code:
- (id)init
{
    self = [super init];
    if (self) {
        [self setOutputFile:[@"~/Desktop/Audio Recording.aif" stringByStandardizingPath]];
    }
    return self;
}
- (void)awakeFromNib
{
    BOOL success;
    NSError *error;

    /* Find and open an audio input device. */
    QTCaptureDevice *audioDevice = [QTCaptureDevice defaultInputDeviceWithMediaType:QTMediaTypeSound];
    success = [audioDevice open:&error];
    if (!success) {
        [[NSAlert alertWithError:error] runModal];
        return;
    }

    /* Create the capture session. */
    captureSession = [[QTCaptureSession alloc] init];

    /* Add a device input for the audio device to the session. */
    captureAudioDeviceInput = [[QTCaptureDeviceInput alloc] initWithDevice:audioDevice];
    success = [captureSession addInput:captureAudioDeviceInput error:&error];
    if (!success) {
        [captureAudioDeviceInput release];
        captureAudioDeviceInput = nil;
        [audioDevice close];
        [captureSession release];
        captureSession = nil;
        [[NSAlert alertWithError:error] runModal];
        return;
    }

    /* Create an audio data output for reading captured audio buffers and add it to the capture session. */
    captureAudioDataOutput = [[QTCaptureDecompressedAudioOutput alloc] init];
    [captureAudioDataOutput setDelegate:self]; /* Captured audio buffers will be provided to the delegate via the captureOutput:didOutputAudioSampleBuffer:fromConnection: delegate method. */
    success = [captureSession addOutput:captureAudioDataOutput error:&error];
    if (!success) {
        [captureAudioDeviceInput release];
        captureAudioDeviceInput = nil;
        [audioDevice close];
        [captureAudioDataOutput release];
        captureAudioDataOutput = nil;
        [captureSession release];
        captureSession = nil;
        [[NSAlert alertWithError:error] runModal];
        return;
    }

    /* Create an effect audio unit to add an effect to the audio before it is written to a file. */
    OSStatus err = noErr;
    AudioComponentDescription effectAudioUnitComponentDescription;
    effectAudioUnitComponentDescription.componentType = kAudioUnitType_Effect;
    effectAudioUnitComponentDescription.componentSubType = kAudioUnitSubType_Delay;
    effectAudioUnitComponentDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
    effectAudioUnitComponentDescription.componentFlags = 0;
    effectAudioUnitComponentDescription.componentFlagsMask = 0;
    AudioComponent effectAudioUnitComponent = AudioComponentFindNext(NULL, &effectAudioUnitComponentDescription);
    err = AudioComponentInstanceNew(effectAudioUnitComponent, &effectAudioUnit);
    if (noErr == err) {
        /* Set a callback on the effect unit that will supply the audio buffers received from the audio data output. */
        AURenderCallbackStruct renderCallbackStruct;
        renderCallbackStruct.inputProc = PushCurrentInputBufferIntoAudioUnit;
        renderCallbackStruct.inputProcRefCon = self;
        err = AudioUnitSetProperty(effectAudioUnit, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Input, 0, &renderCallbackStruct, sizeof(renderCallbackStruct));
    }
    if (noErr != err) {
        if (effectAudioUnit) {
            AudioComponentInstanceDispose(effectAudioUnit);
            effectAudioUnit = NULL;
        }
        [captureAudioDeviceInput release];
        captureAudioDeviceInput = nil;
        [audioDevice close];
        [captureSession release];
        captureSession = nil;
        [[NSAlert alertWithError:[NSError errorWithDomain:NSOSStatusErrorDomain code:err userInfo:nil]] runModal];
        return;
    }

    /* Start the capture session. This will cause the audio data output delegate method to be called for each new audio buffer that is captured from the input device. */
    [captureSession startRunning];

    /* Become the window's delegate so that the capture session can be stopped and cleaned up immediately after the window is closed. */
    [window setDelegate:self];
}
- (void)windowWillClose:(NSNotification *)notification
{
    [self setRecording:NO];
    [captureSession stopRunning];

    QTCaptureDevice *audioDevice = [captureAudioDeviceInput device];
    if ([audioDevice isOpen])
        [audioDevice close];
}
- (void)dealloc
{
    [captureSession release];
    [captureAudioDeviceInput release];
    [captureAudioDataOutput release];
    [outputFile release];

    if (extAudioFile)
        ExtAudioFileDispose(extAudioFile);
    if (effectAudioUnit) {
        if (didSetUpAudioUnits)
            AudioUnitUninitialize(effectAudioUnit);
        AudioComponentInstanceDispose(effectAudioUnit);
    }

    [super dealloc];
}
#pragma mark ======== Audio capture methods =========
/*
Called periodically by the QTCaptureAudioDataOutput as it receives QTSampleBuffer objects containing audio frames captured by the QTCaptureSession.
Each QTSampleBuffer will contain multiple frames of audio encoded in the canonical non-interleaved linear PCM format compatible with AudioUnits.
*/
- (void)captureOutput:(QTCaptureOutput *)captureOutput didOutputAudioSampleBuffer:(QTSampleBuffer *)sampleBuffer fromConnection:(QTCaptureConnection *)connection
{
    OSStatus err = noErr;
    BOOL isRecording = [self isRecording];

    /* Get the sample buffer's AudioStreamBasicDescription, which will be used to set the input format of the effect audio unit and the ExtAudioFile. */
    QTFormatDescription *formatDescription = [sampleBuffer formatDescription];
    NSValue *sampleBufferASBDValue = [formatDescription attributeForKey:QTFormatDescriptionAudioStreamBasicDescriptionAttribute];
    if (!sampleBufferASBDValue)
        return;

    AudioStreamBasicDescription sampleBufferASBD = {0};
    [sampleBufferASBDValue getValue:&sampleBufferASBD];

    if ((sampleBufferASBD.mChannelsPerFrame != currentInputASBD.mChannelsPerFrame) || (sampleBufferASBD.mSampleRate != currentInputASBD.mSampleRate)) {
        /* Although QTCaptureAudioDataOutput guarantees that it will output sample buffers in the canonical format, the number of channels or the sample rate of the audio can change at any time while the capture session is running. If this occurs, the audio unit receiving the buffers from the QTCaptureAudioDataOutput needs to be reconfigured with the new format. This also must be done when a buffer is received for the first time. */
        currentInputASBD = sampleBufferASBD;

        if (didSetUpAudioUnits) {
            /* The audio units were previously set up, so they must be uninitialized now. */
            AudioUnitUninitialize(effectAudioUnit);

            /* If recording was in progress, the recording needs to be stopped because the audio format changed. */
            if (extAudioFile) {
                ExtAudioFileDispose(extAudioFile);
                extAudioFile = NULL;
            }
        } else {
            didSetUpAudioUnits = YES;
        }

        /* Set the input and output formats of the effect audio unit to match that of the sample buffer. */
        err = AudioUnitSetProperty(effectAudioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &currentInputASBD, sizeof(currentInputASBD));
        if (noErr == err)
            err = AudioUnitSetProperty(effectAudioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &currentInputASBD, sizeof(currentInputASBD));
        if (noErr == err)
            err = AudioUnitInitialize(effectAudioUnit);
        if (noErr != err) {
            NSLog(@"Failed to set up audio units (%d)", err);
            didSetUpAudioUnits = NO;
            bzero(&currentInputASBD, sizeof(currentInputASBD));
        }
    }

    if (isRecording && !extAudioFile) {
        /* Start recording by creating an ExtAudioFile and configuring it with the same sample rate and channel layout as those of the current sample buffer. */
        AudioStreamBasicDescription recordedASBD = {0};
        recordedASBD.mSampleRate = currentInputASBD.mSampleRate;
        recordedASBD.mFormatID = kAudioFormatLinearPCM;
        recordedASBD.mFormatFlags = kAudioFormatFlagIsBigEndian | kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
        recordedASBD.mBytesPerPacket = 2 * currentInputASBD.mChannelsPerFrame;
        recordedASBD.mFramesPerPacket = 1;
        recordedASBD.mBytesPerFrame = 2 * currentInputASBD.mChannelsPerFrame;
        recordedASBD.mChannelsPerFrame = currentInputASBD.mChannelsPerFrame;
        recordedASBD.mBitsPerChannel = 16;

        NSData *inputChannelLayoutData = [formatDescription attributeForKey:QTFormatDescriptionAudioChannelLayoutAttribute];
        AudioChannelLayout *recordedChannelLayout = (AudioChannelLayout *)[inputChannelLayoutData bytes];

        err = ExtAudioFileCreateWithURL((CFURLRef)[NSURL fileURLWithPath:[self outputFile]], kAudioFileAIFFType, &recordedASBD, recordedChannelLayout, kAudioFileFlags_EraseFile, &extAudioFile);
        if (noErr == err)
            err = ExtAudioFileSetProperty(extAudioFile, kExtAudioFileProperty_ClientDataFormat, sizeof(currentInputASBD), &currentInputASBD);
        if (noErr != err) {
            NSLog(@"Failed to set up ExtAudioFile (%d)", err);
            ExtAudioFileDispose(extAudioFile);
            extAudioFile = NULL;
        }
    } else if (!isRecording && extAudioFile) {
        /* Stop recording by disposing of the ExtAudioFile. */
        ExtAudioFileDispose(extAudioFile);
        extAudioFile = NULL;
    }

    NSUInteger numberOfFrames = [sampleBuffer numberOfSamples]; /* -[QTSampleBuffer numberOfSamples] corresponds to the number of CoreAudio audio frames. */

    /* In order to render continuously, the effect audio unit needs a new time stamp for each buffer. Use the number of frames for each unit of time. */
    currentSampleTime += (double)numberOfFrames;

    AudioTimeStamp timeStamp = {0};
    timeStamp.mSampleTime = currentSampleTime;
    timeStamp.mFlags |= kAudioTimeStampSampleTimeValid;

    AudioUnitRenderActionFlags flags = 0;

    /* Create an AudioBufferList large enough to hold the number of frames from the sample buffer in 32-bit floating point PCM format. */
    AudioBufferList *outputABL = calloc(1, sizeof(*outputABL) + (currentInputASBD.mChannelsPerFrame - 1) * sizeof(outputABL->mBuffers[0]));
    outputABL->mNumberBuffers = currentInputASBD.mChannelsPerFrame;

    UInt32 channelIndex;
    for (channelIndex = 0; channelIndex < currentInputASBD.mChannelsPerFrame; channelIndex++) {
        UInt32 dataSize = numberOfFrames * currentInputASBD.mBytesPerFrame;
        outputABL->mBuffers[channelIndex].mDataByteSize = dataSize;
        outputABL->mBuffers[channelIndex].mData = malloc(dataSize);
        outputABL->mBuffers[channelIndex].mNumberChannels = 1;
    }

    /*
        Get an audio buffer list from the sample buffer and assign it to the currentInputAudioBufferList instance variable.
        The effect audio unit render callback, PushCurrentInputBufferIntoAudioUnit(), can access this value by calling the currentInputAudioBufferList method.
    */
    currentInputAudioBufferList = [sampleBuffer audioBufferListWithOptions:QTSampleBufferAudioBufferListOptionAssure16ByteAlignment];

    /* Tell the effect audio unit to render. This will synchronously call PushCurrentInputBufferIntoAudioUnit(), which will feed the audio buffer list into the effect audio unit. */
    err = AudioUnitRender(effectAudioUnit, &flags, &timeStamp, 0, numberOfFrames, outputABL);
    currentInputAudioBufferList = NULL;

    if ((noErr == err) && extAudioFile) {
        err = ExtAudioFileWriteAsync(extAudioFile, numberOfFrames, outputABL);
    }

    for (channelIndex = 0; channelIndex < currentInputASBD.mChannelsPerFrame; channelIndex++) {
        free(outputABL->mBuffers[channelIndex].mData);
    }
    free(outputABL);
}
/* Used by PushCurrentInputBufferIntoAudioUnit() to access the current audio buffer list that has been output by the QTCaptureAudioDataOutput. */
- (AudioBufferList *)currentInputAudioBufferList
{
    return currentInputAudioBufferList;
}
This comes from THIS tutorial; also take a further look at the audio capture methods under the #pragma mark in the sample code provided with the tutorial.
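Since the original problem is video rather than audio, the analogous check on the video side (a minimal sketch, not from the tutorial; it assumes the mCaptureSession ivar from the question's code) is to attach a QTCaptureDecompressedVideoOutput and log whether any frames arrive from the microscope at all:

/* Attach a decompressed video output so every captured frame triggers a delegate callback. */
QTCaptureDecompressedVideoOutput *videoOutput = [[QTCaptureDecompressedVideoOutput alloc] init];
[videoOutput setDelegate:self];
NSError *error = nil;
if (![mCaptureSession addOutput:videoOutput error:&error])
    NSLog(@"Could not add video output: %@", error);

/* Delegate method, called once per captured frame; if it never fires, the device is not delivering data to the session. */
- (void)captureOutput:(QTCaptureOutput *)captureOutput didOutputVideoFrame:(CVImageBufferRef)videoFrame withSampleBuffer:(QTSampleBuffer *)sampleBuffer fromConnection:(QTCaptureConnection *)connection
{
    NSLog(@"Received frame: %@", sampleBuffer);
}

If the delegate fires for the webcam but not for the microscope, the session itself is not receiving data, which points at the device or its driver rather than at the preview view or the movie file output.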
Hope this helps!
Regarding cocoa - QTCaptureSession not receiving any data from the camera, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/32709720/