cocoa - QTCaptureSession not receiving any data from the camera

I have two USB cameras: a low-cost webcam and a low-cost USB microscope, both bought on eBay. The microscope is really just another webcam.

I want to use the USB microscope with QTKit on Mac OS X 10.5. The MyRecorder sample works fine with my low-cost webcam, but when I connect the microscope it only shows black video.

If I open QuickTime Player and create a movie recording, I get the error message: "Recording failed because no data was received. | Make sure that the media input source is turned on and playing."

The Sequence Grabber demo works with both cameras.

miXscope also works with both cameras (it appears to use the Sequence Grabber).

Here is a stripped-down version of MyRecorder (for a better overview):

- (void)awakeFromNib
{
    NSError *error;

    /* Create the capture session. */
    mCaptureSession = [[QTCaptureSession alloc] init];

    /* Find and open the default video device; fall back to a muxed (e.g. DV) device if that fails. */
    QTCaptureDevice *videoDevice = [QTCaptureDevice defaultInputDeviceWithMediaType:QTMediaTypeVideo];
    BOOL success = [videoDevice open:&error];
    if (!success) {
        videoDevice = [QTCaptureDevice defaultInputDeviceWithMediaType:QTMediaTypeMuxed];
        success = [videoDevice open:&error];
    }
    if (!success) return;

    /* Add a device input for the video device to the session. */
    mCaptureVideoDeviceInput = [[QTCaptureDeviceInput alloc] initWithDevice:videoDevice];
    success = [mCaptureSession addInput:mCaptureVideoDeviceInput error:&error];
    if (!success) return;

    /* If the video device does not also provide audio (sound or muxed media), add the default audio device as a second input. */
    if (![videoDevice hasMediaType:QTMediaTypeSound] && ![videoDevice hasMediaType:QTMediaTypeMuxed]) {
        QTCaptureDevice *audioDevice = [QTCaptureDevice defaultInputDeviceWithMediaType:QTMediaTypeSound];
        success = audioDevice && [audioDevice open:&error];
        if (success) {
            mCaptureAudioDeviceInput = [[QTCaptureDeviceInput alloc] initWithDevice:audioDevice];
            success = [mCaptureSession addInput:mCaptureAudioDeviceInput error:&error];
        }
    }

    /* Add a movie file output and make this object its delegate. */
    mCaptureMovieFileOutput = [[QTCaptureMovieFileOutput alloc] init];
    success = [mCaptureSession addOutput:mCaptureMovieFileOutput error:&error];
    if (!success) return;
    [mCaptureMovieFileOutput setDelegate:self];

    /* Show the preview and start the session. */
    [mCaptureView setCaptureSession:mCaptureSession];
    [mCaptureSession startRunning];
}
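
For reference, since the snippet above always grabs the default video device, a quick way to see which devices QTKit actually exposes (a minimal sketch using the standard QTCaptureDevice class methods; the logging is purely illustrative and is not part of the original MyRecorder code) is to enumerate them:

    NSArray *videoDevices = [QTCaptureDevice inputDevicesWithMediaType:QTMediaTypeVideo];
    NSArray *muxedDevices = [QTCaptureDevice inputDevicesWithMediaType:QTMediaTypeMuxed];
    for (QTCaptureDevice *device in [videoDevices arrayByAddingObjectsFromArray:muxedDevices]) {
        /* Log each capture device so you can check whether the microscope shows up at all. */
        NSLog(@"capture device: %@ (uniqueID: %@)", [device localizedDisplayName], [device uniqueID]);
    }

If the microscope appears in this list, one of these devices could be opened and handed to QTCaptureDeviceInput directly instead of relying on defaultInputDeviceWithMediaType:.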

What do I need to add or change to make my microscope work with MyRecorder? (I have tried logging everything I could think of, but none of the QTKit methods I call return an error.)
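
Since none of the synchronous QTKit calls report an error here, one additional diagnostic (a sketch; the observer selector name is illustrative, while the notification name and userInfo key are the standard QTKit constants) is to listen for the session's runtime error notification, which is where QTCaptureSession reports failures that occur after startRunning:

    /* Somewhere after the session is created, e.g. at the end of awakeFromNib: */
    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(captureSessionRuntimeError:)
                                                 name:QTCaptureSessionRuntimeErrorNotification
                                               object:mCaptureSession];

    - (void)captureSessionRuntimeError:(NSNotification *)notification
    {
        /* QTCaptureSessionErrorKey carries the NSError describing what went wrong at run time. */
        NSLog(@"capture session runtime error: %@", [[notification userInfo] objectForKey:QTCaptureSessionErrorKey]);
    }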

Note: I have looked through every Stack Overflow question on the topic I could find; two come close, but they do not solve this problem.

Best answer

  1. Find and open an audio input device.
  2. Create the capture session.
  3. Add a device input for the audio device to the session.
  4. Create an audio data output for reading captured audio buffers and add it to the capture session.
  5. Set a callback on the effect unit that will supply it with the audio buffers received from the audio data output.
  6. Start the capture session.

√ - Check the following code:

- (id)init
{
    self = [super init];
    if (self) {
        [self setOutputFile:[@"~/Desktop/Audio Recording.aif" stringByStandardizingPath]];
    }
    return self;
}
- (void)awakeFromNib
{
    BOOL success;
    NSError *error;

    /* Find and open an audio input device. */
    QTCaptureDevice *audioDevice = [QTCaptureDevice defaultInputDeviceWithMediaType:QTMediaTypeSound];
    success = [audioDevice open:&error];
    if (!success) {
        [[NSAlert alertWithError:error] runModal];
        return;
    }

    /* Create the capture session. */
    captureSession = [[QTCaptureSession alloc] init];

    /* Add a device input for the audio device to the session. */
    captureAudioDeviceInput = [[QTCaptureDeviceInput alloc] initWithDevice:audioDevice];
    success = [captureSession addInput:captureAudioDeviceInput error:&error];
    if (!success) {
        [captureAudioDeviceInput release];
        captureAudioDeviceInput = nil;
        [audioDevice close];
        [captureSession release];
        captureSession = nil;
        [[NSAlert alertWithError:error] runModal];
        return;
    }

    /* Create an audio data output for reading captured audio buffers and add it to the capture session. */
    captureAudioDataOutput = [[QTCaptureDecompressedAudioOutput alloc] init];
    [captureAudioDataOutput setDelegate:self]; /* Captured audio buffers will be provided to the delegate via the captureOutput:didOutputAudioSampleBuffer:fromConnection: delegate method. */
    success = [captureSession addOutput:captureAudioDataOutput error:&error];
    if (!success) {
        [captureAudioDeviceInput release];
        captureAudioDeviceInput = nil;
        [audioDevice close];
        [captureAudioDataOutput release];
        captureAudioDataOutput = nil;
        [captureSession release];
        captureSession = nil;
        [[NSAlert alertWithError:error] runModal];
        return;
    }

    /* Create an effect audio unit to add an effect to the audio before it is written to a file. */
    OSStatus err = noErr;
    AudioComponentDescription effectAudioUnitComponentDescription;
    effectAudioUnitComponentDescription.componentType = kAudioUnitType_Effect;
    effectAudioUnitComponentDescription.componentSubType = kAudioUnitSubType_Delay;
    effectAudioUnitComponentDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
    effectAudioUnitComponentDescription.componentFlags = 0;
    effectAudioUnitComponentDescription.componentFlagsMask = 0;
    AudioComponent effectAudioUnitComponent = AudioComponentFindNext(NULL, &effectAudioUnitComponentDescription);
    err = AudioComponentInstanceNew(effectAudioUnitComponent, &effectAudioUnit);
    if (noErr == err) {
        /* Set a callback on the effect unit that will supply the audio buffers received from the audio data output. */
        AURenderCallbackStruct renderCallbackStruct;
        renderCallbackStruct.inputProc = PushCurrentInputBufferIntoAudioUnit;
        renderCallbackStruct.inputProcRefCon = self;
        err = AudioUnitSetProperty(effectAudioUnit, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Input, 0, &renderCallbackStruct, sizeof(renderCallbackStruct));
    }
    if (noErr != err) {
        if (effectAudioUnit) {
            AudioComponentInstanceDispose(effectAudioUnit);
            effectAudioUnit = NULL;
        }
        [captureAudioDeviceInput release];
        captureAudioDeviceInput = nil;
        [audioDevice close];
        [captureSession release];
        captureSession = nil;
        [[NSAlert alertWithError:[NSError errorWithDomain:NSOSStatusErrorDomain code:err userInfo:nil]] runModal];
        return;
    }

    /* Start the capture session. This will cause the audio data output delegate method to be called for each new audio buffer that is captured from the input device. */
    [captureSession startRunning];

    /* Become the window's delegate so that the capture session can be stopped and cleaned up immediately after the window is closed. */
    [window setDelegate:self];
}
- (void)windowWillClose:(NSNotification *)notification
{
    [self setRecording:NO];
    [captureSession stopRunning];
    QTCaptureDevice *audioDevice = [captureAudioDeviceInput device];
    if ([audioDevice isOpen])
        [audioDevice close];
}
- (void)dealloc
{
    [captureSession release];
    [captureAudioDeviceInput release];
    [captureAudioDataOutput release];
    [outputFile release];

    if (extAudioFile)
        ExtAudioFileDispose(extAudioFile);
    if (effectAudioUnit) {
        if (didSetUpAudioUnits)
            AudioUnitUninitialize(effectAudioUnit);
        AudioComponentInstanceDispose(effectAudioUnit);
    }

    [super dealloc];
}
#pragma mark ======== Audio capture methods =========

/*
 Called periodically by the QTCaptureAudioDataOutput as it receives QTSampleBuffer objects containing audio frames captured by the QTCaptureSession.
 Each QTSampleBuffer will contain multiple frames of audio encoded in the canonical non-interleaved linear PCM format compatible with AudioUnits.
*/
- (void)captureOutput:(QTCaptureOutput *)captureOutput didOutputAudioSampleBuffer:(QTSampleBuffer *)sampleBuffer fromConnection:(QTCaptureConnection *)connection
{
    OSStatus err = noErr;
    BOOL isRecording = [self isRecording];

    /* Get the sample buffer's AudioStreamBasicDescription, which will be used to set the input format of the effect audio unit and the ExtAudioFile. */
    QTFormatDescription *formatDescription = [sampleBuffer formatDescription];
    NSValue *sampleBufferASBDValue = [formatDescription attributeForKey:QTFormatDescriptionAudioStreamBasicDescriptionAttribute];
    if (!sampleBufferASBDValue)
        return;

    AudioStreamBasicDescription sampleBufferASBD = {0};
    [sampleBufferASBDValue getValue:&sampleBufferASBD];

    if ((sampleBufferASBD.mChannelsPerFrame != currentInputASBD.mChannelsPerFrame) || (sampleBufferASBD.mSampleRate != currentInputASBD.mSampleRate)) {
        /* Although QTCaptureAudioDataOutput guarantees that it will output sample buffers in the canonical format, the number of channels or the sample rate of the audio can change at any time while the capture session is running. If this occurs, the audio unit receiving the buffers from the QTCaptureAudioDataOutput needs to be reconfigured with the new format. This also must be done when a buffer is received for the first time. */
        currentInputASBD = sampleBufferASBD;

        if (didSetUpAudioUnits) {
            /* The audio units were previously set up, so they must be uninitialized now. */
            AudioUnitUninitialize(effectAudioUnit);

            /* If recording was in progress, the recording needs to be stopped because the audio format changed. */
            if (extAudioFile) {
                ExtAudioFileDispose(extAudioFile);
                extAudioFile = NULL;
            }
        } else {
            didSetUpAudioUnits = YES;
        }

        /* Set the input and output formats of the effect audio unit to match that of the sample buffer. */
        err = AudioUnitSetProperty(effectAudioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &currentInputASBD, sizeof(currentInputASBD));
        if (noErr == err)
            err = AudioUnitSetProperty(effectAudioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &currentInputASBD, sizeof(currentInputASBD));
        if (noErr == err)
            err = AudioUnitInitialize(effectAudioUnit);

        if (noErr != err) {
            NSLog(@"Failed to set up audio units (%d)", err);
            didSetUpAudioUnits = NO;
            bzero(&currentInputASBD, sizeof(currentInputASBD));
        }
    }

    if (isRecording && !extAudioFile) {
        /* Start recording by creating an ExtAudioFile and configuring it with the same sample rate and channel layout as those of the current sample buffer. */
        AudioStreamBasicDescription recordedASBD = {0};
        recordedASBD.mSampleRate = currentInputASBD.mSampleRate;
        recordedASBD.mFormatID = kAudioFormatLinearPCM;
        recordedASBD.mFormatFlags = kAudioFormatFlagIsBigEndian | kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
        recordedASBD.mBytesPerPacket = 2 * currentInputASBD.mChannelsPerFrame;
        recordedASBD.mFramesPerPacket = 1;
        recordedASBD.mBytesPerFrame = 2 * currentInputASBD.mChannelsPerFrame;
        recordedASBD.mChannelsPerFrame = currentInputASBD.mChannelsPerFrame;
        recordedASBD.mBitsPerChannel = 16;

        NSData *inputChannelLayoutData = [formatDescription attributeForKey:QTFormatDescriptionAudioChannelLayoutAttribute];
        AudioChannelLayout *recordedChannelLayout = (AudioChannelLayout *)[inputChannelLayoutData bytes];

        err = ExtAudioFileCreateWithURL((CFURLRef)[NSURL fileURLWithPath:[self outputFile]], kAudioFileAIFFType, &recordedASBD, recordedChannelLayout, kAudioFileFlags_EraseFile, &extAudioFile);
        if (noErr == err)
            err = ExtAudioFileSetProperty(extAudioFile, kExtAudioFileProperty_ClientDataFormat, sizeof(currentInputASBD), &currentInputASBD);

        if (noErr != err) {
            NSLog(@"Failed to set up ExtAudioFile (%d)", err);
            ExtAudioFileDispose(extAudioFile);
            extAudioFile = NULL;
        }
    } else if (!isRecording && extAudioFile) {
        /* Stop recording by disposing of the ExtAudioFile. */
        ExtAudioFileDispose(extAudioFile);
        extAudioFile = NULL;
    }

    NSUInteger numberOfFrames = [sampleBuffer numberOfSamples];
    /* -[QTSampleBuffer numberOfSamples] corresponds to the number of CoreAudio audio frames. */

    /* In order to render continuously, the effect audio unit needs a new time stamp for each buffer. Use the number of frames for each unit of time. */
    currentSampleTime += (double)numberOfFrames;

    AudioTimeStamp timeStamp = {0};
    timeStamp.mSampleTime = currentSampleTime;
    timeStamp.mFlags |= kAudioTimeStampSampleTimeValid;

    AudioUnitRenderActionFlags flags = 0;

    /* Create an AudioBufferList large enough to hold the number of frames from the sample buffer in 32-bit floating point PCM format. */
    AudioBufferList *outputABL = calloc(1, sizeof(*outputABL) + (currentInputASBD.mChannelsPerFrame - 1)*sizeof(outputABL->mBuffers[0]));
    outputABL->mNumberBuffers = currentInputASBD.mChannelsPerFrame;

    UInt32 channelIndex;
    for (channelIndex = 0; channelIndex < currentInputASBD.mChannelsPerFrame; channelIndex++) {
        UInt32 dataSize = numberOfFrames * currentInputASBD.mBytesPerFrame;
        outputABL->mBuffers[channelIndex].mDataByteSize = dataSize;
        outputABL->mBuffers[channelIndex].mData = malloc(dataSize);
        outputABL->mBuffers[channelIndex].mNumberChannels = 1;
    }

    /*
     Get an audio buffer list from the sample buffer and assign it to the currentInputAudioBufferList instance variable.
     The effect audio unit render callback, PushCurrentInputBufferIntoAudioUnit(), can access this value by calling the currentInputAudioBufferList method.
    */
    currentInputAudioBufferList = [sampleBuffer audioBufferListWithOptions:QTSampleBufferAudioBufferListOptionAssure16ByteAlignment];

    /* Tell the effect audio unit to render. This will synchronously call PushCurrentInputBufferIntoAudioUnit(), which will feed the audio buffer list into the effect audio unit. */
    err = AudioUnitRender(effectAudioUnit, &flags, &timeStamp, 0, numberOfFrames, outputABL);
    currentInputAudioBufferList = NULL;

    if ((noErr == err) && extAudioFile) {
        err = ExtAudioFileWriteAsync(extAudioFile, numberOfFrames, outputABL);
    }

    for (channelIndex = 0; channelIndex < currentInputASBD.mChannelsPerFrame; channelIndex++) {
        free(outputABL->mBuffers[channelIndex].mData);
    }
    free(outputABL);
}

/* Used by PushCurrentInputBufferIntoAudioUnit() to access the current audio buffer list that has been output by the QTCaptureAudioDataOutput. */
- (AudioBufferList *)currentInputAudioBufferList
{
    return currentInputAudioBufferList;
}

This comes from THIS tutorial; also take a further look at the audio capture methods under the #pragma mark in the sample code provided with the tutorial.

Hope this helps!


Original question on Stack Overflow: "cocoa - QTCaptureSession not receiving any data from the camera", https://stackoverflow.com/questions/32709720/
