I am building an iOS app in which I want to record segmented video. I have read https://developer.apple.com/library/content/documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/00_Introduction.html and I have a working solution using AVCaptureVideoDataOutput, in which I capture frames and write them to a file with AVAssetWriter. I add the AVCaptureVideoDataOutput to the AVCaptureSession like this:
// Setup videoDataOutput in order to capture samplebuffers
let videoDataOutput = AVCaptureVideoDataOutput()
videoDataOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as AnyHashable : Int(kCVPixelFormatType_32BGRA)]
videoDataOutput.alwaysDiscardsLateVideoFrames = true
videoDataOutput.setSampleBufferDelegate(self, queue: CaptureManager.CAPTURE_QUEUE)
guard captureSession.canAddOutput(videoDataOutput) else {
    return
}
captureSession.addOutput(videoDataOutput)
self.videoDataOutput = videoDataOutput
This works well: I can run the capture session and get a playable movie file.
Now I want to add audio, so I tried to do the same thing:
// Setup audioDataOutput in order to capture audio
let audioDataOutput = AVCaptureAudioDataOutput()
audioDataOutput.audioSettings = ...
audioDataOutput.setSampleBufferDelegate(self, queue: CaptureManager.CAPTURE_QUEUE)
guard captureSession.canAddOutput(audioDataOutput) else {
    return
}
captureSession.addOutput(audioDataOutput)
self.audioDataOutput = audioDataOutput
The crazy thing is that there is no audioSettings property on AVCaptureAudioDataOutput! The documentation says there should be: https://developer.apple.com/reference/avfoundation/avcaptureaudiodataoutput/1388527-audiosettings — but the Swift header has no such member (see below).
What is going on here? I am using Xcode 8.1. Here is the Swift header for the class AVCaptureAudioDataOutput:
import AVFoundation
import CoreMedia
import Foundation
/*!
@class AVCaptureAudioDataOutput
@abstract
AVCaptureAudioDataOutput is a concrete subclass of AVCaptureOutput that can be used to process uncompressed or compressed samples from the audio being captured.
@discussion
Instances of AVCaptureAudioDataOutput produce audio sample buffers suitable for processing using other media APIs. Applications can access the sample buffers with the captureOutput:didOutputSampleBuffer:fromConnection: delegate method.
*/
@available(iOS 4.0, *)
open class AVCaptureAudioDataOutput : AVCaptureOutput {
/*!
@method setSampleBufferDelegate:queue:
@abstract
Sets the receiver's delegate that will accept captured buffers and dispatch queue on which the delegate will be called.
@param sampleBufferDelegate
An object conforming to the AVCaptureAudioDataOutputSampleBufferDelegate protocol that will receive sample buffers after they are captured.
@param sampleBufferCallbackQueue
A dispatch queue on which all sample buffer delegate methods will be called.
@discussion
When a new audio sample buffer is captured it will be vended to the sample buffer delegate using the captureOutput:didOutputSampleBuffer:fromConnection: delegate method. All delegate methods will be called on the specified dispatch queue. If the queue is blocked when new samples are captured, those samples will be automatically dropped when they become sufficiently late. This allows clients to process existing samples on the same queue without having to manage the potential memory usage increases that would otherwise occur when that processing is unable to keep up with the rate of incoming samples.
Clients that need to minimize the chances of samples being dropped should specify a queue on which a sufficiently small amount of processing is being done outside of receiving sample buffers. However, if such clients migrate extra processing to another queue, they are responsible for ensuring that memory usage does not grow without bound from samples that have not been processed.
A serial dispatch queue must be used to guarantee that audio samples will be delivered in order. The sampleBufferCallbackQueue parameter may not be NULL, except when setting sampleBufferDelegate to nil.
*/
open func setSampleBufferDelegate(_ sampleBufferDelegate: AVCaptureAudioDataOutputSampleBufferDelegate!, queue sampleBufferCallbackQueue: DispatchQueue!)
/*!
@property sampleBufferDelegate
@abstract
The receiver's delegate.
@discussion
The value of this property is an object conforming to the AVCaptureAudioDataOutputSampleBufferDelegate protocol that will receive sample buffers after they are captured. The delegate is set using the setSampleBufferDelegate:queue: method.
*/
open var sampleBufferDelegate: AVCaptureAudioDataOutputSampleBufferDelegate! { get }
/*!
@property sampleBufferCallbackQueue
@abstract
The dispatch queue on which all sample buffer delegate methods will be called.
@discussion
The value of this property is a dispatch_queue_t. The queue is set using the setSampleBufferDelegate:queue: method.
*/
open var sampleBufferCallbackQueue: DispatchQueue! { get }
/*!
@property audioSettings
@abstract
Specifies the settings used to decode or re-encode audio before it is output by the receiver.
@discussion
The value of this property is an NSDictionary containing values for audio settings keys defined in AVAudioSettings.h. When audioSettings is set to nil, the AVCaptureAudioDataOutput vends samples in their device native format.
*/
// (TARGET_OS_MAC && !(TARGET_OS_EMBEDDED || TARGET_OS_IPHONE))
/*!
@method recommendedAudioSettingsForAssetWriterWithOutputFileType:
@abstract
Specifies the recommended settings for use with an AVAssetWriterInput.
@param outputFileType
Specifies the UTI of the file type to be written (see AVMediaFormat.h for a list of file format UTIs).
@return
A fully populated dictionary of keys and values that are compatible with AVAssetWriter.
@discussion
The value of this property is an NSDictionary containing values for compression settings keys defined in AVAudioSettings.h. This dictionary is suitable for use as the "outputSettings" parameter when creating an AVAssetWriterInput, such as,
[AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:outputSettings sourceFormatHint:hint];
The dictionary returned contains all necessary keys and values needed by AVAssetWriter (see AVAssetWriterInput.h, -initWithMediaType:outputSettings: for a more in depth discussion). For QuickTime movie and ISO files, the recommended audio settings will always produce output comparable to that of AVCaptureMovieFileOutput.
Note that the dictionary of settings is dependent on the current configuration of the receiver's AVCaptureSession and its inputs. The settings dictionary may change if the session's configuration changes. As such, you should configure your session first, then query the recommended audio settings.
*/
@available(iOS 7.0, *)
open func recommendedAudioSettingsForAssetWriter(withOutputFileType outputFileType: String!) -> [AnyHashable : Any]!
}
/*!
@protocol AVCaptureAudioDataOutputSampleBufferDelegate
@abstract
Defines an interface for delegates of AVCaptureAudioDataOutput to receive captured audio sample buffers.
*/
public protocol AVCaptureAudioDataOutputSampleBufferDelegate : NSObjectProtocol {
/*!
@method captureOutput:didOutputSampleBuffer:fromConnection:
@abstract
Called whenever an AVCaptureAudioDataOutput instance outputs a new audio sample buffer.
@param captureOutput
The AVCaptureAudioDataOutput instance that output the samples.
@param sampleBuffer
A CMSampleBuffer object containing the audio samples and additional information about them, such as their format and presentation time.
@param connection
The AVCaptureConnection from which the audio was received.
@discussion
Delegates receive this message whenever the output captures and outputs new audio samples, decoding or re-encoding as specified by the audioSettings property. Delegates can use the provided sample buffer in conjunction with other APIs for further processing. This method will be called on the dispatch queue specified by the output's sampleBufferCallbackQueue property. This method is called periodically, so it must be efficient to prevent capture performance problems, including dropped audio samples.
Clients that need to reference the CMSampleBuffer object outside of the scope of this method must CFRetain it and then CFRelease it when they are finished with it.
*/
@available(iOS 4.0, *)
optional public func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!)
}
Best Answer
AVCaptureAudioDataOutput.audioSettings is only available on macOS (note the `TARGET_OS_MAC && !TARGET_OS_IPHONE` comment in the header where the property would be). You can change the sample rate via AVAudioSession, but beyond that you will have to arrange any conversion you want yourself.
There are many ways to do that, but the outputSettings parameter of AVAssetWriterInput.init(mediaType:, outputSettings:) seems like a good starting point.
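As a minimal sketch of that starting point: the header above documents recommendedAudioSettingsForAssetWriter(withOutputFileType:), which is designed to feed an AVAssetWriterInput directly. The `assetWriter` and `audioDataOutput` names below are placeholders for your own instances; per the header, configure the session first, since the recommended settings depend on it (Swift 3 / Xcode 8 constants):

```swift
import AVFoundation

// Assumes `audioDataOutput` has already been added to a fully configured
// AVCaptureSession, and `assetWriter` is your AVAssetWriter instance.

// Optionally steer the capture sample rate before configuring the session.
try? AVAudioSession.sharedInstance().setPreferredSampleRate(44_100)

// Ask the output for settings compatible with the target container.
let settings = audioDataOutput.recommendedAudioSettingsForAssetWriter(
    withOutputFileType: AVFileTypeQuickTimeMovie) as? [String : Any]

// Create the writer input with those settings instead of setting
// audioSettings on the data output (which iOS does not expose).
let audioInput = AVAssetWriterInput(mediaType: AVMediaTypeAudio,
                                    outputSettings: settings)
audioInput.expectsMediaDataInRealTime = true
if assetWriter.canAdd(audioInput) {
    assetWriter.add(audioInput)
}
```

Then append the audio sample buffers you receive in the delegate callback to `audioInput`, just as you already do for video.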
Regarding "ios - Is the audioSettings property missing from AVCaptureAudioDataOutput in the Swift header?", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/41152448/