This is my first time asking a question here. I hope the post is clear and the sample code is formatted correctly.
I'm experimenting with AVFoundation and time-lapse photography.
My intent is to grab every Nth frame from the video camera of an iOS device (my iPod touch, 4th generation) and write each of those frames out to a file to build a time-lapse movie. I'm using AVCaptureVideoDataOutput, AVAssetWriter, and AVAssetWriterInput.
The problem is that if I use the native CMSampleBufferRef passed to captureOutput:didOutputSampleBuffer:fromConnection:, each frame plays back for the length of time between the original input frames, i.e. a frame rate of roughly 1 fps. I'm looking to get 30 fps.
I've tried using CMSampleBufferCreateCopyWithNewTiming(), but then, after 13 frames have been written to the file, captureOutput:didOutputSampleBuffer:fromConnection: stops being called. The interface is still active and I can tap a button to stop the capture and save it to the photo library for playback. It appears to play back as I want, at 30 fps, but it only contains those 13 frames.
How can I accomplish my goal of 30 fps playback? How can I tell where the app is getting lost, and why?
I've placed a flag called useNativeTime so I can test both cases. When it is set to YES, I get all the frames I'm interested in, because the callback never 'gets lost'. When I set the flag to NO, I only ever get 13 frames processed and the callback is never invoked again. As mentioned above, in both cases I can play back the video.
Thanks for any help.
Here is where I'm trying to do the retiming.
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    BOOL useNativeTime = NO;
    BOOL appendSuccessFlag = NO;

    //NSLog(@"in captureOutput sample buffer method");
    if ( !CMSampleBufferDataIsReady(sampleBuffer) )
    {
        NSLog( @"sample buffer is not ready. Skipping sample" );
        //CMSampleBufferInvalidate(sampleBuffer);
        return;
    }

    if (! [inputWriterBuffer isReadyForMoreMediaData])
    {
        NSLog(@"Not ready for data.");
    }
    else {
        // Write every first frame of n frames (30 native from camera).
        intervalFrames++;
        if (intervalFrames > 30) {
            intervalFrames = 1;
        }
        else if (intervalFrames != 1) {
            //CMSampleBufferInvalidate(sampleBuffer);
            return;
        }

        // Need to initialize start session time.
        if (writtenFrames < 1) {
            if (useNativeTime) imageSourceTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
            else imageSourceTime = CMTimeMake( 0 * 20, 600); //CMTimeMake(1,30);
            [outputWriter startSessionAtSourceTime: imageSourceTime];
            NSLog(@"Starting CMtime");
            CMTimeShow(imageSourceTime);
        }

        if (useNativeTime) {
            imageSourceTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
            CMTimeShow(imageSourceTime);
            // CMTime myTiming = CMTimeMake(writtenFrames * 20, 600);
            // CMSampleBufferSetOutputPresentationTimeStamp(sampleBuffer, myTiming); // Tried but has no effect.
            appendSuccessFlag = [inputWriterBuffer appendSampleBuffer:sampleBuffer];
        }
        else {
            CMSampleBufferRef newSampleBuffer;
            CMSampleTimingInfo sampleTimingInfo;
            sampleTimingInfo.duration = CMTimeMake(20, 600);
            sampleTimingInfo.presentationTimeStamp = CMTimeMake( (writtenFrames + 0) * 20, 600);
            sampleTimingInfo.decodeTimeStamp = kCMTimeInvalid;
            OSStatus myStatus;

            //NSLog(@"numSamples of sampleBuffer: %i", CMSampleBufferGetNumSamples(sampleBuffer) );
            myStatus = CMSampleBufferCreateCopyWithNewTiming(kCFAllocatorDefault,
                                                             sampleBuffer,
                                                             1,
                                                             &sampleTimingInfo, // maybe a little confused on this param.
                                                             &newSampleBuffer);
            // These confirm the good health of our newSampleBuffer.
            if (myStatus != 0) NSLog(@"CMSampleBufferCreateCopyWithNewTiming() myStatus: %i", myStatus);
            if (! CMSampleBufferIsValid(newSampleBuffer)) NSLog(@"CMSampleBufferIsValid NOT!");

            // No effect.
            //myStatus = CMSampleBufferMakeDataReady(newSampleBuffer); // How is this different; CMSampleBufferSetDataReady ?
            //if (myStatus != 0) NSLog(@"CMSampleBufferMakeDataReady() myStatus: %i", myStatus);

            imageSourceTime = CMSampleBufferGetPresentationTimeStamp(newSampleBuffer);
            CMTimeShow(imageSourceTime);
            appendSuccessFlag = [inputWriterBuffer appendSampleBuffer:newSampleBuffer];
            //CMSampleBufferInvalidate(sampleBuffer); // Docs don't describe action. What does it do? Doesn't seem to affect my problem. Used with CMSampleBufferSetInvalidateCallback maybe?
            //CFRelease(sampleBuffer); // - Not surprisingly - "EXC_BAD_ACCESS"
        }

        if (!appendSuccessFlag)
        {
            NSLog(@"Failed to append pixel buffer");
        }
        else {
            writtenFrames++;
            NSLog(@"writtenFrames: %i", writtenFrames);
        }
    }

    //[self displayOuptutWritterStatus]; // Expect and see AVAssetWriterStatusWriting.
}
My setup routine:
- (IBAction) recordingStartStop: (id) sender
{
    NSError * error;

    if (self.isRecording) {
        NSLog(@"~~~~~~~~~ STOPPING RECORDING ~~~~~~~~~");
        self.isRecording = NO;
        [recordingStarStop setTitle: @"Record" forState: UIControlStateNormal];

        //[self.captureSession stopRunning];
        [inputWriterBuffer markAsFinished];
        [outputWriter endSessionAtSourceTime:imageSourceTime];
        [outputWriter finishWriting]; // Blocks until file is completely written, or an error occurs.
        NSLog(@"finished CMtime");
        CMTimeShow(imageSourceTime);

        // Really, I should loop through the outputs and close all of them or target specific ones.
        // Since I'm only recording video right now, I feel safe doing this.
        [self.captureSession removeOutput: [[self.captureSession outputs] objectAtIndex: 0]];
        [videoOutput release];
        [inputWriterBuffer release];
        [outputWriter release];
        videoOutput = nil;
        inputWriterBuffer = nil;
        outputWriter = nil;
        NSLog(@"~~~~~~~~~ STOPPED RECORDING ~~~~~~~~~");

        NSLog(@"Calling UIVideoAtPathIsCompatibleWithSavedPhotosAlbum.");
        NSLog(@"filePath: %@", [projectPaths movieFilePath]);
        if (UIVideoAtPathIsCompatibleWithSavedPhotosAlbum([projectPaths movieFilePath])) {
            NSLog(@"Calling UISaveVideoAtPathToSavedPhotosAlbum.");
            UISaveVideoAtPathToSavedPhotosAlbum([projectPaths movieFilePath], self, @selector(video:didFinishSavingWithError:contextInfo:), nil);
        }
        NSLog(@"~~~~~~~~~ WROTE RECORDING to PhotosAlbum ~~~~~~~~~");
    }
    else {
        NSLog(@"~~~~~~~~~ STARTING RECORDING ~~~~~~~~~");
        projectPaths = [[ProjectPaths alloc] initWithProjectFolder: @"TestProject"];
        intervalFrames = 30;

        videoOutput = [[AVCaptureVideoDataOutput alloc] init];
        NSMutableDictionary * cameraVideoSettings = [[[NSMutableDictionary alloc] init] autorelease];
        NSString* key = (NSString*)kCVPixelBufferPixelFormatTypeKey;
        NSNumber* value = [NSNumber numberWithUnsignedInt: kCVPixelFormatType_32BGRA]; //kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange];
        [cameraVideoSettings setValue: value forKey: key];
        [videoOutput setVideoSettings: cameraVideoSettings];
        [videoOutput setMinFrameDuration: CMTimeMake(20, 600)]; //CMTimeMake(1, 30)]; // 30fps
        [videoOutput setAlwaysDiscardsLateVideoFrames: YES];

        queue = dispatch_queue_create("cameraQueue", NULL);
        [videoOutput setSampleBufferDelegate: self queue: queue];
        dispatch_release(queue);

        NSMutableDictionary *outputSettings = [[[NSMutableDictionary alloc] init] autorelease];
        [outputSettings setValue: AVVideoCodecH264 forKey: AVVideoCodecKey];
        [outputSettings setValue: [NSNumber numberWithInt: 1280] forKey: AVVideoWidthKey]; // currently assuming
        [outputSettings setValue: [NSNumber numberWithInt: 720] forKey: AVVideoHeightKey];

        NSMutableDictionary *compressionSettings = [[[NSMutableDictionary alloc] init] autorelease];
        [compressionSettings setValue: AVVideoProfileLevelH264Main30 forKey: AVVideoProfileLevelKey];
        //[compressionSettings setValue: [NSNumber numberWithDouble:1024.0*1024.0] forKey: AVVideoAverageBitRateKey];
        [outputSettings setValue: compressionSettings forKey: AVVideoCompressionPropertiesKey];

        inputWriterBuffer = [AVAssetWriterInput assetWriterInputWithMediaType: AVMediaTypeVideo outputSettings: outputSettings];
        [inputWriterBuffer retain];
        inputWriterBuffer.expectsMediaDataInRealTime = YES;

        outputWriter = [AVAssetWriter assetWriterWithURL: [projectPaths movieURLPath] fileType: AVFileTypeQuickTimeMovie error: &error];
        [outputWriter retain];
        if (error) NSLog(@"error for outputWriter = [AVAssetWriter assetWriterWithURL:fileType:error:]");
        if ([outputWriter canAddInput: inputWriterBuffer]) [outputWriter addInput: inputWriterBuffer];
        else NSLog(@"can not add input");
        if (![outputWriter canApplyOutputSettings: outputSettings forMediaType:AVMediaTypeVideo]) NSLog(@"outputSettings are NOT supported");

        if ([captureSession canAddOutput: videoOutput]) [self.captureSession addOutput: videoOutput];
        else NSLog(@"could not addOutput: videoOutput to captureSession");
        //[self.captureSession startRunning];

        self.isRecording = YES;
        [recordingStarStop setTitle: @"Stop" forState: UIControlStateNormal];
        writtenFrames = 0;
        imageSourceTime = kCMTimeZero;
        [outputWriter startWriting];
        //[outputWriter startSessionAtSourceTime: imageSourceTime];
        NSLog(@"~~~~~~~~~ STARTED RECORDING ~~~~~~~~~");
        NSLog(@"recording to fileURL: %@", [projectPaths movieURLPath]);
    }

    NSLog(@"isRecording: %@", self.isRecording ? @"YES" : @"NO");
    [self displayOuptutWritterStatus];
}
Best Answer
OK, I found the bug in my first post.
When you use
myStatus = CMSampleBufferCreateCopyWithNewTiming(kCFAllocatorDefault,
                                                 sampleBuffer,
                                                 1,
                                                 &sampleTimingInfo,
                                                 &newSampleBuffer);
you need to call CFRelease(newSampleBuffer); afterwards.
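For reference, a minimal sketch of what the corrected non-native-time branch could look like, reusing the identifiers from the callback above (sampleBuffer, sampleTimingInfo, inputWriterBuffer, writtenFrames); it only illustrates the release pattern and is not a drop-in replacement for the full method:

    CMSampleTimingInfo sampleTimingInfo;
    sampleTimingInfo.duration = CMTimeMake(20, 600);                              // each written frame lasts 20/600 = 1/30 s
    sampleTimingInfo.presentationTimeStamp = CMTimeMake(writtenFrames * 20, 600); // pack written frames back to back
    sampleTimingInfo.decodeTimeStamp = kCMTimeInvalid;

    CMSampleBufferRef newSampleBuffer = NULL;
    OSStatus myStatus = CMSampleBufferCreateCopyWithNewTiming(kCFAllocatorDefault,
                                                              sampleBuffer,
                                                              1,
                                                              &sampleTimingInfo,
                                                              &newSampleBuffer);
    if (myStatus == noErr && newSampleBuffer != NULL) {
        if ([inputWriterBuffer appendSampleBuffer:newSampleBuffer]) {
            writtenFrames++;
        }
        CFRelease(newSampleBuffer); // The Create rule: we own the copy, so release it once it has been appended.
    }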
The same idea applies when using a CVPixelBufferRef together with the pixelBufferPool of an AVAssetWriterInputPixelBufferAdaptor instance: you can call CVPixelBufferRelease(yourCVPixelBufferRef); after calling the appendPixelBuffer:withPresentationTime: method.
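Along the same lines, here is a hedged sketch of that adaptor pattern, assuming an AVAssetWriterInputPixelBufferAdaptor named adaptor; the adaptor variable and the fill step are placeholders rather than code from this question:

    CVPixelBufferRef pixelBuffer = NULL; // placeholder buffer drawn from the adaptor's pool
    CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, adaptor.pixelBufferPool, &pixelBuffer);
    if (pixelBuffer != NULL) {
        // ... render or copy the frame's pixels into pixelBuffer here ...
        [adaptor appendPixelBuffer:pixelBuffer withPresentationTime:CMTimeMake(writtenFrames * 20, 600)];
        CVPixelBufferRelease(pixelBuffer); // balances the pool's Create call, mirroring the CFRelease above
    }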
Hope this helps someone else.
Regarding iphone - AVFoundation - retiming CMSampleBufferRef video output, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/5136633/