
cocoa - What's a 10.6-compatible means of recording video frames to a movie without using the QuickTime API?


I'm updating an application to be 64-bit compatible, but I'm having some difficulty with our movie recording code. We have a FireWire camera that feeds YUV frames into our application, which we process and encode out to disk as an MPEG4 movie. Currently, we're using the C-based QuickTime API to do this (the Image Compression Manager, etc.), but the old QuickTime API isn't supported in 64-bit.

My first attempt was to use QTKit's QTMovie and encode individual frames using -addImage:forDuration:withAttributes:, but that requires creating an NSImage for each frame (which is computationally expensive), and it does not do temporal compression, so it doesn't produce the most compact files.
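For context, that QTKit approach looks roughly like the following (a minimal sketch; QTAddImageCodecType and QTAddImageCodecQuality are the standard QTKit attribute keys, but the codec choice and the loop here are just illustrative):

// Per-frame encoding via QTMovie; wrapping each frame in an NSImage is the expensive step
QTMovie *movie = [[QTMovie alloc] initToWritableFile:@"/tmp/output.mov" error:NULL];
NSDictionary *attributes = [NSDictionary dictionaryWithObjectsAndKeys:
                            @"mp4v", QTAddImageCodecType,
                            [NSNumber numberWithLong:codecHighQuality], QTAddImageCodecQuality, // codecHighQuality comes from QuickTime's ImageCompression.h
                            nil];
QTTime frameDuration = QTMakeTime(1, 30); // one frame at 30 FPS

for (NSImage *frame in frames) // frames: an array of NSImages, one per camera frame
{
    [movie addImage:frame forDuration:frameDuration withAttributes:attributes];
}
[movie updateMovieFile];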

I'd like to use something like QTKit Capture's QTCaptureMovieFileOutput, but I can't figure out how to feed raw frames into it when they aren't associated with a QTCaptureInput. We can't use the camera directly with QTKit Capture, because we need to manually control its gain, exposure, etc.

On Lion, we now have the AVAssetWriter class in AVFoundation that lets you do exactly this, but I still need to target Snow Leopard for the time being, so I'm trying to find a solution that works there as well.
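(For anyone who can require Lion, that AVAssetWriter route looks roughly like the sketch below. I haven't gone down this path since we still target 10.6, so the settings are illustrative rather than tested.)

NSError *error = nil;
AVAssetWriter *writer = [AVAssetWriter assetWriterWithURL:[NSURL fileURLWithPath:@"/tmp/output.mp4"]
                                                 fileType:AVFileTypeMPEG4
                                                    error:&error];
NSDictionary *settings = [NSDictionary dictionaryWithObjectsAndKeys:
                          AVVideoCodecH264, AVVideoCodecKey,
                          [NSNumber numberWithInt:640], AVVideoWidthKey,
                          [NSNumber numberWithInt:480], AVVideoHeightKey,
                          nil];
AVAssetWriterInput *input = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                               outputSettings:settings];
AVAssetWriterInputPixelBufferAdaptor *adaptor =
    [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:input
                                                                     sourcePixelBufferAttributes:nil];
[writer addInput:input];
[writer startWriting];
[writer startSessionAtSourceTime:kCMTimeZero];

// For each incoming frame (as a CVPixelBufferRef), append it with its presentation time:
// [adaptor appendPixelBuffer:pixelBuffer withPresentationTime:frameTime];

[input markAsFinished];
[writer finishWriting];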

So: is there a way to do non-QuickTime, frame-by-frame recording of video that is more efficient than QTMovie's -addImage:forDuration:withAttributes: and produces file sizes comparable to what the older QuickTime API could?

Best Answer

In the end, I decided to go with the approach suggested by TiansHUo and use libavcodec for the video compression here. Following Martin's instructions here, I downloaded the FFmpeg source and built a 64-bit-compatible version of the necessary libraries using:

./configure --disable-gpl --arch=x86_64 --cpu=core2 --enable-shared --disable-amd3dnow --enable-memalign-hack --cc=llvm-gcc
make
sudo make install

This creates LGPL shared libraries for 64-bit Core2 processors on the Mac. At first I couldn't get the libraries to run without crashing when MMX optimizations were enabled, so I had disabled them, which slowed encoding somewhat; after some experimentation, I found that the configuration options above produce a 64-bit build with MMX optimizations enabled that is stable on the Mac, and it encodes much faster than the MMX-disabled build.

Note that if you use these shared libraries, you should make sure you follow the LGPL compliance instructions on FFmpeg's site to the letter.

In order for these shared libraries to work properly when placed in the proper folder within my Mac application bundle, I needed to use install_name_tool to adjust the internal search paths within the libraries to point to their new location in the Frameworks directory of the application bundle:

install_name_tool -id @executable_path/../Frameworks/libavutil.51.9.1.dylib libavutil.51.9.1.dylib

install_name_tool -id @executable_path/../Frameworks/libavcodec.53.7.0.dylib libavcodec.53.7.0.dylib
install_name_tool -change /usr/local/lib/libavutil.dylib @executable_path/../Frameworks/libavutil.51.9.1.dylib libavcodec.53.7.0.dylib

install_name_tool -id @executable_path/../Frameworks/libavformat.53.4.0.dylib libavformat.53.4.0.dylib
install_name_tool -change /usr/local/lib/libavutil.dylib @executable_path/../Frameworks/libavutil.51.9.1.dylib libavformat.53.4.0.dylib
install_name_tool -change /usr/local/lib/libavcodec.dylib @executable_path/../Frameworks/libavcodec.53.7.0.dylib libavformat.53.4.0.dylib

install_name_tool -id @executable_path/../Frameworks/libswscale.2.0.0.dylib libswscale.2.0.0.dylib
install_name_tool -change /usr/local/lib/libavutil.dylib @executable_path/../Frameworks/libavutil.51.9.1.dylib libswscale.2.0.0.dylib

Your specific paths may vary. This adjustment lets the libraries work from within the application bundle, without having to install them into /usr/local/lib on the user's system.
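You can verify that the install names and dependency paths were rewritten correctly with otool; the linked-library list should show the @executable_path/../Frameworks/ paths rather than /usr/local/lib:

otool -L libavcodec.53.7.0.dylib
otool -L libavformat.53.4.0.dylib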

I then linked my Xcode project against these libraries, and created a separate class to handle the video encoding. This class takes in raw video frames (in BGRA format) through its videoFrameToEncode property and encodes them, within the file named by movieFileName, as MPEG4 video in an MP4 container. The code is as follows:

SPVideoRecorder.h

#import <Foundation/Foundation.h>

#include "libavcodec/avcodec.h"
#include "libavformat/avformat.h"
#include "libswscale/swscale.h"

uint64_t getNanoseconds(void);

@interface SPVideoRecorder : NSObject
{
    NSString *movieFileName;
    CGFloat framesPerSecond;
    AVCodecContext *codecContext;
    AVStream *videoStream;
    AVOutputFormat *outputFormat;
    AVFormatContext *outputFormatContext;
    AVFrame *videoFrame;
    AVPicture inputRGBAFrame;

    uint8_t *pictureBuffer;
    uint8_t *outputBuffer;
    unsigned int outputBufferSize;
    int frameColorCounter;

    unsigned char *videoFrameToEncode;

    dispatch_queue_t videoRecordingQueue;
    dispatch_semaphore_t frameEncodingSemaphore;
    uint64_t movieStartTime;
}

@property(readwrite, assign) CGFloat framesPerSecond;
@property(readwrite, assign) unsigned char *videoFrameToEncode;
@property(readwrite, copy) NSString *movieFileName;

// Movie recording control
- (void)startRecordingMovie;
- (void)encodeNewFrameToMovie;
- (void)stopRecordingMovie;

@end

SPVideoRecorder.m

#import "SPVideoRecorder.h"
#include <sys/time.h>

@implementation SPVideoRecorder

uint64_t getNanoseconds(void)
{
    struct timeval now;
    gettimeofday(&now, NULL);
    return now.tv_sec * NSEC_PER_SEC + now.tv_usec * NSEC_PER_USEC;
}

#pragma mark -
#pragma mark Initialization and teardown

- (id)init;
{
    if (!(self = [super init]))
    {
        return nil;
    }

    /* must be called before using avcodec lib */
    avcodec_init();

    /* register all the codecs */
    avcodec_register_all();
    av_register_all();

    av_log_set_level(AV_LOG_ERROR);

    videoRecordingQueue = dispatch_queue_create("com.sonoplot.videoRecordingQueue", NULL);
    frameEncodingSemaphore = dispatch_semaphore_create(1);

    return self;
}

#pragma mark -
#pragma mark Movie recording control

- (void)startRecordingMovie;
{
    dispatch_async(videoRecordingQueue, ^{
        NSLog(@"Start recording to file: %@", movieFileName);

        const char *filename = [movieFileName UTF8String];

        // Use an MP4 container, in the standard QuickTime format so it's readable on the Mac
        outputFormat = av_guess_format("mov", NULL, NULL);
        if (!outputFormat)
        {
            NSLog(@"Could not set output format");
        }

        outputFormatContext = avformat_alloc_context();
        if (!outputFormatContext)
        {
            NSLog(@"avformat_alloc_context Error!");
        }

        outputFormatContext->oformat = outputFormat;
        snprintf(outputFormatContext->filename, sizeof(outputFormatContext->filename), "%s", filename);

        // Add a video stream to the MP4 file
        videoStream = av_new_stream(outputFormatContext, 0);
        if (!videoStream)
        {
            NSLog(@"av_new_stream Error!");
        }

        // Use the MPEG4 encoder (other DivX-style encoders aren't compatible with this container, and x264 is GPL-licensed)
        AVCodec *codec = avcodec_find_encoder(CODEC_ID_MPEG4);
        if (!codec)
        {
            fprintf(stderr, "codec not found\n");
            exit(1);
        }

        codecContext = videoStream->codec;

        codecContext->codec_id = codec->id;
        codecContext->codec_type = AVMEDIA_TYPE_VIDEO;
        codecContext->bit_rate = 4800000;
        codecContext->width = 640;
        codecContext->height = 480;
        codecContext->pix_fmt = PIX_FMT_YUV420P;
//        codecContext->time_base = (AVRational){1,(int)round(framesPerSecond)};
//        videoStream->time_base = (AVRational){1,(int)round(framesPerSecond)};
        codecContext->time_base = (AVRational){1,200}; // Set it to 200 FPS so that we give a little wiggle room when recording at 50 FPS
        videoStream->time_base = (AVRational){1,200};
//        codecContext->max_b_frames = 3;
//        codecContext->b_frame_strategy = 1;
        codecContext->qmin = 1;
        codecContext->qmax = 10;
//        codecContext->mb_decision = 2; // -mbd 2
//        codecContext->me_cmp = 2; // -cmp 2
//        codecContext->me_sub_cmp = 2; // -subcmp 2
        codecContext->keyint_min = (int)round(framesPerSecond);
//        codecContext->flags |= CODEC_FLAG_4MV; // 4mv
//        codecContext->flags |= CODEC_FLAG_LOOP_FILTER;
        codecContext->i_quant_factor = 0.71;
        codecContext->qcompress = 0.6;
//        codecContext->max_qdiff = 4;
        codecContext->flags2 |= CODEC_FLAG2_FASTPSKIP;

        if (outputFormat->flags & AVFMT_GLOBALHEADER)
        {
            codecContext->flags |= CODEC_FLAG_GLOBAL_HEADER;
        }

        // Open the codec
        if (avcodec_open(codecContext, codec) < 0)
        {
            NSLog(@"Couldn't initialize the codec");
            return;
        }

        // Open the file for recording
        if (avio_open(&outputFormatContext->pb, outputFormatContext->filename, AVIO_FLAG_WRITE) < 0)
        {
            NSLog(@"Couldn't open file");
            return;
        }

        // Start by writing the video header
        if (avformat_write_header(outputFormatContext, NULL) < 0)
        {
            NSLog(@"Couldn't write video header");
            return;
        }

        // Set up the video frame and output buffers
        outputBufferSize = 400000;
        outputBuffer = malloc(outputBufferSize);
        int size = codecContext->width * codecContext->height;

        int pictureBytes = avpicture_get_size(PIX_FMT_YUV420P, codecContext->width, codecContext->height);
        pictureBuffer = (uint8_t *)av_malloc(pictureBytes);

        videoFrame = avcodec_alloc_frame();
        videoFrame->data[0] = pictureBuffer;
        videoFrame->data[1] = videoFrame->data[0] + size;
        videoFrame->data[2] = videoFrame->data[1] + size / 4;
        videoFrame->linesize[0] = codecContext->width;
        videoFrame->linesize[1] = codecContext->width / 2;
        videoFrame->linesize[2] = codecContext->width / 2;

        avpicture_alloc(&inputRGBAFrame, PIX_FMT_BGRA, codecContext->width, codecContext->height);

        frameColorCounter = 0;

        movieStartTime = getNanoseconds();
    });
}

- (void)encodeNewFrameToMovie;
{
//    NSLog(@"Encode frame");

    if (dispatch_semaphore_wait(frameEncodingSemaphore, DISPATCH_TIME_NOW) != 0)
    {
        return;
    }

    dispatch_async(videoRecordingQueue, ^{
//        CFTimeInterval previousTimestamp = CFAbsoluteTimeGetCurrent();
        frameColorCounter++;

        if (codecContext == NULL)
        {
            dispatch_semaphore_signal(frameEncodingSemaphore); // Don't leave the semaphore held on early exit
            return;
        }

        // Take the input BGRA texture data and convert it to a YUV 4:2:0 planar frame
        avpicture_fill(&inputRGBAFrame, videoFrameToEncode, PIX_FMT_BGRA, codecContext->width, codecContext->height);
        struct SwsContext *img_convert_ctx = sws_getContext(codecContext->width, codecContext->height, PIX_FMT_BGRA, codecContext->width, codecContext->height, PIX_FMT_YUV420P, SWS_FAST_BILINEAR, NULL, NULL, NULL);
        sws_scale(img_convert_ctx, (const uint8_t * const *)inputRGBAFrame.data, inputRGBAFrame.linesize, 0, codecContext->height, videoFrame->data, videoFrame->linesize);
        sws_freeContext(img_convert_ctx); // Free the per-frame conversion context, or this leaks

        // Encode the frame
        int out_size = avcodec_encode_video(codecContext, outputBuffer, outputBufferSize, videoFrame);

        // Generate a packet and insert it in the video stream
        if (out_size != 0)
        {
            AVPacket videoPacket;
            av_init_packet(&videoPacket);

            if (codecContext->coded_frame->pts != AV_NOPTS_VALUE)
            {
                uint64_t currentFrameTime = getNanoseconds();

                // Rescale the elapsed wall-clock time (in microseconds) to the stream's time base
                videoPacket.pts = av_rescale_q(((uint64_t)currentFrameTime - (uint64_t)movieStartTime) / 1000ull/*codecContext->coded_frame->pts*/, AV_TIME_BASE_Q/*codecContext->time_base*/, videoStream->time_base);

//                NSLog(@"Frame time %lld, converted time: %lld", ((uint64_t)currentFrameTime - (uint64_t)movieStartTime) / 1000ull, videoPacket.pts);
            }

            if (codecContext->coded_frame->key_frame)
            {
                videoPacket.flags |= AV_PKT_FLAG_KEY;
            }
            videoPacket.stream_index = videoStream->index;
            videoPacket.data = outputBuffer;
            videoPacket.size = out_size;

            int ret = av_write_frame(outputFormatContext, &videoPacket);
            if (ret < 0)
            {
                av_log(outputFormatContext, AV_LOG_ERROR, "%s", "Error while writing frame.\n");
                av_free_packet(&videoPacket);
                dispatch_semaphore_signal(frameEncodingSemaphore); // Release the semaphore on the error path, too
                return;
            }

            av_free_packet(&videoPacket);
        }

//        CFTimeInterval frameDuration = CFAbsoluteTimeGetCurrent() - previousTimestamp;
//        NSLog(@"Frame duration: %f ms", frameDuration * 1000.0);

        dispatch_semaphore_signal(frameEncodingSemaphore);
    });
}

- (void)stopRecordingMovie;
{
    dispatch_async(videoRecordingQueue, ^{
        // Write out the video trailer
        if (av_write_trailer(outputFormatContext) < 0)
        {
            av_log(outputFormatContext, AV_LOG_ERROR, "%s", "Error while writing trailer.\n");
            exit(1);
        }

        // Close out the file
        if (!(outputFormat->flags & AVFMT_NOFILE))
        {
            avio_close(outputFormatContext->pb);
        }

        // Free up all movie-related resources
        avcodec_close(codecContext);
        av_free(codecContext);
        codecContext = NULL;

        free(pictureBuffer);
        free(outputBuffer);

        avpicture_free(&inputRGBAFrame); // Free the BGRA frame allocated in startRecordingMovie
        av_free(videoFrame);
        av_free(outputFormatContext);
        av_free(videoStream);
    });
}

#pragma mark -
#pragma mark Accessors

@synthesize framesPerSecond, videoFrameToEncode, movieFileName;

@end
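For completeness, a caller might drive this class along these lines (a hypothetical sketch; cameraFrameBytes stands in for whatever supplies your BGRA frames):

SPVideoRecorder *videoRecorder = [[SPVideoRecorder alloc] init];
videoRecorder.movieFileName = @"/tmp/recording.mp4"; // hypothetical output path
videoRecorder.framesPerSecond = 50.0;
[videoRecorder startRecordingMovie];

// In the camera's frame callback, hand the recorder the latest BGRA bytes and encode:
videoRecorder.videoFrameToEncode = cameraFrameBytes; // 640 x 480 BGRA, matching the codec setup above
[videoRecorder encodeNewFrameToMovie];

// When finished:
[videoRecorder stopRecordingMovie];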

This works under both Lion and Snow Leopard in a 64-bit application. It records at the same bitrate as my previous QuickTime-based approach, with lower overall CPU usage.

Hopefully this will help someone else in a similar situation.

This question, "cocoa - What's a 10.6-compatible means of recording video frames to a movie without using the QuickTime API?", originally appeared on Stack Overflow: https://stackoverflow.com/questions/6795157/
