
ios - Most efficient way to modify the contents of a CMSampleBuffer

Reposted · Author: 可可西里 · Updated: 2023-11-01 04:28:17

I want to modify the contents of a CMSampleBuffer and then write the result to a file with AVAssetWriter/AVAssetWriterInput.

The way I am doing this now is by creating a Core Graphics bitmap context and drawing into it, but that is far too slow. Specifically, I need to draw an image into the buffer.
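For reference, a minimal sketch of that Core Graphics approach (the slow baseline). It assumes the sample buffer holds a 32-bit BGRA CVPixelBuffer; the method name and the `overlay` image are illustrative, not from the original post:

```objectivec
// Sketch of the bitmap-context approach described above (the slow path).
// Assumes a 32-bit BGRA pixel buffer; `overlay` is a hypothetical image to draw.
- (void)drawImage:(CGImageRef)overlay intoSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);

    size_t width = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);

    // Wrap the pixel buffer's memory in a bitmap context and draw directly into it.
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(pixelBuffer),
                                                 width, height, 8, bytesPerRow, colorspace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), overlay);

    CGContextRelease(context);
    CGColorSpaceRelease(colorspace);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
}
```

Every call to `CGContextDrawImage` here runs on the CPU over the full frame, which is why this path is slow at video frame rates.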

So, can anyone offer a hint or suggestion on how to do this more efficiently?

I have thought about doing it with OpenGL: first create a texture A from the CMSampleBuffer, then render a texture B (created from the image I want to draw) into texture A, and finally retrieve the data backing texture A from OpenGL and hand it to AVAssetWriter/AVAssetWriterInput. But the documentation says that transferring texture data from the GPU back to the CPU is somewhat expensive.

So, any suggestions on how to approach this?

Thanks in advance.

Best Answer

OpenGL is probably the way to go. However, rendering to an offscreen framebuffer rather than to a texture may be slightly more efficient.

To extract a texture from a sample buffer:

// Note the caller is responsible for calling glDeleteTextures on the return value.
- (GLuint)textureFromSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    GLuint texture = 0;

    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    // CVPixelBufferGetWidth/Height return size_t; cast for glTexImage2D.
    int width = (int)CVPixelBufferGetWidth(pixelBuffer);
    int height = (int)CVPixelBufferGetHeight(pixelBuffer);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE,
                 CVPixelBufferGetBaseAddress(pixelBuffer));
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

    return texture;
}

To process the texture through OpenGL, you could do something like this:

// This function exists to free the malloced data when the CGDataProviderRef is
// eventually freed.
void dataProviderFreeData(void *info, const void *data, size_t size) {
    free((void *)data);
}

// Returns an autoreleased CGImageRef.
- (CGImageRef)processTexture:(GLuint)texture width:(int)width height:(int)height {
    CGImageRef newImage = NULL;

    // Set up framebuffer and renderbuffer.
    GLuint framebuffer;
    glGenFramebuffers(1, &framebuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);

    GLuint colorRenderbuffer;
    glGenRenderbuffers(1, &colorRenderbuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8_OES, width, height);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRenderbuffer);

    GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
    if (status != GL_FRAMEBUFFER_COMPLETE) {
        NSLog(@"Failed to create OpenGL frame buffer: %x", status);
    } else {
        glViewport(0, 0, width, height);
        glClearColor(0.0, 0.0, 0.0, 1.0);
        glClear(GL_COLOR_BUFFER_BIT);

        // Do whatever is necessary to actually draw the texture to the framebuffer
        [self renderTextureToCurrentFrameBuffer:texture];

        // Read the pixels out of the framebuffer
        void *data = malloc(width * height * 4);
        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);

        // Convert the data to a CGImageRef. Note that CGDataProviderRef takes
        // ownership of our malloced data buffer, and the CGImageRef internally
        // retains the CGDataProviderRef. Hence the callback above, to free the data
        // buffer when the provider is finally released.
        CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, data, width * height * 4, dataProviderFreeData);
        CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
        newImage = CGImageCreate(width, height, 8, 32, width * 4, colorspace,
                                 kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                 dataProvider, NULL, true, kCGRenderingIntentDefault);
        CFRelease(dataProvider);
        CGColorSpaceRelease(colorspace);

        // Autorelease the CGImageRef
        newImage = (CGImageRef)[NSMakeCollectable(newImage) autorelease];
    }

    // Clean up the framebuffer and renderbuffer.
    glDeleteRenderbuffers(1, &colorRenderbuffer);
    glDeleteFramebuffers(1, &framebuffer);

    return newImage;
}
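The question also asks about handing the result to AVAssetWriter/AVAssetWriterInput. A sketch of that final step, assuming an `AVAssetWriterInputPixelBufferAdaptor` (`adaptor`) is already attached to the writer input and the session has started; the method name is illustrative. Rather than round-tripping through a CGImageRef, the raw bytes from `glReadPixels` can be copied straight into a pixel buffer from the adaptor's pool and appended with a presentation timestamp:

```objectivec
// Hypothetical hand-off of glReadPixels output to an asset writer.
// Assumes `data` holds tightly packed 32-bit RGBA pixels (width * 4 bytes per row).
- (BOOL)appendPixels:(const void *)data
               width:(size_t)width
              height:(size_t)height
             adaptor:(AVAssetWriterInputPixelBufferAdaptor *)adaptor
                time:(CMTime)presentationTime {
    CVPixelBufferRef pixelBuffer = NULL;
    CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault,
                                       adaptor.pixelBufferPool, &pixelBuffer);
    if (pixelBuffer == NULL) return NO;

    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    uint8_t *dst = CVPixelBufferGetBaseAddress(pixelBuffer);
    size_t dstBytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
    size_t srcBytesPerRow = width * 4;
    // Copy row by row, since rows in the pool's buffers may be padded.
    for (size_t row = 0; row < height; row++) {
        memcpy(dst + row * dstBytesPerRow,
               (const uint8_t *)data + row * srcBytesPerRow,
               srcBytesPerRow);
    }
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

    BOOL ok = [adaptor appendPixelBuffer:pixelBuffer
                    withPresentationTime:presentationTime];
    CVPixelBufferRelease(pixelBuffer);
    return ok;
}
```

Using the adaptor's pixel buffer pool avoids allocating a fresh CVPixelBuffer per frame, which matters at video frame rates.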

For more on ios - Most efficient way to modify the contents of a CMSampleBuffer, see the similar question on Stack Overflow: https://stackoverflow.com/questions/4662789/
