
ios - CVOpenGLESTextureCacheCreateTextureFromImage on iPad 2 is too slow, taking nearly 30 ms

Reposted · Author: 可可西里 · Updated: 2023-11-01 05:37:56

I am displaying BGR24 data on an iPad using OpenGL ES. I am new to OpenGL ES, so for the video-display part I adapted Apple's RosyWriter sample code. It works, but the CVOpenGLESTextureCacheCreateTextureFromImage call takes more than 30 ms, while in RosyWriter its cost is negligible. What I do is first convert BGR24 to the BGRA pixel format, then create a CVPixelBufferRef with CVPixelBufferCreateWithBytes, and finally obtain a CVOpenGLESTextureRef via CVOpenGLESTextureCacheCreateTextureFromImage. My code is as follows:

- (void)transformBGRToBGRA:(const UInt8 *)pict width:(int)width height:(int)height
{
    rgb.data = (void *)pict;

    // Treat the BGR24 input as RGB888; the resulting "ARGB" buffer
    // therefore holds channels in A,B,G,R order.
    vImage_Error error = vImageConvert_RGB888toARGB8888(&rgb, NULL, 0, &argb, NO, kvImageNoFlags);
    if (error != kvImageNoError) {
        NSLog(@"vImageConvert_RGB888toARGB8888 error");
    }

    // Permute A,B,G,R -> B,G,R,A to match kCVPixelFormatType_32BGRA.
    const uint8_t permuteMap[4] = {1, 2, 3, 0};
    error = vImagePermuteChannels_ARGB8888(&argb, &bgra, permuteMap, kvImageNoFlags);
    if (error != kvImageNoError) {
        NSLog(@"vImagePermuteChannels_ARGB8888 error");
    }

    free((void *)pict);
}
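The two vImage calls above amount to a per-pixel channel shuffle. A minimal scalar sketch of the same BGR24 → BGRA transform in plain C (hypothetical function and buffer names; no vImage, so it shows the intended byte order rather than the SIMD fast path):

```c
#include <stdint.h>

/* Expand tightly packed BGR24 to BGRA32. This mirrors what
 * vImageConvert_RGB888toARGB8888 (fed BGR data, alpha = 0) followed by
 * vImagePermuteChannels_ARGB8888 with map {1,2,3,0} produces. */
static void bgr24_to_bgra32(const uint8_t *src, uint8_t *dst,
                            int width, int height)
{
    for (int i = 0; i < width * height; i++) {
        dst[4 * i + 0] = src[3 * i + 0]; /* B */
        dst[4 * i + 1] = src[3 * i + 1]; /* G */
        dst[4 * i + 2] = src[3 * i + 2]; /* R */
        dst[4 * i + 3] = 0;              /* A = 0, as in the vImage call */
    }
}
```

With a 3 MB frame this scalar loop would be much slower than vImage, but it makes the byte layout explicit.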

After the conversion, a CVPixelBufferRef is created as follows:

[self transformBGRToBGRA:pict width:width height:height];

CVPixelBufferRef pixelBuffer;
CVReturn err = CVPixelBufferCreateWithBytes(NULL,
                                            width,
                                            height,
                                            kCVPixelFormatType_32BGRA,
                                            (void *)bgraData,
                                            bytesByRow,
                                            NULL,
                                            0,
                                            NULL,
                                            &pixelBuffer);
if (!pixelBuffer || err) {
    NSLog(@"CVPixelBufferCreateWithBytes failed (error: %d)", err);
    return;
}

CVOpenGLESTextureRef texture = NULL;
err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                   videoTextureCache,
                                                   pixelBuffer,
                                                   NULL,
                                                   GL_TEXTURE_2D,
                                                   GL_RGBA,
                                                   width,
                                                   height,
                                                   GL_BGRA,
                                                   GL_UNSIGNED_BYTE,
                                                   0,
                                                   &texture);
if (!texture || err) {
    NSLog(@"CVOpenGLESTextureCacheCreateTextureFromImage failed (error: %d)", err);
    CVPixelBufferRelease(pixelBuffer);
    return;
}

The rest of the code, including the shaders, is almost identical to the RosyWriter sample. So I would like to know why this happens and how to fix it.

Best Answer

After a few days of research, I found out why CVOpenGLESTextureCacheCreateTextureFromImage takes so long: when the data is large (3 MB here), the allocation, copy, and move operations inside it are quite expensive, the copy in particular. Using a pixel buffer pool greatly improved the performance of CVOpenGLESTextureCacheCreateTextureFromImage, from 30 ms down to 5 ms, the same level as glTexImage2D(). My solution is as follows:

NSMutableDictionary *attributes = [NSMutableDictionary dictionary];
[attributes setObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
               forKey:(NSString *)kCVPixelBufferPixelFormatTypeKey];
[attributes setObject:[NSNumber numberWithInt:videoWidth]
               forKey:(NSString *)kCVPixelBufferWidthKey];
[attributes setObject:[NSNumber numberWithInt:videoHeight]
               forKey:(NSString *)kCVPixelBufferHeightKey];

// Create the pool once and reuse its buffers frame after frame,
// instead of allocating a fresh CVPixelBuffer per frame.
CVPixelBufferPoolCreate(kCFAllocatorDefault, NULL, (CFDictionaryRef)attributes, &bufferPool);

CVPixelBufferPoolCreatePixelBuffer(NULL, bufferPool, &pixelBuffer);

CVPixelBufferLockBaseAddress(pixelBuffer, 0);
UInt8 *baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer);
memcpy(baseAddress, bgraData, bytesByRow * videoHeight);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

Feeding this pooled pixelBuffer to CVOpenGLESTextureCacheCreateTextureFromImage makes it much faster.
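One caveat about the single memcpy above: pooled (and especially IOSurface-backed) pixel buffers may pad each row, so the value CVPixelBufferGetBytesPerRow reports can exceed width * 4. A hedged sketch of a stride-aware copy in plain C (hypothetical names; srcStride stands for the source row size and dstStride for the pixel buffer's bytes-per-row):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Copy a width*height BGRA image row by row, honoring a destination
 * stride that may exceed the tightly packed source stride. */
static void copy_rows(const uint8_t *src, size_t srcStride,
                      uint8_t *dst, size_t dstStride,
                      int width, int height)
{
    size_t rowBytes = (size_t)width * 4; /* 4 bytes per BGRA pixel */
    for (int y = 0; y < height; y++) {
        memcpy(dst + (size_t)y * dstStride,
               src + (size_t)y * srcStride,
               rowBytes);
    }
}
```

When the two strides happen to be equal, this degenerates to the one-shot memcpy in the answer; checking the strides first costs nothing and avoids a corrupted image when they differ.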

Adding the following configuration to the attributes dictionary brings the performance to its best, under 1 ms:

NSDictionary *IOSurfaceProperties = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithBool:YES], @"IOSurfaceOpenGLESFBOCompatibility",
    [NSNumber numberWithBool:YES], @"IOSurfaceOpenGLESTextureCompatibility",
    nil];

[attributes setObject:IOSurfaceProperties
               forKey:(NSString *)kCVPixelBufferIOSurfacePropertiesKey];

Regarding "ios - CVOpenGLESTextureCacheCreateTextureFromImage on iPad 2 is too slow, taking nearly 30 ms", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/11607753/
