
iphone - (iOS) OpenGL ES (2.0) application: how do I move an object in Z?

Reposted · Author: 行者123 · Updated: 2023-11-29 11:24:17

I'm working from the OpenGL ES (2.0) application (iOS) source code in Xcode and trying to make the colored square move along the Z axis, so I tried changing the shader code from

gl_Position.y += sin(translate) / 2.0;

to

gl_Position.z += sin(translate) / 2.0;

with no success. The square doesn't move at all. It moves fine in X and Y though... Do I need to enable some option when initializing OpenGL? Thanks!

Update: I've uploaded an example. It is essentially the OpenGL ES template that Xcode generates; I only added the calls that create a depth buffer and changed gl_Position.x to gl_Position.z += sin(translate)/2.0f in Shader.vsh.

I expected to see the square move sinusoidally along the Z axis, but it doesn't. It either stays still, or, if I multiply sin() by a factor, it cyclically appears and disappears.

I'd really appreciate any help, because I honestly don't know what else to try; believe me, I've tried a lot...

The source code is in a zip: http://cl.ly/24240x2D1t2A3I0c1l1P

Thanks!

Best Answer

The sample you are looking at has no depth buffer, and its projection matrix is set up for 2D GL. Take a look at the aurioTouch sample: in its EAGLView class you'll notice an option for adding a depth buffer. Combining the two (since aurioTouch doesn't implement shaders) should give you a better picture.

I think the order of operations in your approach is causing the problem. Here is the code I use in my app "Live Effects Cam", which puts the live camera feed onto shapes as a GL texture:

#define DEGREES_TO_RADIANS(__ANGLE__) ((__ANGLE__) / 180.0 * M_PI)



@interface GLView : UIView
{
@private
/* The pixel dimensions of the backbuffer */
GLint backingWidth;
GLint backingHeight;

EAGLContext *context;

/* OpenGL names for the renderbuffer and framebuffers used to render to this view */
GLuint viewRenderbuffer;
GLuint viewFramebuffer;
GLuint depthRenderbuffer;

/* OpenGL name for the sprite texture */
GLuint spriteTexture;
}

@property (readonly) GLint backingWidth;
@property (readonly) GLint backingHeight;
@property (readonly) EAGLContext *context;


- (void) drawView;
- (BOOL) createFramebuffer;
- (void) destroyFramebuffer;
+ (UIImage *) snapshot:(GLView *)eaglview;

@end




@implementation GLView


@synthesize backingWidth;
@synthesize backingHeight;
@synthesize context;


+ (Class) layerClass
{
return [CAEAGLLayer class];
}



- (id)init
{
self = [super initWithFrame:CGRectMake(0.0, 0.0, 480.0, 640.0)]; // size of the camera image being captured

if ( self==nil )
return self;


// Set Content Scaling
//
if ( HIRESDEVICE )
{
self.contentScaleFactor = (CGFloat)2.0;
}

// Get our backing layer
//
CAEAGLLayer *eaglLayer = (CAEAGLLayer*) self.layer;

// Configure it so that it is opaque, does not retain the contents of the backbuffer when displayed, and uses RGBA8888 color.
//
eaglLayer.opaque = YES;

eaglLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:FALSE], kEAGLDrawablePropertyRetainedBacking,
kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat,
nil];

// Create our EAGLContext, and if successful make it current and create our framebuffer.
//
context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES1];

if(!context || ![EAGLContext setCurrentContext:context] || ![self createFramebuffer])
{
[self release];
return nil;
}

// Final View Settings
//
[self setOpaque:YES];
self.multipleTouchEnabled = YES;
self.backgroundColor = [UIColor clearColor];

[EAGLContext setCurrentContext:context];

glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);

glMatrixMode(GL_PROJECTION);
glLoadIdentity();

GLfloat zNear = 1.0;
GLfloat zFar = 1000.0;
GLfloat fieldOfView = 90; // Lens Angle of View
GLfloat size = zNear * tanf(DEGREES_TO_RADIANS(fieldOfView) / 2.0);
CGRect rect = CGRectMake( (CGFloat)0.0, (CGFloat)0.0, backingWidth, backingHeight);

glFrustumf(-size, size, -size / (rect.size.width / rect.size.height), size / (rect.size.width / rect.size.height), zNear, zFar);

glViewport(0, 0, backingWidth, backingHeight);

glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);
glEnable(GL_MULTISAMPLE);
glEnable(GL_LINE_SMOOTH);
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);
glHint(GL_POINT_SMOOTH_HINT, GL_NICEST);
glDisable(GL_ALPHA_TEST);

// Turn Translucent Textures: OFF
//
glDisable(GL_BLEND);

// // Turn Translucent Textures: ON
// //
// glEnable(GL_BLEND);
// glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);

return self;
}



- (void) drawView
{
[context presentRenderbuffer:GL_RENDERBUFFER_OES];
}




- (BOOL)createFramebuffer
{
// Generate IDs for a framebuffer object and a color renderbuffer
//
glGenFramebuffersOES(1, &viewFramebuffer);
glGenRenderbuffersOES(1, &viewRenderbuffer);

glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);

// This call associates the storage for the current render buffer with the EAGLDrawable (our CAEAGLLayer)
// allowing us to draw into a buffer that will later be rendered to screen wherever the layer is (which corresponds with our view).
//
[context renderbufferStorage:GL_RENDERBUFFER_OES fromDrawable:(CAEAGLLayer*)self.layer];

glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, viewRenderbuffer);

glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);

// If this app uses a depth buffer, we'll create and attach one via another renderbuffer.
//
if ( YES )
{
glGenRenderbuffersOES(1, &depthRenderbuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, depthRenderbuffer);
glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_DEPTH_COMPONENT16_OES, backingWidth, backingHeight);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_DEPTH_ATTACHMENT_OES, GL_RENDERBUFFER_OES, depthRenderbuffer);
}

if(glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES) != GL_FRAMEBUFFER_COMPLETE_OES)
{
NSLog(@"failed to make complete framebuffer object %x", glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES));
return NO;
}

return YES;
}


- (void) destroyFramebuffer
{
glDeleteFramebuffersOES(1, &viewFramebuffer);
viewFramebuffer = 0;

glDeleteRenderbuffersOES(1, &viewRenderbuffer);
viewRenderbuffer = 0;

if(depthRenderbuffer)
{
glDeleteRenderbuffersOES(1, &depthRenderbuffer);
depthRenderbuffer = 0;
}
}




+ (UIImage *) snapshot:(GLView *)eaglview
{
NSInteger x = 0;
NSInteger y = 0;
NSInteger width = [eaglview backingWidth];
NSInteger height = [eaglview backingHeight];
NSInteger dataLength = width * height * 4;

// Need to do this to get GL to flush before taking the snapshot
//
NSUInteger i;
for ( i=0; i<100; i++ )
{
glFlush();
CFRunLoopRunInMode(kCFRunLoopDefaultMode, (float)1.0/(float)60.0, FALSE);
}

GLubyte *data = (GLubyte*)malloc(dataLength * sizeof(GLubyte));

// Read pixel data from the framebuffer
//
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);

// Create a CGImage with the pixel data
// If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel
// otherwise, use kCGImageAlphaPremultipliedLast
//
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast, ref, NULL, true, kCGRenderingIntentDefault);

// OpenGL ES measures data in PIXELS
// Create a graphics context with the target size measured in POINTS
//
NSInteger widthInPoints;
NSInteger heightInPoints;

if (NULL != UIGraphicsBeginImageContextWithOptions)
{
// On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
// Set the scale parameter to your OpenGL ES view's contentScaleFactor
// so that you get a high-resolution snapshot when its value is greater than 1.0
//
CGFloat scale = eaglview.contentScaleFactor;
widthInPoints = width / scale;
heightInPoints = height / scale;
UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
}
else
{
// On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
//
widthInPoints = width;
heightInPoints = height;
UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
}

CGContextRef cgcontext = UIGraphicsGetCurrentContext();

// UIKit coordinate system is upside down to GL/Quartz coordinate system
// Flip the CGImage by rendering it to the flipped bitmap context
// The size of the destination area is measured in POINTS
//
CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);

// Retrieve the UIImage from the current context
UIImage *image = UIGraphicsGetImageFromCurrentImageContext(); // autoreleased image

UIGraphicsEndImageContext();

// Clean up
free(data);
CFRelease(ref);
CFRelease(colorspace);
CGImageRelease(iref);

return image;
}


@end

Regarding iphone - (iOS) OpenGL ES (2.0) application: how do I move an object in Z?, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/4280231/
