

I’ve been working on my own engine and have come upon a problem. I’ve searched the net and haven’t found a definitive “best practice”, so I’ve come to this forum to seek some guidance.

In my engine I have a “Camera” class which is exposed to users of the engine. A camera can also have any number of “PostProcessors” attached to it (such as turning everything grayscale, tilt shift, some weather effects, etc.). The PostProcessor API defines that each PostProcessor takes, as input, a color and/or depth texture; these textures represent the color and/or depth that the camera has rendered.

My problem comes from the fact that the cameras are sometimes rendering to FBOs which are NOT guaranteed to have a color texture. An example of this is when the cameras are rendering to the “default FBO”, which on iOS devices is constructed by calling EAGLContext’s renderbufferStorage:fromDrawable: and passing in a CAEAGLLayer. The default FBO is, in most cases, using a render buffer for the color attachment instead of a texture.

So, for my PostProcessor API to behave properly in all cases (as well as to support other features such as temporal motion blur), I can see two potential ways to go.

The first is to have all cameras, at all times, render to a different “default FBOPrime”, an FBO which has a color and/or depth texture attached to it, and then, at the end of every frame, bind the “default FBO” (the one with the color render buffer) and do a full-screen blit of FBOPrime’s color texture.
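Something along these lines is what I have in mind for creating FBOPrime. This is a minimal sketch, assuming an OpenGL ES 2.0 context; the names (createFboPrime, fboPrime, colorTex, depthRb) and the width/height parameters are placeholders, not actual engine code:

```c
#include <OpenGLES/ES2/gl.h>  /* iOS OpenGL ES 2.0 header */

/* Create an FBO with a color texture attachment (the texture the post
 * processors will later sample) plus a depth renderbuffer. A depth
 * *texture* instead would need the OES_depth_texture extension. */
static GLuint createFboPrime(GLsizei width, GLsizei height, GLuint *outColorTex)
{
    GLuint fboPrime, colorTex, depthRb;

    /* Color texture at the drawable size. */
    glGenTextures(1, &colorTex);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    /* Depth renderbuffer. */
    glGenRenderbuffers(1, &depthRb);
    glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, width, height);

    /* Attach both to the new FBO. */
    glGenFramebuffers(1, &fboPrime);
    glBindFramebuffer(GL_FRAMEBUFFER, fboPrime);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, colorTex, 0);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER, depthRb);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        /* handle the error */
    }

    *outColorTex = colorTex;
    return fboPrime;
}
```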

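The end-of-frame blit would then be a full-screen textured quad draw into the default FBO, since OpenGL ES 2.0 has no glBlitFramebuffer. Again just a sketch; blitProgram is assumed to be a trivial pass-through shader with a_position/a_texcoord attributes and a u_texture sampler:

```c
/* Draw FBOPrime's color texture over the whole default framebuffer. */
static void blitToDefaultFbo(GLuint defaultFbo, GLuint colorTex,
                             GLuint blitProgram, GLsizei width, GLsizei height)
{
    static const GLfloat quad[] = {
        /*  x      y     u     v  */
        -1.0f, -1.0f, 0.0f, 0.0f,
         1.0f, -1.0f, 1.0f, 0.0f,
        -1.0f,  1.0f, 0.0f, 1.0f,
         1.0f,  1.0f, 1.0f, 1.0f,
    };

    glBindFramebuffer(GL_FRAMEBUFFER, defaultFbo);
    glViewport(0, 0, width, height);
    glDisable(GL_DEPTH_TEST);

    glUseProgram(blitProgram);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glUniform1i(glGetUniformLocation(blitProgram, "u_texture"), 0);

    /* Client-side vertex arrays are fine in ES 2.0 (no VAO needed). */
    GLint pos = glGetAttribLocation(blitProgram, "a_position");
    GLint uv  = glGetAttribLocation(blitProgram, "a_texcoord");
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glEnableVertexAttribArray(pos);
    glEnableVertexAttribArray(uv);
    glVertexAttribPointer(pos, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat), quad);
    glVertexAttribPointer(uv, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat), quad + 2);

    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    /* On iOS the frame is then presented with EAGLContext's
     * presentRenderbuffer: after binding the color renderbuffer. */
}
```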
I think the first option is problematic because of the performance. If there are no post processors on any camera, then the cameras will be rendering to textures that don’t need to exist, and since rendering to a texture is slower than rendering to a render buffer (from my understanding), this becomes a performance hit whenever no post processors are present.

The second option is to copy the contents of the “default FBO” (the one with the render buffer color attachment) to a “scratch FBO” that has a color and a depth texture, but only when the camera has post processors. I only have to do this copy once per camera, and only if the camera has post processors attached to it. Not an ideal scenario, but I am having difficulty thinking of a different workaround. To do the copy into a texture, I was planning to use glCopyTexSubImage2D. From what I have read, though, glCopyTexSubImage2D is costly on PowerVR chips because of their TBDR (tile-based deferred rendering) parallelization.

I’m going to go forward with glCopyTexSubImage2D just to get some numbers, but I’m open to better alternatives. Are there other ways to get the behavior I want (copying an FBO color attachment to a texture) which would be less costly? Would glCopyTexSubImage2D be any more costly than rendering to a texture (instead of a render buffer) and doing a full-screen blit of that texture?

Any help would be greatly appreciated.
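For reference, here is roughly the copy path I plan to benchmark. A minimal sketch, again assuming ES 2.0 and placeholder names (defaultFbo, scratchColorTex); it covers only the color side of the copy, and scratchColorTex must already have been allocated with glTexImage2D at the drawable size:

```c
/* Copy the color attachment of the default FBO into the scratch texture.
 * glCopyTexSubImage2D reads from the currently bound framebuffer, so the
 * default FBO must be bound when this runs. */
static void copyColorToScratch(GLuint defaultFbo, GLuint scratchColorTex,
                               GLsizei width, GLsizei height)
{
    glBindFramebuffer(GL_FRAMEBUFFER, defaultFbo);  /* copy source */
    glBindTexture(GL_TEXTURE_2D, scratchColorTex);  /* copy destination */

    /* On a tile-based deferred renderer this forces the pending tiles to
     * be resolved before the copy, which is where the reported cost
     * comes from. */
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0,
                        0, 0,           /* dest offset in the texture */
                        0, 0,           /* source origin in the FBO   */
                        width, height);
}
```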
