NJ5 said:
@Squilliam: I can definitely understand processing 1 frame in advance, to let the GPU and CPU work at the same time. However, adding more frames of lag to exploit more cores was news to me. It is also not a scalable method of using multiple cores, unless we're prepared for n frames of lag.

Where are you getting this? Look at the GoW PDF you mentioned, slides 13 and 14:

Slide 13 is how you work with a single CPU and a GPU: the CPU prepares the scene for frame n+1 and then runs the input processing/physics/AI for frame n+2, while the GPU renders frame n.

Slide 14 is how you work when offloading to SPUs: the scene prep is still for frame n+1 and the simulation is still for frame n+2, but the work is parallelized across the SPUs together with part of the rendering, so each stage finishes sooner and you can actually sustain a higher framerate consistently (CPU work that doesn't fit in the frame means skipping a frame).
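To make those frame offsets concrete, here's a minimal sketch of the slide-13 style pipeline (all names are hypothetical, this is not the actual GoW code, and the concurrency is elided; the point is just which frame index each stage touches on a given iteration):

```cpp
// Three frames are in flight: while the GPU renders frame n, the CPU builds
// the display list for frame n+1 and then simulates frame n+2. A real engine
// runs these stages concurrently (and slide 14 spreads them across SPU jobs);
// this serial loop only illustrates the data flow between frames.
#include <cstdio>

struct FrameState  { int frame; /* input, physics, AI results */ };
struct DisplayList { int frame; /* GPU commands built from a FrameState */ };

FrameState  simulate(int frame)             { return {frame}; }
DisplayList buildScene(const FrameState& s) { return {s.frame}; }
void        render(const DisplayList& dl)   { std::printf("GPU renders frame %d\n", dl.frame); }

int main() {
    // Prime the pipeline: simulate frames 0 and 1, build the list for frame 0.
    FrameState  simulated = simulate(1);
    DisplayList ready     = buildScene(simulate(0));

    for (int n = 0; n < 5; ++n) {
        render(ready);                     // GPU: frame n
        ready     = buildScene(simulated); // CPU: scene for frame n+1
        simulated = simulate(n + 2);       // CPU: input/physics/AI for frame n+2
    }
}
```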

In both cases they showed a two-frame offset between the rendered frame and the simulated frame where player input is read. Of course there will be more frames of lag in practice, at the very least the usual one for double buffering, I suppose.
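For a rough sense of scale (my own arithmetic, not from the slides): at 30 fps a frame is about 33 ms, so a two-frame simulation-to-render offset plus one frame of double buffering works out to 3 × 33 ms ≈ 100 ms from reading the input to the image reaching the screen, before counting any display latency.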



"All you need in life is ignorance and confidence; then success is sure." - Mark Twain

"..." - Gordon Freeman