WereKitten said:
NJ5 said:
@Squilliam: I can definitely understand processing 1 frame in advance, to let the GPU and CPU work at the same time. However, adding more frames of lag to exploit more cores was news to me. It's also not a scalable way of using multiple cores, unless we're prepared to accept n frames of lag.

Where are you getting this? Look at the GoW PDF you mentioned, slides 13 and 14:

Slide 13 is how you work with a single CPU and a GPU: the CPU prepares the scene for frame n+1 and then runs the input processing/physics/AI for frame n+2, while the GPU renders frame n.

Slide 14 is how you work when offloading to the SPUs: the scene is still frame n+1 and the simulation is still frame n+2, but the work is parallelized across the SPUs together with part of the rendering, so each stage finishes sooner and you can consistently hit a higher framerate (CPU work that doesn't fit within a frame means skipping a frame).

In both cases they showed a two-frame offset between the rendered frame and the simulated frame where player input is read. Of course there will be more frames of lag in practice; at the very least the usual one for double buffering, I suppose.
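
To make that slide-13 layout concrete, here's a minimal sketch of such a pipelined loop. Every name in it (readInputAndSimulate, buildCommandList, kickGpu, waitForGpuToFinishFrame) is a made-up stand-in to illustrate the idea, not anything from the GoW3 slides:

[code]
#include <cstdint>

struct GameState   { /* positions, animation, AI state ... */ };
struct CommandList { /* draw commands for one frame ... */ };

// Hypothetical platform hooks (stubs for the sketch, not a real API):
void readInputAndSimulate(GameState&) { /* input, physics, AI */ }
void buildCommandList(const GameState&, CommandList&) { /* fill draw commands */ }
void kickGpu(const CommandList&) { /* submit; GPU renders asynchronously */ }
void waitForGpuToFinishFrame(std::uint64_t) { /* block on a GPU fence */ }

int main() {
    GameState     states[3];    // ring buffer holding frames n, n+1, n+2
    CommandList   cmdLists[2];  // double-buffered command lists
    std::uint64_t n = 0;

    for (;;) {
        // CPU work for this iteration, the slide-13 split:
        readInputAndSimulate(states[(n + 2) % 3]);                    // simulate frame n+2
        buildCommandList(states[(n + 1) % 3], cmdLists[(n + 1) % 2]); // prepare the scene of frame n+1
        kickGpu(cmdLists[(n + 1) % 2]);                               // submit frame n+1

        // Meanwhile the GPU has been rendering frame n (submitted last
        // iteration). Wait for it before reusing its resources, so the
        // pipeline stays about two frames deep between input and display.
        if (n > 0) waitForGpuToFinishFrame(n);
        ++n;
    }
}
[/code]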

Slide 4 shows the traditional way of doing things ever since GPUs came into use (2 simultaneous frames, with a CPU->GPU pipeline).

Slide 5 shows the same thing with an additional CPU in the pipeline (3 frames, CPU0->CPU1->GPU).

Slide 13 shows GOW3's way before they used SPUs (3 frames, PPU->PPU->GPU).

Slide 14 shows GOW3's way after offloading certain parts to the SPUs (3 frames, PPU->SPUs->GPU).
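
The slide-14 change is essentially that the simulation stage itself gets split into jobs and spread across the SPUs, so the stage finishes inside its frame budget; the pipeline stays 3 frames deep. A rough sketch of that idea, with plain std::thread standing in for SPU jobs and purely illustrative names:

[code]
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

struct Entity { /* transform, physics body, animation, AI state ... */ };

// One job: advance a contiguous slice of entities by one frame.
void simulateSlice(std::vector<Entity>& entities, std::size_t begin, std::size_t end, float dt) {
    for (std::size_t i = begin; i < end; ++i) {
        (void)entities[i]; (void)dt; // physics/animation/AI update would go here
    }
}

// Run one frame of simulation split across `workerCount` workers
// (SPUs on the PS3, ordinary threads here). The pipeline depth is
// unchanged; the stage just finishes sooner.
void simulateFrameParallel(std::vector<Entity>& entities, float dt, unsigned workerCount) {
    if (workerCount == 0) workerCount = 1;
    const std::size_t chunk = (entities.size() + workerCount - 1) / workerCount;
    std::vector<std::thread> workers;

    for (unsigned w = 0; w < workerCount; ++w) {
        const std::size_t begin = w * chunk;
        const std::size_t end   = std::min(entities.size(), begin + chunk);
        if (begin >= end) break;
        workers.emplace_back(simulateSlice, std::ref(entities), begin, end, dt);
    }
    for (auto& t : workers) t.join(); // the simulated frame (n+2) is complete here
}
[/code]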

haxxiy said:
Squilliam said:
@NJ5 It's also common practice in computer games for the CPU to process 1-2 frames ahead of the rendering.

 

1-2 frames is too few. The default for NVIDIA cards is three, I think.

Interesting. Where can I read about that? And where is each of those frames located at each moment?
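
For the PC side of this: the number of frames the CPU is allowed to queue ahead of the GPU is the driver's render-ahead setting ("max pre-rendered frames" in the NVIDIA control panel). An application can also cap it itself via IDXGIDevice1::SetMaximumFrameLatency; a minimal D3D11 sketch, assuming an existing device and skipping most error handling:

[code]
#include <d3d11.h>
#include <dxgi.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Cap how many frames the CPU may queue ahead of the GPU.
// 1 trades throughput for the lowest input latency; 3 matches the
// usual driver default mentioned above.
HRESULT CapRenderAhead(ID3D11Device* device, UINT maxFramesAhead)
{
    ComPtr<IDXGIDevice1> dxgiDevice;
    HRESULT hr = device->QueryInterface(IID_PPV_ARGS(&dxgiDevice));
    if (FAILED(hr))
        return hr;

    return dxgiDevice->SetMaximumFrameLatency(maxFramesAhead);
}
[/code]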

 


