
Forums - Gaming Discussion - XB1 GPU Has 8 Graphics Contexts, Uses Multiple GPU Command Streams To Reduce CPU-GPU Latency

 

     During the holidays, a hacker group leaked the Xbox One SDK along with its complete documentation. The documentation revealed a few interesting tidbits, such as Microsoft opening up a 7th CPU core for developers to use and the kinds of updates, such as graphics driver optimizations, that the Xbox One has received in the past. However, it seems there are more details in the documentation, specifically related to the Xbox One's GPU.

Discovered by a user on the Beyond3D forum, who posted text that appears to be from the SDK documentation, it has been revealed that the Xbox One GPU has eight graphics contexts. For those who are unaware, a graphics context consists of all the drawing parameters and information needed to carry out drawing commands. It essentially holds basic state such as the current color, line width, styling data and other relevant data. The Xbox One allocates seven of these graphics contexts to games.
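
As a rough illustration of the idea (a hypothetical sketch in Python, not actual Xbox One SDK code), a graphics context can be thought of as a plain record of the current draw state that each draw command is carried out against:

```python
from dataclasses import dataclass, field

@dataclass
class GraphicsContext:
    """Hypothetical sketch of the draw state a graphics context tracks."""
    color: tuple = (255, 255, 255, 255)           # current RGBA draw color
    line_width: float = 1.0                       # width used for line primitives
    blend_mode: str = "opaque"                    # styling/blend state
    commands: list = field(default_factory=list)  # recorded draw calls

    def draw_line(self, start, end):
        # Each draw call is carried out using the context's current parameters.
        self.commands.append(("line", start, end, self.color, self.line_width))

# Per the leaked docs, seven such contexts are reserved for the game.
game_contexts = [GraphicsContext() for _ in range(7)]
game_contexts[0].draw_line((0, 0), (10, 10))
```

The names and fields here are invented for illustration; the point is only that a context is a bundle of state plus the commands issued under that state, not a physical hardware block.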

It was also revealed that the console supports multiple GPU command streams, which carry instructions for rendering and compute. Both kinds of commands pass through the GPU simultaneously, allowing compute and rendering work to run as two parallel processes that share the same bandwidth resources. This results in a low-latency exchange between GPU and CPU. However, we are not sure whether developers can explicitly push one task type ahead of the other or whether the GPU automatically prioritizes the queue elements.
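
As a loose analogy (plain Python threads, nothing from the actual SDK), two command streams being serviced in parallel might be sketched like this, with a render queue and a compute queue drained concurrently while sharing the same underlying resources:

```python
import queue
import threading

# Hypothetical sketch: a render stream and a compute stream serviced in
# parallel, mirroring the two GPU command streams described above.
render_q = queue.Queue()
compute_q = queue.Queue()

completed = []            # shared "hardware" the two streams contend for
lock = threading.Lock()

def drain(q):
    # Pull commands off one stream until it is empty.
    while True:
        try:
            cmd = q.get_nowait()
        except queue.Empty:
            return
        with lock:
            completed.append(cmd)

# Queue up some rendering and compute work before the streams start.
for i in range(3):
    render_q.put(f"draw_{i}")
    compute_q.put(f"dispatch_{i}")

# Both streams make progress concurrently on the shared resource.
threads = [threading.Thread(target=drain, args=(q,)) for q in (render_q, compute_q)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

This is only a scheduling analogy: on real hardware the interleaving and prioritization are handled by the GPU front end, which, as noted above, may or may not be under developer control.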

It must be noted that we are not sure whether the PlayStation 4 follows a similar implementation to the Xbox One. It will be rather intriguing to learn how the PS4 reduces latency during CPU-GPU exchanges.

Moving on, the document also revealed how developers can use command lists and draw bundles to improve CPU performance, though these two methods won't help much if a game is GPU bound. Both methods are recorded using a deferred context. A deferred context essentially records graphics commands into a command buffer so that they can be executed at some later time, as required.

So if a game has already begun rendering, but there are other rendering tasks that can run in parallel, developers can use a deferred context on another CPU thread. The commands are recorded and then executed later on the immediate context. This same process can be repeated across multiple CPU threads, thereby improving performance.
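
The pattern above can be sketched in a few lines (hypothetical class names, loosely modeled on the Direct3D 11 deferred/immediate context split rather than on the actual SDK): several worker threads record commands into their own deferred contexts, and the results are replayed on the single immediate context afterwards.

```python
import threading

class DeferredContext:
    """Hypothetical sketch: records commands without executing them."""
    def __init__(self):
        self.command_buffer = []

    def record(self, command):
        # Commands are only recorded here, not executed.
        self.command_buffer.append(command)

class ImmediateContext:
    """Hypothetical sketch: the one context that actually executes work."""
    def __init__(self):
        self.executed = []

    def execute_command_list(self, deferred):
        # Playback happens later, on the single immediate context.
        self.executed.extend(deferred.command_buffer)

immediate = ImmediateContext()
deferred_ctxs = [DeferredContext() for _ in range(4)]  # one per worker thread

def worker(ctx, n):
    for i in range(3):
        ctx.record(f"draw_{n}_{i}")

threads = [threading.Thread(target=worker, args=(c, n))
           for n, c in enumerate(deferred_ctxs)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# All recorded command lists are replayed on the immediate context.
for ctx in deferred_ctxs:
    immediate.execute_command_list(ctx)
```

The CPU win comes from the recording step being spread across threads; only the cheap playback is serialized, which is also why the technique doesn't help a GPU-bound game.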


Read more at http://gamingbolt.com/xbox-one-gpu-has-8-graphics-contexts-uses-multiple-gpu-command-streams-to-reduce-cpu-gpu-latency#vVO8AI8MxtOs1DtF.99




At the end of the day, PS4 - 1.84 tflops and Xbox One - 1.31 tflops (up from 1.21 tflops after freeing the mandatory Kinect reserve). Nothing will change it. No matter how many turbochargers you put in a car, it needs more horsepower to begin with.



Microsoft magic. It still pays that they got some of the brightest people in 3D decades ago...



daredevil.shark said:
At the end of the day, PS4 - 1.84 tflops and Xbox One - 1.31 tflops (up from 1.21 tflops after freeing the mandatory Kinect reserve). Nothing will change it. No matter how many turbochargers you put in a car, it needs more horsepower to begin with.


Nothing will change the fact that a GPU doesn't do anything on its own.

PS4 bandwidth goes down when the CPU cores access GDDR. Also, the almost non-existent CPU cache (on both consoles) means more cache misses, which will hurt even more when application data is stored in GDDR instead of DDR.

Don't think that Microsoft was clueless when designing hardware to run their software stack. They have a "broader picture" - but it needs someone to get it painted.



daredevil.shark said:
At the end of the day, PS4 - 1.84 tflops and Xbox One - 1.31 tflops (up from 1.21 tflops after freeing the mandatory Kinect reserve). Nothing will change it. No matter how many turbochargers you put in a car, it needs more horsepower to begin with.

It's true the PS4 will always be the more powerful of the two (and by a relatively significant amount), but that's not really relevant to the topic :p As the article says, we really don't know exactly how the PS4 tackles this, so there's not much point in comparing the two on it.



mine said:
daredevil.shark said:
At the end of the day, PS4 - 1.84 tflops and Xbox One - 1.31 tflops (up from 1.21 tflops after freeing the mandatory Kinect reserve). Nothing will change it. No matter how many turbochargers you put in a car, it needs more horsepower to begin with.


Nothing will change the fact that a GPU doesn't do anything on its own.

PS4 bandwidth goes down when the CPU cores access GDDR. Also, the almost non-existent CPU cache (on both consoles) means more cache misses, which will hurt even more when application data is stored in GDDR instead of DDR.

Don't think that Microsoft was clueless when designing hardware to run their software stack. They have a "broader picture" - but it needs someone to get it painted.

But the X1 has a lot less memory bandwidth in real game workloads, and the PS4 has so many advantages over the X1 that I don't even know where to start.

PS4 APU >> X1 APU. Of course both will see better-looking games year after year, as devs learn more and spend more time on them. But there's no magic sauce.



”Every great dream begins with a dreamer. Always remember, you have within you the strength, the patience, and the passion to reach for the stars to change the world.”

Harriet Tubman.

Actually, a lot of this is known or is common sense, and the PS4 is similar due to the SoC using essentially the same GPU architecture. Sure, one is more powerful, but don't expect AMD to reinvent the wheel with each system... heck, they are both even based on existing PC GPUs.

...



CosmicSex said:
Actually, a lot of this is known or is common sense, and the PS4 is similar due to the SoC using essentially the same GPU architecture. Sure, one is more powerful, but don't expect AMD to reinvent the wheel with each system... heck, they are both even based on existing PC GPUs.

...


Although the PS4 has, what was it, 8 ACE units with 64 queues total, compared to the Xbox One's 2 ACE units with 16 queues.



Also, you guys realize that a graphics context isn't a physical thing, right? It's an aspect of whatever graphics API the SDK is referring to.
In other words, it has no bearing on the technical capabilities of the machine...

A graphics context is a virtual container for storing information about graphics. For example, in some programming languages you have to draw lines, images and other on-screen objects to a 'graphics context' before they can be displayed, and you can draw different objects to different contexts and display each as necessary, given that you have the memory to do so.
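
A toy sketch of that idea (invented API, not tied to any real SDK): objects are drawn into separate off-screen contexts, nothing is visible until a context is presented, and each context can be presented independently.

```python
class Context:
    """Hypothetical off-screen drawing surface."""
    def __init__(self, name):
        self.name = name
        self.objects = []

    def draw(self, obj):
        # Drawing only records into the container; nothing appears on screen.
        self.objects.append(obj)

    def present(self):
        # Only at present time does the context's content become "visible".
        return f"{self.name}: {', '.join(self.objects)}"

# Different objects go to different contexts, presented as needed.
ui = Context("ui")
world = Context("world")
ui.draw("health_bar")
world.draw("terrain")
world.draw("player")

frames = [world.present(), ui.present()]
```

The memory caveat in the text maps directly onto this sketch: every live context keeps its own list of drawn objects (or, on real hardware, its own buffer), so more contexts cost more memory.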



blackjackk said:
CosmicSex said:
Actually, a lot of this is known or is common sense, and the PS4 is similar due to the SoC using essentially the same GPU architecture. Sure, one is more powerful, but don't expect AMD to reinvent the wheel with each system... heck, they are both even based on existing PC GPUs.

...


Although the PS4 has, what was it, 8 ACE units with 64 queues total, compared to the Xbox One's 2 ACE units with 16 queues.


Yes, that is correct. The queues are responsible for organizing and scheduling the kinds of computational tasks on the GPU that are commonly associated with the CPU. This, in addition to the raw horsepower advantage of the extra GCN cores, is most likely responsible for the huge disparity in general-purpose compute power described in Ubisoft's presentation from August.

 

It also explains why Microsoft came out last week saying that compute on the Xbox One was being improved with an SDK update. It's definitely something they know is important, and they want to make their system more efficient at it.