
Forums - Microsoft Discussion - Is “GPU Acceleration” Xbox One Secret Sauce?

 

NobleTeam360 said:

Wow, still a ton of rumors happening for the Xbox One in September? Either the ******* are desperate or... nope, there is no "or", it's just desperate.

And you failed to actually read anything.  Way to blindly post.



JerCotter7 said:

He seems full of crap. First, he has no idea what Sony did with the audio chip; it may be a piece of crap or it could be better than the Bone's chip. He also talks about what he thinks the extra processors might do, when they may do nothing worth mentioning beyond small tasks. The ESRAM being in banks of 8MB means nothing; the developers will only see the final 32MB, same as the DDR3 RAM. So no, it is not being used in a unique way. PC games won't make use of it for years unless the CPU becomes the bottleneck. PCs already have GDDR5.

His post just seems like a lot of what ifs with some bullshit thrown in.

Actually, 32MB divided up into four 8MB blocks means you can have two processes reading and two writing to the memory at once. If they're smart, no, they won't think of it as a single 32MB block. More importantly, I think the API will make using it appropriately easy.
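As a rough sketch of why banking matters, here's a toy model. The one-access-per-bank-per-cycle rule is my simplifying assumption for illustration, not something from a published spec:

```python
from collections import Counter

# Toy model of a banked scratchpad: accesses to different banks can
# proceed in the same cycle, while accesses to the same bank serialize.
# One access per bank per cycle is an illustrative assumption.

def cycles_needed(addresses, num_banks):
    """Cycles to service a batch of accesses, given bank conflicts."""
    hits_per_bank = Counter(addr % num_banks for addr in addresses)
    # The busiest bank determines how long the whole batch takes.
    return max(hits_per_bank.values())

# Two reads plus two writes that happen to map to four different banks:
batch = [0, 1, 2, 3]
print(cycles_needed(batch, num_banks=4))  # 1: all four proceed in parallel
print(cycles_needed(batch, num_banks=1))  # 4: a single block serializes them
```

Of course, real throughput depends on whether accesses actually spread across banks, which is exactly why the API hiding this well matters.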

A patent was awarded yesterday on how Microsoft is doing coherent memory. The patent dealt with the API, or at least the images I looked at did, and it's clear that they are working to ensure developers have to do as little work as possible to take advantage of hardware features.

Sony didn't do anything with an audio chip; they don't have one. They have a video encoder, and that is for the DVR function.

Taking small tasks away from the CPU and GPU frees up processing cycles for the CPU and GPU. That's the very reason you use separate processors.

And thank you for posting such intelligent remarks as "He seems full of crap." Your astute, educated observations notwithstanding, he actually makes some reasonable, rational points.




Adinnieken said:
SvennoJ said:
Serious question: what does video encoding/decoding have to do with rendering?
Isn't enlisting the GPU to shorten video encoding times taking away time from rendering the game?

Both the Xbox One and the PS4 have DVR functions.  Having separate, discrete processors for audio/video encoding/decoding frees up both the CPU and GPU.

That problem wouldn't exist if you're just playing the game, with no Snap or DVR in the background.

The article you posted, about GPU acceleration in general, is about using the GPU to speed up video encoding tasks. I guess instead of doing that, that's where the extra processors come in, so the GPU isn't burdened with video encoding? So the speculation is that there are other special processors to take other tasks from the GPU?

It's a confusing article, as GPU acceleration is about using the GPU to offload the CPU. It doesn't make the GPU magically faster; on the contrary, it takes resources from the GPU. I guess the idea is that certain CPU tasks are offloaded to the GPU's special secret-sauce processors, so the GPU isn't burdened.

And in that comment from JNZ, he mentions the extra six processors could be used for physics or ray-tracing. Really? A PhysX card in there, plus a second powerful GPU to do ray-tracing that can't even be done in real time on the best current GPUs? Wasn't the cloud going to do all that...
His dismissing Sony's audio processing is a bit weird too. I would think the company that supported 7.1 192/24 with every known sound format on their current console, and that has an extensive background in audio processing, should know how to do audio.
If anyone is jumping to conclusions, it's JNZ.



Adinnieken said:
JerCotter7 said:

He seems full of crap. First, he has no idea what Sony did with the audio chip; it may be a piece of crap or it could be better than the Bone's chip. He also talks about what he thinks the extra processors might do, when they may do nothing worth mentioning beyond small tasks. The ESRAM being in banks of 8MB means nothing; the developers will only see the final 32MB, same as the DDR3 RAM. So no, it is not being used in a unique way. PC games won't make use of it for years unless the CPU becomes the bottleneck. PCs already have GDDR5.

His post just seems like a lot of what ifs with some bullshit thrown in.

Actually, 32MB divided up into four 8MB blocks means you can have two processes reading and two writing to the memory at once. If they're smart, no, they won't think of it as a single 32MB block. More importantly, I think the API will make using it appropriately easy.

A patent was awarded yesterday on how Microsoft is doing coherent memory. The patent dealt with the API, or at least the images I looked at did, and it's clear that they are working to ensure developers have to do as little work as possible to take advantage of hardware features.

Sony didn't do anything with an audio chip; they don't have one. They have a video encoder, and that is for the DVR function.

Taking small tasks away from the CPU and GPU frees up processing cycles for the CPU and GPU. That's the very reason you use separate processors.

And thank you for posting such intelligent remarks as "He seems full of crap." Your astute, educated observations notwithstanding, he actually makes some reasonable, rational points.


Devs only see 32MB. It doesn't matter how it's split up; the API will only show it as the full 32MB.

Sony has a low-level and a high-level API, so they also want developers to have an easy way to get at the power of the system, but they can go deeper if they choose.

The PS4 does have an audio chip.

Yeah, I didn't say it wouldn't free up any cycles. I'm saying we don't know how much it will free up. It may not even be worth the extra programming time.

Yeah, he does. No problem. He doesn't make any rational points. I skipped his whole crap about the DX11.2 feature set, as that is his worst point of all. No need to take it personally, but his post is full of crap.

 

EDIT: I have no problem with the main article itself. Of course the Bone is more powerful than people think, but then so is the PS4. There is a lot we still don't know about either system. It's just replies like his that are annoying.



Adinnieken said:

 

NobleTeam360 said:

Wow, still a ton of rumors happening for the Xbox One in September? Either the ******* are desperate or... nope, there is no "or", it's just desperate.

And you failed to actually read anything.  Way to blindly post.

Actually, I did read the OP, and what I said is accurate: Xbox fanboys are just grasping at straws, trying to come up with any reason to say the Xbox One is more powerful than it actually is.



Adinnieken said:

 

NobleTeam360 said:

Wow, still a ton of rumors happening for the Xbox One in September? Either the ******* are desperate or... nope, there is no "or", it's just desperate.

And you failed to actually read anything.  Way to blindly post.


Talking about blindness: you mean the guy who downplays every advantage the PS4 could have (from any source, Cerny to Edge) and plays up every pro-Xbone rumor (from the dGPU to the cloudz)?



duduspace11 "Well, since we are estimating costs, Pokemon Red/Blue did cost Nintendo about $50m to make back in 1996"

http://gamrconnect.vgchartz.com/post.php?id=8808363

Mr Puggsly: "Hehe, I said good profit. You said big profit. Frankly, not losing money is what I meant by good. Don't get hung up on semantics"

http://gamrconnect.vgchartz.com/post.php?id=9008994

Azzanation: "PS5 wouldn't sold out at launch without scalpers."

Imaginedvl said:

Okay, if you really need to have people "accepting" the fact that "your" console is the most powerful, "your" console is the best, etc. to help you sleep at night, fine; just create a thread about it and I will be more than happy to sign it, actually, as I believe it's so. (Not sure why you keep hammering me with your "obvious", "no doubt" blah blah about the PS4 being more powerful, as I already believe it and I never said the opposite.)

Now can you keep this out of this thread about the Xbox One's capabilities (and all the others where you come in with the sole intent of pissing off the people who are talking about the OP, instead of bragging about your own console)? Or do you think that because you have the right to post stupid comments like "here we go again" or "this is really sad", you should do it in every Xbox thread about its capabilities/features, just to remind people that the PS4 is more powerful?

Isn't it odd how some people get so angry about other people's interest in knowledge?

I found it very interesting when Sony revealed 8GB of GDDR5 memory in their console. It was impressive, but I also realized it could create some other problems. I was concerned they had Cell'd themselves again, especially with the estimation that its CPU will run at about half the speed of the current Xbox 360's, the console that pioneered unified graphics memory.

We do know the size of the chips is significantly different. It does seem that the PS4 contains an audio chip or two, which is nice. But there are many more coprocessors on the Microsoft chip, and we know what some of them do. Some of them are very, very useful and free up the CPU to make better games, which is good. Plus, there is that extra 8GB of flash memory tucked in on the board.

I don't see anyone arguing that the PS4's 1.8 TFLOPS GPU is more powerful than the Xbox One's 1.3 TFLOPS GPU. But that's clearly not the end of the story, and it might not even be significant. Yet somehow, pointing out from the known specs that there's a whole bunch of extra power in the Xbox One's CPU and coprocessors is not supposed to be acceptable.

It's exciting to see both of the consoles come to market, and more information come out about them. I wish people would let others enjoy their conversations and posts.

I'm really looking forward to the head to head that is coming soon. 



 

Really not sure I see any point of consoles over PCs, since Kinect, Wii and other alternative ways to play have been abandoned. 

Top 50 'most fun' game list coming soon!

 

Tell me a funny joke!

Adinnieken said:

Actually, 32MB divided up into four 8MB blocks means you can have two processes reading and two writing to the memory at once. If they're smart, no, they won't think of it as a single 32MB block. More importantly, I think the API will make using it appropriately easy.

A patent was awarded yesterday on how Microsoft is doing coherent memory. The patent dealt with the API, or at least the images I looked at did, and it's clear that they are working to ensure developers have to do as little work as possible to take advantage of hardware features.

Sony didn't do anything with an audio chip; they don't have one. They have a video encoder, and that is for the DVR function.

Taking small tasks away from the CPU and GPU frees up processing cycles for the CPU and GPU. That's the very reason you use separate processors.

And thank you for posting such intelligent remarks as "He seems full of crap." Your astute, educated observations notwithstanding, he actually makes some reasonable, rational points.


I think the PS4 has at least one or two audio chips, going by reports. However, there are also reports of Sony telling developers to use the GPU for audio. Other reports say that is for non-gaming products, which would be fine. If it's actually for games, then whoops, Houston, we have a problem.

I could see them avoiding co-processors, as they wouldn't want to duplicate the problems they had with Cell and the PS3. Which would make it more of a PC, plus the video encoder.

I need to go look up that coherent memory patent. It sounds fascinating, and it would fulfill one of Microsoft's goals of making games even easier to develop. I know they say they have been working on this since 2010, but really, right after they put out the Xbox 360 they ramped up development on DirectX 10, which in many ways was, in my opinion, an early prototype test drive for the Xbox One.



 


Zappykins said:
Adinnieken said:
 

Actually, 32MB divided up into four 8MB blocks means you can have two processes reading and two writing to the memory at once. If they're smart, no, they won't think of it as a single 32MB block. More importantly, I think the API will make using it appropriately easy.

A patent was awarded yesterday on how Microsoft is doing coherent memory. The patent dealt with the API, or at least the images I looked at did, and it's clear that they are working to ensure developers have to do as little work as possible to take advantage of hardware features.

Sony didn't do anything with an audio chip; they don't have one. They have a video encoder, and that is for the DVR function.

Taking small tasks away from the CPU and GPU frees up processing cycles for the CPU and GPU. That's the very reason you use separate processors.

And thank you for posting such intelligent remarks as "He seems full of crap." Your astute, educated observations notwithstanding, he actually makes some reasonable, rational points.


I think the PS4 has at least one or two audio chips, going by reports. However, there are also reports of Sony telling developers to use the GPU for audio. Other reports say that is for non-gaming products, which would be fine. If it's actually for games, then whoops, Houston, we have a problem.

I could see them avoiding co-processors, as they wouldn't want to duplicate the problems they had with Cell and the PS3. Which would make it more of a PC, plus the video encoder.

I need to go look up that coherent memory patent. It sounds fascinating, and it would fulfill one of Microsoft's goals of making games even easier to develop. I know they say they have been working on this since 2010, but really, right after they put out the Xbox 360 they ramped up development on DirectX 10, which in many ways was, in my opinion, an early prototype test drive for the Xbox One.

That audio FUD again. Sony said that down the road the GPU can be used for other things, like ray-casting, during downtime between frames, to show how flexible the GPU is. They didn't tell developers that they have to do their sound processing on the GPU at all.

http://www.eurogamer.net/articles/digitalfoundry-face-to-face-with-mark-cerny
Now when I say that many people say, "but we want the best possible graphics". It turns out that they're not incompatible. If you look at how the GPU and its various sub-components are utilised throughout the frame, there are many portions throughout the frame - for example during the rendering of opaque shadowmaps - that the bulk of the GPU is unused. And so if you're doing compute for collision detection, physics or ray-casting for audio during those times you're not really affecting the graphics. You're utilising portions of the GPU that at that instant are otherwise under-utilised. And if you look through the frame you can see that depending on what phase it is, what portion is really available to use for compute.
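What Cerny describes there is basically a packing problem: compute jobs slot into whatever GPU capacity each graphics phase leaves idle. Here's a toy sketch of that idea; the phase names echo his shadowmap example, but the utilization numbers are made up for illustration:

```python
# Toy sketch of async compute: pack compute jobs into the GPU capacity
# left idle by each graphics phase within a frame. Utilization figures
# are invented, not actual PS4 numbers.

frame_phases = [
    ("shadowmap pass", 0.40),   # bulk of the GPU sits idle here
    ("opaque geometry", 0.90),
    ("lighting", 0.95),
    ("post-processing", 0.70),
]

def schedule_compute(phases, compute_work):
    """Greedily place compute work (arbitrary units, 1.0 = one fully
    busy phase) into each phase's spare capacity."""
    placed = []
    for name, gfx_util in phases:
        spare = 1.0 - gfx_util
        done = min(spare, compute_work)
        compute_work -= done
        placed.append((name, round(done, 2)))
    return placed, round(compute_work, 2)

placed, leftover = schedule_compute(frame_phases, compute_work=0.8)
print(placed)    # most of the compute lands in the shadowmap pass
print(leftover)  # work that would spill into the next frame, if any
```

The point matches the quote: as long as there's enough slack inside the frame, the compute work costs the graphics nothing.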


http://blog.games.com/2013/07/23/sony-explains-how-developers-will-unlock-full-potential-ps4/

Cerny describes how the PS4 will be ready for the future and claims that the full power of the next-gen console will increase as the years go by. He explains what will happen when the PS4 launches as well as what will occur several years down the road.

The interview did get technical, though, so we've devised a list on what was discussed:

  • At the beginning of the PS4 cycle, most games will not unlock the PS4′s full potential.
  • Earlier PS4 Games will use "straight forward" features of the console, like its visual power and bandwidth.
  • Later on, the GPU will be used for many other objectives besides processing graphics
  • The GPU can do things like "physics, simulation, collision detecting or ray casting for audio or the like."
  • The extra stuff will not make the visuals worse.
  • "Some of these phases [within a frame of animation on-screen] don't really use all of the various modules within the GPU."
  • When there is free space on the GPU modules, it will have room for other tasks.
  • It will "improve the quality of your world's simulation without decreasing the quality of your graphics."


And this is what Sony said in April

http://techreport.com/news/24725/ps4-architect-discusses-console-custom-amd-processor
Cerny says the PS4's custom silicon incorporates not only the CPU and GPU, but also a "large number of other units." The chip has a dedicated audio unit to perform processing for voice chat and multiple audio streams. It also has a hardware block designed explicitly for zlib decompression. The main processor is backed by a secondary chip that enables an ultra-low-power mode for background downloading. In that mode, the CPU and GPU shut down, leaving only the auxiliary chip, system memory, networking, and storage active.

A 256-bit interface links the console's processor to its shared memory pool. According to Cerny, Sony considered a 128-bit implementation paired with on-chip eDRAM but deemed that solution too complex for developers to exploit. Sony has also taken steps to make it easier for developers to use the graphics component for general-purpose computing tasks.

secret sauce!
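For context on that zlib block: zlib/DEFLATE is the same compression scheme Python's standard library exposes, so you can see in software exactly what the dedicated hardware would be sparing the CPU from:

```python
import zlib

# The TechReport piece above mentions a hardware block for zlib
# decompression. This is the same DEFLATE algorithm in Python's stdlib;
# dedicated hardware just does it without burning CPU cycles.
original = b"game asset data " * 1000
packed = zlib.compress(original, level=9)
unpacked = zlib.decompress(packed)

assert unpacked == original  # lossless round trip
print(len(original), "->", len(packed), "bytes")  # repetitive data shrinks a lot
```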



SvennoJ said:

That audio FUD again. Sony said that down the road the GPU can be used for other things, like ray-casting, during downtime between frames, to show how flexible the GPU is. They didn't tell developers that they have to do their sound processing on the GPU at all.

http://www.eurogamer.net/articles/digitalfoundry-face-to-face-with-mark-cerny
Now when I say that many people say, "but we want the best possible graphics". It turns out that they're not incompatible. If you look at how the GPU and its various sub-components are utilised throughout the frame, there are many portions throughout the frame - for example during the rendering of opaque shadowmaps - that the bulk of the GPU is unused. And so if you're doing compute for collision detection, physics or ray-casting for audio during those times you're not really affecting the graphics. You're utilising portions of the GPU that at that instant are otherwise under-utilised. And if you look through the frame you can see that depending on what phase it is, what portion is really available to use for compute.


http://blog.games.com/2013/07/23/sony-explains-how-developers-will-unlock-full-potential-ps4/

Cerny describes how the PS4 will be ready for the future and claims that the full power of the next-gen console will increase as the years go by. He explains what will happen when the PS4 launches as well as what will occur several years down the road.

The interview did get technical, though, so we've devised a list on what was discussed:

  • At the beginning of the PS4 cycle, most games will not unlock the PS4′s full potential.
  • Earlier PS4 Games will use "straight forward" features of the console, like its visual power and bandwidth.
  • Later on, the GPU will be used for many other objectives besides processing graphics
  • The GPU can do things like "physics, simulation, collision detecting or ray casting for audio or the like."
  • The extra stuff will not make the visuals worse.
  • "Some of these phases [within a frame of animation on-screen] don't really use all of the various modules within the GPU."
  • When there is free space on the GPU modules, it will have room for other tasks.
  • It will "improve the quality of your world's simulation without decreasing the quality of your graphics."


And this is what Sony said in April

http://techreport.com/news/24725/ps4-architect-discusses-console-custom-amd-processor
Cerny says the PS4's custom silicon incorporates not only the CPU and GPU, but also a "large number of other units." The chip has a dedicated audio unit to perform processing for voice chat and multiple audio streams. It also has a hardware block designed explicitly for zlib decompression. The main processor is backed by a secondary chip that enables an ultra-low-power mode for background downloading. In that mode, the CPU and GPU shut down, leaving only the auxiliary chip, system memory, networking, and storage active.

A 256-bit interface links the console's processor to its shared memory pool. According to Cerny, Sony considered a 128-bit implementation paired with on-chip eDRAM but deemed that solution too complex for developers to exploit. Sony has also taken steps to make it easier for developers to use the graphics component for general-purpose computing tasks.

secret sauce!

OK, so the screen going blank, or switching to 'shadow mode' so every game looks like Limbo, just so they can have cool game sound, is a desired 'feature.' Got it.

So "Sony considered a 128-bit implementation paired with on-chip eDRAM but deemed that solution too complex for developers to exploit."  Yea, I kinda feel bad for him.  He has done his best, and so much responsibility on his shoulders.  To bad they couldn't have incorporated it into the OS to work behind the scenes so it would enhance performance, without making it more complected to program.   I am thankful there is a some company in Washington that is doing just that.

And this "At the beginning of the PS4 cycle, most games will not unlock the PS4′s full potential."  So he is apologizing for the quality of games before they are even out.  Sure everyone knows games get better, but this, is just sounding so, I wish him comfort in his solace.

But hey, perhaps it's time for Crash Bash 2. 



 
