taus90 said:

No. What does memory bandwidth meant for assets have to do with CPU profiling and scheduling? Dude, both of them are on the same die, and in this conversation it doesn't matter what their latency is in terms of bandwidth from the CPU to the RAM. Maybe you confused it with the L1 and L2 cache. eSRAM is there to offset the bandwidth constraint of DDR3, not to help offload CPU commands over to the GPU.

I don't entirely understand what you are trying to convey here, so I'll do my best.

eSRAM does more than just offset bandwidth constraints. It is also on the same die as the rest of the SoC, which means the trip to fetch data from eSRAM is a shorter one than the trip to system memory.

The eSRAM, system memory, etc. do not "offload CPU commands"; they do zero processing, and thus they do not hardware-accelerate anything.

If a CPU does not have the information it needs in cache, then it needs to fetch it from the next pool of memory down the chain, which is often system memory. A CPU can waste a ton of processing cycles just waiting for that request to reach system memory, let alone have the request fulfilled and the data arrive back at the CPU.
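
To make that concrete, here's a minimal C++ sketch (sizes and names are illustrative) that walks the same 64 MiB of data twice: once in an order the cache and prefetcher like, and once in a random order that forces nearly every dependent load out to system memory. The second walk is typically several times slower, purely from the CPU sitting around waiting on memory.

```cpp
// Minimal sketch: illustrates how cache misses stall the CPU.
// Both loops sum the same 64 MiB of data; the random-order walk
// defeats the prefetcher and hits main memory far more often.
#include <algorithm>
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <numeric>
#include <random>
#include <vector>

int main() {
    const size_t N = 16 * 1024 * 1024;           // 16M entries = 64 MiB
    std::vector<uint32_t> next(N);

    // Sequential chain: each element points at the following one.
    std::iota(next.begin(), next.end(), 1);
    next.back() = 0;

    auto walk = [&](const char* label) {
        uint32_t idx = 0;
        uint64_t sum = 0;
        auto t0 = std::chrono::steady_clock::now();
        for (size_t i = 0; i < N; ++i) {
            sum += idx;
            idx = next[idx];                      // dependent load: CPU must wait
        }
        auto t1 = std::chrono::steady_clock::now();
        auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0);
        std::printf("%s: %lld ms (sum=%llu)\n", label,
                    (long long)ms.count(), (unsigned long long)sum);
    };

    walk("sequential (cache friendly)");

    // Shuffle the chain so each load lands somewhere random in the 64 MiB,
    // turning almost every access into a trip to system memory.
    std::vector<uint32_t> order(N);
    std::iota(order.begin(), order.end(), 0);
    std::shuffle(order.begin(), order.end(), std::mt19937{42});
    for (size_t i = 0; i + 1 < N; ++i) next[order[i]] = order[i + 1];
    next[order[N - 1]] = order[0];

    walk("random (cache hostile)");
}
```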

taus90 said:

In simple terms: the Xbox One doesn't support the GPGPU framework of GCN 1.1 on Bonaire (the Xbox GPU). Bonaire has two ACEs (asynchronous compute engines); the X1 does use those two ACEs, but with a custom compiler, which is not as efficient as GPGPU based on OpenCL. In MS's defence, that was not the focus of Durango; instead, MS's focus was to offload CPU-related tasks to the cloud. I hope you remember "cloud power". (DirectX 12 does take advantage of ACEs to perform GPGPU tasks, but the Xbox One Bonaire SoC doesn't have the OpenCL framework to support it.)

The Xbox One supports GPU compute; it's a feature of the GPU architecture, and the ACE units are ultimately irrelevant in that respect.
The ACE units aren't the shader pipelines where the FP16/FP32/FP64/INT8 operations and more are performed.

The main purpose of the ACE units is to accept work and dispatch it to the CUs for processing. You know, the blocks with all the shader pipelines? Yeah, those things.
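
As a rough analogy only (this is not console code, and every name below is made up for illustration): an ACE behaves like a work queue that accepts command packets and hands them to whichever Compute Unit is free. The CUs, not the ACE, do the actual math.

```cpp
// Toy analogy of ACE -> CU dispatch. All names are illustrative.
#include <condition_variable>
#include <cstdio>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class AsyncComputeEngine {                 // the "ACE": accepts and dispatches work
    std::queue<std::function<void()>> q_;
    std::mutex m_;
    std::condition_variable cv_;
    bool done_ = false;
    std::vector<std::thread> cus_;         // the "CUs": where work actually runs
public:
    explicit AsyncComputeEngine(int cu_count) {
        for (int i = 0; i < cu_count; ++i)
            cus_.emplace_back([this] {
                for (;;) {
                    std::function<void()> job;
                    {
                        std::unique_lock<std::mutex> lk(m_);
                        cv_.wait(lk, [&] { return done_ || !q_.empty(); });
                        if (q_.empty()) return;
                        job = std::move(q_.front());
                        q_.pop();
                    }
                    job();                  // the CU, not the ACE, does the math
                }
            });
    }
    void submit(std::function<void()> job) {
        { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(job)); }
        cv_.notify_one();
    }
    ~AsyncComputeEngine() {                 // drain remaining work, then stop
        { std::lock_guard<std::mutex> lk(m_); done_ = true; }
        cv_.notify_all();
        for (auto& t : cus_) t.join();
    }
};

int main() {
    AsyncComputeEngine ace(4);              // 4 pretend "CUs"
    for (int i = 0; i < 8; ++i)
        ace.submit([i] { std::printf("compute job %d ran on a CU\n", i); });
}
```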


taus90 said:

On the other hand, Mark Cerny was all in for asynchronous compute; that's why Sony and AMD heavily modified Bonaire with 8 ACEs, 64 command queues and 18 CUs. To help with asynchronous compute, Sony adopted AMD's GPGPU technique, because Cerny believed it has a learning curve and will be beneficial in improving games in the years to come, which also led them to downclock the CPU. So basically physics, world simulation, collision detection, decompression etc. can be offloaded to the GPU while rendering graphics at the same time. There is also a dedicated additional 20GB/s bus bypassing the L1 and L2 caches to help increase synchronization efficiency.

The Playstation 4 GPU isn't based on Bonaire.

The Playstation 4 GPU has 20 CUs in total, which means a 1280:80:32 arrangement (with 2 CUs and the accompanying texture mapping units disabled); Bonaire topped out at an 896:56:16 arrangement.
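
For anyone wanting to check those ratios: in GCN each Compute Unit carries 64 stream processors and 4 texture mapping units, while the ROPs sit outside the CUs and scale separately. A quick back-of-envelope sketch:

```cpp
// Back-of-envelope check of those shader:TMU:ROP arrangements.
// In GCN, every Compute Unit = 64 stream processors + 4 texture mapping
// units; ROPs sit outside the CUs, so they're listed separately.
#include <cstdio>

int main() {
    auto arrangement = [](const char* chip, int cus, int rops) {
        std::printf("%-22s %2d CUs -> %4d:%2d:%d\n",
                    chip, cus, cus * 64, cus * 4, rops);
    };
    arrangement("PS4 (full die)",       20, 32);  // 1280:80:32
    arrangement("PS4 (2 CUs disabled)", 18, 32);  // 1152:72:32 usable
    arrangement("Bonaire",              14, 16);  //  896:56:16
}
```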

The Playstation 4 GPU most closely resembles the Radeon 7870, aka Pitcairn XT, with a few functional units disabled, and it carries the same GCN 1.0 feature set.
Granted, Sony did do some extra customization, such as bolstering the ACE count, but that is one of the great things about Graphics Core Next.

You can take a block and update only the shader pipelines while the rest of the chip remains the same, or you can rework the geometry engines and leave everything else alone. It's highly customizable, yet the base hardware stays Graphics Core Next 1.0.

****

None of what you have stated means the Xbox One can't do something. On a technical level, the Xbox One can do everything the Playstation 4 can, just at lower quality and/or speed.


taus90 said:

Just to sum up: I'm part of a small studio and we are looking into developing a small 3D platformer using Unity for X1, PS4 and PC. I would say that the Xbox GPU can take on a few CPU tasks with a custom profiler, like audio decompression and video decompression, just minor processes, but at the cost of GPU performance, and that's a lot of coding for a small team. On the PS4 side there are two different 3D pipelines, as opposed to one on X1: one dedicated only to graphics, and the other for graphics and CPU-related tasks without compromising graphics.


To be honest, I couldn't care less if you were the Queen of England. I don't debate where you come from or what you have done; I care about the argument you present and will argue based on that, rather than delve into ad hominem.

As for audio and video decompression: even the Xbox 360's video block could do that. It's been a feature of GPUs for a stupidly long time; GPUs started delving into that kind of thing back in the Nintendo 64/Playstation 1 era.
It also doesn't really cost CPU/GPU cycles, as it's done purely on fixed-function hardware. I suggest you look up NVIDIA PureVideo, or AMD's UVD, aka Unified Video Decoder, TrueAudio, heck even VCE.
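
If you want to see those fixed-function paths on your own machine, here's a small sketch assuming FFmpeg's libavutil is installed (link with -lavutil). It simply enumerates the hardware decode/encode device types the build knows about (cuda for NVIDIA's decoder block, vaapi, d3d11va and so on):

```cpp
// List the fixed-function video acceleration paths FFmpeg can see.
// Assumes libavutil is installed; build with: g++ hw.cpp -lavutil
extern "C" {
#include <libavutil/hwcontext.h>
}
#include <cstdio>

int main() {
    AVHWDeviceType type = AV_HWDEVICE_TYPE_NONE;
    // av_hwdevice_iterate_types() steps through every hwaccel type
    // compiled into this FFmpeg build, ending back at ..._NONE.
    while ((type = av_hwdevice_iterate_types(type)) != AV_HWDEVICE_TYPE_NONE)
        std::printf("hardware decode/encode path: %s\n",
                    av_hwdevice_get_type_name(type));
}
```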

taus90 said:

So in order to keep graphical parity, we have to develop keeping the Xbox One in mind. As we are profiling both SoCs: the Xbox is utilizing 100% of its GPU and CPU resources, so we have no option but to use it the same way on PS4, running the CPU at 100% and the GPU at 60%. When we offload some CPU tasks to the GPU on PS4, the CPU load comes down to 75% and the GPU goes to 80%, and we really don't know what to do with that much power left over.

So when developing a game, the Xbox One is the lowest common denominator in terms of how many instructions its CPU can handle. Speed is irrelevant.

The Xbox One's CPU is superior to the PS4's; both are eight-core Jaguar parts, but the Xbox One's is clocked higher (1.75GHz vs 1.6GHz). The PS4 is the lowest common denominator for CPU performance.

You see, GPUs are highly parallel in their processing approach.
CPUs are highly serial in their processing approach.

The GPU is good at processing a ton of smaller tasks all at once; the CPU is good at processing larger, more complex tasks in succession.

You simply cannot unload everything the CPU does onto a GPU, or vice versa; there are reasons why both exist, even today.
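
A tiny sketch of why: independent work (GPU-style) splits cleanly across lanes, while a dependent chain (CPU-style) cannot, no matter how many lanes you throw at it. The workloads below are illustrative stand-ins:

```cpp
// Sketch of why work doesn't freely move between CPU and GPU:
// independent work splits across lanes; a dependent chain cannot.
#include <cstdint>
#include <cstdio>
#include <future>
#include <vector>

int main() {
    const int N = 1 << 20;

    // GPU-style workload: a million independent little tasks.
    // Any number of lanes can chew on disjoint slices at once;
    // here two "lanes" each square half of the array.
    std::vector<uint32_t> pixels(N, 3);
    auto half = std::async(std::launch::async, [&] {
        for (int i = 0; i < N / 2; ++i) pixels[i] *= pixels[i];
    });
    for (int i = N / 2; i < N; ++i) pixels[i] *= pixels[i];
    half.wait();

    // CPU-style workload: every step needs the previous result, so a
    // thousand extra lanes would sit idle while one walks the chain.
    uint64_t state = 1;
    for (int i = 0; i < N; ++i)
        state = state * 6364136223846793005ULL + 1442695040888963407ULL;

    std::printf("pixels[0]=%u  state=%llu\n",
                pixels[0], (unsigned long long)state);
}
```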



--::{PC Gaming Master Race}::--