Machiavellian said:
Shinobi-san said:
Machiavellian said:
Yes, the difference in the GPUs is big, but that's the point I raise.  MS has about 6 co-processors they are not talking about.  Without knowing what these processors do, who knows how much offloading of graphics can be done on the X1 to relieve the GPU and even the CPU from their tasks.

The problem with only comparing the GPU and not the entire system is that people who are not coding for both platforms do not know whether other parts of the hardware play a role or not.  Co-processors are a way for custom designs to offload processing from the CPU and GPU.  This is why it's not evident yet exactly how this might play out.  MS might have felt it was more efficient to offload specific intensive processing to specialized hardware.

As for memory, it wasn't the expense of the memory but the timing.  MS designed their system for 8GB well before 8GB of GDDR5 could fit within the console.  Do not forget that Sony was rumored to be shipping 4GB before Samsung was able to produce 512MB chips, which allowed Sony to increase the memory to 8GB.  MS had already designed their system for 8GB and needed the ESRAM to fill in the bandwidth gap.  Since this was already done, there was no changing the system once 8GB of GDDR5 became possible.  The latency part will come into play for anything that cannot be offloaded to the CUs.  Since the CPUs are weak anyway, latency can become a problem if there are too many CPU tasks that need low latency more than they need bandwidth.

The NDA stuff concerns the rumor going around about the dGPU.  I was only mentioning it because people kept saying that MS would be telling the world they had this chip, but the rumor already covered why this was not happening.

As for what Albert is talking about, he is stating that raw physical numbers (mainly the GPU) do not tell the whole picture.  There are other parts within the X1 that make up the difference.  The only way we will see that is in the games.

@bolded: Fair point. I didn't really think of it that way.

Although I can't see co-processors making that much of a difference (just an opinion/hunch). I mean, calling it a co-processor isn't actually telling us much. But we all know that GPUs are the best components for graphics, effects, etc. So in that sense, the Xbone is limited to its GPU's theoretical power for certain GPU-specific tasks. So is the PS4, and I'll say it again: the PS4 also has co-processors. We just don't know how many it has. Cerny spoke a lot about offloading tasks as well... freeing up CPU and GPU resources.

I don't see either system offloading GPU-specific tasks to the co-processors, but rather CPU tasks to assist the weak CPU.

My point about the memory, though, was that if MS could have gone with GDDR5 and cost (or anything else) was not an issue, they would have gone with it.

I think there's a bit of PR spin to what Albert is saying, to be honest... I don't see how they can close a 0.5 TFLOP gap in performance with a few co-processors. At the end of the day these aren't fully fledged GPU cores, or CPU cores, as far as I understand. If they are then it's another story, but as far as I know they're not. And apart from PR spin, I think MS is just explaining how they are making their console more efficient, explaining the design decisions, etc. It's not like MS went and added co-processors just to compete with Sony. If MS wanted a beefier GPU they would have added one. Raw computational performance just didn't seem to be the game plan, really.

Remember the Cell SPUs are considered co-processors, and they were used to offload tasks away from the PS3 GPU.  You already see the results of how well that worked out for Sony (at least for their 1st party). Depending on what tasks the co-processors are used for, they can make a solid difference in performance.  Without any info on what the processors do, I will not speculate as to the difference they can make, only that the possibility exists.

Let's break down the difference between the X1 GPU and the PS4's.  I will limit this to just the CUs, or shader cores, as that's where the big difference in TFLOPS comes into play.  The X1 has 12 CUs and the PS4 has 18.  CUs are generally used to process shaders.  Since that's a parallel process, the PS4 can execute more shaders at one time than the X1.  Now, from the Hot Chips convention some interesting things came out, and I was processing the information and thinking about the design.  MS has made it so that all parts of the system know what is happening to a segment of code in memory.  If that is so, then MS can leverage specialized co-processors to handle specific gaming code where it would be more process-intensive for the GPU or CPU to handle it.
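To put rough numbers on that 12 vs 18 CU difference, here's a back-of-the-envelope sketch. It assumes the widely reported launch clock speeds (853MHz for the X1, 800MHz for the PS4, neither stated in this thread) and the GCN rule of thumb of 64 ALUs per CU, each doing 2 FLOPs per cycle via fused multiply-add:

```python
def gcn_tflops(compute_units, clock_ghz, alus_per_cu=64, flops_per_alu=2):
    """Theoretical single-precision TFLOPS for a GCN-style GPU."""
    return compute_units * alus_per_cu * flops_per_alu * clock_ghz / 1000.0

# Assumed launch clocks: X1 at 853MHz, PS4 at 800MHz.
xbox_one = gcn_tflops(12, 0.853)  # ~1.31 TFLOPS
ps4      = gcn_tflops(18, 0.800)  # ~1.84 TFLOPS

print(f"X1: {xbox_one:.2f} TF, PS4: {ps4:.2f} TF, gap: {ps4 - xbox_one:.2f} TF")
```

That works out to roughly 1.31 vs 1.84 TFLOPS, i.e. the ~0.5 TFLOP gap mentioned earlier in the thread.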


Do not forget that GPUs are designed as an add-on to a system, not as the main CPU.  What I am getting at is that for efficiency within a closed box, some of the things the GPU does might make more sense to handle with specialized hardware than within the GPU.

As for Albert's statements, he is a PR guy, so you will always take his comments with a grain of salt.  Interestingly enough, he's commenting on NeoGAF, where he knows his comments are going to get a lot of pushback.  Most PR people know where to pick their battles, so him making his statements on GAF says that either he loves contention or MS has a few tricks up their sleeve.

As for TFLOPS, did you know that the PS3 is stated as having 2.1 TFLOPS compared to the 1.8 TFLOPS of the PS4?  As a measuring tool for the performance of these two devices, the TFLOPS number really might not be the difference maker.
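A quick sketch of why those two headline numbers aren't comparable (the figures below are commonly cited launch-era specs, not something stated in this thread): the PS4's 1.84 TFLOPS counts only programmable shader math, while the PS3's ~2 TFLOPS was a marketing total that also counted fixed-function RSX work. The Cell's actual programmable peak was far smaller:

```python
# PS4: programmable shader FLOPS only (18 CUs x 64 ALUs x 2 FLOPs x 0.8 GHz).
ps4_tflops = 18 * 64 * 2 * 0.800 / 1000   # ~1.84 TFLOPS

# Cell: each SPE does a 4-wide single-precision FMA per cycle = 8 FLOPs/cycle.
# At 3.2 GHz with 6 SPEs available to games, that's only ~0.15 TFLOPS.
cell_tflops = 6 * 8 * 3.2e9 / 1e12        # ~0.154 TFLOPS

print(f"PS4 shader peak: {ps4_tflops:.2f} TF, Cell SPE peak: {cell_tflops:.3f} TF")
```

So the two headline figures were computed by different accounting methods, which is exactly why TFLOPS makes a poor measuring stick across architectures.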


The TFLOPS number becomes more reliable and worthwhile for comparison when the systems have near-identical hardware...

We're not comparing an Nvidia GPU core to an AMD core... they both have GCN cores. They both have the exact same CPU. Those are the two core components of the system.

And yes, we don't know that much about the co-processors, but again, they can't replace conventional CPUs or GPUs... using the Cell as an example is not exactly a positive thing. The SPUs were only good and well utilised under very strict circumstances. When devs started coding on the PS3 they flat out ignored the SPUs. These co-processors seem to have a set function... so they don't give the dev much flexibility either.

But again, I feel like you're trying to argue that these co-processors will have a great impact on increasing overall system performance, whereas I see it more as a way to maximise the efficiency of the system. And when comparing the two systems... the PS4 also has co-processors, or is that a non-issue? I've said this three times now but you never actually address it. Do you not agree, or?


