
VGLeaks: Orbis unveiled!

ethomaz said:
Aielyn said:
 For the GPU, estimates for the Wii U put it somewhere in the 0.5-1.5 TFLOPS range. So let's say that the PS4, according to this leak, has somewhere between 1.3x and 3x the power of the Wii U in terms of GPU.

The Wii U has a 400 GFLOPS GPU.

Source?

The only time I saw that number mentioned was when someone was trying to estimate the "max" GFLOPS based on the assumption that the GPU was a modified Radeon something-or-other. Since it's "modified", it's stupid to assume anything. Another site says it's a modified Radeon E6760 (the E6760 is rated at 576 GFLOPS), for instance - and that particular site is referencing AMD emails regarding the Wii U chip.




Aielyn said:

Source?

The only time I saw that number mentioned was when someone was trying to estimate the "max" GFLOPS based on the assumption that the GPU was a modified Radeon something-or-other. Since it's "modified", it's stupid to assume anything. Another site says it's a modified Radeon E6760 (the E6760 is rated at 576 GFLOPS), for instance - and that particular site is referencing AMD emails regarding the Wii U chip.

The email was confirmed fake... the E6760 runs at 600MHz, the Wii U GPU at 550MHz... so even if it were an E6760, there is no way it would have 576 GFLOPS like the E6760 does.

The size of the GPU says everything... at 40nm there is no way to fit a GPU over 500 GFLOPS in that space... so the Wii U has ~400 GFLOPS of raw power.

That's the consensus in the Beyond3D thread: http://forum.beyond3d.com/showthread.php?t=60501
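
For reference, these peak figures come from a simple formula: stream processors × 2 FLOPs per clock (multiply-add) × clock speed. A minimal sketch, assuming the E6760's 480 stream processors:

```python
# Peak single-precision GFLOPS for AMD GPUs of this era:
# stream processors * 2 FLOPs per clock (multiply-add) * clock in GHz.
def peak_gflops(shaders, clock_ghz):
    return shaders * 2 * clock_ghz

print(peak_gflops(480, 0.600))  # E6760 at its stock 600 MHz -> 576.0
print(peak_gflops(480, 0.550))  # the same part at the Wii U's 550 MHz -> 528.0
```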



From Beyond3D.

Shifty Geezer:

Those CUs aren't going to be sitting idle, so whatever work they do is work saved from the CPU and GPU. If Durango is processing physics on the CPU, Orbis will have better physics; and if Durango is processing physics on GPGPU for 3 ms, Orbis will save that time: it can match the same 9 ms of graphics work while the physics runs on the extra CUs, leaving a 3 ms bonus for the graphics.

In simple terms, Sony has selected more compute resources and they will come into effect, so the difference will be there in some manner. The specifics of what the 4 CUs do will help us understand the system. I'm certainly curious how it relates to PC, as it seems a concept that could only be pulled off in a console and may be quite valuable. GPGPU augmentations, especially when audio and video are being taken care of by custom silicon, could mean quite a lot of versatile compute resources for non-direct-graphics work. Maybe light propagation systems, AI sorting, or whatever.
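
A minimal sketch of the frame-budget arithmetic implied here; the 12 ms GPU budget is my assumption, chosen only to make the 3 ms / 9 ms split concrete:

```python
# Hypothetical 12 ms GPU budget per frame (my assumption, not from the post).
FRAME_BUDGET_MS = 12.0
PHYSICS_MS = 3.0

# Durango: physics shares the one GPU, so graphics gets what's left over.
durango_graphics_ms = FRAME_BUDGET_MS - PHYSICS_MS   # 9.0 ms

# Orbis, per this reading: physics runs on the 4 extra CUs,
# so the 14 rendering CUs keep the full budget for graphics.
orbis_graphics_ms = FRAME_BUDGET_MS                  # 12.0 ms

print(orbis_graphics_ms - durango_graphics_ms)       # 3.0 ms bonus for graphics
```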


Ruskie:

Just to add to Shifty's post: even if you have "only" 14 CUs available for rendering, those 4 CUs doing their job will still have to "fit" somewhere in Durango.

I think this situation is even worse for MS than the one before, since Sony could let devs push the extra compute in any direction they want and still have more shading performance than MS. Not to mention ROPs, texture units, etc. - I doubt Durango matches it there anyway.



The 14 + 4 CUs approach sounds better and better... more from Beyond3D.

According to these rumors, Orbis is basically an APU + GPU design, but integrated into a single SoC instead of an MCM or SiP, which allows for minimum latency and maximum bandwidth at the same time.

On the APU side we have the eight Jaguars combined with 256 GCN shaders, together delivering an incredible 512 GFLOPS! Just for the record: an Intel Core i7-4770K desktop CPU (on its own) delivers 448 GFLOPS. Imagine what this could mean for AI, animations, physics, etc. On the GPU side we have a processor that is basically a small HD 7850 Pitcairn. In my eyes this GPU is strong enough to deliver enjoyable graphics for a next-gen gaming console, especially keeping in mind the perverse TDP of modern high-end cards. We're most likely talking about 3rd-gen HSA for Orbis, which means a unified address space for CPU and GPU(s), pageable system memory for the GPU(s) with CPU pointers, and fully coherent memory between CPU and GPU(s). Simply put, no copy work between the CPU and the two iGPUs. This will be a hell of a speedup compared to a modularly designed PC (take a look at the superb 28nm Temash SoC rendering Dirt: Showdown at 1920x1080 with 5W!!!)
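
The 512 GFLOPS figure checks out under the rumored clocks; a minimal sketch, assuming 1.6 GHz Jaguar cores at 8 FLOPs/cycle and an 800 MHz GCN clock (the clocks are my assumption, not stated in the post):

```python
# Peak single-precision math behind the quoted figures.
# The clock speeds are assumptions based on the rumored specs.

jaguar_gflops = 8 * 1.6 * 8    # 8 cores x 1.6 GHz x 8 FLOPs/cycle -> 102.4
gcn_gflops = 256 * 2 * 0.8     # 256 shaders x 2 FLOPs/clock x 0.8 GHz -> 409.6
print(jaguar_gflops + gcn_gflops)  # 512.0 GFLOPS for the APU side

i7_4770k = 4 * 3.5 * 32        # 4 cores x 3.5 GHz x 32 FLOPs/cycle (AVX2 FMA)
print(i7_4770k)                # 448.0 GFLOPS, the desktop comparison point
```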

The reasons for going with APU + GPU are better programmability and lower latencies. You can look it up in this slide from the 2012 Fusion Developer Summit. You don't want your GPU to be saturated by both GPGPU algorithms and graphics tasks; a single (AMD) GPU can't handle both at the same time, which would mean a lot of headache for the programmer. AMD names two solutions for this problem: you can wait for the 2014 20nm feature Graphics Pre-Emption, or you use an APU dedicated to compute together with a second GPU dedicated to graphics rendering. Sony is obviously doing the latter.

It seems as if the Orbis rumors are getting more and more specific. The difference between this leak (8x Jaguar / 256 GCN SPs APU + 896 GCN SPs GPU) and the last leak (8x Jaguar CPU + 1152 GCN SPs GPU) is a much better balance between compute power and graphics power. Eight Jaguars for a Pitcairn seemed a bit underpowered anyway. But one thing is missing in this leak: there is no dedicated DRM hardware at all, neither ARM nor SPE. It would really surprise me if they launched without proper in-hardware DRM.

http://beyond3d.com/showpost.php?p=1699631&postcount=187



haxxiy said:
HappySqurriel said:
Just looking at the specs of the APU demonstrates that this rumor is 100% BS ...

The A10-5800K (AMD's current top of the line APU) uses 100 Watts of power and is (roughly) 1/3 the processing power of this APU, so either Sony's processor will start fires or the rumor is crap.


You clearly didn't do your research very well, did you? The A10-5800K is a 3.8GHz heavyweight that is basically a standard mid-high range PC CPU with a 7660 embedded, and even then the chip as a whole has a 100W TDP (which is not the same as power consumption, by the way, but the amount of heat to be dissipated).

A mobile AMD GPU packing over 2 TFLOPS has a TDP of only 75W, and Jaguar CPU cores consume no more than 5W each. I doubt the system will come close to using even 200W as a whole.

And just so you can compare: the GPUs inside the PS3 and X360 used over 110W at launch... I can't actually find a decent source, but by 2005-2006 GPUs didn't go beyond 2 GFLOPS per watt, and the X360 and the PS3 packed 240 and 228 GFLOPS in their GPUs, respectively.

Oh, and that's more than four times better than the GPU in the Wii U. Roughly a Dreamcast versus Xbox difference. Deal with it.

Suppose they do go with their top of the line mobile GPU, which is a large and expensive chip, and then add an 8-core Jaguar CPU to it ... Then Sony has a nice 115 Watt APU that will be perfect for them to release in a $600 system.

Seriously, expect about 1/2 of what this rumor is suggesting and you won't be disappointed when Sony actually announces their system.
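
For what it's worth, the power figures both posters are juggling can be reproduced with back-of-envelope math; a sketch using their own estimates (none of these are confirmed specs):

```python
# Back-of-envelope power math from the exchange above
# (all inputs are the posters' own estimates, not confirmed specs).

gpu_tdp_w = 75               # mobile-class ~2 TFLOPS AMD GPU, per haxxiy
jaguar_core_w = 5            # upper bound per Jaguar core, per haxxiy
print(gpu_tdp_w + 8 * jaguar_core_w)   # 115 W, the APU figure in the reply

# The 2005-2006 sanity check: roughly 2 GFLOPS per watt at launch.
print(240 / 110, 228 / 110)  # X360 and PS3 GPUs -> ~2.18 and ~2.07 GFLOPS/W
```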



ethomaz said:
Aielyn said:
 For the GPU, estimates for the Wii U put it somewhere in the 0.5-1.5 TFLOPS range. So let's say that the PS4, according to this leak, has somewhere between 1.3x and 3x the power of the Wii U in terms of GPU.

The Wii U has a 400 GFLOPS GPU.


The consistent details about the Wii U GPU have it with 480 stream processors, using AMD's Turks core, manufactured on a 40nm process, and drawing about 20 to 25 Watts ... The retail GPUs AMD makes that are similar to this are the Radeon HD 7670M, Radeon HD 7690M, and Radeon HD 7690M XT, which have performance of 575 to 700 GFLOPS.



HappySqurriel said:

The consistent details about the Wii U GPU have it with 480 stream processors, using AMD's Turks core, manufactured on a 40nm process, and drawing about 20 to 25 Watts ... The retail GPUs AMD makes that are similar to this are the Radeon HD 7670M, Radeon HD 7690M, and Radeon HD 7690M XT, which have performance of 575 to 700 GFLOPS.

If there is a 7670M GPU inside the Wii U, the raw power will not be 576 GFLOPS, because the Wii U runs at 550MHz and not 600MHz... that alone puts the raw power near 528 GFLOPS for this same GPU.

But there are other things to look at...

The Wii U GPU die is 156mm^2 including eDRAM... if you remove the eDRAM you have a die size of ~104mm^2... the 7670M "Thames" die size is 118mm^2. The best GPU that fits within ~104mm^2 is the HD 5670 "Redwood", which has 400 shader units... 400 SPs running at 550MHz give us 440 GFLOPS of raw power.

I don't know how many units the Wii U GPU has, but you can't fit more units using the same 40nm process... so the Wii U GPU will have between 400 and 500 GFLOPS... not more than that.
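
The same peak-FLOPS formula as before, applied to ethomaz's two candidate configurations (a sketch; 550 MHz is the reported Wii U GPU clock):

```python
# Same peak-FLOPS formula as above: shaders * 2 FLOPs/clock * clock in GHz.
def peak_gflops(shaders, clock_ghz):
    return shaders * 2 * clock_ghz

print(peak_gflops(480, 0.550))  # 7670M-class part at 550 MHz  -> 528.0
print(peak_gflops(400, 0.550))  # HD 5670 "Redwood"-class part -> 440.0
```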



if you ask me, those eight 1.6GHz cores sound really underpowered...
other than that, pretty good specs; still, pricing will be the key factor



DieAppleDie said:
if you ask me, those eight 1.6GHz cores sound really underpowered...
other than that, pretty good specs; still, pricing will be the key factor

cute




So performance significantly higher than the HD 7850.
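
For context on that conclusion, a rough peak-throughput comparison, assuming the rumored 800 MHz Orbis clock (the HD 7850 numbers are its retail specs); note that it is the combined GPU + APU shader count that edges past the 7850:

```python
# Peak shader throughput comparison, assuming 800 MHz for Orbis (rumored)
# and the HD 7850's retail specs (1024 SPs at 860 MHz).
def peak_tflops(shaders, clock_ghz):
    return shaders * 2 * clock_ghz / 1000  # 2 FLOPs per clock (FMA)

print(peak_tflops(1024, 0.860))       # HD 7850           -> ~1.76 TFLOPS
print(peak_tflops(896, 0.800))        # Orbis GPU side    -> ~1.43 TFLOPS
print(peak_tflops(896 + 256, 0.800))  # GPU + APU shaders -> ~1.84 TFLOPS
```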