
VGLeaks: Orbis unveiled!

ethomaz said:
The email was confirmed fake... the E6760 runs at 600 MHz... the Wii U GPU at 550 MHz... so even if it was an E6760, there is no way it would have the 576 GFLOPS of the E6760.

The size of the GPU says everything... in 40nm there is no way you can put a GPU over 500 GFLOPS in this space... so the Wii U has ~400 GFLOPS of raw power.

That's the common sense in Beyond3D thread: http://forum.beyond3d.com/showthread.php?t=60501

"in 40nm there is no way you can put a GPU over 500 GFLOPS in this space"

This confuses me, as the E6760 itself was produced on a 40nm process. So clearly this claim is false, unless you're talking about the size of the chip itself (in which case, why say 40nm?). And the GPU die size is 156.21 mm^2 (source), compared with the E6760's die size, which is just 118 mm^2 (source). Indeed, at the 40nm process, and a die size not much higher than that of the Wii U's GPU, with a clock speed that isn't multiples larger (675 MHz rather than 550 MHz), the Radeon HD 6850M manages to push 1080 GFLOPS. Factoring in the clock speed difference, this would be equivalent to 880 GFLOPS at the Wii U's clock speed... so what was that again about not being able to get a GPU over 500 GFLOPS in this space?

Some have been suggesting that it might be based on the ATI Radeon HD 4770, which is a 40 nm process, with a slightly smaller die size, clocked at 750 MHz. It pushes 960 GFLOPS. Underclocked to 550 MHz, this would lower the speed to 704 GFLOPS, and due to the non-linear nature of power requirements for processors, this would also significantly reduce the power draw. Also note that the 4770 is a 2009 chip, so it's not like it needs to be new tech for this to happen, either.
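
For reference, the clock-scaling arithmetic in the two paragraphs above is just a linear rescale of peak GFLOPS with clock speed, holding the shader configuration fixed. A quick Python sketch, using only the figures quoted in this post:

```python
# Peak GFLOPS scales linearly with clock speed if the shader
# configuration is unchanged. Figures are the ones quoted above,
# not independently verified specs.

def scale_gflops(gflops, stock_mhz, target_mhz):
    """Estimate peak GFLOPS after a clock change, all else equal."""
    return gflops * target_mhz / stock_mhz

# Radeon HD 6850M: 1080 GFLOPS @ 675 MHz -> 880 GFLOPS @ 550 MHz
print(scale_gflops(1080, 675, 550))  # 880.0

# Radeon HD 4770: 960 GFLOPS @ 750 MHz -> 704 GFLOPS @ 550 MHz
print(scale_gflops(960, 750, 550))   # 704.0
```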

Meanwhile, the thread you linked to starts in July 2011, and lasts for 175 pages. I'm not going to read through 175 pages to try to find the "common sense" you claim is somewhere hidden within it. You're going to have to do better than to make a vague argument with at least one verifiable inconsistency, and then point to a 175-page thread's first page, which contains posts from a year and a half ago.

Now, unless you have a source backing up your claim that it's a 400 GFLOPS GPU, or can provide a more solid argument than the one I just tore apart, don't bother responding again.



ethomaz said:

 

So, performance significantly higher than the HD 7850.

0_O

My body is ready

...

...

to sell, I need to save up more money to throw at it. Can't wait.



Before the PS3 everyone was nice to me :(

DieAppleDie said:
if you ask me, those eight 1.6 GHz cores sound really underpowered...
other than that, pretty good specs; still, pricing will be the key factor

Compared to the Cell processor, everything is underpowered.



Aielyn said:

"in 40nm there is no way you can put a GPU over 500 GFLOPS in this space"

This confuses me, as the E6760 itself was produced on a 40nm process. So clearly this claim is false, unless you're talking about the size of the chip itself (in which case, why say 40nm?). And the GPU die size is 156.21 mm^2 (source), compared with the E6760's die size, which is just 118 mm^2 (source). Indeed, at the 40nm process, and a die size not much higher than that of the Wii U's GPU, with a clock speed that isn't multiples larger (675 MHz rather than 550 MHz), the Radeon HD 6850M manages to push 1080 GFLOPS. Factoring in the clock speed difference, this would be equivalent to 880 GFLOPS at the Wii U's clock speed... so what was that again about not being able to get a GPU over 500 GFLOPS in this space?

Some have been suggesting that it might be based on the ATI Radeon HD 4770, which is a 40 nm process, with a slightly smaller die size, clocked at 750 MHz. It pushes 960 GFLOPS. Underclocked to 550 MHz, this would lower the speed to 704 GFLOPS, and due to the non-linear nature of power requirements for processors, this would also significantly reduce the power draw. Also note that the 4770 is a 2009 chip, so it's not like it needs to be new tech for this to happen, either.

Meanwhile, the thread you linked to starts in July 2011, and lasts for 175 pages. I'm not going to read through 175 pages to try to find the "common sense" you claim is somewhere hidden within it. You're going to have to do better than to make a vague argument with at least one verifiable inconsistency, and then point to a 175-page thread's first page, which contains posts from a year and a half ago.

Now, unless you have a source backing up your claim that it's a 400 GFLOPS GPU, or can provide a more solid argument than the one I just tore apart, don't bother responding again.

The Wii U GPU die size is 156 mm^2 with eDRAM... 32 MB of eDRAM... without it, the part of the die for the GPU alone is below 104 mm^2.

For over 500 GFLOPS using the HD 7000 arch you need at least 480 SPs running at 550 MHz (I'm using the Wii U GPU clock)... 480 SPs in this arch have a die size of ~115 mm^2... that is bigger than the 104 mm^2 of the Wii U GPU.

The HD 4770 (704 GFLOPS @ 550 MHz) has a die size of 137 mm^2... bigger than the 104 mm^2 too.

There is no AMD tech that can put over 520 GFLOPS in 104 mm^2 running at 550 MHz... either you have a bigger die size or you run the GPU at a higher clock to achieve over 500 GFLOPS.

You know which AMD GPU fits exactly in the 104 mm^2? The Radeon HD 5670 (Redwood)... 400 SPs @ 550 MHz = 440 GFLOPS.

Remember... the 32 MB of eDRAM uses at least 55 mm^2 of the 156 mm^2.
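
For reference, the peak-GFLOPS figures both posters are trading here follow the usual rule of thumb for AMD shader parts of this era: each SP performs one multiply-add (2 FLOPs) per cycle. A sketch under that assumption:

```python
# Peak GFLOPS = shader count x FLOPs per cycle x clock (GHz).
# Assumes 2 FLOPs/cycle per SP (one multiply-add), the usual rule
# of thumb for AMD parts of this generation.

def peak_gflops(shader_count, clock_mhz, flops_per_cycle=2):
    return shader_count * flops_per_cycle * clock_mhz / 1000.0

print(peak_gflops(400, 550))  # 440.0 - Redwood-like 400 SPs @ 550 MHz
print(peak_gflops(480, 550))  # 528.0 - the "at least 480 SPs" threshold
```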



ethomaz said:
The Wii U GPU die size is 156 mm^2 with eDRAM... 32 MB of eDRAM... without it, the part of the die for the GPU alone is below 104 mm^2.

For over 500 GFLOPS using the HD 7000 arch you need at least 480 SPs running at 550 MHz (I'm using the Wii U GPU clock)... 480 SPs in this arch have a die size of ~115 mm^2... that is bigger than the 104 mm^2 of the Wii U GPU.

The HD 4770 (704 GFLOPS @ 550 MHz) has a die size of 137 mm^2... bigger than the 104 mm^2 too.

There is no AMD tech that can put over 520 GFLOPS in 104 mm^2 running at 550 MHz... either you have a bigger die size or you run the GPU at a higher clock to achieve over 500 GFLOPS.

You know which AMD GPU fits exactly in the 104 mm^2? The Radeon HD 5670 (Redwood)... 400 SPs @ 550 MHz = 440 GFLOPS.

Remember... the 32 MB of eDRAM uses at least 55 mm^2 of the 156 mm^2.

I'm sorry, but at this point, I won't accept what you're saying until you provide sources backing up your claims. And from what I can find, the argument of "the GPU alone is 104 mm^2" comes from the assumption that it's based on Redwood, not vice versa.

Also, your numbers don't match up. You claim that the eDRAM must use at least 55 mm^2... but that leaves just 101 mm^2, less than the 104 mm^2 that you claimed the Wii U GPU must be using.

Meanwhile, there's the AMD Radeon HD 7670M, which has a die size of 104 mm^2, a 40 nm process, and a clock speed of 600 MHz (Redwood has a clock speed of 775 MHz), and pushes 576 GFLOPS - at 550 MHz, that's 528 GFLOPS, which again contradicts your claim that there is no AMD tech that can put over 520 GFLOPS in 104 mm^2 running at 550 MHz. And at 600 MHz, it uses only 25 W, if I'm reading this correctly, which means it'll use maybe 20-22 W at 550 MHz... which is right around where it should be, if I understand correctly - my understanding is that the system uses 40 W when active.

And if it's based on the 7690M XT, then the power usage should work out even lower, since it still draws 25 W at a higher clock speed (and therefore should draw even less when underclocked). Or perhaps it's based on the 7590M, which performs pretty much in line with the 7670M, except requiring only 18 W.
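
The "20-22 W" guess above is consistent with how dynamic power scales with clock: roughly linearly if the voltage is held fixed, and closer to cubically if the voltage is lowered along with the clock. A rough sketch of those two bounds, using the quoted 25 W figure (illustrative only, not a real TDP calculation):

```python
# Two rough bounds on power draw after underclocking from 600 to 550 MHz.
# Dynamic power ~ C * V^2 * f: linear in f at fixed voltage, roughly
# cubic if voltage scales down with frequency. Illustrative only.

def power_linear(tdp_w, stock_mhz, target_mhz):
    return tdp_w * target_mhz / stock_mhz

def power_cubic(tdp_w, stock_mhz, target_mhz):
    return tdp_w * (target_mhz / stock_mhz) ** 3

print(power_linear(25, 600, 550))  # ~22.9 W
print(power_cubic(25, 600, 550))   # ~19.3 W - hence the "20-22 W" guess
```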

Meanwhile, the only reference I could find associating 32 MB of eDRAM with 55 mm^2 goes back to 2004/2005, except for one instance on Beyond3D, where one guy speculated that, since IBM gets 61.4 mm^2 for 32 MB of eDRAM at their 45nm process, at 40nm it should be about 55 mm^2. The problem is, this guy sucks at mathematics and scaling. 61.4*40/45 = 54.6, which is likely where he got 55 from. But area works with the square of the scale, so it's actually 61.4*40^2/45^2 = 48.5.
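
The scaling correction being made here is that die area goes with the square of the linear feature size, not with the feature size itself:

```python
# Area scales with the square of the linear feature size.
edram_45nm = 61.4  # mm^2 for IBM's 32 MB eDRAM at 45 nm, per the post

wrong = edram_45nm * 40 / 45          # linear scaling: ~54.6 mm^2
right = edram_45nm * (40 / 45) ** 2   # area scaling:   ~48.5 mm^2
print(round(wrong, 1), round(right, 1))  # 54.6 48.5
```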




Aielyn said:

I'm sorry, but at this point, I won't accept what you're saying until you provide sources backing up your claims. And from what I can find, the argument of "the GPU alone is 104 mm^2" comes from the assumption that it's based on Redwood, not vice versa.

Also, your numbers don't match up. You claim that the eDRAM must use at least 55 mm^2... but that leaves just 101 mm^2, less than the 104 mm^2 that you claimed the Wii U GPU must be using.

Meanwhile, there's the AMD Radeon HD 7670M, which has a die size of 104 mm^2, a 40 nm process, and a clock speed of 600 MHz (Redwood has a clock speed of 775 MHz), and pushes 576 GFLOPS - at 550 MHz, that's 528 GFLOPS, which again contradicts your claim that there is no AMD tech that can put over 520 GFLOPS in 104 mm^2 running at 550 MHz. And at 600 MHz, it uses only 25 W, if I'm reading this correctly, which means it'll use maybe 20-22 W at 550 MHz... which is right around where it should be, if I understand correctly - my understanding is that the system uses 40 W when active.

And if it's based on the 7690M XT, then the power usage should work out even lower, since it still draws 25 W at a higher clock speed (and therefore should draw even less when underclocked). Or perhaps it's based on the 7590M, which performs pretty much in line with the 7670M, except requiring only 18 W.

Meanwhile, the only reference I could find associating 32 MB of eDRAM with 55 mm^2 goes back to 2004/2005, except for one instance on Beyond3D, where one guy speculated that, since IBM gets 61.4 mm^2 for 32 MB of eDRAM at their 45nm process, at 40nm it should be about 55 mm^2. The problem is, this guy sucks at mathematics and scaling. 61.4*40/45 = 54.6, which is likely where he got 55 from. But area works with the square of the scale, so it's actually 61.4*40^2/45^2 = 48.5.

Again, the AMD Radeon HD 7670M has a die size of 118 mm^2... the 104 mm^2 in your article is wrong.

GPU-Z: http://www.techpowerup.com/gpuz/3eyre/

Anyway, for me 400 or 500 GFLOPS is the same for next-gen... really weak... the Wii U GPU is unbelievably weak for the next gen... I don't need to prove anything because even the biggest Nintendo fans know that... and it is way off topic to try to correct your wrong numbers.

The eDRAM size is a best-case figure too... 32 MB of eDRAM can be bigger than 55 mm^2.



ethomaz said:

Aielyn said:

I'm sorry, but at this point, I won't accept what you're saying until you provide sources backing up your claims. And from what I can find, the argument of "the GPU alone is 104 mm^2" comes from the assumption that it's based on Redwood, not vice versa.

Also, your numbers don't match up. You claim that the eDRAM must use at least 55 mm^2... but that leaves just 101 mm^2, less than the 104 mm^2 that you claimed the Wii U GPU must be using.

Meanwhile, there's the AMD Radeon HD 7670M, which has a die size of 104 mm^2, a 40 nm process, and a clock speed of 600 MHz (Redwood has a clock speed of 775 MHz), and pushes 576 GFLOPS - at 550 MHz, that's 528 GFLOPS, which again contradicts your claim that there is no AMD tech that can put over 520 GFLOPS in 104 mm^2 running at 550 MHz. And at 600 MHz, it uses only 25 W, if I'm reading this correctly, which means it'll use maybe 20-22 W at 550 MHz... which is right around where it should be, if I understand correctly - my understanding is that the system uses 40 W when active.

And if it's based on the 7690M XT, then the power usage should work out even lower, since it still draws 25 W at a higher clock speed (and therefore should draw even less when underclocked). Or perhaps it's based on the 7590M, which performs pretty much in line with the 7670M, except requiring only 18 W.

Meanwhile, the only reference I could find associating 32 MB of eDRAM with 55 mm^2 goes back to 2004/2005, except for one instance on Beyond3D, where one guy speculated that, since IBM gets 61.4 mm^2 for 32 MB of eDRAM at their 45nm process, at 40nm it should be about 55 mm^2. The problem is, this guy sucks at mathematics and scaling. 61.4*40/45 = 54.6, which is likely where he got 55 from. But area works with the square of the scale, so it's actually 61.4*40^2/45^2 = 48.5.

Again, the AMD Radeon HD 7670M has a die size of 118 mm^2... the 104 mm^2 in your article is wrong.

GPU-Z: http://www.techpowerup.com/gpuz/3eyre/

Anyway, for me 400 or 500 GFLOPS is the same for next-gen... really weak... the Wii U GPU is unbelievably weak for the next gen... I don't need to prove anything because even the biggest Nintendo fans know that... and it is way off topic to try to correct your wrong numbers.

The eDRAM size is a best-case figure too... 32 MB of eDRAM can be bigger than 55 mm^2.


You guys understand that no matter what the Wii U's GPU is based on, it's still a customized chip, so it wouldn't follow the general rules of a PC part, right? Beyond3D is also not the mother lode of all accurate information; a lot of those people are pretty dumb. Just because they might be in the business doesn't mean they're the best - I'd know, since I've been a member there since the beginning. :P Not to mention the Wii U is pretty much 1/4 to 1/3 the power of the Orbis, as we have all been speculating anyway, if the specs are true. That's really not that powerful if you look at how powerful PCs are these days, so it's pretty much on the mark. We should all be happy that it's not the Wii situation again, and that if Nintendo can make a game that's better than Galaxy 2, it's good news for all gamers; Sony will continue to make good exclusives from their 1st party as well. We need to accept the fact that no 8th-gen console will wow people as much as the 7th-gen jump did, being only roughly 6x-8x more powerful.



ethomaz said:
Again, the AMD Radeon HD 7670M has a die size of 118 mm^2... the 104 mm^2 in your article is wrong.

GPU-Z: http://www.techpowerup.com/gpuz/3eyre/

Anyway, for me 400 or 500 GFLOPS is the same for next-gen... really weak... the Wii U GPU is unbelievably weak for the next gen... I don't need to prove anything because even the biggest Nintendo fans know that... and it is way off topic to try to correct your wrong numbers.

The eDRAM size is a best-case figure too... 32 MB of eDRAM can be bigger than 55 mm^2.

As you still haven't provided any sort of source for the claim that 32 MB of eDRAM must necessarily be at least 55 mm^2, I'm inclined to assume that you're working off what you've been told by someone on Beyond3D, without any actual fact-checking.

What is particularly notable is that we don't actually know the fabrication process size yet. For all we know, it could be 28 nm. Or 55 nm. Until we know that, it's still nothing but speculation in that regard.

And it's not off-topic, because when discussing the PS4 "leaked" stats, we need to have some sort of reference point in order to get a handle on just how powerful these numbers are supposed to be. But you seem to be doing a good job of dodging the calls for sources and solid arguments, so I'm guessing this is just another dodge.

The fact that you use phrases like "unbelievably weak" tells me that you're not being reasonable in this discussion, but rather, that you have a rather strong bias. I'd really rather get solid numbers, thank you very much. If the numbers aren't solid, I'd rather you say so, instead of spouting speculation as though it's fact, and then decrying my demonstration of the holes in your argument (another of which is the point I've emphasised a few times: "modified").


Anyway, here's a different way of looking at things. According to Wikipedia's pages on Xenos and RSX (the GPUs in the 360 and PS3), they pushed 240 GFLOPS and 400 GFLOPS, respectively. The leaks for the new systems say 1.2 TFLOPS and 1.8 TFLOPS. That makes them 4.5-5x the power of the 360 and PS3 with regard to the GPU. The CPU of the PS4 is supposed to be slower than the Cell in the PS3. So unlike the massive jumps from Xbox to 360 and from PS2 to PS3, this is a relatively modest boost. In fact, it's significantly slower than one would anticipate based on Moore's Law, which would have suggested that the new systems would be somewhere around 11-16x the power of the previous generation.
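
Spelling out the ratios in that paragraph (a sketch; the GPU figures are the quoted peak numbers, and the Moore's Law range assumes roughly seven years between generations with a doubling every 21-24 months):

```python
# Generational GPU ratios from the quoted figures, plus the
# Moore's Law expectation over ~7 years (84 months).
gflops_360, gflops_ps3 = 240, 400   # Xenos / RSX peak numbers
leak_low, leak_high = 1200, 1800    # the leaked 1.2 / 1.8 TFLOPS figures

print(leak_low / gflops_360)   # 5.0x over the 360
print(leak_high / gflops_ps3)  # 4.5x over the PS3

# Doubling every 24 or 21 months over 84 months:
print(2 ** (84 / 24), 2 ** (84 / 21))  # ~11.3x and 16.0x
```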

This, again, supports my suspicion that Sony, at least, and probably MS as well, are trying to reduce costs relative to the previous generation. I predicted this quite strongly in the case of Sony - I said that the power would be only a modest improvement on the previous generation, and even less so compared with Wii U. That said, I predicted that MS would push further, as they would want greater parity with PC, whereas Sony would be more concerned with keeping costs down. Either way, Sony do appear to be staying far more conservative, in my opinion, if these numbers are true, and that's what I expected.



ethomaz said:
Aielyn said:
For the GPU, estimates for the Wii U put it somewhere in the 0.5-1.5 TFLOPS range. So let's say that the PS4, according to this leak, has somewhere between 1.3-3x the power of the Wii U, in terms of GPU.

The Wii U has a 400 GFLOPS GPU.

So, based on this calculation... how many times more powerful is the Orbis?

Using the same metric, how many times more powerful was the PS3 than the Wii?

It is obvious that a power difference was going to be present; how big is the real question. Big enough that 3rd-party developers don't make Wii U versions of their games? Not big enough?

Also, from prices and costs, I think it is a US$400 machine without a loss, so Sony will try to sell it at $450 with a bundled game or a year of PSN+ included.



flagstaad said:
So, based on this calculation... how many times more powerful is the Orbis?

Using the same metric, how many times more powerful was the PS3 than the Wii?

It is obvious that a power difference was going to be present; how big is the real question. Big enough that 3rd-party developers don't make Wii U versions of their games? Not big enough?

Also, from prices and costs, I think it is a US$400 machine without a loss, so Sony will try to sell it at $450 with a bundled game or a year of PSN+ included.

We can look at Wii vs PS3/360 fairly easily, since the numbers are pretty much all available.

Wii: 2.9 GFLOPS CPU, ~12 GFLOPS GPU

360: 115.2 GFLOPS CPU, ~240 GFLOPS GPU

PS3: 218 GFLOPS CPU, ~400 GFLOPS GPU

It's worth noting, though, that the PS3 had no eDRAM, so while it may have had a technically faster GPU, it was limited badly by bandwidth.

Anyway, to compare the Wii with the 360, you get a very big CPU difference, and a GPU difference of about 20x. Note that the GPU numbers for the 360 and PS3 are "official" numbers (like the 1.8 TFLOPS number given here for Orbis, assuming they're actually real), as opposed to real numbers (which tend to be lower, because chips rarely run at 100% capability) - the Wii numbers are purely real numbers - they've been determined through testing, as the official numbers were never released.

It's worth factoring in, when looking at these numbers, that games were mostly optimised for 720p or so on the 360 and PS3, and so the effective difference is smaller. But we're probably still looking at an 8-10x difference.

Even based on 400 GFLOPS for Wii U, the difference between it and the PS4 would be only 4.5x, making it a much smaller difference than the 20x between the Wii and 360 (with an even bigger gap to the PS3). If we assume the more likely 480-540 GFLOPS range, then you're looking at 3.33-3.75x faster. And perhaps worryingly, there's still no mention of eDRAM or similar being included with the GPU or CPU (360 had 10 MB of eDRAM, Wii U has 32 MB of eDRAM, PS3 had none).
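
To make the closing comparison concrete, here is the same ratio computed for the three Wii U GPU estimates mentioned, against the leaked 1.8 TFLOPS Orbis figure (a sketch using only numbers from this thread):

```python
# Orbis-to-Wii-U GPU ratios for the estimates discussed above,
# next to the ~20x Wii-to-360 gap for contrast.
orbis = 1800  # GFLOPS, per the leak

for wiiu in (400, 480, 540):
    print(f"{wiiu} GFLOPS -> {orbis / wiiu:.2f}x")
# 400 -> 4.50x, 480 -> 3.75x, 540 -> 3.33x

print(240 / 12)  # ~20x: 360 (~240 GFLOPS) vs Wii (~12 GFLOPS)
```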