
Forums - Sony Discussion - SONY to reveal The PS4 Processor November 13th at AMD APU13

Pemalite said:
Kyuu said:


I thought by now it should be possible to speed up and improve Cell to the extent that it would easily outperform a 1.6GHz Jaguar in most regards, because apparently even the 7 year old Cell isn't too far behind the newer Jaguar architecture. Plus, the fastest supercomputer in the world was Cell powered at one point.

I think Sony might've benefited from the Cell given the fact that it was developed in house with IBM and Toshiba. Not sure how it all economically works for each company, but since they've already invested over a billion dollars in the project, why would they let it all go? Were they too hopeful? Did they miscalculate, or was the Cell planned as a temporary product from the get-go? I'd like to hear your take on the matter.

I really don't know much about this but wasn't Cell initially supposed to handle graphics alone without a GPU? Does it have an edge in graphics processing over similarly priced/powerful traditional CPUs or is this just gibberish?


The fastest supercomputer in the world currently isn't the fastest because of its CPU type, but because it's powered by nVidia's Tesla GPUs.
Besides, when you build a supercomputer you can take advantage of a particular CPU's strengths in order to extract maximum performance anyway; the workload isn't as variable as a game engine's.

For sure, it would be relatively easy to improve Cell to the point it would make Jaguar look like something from the 1980's, but again that would cost billions in R&D, money that Sony really doesn't have when they keep reporting losses to shareholders. Not to mention the extra difficulty it creates for developers, which might have created a situation where the Xbox One became the lead platform for all multiplatform games. (x86 is relatively easy to build games for.)
It would also have driven up costs for the Playstation 4, a situation that Sony would not have benefited from. AMD has already spent the cash on R&D building Jaguar, remember.
Thus, in the end it really all comes down to costs, which actually benefited the consumer with a cheaper console.

As for the Cell doing graphics processing, that was mostly just advertising fluff.
If you remember, decades ago games would actually fall back to a form of "software rendering", where the CPU handled all the graphics effects in a game. This isn't something that's unique to the Cell; it was being done on the old 486 CPUs, which are half the speed of the original Pentiums.
Heck, even the Xbox 360 used its CPU in some games to improve the graphics by spending CPU compute time on Morphological Anti-Aliasing, which is merely a filter on the final image and very cheap to implement.
Half the problem though with CPUs doing GPU work is that rendering a game is stupidly parallel; CPUs, however, are very serial in the way they process information. (And core counts!)
The best example for this is to use a book.
A CPU will read each page in a book, one page after the other, until it gets to the end; a GPU will read every single page in the book at the same time. The Cell isn't exactly GPU-like in the way it processes information, so even though it could potentially render an entire game, which even a CPU from 20 years ago could do, it wouldn't have been feasible from an image quality or performance perspective.
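To put the book analogy in code, here is a rough, hypothetical sketch (not from any real engine) contrasting a CPU-style serial loop with a GPU-style data-parallel map over the same per-pixel work:

from concurrent.futures import ProcessPoolExecutor

def shade_pixel(i):
    # stand-in for per-pixel work; purely illustrative
    return (i * i) % 255

if __name__ == "__main__":
    pixels = range(1_000_000)

    # "CPU": one worker walks every element in order, one after the other
    serial_result = [shade_pixel(i) for i in pixels]

    # "GPU": many workers each chew through a chunk of the data at the same time
    with ProcessPoolExecutor() as pool:
        parallel_result = list(pool.map(shade_pixel, pixels, chunksize=50_000))

    assert serial_result == parallel_result  # same answer, very different execution model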


I love when people use great analogies like that for explaining things. The car one was also pretty good.



eyeofcore said:
Pemalite said:
eyeofcore said:

 

I have a question involving Wii U's CPU. I don't really agree that it is slow; can you compare it to the Xbox 360 and PlayStation 3 CPUs?

No, they can't really be compared, for multiple reasons.
Mostly due to the fact that it's Out-of-Order execution, has better integer and floating point performance, more cache etc'.
I wouldn't be surprised if it ate the Xbox 360 and Playstation 3's CPUs for lunch.

At the instruction set level they can be compared.

eyeofcore said:

I know that Wii U's CPU is a PowerPC 750CL, so it is an old CPU, yet I would not underestimate it that easily. It has a 4 stage pipeline, which is really short and should have little to no "bubbles" compared to the atrocious Xbox 360/PlayStation 3 CPUs with their 32 to 40 stage pipelines, which are also in-order versus the out-of-order Gamecube/Wii/Wii U CPU, even though it is kinda limited, as I read on some forums.

The number of pipeline stages is only a problem if everything else is kept equal.
A processor with a longer pipeline but with lots of cache, a uop/loop buffer, lots of low latency bandwidth to system RAM and a really good branch predictor (something the Xbox 360 and Playstation 3 lack) can make it all a non-issue; plus a longer pipeline can assist in reaching a higher frequency for an overall larger performance benefit.
My i7 3930K for instance has "up-to" a 19 stage pipeline and is one of the fastest CPUs money can buy; because of all those other benefits, it's certainly significantly faster than the PowerPC 750.
I'm not arguing that the Wii U isn't more powerful than the Xbox 360/Playstation 3, but it certainly isn't as fast as the Xbox One or Playstation 4 in any regard when comparing the physical processors in all the machines.
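A back-of-envelope sketch of that branch predictor point, using made-up but plausible numbers (none of these figures are measured or come from the posts above):

def mispredict_cost_per_1k(pipeline_stages, predictor_accuracy, branch_fraction=0.2):
    # penalty per mispredicted branch is roughly a full pipeline refill
    mispredicts = 1000 * branch_fraction * (1 - predictor_accuracy)
    return mispredicts * pipeline_stages

# short pipeline, modest predictor (hypothetical Espresso-like case)
print(mispredict_cost_per_1k(4, 0.90))    # ~80 cycles lost per 1000 instructions
# long pipeline, equally modest predictor (hypothetical Xenon/Cell PPE-like case)
print(mispredict_cost_per_1k(23, 0.90))   # ~460 cycles lost
# long pipeline, strong predictor and big caches (hypothetical modern desktop case)
print(mispredict_cost_per_1k(19, 0.98))   # ~76 cycles lost, despite the long pipeline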

eyeofcore said:

I read this article and it seems that Wii U's CPU, Espresso, can directly access and use the eDRAM; maybe I am wrong:

It can, you aren't wrong.

 

eyeofcore said:

Wii U has a DSP while the Xbox 360/PlayStation 3 don't, so audio is done on one of their CPU cores, right? So only two cores are really free for the game while a third one acts as a DSP, and the OS also partially runs on one of those cores. Compare that to the Wii U, which as rumored has 2 ARM cores used as "background" cores, plus another ARM core for backward compatibility with Wii that could also be used.

I know that Xbox 360 had a bottleneck involving RAM: it was GDDR3 with 22.8GB/s, yet the FSB (or whatever that kind of chip is called) could only push 10.8GB/s, and PlayStation 3 also had some sort of bottleneck. Meanwhile Wii U does not have any kind of bottleneck and uses DDR3 1600MHz, so it has 12.8GB/s like most computers nowadays; it also has 1GB for games, thus almost 3 times more memory to temporarily store game assets/data. DDR3 has much lower latency than GDDR3 so it is great for the OS and games, right?

 

Up to a point. The Xbox 360 for instance had a DSP block that will offload audio processing for "up-to" 256 channels of 48KHz, 16-bit tracks.
Thus if you wanted to do 24bit or 32bit audio you would have to use CPU time.
Conversely, the Xbox 360's GPU could also offload some audio tasks if a developer saw fit; it's "just" flexible enough in order to do so. (More so than the Playstation 3, that's for sure.)
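For a sense of scale, the raw data rate at those quoted limits works out as follows (my own arithmetic, assuming uncompressed 16-bit PCM):

channels, sample_rate_hz, bits_per_sample = 256, 48_000, 16
bytes_per_second = channels * sample_rate_hz * bits_per_sample // 8
print(bytes_per_second / 1_000_000)  # ~24.6 MB/s of raw PCM at the quoted channel/rate/depth limits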

As for bottlenecks, every computer system, be it a PC, gaming console or phone, has some form of bottleneck, be it storage, graphics, processor or system memory etc'.
Essentially the bottleneck is whatever limitation the developers run into first. Usually they build within the limitations of the hardware, but bottlenecks can change from one frame to the next whilst rendering a scene, because the data being processed is always changing.

For example, a big bottleneck on all the current and next generation consoles, if they "theoretically" ran a game like StarCraft 2, would actually be the CPU, due to the sheer number of units that can be on screen at any one time.
If you fired up a game of Battlefield 4, you would find that it's more GPU limited due to the heavy effects that the game employs.

Or if you could run Civilization IV, you would be GPU limited whilst playing your turn, but when you finish your turn and the computer players take theirs, you would quickly find yourself CPU limited.

As for memory latency, both GDDR3 and GDDR5 typically have higher latency than DDR3, however it's not significant; you're looking at 20-30% tops, and even then that's going to make a negligible performance difference anyway, due in part to caches and eDRAM/eSRAM and all their other variations.
Plus, consoles are typically more GPU oriented, and GPUs really don't care about memory latency; bandwidth is the determining factor instead.
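To illustrate why a 20-30% gap is small in absolute terms, here's a rough sketch using illustrative DDR3 timings (not actual console figures):

def cas_latency_ns(cas_cycles, io_clock_mhz):
    # CAS latency in cycles divided by the I/O clock gives the access delay in ns
    return cas_cycles / io_clock_mhz * 1000

ddr3_ns = cas_latency_ns(11, 800)            # DDR3-1600 CL11 -> ~13.8 ns
gddr_ns = ddr3_ns * 1.25                     # assume ~25% worse, per the 20-30% figure above
print(ddr3_ns, gddr_ns, gddr_ns - ddr3_ns)   # the absolute difference is only a few ns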

 

 


Thanks, I know what a bottleneck is and I know that every device has it even our body. LMAO

How much faster is Wii U's CPU than the Xbox 360 and PlayStation 3 CPUs on average, if you could make a rough estimation/approximation?!

Also, core for core, how does the Wii U CPU compare to the CPUs in the Xbox One/PlayStation 4?

Do you have a rough idea of the benefits of Wii U's CPU being able to directly access the eDRAM of Wii U's GPU? Could it have a similar effect to AMD's hUMA in a way? Not the same implementation or anything, I mean the effect on performance and programming, if it is easier than the regular route on UMA systems.

Starcraft 2 is only a bottleneck in the CPU department because a lot of things are done on the CPU and it only uses 2 cores; if it used 4 cores natively and properly then it would run okay on next generation systems. Battlefield 4 is GPU intensive primarily because of DirectX 11 effects and other things.

I have been doing an investigation into Wii U's GPU and I find it silly that Wii U's GPU is supposedly just 320 shaders, basically something like a Radeon HD 5550, or, as some people tried to force, those ridiculous 160 shaders. I found that a GPU like the Radeon HD 6630M would fit, and while AMD's GPUs are done on a relatively cheap process in TSMC's fabs, Wii U's GPU is also done in TSMC's fabs but on a CMOS process or something like that.

So I assume it is done on a better process/silicon or something that would allow higher density and other things.

The Wii U's CPU isn't stronger at all! Its estimated performance comes in at around 15 GFLOPS and I'm willing to bet its integer performance isn't too hot either.

Core for core their features are similar, but performance is a different story altogether, to the point where the IBM Espresso doesn't come anywhere near the Jaguar cores.

The only benefit of the Wii U's CPU accessing the eDRAM is increased CPU performance, that is it. Btw the Wii U doesn't even have GCN cores to begin with to support the hUMA model!

Actually this is where ninjablade might be right. Somebody here named nylevia basically told us that the Wii U has around 150 to 200 shaders. Why do you find it silly that the Wii U only has 320 shaders (maybe even less than we suggest, because nylevia basically confirmed that it doesn't have over 200 shaders)? It's not that surprising, because COD Ghosts is still running at a sub-HD resolution, which only further proves ninjablade's point.

Look here, the Wii U's GPU is done on a 40nm node, which is ancient by today's standards, but to suggest that it has more shaders while COD Ghosts is running at sub-HD resolutions is asinine. The Wii U has current gen performance, deal with it!

 



fatslob-:O said:
 

The wii u's cpu is isn't stronger at all! It's estimated performance comes in at around 15 gflops and I'm willing to bet it's integer performance isn't too hot either.

 

Core for core their features are similar but performance is a different story altogether to the point where the ibm espresso doesn't come anywhere near the jaguar cores.

 

The only benefit of the wiiu accessing edram is increases cpu performance that is it. Btw the wii u doesn't even have GCN cores to begin supporting the hUMA model!

 

Actually this where ninjablade might be right. Somebody here who is named nylevia basically told us that that the wiiu has around 150 to 200 shaders. Why do you find it silly that the wiiu only has 320 sahders (maybe even less than we suggest because nylevia basically confirmed that it doesn't have over 200 shaders) ? It's not that surprising because cod ghosts is still running at a sub hd resolution which only further proves ninjablades point.

 

Look here the wii u's gpu is done on a 40nm node process which is ancient by todays standards but to suggest that it has more shaders while cod ghosts is running at sub hd resolutions is asinine. The wii u has current gen performance deal with it!

 


Wow... I caught the attention of a fanboy... LMFAO

Wii U's CPU is stronger than Xbox 360's Xenon and PlayStation 3's Cell, and is that so hard to believe? We are talking about strictly CPU tasks, where both Xenon and Cell are failures, and Anand himself said it would have been better to put in a dual core from AMD or Intel than the mess that is in the Xbox 360 and PlayStation 3.

It is 15 GFLOPS if all cores have 256KB of L2 cache per core, but 17.5 GFLOPS total since Core 0 has 512KB, Core 1 has 2MB and Core 2 has 512KB of L2 cache. Yet this is all mere speculation, since we don't know the effect of eDRAM as L2 cache, nor whether those cores and that architecture were modified/customized at all. So we can only speculate, since Nintendo/IBM did not disclose anything about Espresso except that it is based on IBM's designs/architecture.
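For reference, the ~15 GFLOPS figure people throw around is usually just the peak theoretical number (my own arithmetic below, assuming paired-singles fused multiply-adds at 4 FLOPs per cycle per core); cache sizes don't actually enter into a peak-FLOPS calculation:

cores, clock_ghz, flops_per_cycle_per_core = 3, 1.243, 4  # assumed paired-singles FMA
peak_gflops = cores * clock_ghz * flops_per_cycle_per_core
print(peak_gflops)  # ~14.9 GFLOPS, which is where the "about 15" estimate comes from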

Did you know that Jaguar IPC is mostly below Core 2 Duo IPC? Probably not... :3

So that is another benefit over the Xbox 360 and PlayStation 3: direct communication between CPU and GPU, which is practically the basis of hUMA, and your reaction is hilarious. I know that Wii U does not have Graphics Core Next (which can only be used with the x86 architecture, according to an agreement with Intel), so it can not use the hUMA model, yet I did not say it has hUMA. I said it would have a similar effect to hUMA, namely direct communication between CPU and GPU, which is what hUMA primarily achieves; so it is not hUMA, yet it does one (or more) of the things that hUMA does...

So you believe somebody that may actually not be right at all, that might be full of it and BSing you, and that could also be a fanboy spreading FUD! So he "confirmed" it, yet nobody on NeoGAF, Beyond3D or any other forum knows what they are looking at, and you only believe his information because you have a bias and/or an interest, and any information that does not fit your agenda will automatically be denied.

I bolded the part where you failed hard and exposed yourself and your lack of knowledge!

Call Of Duty Black Ops 2 is a cheap port of the Xbox 360 version and it ran okay and fine after a patch; Call Of Duty Ghosts is also another cheap port of the Xbox 360 version, so you can not expect a higher resolution, just some minor details looking better, plus the teams that do these ports are just a handful of people. A cheap port running poorly on Wii U does not support his point/claim/"confirmation" at all, and you also ignore other games...

Need For Speed Most Wanted on Wii U ran practically rock solid at 30 fps, has better lighting and somewhat higher quality than the Xbox 360/PlayStation 3 version.

Batman Arkham Origins runs better on Wii U, with fewer bugs and glitches and only a few framerate drops in some areas, compared to the Xbox 360/PlayStation 3.

Deus Ex Human Revolution Director's Cut is better all around, and the Xbox 360/PlayStation 3 versions have framerate issues, even more so when using SmartGlass/Vita as a secondary screen, plus then there is no AA of any kind.

All three examples from above say otherwise, so how in the hell does a GPU with just 160/200 shaders at 550MHz, i.e. 176-220 GFLOPS, beat a GPU that is 240 GFLOPS? Look, you need to set aside your bias if you consider yourself professional and smart, because your reaction points to the opposite of that in a nanosecond.
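(For anyone wondering where those GFLOPS figures come from, it's the usual peak estimate of shader count times two FLOPs per ALU per clock; rough arithmetic below, not an official spec:)

def shader_gflops(shader_count, clock_mhz, flops_per_alu_per_clock=2):
    # peak theoretical throughput, assuming one multiply-add per ALU per clock
    return shader_count * clock_mhz * flops_per_alu_per_clock / 1000

print(shader_gflops(160, 550))  # 176 GFLOPS
print(shader_gflops(200, 550))  # 220 GFLOPS
print(shader_gflops(320, 550))  # 352 GFLOPS for the 320-shader estimate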

40nm is old, but it is not ancient as you claim, and it is still widely used in the industry; if it were ancient it would not be used at all to any degree. >_>

You think Wii U has current generation performance? Wow... Such a golden failure of a statement... Let's make a proper comparison.

PlayStation All-Stars Battle Royale, which is a Super Smash Bros clone or at least inspired by it, is 720p/60fps.

Super Smash Bros for Wii U is 1080p/60fps, and that is confirmed by Nintendo.

Also, why does Bayonetta 2 look way better than Bayonetta if Wii U has current generation performance? How the hell is X running on Wii U, how the hell is Mario Kart 8 running on Wii U, and why is this looking so good?

http://fast.shinen.com/neo/

Shin'en confirmed that the vehicle on the site is rendered in-game (read the replies);

Also explain to me why CryEngine 3.5/4 supports the Wii U if it only has current generation performance? It does not support the Xbox 360/PlayStation 3;

http://www.crytek.com/news/crytek-announces-the-arrival-of-the-new-cryengine

A Radeon HD 6650M should fit and be in Wii U; you will say it is not possible, yet it is possible. First of all, AMD produces their GPUs on a relatively cheap process/silicon, while Nintendo is producing their GPU "Latte" on a TSMC CMOS process/silicon that is more expensive and higher quality, so it should allow somewhat higher density, right? Also, the Radeon HD 6000 series was released in 2010, so I doubt that AMD did not find a way to increase density and possibly save some space in two years, and the 6000 series is used in the Trinity and Richland APUs, which are 32nm and may use an improved chip manufacturing technique.

Nintendo most likely customized the chip, and we know that they don't waste any silicon; they are cheap when it comes to silicon and will try to get the most out of it for less and save a couple of cents if possible, while using a higher quality process/silicon to potentially reduce power consumption and increase the efficiency of their hardware. Also, don't say it is the 4000 series, because the 5000 series is just the 4000 series at 40nm with DirectX 11 support implemented, and the 6000 series is basically an improved 5000 series with changes.

http://www.techpowerup.com/gpudb/326/radeon-hd-6650m.html

http://www.notebookcheck.net/AMD-Radeon-HD-6650M.43962.0.html

If it were just 160 to 200 shaders then Nintendo would not have put in 32MB of eDRAM at all, because it would be a waste of die space and money; we are talking about Nintendo, not Microsoft or Sony! Nintendo saw the design mistakes of the Xbox 360 and PlayStation 3! You love to underestimate things, right? :P

Also don't dare to call me a Nintendo fanboy because I am not... I don't own any games or platforms from Nintendo. ;)

~ Mod edit ~

This user has been moderated by TruckOSaurus for this post.



Pemalite said:
Alby_da_Wolf said:

Thanks for the suggestions, and yes, this is what I get from the current AMD offer and the info available. If you're right about the PS4 and XBOne APUs' TDP, I guess I'll have to choose a different config too, and an APU + discrete GPU Hybrid-Crossfire could be the best alternative, more expensive but also a lot more powerful than my minimum specs. Anyway, I guess I'll have to wait until January to decide, as on the 13th of November we'll get only half of the info needed to build a PC with the newest AMD components and make sure it equals or exceeds 8th gen console specs. Doing it with existing components would probably be easier, but I want to see what's new in AMD's low-power offer; I hate fan noise, and high power with liquid cooling isn't a solution since, environmental considerations aside, electricity is very expensive in Italy, so there's a double incentive to save it.


I hear you on the power thing, Australia's energy prices are even more expensive than Italy's.

With Hybrid Crossfire though, if you're not running a game, the second card generally shuts off. :)
Performance when you need it, power consumption lowered when you don't.

With that in mind though, you could actually save more energy by going with an Intel Core i3 or i5 over AMD's APUs. AMD's CPUs, in comparison, are power hogs for the performance you get, more so against the Core i5, because it's significantly faster and can "hurry up and idle" far sooner.
Plus, you should not need to upgrade it as often either; just drop in a new GPU next time you think you need an upgrade. (Unless you do something CPU heavy like lots of transcoding to multiple devices.)
Drop in a Noctua air cooler and you would have silence.

You could even do a Mini-ITX build, Intel seems to have a far larger choice of Mini-ITX motherboards these days.

I'm quite a big AMD fan for many reasons: my first PC's chipset, an Intel 430TX, was quite a rip-off, while my first CPU, an AMD k6-166 was very good for its price, and what's more, at that time there still were buggy Pentiums around. Later I had a Duron, quite better than a Celeron for the same price, then an Athlon XP when Intel offered really disappointing P4's, and the current one has a quite power frugal AMD CPU and the HD3300, besides being cool enough to just need passive cooling, was the most powerful onboard GPU in its times. So I'm willing to wait and see what AMD will be able to offer with its new APUs. BTW I believe we need AMD to stay afloat if we want Intel to behave fairly enough, without competition it could become really ruthless...



Stwike him, Centuwion. Stwike him vewy wuffly! (Pontius Pilate, "Life of Brian")
A fart without stink is like a sky without stars.
TGS, Third Grade Shooter: brand new genre invented by Kevin Butler exclusively for Natal WiiToo Kinect. PEW! PEW-PEW-PEW! 
 


eyeofcore said:


Wow... I caught attention of a fanboy... LMFAO

LOLOL you got yourself banned. (I should probably warn the mods at the IGN forums but whatever. Oh geez, I just found out you hang out at AnandTech too! I hope the guys won't rip into you over there. You have my blessings for being brave enough to even show your attitude there LOL.)

eyeofcore said:


Wii U's CPU is stronger than Xbox 360's Xenon and PlayStation 3's CELL and is it that hard to believe? We are talking about strictly CPU tasks where both Xenon and CELL are failures and Anand himself said that it was better to put a dual core from AMD or Intel than the mess that is in Xbox 360 and PlayStation 3.

Why is it so hard to believe that the Wii U's CPU is weak! CPU tasks? (Apparently you don't know what the purpose of a CPU even is!) The bigger failure is the IBM Espresso LOL. Apparently you don't know that the IBM Espresso is using the same cores as the Gamecube LOL. Your system of choice got denied by 4A Games. Bwahahaha

eyeofcore said:


15 GFLOPS if all cores have 256KB of L2 Cache per core, it is 17.5Gflops total since Core 0 has 512KB Core 1 has 2MB and Core 2 has 512KB of L2 Cache. Yet this is all mere speculation since we don't know the effect of eDRAM as L2 Cache nor were those cores and that architecture being modified/customized a bit at all. So we can only speculate since Nintendo/IBM did not disclosed anything involving Espresso except that it is based on IBM's designs/architecture.

You further showed ignorance of how caches work even AFTER the fact that pemalite lectured you. Hehehe. You probably don't even know the purpose of a cache! Caches are used to maintain performance not increase it!

eyeofcore said:


Did you know that Jaguar IPC is mostly below Core 2 Duo IPC? Probably not... :3

Did you know that the disaster that is the IBM Espresso has a lower IPC than every other processor on the market? (I'm willing to bet that you will deny the fact. LOL) 5 GFLOPS at 1.2GHz is pathetic by today's standards.

eyeofcore said:


So another benefit over Xbox 360 and PlayStation 3, so it is a direct communication between CPU and GPU also that is practically the basis of hUMA and your reaction is hilarious. I know that WIi U does not have Graphic Core Next(that can be only used with x86 architecture according to agrement with Intel) so it can not use hUMA model yet I did not say it has hUMA. I said that it would have similar effect like hUMA and that is direct communication between CPU and GPU and that is what hUMA achieves primarily, so it is not hUMA yet it does what one(or more) thing(s) that hUMA does...

Except the Wii U's CPU and GPU can't communicate in harmony together. The caches must be separate and the GPU doesn't provide any flat addressing methods, which renders the Wii U incapable of any effect similar to hUMA. (Dat ignorance of yours showing LOL.) BTW what makes this post even more hilarious is the fact that GCN isn't exclusive to any CPU architecture. Why would ARM partner up with AMD? (I'm willing to bet that you don't know the answer.)

eyeofcore said:


So you believe somebody that may actually not be right at all and that might be FOS and BSing you that also could be a fanboy spreading FUD! So he "confirmed" it yet nobody on NeoGAF, Beyond3D or any other forum knows at what are they looking and you only believe his information because you have a bias and/or an interest and any information that does not fit your agenda will be automatically be denied.

That so called "somebody" is a developer. If your going to deny a developers words then I don't have anymore to say. 

eyeofcore said:


I bolded the part where you failed hard and exposed yourself and your lack of knowledge!

You mean you bolded the part where you showed even more ignorance ? 0.0

eyeofcore said:


Call Of Duty Black Ops 2 is a cheap port of Xbox 360 version and it ran okay and fine after a patch, Call Of Duty Ghost is also another cheap port of Xbox 360 version so you can not expect higher resolution except some minor details looking better plus those teams that port are just handful of people. A cheap port running poorly on Wii U does not support his point/claim/"confirmation" at all also you ignore other games...

How can you say that when the Wii U is literally cheap hardware! That so called "cheap port" is done on a system that has literally almost identical CPU architecture to the Gamecube while sporting a weak GPU from the PC space. It's the Wii U's fault that it can't handle BLOPS 2 at 720p!

eyeofcore said:


Need For Speed Most Wanted on Wii U ran practicaly rock solid at 30 fps, has better lightning and some higher quality over Xbox 360/PlayStation 3 version.

It doesn't change the fact that it's closer to consoles!

eyeofcore said:


Batman Arkham Origins runs better on Wii U with less bugs and glitches and with a few framerate drops in some areas compared to Xbox 360/PlayStation 3.

Neither DF nor LOT did the analysis, plus I wouldn't be too sure about claiming the definitive version after Arkham City ran worse on the Wii U, and what's more, Arkham Origins is using the same engine!

eyeofcore said:


Deus Ex Human Revolution Director's Cut is all around better and Xbox 360/PlayStation 3 version have framerate issues and even more when using smartglass/vita as secondary screen plus then there is no anykind of AA.

Again, neither DF nor LOT has done an analysis yet.

eyeofcore said:


All three examples from above say otherwise so how in the hell just 160/200 shaders at 550mhz, so a GPU with 176-220GFLOPS beats a GPU that is 240GFLOPS? Look, you need to set a side your bias if you consider yourself a professional and smart because your reaction points out to oposite of that in a nano second.

Those 3 examples turned into one because you couldn't provide an adequate source for your claims LOL. Look, you need to set aside your bias and deal with the fact that the Wii U is weak. If you're a professional then you should know why the Wii U is failing to outdo the PS360 in a lot of games.

eyeofcore said:


40nm is old, but it is not ancient by your claims and is still widely used in the industry because if it was ancient then it would not be used at all to any degree. >_>

If a poor company like AMD could do it, then why is Nintendo so lazy about transitioning to a newer process? AMD and smartphone manufacturers did it at the beginning of 2012! Deal with the fact that 40nm is pathetic by today's standards!

eyeofcore said:


You think Wii U has current generation performance? Wow... Such a gold failure of a statement... Lets make a proper comparison.

The failure is the fact that you have shown absolutely no understanding thus far!

eyeofcore said:

 

PlayStation All-Stars Battle Royale is a Super Smash Bros clone or is being inspired by it is 720p60fps

Super Smash Bros 4/U/Wii U is 1080p60fps and that is confirmed by Nintendo

Smash Bros is only confirmed to be 1080p by Nintendo. http://www.eurogamer.net/articles/2013-06-11-super-smash-bros-for-wii-u-and-3ds-due-2014 The 60fps part doesn't have any backing from Nintendo themselves LOL. Uh oh, you might have to deal with 30fps! Bwahahaha

eyeofcore said:


Also why Bayonetta 2 looks way better than Bayonetta? If Wii U has current generation performance also how the hell is X running on Wii U also how the hell is Mario Kart 8 running on Wii U and why is this looking so good?

They all look average at best. I guess you're so used to sub-HD that you feel the leap that PS360 users felt in 2006 LOL.

eyeofcore said:

 

http://fast.shinen.com/neo/

Shin'en confirmed that the vehicle on the site is rendered in game(read replies);

I wonder how the rest will turn out. Remember that Nano Assault game? It turned out to be not so impressive if you ask anyone LOL.

eyeofcore said:

 

Also explain me why Cryengine 3.5/4 is supported for Wii U if it has curren generation performance? It does not support Xbox 360/PlayStation 3;

http://www.crytek.com/news/crytek-announces-the-arrival-of-the-new-cryengine

The unversioned CryEngine is even supported by current gen consoles! http://www.gamespot.com/articles/new-cryengine-revealed/1100-6413371/ So much for trying to differentiate the Wii U from current gen consoles LOL.

eyeofcore said:


Radeon HD 6650M should fit and be in Wii U, you will say it is not possible yet it is possible. Firs tof all AMD produces their GPU's at relatively cheap process/silicon while Nintendo is producing their GPU "Latte" at TSMC CMOS process/silicon that is more expensive and higher quality so it should allow some higher density, right? Also Radeon HD 6000 series were released in 2010 so I doubt that AMD did not find a way to increase density and possibly save some space in two years and 6000 series are used in Trinity and Richland APU's that are 32nm that may use improved technique of manufacturing the chips.

AMD always fabs their GPUs at TSMC, so if anything the quality ain't any different. Wow, you don't follow AMD like you claim in your YouTube videos! -_- Besides, any increase in density is small compared to a node transition. At best they can probably only squeeze out 10%.

eyeofcore said:


Nintendo most likely customized the chip and we know that they don't waste any kind of silicon, they are being cheap ass when comes to silicon and they will try to get most out of the silicon for less and save couple of cents if possible while they will use higher quality process/silicon to potentially reduce power consumption and increase efficency of their hardware. Also don't say it is 4000 series because they 5000 series is just 4000 series at 40nm with implemented DirectX 11 support and 6000 series is improved 5000 series basically with changes.

http://www.techpowerup.com/gpudb/326/radeon-hd-6650m.html

http://www.notebookcheck.net/AMD-Radeon-HD-6650M.43962.0.html

Thanks for proving that Nintendo is cheap when it comes to process nodes. *rolls eyes* BTW the Wii U's processing components probably take less than 30 watts! http://www.anandtech.com/show/6465/nintendo-wii-u-teardown I assume that the CPU takes something like 5 watts while the GPU might take up to around 25 watts. Us PC users all know that a Radeon HD 5550's TDP is about 40 watts max! If we assume half of the shaders are non-functional, then that exactly matches up with the power consumption of the Wii U, so ninjablade might be right after all! The Wii U is closer to the HD 5000 series than the HD 6000 series. The Wii U IS looking to have less than 200 GFLOPS in total! Deal with it!

eyeofcore said:


If it was just 160 to 200 shaders then Nintendo would not put 32MB of eDRAM at all because it would be a waste of die space and money, we are talking about Nintendo and not Microsoft or Sony! Nintendo saw their mistakes with Xbox 360 and PlayStation 3 in design! You love to underestimate things, right? :P

Nintendo is more incompetent when it comes to designing hardware! That 32MB of eDRAM isn't exactly a waste. What is a waste is that Nintendo cheaped out on the hardware, so don't try to deny this! Just because Nintendo saw their mistakes does not mean that Nintendo will learn from them! No, I don't love to underestimate; instead I overestimated, thinking Nintendo would have completely better hardware! Now you have just further changed my view on how weak the Wii U truly is! You did not make your case better, in fact those examples were used against you! I think I have solved the puzzle of the Wii U's hardware mysteries!

eyeofcore said:


Also don't dare to call me a Nintendo fanboy because I am not... I don't own any games or platforms from Nintendo. ;)

SMH. Your YouTube channel says very differently, and so do the users at AnandTech.



eyeofcore said:

All three examples from above say otherwise so how in the hell just 160/200 shaders at 550mhz, so a GPU with 176-220GFLOPS beats a GPU that is 240GFLOPS? Look, you need to set a side your bias if you consider yourself a professional and smart because your reaction points out to oposite of that in a nano second.

40nm is old, but it is not ancient by your claims and is still widely used in the industry because if it was ancient then it would not be used at all to any degree. >_>


Even if the Wii U's GPU's Gigaflop performance were lower than the Xbox 360's or Playstation 3's GPUs', it would still be faster; Gigaflops are a pointless metric to use when comparing different architectures.

Basically, the Wii U's GPU can do *more* per Gigaflop thanks to technologies such as better compression algorithms, better culling, more powerful geometry engines, you name it.
If a GPU only had to deal with floating point math, then it would be an accurate metric to gauge performance, but that's not reality.

As for 40nm vs 28nm, well, the laws of physics play a part here; 28nm is pretty much superior.
With that said, 40nm is also stupidly cheap and very, very mature.
You can actually optimise the type of transistor for a given fabrication process in order to minimise leakage, which means you can pack more transistors into the same space without dropping down a lithographic node.

For example, the Radeon 290X has 43% more transistors than the Radeon 7970, yet its die size is only 20% larger on the same fabrication process.
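Those percentages check out roughly against the commonly cited (approximate) figures for the two chips:

# approximate public figures: Tahiti (HD 7970) ~4.31B transistors / ~365 mm^2,
# Hawaii (R9 290X) ~6.2B transistors / ~438 mm^2, both on TSMC 28nm
tahiti_transistors, tahiti_die_mm2 = 4.31e9, 365
hawaii_transistors, hawaii_die_mm2 = 6.2e9, 438
print((hawaii_transistors / tahiti_transistors - 1) * 100)  # ~44% more transistors
print((hawaii_die_mm2 / tahiti_die_mm2 - 1) * 100)          # ~20% larger die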



--::{PC Gaming Master Race}::--

Pemalite said:
eyeofcore said:

All three examples from above say otherwise so how in the hell just 160/200 shaders at 550mhz, so a GPU with 176-220GFLOPS beats a GPU that is 240GFLOPS? Look, you need to set a side your bias if you consider yourself a professional and smart because your reaction points out to oposite of that in a nano second.

40nm is old, but it is not ancient by your claims and is still widely used in the industry because if it was ancient then it would not be used at all to any degree. >_>


Even if the Wii U's GPU's Gigaflop performance was lower than the Xbox 360 or Playstation 3's GPU's, it would still be faster, it's a pointless metric to use when comparing different architectures.

Basically, the Wii U's GPU can do *more* per Gigaflop thanks to technologies such as better compression algorithms, better culling, more powerfull geometry engines, you name it.
If a GPU only had to deal with floating point math, then it would be an accurate metric to gauge performance, but that's not reality.

As for 40nm vs 28nm, well. The law of Physics plays here, 28nm is pretty much superior.
With that said, 40nm is also stupidly cheap and very very mature.
You can actually optimise the type of transister for a given fabrication process in order to minimise leakage, what that means is you can pack more transisters into the same space without dropping down a lithographic node.

For example the Radeon 290X has 43% more Transisters than the Radeon 7970, yet it's die size is only 20% larger at the same fabrication process.

The Wii U may be superior due to the fact that it has more memory. BTW the Radeon HD 5000 tessellators were awful. (Ugh, how painful that was.) Anyway, agreed with everything else.



fatslob-:O said:

The WII U may be superior due to the fact that it has a larger memory. BTW Radeon HD 5000 tessellators were awful. (Uhhhg how painful that was.) Anyways agreed with everything else.


Still, the Radeon 5000 class Tessellators are far better than the non-existent Tessellator in the Playstation 3 and the Truform based Tessellator in the Xbox 360. :P



--::{PC Gaming Master Race}::--

Pemalite said:
fatslob-:O said:

The WII U may be superior due to the fact that it has a larger memory. BTW Radeon HD 5000 tessellators were awful. (Uhhhg how painful that was.) Anyways agreed with everything else.


Still, the Radeon 5000 class Tessellators are far better than the non-existent Tessellator in the Playstation 3 and the Truform based Tessellator in the Xbox 360. :P

Touche.



fatslob-:O said:

eyeofcore said:


Wow... I caught attention of a fanboy... LMFAO

LOLOL you got yourself banned. (I should probably warn the mods at IGN forums but whatever. Oh geez I just found out you hang out at anandtech too! I hope the guys won't rip at you right there. You have my blessings for being brave to even show your attitude there LOL.)

So...? hahaha You get excited over nothing. lol

eyeofcore said:


Wii U's CPU is stronger than Xbox 360's Xenon and PlayStation 3's CELL and is it that hard to believe? We are talking about strictly CPU tasks where both Xenon and CELL are failures and Anand himself said that it was better to put a dual core from AMD or Intel than the mess that is in Xbox 360 and PlayStation 3.

Why is it so hard to believe that the WII Us CPU is weak! CPU tasks ? (Apparently you don't know what the purpose of a CPU even is!) The bigger failure is the ibm espresso LOL. Apparently you don't know that the ibm espresso is using the same cores from the gamecube LOL. Your system of choice got denied by 4A games. Bwahahaha

*facepalm* You failed...

Wow... Why is it hard to believe that Wii U's CPU is not weak?!

I love how you force a premise that is not valid. As I said, a CPU task is a CPU task, and you don't know what I meant by CPU task? Wow! A CPU primarily does one thing, aka serial tasks, while it is not good at parallel tasks like a GPU is.

Actually the bigger failures are Xbox 360's Xenon and PlayStation 3's Cell, which have a 32-40 stage pipeline and thus a lot of "bubbles", and 1MB/768KB of cache for three cores is not enough, plus Gekko/Broadway/Espresso is out-of-order execution versus their in-order execution.

Apparently you really love to force premises and try to lowball me without asking a question. Just because something is "apparent" does not mean it is true, and you only make yourself an easier target to get shot down and pwned in debates, because assuming that a person surely does not know this or that can backfire on you, like the DRM thing with the Xbox One and Microsoft.

Also the Gamecube and Wii/Wii U cores are not really the same: Gamecube's Gekko is the PPCDBK (an iteration of the PowerPC 750CX) while Wii/Wii U's Broadway/Espresso is the PowerPC 750CL, yet Wii U's CPU is the first PowerPC 750 with multi-core support and other things, so it is not really a true PowerPC 750 design since it got a couple of small things from POWER7, e.g. the eDRAM implementation.

My system of choice did not get denied by 4A Games; did 4A Games say that PC (x86-64) is weak? No... LMAO

eyeofcore said:


15 GFLOPS if all cores have 256KB of L2 Cache per core, it is 17.5Gflops total since Core 0 has 512KB Core 1 has 2MB and Core 2 has 512KB of L2 Cache. Yet this is all mere speculation since we don't know the effect of eDRAM as L2 Cache nor were those cores and that architecture being modified/customized a bit at all. So we can only speculate since Nintendo/IBM did not disclosed anything involving Espresso except that it is based on IBM's designs/architecture.

You further showed ignorance of how caches work even AFTER the fact that pemalite lectured you. Hehehe. You probably don't even know the purpose of a cache! Caches are used to maintain performance not increase it!

You further decided to fail and I can only laugh at you, since the purpose of a cache is to store information, e.g. code and data for things like AI and physics. Caches maintain performance and thus increase it in the long run compared to an equivalent CPU with less cache! I am not talking about GFLOPS performance, as you apparently think I am. lol

eyeofcore said:


Did you know that Jaguar IPC is mostly below Core 2 Duo IPC? Probably not... :3

Did you know that the disaster that is the ibm espresso has an even lower IPC than every processor on the market ? (I'm willing to bet that you will deny the fact. LOL) 5 gflops at 1.2 Ghz is pathetic by todays standards. 

It is not a disaster for what it is... Espresso matches up in GFLOPS;

i5 480M: 2 cores / 4 threads / 32nm lithography / 2.66-2.9GHz / 3MB cache / 81mm² die / 328 million transistors / 35 watt TDP

Pentium E5800: 2 cores / 45nm lithography / 3.2GHz / 2MB cache / 82mm² die / 228 million transistors / 65 watt TDP

Are both of those disasters for what they are, compared to Espresso, which has 3 cores, is produced on 45nm lithography, is clocked at 1.243GHz, has 3MB of L2 cache, a die size of just 27.7mm² with 60 million transistors, and a TDP of something like 10-15 watts?!

You deny the fact that it is not a disaster for what it is: for the architecture it uses and the performance per mm² it delivers on 45nm lithography.

eyeofcore said:


So another benefit over Xbox 360 and PlayStation 3, so it is a direct communication between CPU and GPU also that is practically the basis of hUMA and your reaction is hilarious. I know that WIi U does not have Graphic Core Next(that can be only used with x86 architecture according to agrement with Intel) so it can not use hUMA model yet I did not say it has hUMA. I said that it would have similar effect like hUMA and that is direct communication between CPU and GPU and that is what hUMA achieves primarily, so it is not hUMA yet it does what one(or more) thing(s) that hUMA does...

Except the WII U can't communicate in harmony together with the GPU or CPU. The caches must be separate and the GPU doesn't provide any flat addressing methods which renders the WII U been incapable of any similar affect to hUMA. (Dat ignorance of yours showing LOL.) BTW what makes this post even more hilarious is the fact that GCN isn't exclusive to any CPU architectures. Why would ARM partner up with AMD ? (I'm willing to bet that you don't know this answer.)

Espresso has a separate eDRAM cache for each CPU core, and Latte has 32MB of eDRAM plus a separate 2MB of eDRAM. To quote from NeoGAF:

"The interface running around the lower right corner of the die is the DDR3 memory interface (the DDR3 is known as MEM2).

Running along the top and left sides of the die, along with a small section on the upper right side of the die, are general purpose I/O (GP I/O). The GP I/O is likely dedicated in large part to communication with the CPU, but may also be used for lower-bandwidth off-chip communication, such as the Blu-Ray drive or SD card slot.

On the bottom left of the die there are two high-speed I/O (HS I/O) interfaces, such as SERDES (serialiser/deserialiser), which are used to achieve very high bandwidth over relatively few wires. Proposed applications of these include:

- Communication with the hardware that handles video transmission to the gamepad
- Communication with the CPU (to provide high-bandwidth/low-latency eDRAM access)
- USB interfaces
- SATA interface
- Flash memory interface
- HDMI

There are also two blocks on the right side of the chip above the DDR3 interface that are currently unknown. These may be part of the DDR3 interface, or may be I/O elements in and of themselves."

Your responses are ignorant, arrogant and above all egotistical... That's kinda antisocial.

The Graphics Core Next architecture acts similarly to x86, and I read on the AnandTech forums that it can not be used in a hUMA/APU configuration with CPU architectures other than x86; that is what I know. AMD partnered with ARM on research involving HSA, and also to diversify themselves, to increase their chances of profit and survival, and to expand their presence in the market.

eyeofcore said:


So you believe somebody that may actually not be right at all and that might be FOS and BSing you that also could be a fanboy spreading FUD! So he "confirmed" it yet nobody on NeoGAF, Beyond3D or any other forum knows at what are they looking and you only believe his information because you have a bias and/or an interest and any information that does not fit your agenda will be automatically be denied.

That so called "somebody" is a developer. If your going to deny a developers words then I don't have anymore to say. 

How can you be sure that this "developer" does not have an agenda? And even then, how can you be sure, since he is not an engineer? Also, how can I, you or he be sure that the hardware is not locked down or something because the system is not stable? Something like Xbox One's Kinect situation, where Microsoft dedicated/allocated 10% of the GPU power to it, yet they can't let games use it since the system would be unstable.

Remember when Cliff said that Unreal 4 could not run on Wii U, yet a few days later he pulled back his statement? Remember? Yes?

Did he develop for the Wii U? Can he confirm it? Did he develop a fully fledged game? Is he a programmer? Do developers have complete access to the hardware, or is it limited by the firmware, which could be updated... Countless factors.

eyeofcore said:


I bolded the part where you failed hard and exposed yourself and your lack of knowledge!

You mean you bolded the part where you showed even more ignorance ? 0.0

Fail... You said that guy's point is valid because a cheap port is not looking good on it or is not native 720p/HD. It is a cheap port of the Xbox 360 version; it was done by a handful of people, not a fully fledged team of 100+ that could do complete engine changes and optimizations wrapped around Wii U's hardware and its virtues.

eyeofcore said:


Call Of Duty Black Ops 2 is a cheap port of Xbox 360 version and it ran okay and fine after a patch, Call Of Duty Ghost is also another cheap port of Xbox 360 version so you can not expect higher resolution except some minor details looking better plus those teams that port are just handful of people. A cheap port running poorly on Wii U does not support his point/claim/"confirmation" at all also you ignore other games...

How can you say that when the WII U is literally cheap hardware! That so called "cheap port" is done on a system that has literally almost identical gamecube CPU architecture while sporting a weak GPU from the PC space. It's the WII Us fault that it can't handle BLOPS 2 at 720p!

Another fail.

So you have something against cost effective hardware? eSRAM/SRAM is about six times larger than eDRAM, which has a bit higher latency and a bit lower bandwidth/performance, yet it is good enough at 45nm and better than the regular 90nm SRAM L2 cache that is in the Xbox 360/PlayStation 3. And to maintain compatibility when shrinking the components (a shrink would give lower latency, and games optimized for the original latency of the Xbox 360/PlayStation 3's 90nm SRAM would become incompatible), they need to change some things to negate some of the improvements of the die shrink so as not to break compatibility with the software.

It is a cheap port no matter how you flip flop it... It is not Wii U's fault. So are you saying that it is not Bethesda's fault that their games like Skyrim run poorly on PlayStation 3, it is the hardware's fault... Right? *facepalm*

eyeofcore said:


Need For Speed Most Wanted on Wii U ran practicaly rock solid at 30 fps, has better lightning and some higher quality over Xbox 360/PlayStation 3 version.

It doesn't change the fact that it's closer to consoles!

The fact is that it is overall better and is a good port, considering that a handful of people did that port. Also, do you know a game called Project CARS? It was set for PlayStation 3, Xbox 360, Wii U and PC. Now it is set for Wii U, Xbox One and PlayStation 4, so explain to me why they did not ditch the Wii U version... :3

Hint: the engine developer at SMS for Project CARS worked on the Need For Speed Most Wanted U port, so he knows the capability of the Wii U's hardware, yet he decided not to ditch it, and the reports involving the Wii U build always mention DirectX 11-like features... DirectX 11-level features are taxing, yet they are being used in the Wii U version and the Wii U version did not get ditched.

This will be the first Wii U game that is fully next generation.

eyeofcore said:


Batman Arkham Origins runs better on Wii U with less bugs and glitches and with a few framerate drops in some areas compared to Xbox 360/PlayStation 3.

Neither DF or LOT did the analysis plus I wouldn't be to too sure of claiming the definitive version after the fact that arkham city ran worse on the WII U and what's more is that arkham origins is using the same engine!

Another failure... Arkham City Wii U Edition used 2 cores, as said by the developers on NeoGAF, and they said they barely used the 3rd core. I know it is using the same engine, yet that does not mean the successor, Arkham Origins, would run exactly the same as Arkham City. Developers need time to get to know the hardware; they learned what to do on Wii U and how to optimize for it, so they use what they learned.

So by your logic, launch games on Xbox 360 and PlayStation 3 got the most out of the system, since you deny that developers can learn how to optimize for the hardware and resolve the issues. So that should also be valid for PlayStation 4 and Call Of Duty Ghosts, which has some frame rate issues, so the successor to Ghosts should also run with some frame rate issues?

eyeofcore said:


Deus Ex Human Revolution Director's Cut is all around better and Xbox 360/PlayStation 3 version have framerate issues and even more when using smartglass/vita as secondary screen plus then there is no anykind of AA.

Again neither DF or LOT did an analysis yet.

So you deny the reports from NeoGAF users... Reviewers touted the Wii U version as the definitive version; a while ago I stumbled upon a thread at NeoGAF: http://www.neogaf.com/forum/showthread.php?t=702931

SSAO is dialed back and the frame rate/performance takes a hit in dual screen mode in the PlayStation 3 version... The same thing is most likely true of the Xbox 360 version, since they were not designed with dual screens in mind, unlike the Wii U version.

eyeofcore said:


All three examples from above say otherwise so how in the hell just 160/200 shaders at 550mhz, so a GPU with 176-220GFLOPS beats a GPU that is 240GFLOPS? Look, you need to set a side your bias if you consider yourself a professional and smart because your reaction points out to oposite of that in a nano second.

Those 3 examples turned into one because you couldn't provide an adequate source to your claims LOL. Look you need to set aside your bias and deal with the fact that the WII U is weak. If your a professional then you should know why the WII U is failing to out the PS360 in alot of games. 

I am sorry, but you have a bias and you need to set aside your bias, since I don't have one; that is just a premise you are forcing.

You can't deal with the fact that Wii U is getting cheap ports of Xbox 360 versions. Is it that hard to accept the truth? Don't you remember when the PlayStation 3 got cheap ports? Are you naive enough to think that developers can use up all the potential in one single year and that they already know how to optimize for it? First of all, not many game developers had experience with the Wii's CPU, let alone a multi-core PowerPC 750CL, which is the first of its kind and thus practically totally unfamiliar to every developer in the world. eDRAM as L2 cache in a CPU is a completely new thing to these developers, and doing low level API optimizations on the VLIW5/VLIW4 architecture is also completely unknown to them, let alone if it has its own custom OpenGL API from Nintendo, likely called GX2 as in the leaked dev kit information, plus it could also have TEV units rather than TMUs to make things worse for them. It is like the PlayStation 2: developers could not really wrap their heads around it in the first year and they were literally writing zeros and ones, as I read on various forums. LMAO

At least Wii U is still far simpler to program for than the Xbox 360 and PlayStation 3 nightmare, with their serious bottlenecks and shortcomings.

eyeofcore said:


40nm is old, but it is not ancient by your claims and is still widely used in the industry because if it was ancient then it would not be used at all to any degree. >_>

If a poor company like AMD could then why is nintendo so lazy with transitioning to the process ? AMD and smartphone manufacturers did it at the beginning of 2012! Deal with the fact that 40nm is pathetic by todays standard then!

You need to deal with the fact that it is not pathetic even by today's standards. If it were something like 55nm it would be borderline, and 65nm would be pathetic by today's standards, while 40nm is now a mature and safe way to go for getting larger chips fabbed.

You simply can't understand it?! 32nm is nearly fully occupied, and 28nm is occupied to full extent, primarily because of ARM chips, and also because of the design. The CPU is 45nm and the GPU is 40nm, and when IBM makes 20nm fabs available for production chips, Nintendo will jump on it for their CPU by the time TSMC gets their 20nm/16nm ready. Plus, did you forget about the eDRAM part? From what I know, only Intel has managed to get eDRAM at 22nm with their latest IGPs for Haswell, and eDRAM gets trickier to produce than eSRAM as the lithography process gets smaller.

When the IBM and TSMC 20nm fabs are ready, Nintendo will do what Microsoft did with the latest Xbox 360 chip: unify them into one and drastically reduce costs so that they could sell Wii U for $250.

I can only agree with you that Nintendo should have waited rather than jumping the gun early; they should have waited for 20nm.

eyeofcore said:


You think Wii U has current generation performance? Wow... Such a gold failure of a statement... Lets make a proper comparison.

What's the failure is the fact that you have showed absolutely no understanding thus far!

Well, your responses hit nothing, what with the premise that you are trying to force.

eyeofcore said:

 

PlayStation All-Stars Battle Royale is a Super Smash Bros clone or is being inspired by it is 720p60fps

Super Smash Bros 4/U/Wii U is 1080p60fps and that is confirmed by Nintendo

Smash bros is only confirmed to be 1080p by nintendo. http://www.eurogamer.net/articles/2013-06-11-super-smash-bros-for-wii-u-and-3ds-due-2014 That 60fps part doesn't even have any backing by nintendo themselves LOL. Uh Oh you might have to deal with 30fps! Bwahahaha

I don't have to deal with anything; just because they only said it is 1080p and did not comment on frame rate does not mean that it is not 60fps. We are talking about a fighting game and, above all, it is from Nintendo; Nintendo targets 60fps whenever possible and that is their target for Mario Kart 8.

eyeofcore said:


Also why Bayonetta 2 looks way better than Bayonetta? If Wii U has current generation performance also how the hell is X running on Wii U also how the hell is Mario Kart 8 running on Wii U and why is this looking so good?

They all look average at best. I guess your so used to sub hd you feel that leap that PS360 users did in 2006 LOL.

I am sorry, but NeoGAF does not agree with you. :3

For your information, my only console was the original PlayStation, and since 2004 I have been gaming on my PC. I was at my sister's girlfriend's house with their brother, who had a PlayStation 3, and he played PES at 720p/60fps, so I got my first feel of HD on a console in 2008-9. I then decided to up the resolution on my PC to 1280x1024 in Half-Life 2, and never regretted it, even with a performance hit that dragged me down to 30fps from a relatively stable 60.

eyeofcore said:

 

http://fast.shinen.com/neo/

Shin'en confirmed that the vehicle on the site is rendered in game(read replies);

I wonder how the rest will turn out. Remeber that neo assault game ? It turned out to be not so impressive if you ask anyone LOL.

Nano Assault Neo was impressive for using an engine that was not fully optimized for Wii U, a first generation engine from the Wii adapted to run on Wii U, and it used only one core and a fraction of Wii U's resources. The game is totally uncompressed and it is only 50MB.

The visuals are impressive for the amount of resources it used, and it is 720p/60fps. Don't underestimate the gods of the tech demo scene.

eyeofcore said:

 

Also, explain to me why CryEngine 3.5/4 supports Wii U if it only has current generation performance? It does not support Xbox 360/PlayStation 3;

http://www.crytek.com/news/crytek-announces-the-arrival-of-the-new-cryengine

The unversioned CryEngine is even supported on current gen consoles! http://www.gamespot.com/articles/new-cryengine-revealed/1100-6413371/ So much for trying to differentiate the Wii U from the current gen consoles LOL.

Yet you fail again: every engine can be downgraded/adapted to run on older and weaker hardware; Unreal Engine 2.5 ran on the Wii and 3DS. :)

The Wii U is officially supported by Crytek's CryEngine 4 while the Xbox 360 and PlayStation 3 are not; and when the unversioned engine is brought to Xbox 360 and PlayStation 3, it will be with features scaled down to DirectX 9/OpenGL 2.0 rather than DirectX 11/OpenGL 4.3 ... ;)

So it is not a full CryEngine 3.5/4 :D

eyeofcore said:


The Radeon HD 6650M should fit in the Wii U; you will say it is not possible, yet it is. First of all, AMD produces their GPUs on a relatively cheap process/silicon, while Nintendo is producing their "Latte" GPU on a TSMC CMOS process/silicon that is more expensive and higher quality, so it should allow somewhat higher density, right? Also, the Radeon HD 6000 series was released in 2010, so I doubt AMD did not find a way to increase density and save some space in two years, and the 6000 series is used in the Trinity and Richland APUs, which are 32nm and may use improved manufacturing techniques.

AMD always fabs their GPUs at TSMC, so if anything the quality isn't any different. Wow, you don't follow AMD like you claim in your YouTube videos! -_- And any density increase is small compared to a node transition; at best they can probably only squeeze out 10%.

Hmmm... Care to tell me something I did not know? So you deny that process/lithography quality can be improved, or that higher quality silicon can be used?

Wow... Your forced premise and guesses are constant. I follow AMD and I follow their stock, yet it is not my primary focus compared to finding a job in this recession. Recent leaks point out that Kaveri/Steamroller will intentionally have lower floating point yet higher integer performance, so floating point calculations will be offloaded to the GPU... This will mean worse performance in older games :/

So the Radeon HD 6570M is 104mm^2, and with that roughly 10% squeeze they could get it under 100.71mm^2, down to a die size of about 93.6mm^2. Thanks for the information.
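As a quick sanity check of that arithmetic (a sketch only; the ~10% figure is from your own reply and the die sizes are the ones quoted in this thread):

# all figures are taken from this thread, not from any official source
hd6570m_die_mm2 = 104.0   # Radeon HD 6570M die size
density_gain    = 0.10    # "at best ... 10%" from the previous reply
latte_gpu_mm2   = 100.71  # die area attributed to Latte's GPU in this thread

shrunk = hd6570m_die_mm2 * (1 - density_gain)   # 93.6 mm^2
print(shrunk, shrunk < latte_gpu_mm2)           # True, so it would fit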

The Chipworks guy said that Latte's GPU is heavily customized and that he could not tell which AMD GPU it is based on, so it is supposedly an extremely customized design, by the way. You crushed your own theory, and that guy's theory that it is 160-200 shaders LMAO.

eyeofcore said:


Nintendo most likely customized the chip, and we know that they don't waste any silicon; they are cheap when it comes to silicon and will try to get the most out of it for less, saving a couple of cents where possible, while using a higher quality process/silicon to potentially reduce power consumption and increase the efficiency of their hardware. Also, don't say it is the 4000 series, because the 5000 series is just the 4000 series at 40nm with DirectX 11 support added, and the 6000 series is basically an improved 5000 series with changes.

http://www.techpowerup.com/gpudb/326/radeon-hd-6650m.html

http://www.notebookcheck.net/AMD-Radeon-HD-6650M.43962.0.html

Thanks for proving that Nintendo is cheap when it comes to process nodes. *rolls eyes* BTW, the Wii U's processing components probably take less than 30 watts! http://www.anandtech.com/show/6465/nintendo-wii-u-teardown I assume the CPU takes around 5 watts while the GPU might take up to 25 watts. We PC users all know that a Radeon HD 5550's TDP is about 40 watts max! If we assume half of the shaders are non-functional, then that exactly matches the power consumption of the Wii U, so ninjablade might be right after all! The Wii U is closer to the HD 5000 series than the HD 6000 series. The Wii U IS looking to have less than 200 GFLOPS in total! Deal with it!

Nope... It is 30 watts, at worst sometimes 35 watts, and the power supply is rated at 75 watts with 90% efficiency, so it can deliver 67.5 watts at most; degradation of the power supply unit is negligible if it is not stressed to its limit, which would shorten its life span.
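A tiny sanity check of that headroom claim, using only the figures from this post (the 90% efficiency is an assumption stated above, not a measurement):

psu_rating_w = 75.0   # Wii U power brick rating, from this post
efficiency   = 0.90   # assumed efficiency, as stated above
worst_case_w = 35.0   # worst-case system draw cited above

usable_w   = psu_rating_w * efficiency   # 67.5 W the brick can deliver
headroom_w = usable_w - worst_case_w     # roughly 32.5 W of margin
print(usable_w, headroom_w)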

How can we be certain that Nintendo did not lock away resources, just in case the system/OS is unstable?

You need to deal with this... You are looking at PC GPUs, not mobile/embedded ones at all. Also, does that TDP figure include the GDDR3/GDDR5 memory and the board itself? If so, then remove the GDDR3 (I know the 1GB 65nm parts use about 20 watts), or if it is GDDR5 (2GB at 46nm is 7.5-8 watts, so 512MB is 1.75-2 watts), and the board itself could take a couple more watts. So look at the Radeon HD 6570M: a 30 watt rated TDP. If it carries 1GB of 45nm GDDR3, remove that and power consumption goes down to about 16 watts; lower the clocks from 600 to 550 and it goes down to roughly 13.6; remove the board, which could use 1-2 watts, and it is 11.6-12.6; then add the 2-3 watts the 32MB of eDRAM uses, and we are at around 12.6-15.6 watts.
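Here is that back-of-the-envelope chain written out; every number is an assumption pulled from this thread, so treat it as a rough sketch rather than a real measurement:

# all figures are guesses from this thread, not measured values
hd6570m_tdp_w = 30.0         # rated TDP of the mobile card (memory and board included)
gddr3_1gb_w   = 14.0         # assumed memory draw, leaving ~16W for the core alone
core_550mhz_w = 13.6         # the post's figure after downclocking from 600 to 550 MHz
board_w       = (1.0, 2.0)   # assumed board draw range
edram_w       = (2.0, 3.0)   # assumed draw range for 32MB of eDRAM

low  = core_550mhz_w - board_w[1] + edram_w[0]   # 13.6 W
high = core_550mhz_w - board_w[0] + edram_w[1]   # 15.6 W
print(f"estimated Latte-like GPU draw: {low:.1f}-{high:.1f} W")  # close to the 12.6-15.6 W range above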

I might be wrong, you may say I am FOS, yet I might be right; or maybe neither of us knows what actually goes into a GPU's TDP figure, since GDDR3/GDDR5 consumes a noticeable number of watts and adds to the heat of the card.


eyeofcore said:


If it were just 160 to 200 shaders, then Nintendo would not have put in 32MB of eDRAM at all, because it would be a waste of die space and money; we are talking about Nintendo, not Microsoft or Sony! Nintendo saw the design mistakes of the Xbox 360 and PlayStation 3! You love to underestimate things, right? :P

Nintendo are more incompetent when it comes to designing hardware! That 32MB of eDRAM isn't exactly a waste. What is a waste is that Nintendo cheaped out on the hardware, so don't try to deny this! Just because Nintendo saw those mistakes does not mean they will learn from them! No, I don't love to underestimate; instead I overestimated how good Nintendo's hardware would be! Now you have just further changed my view on how weak the Wii U truly is! You did not make your case better; in fact those points were used against you! I think I have solved the puzzle of the Wii U's hardware mysteries!

Take a look at the GameCube, please... Then look at the Xbox 360 and PlayStation 3... 'nuff said.

It is a waste if its potential cannot be saturated. Also, why would I try to deny that they cheaped out on the hardware? LMAO

So Nintendo did not learn from the Nintendo 64 to stop using cartridges and to make programming far easier? Is the GameCube a console with an architecture full of bottlenecks and cartridges for its games? Nope... The GameCube has a simple architecture and a disc drive; its proprietary disc was too small in capacity, so the Wii used fully-fledged DVD discs.

Yes you do, don't deny it. I've seen people come and go and still underestimate a lot of things. You simply set your expectations too high in the hardware department; look at the Wii. They did not try to build a powerhouse, so why would a successor that also has "Wii" in the name be a powerhouse? Use logic... Now.

If the Wii U were weak, then Super Smash Bros would not be 1080p/60fps and Project CARS would have been cancelled for Wii U, yet it has not been...

You have not solved anything... That is the sad part. :/

eyeofcore said:


Also don't dare to call me a Nintendo fanboy because I am not... I don't own any games or platforms from Nintendo. ;)

SMH. Your YouTube channel says very differently, and so do the users at AnandTech.

What you see and what they say does not make it the truth, yet I won't stop you from fooling yourself, because that is your right!

I deleted all of the videos on my channel; I decided to do a "restart". I am sick of fake console wars and fanboys; that war is futile and is only a war of attrition, so I will make other content unrelated to that c***. You can call me a Nintendo fanboy, yet it will only make you look like a fool; I started gaming on PlayStation and I continue to game on my PC... My Steam library serves me well. :D

The bolded parts are my responses... Enjoy.