Marcusius said:
PS2 = Using their own custom made CPU.
Xbox = Using Celeron CPU with a built-in Nvidia GPU Gforce 2 (Power house at the time).
GameCube = 1st using an IBM cpu with ATi video card (Power house at the time).
Winner = PS2.
The Xbox was using a Pentium/Celeron hybrid: it had the same amount of L2 cache as the Celeron, but with the associativity of the Pentium III, and it used the Pentium III's faster 133 MHz front-side bus, where the Celeron was typically limited to 66 MHz or 100 MHz.
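To give a rough sense of what those differences mean, here is a back-of-the-envelope sketch in Python. The 64-bit bus width, 32-byte cache lines, and the 4-way vs 8-way associativity split are my assumptions based on commonly cited P6-era specs, not figures from the post:

```python
# Rough illustration: front-side bus peak bandwidth scales with clock,
# since the P6 bus moves 64 bits (8 bytes) per transfer.
BUS_WIDTH_BYTES = 8

for mhz in (66, 100, 133):
    gb_per_s = mhz * 1_000_000 * BUS_WIDTH_BYTES / 1e9
    print(f"{mhz} MHz FSB -> ~{gb_per_s:.2f} GB/s peak")

# Cache geometry: same 128 KB capacity, but different associativity
# changes how the cache is divided into sets (32-byte lines assumed).
def cache_sets(size_kb, ways, line_bytes=32):
    return size_kb * 1024 // (ways * line_bytes)

print("Celeron-style 4-way, 128 KB:", cache_sets(128, 4), "sets")
print("Xbox/Pentium III-style 8-way, 128 KB:", cache_sets(128, 8), "sets")
```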
The GPU is also not a GeForce 2. The GeForce 2 was mostly a fixed-function design, whereas the Xbox had programmable pixel shaders. It was in fact a modified GeForce 3.
The GameCube's GPU was made by a company called ArtX, which happened to be purchased by ATI just before the GameCube's release; as part of that deal, an ATI sticker went on every box.
CrazyGPU said: I don't agree. Let's say an Nvidia architecture can make a 6 Tflop graphics card as capable as an AMD 8 Tflop card. That can make a 50% advantage for Nvidia, but I'm saying that at the same time Sony launches a console with 4200 Gigaflops and Microsoft is preparing a 6000 Gigaflop machine, Nintendo is going for a 400 one while docked. On the cost issue, object complexity, texture sharpness and lighting are far more time demanding at 1080p and 4K than at 480p or 720p. As more time is more paid hours, it's far more costly. Also, Nintendo confirmed that.
Increasing a game's resolution changes nothing about a game's cost. Otherwise the PC would be the most expensive platform on Earth to develop for, which flies in the face of the fact that it has the most indie developers. (The PC supports resolutions of 11,520x2160 if you so wanted, which makes 1920x1080 seem insignificant.)
What you are talking about is higher-quality assets.
So you aren't actually disagreeing with me at all.
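To put the resolution comparison in perspective, here is a quick pixel-count sketch; the resolutions are the ones mentioned above, the arithmetic is mine:

```python
# Pixel counts for the resolutions mentioned above, relative to 1080p.
resolutions = {
    "720p": (1280, 720),
    "1080p": (1920, 1080),
    "4K UHD": (3840, 2160),
    "11520x2160": (11520, 2160),  # the multi-monitor figure from the post
}

base = resolutions["1080p"][0] * resolutions["1080p"][1]
for name, (w, h) in resolutions.items():
    pixels = w * h
    print(f"{name:>10}: {pixels:>11,} pixels ({pixels / base:.2f}x 1080p)")
```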
CrazyGPU said: On the Tegra issue, the fact that it's Tegra doesn't tell you the whole picture. They could have made a Pascal Tegra at 16 nm instead of 20 nm, with more transistors and less power consumption. They can also put in more than one graphics core and have more shader processors; it's a custom Tegra. It's just that they went cheap. It's cheaper to put old tech on the SoC. Look at Sony's CPU: they are using Jaguar cores, not very different from Tegra in performance, but eight of them. And what did they do with the Pro GPU? They modified the old one a bit, it's Polaris now, and they doubled the graphics cores. Nvidia could do that with Tegra; after all, Tegra uses PC graphics technology.
16nm is 20nm. 16nm is essentially a rebranding of the 20nm process, but with the inclusion of FinFET transistors.
It's also not a custom Tegra; it is semi-custom. Even then, I doubt the customization was anything more than an alteration to some minor logic.
CrazyGPU said: Now, let's forget about CPU teraflops for now. How is the CPU fed by the memory bandwidth?
PS3 = 22 GB/s
Switch = 25.6 GB/s
PS4 = 176 GB/s
Does that tell someone what kind of textures and lighting we can expect from the Switch? Again, it's a great handheld, but don't sell it like a desktop console.
The Switch has closer to 20 GB/s of bandwidth due to the DRAM clock decrease.
However, it also has more effective bandwidth than the raw number implies, due to hardware-based compression that the PlayStation 3 and PlayStation 4 do not have.
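For reference, a small sketch of how those peak bandwidth figures fall out of bus width and transfer rate; the Switch and PS4 transfer rates used here are commonly reported figures, not official numbers:

```python
# Peak DRAM bandwidth = bus width (in bytes) x effective transfer rate.
# The transfer rates below are commonly reported figures, used here
# only for illustration.
def peak_gbs(bus_bits, mega_transfers_per_s):
    return bus_bits / 8 * mega_transfers_per_s / 1000

print(f"Switch, 64-bit LPDDR4 @ 3200 MT/s: {peak_gbs(64, 3200):.1f} GB/s")   # 25.6
print(f"Switch, 64-bit LPDDR4 @ 2662 MT/s: {peak_gbs(64, 2662):.1f} GB/s")   # ~21.3
print(f"PS4, 256-bit GDDR5 @ 5500 MT/s:    {peak_gbs(256, 5500):.1f} GB/s")  # 176.0
```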
Plus, Nintendo is marketing it as both a handheld and a console. So it's both.
--::{PC Gaming Master Race}::--