zeldaring said:
You know why he won't agree, but I guess we are so bored that we kept responding. "PS3 Pro" is exactly where the Switch sits, which is not a bad thing at all. I mean, just look at all the ports from Wii U to Switch and PS4 to PS4 Pro. I think "PS3 Pro" is actually where it lines up perfectly. |
There are many hardware differences between a PS3 and a PlayStation 4 beyond raw performance, aimed at improving efficiency and visuals, many of which the Switch matches or sometimes even exceeds:
* PolyMorph Engines (tessellation).
* Shader Model 6.7 (PS3 was SM3, PS4 SM6.5).
* Vertex Shader 5.0.
* Geometry Shaders.
* Support for higher-resolution textures (16K) - not to be confused with display resolution.
* Compute Shaders.
And more. It's not just about raw throughput, but about hardware feature sets and efficiency.
So calling the Switch a "PlayStation 3 Pro" is highly disingenuous; it's nothing like one at either a high or a low level.
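To make the feature-set point concrete: compute shaders alone are something an SM3-era GPU like the PS3's RSX simply cannot run. Here's a minimal sketch of general-purpose GPU compute, written in CUDA purely as the nearest stand-in for a compute shader (illustrative only, not Switch/NVN code):

```cuda
// Minimal GPU compute example - the kind of arbitrary read/write,
// non-raster work an SM3-era GPU (PS3's RSX) cannot express, but
// Maxwell-class hardware can.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];  // no raster pipeline involved
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %.1f\n", y[0]);  // 3*1 + 2 = 5.0
    cudaFree(x); cudaFree(y);
    return 0;
}
```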
Oneeee-Chan!!! said: The GPU of the PS3 is equivalent to a 7600GT. |
The PS3 had a GeForce 7-class GPU (RSX) with a core layout of 24:8:24:8 (pixel shaders : vertex shaders : texture mapping units : render output units).
Core: 550MHz. Memory bandwidth: 20.8GB/s.
Its GPU core is a like-for-like match for the GeForce 7800 GTX 512MB... but with less than half the bandwidth and half the ROPs.
7800 GTX: 24:8:24:16 (pixel shaders : vertex shaders : texture mapping units : render output units).
Core: 550MHz. Memory bandwidth: 54.4GB/s.
The 7600GT has half the functional units, with a core layout of 12:5:12:8 @ 560MHz and 22.6GB/s of memory bandwidth.
In non-bandwidth-limited (i.e. shader-heavy) scenarios, the PS3 should be able to get fairly close to the 7800 GTX, but once you bog it down with alpha effects, it will come up short.
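A back-of-the-envelope sketch of why: pixel fill rate is ROPs x core clock, and every blended 32-bit pixel needs roughly a colour read plus a write (8 bytes; ignoring Z traffic, compression, and the RSX's ability to reach the XDR pool is my simplifying assumption):

```cuda
// Why alpha blending starves the RSX - paper math from the specs above.
// Plain host-side C++ (compiles as CUDA too).
#include <cstdio>

struct Gpu {
    const char* name;
    int rops;          // render output units
    double core_mhz;   // core clock
    double mem_gbps;   // memory bandwidth, GB/s
};

int main() {
    // Assumption: each blended pixel costs a colour read + write (8 bytes).
    const double bytes_per_blend = 8.0;

    Gpu gpus[] = {
        {"PS3 RSX",         8, 550.0, 20.8},
        {"7800 GTX 512MB", 16, 550.0, 54.4},
    };

    for (const Gpu& g : gpus) {
        double fill_gpix   = g.rops * g.core_mhz / 1000.0;  // Gpixels/s
        double demand_gbps = fill_gpix * bytes_per_blend;   // GB/s needed
        printf("%-15s fill: %.1f Gpix/s, blend demand: %.1f GB/s, has: %.1f GB/s\n",
               g.name, fill_gpix, demand_gbps, g.mem_gbps);
    }
    // RSX: 4.4 Gpix/s wants ~35.2 GB/s but has 20.8 -> badly bandwidth bound.
    // GTX: 8.8 Gpix/s wants ~70.4 GB/s against 54.4 -> far more headroom.
    return 0;
}
```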
Oneeee-Chan!!! said: Really? Besides, aren't the GPUs of the PS3 and Xbox 360 almost equivalent? If the PS3 GPU is equivalent to a 7900 GT, then which GPU is equivalent to the Xbox 360 GPU? I was thinking the X800. |
The Xbox 360's GPU was more powerful; it was the CPU where the PS3 had a commanding lead.
The Xbox 360's GPU (Xenos) was closely related to the Radeon HD 2900, but with a lower core clock and less memory bandwidth.
ATI essentially took the unified-shader design that later shipped as the Radeon HD 2900 and built it into a custom chip for Microsoft, ahead of its PC debut.
sc94597 said: 1.7GHz boost is only possible in the top-powered model (80W). The lower-TDP models (the one in the video was a 45W model with a max boost clock of 1.35GHz) have much lower base and boost clocks. https://www.notebookcheck.net/NVIDIA-GeForce-RTX-3050-Laptop-GPU-Benchmarks-and-Specs.513790.0.html |
I was only talking about the 80W model, the highest-end configuration.
sc94597 said: Also the RTX 3050 mobile is VRAM limited with only 4GB. In a game like Final Fantasy 7 Intergrade (among many other more recent releases) 4GB is a significant bottleneck at 1080p. |
The 3060 with 6GB isn't much better.
It's NVIDIA's M.O.
sc94597 said: A Lovelace Tegra with a TDP of about 15-20W and 8GB of allocated video memory available could be competitive with an RTX 3050 mobile @35W-45W. Especially in a closed platform where game optimization can be targeted. I'd expect it to be even more true as games continue to become VRAM hogs. I don't think it necessarily will happen, but it is the upper limit of possibilities. Edit: It wouldn't be surprising at all if the performance difference between the Switch 2 and an RTX 3050 35W was less than the performance difference between the RTX 3050 35W and the RTX 3050 80W. |
You might be on the money, but without confirmation it's just a hypothesis.
zeldaring said:
It's not really a worthless measure; it's just that the Series S has a vastly better CPU and everything else. |
No. It's literally a worthless measure.
RDNA3 on a flop-for-flop basis will perform worse than an RDNA2 part... because AMD introduced a dual-issue pipeline.
That means it can "theoretically" have twice the flops, but it will never have twice the performance, as it can only issue a second instruction when the compiler can extract a second independent instruction from the wavefront.
And when that can't happen, you only get "half the flops".
This is the same wall AMD hit with its prior VLIW/TeraScale designs, and why it opted for Graphics Core Next in the first place; the design then becomes very driver/compiler-heavy.
This is the dumbed-down version, of course (see the toy sketch below).
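A toy sketch of the idea, using CUDA syntax purely as a stand-in (this is AMD hardware being discussed, and these kernels and names are mine, not actual RDNA3 ISA):

```cuda
// Toy illustration of dual-issue: a second ALU op can only be co-issued
// when a second INDEPENDENT instruction exists in the instruction stream.

__global__ void dual_issue_friendly(float* out, const float* a,
                                    const float* b, const float* c) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    // x and y don't depend on each other, so a dual-issue machine could,
    // in principle, execute both FMAs in the same cycle: "twice the flops".
    float x = a[i] * b[i] + c[i];
    float y = a[i] * c[i] + b[i];
    out[i] = x + y;
}

__global__ void dual_issue_hostile(float* out, const float* a,
                                   const float* b, const float* c) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    // Each FMA consumes the previous result: a serial dependency chain.
    // There is no second independent instruction to pair up, so you get
    // "half the flops" no matter what the spec sheet claims.
    float x = a[i] * b[i] + c[i];
    x = x * b[i] + c[i];
    x = x * b[i] + c[i];
    out[i] = x;
}
```

Whether the friendly case actually happens is up to the compiler finding those independent pairs, which is exactly why it becomes so driver/compiler-heavy.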
The Series S actually has a better GPU than the PlayStation 4 Pro.
The teraflop numbers you see floating around are all theoretical, not real-world benchmarked numbers.
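For illustration, this is the paper math those headline numbers come from (ALU counts and clocks are the commonly published spec-sheet figures, not measurements):

```cuda
// How "teraflop" numbers are derived - pure paper math, host-side C++.
#include <cstdio>

int main() {
    // theoretical FP32 TFLOPs = shader ALUs * 2 ops/clock (FMA) * clock (GHz) / 1000
    struct { const char* name; int alus; double ghz; } g[] = {
        {"PS4 Pro (GCN/Polaris)", 2304, 0.911},  // 36 CUs
        {"Series S (RDNA2)",      1280, 1.565},  // 20 CUs
    };
    for (auto& x : g)
        printf("%-22s %.2f TFLOPs\n", x.name, x.alus * 2 * x.ghz / 1000.0);
    // PS4 Pro: ~4.20 TFLOPs on paper, Series S: ~4.01 TFLOPs.
    // The "weaker" number wins in practice: architecture beats paper flops.
    return 0;
}
```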
--::{PC Gaming Master Race}::--