CrazyGPU said:
You don't do an exact comparison either.
|
I have done so in the past.
CrazyGPU said:
You don't know if the techniques are going to save 20% of bandwidth, 40%, or only 10%.
|
Yes I do.
CrazyGPU said:
You don't know how AMD will implement it inside the new hardware.
|
There are only so many ways you can skin a cat, especially as AMD is still fumbling around with Graphics Core Next rather than an entirely new architecture.
CrazyGPU said:
So you don't know if an uncompressed 512 GB/s stream of data can be compressed to 480, 384, or 256 GB/s of data.
|
Yes I do. AMD and nVidia have the associated whitepapers to back up their implementations... And various outlets have run compression benchmarks.
CrazyGPU said:
So even if you take those techniques into account, you are inaccurate too. It's like comparing Nvidia teraflops to AMD teraflops. The teraflops can be the same amount, but the Nvidia implementation makes use of those theoretical maximum teraflops much better than AMD does in practice, so you can't compare different architectures and be accurate. But as you don't have anything else for a proper comparison, you have to go with something. So we compare with what we have: teraflops, GB/s, and so on. And the comparison is better if we compare similar architectures of the same brand.
|
False. Your understanding of Teraflops is the issue here.
An AMD Teraflop is identical to an nVidia one... Identical.
A flop represents the theoretical single-precision floating-point performance of a part.
The reason why nVidia's GPUs perform better than an AMD alternative is simple... It's because of all the parts that have nothing to do with FLOPS.
In tasks that leverage AMD's compute strengths, AMD's GPUs will often beat nVidia's; asynchronous compute is a primary example, although nVidia is bridging the gap there.
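The figure itself is just arithmetic and is identical across vendors. A minimal sketch of the standard back-of-the-envelope calculation (assuming the commonly cited PS4 configuration of 18 CUs at 800 MHz as the example input):

```python
def theoretical_fp32_tflops(shader_cores: int, clock_ghz: float, ops_per_clock: int = 2) -> float:
    """Peak single-precision throughput: cores * clock * ops per clock (an FMA counts as 2 FLOPs)."""
    return shader_cores * clock_ghz * ops_per_clock / 1000.0

# PS4 GPU: 18 CUs * 64 shaders = 1152 cores at ~0.8 GHz
print(theoretical_fp32_tflops(1152, 0.8))   # ~1.84 TFLOPS

# The exact same formula applies to an nVidia part; the metric is vendor-agnostic.
```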
CrazyGPU said:
With your numbers, near 0.2 teraflops for the PS3 vs a little more than 1.8 TF for the PS4 is 9 times more. No way the PS5 will have 9 times the teraflops of the PS4.
|
Those are your numbers; I never once stated the Playstation 3 was 0.2 Teraflops. Nor did I say that the Playstation 4 had 9x the Teraflops, and nor did I state the Playstation 5 will have 9x the Teraflops either.
CrazyGPU said:
Also, techniques considered or not, the jump from the standard PS4's 176 GB/s to, let's say, 512 GB/s (equivalent to 800 GB/s uncompressed, just to put a number on it) is far smaller than going from the 22.4 GB/s of the PS3 to the 176 GB/s of the PS4. And there is no way a PS5 will have 8 times more bandwidth to feed the processor.
|
Take note of the resolution a console with 22.4-25.6 GB/s of bandwidth operates at versus the one with 176 GB/s.
The Playstation 5 will implement Delta Colour Compression.
AMD's Tonga, for instance (first-gen delta compression), increased effective bandwidth by up to 40%... which is why the Radeon 285 was able to compete with the Radeon 280 despite the 280 having roughly 36% more raw memory bandwidth.
nVidia has been improving Delta Colour Compression for years...
The jump from Kepler to Maxwell was a 25% increase in compression (varying from 20-44% depending on the data pattern).
And from Maxwell to Pascal it was another 20%.
And nVidia has made further improvements since then.
AMD also implemented the Draw Stream Binning Rasterizer on Vega (although it is not fully functional yet; with Navi it should be).
And the Primitive Discard Accelerator has been a thing since Polaris, which discards polygons that are too small to contribute to the image before they are rendered.
These are the ways that bandwidth and computational capability are conserved.
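To make the "effective bandwidth" idea concrete, here is a rough sketch; the 40% figure is the Tonga-era delta colour compression uplift mentioned above and is a best-case assumption, since real savings vary with frame content:

```python
def effective_bandwidth_gbs(physical_gbs: float, compression_gain: float) -> float:
    """Effective bandwidth after colour compression, e.g. compression_gain=0.40
    for the ~40% uplift attributed to first-gen delta colour compression (Tonga)."""
    return physical_gbs * (1.0 + compression_gain)

# Radeon 285: 176 GB/s raw with DCC -> ~246 GB/s effective,
# in the same ballpark as the Radeon 280's 240 GB/s raw.
print(effective_bandwidth_gbs(176.0, 0.40))

# A hypothetical 512 GB/s next-gen console with a similar uplift -> ~717 GB/s effective.
print(effective_bandwidth_gbs(512.0, 0.40))
```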
CrazyGPU said:
So, the two things that are really important to improve performance and have a balanced graphics architecture, the calculation (teraflops) and the feeding of that calculation (cache, memory bandwidth, theoretical or with techniques), will improve less than they did before, and the improvement will feel less important than before too, even if it were the same.
|
Teraflops are pretty irrelevant; you can have a GPU with fewer Teraflops beat a GPU with more Teraflops.
I am pretty sure we have had this "debate" in the past and I provided irrefutable evidence to substantiate my position... But I am more than happy to go down that path again.
CrazyGPU said:
Software is not going to solve that. PS4 performance was always similar to a Radeon HD 7850-7870 on PC, and no exclusive programming changed the graphics capability of the console. And if it did for you, it never became a GeForce GTX 1060 because of that.
|
I never made a claim to the contrary. The Playstation 4 and Xbox One provide an experience I would fully expect from a Radeon 7850/7770 class graphics processor, maybe a little better, but not substantially so.
That said, playing a high-end game on high-end hardware on PC is getting to the point of being a generational gap.
CrazyGPU said:
With a 10-12 Teraflops PS5 machine, we would have a 5.4-6.5x improvement in theoretical Teraflops
|
And real-world flops? ;)
CrazyGPU said:
and with 800 GB/s of uncompressed bandwidth (if you consider that the PS4 did not compress anything), the improvement will be 4.5 times.
|
I doubt we will be seeing 800 GB/s of uncompressed bandwidth; 512 GB/s is probably a more balanced and cost-effective target.
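For what it's worth, a quick sanity check of the multipliers being quoted, using the PS4's ~1.84 TFLOPS and 176 GB/s as the baseline (the PS5 figures are CrazyGPU's hypotheticals, not confirmed specs):

```python
PS4_TFLOPS = 1.84        # theoretical FP32
PS4_BANDWIDTH_GBS = 176.0

for tflops in (10.0, 12.0):
    print(f"{tflops} TF is {tflops / PS4_TFLOPS:.1f}x the PS4")       # 5.4x, 6.5x

print(f"800 GB/s is {800.0 / PS4_BANDWIDTH_GBS:.1f}x the PS4")        # 4.5x
print(f"512 GB/s is {512.0 / PS4_BANDWIDTH_GBS:.1f}x the PS4 (raw)")  # 2.9x
```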
CrazyGPU said:
So again, you will have 4K at 30 fps, 60 in some games, with PS4 graphics and a little more when devs get used to it, but nothing to write home about.
|
I would expect better than Playstation 4 graphics; efficiency has come a long way since the GCN 1.0 parts... Navi should take it another step further... I mean, I am not going out and saying Navi is going to usher in a new era of graphics, far from it... It is still Graphics Core Next with all its limitations.
But it's going to be a stupidly large step up over the Radeon 7850-derived parts in almost every aspect.
CrazyGPU said:
A great CPU, hard disk, or anything else is not going to change that. It's not going to be the ray-tracing beast with new lighting and geometry many of us would wish for.
|
The CPU is going to be a massive boon... The hard drive is probably going to be a bit faster, but we are on the cusp of next-generation mechanical disks, which the consoles might not take advantage of initially... Otherwise, caching with NAND is a possibility.
And as for ray tracing... Games have been implementing ray-traced techniques since the 7th console generation with various deferred renderers... We will continue down that path next gen; it will be a slow transition to a fully ray-traced world, and next-gen will just be another stepping stone.
CrazyGPU said:
I'm not saying games are not going to look better; they will, of course. What I'm saying is that, in my opinion, it is not going to be revolutionary. It will be an evolutionary step from what we have now.
|
We haven't had a "revolutionary" jump since the start of the 6th gen consoles... It's all been progressive, iterative refinements.
I mean Call of Duty 3 on the Xbox 360 wasn't a massive overhaul over Call of Duty 3 on the Original Xbox.
But the increases in fidelity when you compare the best looking games of each generation is a substantial one.
Halo 4 on the Xbox 360 is a night-and-day difference compared to Halo: Combat Evolved on the original Xbox, and Halo Infinite on 8th-gen hardware (if the Slipspace demo is any indication) is a night-and-day difference over Halo 4... It has a ton more dynamic effects going on.