DonFerrari on 03 June 2016
Pemalite said:
DonFerrari said:
That is why I asked whether it's possible to offload enough work from the CPU to the GPU, not whether one dev or another will do it, because I'm pretty certain that if it's possible, the likes of Naughty Dog would do it (and I said that perhaps it would be capped at 45). Devs will always have to choose what to improve and maximize and what to concede; I'm aware of that.
|
It is possible to a point. The reason we don't do all our processing on a GPU is that GPUs are really bad at single large, complex tasks; they are at their best working on thousands of small tasks.
CPUs aren't as good as a GPU at thousands of small tasks, which is why the two are separate, but they do excel at big, complex tasks.
With that said, CPUs have typically been handling some of those "small tasks", like physics calculations, which can be moved over to the GPU; the PlayStation 4 was already doing this, so whether that trend continues is another matter entirely.
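To illustrate why physics is a good candidate for offloading: a particle update is thousands of identical, independent "small tasks", exactly the shape of work a GPU's many lanes are built for. A toy CPU-side sketch (all figures here are made up for illustration):

```python
dt = 1.0 / 60.0  # one frame at 60 fps

# Toy physics state: thousands of independent particles as (position, velocity).
particles = [(float(i), 1.0) for i in range(10_000)]

def integrate(p):
    # Each particle's update depends only on its own state, so the
    # 10,000 updates are independent "small tasks". That independence
    # is what lets a GPU run them all at once.
    pos, vel = p
    return (pos + vel * dt, vel)

# On a CPU this map runs one task at a time; on a GPU it runs in parallel.
updated = list(map(integrate, particles))
print(updated[0])  # (0.016666666666666666, 1.0)
```

The key property is that `integrate` touches no shared state; a big, branchy, interdependent task (say, game AI or script logic) has no equivalent decomposition, which is why it stays on the CPU.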
JRPGfan said:
In terms of % performance increase... wasn't the jump from PS3 -> PS4 like x10 in most areas?
This will just be a x3 or so increase (if it ends up being 1.84 -> 5.5 teraflops)
Pemalite:
We were talking about APUs (iGPUs) and memory bandwidth.
You replied to me about DDR4 potentially being faster than GDDR5.
I assumed you were talking about system memory on motherboards for CPUs (because, you know, APUs and iGPUs).
I know graphics cards can use wider buses.
I'm saying that's not going to happen for normal consumer CPUs until we start using Hybrid Memory Cubes.
When that does happen, a large portion of the discrete GPU market will go away.
|
As systems are today, you would be right. But there is still potential for a 512-bit bus for system RAM to feed an APU; no one has done it because of cost, since the number of traces required for a 512-bit bus means more PCB layers and thus more complexity, which is one of the issues HBM solved by using an interposer.
With that said, AMD is working on a large, power-hungry APU for the HPC market, which is likely to use HBM on a 1024-bit or wider bus.
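To put numbers on why bus width matters: peak memory bandwidth is roughly bus width (in bytes) times the effective transfer rate. A quick sketch, using the PS4's published 256-bit GDDR5 configuration and a hypothetical 512-bit variant:

```python
def bandwidth_gbps(bus_width_bits, effective_mt_per_s):
    # Peak bandwidth in GB/s = (bus width in bytes) * (effective MT/s) * 1e6 / 1e9
    return bus_width_bits / 8 * effective_mt_per_s * 1e6 / 1e9

# PS4-class GDDR5: 256-bit bus at 5500 MT/s effective.
print(bandwidth_gbps(256, 5500))  # 176.0 GB/s

# A hypothetical 512-bit bus at the same transfer rate doubles it,
# at the cost of roughly twice the traces on the PCB.
print(bandwidth_gbps(512, 5500))  # 352.0 GB/s
```

This is why HBM's interposer is attractive: it gets to 1024-bit-and-wider buses without routing all those traces through the motherboard.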
DonFerrari said:
I see perma... other aspects probably won't go up 4x as you said; the cost would be too high. Any chance of the extra power offloading CPU work, and the 1440p going up to 60fps on the 1080p30fps titles (or 45 capped at 30)? Or is it much more likely that we get other graphical uses instead of frames?
|
Very possible, but that will depend on the dev and what tasks they offload to the GPU; not everything can be offloaded, due to architectural and fundamental chip-design differences (serial vs. parallel processors).
I would think most developers would use the extra horsepower to close the graphics gap between console and PC, though, as it shouldn't really cost them any extra development time since the work was already done for PC.
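Whether offloading can turn a 30fps game into 60fps comes down to how much of the CPU's frame time is actually offloadable, an Amdahl's-law-style limit. A sketch with made-up numbers (a hypothetical 33.3 ms CPU-bound frame):

```python
def fps_after_offload(cpu_ms, offloadable_fraction):
    # Frame rate is capped by the slowest stage. If the CPU takes cpu_ms
    # per frame and only `offloadable_fraction` of that work can move to
    # the GPU, the CPU-side ceiling on frame rate becomes:
    remaining_ms = cpu_ms * (1.0 - offloadable_fraction)
    return 1000.0 / remaining_ms

# Made-up numbers: a 33.3 ms CPU frame (a 30 fps ceiling).
print(round(fps_after_offload(33.3, 0.0)))  # 30 -- nothing offloaded
print(round(fps_after_offload(33.3, 0.5)))  # 60 -- only if half the CPU work moves over
```

So hitting 60fps purely by offloading would require moving a full half of the CPU's per-frame work to the GPU, which is why devs more often spend the headroom on graphics instead.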
|
On the part about what the CPU and GPU are each better at, I'm aware. The question is whether we know how much we could still offload and aren't, right? Too bad Tachikoma left us; she could shed some light on it.
For my part, I would rather have higher resolution to make better use of my 4K set, or, if the graphical budget for that isn't worth it, more bells and whistles, since for some time we will still be receiving most of our content in 1080p (sad)... even Netflix has a lackluster 4K catalogue, in Brazil you almost can't find 4K Blu-ray discs on sale, and I still haven't been able to get downloaded 4K content running on the TV through USB (because my notebook can't output 4K to an external screen).