Otter said: I think this was always going to be the case with a mid-gen upgrade, at least on paper. I don't expect Anaconda to do much more than 2.5x the performance of the XB1X. |
First, AMD needs to actually invent a GPU that is 2.5x faster than the Xbox One X GPU.
Otter said: Anaconda and Lockhart will both provide a new generation of graphical presentation that is not available on X1X: Lockhart will just do so at the expense of native 4k targets, but without any compromise to the CPU and overall performance. |
Why couldn't there be any deviation in CPU performance? This generation certainly has every console with a different CPU performance profile.
I.e. Xbox One: 1.75GHz, Xbox One X: 2.3GHz (with some additional offloading), PlayStation 4: 1.6GHz, PlayStation 4 Pro: 2.13GHz.
Trumpstyle said: Now with 2 leaks on Microsoft's next-gen strategy, and assuming Sony does a $399 console, we now know pretty much the specs of these consoles, as desktop Zen 2 and Navi have been leaked. Based on the leaks and a little speculation, the specs are looking like this: |
We really don't though.
Trumpstyle said: Xbox Two (Lockhart) $300, PS5 $400, Xbox Two+ (Anaconda) $500 |
I am expecting a single CCX on the CPU side for all devices. It offers the best price/performance... And allows for more of the transistor budget to be sunk into the GPU side of the equation.
Navi with 56 CUs being equivalent to a GeForce 2080 is a bit of a claim. Vega 64 has 64 CUs, which is 14% more... Yet it ends up being 40-60% slower than the 2080. I highly doubt AMD has made such significant strides in boosting Graphics Core Next efficiency. Grains of salt shall be had.
Granted, at 7nm they should be able to bolster clock rates, but... Still a big gap to close.
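For rough context, GCN's theoretical FP32 throughput is simply CUs x 64 shaders x 2 ops per clock x clock speed. A quick sketch below; the Navi clock here is purely an assumed figure for illustration, not from any leak:

```python
# Back-of-envelope GCN FP32 throughput: CUs x 64 shaders/CU x 2 FLOPs (FMA) per clock.
def gcn_tflops(cus, clock_ghz):
    return cus * 64 * 2 * clock_ghz / 1000  # GFLOPS -> TFLOPS

print(gcn_tflops(64, 1.55))  # Vega 64 at ~1.55GHz boost -> ~12.7 TFLOPS
print(gcn_tflops(56, 1.80))  # hypothetical 56 CU Navi at 1.8GHz -> ~12.9 TFLOPS
```

Even at parity in raw throughput, GCN's real-world performance per flop would still have to close the gap to Turing.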
Trumpstyle said: Will have Vapor chamber cooling for extra high clocks :) and be quiet as a stone. |
A vapor chamber doesn't guarantee high clocks and silent operation. - You still need a fan.
It does allow for more efficient movement of heat however.
Trumpstyle said: We should get dev kit leaks within 7 months if history repeats itself. As in June 2012 there was already a forum post at Beyond3D that the PS4 would have a SoC containing an 8-core Jaguar CPU and a GPU similar to the AMD Radeon 7850. |
Indeed.
Some dev kits were a deviation though, as some were claimed to be using TeraScale at some point.
DonFerrari said: PS3 and PS4 had better HW than X360 and X1. |
Indeed. Although to be fair, the Xbox 360's GPU was the better chip at the end of the day.
DonFerrari said: And how would MS know for sure that their hardware isn't outpowered before Sony either announces or releases the HW? The only way they can be sure is if they always release their HW over 12 months later than Sony (because not all HW changes can be made quickly and, worse, there's the time to dev SW for it)... or are you suggesting MS will have access to confidential development documents from Sony? |
Precisely. They wouldn't. AMD and other hardware companies would be under a non-disclosure agreement and would not be allowed to disclose what the other company is doing.
DonFerrari said: PS4 is the first console to have ever won while being the most powerful at the start of a gen; all others in previous gens lost. So being the most powerful doesn't guarantee good sales (we have plenty of overpowered duds in the past). Being less expensive but giving a not-up-to-standard experience also doesn't solve it, as shown by WiiU and others as well. |
Power guarantees neither success nor failure; that is what history has ultimately told us.
But what does determine that is the right performance/price ratio.
The PlayStation 3 was higher performing, but also overpriced at launch, paving the way for the Xbox 360 to take a massive chunk of market share.
The PlayStation 4 was higher performing, but lower priced... And that resonated with consumers across the spectrum.
shikamaru317 said: Actually, Lockhart shouldn't hold back Anaconda at all. The rumored specs for Lockhart are high enough that the only downgrade should be resolution, Lockhart should play next gen games at 1080p that Anaconda plays at 4K with no other graphical downgrades. It therefore shouldn't hold back graphics for exclusives or multiplats next-gen. |
Depends on how much of a cutback Lockhart has.
If it's got fewer ROPs, geometry units, CPU cores, or texture mapping units, less bandwidth or compute, or lower CPU clocks and so on... That has a flow-on effect.
Developers do need to build their games with the lowest common denominator in mind.
DonFerrari said: It's not out of the blue that several PC gamers blame consoles for holding them back. Consoles sell much more SW, so they are taken more into account. |
It's true though.
It's no coincidence that around 2014, multiplat games took a relatively large leap forward in terms of fidelity, when consoles caught up to mid-range PCs.
shikamaru317 said: So, Anaconda is actually more likely to hold back graphics than Lockhart is, simply because native 4k/60 fps is very demanding. |
Eh. Except you contradict yourself from the very outset of your post by stating that flops aren't an indicator for gauging complete system performance.
shikamaru317 said: Also, based on current PC GPU benchmarks, 2x more GPU power is actually just about enough to hit 4K with the same graphics settings. As a for instance, the 6 tflop AMD RX 580 can achieve a locked 60 fps at 1080p, ultra settings on Battlefield V, while the 10 tflop RX Vega 64 can hit 50 fps on Ultra settings at 4K (if AMD already had a 12 tflop GPU it very likely would be able to hit 60 fps at 4K, ultra settings). |
The RX 580 is mid-range. It's not a GPU that is well suited for 4K with settings dialed up... And it's certainly not a GPU that is ideal for 1080p. - Its ideal resolution is actually somewhere in between: 1440p.
At 4k, you need to start cutting back on visual effects... And at that point, you are better off with 1440P with settings pushed up.
shikamaru317 said: I know it seems like you would need 4x more GPU power to output 4x more pixels, but you actually don't. I could show you multiple PC benchmarks that prove that you only need about 2x more GPU power to hit native 4k, though obviously it varies depending on the game, some engines struggle with 4K more than others, so some games need more than 2X more power to hit 4K, but 90% of games fall somewhere between 2-2.5x more power needed to hit 4K. |
It is all completely dependent on the GPU architecture, how well it scales to higher resolutions, and what the games/engines are trying to accomplish.
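To put rough numbers on the pixel side of that claim: 4K pushes exactly 4x the pixels of 1080p, but only part of a frame's cost actually scales with pixel count. A toy sketch, where the pixel-bound fraction of frame time is an assumed figure for illustration:

```python
# 4K is exactly 4x the pixels of 1080p.
pixels_1080p = 1920 * 1080   # 2,073,600
pixels_4k    = 3840 * 2160   # 8,294,400
print(pixels_4k / pixels_1080p)  # 4.0

# But geometry, simulation and draw-call overhead don't scale with resolution.
# Assume 60% of frame time is pixel-bound (illustrative; varies per engine):
pixel_bound = 0.6
power_needed = (1 - pixel_bound) + pixel_bound * 4
print(power_needed)  # ~2.8x - same ballpark as the quoted 2-2.5x claim
```

Shift that pixel-bound fraction per game/engine and you get exactly the spread the benchmarks show.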
Back in the 90's, one of the biggest limitations to hitting higher resolutions was actually RAM capacity; if you had a GPU with a smaller amount of VRAM, you might have been limited to 640x480 rather than, say... 1024x768. - Some manufacturers got around that issue by employing texture compression.
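Quick back-of-envelope on why; this assumes a typical late-90s setup of a double-buffered 16-bit colour target plus a 16-bit z-buffer:

```python
# Framebuffer memory: width x height x bytes per pixel x buffer count.
def framebuffer_mb(width, height, bytes_per_pixel=2, buffers=3):  # front + back + z
    return width * height * bytes_per_pixel * buffers / (1024 ** 2)

print(framebuffer_mb(640, 480))   # ~1.8MB - fits comfortably on a 4MB card
print(framebuffer_mb(1024, 768))  # ~4.5MB - doesn't even fit on a 4MB card
```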
Move towards the dawn of TnL (fixed-function stuff) and fillrate became a massive limitation... Various GPU designs came out with strange multi-texturing set-ups... And if a game didn't leverage them appropriately, fillrate could be cut in half or down to a quarter of its full speed, making a massive impact on total performance and thus the resolution you could operate at. Some GPUs got around that issue by taking a tile-based approach to rendering.
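The fillrate wall is easy to see with rough numbers; the overdraw factor here is an assumption for illustration:

```python
# Fillrate demand: pixels x target fps x average overdraw (pixels shaded per screen pixel).
def mpixels_per_second(width, height, fps, overdraw=3):
    return width * height * fps * overdraw / 1e6

print(mpixels_per_second(640, 480, 60))   # ~55 Mpixels/s
print(mpixels_per_second(1024, 768, 60))  # ~142 Mpixels/s - beyond many single-pipeline chips of the era
```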
Today's workloads are very much compute-heavy, but if a game leans hard on geometry or texturing, it can make a fairly large dent in AMD GPUs' ability to hit higher resolutions, as performance will suffer.
shikamaru317 said: Now obviously if you are running "4K" textures as well, then you definitely need more GPU power and RAM that is good enough to stream those textures quickly. |
4K textures were used even during the 7th gen. Texture resolution and screen resolution aren't 1:1, pixel to pixel.
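Case in point: a 4096x4096 texture costs the same memory whether it's sampled at 720p or at 4K. A rough sketch, assuming DXT1 block compression at 0.5 bytes per texel:

```python
# Texture memory is set by the texture's own dimensions, not the screen's.
def texture_mb(size, bytes_per_texel=0.5):  # 0.5 = DXT1 block compression
    return size * size * bytes_per_texel / (1024 ** 2)

print(texture_mb(4096))  # 8.0MB compressed - feasible even on 7th-gen memory budgets
```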