Turkish said:
Consoles do get high-end PC hardware; the PS3 was high end, as was the 360, as was the PS4. The 7800 class isn't mid-range, it was high end. The $300 range is high end. It's only the Xbone that was underpowered, and only to account for the cost of the Kinect.
|
The Playstation 3 did have a high-end GPU, with performance in line with the rest of nVidia's Geforce 7000 product stack.
The only hindrance the PS3's GPU had was that its ROPs were cut in half.
Otherwise, the only GPUs faster than it were nVidia's dual-GPU solutions such as the GX2.
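To put that ROP halving in perspective, pixel fillrate scales directly with ROP count and clock. A back-of-the-envelope sketch, assuming the commonly quoted clocks (500MHz for RSX, 430MHz for the 7800 GTX):

```python
# Pixel fillrate = ROP count x core clock. RSX kept the G70 design
# but shipped with half the ROPs of the desktop 7800 GTX.
def fillrate_gpixels(rops, clock_mhz):
    return rops * clock_mhz / 1000.0  # Gpixels/s

rsx = fillrate_gpixels(8, 500)        # PS3's RSX: ~4.0 Gpixels/s
gtx_7800 = fillrate_gpixels(16, 430)  # 7800 GTX: ~6.9 Gpixels/s
print(rsx, gtx_7800)
```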
But one thing to keep in mind is that just before the Playstation 3 launched, nVidia released the G80. The Geforce 8800 series was a monster back in the day; it relegated the Geforce 7 series, and with it the PS3, to the mid-range in comparative performance right as the PS3 launched.
http://www.anandtech.com/show/2116/24
The Xbox 360's GPU was a semi-custom hybrid design that bridged the Radeon X1000 and HD 2000 feature sets. It's not really directly comparable to anything.
The Playstation 4, however, had the Radeon 7950, 7950 Boost, 7970, 7970 GHz Edition, and 7990 all faster than it. Those were the high-end parts.
The Radeon 7870 and 7870 XT were also faster than the PS4, and an overclocked 7850 could beat it as well.
The upper mid-range/mainstream parts were the Radeon 7850, 7870, and 7870 XT.
The mid-range was the 7750, 7770, and 7790.
The 7730 and the repurposed VLIW4/VLIW5 TeraScale parts were low-end.
Now, the 7850 GPUs were based on Pitcairn. That same chip was repurposed (identical from a hardware standpoint) in the 200 series as the Radeon R7 265.
Do you also think the R7 265 was high-end, despite the fact that the 270, 270X, 280, 280X, 285, 290, 290X, and 295X2 were all faster than it? It's the same GPU as the Radeon 7850, remember.
Turkish said:
The way the industry moves forward is that tech from the high end trickles down; it doesn't go upward. It's one of the basics, really.
|
Not always. Sometimes it trickles up, e.g. AMD's small-die strategy.
http://www.anandtech.com/show/2556/2
Turkish said:
Yes you did, you said industry is moving towards GDDR6 and abandoning HBM2 for the consumer market, with no proof whatsoever, you only did that because Nvidia is rumored to go with GDDR6 with their products, why else would you say that. What we know of Vega and the rumors all point towards AMD really wanting HBM2 to be their future.
|
I never said the industry was abandoning HBM2 for the consumer market. Don't put words in my mouth.
I said that HBM2 will be used for high-end and niche products. That includes consumer segments as well, you know.
The industry will transition from GDDR5 and GDDR5X towards GDDR6 over time.
I still never said the industry revolved around nVidia.
Vega is AMD's high-end Fury successor. We know it will use HBM2.
And if you would like me to start providing evidence, then I must ask you to start doing the same; otherwise you have a double standard.
Even SK Hynix has scaled back its HBM2 ambitions due to how complex and costly it is, and is still going full steam ahead with GDDR6.
A GPU with a 384-bit bus (likely nVidia's) will be using it with a whopping 672 GB/s of bandwidth. But don't take my word for it:
http://www.anandtech.com/show/11398/sk-hynix-advances-graphics-dram-gddr6-added-to-catalogue-gddr5-gets-faster
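You can sanity-check that 672 GB/s figure yourself; it falls straight out of the bus width and the per-pin data rate. A quick sketch, assuming the 14 Gb/s per-pin GDDR6 speed the figure implies:

```python
# Peak memory bandwidth = bus width (bits) x per-pin data rate,
# converted from bits to bytes.
bus_width_bits = 384
pin_rate_gbps = 14                                  # GDDR6, Gb/s per pin
bandwidth_gbs = bus_width_bits * pin_rate_gbps / 8  # bits -> bytes
print(bandwidth_gbs)  # 672.0 GB/s
```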
Turkish said:
Gotcha, the contradiction and flawed argument I was waiting for. GV100 is supposed to become cheaper and trickle down to consumer level, but HBM2 is supposed to not.
I think you really should do some research on what you claim and read twice before you post, because it's all over the place.
|
It really isn't a contradiction. You just aren't getting it. Not surprising.
GV100 itself isn't going to become cheaper and trickle down to be released at the mid-range consumer level. It's an 800mm2+ chip on a 12nm process; nVidia would go bankrupt.
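To illustrate why a die that size is ruinous for consumer pricing, here's the standard gross-dies-per-wafer approximation. A sketch only: the 815mm2 figure is GV100's published die size, the 200mm2 comparison die is hypothetical, and real yields would cut both numbers further:

```python
import math

def gross_dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Usable wafer area over die area, minus a standard edge-loss term."""
    r = wafer_diameter_mm / 2
    return (math.pi * r ** 2 / die_area_mm2
            - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

print(round(gross_dies_per_wafer(815)))  # ~63 candidate GV100 dies
print(round(gross_dies_per_wafer(200)))  # ~306 for a 200mm2 die
```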
The 12nm process is actually based on TSMC's 16nm FinFET Compact.
http://www.anandtech.com/show/11337/samsung-and-tsmc-roadmaps-12-nm-8-nm-and-6-nm-added/4
https://www.kitguru.net/components/anton-shilov/globalfoundries-we-started-to-tape-out-products-using-second-gen-14nm-process-technology/
https://www.extremetech.com/computing/245880-rumor-nvidias-volta-built-tsmcs-new-12nm-process
By the time 7nm rolls around, GV100 and Volta will be old news, outdated and obsolete. Instead, that level of performance will become available in the mid-range through a completely new and different GPU and architecture.
HBM2 is the same. Until its manufacturing is drastically altered by new process technologies, its extremely high price isn't going to change. It wasn't built with a low price in mind; the TSVs and interposer erode that possibility. It was built because GDDR5 had stagnated and faster memory was needed in professional and high-end markets.
But that need goes away in large part thanks to GDDR6.
And Moore's Law actually doesn't have anything to do with a chip's performance or bandwidth. It's all about transistor densities and manufacturing costs.
https://en.wikipedia.org/wiki/Moore's_law
And even then it's not actually a legitimate "law", just an observation about manufacturing trends.
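As commonly stated, it's just a doubling model for transistor counts. A minimal sketch (the two-year doubling period and the starting count are illustrative assumptions):

```python
# Moore's "law" as a naive doubling model: it projects transistor
# count, not performance and not memory bandwidth.
def projected_transistors(start_count, years, doubling_period=2.0):
    return start_count * 2 ** (years / doubling_period)

# e.g. a hypothetical 7-billion-transistor GPU projected six years out:
print(f"{projected_transistors(7e9, 6):.1e}")  # ~5.6e+10
```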
Turkish said:
It is tho, a more-than-4x compute performance increase over the Xbone should show a notable difference with Xbone games. There's going to be bigger worlds, more detailed environments, better character models, improved IQ, lighting, etc.
|
Compute performance isn't everything.
I expect imagery to look more refined and to take advantage of the superior fidelity typically offered on PC. Something I have gotten used to... and that makes me want to claw my eyes out once I start using a console.
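For what it's worth, here's where that "4x compute" figure comes from. Peak FP32 throughput is just shader count x 2 ops per clock (a fused multiply-add) x clock speed; a rough sketch using the publicly quoted specs (treat them as approximate):

```python
def peak_fp32_tflops(shaders, clock_ghz):
    # shaders x 2 FLOPs per clock (fused multiply-add) x clock
    return shaders * 2 * clock_ghz / 1000.0

xbox_one = peak_fp32_tflops(768, 0.853)  # ~1.31 TFLOPS
scorpio = peak_fp32_tflops(2560, 1.172)  # ~6.0 TFLOPS
print(f"{scorpio / xbox_one:.1f}x")      # ~4.6x, on paper
```

It's north of 4x on paper, but those are peak-throughput numbers; they say nothing about feature set, memory behaviour, or final image quality.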
Turkish said:
How else do you expect Scorpio exclusives to look in the near future if the GPU is just a derivative of the Xbone?
|
Well, when I say "derivative" I mean that in a general architectural sense.
The Xbox One is based on Graphics Core Next 1.1. Scorpio is likely based on something resembling Graphics Core Next 4.0.
http://www.anandtech.com/show/11250/microsofts-project-scorpio-more-hardware-details-revealed
But Graphics Core Next is actually extremely modular; you can add and remove features fairly easily.
So, for example, you can take a Graphics Core Next 1.0 part and increase the number of ACE units while leaving the rest of the chip entirely alone.
Polaris/Scorpio is a few generations of that kind of iteration on from the Xbox One. Scorpio's GPU is fully compatible with the Xbox One's GPU from an architectural perspective.
http://www.anandtech.com/show/9886/amd-reveals-polaris-gpu-architecture/2
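A loose illustration of that modularity (the ACE and CU counts below are commonly cited figures for these parts, not taken from the linked articles, so treat them as approximate):

```python
# Same base architecture, different block counts: GCN lets AMD vary
# the number of ACEs (async compute queues) and CUs per design while
# keeping the shader ISA compatible across revisions.
gcn_parts = {
    "Tahiti (GCN 1.0)":     {"aces": 2, "compute_units": 32},
    "Xbox One (GCN 1.1)":   {"aces": 2, "compute_units": 12},
    "PS4 (GCN 1.1)":        {"aces": 8, "compute_units": 18},
    "Polaris 10 (GCN 4.0)": {"aces": 4, "compute_units": 36},
}

for name, blocks in gcn_parts.items():
    print(f"{name}: {blocks['aces']} ACEs, {blocks['compute_units']} CUs")
```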
Turkish said:
I think it has more to do with the fact that this gen we get less AAA games and especially the first couple years were barren. 3rd parties too were slow to move on with cross-gen titles; games like Fallout 4 use 15-year-old engines and look like they started off on the 360.
|
Even the exclusive AAA games have been fairly marginal, especially on Xbox as far as imagery is concerned.
Turkish said:
No but you tried to make a case the 1080 Ti couldn't possibly be next gen because you personally tried it and deemed it's not next gen performance. Unless you played games specifically programmed for the 1080 Ti you never can make that claim, because all you played on it are ports and games made with lower spec hardware in mind.
The Kite demo which is made with the GTX 680 in mind looks great. The stuff Square Enix put out is amazing. Especially that demo requiring 4 Titan X cards, so much power all just for a DX12 showcase. Imagine a game made for the 1080 Ti.
|
There is more to graphics than just performance. The Geforce 1080 Ti isn't loaded up with a ton of graphics features that set it apart from our current crop of graphics processors.
It's not next generation.
Turkish said:
So you're expecting GV100-level performance out of a 2019 console? That's 7nm Navi, possibly the PS5, 12-15 Tflopzz, it's going to be a bloodbath.
|
I don't expect next gen to start in 2019.
There is a chance 7nm might slip into 2020 anyway, especially for EUV. TSMC, GlobalFoundries, etc. have never been known to always be on time.