I really don't see the appeal of ray tracing.
There is a ton of appeal to Ray Tracing. Reflections, Lighting, Shadowing... All see marked improvements.
Before the current "variants" of Ray Tracing (which is all the rage thanks to nVidia's RTX), we were already heading down the road of Global Illumination via Path Tracing, which is itself a variant of Ray Tracing, as far back as the 7th generation of consoles. (Particularly towards the end of that generation, and especially so in the PC releases.)
So despite you not seeing much "appeal" in the technology... You have actually been seeing it for years.
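To make the idea concrete: path tracing estimates illumination by averaging the contribution of many randomly scattered rays rather than solving light transport analytically. Here's a minimal, purely illustrative Python sketch (the one-wall scene and sample count are made up, not taken from any engine) that Monte Carlo estimates sky visibility at a point:

```python
import math
import random

def ambient_term(n_samples=20000, seed=1):
    """Monte Carlo sky-visibility at a point next to a hypothetical
    infinite wall that blocks every direction with a negative x component.
    Path tracers estimate lighting the same way: average the light carried
    by many random rays instead of computing it in closed form."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        # uniform direction on the upper hemisphere
        z = rng.random()                  # vertical component in [0, 1)
        phi = rng.random() * 2.0 * math.pi
        x = math.sqrt(1.0 - z * z) * math.cos(phi)
        # the ray reaches the sky only if it doesn't point into the wall
        if x >= 0.0:
            hits += 1
    return hits / n_samples
```

By symmetry, half of the hemisphere is blocked in this toy scene, so the estimate converges to 0.5; a real path tracer applies the same averaging to full light transport, which is why it noisily approaches the "correct" image as samples accumulate.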
Sound might be interesting. I could really see a good stealth game with it.
We actually had 3D positional audio before... The PC was pioneering it with Aureal's A3D... And the original Xbox with its SoundStorm solution.
Then we sort of went backwards/stagnated for years in all markets.
It's not just Stealth games that see big benefits from it, it just makes everything feel more surreal and dynamic.
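A crude flavour of what those positional-audio solutions do per sound source, reduced to its simplest possible form: equal-power stereo panning from a source's horizontal angle. This is a generic DSP textbook technique, not A3D's or SoundStorm's actual algorithm (those add HRTF filtering, occlusion, and reflections on top):

```python
import math

def equal_power_pan(sample, azimuth_deg):
    """Equal-power pan: -90 = hard left, 0 = centre, +90 = hard right.
    Total power (left^2 + right^2) stays constant as the source moves,
    so a sound doesn't get quieter just because it's off to one side."""
    theta = (azimuth_deg + 90.0) / 180.0 * (math.pi / 2.0)  # map to [0, pi/2]
    return sample * math.cos(theta), sample * math.sin(theta)
```

A centred source (`equal_power_pan(1.0, 0.0)`) comes out at roughly 0.707 in each channel, and sliding the azimuth smoothly trades power between the ears, which is the most basic cue a game uses to place a footstep behind a wall to your left.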
I really hope that next gen offers some other feature. Maybe VR finally breaks into the mainstream. That would be great.
I doubt VR will ever gain much more traction. After all these years... It never materialized into anything substantive like motion controls did with the Wii.
But hey. Happy to be proven wrong.
Radeon VII had far better sustained boost clocks than the Vega 64 did. A Radeon VII could reach a maximum of 2GHz while Vega 64 was at most 1.7GHz when both were OC'd. I imagine that there was at least a 20% uplift in compute performance in comparison to the Vega 64.
Overall the difference isn't that pronounced.
Vega 7 has a clockspeed advantage, sure, but it's not a significant one... And Vega 64 makes up for it with its extra CUs, meaning the difference in overall compute isn't that dramatic.
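The back-of-the-envelope maths supports that. Peak FP32 throughput for a GCN part is CUs × 64 ALUs × 2 FLOPs (one FMA) × clock; plugging in the Radeon VII's 60 CUs against Vega 64's 64, with the rough OC clocks quoted in this thread (treat those clocks as forum figures, not official specs):

```python
def gcn_peak_tflops(cus, clock_ghz):
    """Peak FP32 TFLOPS for a GCN part:
    CUs x 64 ALUs x 2 FLOPs per FMA x clock (GHz)."""
    return cus * 64 * 2 * clock_ghz / 1000.0

vega64 = gcn_peak_tflops(64, 1.7)        # ~13.9 TFLOPS at a 1.7 GHz OC
radeon_vii = gcn_peak_tflops(60, 2.0)    # ~15.4 TFLOPS at a 2.0 GHz OC
uplift = radeon_vii / vega64 - 1.0       # ~10%: real, but not dramatic
```

So even granting the Radeon VII its full clock advantage, the fewer CUs claw most of it back on paper.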
The Radeon VII probably doesn't need 1 TB/s since it's a gaming GPU. The only way I can reason why the Radeon VII has as much bandwidth as it does is that it's meant to be competitive in machine-learning applications with top-end hardware, nearly all of which sports HBM memory one way or another... (Also, the Radeon VII was closer to 20-30% faster than the Vega 64 rather than 30-40%, because by the time the Radeon VII released, the Vega 64 was already marginally ahead of the 1080.)
Vega 7 is based on the Radeon Instinct MI50, which is meant for deep-learning/machine-learning/GPGPU workloads.
There is also a successor part, the MI60, with the full 64-CU complement and higher clocks.
AMD had no answer to nVidia's high-end, so they took the Instinct GPU and rebadged it as Vega 7.
There is always deviation between benchmarks from different time frames and even different sources. 30-40% is a rough ballpark as is; no need to delve into semantics, the original point still stands.
They didn't take advantage of this during last generation.
And they didn't take advantage of it this generation either.
So how many generations do we give AMD before your statement can be regarded as true?
The Wii used an overclocked Flipper GPU, which was arguably a DX7/8-feature-set design, and the X360's GPU is, according to an emulator developer, exactly like the Adreno 2XX(!) rather than either ATI's R500 or R600 ...
Functionally, the Wii/Gamecube GPUs could technically do everything (i.e. from an effects perspective) the Xbox 360/Playstation 3 can do via TEV.
However due to the sheer lack of horsepower, such approaches were generally not considered.
As for the Xbox 360... It's certainly an R500 derived semi-custom part that adopted some features and ideas from R600/Terascale, I wouldn't say it closely resembled Adreno though... Because Adreno was originally derived from Radeon, so of course there are intrinsic similarities from the outset.
Why reinvent the wheel?
AMD only really started taking advantage of low level GPU optimizations during this generation ...
As a Radeon user dating back over a decade... I haven't generally seen it.
Far more so for Nvidia than AMD because with the latter they just stop updating extremely dissimilar architectures very quickly ... (this is why OpenGL support sucks for pre-GCN GPUs like the HD 5000/6000 series)
To this day, Nvidia still managed to release WDDM 2.x/DX12 drivers for Fermi ...
Actually, OpenGL support wasn't too bad for the Radeon 5000/6000 series, even when I was running quad-Crossfire Radeon 6950s unlocked into 6970s. - You might recall I did some bandwidth scaling benchmarks of those cards years ago.
The Radeon 5870 was certainly a fine card that stood the test of time though, more so than my 6950 cards. (I actually have one sitting on my shelf next to me!)
With Pascal and Maxwell, the other day I heard from Switch emulator developers that their shaders were NOT compatible and that emulating half-precision instructions on Pascal broke things. I VERY much doubt you can group Kepler with Fermi, because you don't even have bindless texture handles or support for subgroup operations on Fermi ...
Things were worse on the CUDA side, where Nvidia publicly decided to deprecate a feature known as "warp-synchronous programming" on Volta, and this led to real-world breakage in applications that relied on previous hardware behaviour. Even with their OWN APIs and their humongous intermediate representation (PTX intrinsics) backend, Nvidia CAN'T even promise that their sample code or features will actually be compatible with future versions of the CUDA SDK!
At least with AMD and their GCN iterations, developers won't have to worry about application breakage no matter how tiny AMD's driver teams may be ...
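That warp-synchronous breakage is easy to demonstrate in miniature. Below is a hedged Python simulation (not CUDA, and not Nvidia's actual scheduler) of a warp-style tree reduction: the first function models one schedule that independent thread scheduling permits, where each simulated "thread" races ahead without synchronising and reads stale partial sums; the second inserts a barrier between steps, which is the behaviour old warp-synchronous code implicitly assumed and that explicit syncs like `__syncwarp()` restore:

```python
def reduce_thread_at_a_time(values):
    """One legal schedule under independent thread scheduling: each
    simulated thread runs ALL of its reduction steps to completion before
    the next thread even starts. With no barriers between steps, thread 0
    reads neighbours' slots before they have been updated."""
    shared = list(values)  # stand-in for warp shared memory
    n = len(shared)
    for tid in range(n):
        step = n // 2
        while step:
            if tid < step:
                shared[tid] += shared[tid + step]  # may read a stale value
            step //= 2
    return shared[0]

def reduce_with_sync(values):
    """Same tree reduction, but with a barrier after every step: no thread
    proceeds until every thread's write for that step has landed."""
    shared = list(values)
    step = len(shared) // 2
    while step:
        # barrier semantics: compute the step from a consistent snapshot
        snapshot = shared[:]
        for tid in range(step):
            shared[tid] = snapshot[tid] + snapshot[tid + step]
        step //= 2
    return shared[0]
```

For the inputs 1 through 8, the synchronised version returns the correct total of 36, while the unsynchronised schedule returns a wrong partial sum, which is exactly the class of silent breakage Volta exposed in code that assumed lockstep warps.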
Nah. Maxwell and Pascal can be lumped together.
Kepler and Fermi too.
Volta never really made a big appearance on Desktop... So Turing is the start of a fresh lineup.
And just like if you were building software targeting specific features in AMD's Next-Gen Compute unit... Compatibility will likely break with older variants of Graphics Core Next. It's just the nature of the game.
AMD re-badging hardware obviously does result in less occurrence of this happening... But I want more performance. AMD isn't delivering.
Think about it this way ...
If AMD weren't burdened by maintaining their software stack such as their drivers, they could be using those resources instead to SOLELY improve their GCN implementations much like how Intel has been evolving x86 for over 40 years!
I am actually not disagreeing with you. I find AMD's drivers to be the best in the industry right now. - Which was typically the crown nVidia held during the early Graphics Core Next and prior years.
I do run hardware from both companies so I can anecdotally share my own experiences.
They could, but there's no point, since ARM's designs are much too low-power/low-performance for Nvidia's tastes, so they practically have to design their own "high performance" ARM cores just like every other licensee, especially if they want to compete in home consoles. Nvidia's CPU designs are trash that compiler backend writers have to work around ...
I doubt Nvidia will be able to offer backwards compatibility as well which is another hard requirement ...
ARM has higher performing cores. Certainly ones that would beat Denver.
Nvidia's newer report seems to paint a much darker picture than it did over 6 months ago so their growth is hardly organic ...
Nvidia's acquisition of Mellanox is at the mercy of Chinese regulators as well just like Qualcomm's acquisition of NXP. If the deal falls apart (likely because of China), what other 'friends' do Nvidia have to fallback to ? What happens if AMD or Intel get more ambitious with their APUs and start targeting GTX 1080 levels of graphics performance ? (possible with DDR5 and 7nm EUV)
DDR5 will not offer the appropriate levels of bandwidth to allow APUs to equal a GeForce GTX 1080, especially at 7nm... That claim is without any basis in reality.
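The arithmetic behind that: peak DRAM bandwidth is transfer rate × bytes per transfer × channel count, and even aggressive DDR5 leaves a CPU-shared memory bus far short of the GTX 1080's 320 GB/s of dedicated GDDR5X. (DDR5-6400 dual-channel is an assumed configuration for illustration, not a shipping APU's spec.)

```python
def dram_bandwidth_gbs(mega_transfers_per_s, channels=2, bus_bits=64):
    """Peak bandwidth in GB/s: MT/s x bytes per transfer x channel count."""
    return mega_transfers_per_s * (bus_bits / 8) * channels / 1000.0

apu = dram_bandwidth_gbs(6400)  # dual-channel DDR5-6400 -> 102.4 GB/s
gtx_1080 = 320.0                # GB/s: 256-bit GDDR5X at 10 Gbps
ratio = gtx_1080 / apu          # roughly a 3x gap
```

And that 102 GB/s is shared with the CPU cores, so the effective gap for graphics is even wider.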
Either way, AnandTech's analysis isn't painting nVidia's outlook as bleak. They are growing, they are diversifying, they have billions in the bank; they are actually doing well. Which is a good thing for the entire industry, because competition is a must.
Their Haswell-and-up lineup is fine. Certainly, older Intel parts had tons of hardware issues, but that is well in the past, so all they need is good drivers...
Minus the average performance the hardware puts out... And the lack of long term support.
You would literally need to pay me to use Intel Decelerator Graphics.
Meh, I'm not as optimistic as you are unless they use another foundry to manufacture Xe because I don't trust that they'll actually launch 10nm ...
Intel's 7nm seems to still be on track with its original timeline. I am optimistic, but I am also realistic; Intel and graphics have a shitty history, but you cannot deny that some of the features Intel is touting are whetting the nerd appetite?
It being specifically built for last generation is exactly why we should dump it ...
Crysis is relatively demanding even for today's hardware, but no sane benchmark suite will include it because of its flaw of relying heavily on single-threaded performance ...
"Demanding" is not a sign of technical excellence like we saw with ARK Survival Evolved. A benchmark suite should be designed to represent the workload demands of current generation AAA game graphics, not last generation AAA game graphics ...
Crysis is demanding for today's hardware because it wasn't designed for today's hardware.
Crysis was designed at a time when clockrates were ever-increasing and Crytek thought that trajectory would continue indefinitely.
Today's CPUs have gotten wider with more cores, which means that successive newer CryEngines tend to be more performant.
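Amdahl's law makes that trade-off concrete: the speedup from extra cores is capped by the serial fraction of the frame. The parallel fractions below are illustrative guesses to show the shape of the curve, not measured figures for Crysis or any CryEngine:

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: overall speedup when only parallel_fraction of the
    work scales across the given number of cores."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

old_engine = amdahl_speedup(0.3, 8)  # mostly serial: ~1.4x on 8 cores
new_engine = amdahl_speedup(0.9, 8)  # mostly parallel: ~4.7x on 8 cores
```

An engine built around one fast thread barely benefits from eight cores, which is why Crysis stays "demanding" on modern hardware while better-threaded engines fly.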
In saying that, GTA 5 isn't as old as Crysis anyway and certainly has an orders-of-magnitude greater player base.
I believe that benchmarks should represent the games that people are playing today. - If a game is older and based in a Direct X 9/11 era, so be it; it's still representative of what gamers are playing... In saying that, they also need newer titles in the mix to give an idea of performance in current releases.
It's like when Terascale was on the market: it had amazing Direct X 9 performance... But it wasn't able to keep pace in newer Direct X 11 titles, hence why AMD architected VLIW4, to give a better balance leaning towards Direct X 11 titles, as each unit was able to be utilized more consistently.
So having a representation of mixed older and newer games is important in my opinion as it gives you a more comprehensive data point to base everything on.