eva01beserk said:
For those claiming an 8.3TF PS5. Let me just ask you, would you even consider buying a next-gen console that is a 33% jump from the previous gen for $400?

The extra performance the Playstation 4 Pro and Xbox One X bring to the table is mostly sunk into driving higher resolutions and framerates... Games aren't being designed with those consoles as a baseline.

The jump from the base Playstation 4 and Xbox One to next gen should be a rather sizable one.
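
For a rough sense of scale, here's a back-of-the-envelope sketch using the commonly cited FP32 figures, treating the rumoured 8.3TF number above as a given (it is still just a rumour):

```python
# Back-of-the-envelope FP32 compute comparison.
# The 8.3 TF "PS5" figure is the rumour being discussed, not a confirmed spec;
# the rest are the commonly cited specs for the current consoles.
tflops = {
    "Xbox One":       1.31,
    "PS4":            1.84,
    "PS4 Pro":        4.20,
    "Xbox One X":     6.00,
    "PS5 (rumoured)": 8.30,
}

baseline = tflops["PS4"]  # games are still designed around the base consoles
for name, tf in tflops.items():
    print(f"{name:>15}: {tf:.2f} TF ({tf / baseline:.1f}x base PS4)")
```

Measured against the mid-gen refreshes it reads as a ~1.4-2x bump, but against the baseline machines that games are actually built for, it's roughly a 4.5x jump in raw compute.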

fatslob-:O said:

Don't worry about it, changes are happening now ...

It's a bit of a stretch whether said changes will ever be relevant though, because by the time they potentially are... AMD may have moved on to its next-gen architecture on the PC. (Graphics Core Next isn't sticking around forever!)

fatslob-:O said:

TEV (24 instructions) is comparable to the original Xbox's "pixel shaders" (12 instructions) which were shader model 1.1. There's no solid definition of a shader anyways. The ATI Flipper most certainly did not have vertex shaders according to emulator developers ... (vertex pipeline was 100% fixed function)

Doing "multiple passes" is not something to be proud of and is actively frowned upon by many developers since it cuts rendering performance by a big factor ... 

Performance was one issue but the Flipper didn't have the feature set either to cope with ... 

History has shown that the Gamecube and Wii were punching around the same level as the original Xbox in terms of visuals.
But anyone who worked with the TEV could actually pull off some interesting effects, many of which could rival the Xbox 360.

https://www.youtube.com/watch?v=RwhS76r0OqE

Take the GeForce 2 for example... Using the register combiners you could pull off some shader effects that the GeForce 4 Ti was doing... And the GeForce 2 was very much a highly fixed-function part.

Just because the hardware isn't a 1:1 match, doesn't mean you cannot pull off similar effects with obviously different performance impacts.

As for doing multiple passes... It depends on the architecture; not all architectures take a big performance hit from it.
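
To illustrate what "the same effect via multiple passes" means in practice, here's a toy per-pixel sketch (plain Python, not real GPU code; the blend-function name in the comment is only a reference point): a classic lightmap modulate done in one pass with two texture units versus the same result built from two passes with a multiplicative framebuffer blend.

```python
# Toy per-pixel illustration (not real GPU code) of a single-pass effect
# reproduced with multiple passes: a classic base-texture * lightmap modulate.

def single_pass(base, lightmap):
    # One pass: two texture units combined (modulated) in the pixel pipeline.
    return base * lightmap

def multi_pass(base, lightmap):
    framebuffer = base                    # pass 1: draw the base texture
    framebuffer = framebuffer * lightmap  # pass 2: redraw the geometry with a
    return framebuffer                    # multiplicative blend, e.g.
                                          # glBlendFunc(GL_DST_COLOR, GL_ZERO)

base, lightmap = 0.8, 0.5
assert single_pass(base, lightmap) == multi_pass(base, lightmap)
print(single_pass(base, lightmap))  # 0.4 either way
```

Same image either way; the multi-pass route just pays for re-submitting the geometry and the extra framebuffer traffic, and how much that costs depends on the architecture.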

fatslob-:O said:

Well he seems to want an Adreno 2XX GPU for reverse engineering the X360's alpha-to-coverage behaviour, and he's a developer of the Xbox 360 emulator specializing in the GPU ... (he seems to be convinced that the X360 is closely related to the Adreno 2XX)

The one console part that's truly based on the R600 was the Wii U's 'Latte' graphics chip, in which case looking at open source drivers did actually help the Wii U's graphics emulation ...

You aren't getting it.
Adreno is derived from Radeon technology.
Xenos is derived from Radeon technology.

Both are derived from the same technology base of the same era... Obviously there will be architectural similarities; you don't go about reinventing the wheel if certain design philosophies work.

Fact is... At the time, ATI used its desktop Radeon technology as the basis for all other market segments.

As for the Wii U... The general consensus is it's R700-derived with some differences.
https://www.techinsights.com/blog/nintendo-wii-u-teardown
https://forums.anandtech.com/threads/wii-u-gpu-scans-now-up.2299839/
https://www.neogaf.com/threads/wiiu-latte-gpu-die-photo-gpu-feature-set-and-power-analysis.511628/

fatslob-:O said:

By the time the benchmark was taken, it was a SIX(!) year old game. Let's try something a little newer like Wolfenstein: The New Order ... 

An R9 290 was SLOWER than a GTX 760! (OpenGL was horrendous then for AMD, pre-GCN but even then OpenGL is still bad on GCN) 

That was kinda' the point?

fatslob-:O said:

On Kepler, they straight up deprecated an ENTIRE SET of surface memory instructions compared to Fermi. Even on GCN, for example, from gen 1 to gen 2 they removed a total of 4 instructions at the LOWEST LEVEL, but since consoles are GCN gen 2, AMD doesn't have to worry about future software breaking compatibility with GCN gen 1 hardware. On the Vega ISA, they removed a grand total of 3 instructions ... 

Just consider this for a moment: PTX is just an intermediary while the GCN docs are real low-level details. Despite the GCN docs being actual assembly, Nvidia manages to somehow change more at the higher level than AMD does at the low level, so there's no telling what other sweeping changes Nvidia has applied at the low level ... 

In saying that, nVidia's approach is clearly paying off because nVidia's hardware has been superior to AMD's for gaming for generations.

fatslob-:O said:

I highly doubt Fermi or Kepler are related, at least to the degree the GCN generations are ... 

With Maxwell or Pascal that's a big maybe since reverse engineering a copy of Super Mario Odyssey revealed that there's a Pascal(!) codepath for NVN's compute engine so there may yet be an upgrade path for the Switch ... (no way in hell are they going to upgrade to either Volta or Turing though since Nvidia removed Maxwell specific instructions) 

Even Anandtech recognizes that Kepler has many of the same underpinnings as Fermi.
https://www.anandtech.com/show/5699/nvidia-geforce-gtx-680-review/2

fatslob-:O said:

Also, I forgot to note, but the reason Nvidia doesn't license ARM's designs is that they want to save money ... (all of Nvidia's CPU designs suck hard)

That isn't it at all... nVidia is a full ARM Architecture licensee...
https://en.wikipedia.org/wiki/Arm_Holdings#Licensees
https://www.anandtech.com/show/7112/the-arm-diaries-part-1-how-arms-business-model-works/3

fatslob-:O said:

@Bold Is it truly? AMD deprecated their Mantle API today, so what is stopping Nvidia from doing the same with GPU-accelerated PhysX, which has failed to be standardized? Eventually, Nvidia will find it isn't sensible to maintain, and that becomes a feature that's lost FOREVER ... 

As for OpenGL being deprecated, I doubt it because the other industries (content creation/professional/scientific) aren't moving fast enough in comparison to game development so unless AMD offers technical assistance for them, they'll be crippled at the mercy of AMD's OpenGL stack ... 

nVidia does have more cash and more profits than AMD, so of course it can afford to keep maintaining it.
And I honestly hope PhysX does get deprecated.

Vulkan is slowly replacing OpenGL pretty much across the entire gaming spectrum.
Content Creation/Scientific tasks obviously have different requirements.

fatslob-:O said:

Seeing as how threadripper was designed with an octa-channel memory controller, there's no reason to rule out a high-end APU either ...

If bandwidth is an issue then AMD could opt to make special boards that are presoldered with APUs and GDDR5/6 memory modules like the Subor-Z ...

Nothing preventing AMD from getting 1080 levels of performance like above in a smaller, cheaper, and more efficient form factor ...

Never going to happen.

fatslob-:O said:

Every time Intel has delayed 10nm, it was met with delays on 7nm as well, so I doubt Intel could just as easily scrap their previous work and start anew ... 

I don't trust Intel to actually deliver on their manufacturing roadmap ... 

Has it really though? Because everything points to the 7nm team hitting its design goals.
https://www.anandtech.com/show/14312/intel-process-technology-roadmap-refined-nodes-specialized-technologies
https://www.anandtech.com/show/13683/intel-euvenabled-7nm-process-tech-is-on-track

2021 with EUVL, 7nm.

fatslob-:O said:

Careful, Minecraft is the most popular PC game ever but I doubt that'd be a benchmark ... 

What people are looking for from a current day benchmark suite is not popularity but they expect reasonably (modern) pathological (less than 5%) cases ... 

If GPU designers drop native support for older APIs (Glide) and the testers had to use a translation layer (emulator), would that somehow be a good representation of how the hardware handles work at all?

Difference is... Minecraft will run perfectly fine on even the shittiest Intel Integrated Graphics today.

When I look at a benchmark today, it's because I want to find out how today's hardware runs today's games... And yes, some of the games I play are going to be a little older... And that is fine.
It's good to find out how newer architectures run older and newer titles... Which is why a benchmark suite usually includes a multitude of titles to cover all bases; it's an extra data point to provide consumers with a more comprehensive idea for their purchasing decisions.

The fact that a benchmark suite includes an older game or two really isn't a relevant point of complaint; ignore those numbers if you must, but they are valuable pieces of information for other people.

Plus, many games use older APIs. - I mean, you said yourself that you don't think Vulkan will replace OpenGL.

fatslob-:O said:

It poses quite a few ramifications though since many of the other pieces are falling into place for AMD's immediate future and the Apex engine was a good candidate for a Vulkan renderer when technically high-end game franchises such as Just Cause, Mad Max, and Rage are featured on it ... 

Frostbite 3, Northlight, Nitrous, Asura, Snowdrop, Glacier 2, Dawn, Serious 3, Source 2, Foundation, Total War, 4A, RE and many other internal engines are changing the playing field (DX12/Vulkan) for AMD but now it's time to cut the wire (Creation/AnvilNEXT/Dunia/IW) and finally pull the plug (UE4/Unity) once and for all ... 

Only several offending engines are left, but they'll drop one by one soon enough ... (engines changing are going to show up in the benchmark suites)

In short, with all those titles today, nVidia still holds an overall advantage. Those are the undeniable facts presented by benchmarks from across the entire internet.

Trumpstyle said:

I like this leak though. From what I understand, videocardz.com received it at or before March 12, way before any Gonzalo leak showing a PS5 at 1.8GHz. And I remember there was a leak on GFXBench showing a 20CU Navi card with Radeon 590 performance; it didn't make any sense at the time as the performance was just way too good and we thought the benchmark was misreading the CUs on the card, but maybe AMD has improved the CUs and clocks by a lot on Navi.

It's a "leak" for a reason. Aka. A rumor. Take it with a grain of salt until we have empirical evidence. Aka. Real hardware in our hands.

Remember, Navi is still Graphics Core Next; keep your expectations in line for a GPU architecture that is 7+ years old.

Trumpstyle said:

You're misunderstanding the Maxwell sauce; it means better perf/teraflop and perf/mm2. GCN hasn't received this yet; it's actually gone the opposite way, where Vega decreased perf/mm2. But yes, I think 40 CUs should land a bit below the GeForce 2060.

I understand perfectly. You don't seem to understand that a large chunk of the features found in Maxwell's core architecture was already adopted by Vega and Polaris.
I think you should look at the performance per clock of Vega and Fiji and see how things have changed.
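
For anyone wanting to run those numbers themselves, here's a minimal sketch of how perf-per-TFLOP and perf-per-clock comparisons are derived. The frame rates are hypothetical placeholders purely to show the arithmetic; only the clocks and TFLOPS are the published Fury X / Vega 64 specs.

```python
# Minimal sketch of the two metrics being argued about: perf/TFLOP and perf/clock.
# The fps values are HYPOTHETICAL placeholders; the clocks and TFLOPS are the
# published specs (Fury X: 1050 MHz, 8.6 TF; Vega 64: ~1546 MHz boost, ~12.7 TF).
cards = {
    #          (fps [hypothetical], boost clock in MHz, FP32 TFLOPS)
    "Fury X":  (60.0, 1050, 8.6),
    "Vega 64": (85.0, 1546, 12.7),
}

for name, (fps, mhz, tf) in cards.items():
    print(f"{name}: {fps / tf:.2f} fps per TFLOP, {fps / (mhz / 1000):.1f} fps per GHz")
```

Whether Vega actually improved per-clock over Fiji is then just a matter of plugging real benchmark numbers into that.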

Trumpstyle said:

I'm unsure if you mean me, but I will buy the PS5 on day one or close to that. I have a PS4 Pro, so the PS5 is a much bigger jump than 33%. The people who own an Xbox One X are a small group, probably about 2% of total console owners, and even those owners will have no choice but to get the PS5, as I suspect Sony will have improved versions of all their exclusive games giving them 4K/60fps. Giving them a big advantage against Microsoft.

While Microsoft and Nintendo have consoles on the market... There will always be a choice available not to get a Sony console.

Trumpstyle said:

You simply need to realize Moore's law is DEAD and the jump from 16nm to 7nm is smaller than 28nm to 16nm; we're actually a bit lucky that Sony/Microsoft even managed to double the flop numbers, if I'm correct.

Citation needed.
I am sure Fatslob would like to jump on this claim with me too.

Last edited by Pemalite - on 15 May 2019

--::{PC Gaming Master Race}::--