Pemalite said:
shikamaru317 said:

Radeon 7 is last-gen tech: GCN, Graphics Core Next. The 5700 XT and the next-gen consoles are using RDNA, not GCN. The 5700 XT was not designed as a replacement for the Radeon 7; it was designed to sit one performance bracket below it, at only 10 tflops to the Radeon 7's 13 tflops. That is the only reason it performs under the Radeon 7, and even then it still comes very close in some games like The Division 2 and Forza Horizon 4, which you can see in the pics I just linked. You can safely assume that when AMD releases a 13 tflop RDNA card, likely later this year, it will beat the Radeon 7 by a decent margin, because RDNA is definitely a step up from GCN in terms of real-world performance per flop.

50% weaker than a 2080 Ti? I hardly think so. 12 tflops of RDNA should fall around 2080-tier performance, and that is on PC. Console games get extra optimization compared to PC games. Taking console optimization into account, multiplat games on XSX should perform about the same as they do on a PC running a 2080 Ti. Even this rumored 9.2 tflop PS5 would likely come within 25% of a 2080 Ti PC once console optimization is accounted for.
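For anyone wanting to sanity-check the tflop figures being thrown around: theoretical FP32 throughput is just shader count × 2 ops per cycle (fused multiply-add) × clock speed. A quick sketch; the CU counts are the shipping specs, and the boost clocks are approximate:

```python
def tflops(compute_units: int, clock_ghz: float, shaders_per_cu: int = 64) -> float:
    """Theoretical FP32 TFLOPs: shaders * 2 ops/cycle (FMA) * clock."""
    return compute_units * shaders_per_cu * 2 * clock_ghz * 1e9 / 1e12

# Radeon 7: 60 CUs at ~1.75 GHz boost -> ~13.4 TFLOPs
print(round(tflops(60, 1.75), 1))  # 13.4
# 5700 XT: 40 CUs at ~1.9 GHz boost -> ~9.7 TFLOPs
print(round(tflops(40, 1.9), 1))   # 9.7
```

Which is why the "10 vs 13 tflop" framing above holds up: the 5700 XT is simply a smaller chip, not a weaker architecture.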

RDNA via Navi on the PC is a hybrid GPU design that sits between Graphics Core Next and RDNA 2; it still retains the full Graphics Core Next instruction set.
https://en.wikipedia.org/wiki/RDNA_(microarchitecture)

Trumpstyle said:

If you use 4 memory controllers it only gives 448 GB/s of bandwidth, which is too low for a 12 TF GPU. Either that, or Microsoft does as you suggest: 5 memory controllers for 560 GB/s, which would normally mean 20GB of VRAM, but instead mixes 1GB and 2GB chips for 16GB of VRAM total and saves themselves a lot of money. I think you agree with that.

Might have more memory controllers. You simply don't know anything yet... So it would be good if you could stop asserting things as fact before you have all your ducks in a row.
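For what it's worth, the bandwidth figures quoted above do follow from basic GDDR6 math, assuming each controller drives a 64-bit interface with 14 Gbps memory (both common but unconfirmed assumptions here):

```python
def gddr6_bandwidth(controllers: int, bits_per_controller: int = 64,
                    data_rate_gbps: float = 14.0) -> float:
    """Aggregate bandwidth in GB/s: total bus width (bits) * per-pin
    data rate (Gbps) / 8 bits per byte."""
    return controllers * bits_per_controller * data_rate_gbps / 8

print(gddr6_bandwidth(4))  # 448.0 GB/s on a 256-bit bus
print(gddr6_bandwidth(5))  # 560.0 GB/s on a 320-bit bus
```

So the disagreement isn't about the arithmetic, it's about the unknowns: the controller count and the memory speed, neither of which has been confirmed.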

Trumpstyle said:

It doesn't matter what speed they have; they will load games in 2 seconds no matter what. What matters is: will Lockhart sell with no disc drive? Is the price of the Xbox Series X too high compared to the PS5? I think Microsoft must launch that console at $450 to compete with the PS5.

Xbox consoles at 1.5GB/s speed?

PS5 at 2.5GB/s speed?

We don't even have the consoles on the market, there are no benchmarks, and we don't even know how fast the SSDs are.

But just because it has an SSD doesn't mean there won't be load times... Some games use procedural generation, which requires very little in the way of transfers from storage to system memory, but a ton of processing time to generate assets. (No Man's Sky is an example.)

Consoles like the Switch are all solid state and still have load times.

Again, 2 seconds is a bold claim. You should stop asserting things as fact when you don't have all your ducks in a row; this is why there are people out in the world who believe sunscreen causes cancer, vaccines cause autism, climate change isn't real, the world is flat... and more, because they assert things as fact without having all the details/evidence.
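To make the point concrete, here's a toy load-time model: total load time is raw read time plus CPU-side work (decompression, asset generation, shader setup), and the second term doesn't shrink no matter how fast the SSD is. The throughput and asset-size numbers below are purely illustrative:

```python
def load_time_s(asset_bytes: float, ssd_gb_per_s: float, cpu_work_s: float) -> float:
    """Toy model: time to read assets from storage, plus CPU-side
    processing time that a faster SSD cannot reduce."""
    return asset_bytes / (ssd_gb_per_s * 1e9) + cpu_work_s

# 8 GB of assets on a hypothetical 2.5 GB/s SSD with 3 s of CPU work:
# the read alone takes 3.2 s, so the total is already over 6 seconds.
print(round(load_time_s(8e9, 2.5, 3.0), 1))  # 6.2
```

Which is exactly why "2 seconds no matter what" can't be asserted as fact: it depends on how much data a game streams and how much processing it does on top, not just the drive's speed.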

Trumpstyle said:

Looks like you know a lot about this stuff. Sony/Microsoft are supposed to be using optimized NVMe to reduce loading times far below what is possible on PC.

What is so optimized about it? Please. Fill us in.

DonFerrari said:

Considering RT on the consoles will be a dedicated part of the hardware, not using RT will probably just leave that part idle rather than have it work on something else.

About the graphics vs. resolution trade-off, they will have to find the balance that gives the best IQ overall.

There is the potential to use the Ray Tracing cores for non-Ray Tracing tasks, it depends on how flexible AMD makes those cores.

Potentially, though, if the Ray Tracing cores are going unused, AMD could repurpose that TDP and drive up clock rates for the rest of the chip... Either way, that is just hypothesizing; we will need to wait and see, of course.

I may be wrong, but no matter how flexible it is, hardware designed for one purpose will usually be less optimal at anything else. So I don't think we will get anywhere near a situation where games toss out RT to use those cores (which probably wouldn't add much on top of what the rest of the GPU can already do) to do other stuff a little better.

For me it seems more like the X360 vs PS3 situation: the X360 got the "free AA solution" because of the differences in the GPU, and instead of using that power for something else that would give them more headaches, developers just used it as a good AA standard for most games.



duduspace11 "Well, since we are estimating costs, Pokemon Red/Blue did cost Nintendo about $50m to make back in 1996"

http://gamrconnect.vgchartz.com/post.php?id=8808363

Mr Puggsly: "Hehe, I said good profit. You said big profit. Frankly, not losing money is what I meant by good. Don't get hung up on semantics"

http://gamrconnect.vgchartz.com/post.php?id=9008994

Azzanation: "PS5 wouldn't sold out at launch without scalpers."