
Forums - PC - Carzy Zarx’s PC Gaming Emporium - Catch Up on All the Latest PC Gaming Related News

Part two of the news:

Trackmania dev analyses 15 years' worth of records to root out the cheaters
https://www.pcgamer.com/uk/trackmania-dev-analyses-15-years-worth-of-records-to-root-out-the-cheaters/
In late May this year, cheating allegations began to catch up with some of the fastest names in Trackmania.(...)
The evidence here is damning, and developer Nadeo has now taken action. Studio head Florent Castelnérac has today announced that, having found a fix, the team has also analysed the past 15 years of Trackmania records to find the cheaters' records. The majority of the cheated records came from just 10 accounts.

Company of Heroes 3 is coming next year, but you can play a demo right now
https://www.pcgamer.com/uk/company-of-heroes-3-is-coming-next-year-but-you-can-play-a-demo-right-now/
Company of Heroes 3 was announced today after many years in development, and you can sign up to play what developer Relic Entertainment calls a pre-Alpha preview build of the game here.

Fans have been playing Company of Heroes 3 for years
https://www.pcgamer.com/uk/fans-have-been-playing-company-of-heroes-3-for-years/
Company of Heroes 3 is coming next year, Relic announced today, but the studio's latest World War 2 RTS has already been played by some members of the community, who first got their hands on it years ago.

Work on Apex Legends cross-progression slowed down by recent hacks
https://www.pcgamer.com/uk/work-on-apex-legends-cross-progression-slowed-down-by-recent-hacks/
Since Apex Legends season 9 kicked off a few months ago, the free-to-play battle royale has enjoyed a resurgence in popularity. But as with any popular competitive game, more general attention has also attracted more prospective cheaters. The recent influx of nefarious players and DDoS attacks have become so disruptive in Apex that it's affecting development on a pretty big planned feature: cross-progression.

id Software's Mario PC port found in a stack of discs submitted to a museum
https://www.pcgamer.com/uk/id-softwares-mario-pc-port-found-in-a-stack-of-discs-submitted-to-a-museum/
It's well known that id Software's Commander Keen tech was originally designed to accommodate a PC port for Super Mario Bros. 3, and in 2015, John Romero revealed footage of the proof of concept. Nintendo didn't go for it, but it was a breakthrough in terms of bringing smooth screen-scrolling to PC games.
A copy of that Mario demo has turned up in a submission to the Strong National Museum of Play, seemingly at random. The museum's games curator Andrew Borman tells Ars Technica that the disc was among a larger submission from an unnamed game developer. This developer didn't work on the demo, though received it "during their work."

The Amazing American Circus delayed to September
https://www.pcgamer.com/uk/the-amazing-american-circus-delayed-to-september/
The Amazing American Circus is a deckbuilder that lets you run a travelling sideshow, trekking across 19th century USA, adding to your troupe and wowing the rubes at each stop. It looks a bit like Slay the Spire only instead of throwing poisoned knives at cultists you impress audience members by playing cards like Wild Clown Chase, Dangerous Act, or Smokey Kiss. There are also rivals to defeat, drunk mime artists to deal with, and it seems like werewolves and Bigfoot are involved too? Honestly, I stopped paying attention when the clowns showed up. No, thank you.

An 'undetectable' and 'unstoppable' cheat was taken down at Activision's request
https://www.pcgamer.com/uk/an-undetectable-and-unstoppable-cheat-was-taken-down-at-activisions-request/
User Vision Pro was an aim-assist and auto-fire cheat that gained some attention recently thanks to YouTube demonstrations showing what it could achieve in Call of Duty: Warzone. Those videos have now been taken offline, but you can see what was being shown off thanks to Twitter's Anti-Cheat Police Department. The video claimed User Vision Pro would work on "any game" and on consoles as well as PC, be "undetectable" and "unstoppable" and also "extreamly [sic] fast".

Unique action-strategy game Highfleet is coming to Steam this month
https://www.pcgamer.com/uk/unique-action-strategy-game-highfleet-is-coming-to-early-access-this-month/
Konstantin Koshutin's stratospherically gorgeous action-strategy HighFleet begins its campaign later this month on Steam.
A wonderfully tactile war sim, I've had my eyes on HighFleet's burning skies for some time. You're in command of a massive, airborne imperial war machine, recruiting and refuelling across the country and staving off attacks from nippy rebel forces. Battles play out as pseudo-strategy affairs, directly attacking with your flagship as you deploy smaller craft to fight for you.

Fall Guys announced its next season with a jigsaw puzzle
https://www.pcgamer.com/uk/fall-guys-announced-its-next-season-with-a-jigsaw-puzzle/
Fall Guys fans have puzzled out the bean-stumbling battle royale's next season, after Mediatonic teased the theme by way of a jpeg jigsaw.
Yesterday, the Fall Guys Twitter account gave fans a rather cryptic hint at Season 5's theme by letting them download a folder of 1,200 square images that, when pieced together, would assemble next season's key art.

Final Fantasy 10 director says chances of 10-3 are 'not zero'
https://www.pcgamer.com/uk/final-fantasy-10-director-says-chances-of-10-3-are-not-zero/
There's a rough outline of what Final Fantasy 10-3 could look like and apparently, the chances of it getting made are "not zero."
The comments were made by character designer Tetsuya Nomura, writer Kazushige Nojima and Final Fantasy 10 director Motomu Toriyama, and spotted by the ever-vigilant Nibel on Twitter. Nomura told Famitsu in a column earlier this month that Nojima has written a synopsis for the sequel, and that while the game isn't in any form of development, "there is a concept" there.

Call of Duty: Black Ops Cold War brings its secret nukes to the rest of multiplayer
https://www.pcgamer.com/uk/call-of-duty-black-ops-cold-war-brings-its-secret-nukes-to-the-rest-of-multiplayer/
A hidden nuke scorestreak in Call of Duty: Black Ops - Cold War has been extended to the rest of multiplayer as part of today's midseason update.
Back in Season 3, Treyarch added a secret nuclear detonation scorestreak to Cold War. Long a staple of Infinity Ward's Modern Warfare games, getting 30 kills without dying would enable you to slam a nuke button to immediately kill everyone in the match.



Please excuse my bad English.

Former gaming PC: i5-4670k@stock (for now), 16Gb RAM 1600 MHz and a GTX 1070

Current gaming PC: R5-7600, 32GB RAM 6000MT/s (CL30) and a RX 9060XT 16GB

Steam / Live / NNID : jonxiquet    Add me if you want, but I'm a single player gamer.


Time for some rumours! Take 'em with large grains of salt!

RDNA3 = TSMC 5nm + 6nm Chiplets + Doubling Infinity cache vs Lovelace = TSMC 5nm monolithic + GDDR6X

If true, the next generation of GPUs is going to get even more interesting than we once thought. Especially if Lovelace uses TSMC's 5nm instead of Samsung's, as that would put Nvidia at node parity with AMD. But RDNA3 might still have an advantage with its chiplet design. The other interesting thing, if the rumour is legit, would be AMD doubling their Infinity Cache, whereas Nvidia looks to continue with their higher memory bus/bandwidth route.


30 Series Super refresh is coming

Hopefully, unlike the Ti cards, which these days mostly slot in between existing models, the Super refresh will replace cards outright with more performance. We will see though!

AMD Raphael Zen4 Based CPUs Rumored To Only Have 16 Cores

https://wccftech.com/amd-raphael-zen4-based-cpus-rumored-to-only-have-16-cores/

Of course, if it's 16 big cores vs Intel's half-big, half-small cores, then it really won't matter as long as those 16 cores perform better.

ASUS ultra-rare GUNDAM non-LHR cards end up in mining rigs set up by Vietnamese retailers

https://videocardz.com/newz/asus-ultra-rare-gundam-non-lhr-cards-end-up-in-mining-rigs-set-up-by-vietnamese-retailers

Anything but those!

https://twitter.com/greymon55/status/1415112419374878725?s=20



                  

PC Specs: CPU: 7800X3D || GPU: Strix 4090 || RAM: 32GB DDR5 6000 || Main SSD: WD 2TB SN850

Captain_Yuri said:

Time for some rumours! Take 'em with large grains of salt!

RDNA3 = TSMC 5nm + 6nm Chiplets + Doubling Infinity cache vs Lovelace = TSMC 5nm monolithic + GDDR6X

If true, the next generation of GPUs is going to get even more interesting than we once thought. Especially if Lovelace uses TSMC's 5nm instead of Samsung's, as that would put Nvidia at node parity with AMD. But RDNA3 might still have an advantage with its chiplet design. The other interesting thing, if the rumour is legit, would be AMD doubling their Infinity Cache, whereas Nvidia looks to continue with their higher memory bus/bandwidth route.

AMD Raphael Zen4 Based CPUs Rumored To Only Have 16 Cores

https://wccftech.com/amd-raphael-zen4-based-cpus-rumored-to-only-have-16-cores/

Of course, if it's 16 big cores vs Intel's half-big, half-small cores, then it really won't matter as long as those 16 cores perform better.

https://twitter.com/greymon55/status/1415112419374878725?s=20

If the GPU setups end up like this, then I think that while NVidia could certainly win out between the two at release, I fear that over time AMD will have the better legs and drop off less in performance than NVidia.

Why, you might ask? Well, because the super-high-speed VRAM that NVidia is using is both rare and expensive, meaning that they will again put in only the necessary amount for the time being and not enough for the future. Expect a 4070Ti with 10GB and a 4080 with 12GB of VRAM.

This is already the limit right now: Doom Eternal on Ultra Nightmare with ray tracing already needs 9-10GB of VRAM. As such, any GPU with less memory, like, say, the 3070Ti, will get beaten by weaker GPUs with more VRAM, like, I don't know, a 3060? And even the AMD GPUs are faster then, because the NVidia card is bottlenecked too much: https://www.hardwaretimes.com/amds-radeon-rx-6800-and-the-rtx-3060-are-faster-than-rtx-3070-in-doom-eternal-w-ray-tracing-enabled/
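
To show why a nominally weaker card with more VRAM can come out ahead in that scenario, here's a minimal toy sketch (the working-set size, frame rates and spill penalty are made-up placeholders, not measured Doom Eternal or RTX/Radeon figures): once the frame's working set no longer fits in VRAM, data spills over PCIe to system RAM and the faster GPU falls off a cliff.

```python
# Toy model of the VRAM cliff. All numbers are illustrative guesses,
# not measured benchmarks.

def expected_fps(base_fps, working_set_gb, vram_gb, spill_penalty=0.45):
    """Return base_fps if the frame's working set fits in VRAM,
    otherwise apply a heavy penalty for spilling over PCIe to system RAM."""
    return base_fps if working_set_gb <= vram_gb else base_fps * spill_penalty

WORKING_SET_GB = 9.5  # assumed requirement for max settings + ray tracing

for name, base_fps, vram_gb in [("faster card, 8 GB", 120, 8),
                                ("slower card, 12 GB", 90, 12)]:
    print(f"{name}: ~{expected_fps(base_fps, WORKING_SET_GB, vram_gb):.0f} fps")
```

The exact penalty doesn't matter; the point is only that the card that fits the working set keeps its full frame rate while the one that doesn't takes a disproportionate hit.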

_________________________________

As for Raphael, this was to be expected. I don't think they will change before Zen 5, where the rumour is that AMD will move to 12-core chiplets, which would mean 24 cores with two chiplets.

If this is true, then I expect AMD to build some monolithic hexacore for the lower-end CPUs, like they're doing now for the Ryzen 3/Athlon/Sempron with Zen 3 in 12nm, just in that case probably with Zen 5 on 7nm, or maybe 5nm (if Zen 5 ends up on 3nm).




Bofferbrauer2 said:
Captain_Yuri said:

Time for some rumours! Take 'em with large grains of salt!

RDNA3 = TSMC 5nm + 6nm Chiplets + Doubling Infinity cache vs Lovelace = TSMC 5nm monolithic + GDDR6X

If true, the next generation of GPUs is going to get even more interesting than we once thought. Especially if Lovelace uses TSMC's 5nm instead of Samsung's, as that would put Nvidia at node parity with AMD. But RDNA3 might still have an advantage with its chiplet design. The other interesting thing, if the rumour is legit, would be AMD doubling their Infinity Cache, whereas Nvidia looks to continue with their higher memory bus/bandwidth route.

AMD Raphael Zen4 Based CPUs Rumored To Only Have 16 Cores

https://wccftech.com/amd-raphael-zen4-based-cpus-rumored-to-only-have-16-cores/

Of course, if it's 16 big cores vs Intel's half-big, half-small cores, then it really won't matter as long as those 16 cores perform better.

https://twitter.com/greymon55/status/1415112419374878725?s=20

If the GPU setups end up like this, then I think that while NVidia could certainly win out between the two at release, I fear that over time AMD will have the better legs and drop off less in performance than NVidia.

Why, you might ask? Well, because the super-high-speed VRAM that NVidia is using is both rare and expensive, meaning that they will again put in only the necessary amount for the time being and not enough for the future. Expect a 4070Ti with 10GB and a 4080 with 12GB of VRAM.

This is already the limit right now: Doom Eternal on Ultra Nightmare with ray tracing already needs 9-10GB of VRAM. As such, any GPU with less memory, like, say, the 3070Ti, will get beaten by weaker GPUs with more VRAM, like, I don't know, a 3060? And even the AMD GPUs are faster then, because the NVidia card is bottlenecked too much: https://www.hardwaretimes.com/amds-radeon-rx-6800-and-the-rtx-3060-are-faster-than-rtx-3070-in-doom-eternal-w-ray-tracing-enabled/

_________________________________

As for Raphael, this was to be expected. I don't think they will change before Zen 5, where the rumour is that AMD will move to 12-core chiplets, which would mean 24 cores with two chiplets.

If this is true, then I expect AMD to build some monolithic hexacore for the lower-end CPUs, like they're doing now for the Ryzen 3/Athlon/Sempron with Zen 3 in 12nm, just in that case probably with Zen 5 on 7nm, or maybe 5nm (if Zen 5 ends up on 3nm).

Well, I am sure both companies will increase VRAM capacity next generation regardless, so I doubt that will be the case. GDDR6X is expensive *right now*, but memory chips get cheaper over time just like everything else. GDDR6 was also expensive at one point.

As for Doom Eternal, the Ultra Nightmare texture quality setting has no visual difference from the Ultra setting, according to DF. Most likely, Ultra Nightmare texture quality needs resolutions well above 4K for it to really make a visual difference. So enabling a setting that makes no visual difference other than eating up VRAM doesn't make much sense, and I highly doubt it will be the norm in the future. There might be settings for higher-resolution texture quality, sure, but you shouldn't be enabling, say, texture quality settings meant for 6K on a GPU meant for 1440p. Especially as things like DirectStorage and other next-generation technologies, which are supposed to lessen VRAM usage, start becoming integrated into engines.

So I wouldn't say RDNA cards will age better as Ray Tracing continues to be implemented in more and more games.

Last edited by Jizz_Beard_thePirate - on 14 July 2021

                  

PC Specs: CPU: 7800X3D || GPU: Strix 4090 || RAM: 32GB DDR5 6000 || Main SSD: WD 2TB SN850

Captain_Yuri said:
Bofferbrauer2 said:

If the GPU setups end up like this, then I think that while NVidia could certainly win out between the two at release, I fear that over time AMD will have the better legs and drop off less in performance than NVidia.

Why, you might ask? Well, because the super-high-speed VRAM that NVidia is using is both rare and expensive, meaning that they will again put in only the necessary amount for the time being and not enough for the future. Expect a 4070Ti with 10GB and a 4080 with 12GB of VRAM.

This is already the limit right now: Doom Eternal on Ultra Nightmare with ray tracing already needs 9-10GB of VRAM. As such, any GPU with less memory, like, say, the 3070Ti, will get beaten by weaker GPUs with more VRAM, like, I don't know, a 3060? And even the AMD GPUs are faster then, because the NVidia card is bottlenecked too much: https://www.hardwaretimes.com/amds-radeon-rx-6800-and-the-rtx-3060-are-faster-than-rtx-3070-in-doom-eternal-w-ray-tracing-enabled/

_________________________________

As for Raphael, this was to be expected. I don't think they will change before Zen 5, where the rumour is that AMD will move to 12-core chiplets, which would mean 24 cores with two chiplets.

If this is true, then I expect AMD to build some monolithic hexacore for the lower-end CPUs, like they're doing now for the Ryzen 3/Athlon/Sempron with Zen 3 in 12nm, just in that case probably with Zen 5 on 7nm, or maybe 5nm (if Zen 5 ends up on 3nm).

Well, I am sure both companies will increase VRAM capacity next generation regardless, so I doubt that will be the case. GDDR6X is expensive *right now*, but memory chips get cheaper over time just like everything else. GDDR6 was also expensive at one point.

As for Doom Eternal, the Ultra Nightmare texture quality setting has no visual difference from its lower texture quality settings, according to DF. Most likely, Ultra Nightmare texture quality needs resolutions well above 4K for it to really make a visual difference. So enabling a setting that makes no visual difference other than eating up VRAM doesn't make much sense, and I highly doubt it will be the norm in the future. There might be settings for higher-resolution texture quality, sure, but you shouldn't be enabling, say, texture quality settings meant for 6K on a GPU meant for 1440p. Especially as things like DirectStorage and other next-generation technologies, which are supposed to lessen VRAM usage, start becoming integrated into engines.

So I wouldn't say RDNA cards will age better as Ray Tracing continues to be implemented in more and more games.

I'm not saying AMD cards will necessarily age better, just that 8GiB of VRAM is already too low for something in the performance class of a 3070, and especially its Ti version, and that NVidia probably won't increase the amount of VRAM much next gen, resulting in the same problem happening again next gen.

I know Ultra Nightmare doesn't really change the look of the game, but it's an important benchmarking tool when you want to test the longevity of GPUs, and here the 8GiB GPUs simply fail.



Bofferbrauer2 said:
Captain_Yuri said:

Well, I am sure both companies will increase VRAM capacity next generation regardless, so I doubt that will be the case. GDDR6X is expensive *right now*, but memory chips get cheaper over time just like everything else. GDDR6 was also expensive at one point.

As for Doom Eternal, the Ultra Nightmare texture quality setting has no visual difference from its lower texture quality settings, according to DF. Most likely, Ultra Nightmare texture quality needs resolutions well above 4K for it to really make a visual difference. So enabling a setting that makes no visual difference other than eating up VRAM doesn't make much sense, and I highly doubt it will be the norm in the future. There might be settings for higher-resolution texture quality, sure, but you shouldn't be enabling, say, texture quality settings meant for 6K on a GPU meant for 1440p. Especially as things like DirectStorage and other next-generation technologies, which are supposed to lessen VRAM usage, start becoming integrated into engines.

So I wouldn't say RDNA cards will age better as Ray Tracing continues to be implemented in more and more games.

I'm not saying AMD cards will necessarily age better, just that 8GiB of VRAM is already too low for something in the performance class of a 3070, and especially its Ti version, and that NVidia probably won't increase the amount of VRAM much next gen, resulting in the same problem happening again next gen.

I know Ultra Nightmare doesn't really change the look of the game, but it's an important benchmarking tool when you want to test the longevity of GPUs, and here the 8GiB GPUs simply fail.

I think the 3070 is fine, but the 3070 Ti is an absolute joke and no one should be getting it over a 6800. There is a good reason why I keep recommending a 3060 Ti or 3080 instead of all the other Nvidia cards, as those are the best all-around in their class. We will see how much Nvidia will increase VRAM by, but we have had notable jumps before, like going from 3.5GB on the 970 to 8GB on the 1070.

The only thing the Ultra Nightmare setting tests is how much VRAM capacity your GPU has if you aren't using resolutions above 4K. It's not a good test of longevity, because DirectStorage and texture streaming technologies are supposed to alleviate VRAM capacity pressure, unlike previous generations. The longevity of a GPU is about more than a texture quality setting; ray tracing is the real next-gen technology that tests the longevity of these GPUs, as it gets implemented in more and more games while showing an actual visual difference.
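
To sketch what texture streaming is supposed to buy here (purely illustrative numbers, not any real engine's or DirectStorage's actual behaviour): instead of keeping every texture's full mip chain resident, the engine keeps only the mip level the current view actually needs, which cuts resident VRAM dramatically.

```python
# Illustrative sketch of why mip-based texture streaming lowers resident VRAM.
# Texture counts and sizes are hypothetical; real engines are far more involved.

def full_chain_bytes(top_res, bytes_per_pixel=4):
    """VRAM for a square texture with its complete mip chain (~4/3 of the top mip)."""
    size, total = top_res, 0
    while size >= 1:
        total += size * size * bytes_per_pixel
        size //= 2
    return total

def streamed_bytes(top_res, needed_mip, bytes_per_pixel=4):
    """VRAM if only the mip level actually needed for the current view is resident."""
    res = max(1, top_res >> needed_mip)
    return res * res * bytes_per_pixel

NUM_TEXTURES = 500   # assumed number of 4K-source textures in a scene
GIB = 1024 ** 3

everything_resident = NUM_TEXTURES * full_chain_bytes(4096)
streamed = NUM_TEXTURES * streamed_bytes(4096, needed_mip=2)  # most surfaces don't need mip 0

print(f"everything resident:   {everything_resident / GIB:.1f} GiB")
print(f"streaming needed mips: {streamed / GIB:.1f} GiB")
```

The point of the toy numbers is just that the resident footprint is driven by what the camera actually needs, not by the settings menu's maximum, which is why streaming-heavy engines are expected to ease the pressure on 8GB cards.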



                  

PC Specs: CPU: 7800X3D || GPU: Strix 4090 || RAM: 32GB DDR5 6000 || Main SSD: WD 2TB SN850

Captain_Yuri said:
Bofferbrauer2 said:

I'm not saying AMD cards will necessarily age better, just that 8GiB of VRAM is already too low for something in the performance class of a 3070, and especially its Ti version, and that NVidia probably won't increase the amount of VRAM much next gen, resulting in the same problem happening again next gen.

I know Ultra Nightmare doesn't really change the look of the game, but it's an important benchmarking tool when you want to test the longevity of GPUs, and here the 8GiB GPUs simply fail.

The only thing the Ultra Nightmare setting tests is how much VRAM capacity your GPU has if you aren't using resolutions above 4K. It's not a good test of longevity, because DirectStorage and texture streaming technologies are supposed to alleviate VRAM capacity pressure, unlike previous generations. The longevity of a GPU is about more than a texture quality setting; ray tracing is the real next-gen technology that tests the longevity of these GPUs, as it gets implemented in more and more games while showing an actual visual difference.

There have always been some techniques to lower VRAM usage. All they do is delay the need for more VRAM by a bit. They can maybe delay it for longer than usual, but that's about it: at some point 8GiB will be too small for anything, and I think that will come sooner than you expect. As in, it will start to be a limiting factor late next year already.



Captain_Yuri said:

*snip*

The other interesting video was about Windows 11. They show you ways to bypass Windows' checks and give you quite a lot of other info.

I'm unsure who the target audience of that video is.

On the one hand, those willing to modify some files to fool Windows will probably have systems that pass all the requirements or, if not, are on the verge of upgrading. Meanwhile, I feel like those with older PCs that may not pass all the checks are less likely to modify files just to get a newer OS they don't need.

So, yeah, good video, and it's good to know that there are ways to get W11 to work, but I doubt it will be something many will even attempt.

Captain_Yuri said:

Time for some rumours! Take 'em with large grains of salt!

RDNA3 = TSMC 5nm + 6nm Chiplets + Doubling Infinity cache vs Lovelace = TSMC 5nm monolithic + GDDR6X

If true, the next generation of GPUs is going to get even more interesting than we once thought. Especially if Lovelace uses TSMC's 5nm instead of Samsung's, as that would put Nvidia at node parity with AMD. But RDNA3 might still have an advantage with its chiplet design. The other interesting thing, if the rumour is legit, would be AMD doubling their Infinity Cache, whereas Nvidia looks to continue with their higher memory bus/bandwidth route.


30 Series Super refresh is coming

Hopefully, unlike the Ti cards, which these days mostly slot in between existing models, the Super refresh will replace cards outright with more performance. We will see though!

AMD Raphael Zen4 Based CPUs Rumored To Only Have 16 Cores

https://wccftech.com/amd-raphael-zen4-based-cpus-rumored-to-only-have-16-cores/

Of course, if it's 16 big cores vs Intel's half-big, half-small cores, then it really won't matter as long as those 16 cores perform better.

https://twitter.com/greymon55/status/1415112419374878725?s=20

I doubt the 16 cores for Zen 4 will be a problem. If you need more than that, you're not an ordinary user and you may be better off getting a Threadripper system.

The 30 series refresh is interesting. We'll see how much performance they can squeeze from Samsung and those chips, if they're made by Samsung, of course.

As for RDNA3... I think sticking with a 192/256-bit bus is a mistake. It's clear that the cache helps a lot, and it's easy to see at 1080p, but as we move to higher resolutions the performance advantage drops fast, all because the memory bus can't feed the cache fast enough. That's why the advantage Navi cards have at 1080p disappears at 1440p, and then they lose at 4K.

Increasing the cache size will help, of course, but they really need to beef up the memory bus at the same time to avoid bottlenecks.
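
As a rough way to see that cache/bus interaction, here's a small illustrative calculation (the bandwidth figures and hit rates are assumptions for the sake of the example, not real RDNA2/RDNA3 or Lovelace specs): effective bandwidth is a blend of cache and VRAM bandwidth weighted by the hit rate, and the hit rate falls as the frame's working set grows with resolution.

```python
# Illustrative sketch: effective bandwidth of a GPU with a big last-level cache.
# Bandwidth figures and hit rates are assumptions, not real hardware specs.

def effective_bandwidth(cache_bw_gbs, mem_bw_gbs, hit_rate):
    """Blend cache and VRAM bandwidth by the fraction of accesses served from cache."""
    return hit_rate * cache_bw_gbs + (1.0 - hit_rate) * mem_bw_gbs

CACHE_BW_GBS = 1600.0   # assumed on-die cache bandwidth
MEM_BW_GBS = 512.0      # assumed external GDDR6 bus bandwidth

# Hit rate tends to fall as the working set grows with resolution (assumed values).
for resolution, hit_rate in [("1080p", 0.80), ("1440p", 0.65), ("4K", 0.45)]:
    bw = effective_bandwidth(CACHE_BW_GBS, MEM_BW_GBS, hit_rate)
    print(f"{resolution}: ~{bw:.0f} GB/s effective")
```

Once the hit rate slides, the effective figure converges toward the narrow external bus, which matches the pattern of Navi's 1080p lead shrinking at 1440p and evaporating at 4K; a wider bus (or a bigger cache that keeps the hit rate up) raises that floor.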



Please excuse my bad English.

Former gaming PC: i5-4670k@stock (for now), 16Gb RAM 1600 MHz and a GTX 1070

Current gaming PC: R5-7600, 32GB RAM 6000MT/s (CL30) and a RX 9060XT 16GB

Steam / Live / NNID : jonxiquet    Add me if you want, but I'm a single player gamer.

Bofferbrauer2 said:
Captain_Yuri said:

The only thing the Ultra Nightmare setting tests is how much VRAM capacity your GPU has if you aren't using resolutions above 4K. It's not a good test of longevity, because DirectStorage and texture streaming technologies are supposed to alleviate VRAM capacity pressure, unlike previous generations. The longevity of a GPU is about more than a texture quality setting; ray tracing is the real next-gen technology that tests the longevity of these GPUs, as it gets implemented in more and more games while showing an actual visual difference.

There have always been some techniques to lower VRAM usage. All they do is delay the need for more VRAM by a bit. They can maybe delay it for longer than usual, but that's about it: at some point 8GiB will be too small for anything, and I think that will come sooner than you expect. As in, it will start to be a limiting factor late next year already.

At some point, all of the VRAM sizes we currently have will be too small, including 16GB and 24GB. It starting to be a limiting factor late next year doesn't make much sense, as DirectStorage and a lot of these engines with new texture streaming technologies won't be out until next year. So realistically, it will be a few generations before 8GB becomes a limitation at 1440p, unless you use nonsense settings that are meant for 4K+ displays.

Realistically, when a person has to choose between a 3070 and a 6700XT, it will come down to this. On one hand, a 6700XT will give you the ability to play some games at "Ultra Nightmare" texture quality, which looks no different from dropping down to the Ultra setting, while losing to a 3060 in increasingly ray-tracing-heavy games and losing to a 3070 in the majority of raster games. Or a 3070, where you might have to lower the texture quality setting in some games with little to no visual downgrade, while being faster in raster than a 6700XT and leagues ahead in ray tracing. Not to mention having the benefits of DLSS and so on. I wonder which one will age better...



                  

PC Specs: CPU: 7800X3D || GPU: Strix 4090 || RAM: 32GB DDR5 6000 || Main SSD: WD 2TB SN850