
Carzy Zarx’s PC Gaming Emporium - Catch Up on All the Latest PC Gaming Related News

Captain_Yuri said:

New Cyberpunk trailer

Speaking of Cyberpunk 2077...

https://www.youtube.com/watch?v=s0oY5Ms0WlI



Zkuq said:
JEMC said:

Io Interactive is making a James Bond game
https://www.pcgamer.com/io-interactive-is-making-a-james-bond-game/
Io Interactive has announced it's working on an untitled James Bond game, Project 007. The teaser video shows no information about the game, merely a version of the classic gun chamber opening sequence from Bond movies.

This potentially seems like a great fit.

They certainly have the capacity to make a great spy/stealth game, but the question is what kind of game they'll try to make. After all, James Bond isn't known for his stealthiness, but rather for the action and the mess he leaves behind.

Because I assume it will be a him, and not a her inspired by the next movie.

Last edited by JEMC - on 19 November 2020

Please excuse my bad English.

Former gaming PC: i5-4670k@stock (for now), 16GB RAM 1600 MHz and a GTX 1070

Current gaming PC: R5-7600, 32GB RAM 6000MT/s (CL30) and a RX 9060XT 16GB

Steam / Live / NNID : jonxiquet    Add me if you want, but I'm a single player gamer.

JEMC said:
Zkuq said:
JEMC said:

Io Interactive is making a James Bond game
https://www.pcgamer.com/io-interactive-is-making-a-james-bond-game/
Io Interactive has announced it's working on an untitled James Bond game, Project 007. The teaser video shows no information about the game, merely a version of the classic gun chamber opening sequence from Bond movies.

This potentially seems like a great fit.

They certainly have the capacity to make a great spy/stealth game, but the question is what kind of game they'll try to make. After all, James Bond isn't known for his stealthiness, but rather for the action and the mess he leaves behind.

Because I assume it will be a him, and not a her inspired by the next movie.

I agree. I mean, Bond does tend to go for stealth quite often, but usually things go south and chaos ensues - which means there is some room for stealth as well! But action certainly has to work too, otherwise it won't really be a Bond game. IOI ought to have the capacity to nail the stealth part, but it's the action part that seems less certain. I imagine the game will at least have a good amount of player choice, which would fit an agent game really well - I think even a Bond game.

(Also, I'm personally not a huge fan of having to even discuss the gender/sex of an established character. If you want change, create a new character, put him/her/them/it/whatever in a great game, and I'll buy it regardless of the gender. Did I mention yet that I hate English for having gendered pronouns?)



Zkuq said:
JEMC said:
Zkuq said:
JEMC said:

Io Interactive is making a James Bond game
https://www.pcgamer.com/io-interactive-is-making-a-james-bond-game/
Io Interactive has announced it's working on an untitled James Bond game, Project 007. The teaser video shows no information about the game, merely a version of the classic gun chamber opening sequence from Bond movies.

This potentially seems like a great fit.

They certainly have the capacity to make a great spy/stealth game, but the question is what kind of game they'll try to make. After all, James Bond isn't known for his stealthiness, but rather for the action and the mess he leaves behind.

Because I assume it will be a him, and not a her inspired by the next movie.

I agree. I mean, Bond does tend to go for stealth quite often, but usually things go south and chaos ensues - which means there is some room for stealth as well! But action certainly has to work too, otherwise it won't really be a Bond game. IOI ought to have the capacity to nail the stealth part, but it's the action part that seems less certain. I imagine the game will at least have a good amount of player choice, which would fit an agent game really well - I think even a Bond game.

(Also, I'm personally not a huge fan of having to even discuss the gender/sex of an established character. If you want change, create a new character, put him/her/them/it/whatever in a great game, and I'll buy it regardless of the gender. Did I mention yet that I hate English for having gendered pronouns?)

I couldn't have said it better, all of it.



Please excuse my bad English.

Former gaming PC: i5-4670k@stock (for now), 16GB RAM 1600 MHz and a GTX 1070

Current gaming PC: R5-7600, 32GB RAM 6000MT/s (CL30) and a RX 9060XT 16GB

Steam / Live / NNID : jonxiquet    Add me if you want, but I'm a single player gamer.

Captain_Yuri said:

Also, as for the whole DLSS-on-AMD question: the thing to remember is that Tensor cores are just very specialized cores that accelerate INT4 and INT8 operations, and that sort of function is also available on RDNA 2. But as with the RT implementation on RDNA 2, it's just slower than even Turing, though it is still doable.
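
To make the INT8 part concrete, here's a minimal sketch of what quantizing FP32 weights down to INT8 looks like; this is the kind of low-precision math those cores accelerate. The array shape and values are made up for illustration:

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor quantization of FP32 values to INT8."""
    scale = float(np.abs(x).max()) / 127.0  # map the largest magnitude to 127
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Map the INT8 values back to approximate FP32."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
weights = rng.standard_normal((4, 4)).astype(np.float32)  # stand-in for network weights
q, scale = quantize_int8(weights)
print("max round-trip error:", float(np.abs(weights - dequantize(q, scale)).max()))
```

The point is that the network's math can run in cheap 8-bit integer operations with only a small accuracy loss, which is what dedicated hardware for INT8 speeds up.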

DF did sort of go over the whole thing back in July with the Series X.

The main benefit of Tensor Cores is how little frame time the ML upscaling pass costs.

What that means is, you take the frame rate the game is running at for a given resolution, add the upscaling render time to the frame time, and you get the performance penalty. Obviously, with more Tensor Cores and ML accelerators, the render time becomes less and less.

So, as an example:

If a game is running at 30fps, that's 33 ms per frame. With the DLSS render-time hit, a 2060 would run the game at 28fps vs 26fps on the Series X. Now, obviously the Series X is faster in raster, but the Tensor Core performance of a 2080, which is its main raster competitor, is leagues faster. And because of that, at the same resolution with the same settings, and let's say using DLSS/DirectML on both systems, Nvidia will still be faster by a noticeable margin because of the Tensor Cores. But you don't need Tensor Cores to do something like DLSS, as the performance advantage of the lower internal resolution outweighs the performance penalty.
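
As a back-of-the-envelope version of that math: the per-frame upscaling costs below are illustrative values back-solved from the quoted 28 and 26 fps figures, not measured numbers.

```python
def fps_after_dlss(base_fps, dlss_cost_ms):
    """Add a fixed per-frame upscaling cost to the base frame time."""
    frame_time_ms = 1000.0 / base_fps  # 30 fps -> ~33.3 ms per frame
    return 1000.0 / (frame_time_ms + dlss_cost_ms)

base = 30.0
print(f"RTX 2060 (assumed ~2.4 ms upscaling pass): {fps_after_dlss(base, 2.4):.0f} fps")  # ~28
print(f"Series X (assumed ~5.1 ms upscaling pass): {fps_after_dlss(base, 5.1):.0f} fps")  # ~26
```

The same arithmetic also shows the second point: if dropping the internal resolution raises the base frame rate by more than the upscaling pass costs, the technique is a net win even without dedicated hardware.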

Personally, I think by the time AMD has a comparable DLSS competitor, it's going to be 2022 or later. RDNA 2 does one thing really well, and that's raster performance below 4K. I wouldn't get RDNA 2 for anything else.

Do we have a deep dive of RDNA2 yet? Does it have dedicated INT cores at all? Because as far as I understand it, it's not just that Nvidia's Tensor cores do those operations faster, it's also that they are dedicated. That means that AMD executing those functions will also rob them of cores that could be used for other operations, resulting in an even harder performance hit, especially at higher resolutions.



If you demand respect or gratitude for your volunteer work, you're doing volunteering wrong.


A stock-cooled 6800XT beats the 3DMark Fire Strike record of an LN2-cooled 3090. A few caveats here: it has tweaked tessellation settings, so it's not eligible for the HoF, and with untweaked settings it's still 2000 points behind.

https://www.3dmark.com/hall-of-fame-2/fire+strike+3dmark+score+performance+preset/version+1.1/1+gpu

Also, this is the regular Fire Strike, meaning 1080p, so it makes sense that AMD can make up good ground here thanks to their extremely high clocks. It seems that for Fire Strike Extreme you would need a 6900XT to get close, and for Fire Strike Ultra it doesn't even seem to be a contest.

I can only admire how AMD managed to be this power efficient with such high clocks. There's either a cheap trick that I'm not seeing, or just top-notch engineering.



If you demand respect or gratitude for your volunteer work, you're doing volunteering wrong.

vivster said:
Captain_Yuri said:

Also, as for the whole DLSS-on-AMD question: the thing to remember is that Tensor cores are just very specialized cores that accelerate INT4 and INT8 operations, and that sort of function is also available on RDNA 2. But as with the RT implementation on RDNA 2, it's just slower than even Turing, though it is still doable.

DF did sort of go over the whole thing back in July with the Series X.

The main benefit of Tensor Cores is how little frame time the ML upscaling pass costs.

What that means is, you take the frame rate the game is running at for a given resolution, add the upscaling render time to the frame time, and you get the performance penalty. Obviously, with more Tensor Cores and ML accelerators, the render time becomes less and less.

So, as an example:

If a game is running at 30fps, that's 33 ms per frame. With the DLSS render-time hit, a 2060 would run the game at 28fps vs 26fps on the Series X. Now, obviously the Series X is faster in raster, but the Tensor Core performance of a 2080, which is its main raster competitor, is leagues faster. And because of that, at the same resolution with the same settings, and let's say using DLSS/DirectML on both systems, Nvidia will still be faster by a noticeable margin because of the Tensor Cores. But you don't need Tensor Cores to do something like DLSS, as the performance advantage of the lower internal resolution outweighs the performance penalty.

Personally, I think by the time AMD has a comparable DLSS competitor, it's going to be 2022 or later. RDNA 2 does one thing really well, and that's raster performance below 4K. I wouldn't get RDNA 2 for anything else.

Do we have a deep dive of RDNA2 yet? Does it have dedicated INT cores at all? Because as far as I understand it, it's not just that Nvidia's Tensor cores do those operations faster, it's also that they are dedicated. That means that AMD executing those functions will also rob them of cores that could be used for other operations, resulting in an even harder performance hit, especially at higher resolutions.

https://www.pcgameshardware.de/Radeon-RX-6800-XT-Grafikkarte-276951/Tests/Benchmark-Release-Review-vs-RTX-3080-1361423/2/

It's German, but that shouldn't be too much of a problem for you, right?



  Bofferbrauer2 said:

https://www.pcgameshardware.de/Radeon-RX-6800-XT-Grafikkarte-276951/Tests/Benchmark-Release-Review-vs-RTX-3080-1361423/2/

It's German, but that shouldn't be too much of a problem for you, right?

Interesting read, but there is still too much we don't know. A few things I took away from it:

1. Whatever AMD does against DLSS won't be nearly as effective, because they're missing the dedicated hardware for it.

2. Their RT sucks balls. It looks more like an afterthought compared to Nvidia, who really treat it as an integral part of future rendering.

3. Infinity Cache is cool and all, but it feels more like a band-aid than an actual revolution. I can easily see it going away again once they realize they actually need more room for shaders. Nvidia used that room for more RT and Tensor cores and helped themselves with GDDR6X instead. Infinity Cache is only twice as fast as GDDR6X, but so much smaller, which gives Nvidia the edge in versatility, while AMD needs the small edge in performance to stay competitive. It also seems the cache is helping AMD reach higher clocks, because cache is much more energy efficient than DRAM (see the bandwidth sketch after this list).

4. AMD seems to have gone intentionally for extremely high clock speeds. However, that will only help them at lower resolutions, as we've seen. Yet again it's AMD trying to brute-force performance. 4K has definitely not been their goal with this generation.
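
Regarding point 3, here's a rough sketch of why a small, fast cache only pays off while the hit rate stays high. Both bandwidth figures and the hit rates are assumptions for illustration, not measured specs:

```python
def effective_bandwidth(cache_gbs, dram_gbs, hit_rate):
    """Blend cache and DRAM bandwidth by the fraction of accesses that hit the cache."""
    return hit_rate * cache_gbs + (1.0 - hit_rate) * dram_gbs

CACHE_GBS = 1990.0  # Infinity Cache peak, assumed figure
DRAM_GBS = 512.0    # 256-bit GDDR6 bus, assumed figure

for hit_rate in (0.75, 0.50, 0.25):  # hit rate tends to drop as resolution rises
    print(f"hit rate {hit_rate:.0%}: ~{effective_bandwidth(CACHE_GBS, DRAM_GBS, hit_rate):,.0f} GB/s")
```

Once the working set at 4K stops fitting in the cache, the blended bandwidth collapses toward the narrow GDDR6 bus, which also lines up with point 4.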

Learning so much about the architectures of Ampere and RDNA2 makes me really excited about next gen because both of them still have a huge potential to improve while staying on the same node.

Last edited by vivster - on 20 November 2020

If you demand respect or gratitude for your volunteer work, you're doing volunteering wrong.

vivster said:
 Bofferbrauer2 said:

https://www.pcgameshardware.de/Radeon-RX-6800-XT-Grafikkarte-276951/Tests/Benchmark-Release-Review-vs-RTX-3080-1361423/2/

It's German, but that shouldn't be too much of a problem for you, right?

Learning so much about the architectures of Ampere and RDNA2 makes me really excited about next gen because both of them still have a huge potential to improve while staying on the same node.

It'll be interesting to see what Nvidia's plans are. They've been on an every-other-year cadence for new series since the launch of the 900 series. At least according to AMD's roadmap, they're planning on releasing RDNA 3-based cards in 2021, most likely near the end.

Will Nvidia respond with a new generation of cards, or wait till 2022?

Many might disagree, but nothing would make me happier than getting back to a new gen every year. It just means that when I do finally upgrade, it'll be an even bigger jump. I've never been the guy who cared about a new card that's way more powerful than what I own coming out 6 or more months after I bought something. The faster the progress the better, even if I'll be further behind the newest tech for a longer period of time.



Cyran said:
vivster said:
 Bofferbrauer2 said:

https://www.pcgameshardware.de/Radeon-RX-6800-XT-Grafikkarte-276951/Tests/Benchmark-Release-Review-vs-RTX-3080-1361423/2/

It's German, but that shouldn't be too much of a problem for you, right?

Learning so much about the architectures of Ampere and RDNA2 makes me really excited about next gen because both of them still have a huge potential to improve while staying on the same node.

It'll be interesting to see what Nvidia's plans are. They've been on an every-other-year cadence for new series since the launch of the 900 series. At least according to AMD's roadmap, they're planning on releasing RDNA 3-based cards in 2021, most likely near the end.

Will Nvidia respond with a new generation of cards, or wait till 2022?

Many might disagree, but nothing would make me happier than getting back to a new gen every year. It just means that when I do finally upgrade, it'll be an even bigger jump. I've never been the guy who cared about a new card that's way more powerful than what I own coming out 6 or more months after I bought something. The faster the progress the better, even if I'll be further behind the newest tech for a longer period of time.

Tighter cycles won't really result in more performance over time. One of the biggest reasons the cycles have drifted apart is new nodes: everyone is struggling to get good yields on ever smaller nodes. I mean, look at Intel. The only reason AMD is in front currently is that they're using smaller nodes; otherwise even they wouldn't be able to improve that much. It's especially difficult for GPUs, since they have to be high-powered and huge. There is only so much you can do with architecture: since so much is parallelized, more power will only come from more cores. More cores means bigger chips, and as such terrible yields. ARM doesn't have that issue (yet) since their chips are tiny. When it comes to performance increases, GPUs are pretty much fucked compared to any other semiconductor. They pretty much need to get wider, and now with RT they need to sacrifice even more room. I doubt 5nm is in any way ready to deliver huge GPU chips, so we'll have to wait a bit for the next big jump.
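
The yield point can be put in rough numbers with the classic Poisson yield model; the defect density below is an illustrative assumption, not a real fab figure:

```python
import math

def poisson_yield(die_area_mm2, defects_per_mm2):
    """Fraction of defect-free dies under a simple Poisson defect model."""
    return math.exp(-die_area_mm2 * defects_per_mm2)

D0 = 0.001  # defects per mm^2, illustrative assumption

for area in (100, 300, 600):  # small mobile chip vs mid-size GPU vs big GPU
    print(f"{area:>3} mm^2 die: ~{poisson_yield(area, D0):.0%} defect-free")
```

Grow the die and the odds of a defect-free chip fall off exponentially, which is exactly why huge GPU dies are the worst case on a young node.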

I don't mind shorter cycles. They're terrible for the environment but always better for the consumer. I don't expect Nvidia to come out with a new architecture in 2021, but I do expect refreshes to compete with whatever AMD delivers. AMD are the ones who need to prove themselves, since RDNA2 is definitely not it yet.



If you demand respect or gratitude for your volunteer work, you're doing volunteering wrong.