
Carzy Zarx’s PC Gaming Emporium - Catch Up on All the Latest PC Gaming Related News

haxxiy said:

I get what you're saying, but Turing, Ampere and RDNA3 all either matched or underperformed their node gains. The last time a new architecture delivered a meaningful jump in efficiency with other factors being equal was Maxwell and RDNA1. That would mean simply refined architectures have been faring just as well in efficiency, on average, so I'm not sure there's a trend here.

That being said, at least for the next generation there's room between the 4090's TDP and its non-RT gaming power draw, as well as the possibility of making a bigger chip with safer clocks even with meager efficiency gains from 3nm...

The issue is, nodes aren't decreasing in geometric feature sizes like they used to.

The fabrication "node" size is just a marketing number, not a representation of actual node improvements or geometric feature size, which is why Intel moved to a different naming scheme.

TSMC/GlobalFoundries/Samsung have also been guilty of shrinking just the BEOL or FEOL and calling it a "new" node... whereas historically a new node meant a shrink of everything, rather than just one aspect.

It does mean that we tend to see a more linear improvement in chip manufacturing over time, rather than big jumps.

Captain_Yuri said:

A: Imo, at best the uplift from increasing memory bandwidth will be a few % because of how power limited the APU is. We can see this because increasing the power netted big gains even at the same memory speed. So even if we are generous and say a 15% gen-on-gen uplift at 25 watts, I think it's pretty disappointing. Perhaps RDNA 4 will fix a lot of RDNA 3's shortcomings, as Lovelace certainly fixed a lot of Ampere's shortcomings after Nvidia went dual FP32 for the first time.

This. I have been playing with AMD notebooks for years... and TDP has always been the biggest limiter on performance.

If you know a game is lightly threaded, you can actually increase CPU performance by setting its affinity to just enough cores (ideally spread across the chip rather than packed into one section) and letting them hit higher boost frequencies.

Sometimes playing with power management and limiting the CPU's clockspeed allows the chip to funnel more TDP into the GPU, if you know you're TDP bound on the GPU side. This was a tactic I used on my old Ryzen 2700U notebook with Vega 10 graphics... I still got improvements with this method on the 4700U, but they were smaller thanks to its efficiency improvements. The 2400MHz vs 3200MHz memory bandwidth difference between the two devices wasn't a deal breaker unless I started to push the higher 1080p resolutions, which is when the fillrate limits kicked in.
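For anyone who wants to try the affinity trick without setting it by hand in Task Manager every launch, here's a minimal sketch using Python's psutil library. The process name and core list are placeholders I made up for illustration, so adjust them for your own game and CPU layout (ideally one core per CCX/cluster rather than all from one section).

# Minimal sketch: pin a lightly threaded game to a handful of cores spread
# across the chip so they can hold higher boost clocks.
# "game.exe" and PREFERRED_CORES are hypothetical - substitute your own.
import psutil

GAME_PROCESS = "game.exe"        # placeholder process name
PREFERRED_CORES = [0, 4, 8, 12]  # example logical cores, adjust to your CPU

for proc in psutil.process_iter(["name"]):
    if (proc.info["name"] or "").lower() == GAME_PROCESS:
        proc.cpu_affinity(PREFERRED_CORES)  # restrict the game to these cores
        print(f"Pinned PID {proc.pid} to cores {PREFERRED_CORES}")

Same idea as setting affinity in Task Manager, just repeatable.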


RDNA 4 needs to take the best parts of RDNA 3 (which is actually not a bad architecture, it just needs more refinement) and really dial in that RT.



--::{PC Gaming Master Race}::--

Captain_Yuri said:

I hate being that guy, but I really am gonna hate the day when we've come to rely so heavily on FSR/DLSS that, one day, a dev flat out won't bother with it, or won't implement it properly, and then we're gonna have to rely on pure brute force, and I don't see the cards being there for that by that time (because Nvidia at this point is always using DLSS metrics as their "true power", not actual raw computational power like we've been doing since the '80s).

I think the day that happens is the day I'm prob gonna leave gaming for good, forever, because there ain't nothing like regressing and relying on a band-aid that dulls and smashes your faith in humanity (like how I've lost a portion of that faith in science thanks to the dumbass that is Musk).



Step right up, come on in, feel the buzz in your veins, I'm like a chemical electrical right into your brain and I'm the one who killed the Radio, soon you'll all see

So pay up motherfuckers you belong to "V"

Chazore said:
Captain_Yuri said:

I hate being that guy, but I really am gonna hate the day when we've come to rely so heavily on FSR/DLSS that, one day, a dev flat out won't bother with it, or won't implement it properly, and then we're gonna have to rely on pure brute force, and I don't see the cards being there for that by that time (because Nvidia at this point is always using DLSS metrics as their "true power", not actual raw computational power like we've been doing since the '80s).

I think the day that happens is the day I'm prob gonna leave gaming for good, forever, because there ain't nothing like regressing and relying on a band-aid that dulls and smashes your faith in humanity (like how I've lost a portion of that faith in science thanks to the dumbass that is Musk).

Meh, I don't think it will be that bad. I think Nvidia/AMD/Intel will continue to pursue raw power, but either the performance uplift won't be as great as in previous generations or the cost of the GPUs will be too expensive. It's kind of like how cars went from simple and cheap to complex and expensive. Plus, we have always had some devs with shit optimization, so that wouldn't be anything new. A brand new Honda Civic 15 years ago cost like $14k. These days a base Honda Civic starts at $25k. Hell, my 2017 CR-V got rear ended a couple years back and it didn't look too bad in terms of visual damage, but the repair cost was $3500 because it apparently damaged the side radar. The insurance covered it of course, but the point is that modern tech, whether it be in computers or cars or whatever, is simply getting more and more costly as they continue to add in more and more features.

Plus the funny thing is that, in my experience, when there are issues with the image... because of all the shitty effects the devs put in, it's hard to tell if the issue is caused by DLSS or by some visual style the devs think looks good. I have had a few instances in Cyberpunk where something looked off, so I turned off DLSS and restarted the game and... it still looked off.



                  

PC Specs: CPU: 7800X3D || GPU: Strix 4090 || RAM: 32GB DDR5 6000 || Main SSD: WD 2TB SN850

JEMC said:
Conina said:

Why do they give more for a 2080 or 2080 Super (with probably a lot more "mileage") than for a 3060 Ti?

Because it's more likely that someone with a two-generation-old GPU upgrades than someone with a card from the previous gen? Or because someone that bought a high-end part like a 2080 is more likely to buy another high-end part like the 4080 or 4070 Ti than someone that went for a mid-range model like the 3060 Ti?

No, the reason is that an 80-class card will have a bigger cooler than a 60-class card. That means more recyclable materials like copper and aluminium.



Chicho said:
JEMC said:

Because it's more likely that someone with a two-generation-old GPU upgrades than someone with a card from the previous gen? Or because someone that bought a high-end part like a 2080 is more likely to buy another high-end part like the 4080 or 4070 Ti than someone that went for a mid-range model like the 3060 Ti?

No, the reason is that an 80-class card will have a bigger cooler than a 60-class card. That means more recyclable materials like copper and aluminium.

Good point.



Please excuse my bad English.

Currently gaming on a PC with an i5-4670k@stock (for now), 16Gb RAM 1600 MHz and a GTX 1070

Steam / Live / NNID : jonxiquet    Add me if you want, but I'm a single player gamer.

Chicho said:
JEMC said:

Because it's more likely that someone with a two-generation-old GPU upgrades than someone with a card from the previous gen? Or because someone that bought a high-end part like a 2080 is more likely to buy another high-end part like the 4080 or 4070 Ti than someone that went for a mid-range model like the 3060 Ti?

No, the reason is that an 80-class card will have a bigger cooler than a 60-class card. That means more recyclable materials like copper and aluminium.

I doubt that the components are worth over 200 pounds. It would be stupid to tear apart and recycle fully functional graphics cards instead of reselling them.



Chrome 110 brings NVIDIA RTX Super Resolution support

https://videocardz.com/newz/chrome-110-brings-nvidia-rtx-super-resolution-support

I really hope this turns out to be good cause it could be a game changer for PCs.

AMD Ryzen 7 7840HS has been tested with Cinebench R23, up to 26% faster than R7 6800H

https://videocardz.com/newz/amd-ryzen-7-7840hs-has-been-tested-with-cinebench-r23-up-to-26-faster-than-r7-6800h

Pretty good CPU performance uplift.

Cyberpunk 2077 HD Reworked Project Texture Mod Announced by Halk Hogan

https://wccftech.com/cyberpunk-2077-hd-reworked-project-texture-mod-announced-by-halk-hogan/

With textures that detailed, the VRAM requirements are gonna be big.

Hopefully there are no shader compilation stutters but we will see.



                  

PC Specs: CPU: 7800X3D || GPU: Strix 4090 || RAM: 32GB DDR5 6000 || Main SSD: WD 2TB SN850

Captain_Yuri said:

Cyberpunk 2077 HD Reworked Project Texture Mod Announced by Halk Hogan

https://wccftech.com/cyberpunk-2077-hd-reworked-project-texture-mod-announced-by-halk-hogan/

With textures that detailed, the VRAM requirements are gonna be big.

The difference is staggering. Do the vanilla textures look like that? Because, for a game praised for its visuals, they don't look so great, to be honest.

Captain_Yuri said:

Ha! I love this "Global launch time" tweet... only for it to post the two time zones for the US coasts. The US is the whole world!



Please excuse my bad English.

Currently gaming on a PC with an i5-4670k@stock (for now), 16Gb RAM 1600 MHz and a GTX 1070

Steam / Live / NNID : jonxiquet    Add me if you want, but I'm a single player gamer.

JEMC said:
Captain_Yuri said:

Cyberpunk 2077 HD Reworked Project Texture Mod Announced by Halk Hogan

https://wccftech.com/cyberpunk-2077-hd-reworked-project-texture-mod-announced-by-halk-hogan/

With textures that detailed, the VRAM requirements are gonna be big.

The difference is staggering. Do the vanilla textures look like that? Because, for a game praised for its visuals, they don't look so great, to be honest.

You don't really notice it, as the textures aren't that bad everywhere. The areas where players mainly focus are where they put a lot of effort into the visuals, and those muddy textures aren't out of the norm for modern open-world games.



                  

PC Specs: CPU: 7800X3D || GPU: Strix 4090 || RAM: 32GB DDR5 6000 || Main SSD: WD 2TB SN850

Captain_Yuri said:
JEMC said:

The difference is staggering. Do the vanilla textures look like that? Because, for a game praised for its visuals, they don't look so great, to be honest.

You don't really notice it, as the textures aren't that bad everywhere. The areas where players mainly focus are where they put a lot of effort into the visuals, and those muddy textures aren't out of the norm for modern open-world games.

Yeah, I figured that not all the textures would be like that, and it wouldn't surprise me if some of those shown in the video are from close-ups, making them look worse than they really are, but it's still surprising to see such basic textures in games that are so big (in budget) and praised for their graphics.



Please excuse my bad English.

Currently gaming on a PC with an i5-4670k@stock (for now), 16Gb RAM 1600 MHz and a GTX 1070

Steam / Live / NNID : jonxiquet    Add me if you want, but I'm a single player gamer.