
Forums - PC - Carzy Zarx’s PC Gaming Emporium - Catch Up on All the Latest PC Gaming Related News

caffeinade said:
I don't know about you guys, but I'm very excited to see where having tensor cores, and ray tracing hardware, packed in with CUDA cores, can take us.
Denoising, audio processing, DLAA, variable rate shading, and who knows what else.
Moore's law may be dead, but that doesn't mean we can't go places with intelligent software/hardware architectures.

Mmm, I'm a bit skeptical of the long-term success of Nvidia turning their backs on the hardware design that propelled them to greatness in the first place: unified compute engines and nothing else inside their GPUs. It feels like they've contracted IBM's disease, though IBM at least had Intel selling more efficient and simpler hardware to keep them in their place. AMD, on the other hand, designed what was possibly their worst GPU since buying ATI (and one subject to the same "disease", with all of Vega's useless bells and whistles).



 

 

 

 

 

haxxiy said:
caffeinade said:
I don't know about you guys, but I'm very excited to see where having tensor cores, and ray tracing hardware, packed in with CUDA cores, can take us.
Denoising, audio processing, DLAA, variable rate shading, and who knows what else.
Moore's law may be dead, but that doesn't mean we can't go places with intelligent software/hardware architectures.

Mmm, I'm a bit skeptical of the long-term success of Nvidia turning their backs on the hardware design that propelled them to greatness in the first place: unified compute engines and nothing else inside their GPUs. It feels like they've contracted IBM's disease, though IBM at least had Intel selling more efficient and simpler hardware to keep them in their place. AMD, on the other hand, designed what was possibly their worst GPU since buying ATI (and one subject to the same "disease", with all of Vega's useless bells and whistles).

Where's the difference between incorporating the abilities of the tensor and RT cores into the shaders and having them separate? It feels more flexible and clean this way. Or are you saying there shouldn't be RT and tensor capability at all?



If you demand respect or gratitude for your volunteer work, you're doing volunteering wrong.

Ka-pi96 said:
vivster said:

What is exactly the scenario here?

Do you want sound from one application to be heard on the TV but not on the speakers? Do you want different applications to be locked to either the PC speakers or the TV? Do you just want a specific application to sometimes play on the TV and sometimes on the speakers?

This one. I want Netflix and the like locked to the TV and games on the speakers.

Never tried this, but I know there's an option in Realtek Audio Manager, under Advanced Settings, to separate the rear and front outputs so they show up as two different outputs. But as I said, I've never really tried how that works.



Chazore said:
caffeinade said:
I don't know about you guys, but I'm very excited to see where having tensor cores, and ray tracing hardware, packed in with CUDA cores, can take us.
Denoising, audio processing, DLAA, variable rate shading, and who knows what else.
Moore's law may be dead, but that doesn't mean we can't go places with intelligent software/hardware architectures.

You are? I mean, I'd want to be, but seeing where game design goes these days and how it has to run on consoles, I honestly don't see myself caring about tech I'll never get to see, because it'd make the other closed-off boxes look bad if this tech were designed for PC and used there first.

I'd be absolutely down for it if we saw many devs making games with real-time ray tracing in mind for PC versions, or for games made primarily for PC. Just not later, or after consoles get it first, because I'm honestly tired of the whole "avoid upsetting the special snowflake in the room" routine we've been having for years and years.

https://wccftech.com/shadow-of-the-tomb-raider-bfv-nvidia-rtx/
So, yeah.
"It just works"

It'll still take some time before a game can be designed with ray tracing in mind.
It would've been irresponsible to have been working on titles with it in mind before this year.

haxxiy said:
caffeinade said:
I don't know about you guys, but I'm very excited to see where having tensor cores, and ray tracing hardware, packed in with CUDA cores, can take us.
Denoising, audio processing, DLAA, variable rate shading, and who knows what else.
Moore's law may be dead, but that doesn't mean we can't go places with intelligent software/hardware architectures.

Mmm, I'm a bit skeptical of the long-term success of Nvidia turning their backs on the hardware design that propelled them to greatness in the first place: unified compute engines and nothing else inside their GPUs. It feels like they've contracted IBM's disease, though IBM at least had Intel selling more efficient and simpler hardware to keep them in their place. AMD, on the other hand, designed what was possibly their worst GPU since buying ATI (and one subject to the same "disease", with all of Vega's useless bells and whistles).

Eh, well we'll just have to disagree then.
You can't just rely on the same tricks to work forever.
Look at Intel, and how AMD is currently clawing back market share via smart design, and a somewhat risky approach.

If Nvidia didn't try to innovate now, then AMD, Intel, Imagination, or someone else would've tried to challenge them.
Eventually someone would've taken them down.

Plus it seems pretty clear to me that GPUs relying on compute alone would've gone about as well as having cards dedicated to rasterisation, or media decoding, or something like that.



JEMC said: 

Even if the cards have the same $100 extra that the latest Founders Edition cards had, that's still a lot of money. We need AMD to be competitive again, ASAP.

Well, with how AMD is steering itself with Sony/Apple and its long-term plans, we're looking at 2021, maybe '22, for when AMD decides to come back to high-end desktop GPUs. Competition at this point is basically dead.



Mankind, in its arrogance and self-delusion, must believe they are the mirrors to God in both their image and their power. If something shatters that mirror, then it must be totally destroyed.

JEMC said:

Ugh, it's Monday. I guess I have to make the news, right? Oh, great! It turns out that there's a lot of them... >.>

Appreciate everything you do!

JEMC said:

SA_DirectX 2.0 mod makes Grand Theft Auto San Andreas look almost as good as modern-day video games
https://www.dsogaming.com/news/sa_directx-2-0-mod-makes-grand-theft-auto-san-andreas-look-almost-as-good-as-modern-day-video-games/
Modder ‘Makarus’ has released the final version of his graphical overhaul mod for Grand Theft Auto San Andreas, SA_DirectX 2.0. While the modder did not reveal the full list of changes he has made, he did release some screenshots that showcase the new and improved visuals achieved with this mod.
From what we have gathered, SA_DirectX 2.0 has some new shaders that Makarus has implemented via the ENBSeries mod for GTA San Andreas. The result is incredible and almost makes the game look as good as modern-day titles (of course we’re not talking about current-gen triple-A games, but something you could find in an indie/lower-budget title).

>>The mod is available from Nexus Mods.

This is actually impressive.
Although the depth of field is pretty terrible and there are a few artifacts.
Plus the reflections update at a low rate...

But all in all, considering the engine's age, it's stupidly impressive.

haxxiy said:

Mmm, I'm a bit skeptical of the long-term success of Nvidia turning their backs on the hardware design that propelled them to greatness in the first place: unified compute engines and nothing else inside their GPUs. It feels like they've contracted IBM's disease, though IBM at least had Intel selling more efficient and simpler hardware to keep them in their place. AMD, on the other hand, designed what was possibly their worst GPU since buying ATI (and one subject to the same "disease", with all of Vega's useless bells and whistles).

The fact nVidia are willing to reserve so much silicon for proprietary features may actually play into AMD's hands in the long run anyway.

If anyone remembers the GeForce FX, for example... A lot of that chip's issues were due to the fact it retained a lot of older fixed-function hardware, which consumed die space that ATI, with the Radeon 9700 Pro, pretty much dedicated to improving its overall performance, especially in SM2 scenarios... The end result was that ATI pretty much utterly dominated that era.

Chazore said:

Well, with how AMD is steering itself with Sony/Apple and its long-term plans, we're looking at 2021, maybe '22, for when AMD decides to come back to high-end desktop GPUs. Competition at this point is basically dead.

Pretty much.
That isn't to say that AMD GPUs can't be impressive... It just takes a very particular scenario (asynchronous compute) for it to happen.

AMD's Next Gen GPU design should be something to watch out for though.

Heck, I ain't even willing to buy AMD's low-end GPUs at this point; that's very telling.




www.youtube.com/@Pemalite

Pemalite said:
haxxiy said:

Mmm, I'm a bit skeptical of the long-term success of Nvidia turning their backs on the hardware design that propelled them to greatness in the first place: unified compute engines and nothing else inside their GPUs. It feels like they've contracted IBM's disease, though IBM at least had Intel selling more efficient and simpler hardware to keep them in their place. AMD, on the other hand, designed what was possibly their worst GPU since buying ATI (and one subject to the same "disease", with all of Vega's useless bells and whistles).

The fact nVidia are willing to reserve so much silicon for proprietary features may actually play into AMD's hands in the long run anyway.

If anyone remembers the GeForce FX, for example... A lot of that chip's issues were due to the fact it retained a lot of older fixed-function hardware, which consumed die space that ATI, with the Radeon 9700 Pro, pretty much dedicated to improving its overall performance, especially in SM2 scenarios... The end result was that ATI pretty much utterly dominated that era.

Yeah, the FX series is what immediately popped to mind when I was reading about the RTX cards - the FX was when I dropped nVidia for ATI (9800 Pro; I had a Ti 4400 before that) - if AMD has any sense, they will release fairly conservative hardware that uses all of its die space to improve performance in current and near-future games, and get back in the game.




HoloDust said:
Pemalite said:

The fact nVidia are willing to reserve so much silicon for proprietary features may actually play into AMD's hands in the long run anyway.

If anyone remembers the GeForce FX, for example... A lot of that chip's issues were due to the fact it retained a lot of older fixed-function hardware, which consumed die space that ATI, with the Radeon 9700 Pro, pretty much dedicated to improving its overall performance, especially in SM2 scenarios... The end result was that ATI pretty much utterly dominated that era.

Yeah, the FX series is what immediately popped to mind when I was reading about the RTX cards - the FX was when I dropped nVidia for ATI (9800 Pro; I had a Ti 4400 before that) - if AMD has any sense, they will release fairly conservative hardware that uses all of its die space to improve performance in current and near-future games, and get back in the game.

I think people are overlooking the fact that they didn't actually compromise much space in the chip and instead just made it bigger to compensate. The 2080Ti still has a considerable upgrade in CUDA cores. Even if AMD puts out a conservative GPU they won't eclipse the rasterization performance by much, if at all. Also, the RT and Tensor cores aren't like PhysX, as in a very niche and proprietary feature. They work together with DX and are very broadly applicable to games without much special input from the devs. RT is without a doubt the future of gaming, and Tensor cores can help with a variety of tasks. For now they're just gimmicks, because the hardware behind them is still too weak to properly deliver what's promised, but you have to start somewhere. The sooner devs get familiar with these new opportunities, the better.

Now we just have to hope that the RT and AI functionality that will eventually pop up in AMD GPUs is similar enough in framework to that in Nvidia chips.
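
As a rough illustration of the "they work together with DX" point above: Turing's ray tracing is exposed through DirectX Raytracing, so an engine can simply ask the API whether the feature is present and fall back to rasterized effects when it isn't. A minimal C++ sketch of that query (assuming a Windows 10 SDK recent enough to include the DXR feature option; this is illustrative, not taken from any shipped game):

#include <d3d12.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

int main() {
    // Create a D3D12 device on the default adapter; feature level 12_0 is enough for this query.
    ID3D12Device* device = nullptr;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_12_0, IID_PPV_ARGS(&device)))) {
        std::printf("No D3D12 device available.\n");
        return 1;
    }

    // OPTIONS5 reports the ray tracing tier the driver/hardware exposes through DXR.
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
    const bool dxr =
        SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                              &options5, sizeof(options5))) &&
        options5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;

    if (dxr)
        std::printf("DXR supported (tier enum value %d)\n", static_cast<int>(options5.RaytracingTier));
    else
        std::printf("DXR not supported; falling back to rasterized effects.\n");

    device->Release();
    return 0;
}

On hardware or drivers without DXR the tier simply comes back as D3D12_RAYTRACING_TIER_NOT_SUPPORTED, which is the kind of broad, non-proprietary fallback path described above.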



If you demand respect or gratitude for your volunteer work, you're doing volunteering wrong.

vivster said:

I think people are overlooking the fact that they didn't actually compromise much space in the chip and instead just made it bigger to compensate. The 2080Ti still has a considerable upgrade in CUDA cores.

The point is, they could have had more CUDA cores for the same die space.

The 2080Ti is 18.6 Billion transistors large.
The 1080Ti is 12 Billion transistors large.
That is an increase of 55% in transistor counts.

The 2080Ti has 4352 CUDA cores.
The 1080Ti has 3584 CUDA cores.
That is an increase of 21%.

We should have been looking at closer to 5,500 CUDA cores.

See the problem now?

Combine that with a slight reduction in clocks between the 1080Ti and the 2080Ti... I don't expect the performance gains in typical rasterized scenarios to be generationally groundbreaking.
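
To make the back-of-the-envelope maths explicit, here's a tiny C++ sketch using only the transistor and CUDA-core figures quoted above (a rough scaling exercise, not a die-area analysis, since transistors also go to cache, memory controllers and so on):

#include <cstdio>

int main() {
    // Figures quoted above.
    const double gp102_transistors = 12.0e9;   // GTX 1080 Ti (GP102)
    const double tu102_transistors = 18.6e9;   // RTX 2080 Ti (TU102)
    const double gp102_cuda = 3584.0;
    const double tu102_cuda = 4352.0;

    const double transistor_growth = tu102_transistors / gp102_transistors; // ~1.55x
    const double cuda_growth = tu102_cuda / gp102_cuda;                     // ~1.21x

    // CUDA-core count if the extra transistor budget had gone to shaders alone.
    const double scaled_cuda = gp102_cuda * transistor_growth;              // ~5,555

    std::printf("Transistor growth: +%.0f%%\n", (transistor_growth - 1.0) * 100.0);
    std::printf("CUDA core growth:  +%.0f%%\n", (cuda_growth - 1.0) * 100.0);
    std::printf("Transistor-scaled CUDA cores: ~%.0f\n", scaled_cuda);
    return 0;
}

Running it gives roughly +55% transistors, +21% CUDA cores, and ~5,555 cores had the shader count simply tracked the transistor budget.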

vivster said:

Even if AMD puts out a conservative GPU they won't eclipse the rasterization performance by much, if at all.

Well, if AMD were to take Vega and scale it up to 18.6 billion transistors... we would be looking at a part with ~6000 GCN cores, nothing to sneeze at.
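
(Same exercise for the Vega claim, assuming the commonly cited Vega 10 figures of roughly 12.5 billion transistors and 4096 stream processors; those numbers aren't in the post itself, so treat this purely as an illustration.)

#include <cstdio>

int main() {
    // Assumed Vega 10 figures (not quoted above): ~12.5e9 transistors, 4096 stream processors.
    const double vega_transistors = 12.5e9;
    const double vega_sp = 4096.0;
    const double target_transistors = 18.6e9;   // a TU102-sized transistor budget

    // Same naive scaling as before: every extra transistor goes to shader engines.
    std::printf("Scaled GCN cores: ~%.0f\n", vega_sp * (target_transistors / vega_transistors)); // ~6095
    return 0;
}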


vivster said:


Even if AMD puts out a conservative GPU they won't eclipse the rasterization performance by much, if at all.

Well, AMD's performance is hindered in a ton of different areas; we are only talking hypotheticals here... AMD isn't set to catch up to nVidia with Navi, maybe with Next Gen.

vivster said:


Also, the RT and Tensor cores aren't like PhysX, as in a very niche and proprietary feature. They work together with DX and are very broadly applicable to games without much special input from the devs. RT is without a doubt the future of gaming, and Tensor cores can help with a variety of tasks. For now they're just gimmicks, because the hardware behind them is still too weak to properly deliver what's promised, but you have to start somewhere. The sooner devs get familiar with these new opportunities, the better.

Now we just have to hope that the RT and AI functionality that will eventually pop up in AMD GPUs is similar enough in framework to that in Nvidia chips.

We don't know if the GPU is going to be any good in ray-traced scenarios anyway. This is essentially the "baseline" for next-gen graphics.
Chicken and Egg and all that.
I would prefer hardware that works well in games today, not years later.

So until we actually have games and enough hardware on the market to prove these features out... they are all gimmicks at this stage.




www.youtube.com/@Pemalite

Pemalite said:
vivster said:

I think people are overlooking the fact that they didn't actually compromise much space in the chip and instead just made it bigger to compensate. The 2080Ti still has a considerable upgrade in CUDA cores.

The point is, they could have had more CUDA cores for the same die space.

The 2080Ti is 18.6 Billion transistors large.
The 1080Ti is 12 Billion transistors large.
That is an increase of 55% in transistor counts.

The 2080Ti has 4352 CUDA cores.
The 1080Ti has 3584 CUDA cores.
That is an increase of 21%.

We should have been looking at closer to 5,500 CUDA cores.

See the problem now?

Combine that with a slight reduction in clocks between the 1080Ti and the 2080Ti... I don't expect the performance gains in typical rasterized scenarios to be generationally groundbreaking.

vivster said:


Also, the RT and Tensor cores aren't like PhysX, as in a very niche and proprietary feature. They work together with DX and are very broadly applicable to games without much special input from the devs. RT is without a doubt the future of gaming, and Tensor cores can help with a variety of tasks. For now they're just gimmicks, because the hardware behind them is still too weak to properly deliver what's promised, but you have to start somewhere. The sooner devs get familiar with these new opportunities, the better.

Now we just have to hope that the RT and AI functionality that will eventually pop up in AMD GPUs is similar enough in framework to that in Nvidia chips.

We don't know if the GPU is going to be any good in ray-traced scenarios anyway. This is essentially the "baseline" for next-gen graphics.
Chicken and Egg and all that.
I would prefer hardware that works well in games today, not years later.

So until we actually have games and enough hardware on the market to prove these features out... they are all gimmicks at this stage.

When was the last time the performance jump was groundbreaking? 15-20% gen to gen seems moderate.

You cannot really expect devs to code for hardware that doesn't exist. When has that ever been the case? And yes, for now they are gimmicks, but they're starting a framework. I'd rather devs start to get familiar with it now and learn techniques to make their stuff more efficient, than later, when they do get the hardware power but write such inefficient code that it won't matter anyway. We should be embracing both Nvidia and the devs who pioneer so early with technology that will inevitably become the future. We lose a few percent of performance in this and the next few gens, but we'll gain so much more in the long run.

I really hope AMD will not go completely conservative with their next GPU that's set to compete with Nvidia's flagship, and will at least take the first steps to integrate the new features.

Question: how feasible are large chips? Is there an upper limit where we reach the ceiling of what's possible with engineering, or is it a cost issue? How far along are we with stacked chips?



If you demand respect or gratitude for your volunteer work, you're doing volunteering wrong.