
Forums - PC Discussion - Carzy Zarx’s PC Gaming Emporium - Catch Up on All the Latest PC Gaming Related News

JEMC said:
Captain_Yuri said:

I believe Nvidia is going Samsung this time around, so that's one thing to keep in mind while attempting to judge it. I do think that while flagship prices will be higher, the rest of the stack will either be lower or similarly priced to Lovelace but with more performance. After the failure of the 4080 in terms of sales, I highly doubt Nvidia is going to attempt the 80 series at $1200 again. Mix that with lower demand for consumer GPUs due to high prices and the economic recession, plus Samsung supposedly being cheaper than TSMC, and I think there's a chance that some of the GPUs will get restructured accordingly.

Given what's stated in the article, Mr. Leather Jacket already went to TSMC to secure 3nm wafers, so unless Nvidia has decided to split their chips between TSMC and Samsung (for example, going with the former for the bigger and more expensive parts and leaving the rest to Samsung), we have conflicting rumors. Which one to believe?

Also, the 4080 was a failure, true, but that doesn't mean much for the next generation. Nvidia could simply price the 4080's successor at $1,000 instead of $1,200 while leaving the rest untouched. It's unlikely, of course, because if the chips are more expensive to manufacture, Nvidia will charge more for them, plain and simple.

Yea, we won't really know until it comes out, but the article is using old information from October, while there are articles from November that say Nvidia, Qualcomm, etc. are going Samsung, which I'm more inclined to believe given the arrogance of TSMC and how far behind Radeon is. I don't think Nvidia will overestimate them again if they can get better margins by going Samsung. They will still be using TSMC for their datacenter parts, imo.

And yea, we will have to see what they do, but I don't think their current plan is working. While I doubt Radeon is taking away any significant chunk of market share, Nvidia's volume of sales is certainly down significantly compared to previous generations when looking at JPR's GPU shipment reports, which isn't a good thing for them. They will likely want to get back on track, so imo either they will bring the 80 class back down to $700-$800 or they will offer significant performance increases gen on gen.



                  

PC Specs: CPU: 7800X3D || GPU: Strix 4090 || RAM: 32GB DDR5 6000 || Main SSD: WD 2TB SN850

JEMC said:

My comment was in regards to your density and performance increase guesses, not the wafer costs. After all, the article claims that Nvidia will restructure the SM, which should influence both of those parameters.

Given what's stated in the article, Mr. Leather Jacket already went to TSMC to secure 3nm wafers, so unless Nvidia has decided to split their chips between TSMC and Samsung (for example, going with the former for the bigger and more expensive parts and leaving the rest to Samsung), we have conflicting rumors. Which one to believe?

That's all from TSMC. The performance increase at iso-power implies the expected clock speed increases, so it's basically just what Ada at 3 nm would be.

As for the second point, it wouldn't be the first time Nvidia has done it. HPC Ampere was made on TSMC's N7 while gaming Ampere got Samsung's 8LPP. The end result, AFAIK, was that the Ampere gaming parts had rather worse perf per watt than HPC Ampere.

Samsung's 3GAE has gate and metal pitches much closer to TSMC's N5 than N3, so it would be a peculiar choice for an upgrade overall, and definitely not one you'd use for HPC if you could avoid it.



We need to get back to making smaller GPU dies again.
That was when AMD was at its best and most competitive, with the Radeon HD 4000/5000 series.

Fab prices aren't going to come down going forward; TSMC/Samsung/Intel need a return on the tens of billions in fab investment they made during COVID. So making up for that with smaller chips is probably the more consumer-friendly approach.
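To see why smaller dies help with wafer economics, here's a quick back-of-the-envelope sketch using the classic first-order dies-per-wafer approximation (it ignores defect rates and scribe lanes, and the die sizes below are just illustrative round numbers, not official figures):

```python
import math

def dies_per_wafer(die_mm2, wafer_diameter_mm=300):
    """First-order candidate-dies-per-wafer estimate.

    Gross area divided by die area, minus an edge-loss term for
    partial dies around the wafer's circumference.
    """
    radius = wafer_diameter_mm / 2
    gross = math.pi * radius**2 / die_mm2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_mm2)
    return math.floor(gross - edge_loss)

# A big flagship-class die (~600 mm^2) vs a mid-range-class die (~300 mm^2):
big = dies_per_wafer(608)    # -> 89 candidates per 300 mm wafer
small = dies_per_wafer(300)  # -> 197 candidates per 300 mm wafer
```

Halving the die area more than doubles the candidate dies per wafer (and yield per die improves too), which is why smaller chips soften the blow of rising wafer prices.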

Smaller dies don't mean lower performance either; they just mean fewer resources spent on superfluous stuff that takes up space to benefit niche markets.

Then they can bifurcate their product lines with dedicated silicon for target markets.



--::{PC Gaming Master Race}::--

haxxiy said:
JEMC said:

My comment was in regards to your density and performance increase guesses, not the wafer costs. After all, the article claims that Nvidia will restructure the SM, which should influence both of those parameters.

Given what's stated in the article, Mr. Leather Jacket already went to TSMC to secure 3nm wafers, so unless Nvidia has decided to split their chips between TSMC and Samsung (for example, going with the former for the bigger and more expensive parts and leaving the rest to Samsung), we have conflicting rumors. Which one to believe?

That's all from TSMC. The performance increase at iso-power implies the expected clock speed increases, so it's basically just what Ada at 3 nm would be.

I'm not arguing or questioning your numbers. I'm only saying that they don't tell the whole story or take everything into account, because you're looking at it purely from the point of view of the manufacturing process while ignoring any architectural improvements that Nvidia can/will introduce with Blackwell.

What if the increased focus on ray/path tracing leads to bigger SMs? That would affect density. What if the architecture is designed to run at faster clocks? That will impact performance. What if Nvidia doesn't want to go insane with power consumption and Blackwell is more energy efficient? That will also play a part in the end result, and none of those three architectural decisions has anything to do with the jump to a new node. They will "only" leverage it to bring the end product to a whole new level.



Please excuse my bad English.

Currently gaming on a PC with an i5-4670k@stock (for now), 16Gb RAM 1600 MHz and a GTX 1070

Steam / Live / NNID : jonxiquet    Add me if you want, but I'm a single player gamer.

JEMC said:

I'm not arguing or questioning your numbers. I'm only saying that they don't tell the whole story or take everything into account, because you're looking at it purely from the point of view of the manufacturing process while ignoring any architectural improvements that Nvidia can/will introduce with Blackwell.

What if the increased focus on ray/path tracing leads to bigger SMs? That would affect density. What if the architecture is designed to run at faster clocks? That will impact performance. What if Nvidia doesn't want to go insane with power consumption and Blackwell is more energy efficient? That will also play a part in the end result, and none of those three architectural decisions has anything to do with the jump to a new node. They will "only" leverage it to bring the end product to a whole new level.

I mean, it's all related, since the determinant of energy efficiency and clock speeds is still the manufacturing process, due to physical feature size and signal dissipation. That was the entire rationale behind Intel's 'ticks' in the past.

I'm not too hung up on IPC here since Nvidia's IPC gains in the recent past have also correlated with higher power consumption (not entirely, but mostly) so that's the lesser factor at play here (probably).
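The clocks/power/process link above comes down to the usual rule of thumb that dynamic CMOS power scales roughly as C·V²·f, and higher clocks generally need higher voltage. A tiny sketch with made-up illustrative scaling factors:

```python
def relative_power(f_scale, v_scale, c_scale=1.0):
    # Dynamic CMOS power rule of thumb: P ~ C * V^2 * f.
    # Returns power relative to baseline for given clock (f),
    # voltage (V) and switched-capacitance (C) scaling factors.
    return c_scale * v_scale**2 * f_scale

# Example: a 10% clock bump that needs ~5% more voltage
# costs roughly 21% more power (1.05^2 * 1.10 ~= 1.213).
bump = relative_power(f_scale=1.10, v_scale=1.05)
```

That quadratic voltage term is why chasing clocks on the same node blows up power so quickly, and why a node shrink (which lets you hit the same clocks at lower voltage) is still the main lever for efficiency.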



haxxiy said:
JEMC said:

I'm not arguing or questioning your numbers. I'm only saying that they don't tell the whole story or take everything into account, because you're looking at it purely from the point of view of the manufacturing process while ignoring any architectural improvements that Nvidia can/will introduce with Blackwell.

What if the increased focus on ray/path tracing leads to bigger SMs? That would affect density. What if the architecture is designed to run at faster clocks? That will impact performance. What if Nvidia doesn't want to go insane with power consumption and Blackwell is more energy efficient? That will also play a part in the end result, and none of those three architectural decisions has anything to do with the jump to a new node. They will "only" leverage it to bring the end product to a whole new level.

I mean, it's all related, since the determinant of energy efficiency and clock speeds is still the manufacturing process, due to physical feature size and signal dissipation. That was the entire rationale behind Intel's 'ticks' in the past.

I'm not too hung up on IPC here since Nvidia's IPC gains in the recent past have also correlated with higher power consumption (not entirely, but mostly) so that's the lesser factor at play here (probably).

It's all related, true, but even Nvidia must realize that they can't keep increasing the power consumption of their cards forever. A 4090 pulling around 400W, and who knows what the 4090 Ti will use, is already a worrying sign that Nvidia may want to correct before things escalate out of control.

It's not like Nvidia hasn't been able to make more efficient architectures in the past, right?



Please excuse my bad English.

Currently gaming on a PC with an i5-4670k@stock (for now), 16Gb RAM 1600 MHz and a GTX 1070

Steam / Live / NNID : jonxiquet    Add me if you want, but I'm a single player gamer.

Guys, I've only just noticed that Battlestar Galactica Deadlock is free on Steam... but only until today's refresh. So, if you read this in the next hour and a half or so, you may still be able to claim it: https://store.steampowered.com/app/544610/Battlestar_Galactica_Deadlock/

On another note, my PC is acting all weird. A couple of weeks ago one of my drives died. It sucked major a**, but now I no longer need a motherboard with more than 4 SATA ports. Yay! (You have to keep a positive attitude, right?) But now the system acts weird. It takes a bit more time than usual to boot, Firefox stops working from time to time for no reason, and I have another drive that keeps getting used at 100% capacity all the time, which, after what happened with the other drive, worries me.

The question is, I can't simply unplug that drive because I use it frequently, but I have a "docking station" that I barely use. Is it dangerous to use a drive in one of those for long periods? I don't want to "save it" from whatever weird state my PC is in only to kill it by using it the wrong way.
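Side note on the "used at 100%" symptom: if that means the drive is actually filling up (rather than Task Manager showing 100% active time), it's easy to cross-check from the standard library before moving the drive. A minimal sketch, where the drive letter is just a placeholder for whatever path the suspect drive has:

```python
import shutil

def usage_pct(path):
    # Percentage of the filesystem containing `path` that is in use.
    u = shutil.disk_usage(path)
    return 100 * u.used / u.total

# e.g. usage_pct("D:\\") on Windows, or usage_pct("/") on Linux
print(f"{usage_pct('.'):.1f}% full")
```

If it's 100% *active time* instead, that points at something hammering the disk (indexing, a failing drive retrying reads) rather than it being full, which is worth checking in SMART before anything else.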

If it helps, the drives are vertical and the station looks like this:



Please excuse my bad English.

Currently gaming on a PC with an i5-4670k@stock (for now), 16Gb RAM 1600 MHz and a GTX 1070

Steam / Live / NNID : jonxiquet    Add me if you want, but I'm a single player gamer.

As far as I know there shouldn't be any issues. We have similar ones at work and they run just fine for long periods of time.




PC Specs: CPU: 7800X3D || GPU: Strix 4090 || RAM: 32GB DDR5 6000 || Main SSD: WD 2TB SN850

Thanks for the answer. I'll do it tomorrow.

I hope Windows won't try to use 100% of it while it's connected through a USB port.



Please excuse my bad English.

Currently gaming on a PC with an i5-4670k@stock (for now), 16Gb RAM 1600 MHz and a GTX 1070

Steam / Live / NNID : jonxiquet    Add me if you want, but I'm a single player gamer.

JEMC said:

Thanks for the answer. I'll do it tomorrow.

I hope Windows won't try to use 100% of it while it's connected through a USB port.

You shouldn't have any problems; it'll act exactly like a normal external HDD. Most externals are actually just normal 3.5" or 2.5" drives inside, with SATA-to-USB adapters. I opened up an old WD MyBook 500 GB USB HDD the other day and swapped in a 3 TB drive to use it for ROM storage.