
Carzy Zarx’s PC Gaming Emporium - Catch Up on All the Latest PC Gaming Related News

JEMC said:
Captain_Yuri said:

Yea that's true. Alex from DF did do some interesting theorizing to see whether or not a "Switch Pro" would be able to do DLSS. The video overall is interesting if you have some time to kill:

The results are quite insane!

Thanks for the video, the results are very interesting. The biggest problem in this whole theory, though, is Nintendo actually going with an SoC that's so new it hasn't even launched yet (it's only been announced, as far as I know).

That said, I've been looking at Orin and found the article WCCFTech did about it, which has this excerpt:

"In a slide shared by Dylan522P over at his Twitter feed, various DRIVE configurations for the Orin SOC are listed. It seems like Orin will have a few TDP / workload-optimized variants with the base 1-Camera variant offering 36 TOPs at 15W, the 4-Camera variant offering 100 TOPs at 40W, a 2 chip variant offering 400 TOPs at 130W and the top-end 2 Orin + 2 discrete GPU variant offering up to 2000 TOPs at 750W. It looks like the TDP of the 200 TOPs, single-SOC lies somewhere around 60-70 Watts."

With the 1-Camera variant as the base of the console, Nvidia would have its work cut out turning those 36 TOPs at 15W into 20 TOPs at 5W, and using a more advanced process node isn't feasible as Orin is supposed to use Samsung's 8nm.
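Just to put rough numbers on it (my own back-of-the-envelope math from the figures quoted above, nothing official), here's the efficiency those Orin configs work out to, and what a hypothetical 20 TOPs at 5W handheld part would need:

```python
# Rough TOPS-per-watt for the Orin DRIVE configs quoted above (my own arithmetic, not WCCFTech's).
configs = {
    "1-camera":          (36, 15),      # (TOPS, watts)
    "4-camera":          (100, 40),
    "2x Orin":           (400, 130),
    "2x Orin + 2x dGPU": (2000, 750),
}

for name, (tops, watts) in configs.items():
    print(f"{name:>18}: {tops / watts:.1f} TOPS/W")

# A hypothetical 20 TOPS @ 5W handheld target needs 4.0 TOPS/W,
# roughly 1.7x the efficiency of the 36 TOPS / 15W base config (2.4 TOPS/W).
print(f"target: {20 / 5:.1f} TOPS/W vs base: {36 / 15:.1f} TOPS/W")
```

On the same 8nm node, that efficiency gap would have to come from lower clocks and a cut-down configuration rather than a process shrink.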

Also, back in June when everybody assumed Nintendo would reveal the Switch Pro, Kopite posted this:

If we trust him when it comes to Nvidia GPUs, we should also give him the same level of trust when it comes to this, right?

Yea, it will certainly be interesting to see how it turns out. Nintendo could also increase the battery capacity and raise the TDP by making it slightly bulkier if they need to. The Switch has a 4300-ish mAh battery, while phones these days that are smaller and slimmer than it have over 6000 mAh. So there are certainly things they can do to make it more powerful. It could increase the price, but the Switch was also "expensive" when it originally launched.

But hopefully they make the right moves in the end, as the Switch 2 really could be revolutionary.



                  

PC Specs: CPU: 7800X3D || GPU: Strix 4090 || RAM: 32GB DDR5 6000 || Main SSD: WD 2TB SN850

hinch said:

I'm very disappointed in Blizzard! Let's keep the celebration of boobs going!



hinch said:

Indeed, glory kills ruined the flow for me. Watching the same pre-canned animation for the umpteenth time gets old, fast. And I hate the way it encourages you to do it to regain health and preserve ammo. That's the reason why I'm reluctant to play Eternal, where it's essential and forces you to play in a particular way due to the lack of ammo - the whole point of having fun playing an FPS, for me, is shooting and not having to worry about ammo every 5 secs.

Glory kills absolutely kill Doom 2016 for me. They make the game tense in a bad way and make battles play out in such a formulaic way. I might be fine with glory kills if they were an optional extra, but the game is obviously designed so that players have to use them. That's not to say there aren't other issues with the game, but glory kills are the worst. In a way, they're also an embodiment of the game's over-the-top nature that goes far beyond that of the original Doom games, which I personally dislike as well.

(I guess I don't mind venting about the 2016 Doom game, huh.)



I'm glad I'm not the only one who dislikes the glory kill system. To me it felt wrong to charge into my enemy, blast them, strafe backwards, shoot and then lunge forward again to finish them off.

Also, having seen some top-tier DE plays on YouTube, I'm reluctant to bother with DE, because again, I dislike having to use that system just to stay alive.



Step right up come on in, feel the buzz in your veins, I'm like an chemical electrical right into your brain and I'm the one who killed the Radio, soon you'll all see

So pay up motherfuckers you belong to "V"

Chinese websites are testing the Steam Deck and breaking NDAs, which is giving us some interesting insights into how it will perform. And with the power of Google... we can translate these websites!

https://translate.google.com/translate?hl=&sl=zh-CN&tl=en&u=https%3A%2F%2Fwww.ali213.net%2Fnews%2Fhtml%2F2021-9%2F625257_5.html

I will post some tidbits that I found interesting, but remember that this is translated and they are testing the dev kit, so it's hard to tell what the differences between it and the release version will be; there could be more improvements or changes.

Shadow of the Tomb Raider:

- Built-in benchmark
- First, we set the graphics to the High preset in the image settings. Through the benchmark, we got an average of 36 fps.
- After we lowered the image quality a bit, the frame rate in the benchmark rose to more than 60 fps.

Doom:

- In the game, we set the quality to Medium in the advanced settings.
- At medium quality, the frame rate basically fluctuates around 60 fps.

Cyberpunk 2077:

- After about three hours of playing, the Steam Deck's battery dropped from 100% to 46%. The agency did not explain the differences between these developer versions and the final version, but it is still being optimized.
- We set the in-game picture quality to High. We can clearly feel the stuttering, and the frame rate fluctuates between 20 and 30 fps, but the image quality of "Cyberpunk 2077" is still very pleasant.
- After a long gaming session, the back of the device is indeed a little hot, but thanks to the size of the handheld and the placement of the air outlet, the temperature of the whole machine stays within an acceptable range.
- With a thermometer, we can see that the temperature on the back is around 42.6°C, while the temperature at the grips is around 29°C.

Other notes:

- The Steam Deck is indeed very heavy. Its 669g weight is roughly equal to two Switches, but its size isn't small either, so the weight is well distributed. Even after using it for a day, I didn't feel very tired.



                  

PC Specs: CPU: 7800X3D || GPU: Strix 4090 || RAM: 32GB DDR5 6000 || Main SSD: WD 2TB SN850


I wouldn't even bother with 2077 on Steam Deck. The game still runs like arse for me with my 1080ti at 1440p, and I'm not even running close to ultra (not even native res as I've had to bump it down).

That being said, I don't think China gives a rat's arse about NDAs or copyright laws, so I'm not surprised they are spreading this.



Step right up come on in, feel the buzz in your veins, I'm like an chemical electrical right into your brain and I'm the one who killed the Radio, soon you'll all see

So pay up motherfuckers you belong to "V"

I do think the most interesting part of their review is the Cyberpunk section, because of the battery life. If they actually managed to get 3+ hours while playing Cyberpunk at high settings... That's insane! It bodes really well for other games that aren't nearly as heavy.
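Doing a quick back-of-the-envelope extrapolation from their numbers (assuming roughly linear drain, which is a big assumption for a mixed session), 54% of the battery over ~3 hours points to roughly five and a half hours on a full charge:

```python
# Rough battery-life estimate from the quoted Cyberpunk 2077 session,
# assuming battery drain is roughly linear over the session (an assumption on my part).
hours_played = 3.0            # "about three hours of playing"
battery_used = 1.00 - 0.46    # dropped from 100% to 46%

estimated_runtime = hours_played / battery_used
print(f"Estimated full-charge runtime: {estimated_runtime:.1f} hours")  # ~5.6 hours
```

And that's with one of the heaviest games out there, so lighter titles should comfortably beat it.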



                  

PC Specs: CPU: 7800X3D || GPU: Strix 4090 || RAM: 32GB DDR5 6000 || Main SSD: WD 2TB SN850

Bofferbrauer2 said:
Pemalite said:

Doesn't help that the Switch didn't leverage the latest and greatest nVidia Tegra SoC on its release... So it was already outdated. Pascal could have offered 50% more performance at the same power level and not added much to the cost.

But that 4GB of LPDDR4 1600MHz memory is NOT doing the Switch any favors in 2021; that bandwidth is limiting fillrate and keeping resolutions low, and developers are struggling to work with 3GB of workspace. (As the OS gobbles a chunk.)

The Switch is a fantastic device, it's just not aging well visually, especially as we venture forth into the Global Illumination/Ray Tracing era.

Not sure if using the X2 would have resulted in that much more performance (pure GPU performance was ~50% higher, but when both CPU and GPU were used, the margin between the two chips tended to be much lower, iirc), but it would have been the better choice for sure, especially with its larger memory bandwidth.

I expect NVidia to create a new Tegra chip soon. Not just for the Switch, but also for the Jetson Nano and NVidia Shield devices, which use the same chips as the Switch does. In fact, I believe that without the chip crunch NVidia would already have done it, and it would have been the basis of the new OLED model.

It probably won't be much more than a die-shrunk X2 (to 12nm from the original 16nm), but that alone could result in up to 50% more performance compared to the current model. Most interesting would be that the memory could be increased to 8GB on the next model, which could definitely help some games.

Nah. Improvements from Maxwell to Pascal were big in terms of performance gains.

On the CPU side you went from...
A57 quad-core @ 1.9GHz.
A53 quad-core @ 1.3GHz.

To:
Denver2 dual-core @ 2.0GHz.
A57 quad-core @ 2.0GHz.

However, Tegra Maxwell only had the A57 quad-core cluster enabled...

On Tegra Pascal you would run the super high-performance Denver cores... But the design also required the use of at least a single A57 "Core0" core for I/O and interrupts, plus some other tasks.

Denver2 + one ARM A57 generally provided twice the throughput in benchmarks compared to a quad-A57 cluster.

Obviously Carmel makes them both look like a joke... But we are talking about the chips available at the Switch's release.

On the GPU side of the equation...

Maxwell @ 1GHz
vs
Pascal @ 1.5GHz.

Same power envelope... 20nm vs 16nm.
Remember, TSMC's 16nm process is basically 20nm but with FinFET... Pascal from the very outset was designed to push 50% higher clock rates at the same TDP as Maxwell.
https://www.anandtech.com/show/10325/the-nvidia-geforce-gtx-1080-and-1070-founders-edition-review/6

So even when both are on 16nm, Tegra Pascal will have a clockspeed advantage at the same TDP... as that was nVidia's original design philosophy with that GPU: increase clocks, keep the same number of functional units and features.

On top of that... Tegra Pascal brings improved delta colour compression, so it has more usable bandwidth available... but it also has more than twice the raw memory bandwidth of Tegra Maxwell (25.6GB/s vs 59.7GB/s).

And memory bandwidth is one of the biggest things hindering the Switch from achieving more than 720p in most titles, especially when a lot of heavy alpha effects are being thrown around.

So I am probably being a little conservative in stating a 50% performance improvement.
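Just to put quick numbers on the clock and bandwidth deltas I'm talking about (my own arithmetic from the figures above):

```python
# Quick arithmetic on the Tegra Maxwell (X1) vs Tegra Pascal (X2) figures mentioned above.
maxwell_clock_ghz, pascal_clock_ghz = 1.0, 1.5
maxwell_bw_gbps, pascal_bw_gbps = 25.6, 59.7   # memory bandwidth in GB/s

print(f"Clock uplift:     {(pascal_clock_ghz / maxwell_clock_ghz - 1) * 100:.0f}%")  # 50%
print(f"Bandwidth uplift: {(pascal_bw_gbps / maxwell_bw_gbps - 1) * 100:.0f}%")      # ~133%
```

A 50% clock bump plus well over double the bandwidth is why I reckon 50% overall is a conservative figure.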



--::{PC Gaming Master Race}::--

Pemalite said:
Bofferbrauer2 said:

Not sure if using the X2 would have resulted in that much more performance (pure GPU performance was ~50% higher, but when both CPU and GPU were used, the margin between the two chips tended to be much lower, iirc), but it would have been the better choice for sure, especially with its larger memory bandwidth.

I expect NVidia to create a new Tegra chip soon. Not just for the Switch, but also for the Jetson Nano and NVidia Shield devices, which use the same chips as the Switch does. In fact, I believe that without the chip crunch NVidia would already have done it, and it would have been the basis of the new OLED model.

It probably won't be much more than a die-shrunk X2 (to 12nm from the original 16nm), but that alone could result in up to 50% more performance compared to the current model. Most interesting would be that the memory could be increased to 8GB on the next model, which could definitely help some games.

Nah. Improvements from Maxwell to Pascal were big in terms of performance gains.

On the CPU side you went from...
A57 quad-core @ 1.9GHz.
A53 quad-core @ 1.3GHz.

To:
Denver2 dual-core @ 2.0GHz.
A57 quad-core @ 2.0GHz.

However, Tegra Maxwell only had the A57 quad-core cluster enabled...

On Tegra Pascal you would run the super high-performance Denver cores... But the design also required the use of at least a single A57 "Core0" core for I/O and interrupts, plus some other tasks.

Denver2 + one ARM A57 generally provided twice the throughput in benchmarks compared to a quad-A57 cluster.

Obviously Carmel makes them both look like a joke... But we are talking about the chips available at the Switch's release.

On the GPU side of the equation...

Maxwell @ 1GHz
vs
Pascal @ 1.5GHz.

Same power envelope... 20nm vs 16nm.
Remember, TSMC's 16nm process is basically 20nm but with FinFET... Pascal from the very outset was designed to push 50% higher clock rates at the same TDP as Maxwell.
https://www.anandtech.com/show/10325/the-nvidia-geforce-gtx-1080-and-1070-founders-edition-review/6

So even when both are on 16nm, Tegra Pascal will have a clockspeed advantage at the same TDP... as that was nVidia's original design philosophy with that GPU: increase clocks, keep the same number of functional units and features.

On top of that... Tegra Pascal brings improved delta colour compression, so it has more usable bandwidth available... but it also has more than twice the raw memory bandwidth of Tegra Maxwell (25.6GB/s vs 59.7GB/s).

And memory bandwidth is one of the biggest things hindering the Switch from achieving more than 720p in most titles, especially when a lot of heavy alpha effects are being thrown around.

So I am probably being a little conservative in stating a 50% performance improvement.

The Denver cores were based upon the A72, and they weren't very good ones. In fact, due to the way scheduling works on them, having them work concurrently with the A57 cores could actually cost performance: because a Denver core doesn't want to share what's in its memory, the A57 might have to calculate the whole thing all over again if it also needs results that are sitting in the Denver cache. This might actually be the reason why Nintendo opted for the X1 instead.

Oh, and while the GPU is more powerful, it's not 50% more. The TX2 at Max-Q settings has about the same performance as a TX1, and at Max-P that only increases to a 20-40% performance lead over the TX1 while consuming a similar amount of power.

Either way, this discussion is a bit pointless, as the original X2 will certainly not be used in the Switch going forward, and even some future chip based on the X2 might be changed internally. 



Bofferbrauer2 said:

The Denver cores were based upon the A72, and they weren't very good ones. In fact, due to the way scheduling works on them, having them work concurrently with the A57 cores could actually cost performance: because a Denver core doesn't want to share what's in its memory, the A57 might have to calculate the whole thing all over again if it also needs results that are sitting in the Denver cache. This might actually be the reason why Nintendo opted for the X1 instead.

Just like on the Tegra X1, you wouldn't run Denver in conjunction with the A57... except for the I/O/interrupt core... Running multiple CPU clusters is just too much of a power hog, which is why the Switch only runs the A57 cluster rather than A57+A53.

Without a doubt, Denver2 is far better than the A57 in terms of IPC.

Bofferbrauer2 said:

Oh, and while the GPU is more powerful, it's not 50% more. The TX2 at Max-Q settings has about the same performance as a TX1, and at Max-P that only increases to a 20-40% performance lead over the TX1 while consuming a similar amount of power.

Either way, this discussion is a bit pointless, as the original X2 will certainly not be used in the Switch going forward, and even some future chip based on the X2 might be changed internally. 

Once you start taking the improved bandwidth into account, the GPU will definitely be able to breathe, especially in fillrate-heavy scenarios.
The Switch is bandwidth limited.

I am probably being conservative by stating "50%" when there is more than a 100% gain in real-world bandwidth.

The point of Pascal, though, is that nVidia re-architected the GPU to operate at higher clock rates with minimal impact on power usage... We saw it on the PC with the move from Maxwell to Pascal.



--::{PC Gaming Master Race}::--