hinch said:

I'm still skeptical of DirectML. At least with Big Navi GPUs. There are no tensor cores or AI cores on any current AMD GPU or on future roadmaps. Without them, ML and PP will have to be done in software, not hardware, and there will be a tonne of latency to come with it, rendering it useless for actually playing games. DirectML was introduced 2 years ago and there is still no mainstream title support for it. I think it's pretty much vapourware at this point.

At least we have checkerboarding, TAA and AI sharpening, I suppose :P

False.
What a tensor core does is multiply two 4x4 half precision floating point matrices, then add an additional half/single precision accumulator to the result using a fused multiply-add operation. - That provides a single precision floating point result which can be downgraded to half precision if needed.

This can be done entirely on AMD's shader pipelines... Since... Well. Terascale.
And you can do a similar thing on the SSE units on CPUs.

nVidia's approach is to invest in dedicated cores to handle this task; AMD tends to have more overall compute than nVidia, and thus can get away with spending some of that compute time on these kinds of tasks.
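Purely as an illustration of the operation in question (a sketch in NumPy, not how any driver or game actually implements it), the whole thing is just a small matrix fused multiply-add that any shader ALU or SIMD unit can emulate:

```python
import numpy as np

# Sketch of the D = A*B + C operation a tensor core performs in hardware.
# Everything here is illustrative; real hardware fuses this into one instruction.
A = np.random.rand(4, 4).astype(np.float16)   # half precision input matrix
B = np.random.rand(4, 4).astype(np.float16)   # half precision input matrix
C = np.random.rand(4, 4).astype(np.float32)   # accumulator, kept at single precision

# Multiply at higher precision, then add the accumulator (the "multiply-add" part).
D = A.astype(np.float32) @ B.astype(np.float32) + C

# The result can be stored back at half precision if the extra range isn't needed.
D_half = D.astype(np.float16)
print(D_half)
```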

SvennoJ said:

Regarding the SSD in low-powered laptops: swap file and general browsing (the amount of temporary files that comes with browsing is staggering), but I'm not sure what the context was for that. It also helps with frame stutter. Train Simulator, for example: an old game with complex scenery updates on an 8-year-old engine. Without an SSD the stuttering while loading in the next section would be much worse.

Windows is extremely proficient at memory management. I would rather retain the use of a swap file.

The stuttering can be resolved entirely by using a faster mechanical disk... Sadly we don't see the VelociRaptor drives anymore. 10k RPM, anyone?

SvennoJ said:

The swap file is the first thing I disable and delete when I get a new laptop. I'd rather spend extra on RAM than sacrifice SSD disk space to a last-resort measure that Windows has never been able to manage well. Maybe it's better now; I've been disabling the swap file since Windows XP, when a blue screen of death was better than heavy swap file use. Sacrificing 6.8 GB for sleep mode is enough (hiberfil.sys).

I disable hibernation and thus remove hiberfil.sys.

Standby is enough. Each to their own... I would rather use more disk space and keep more RAM available, but I will sometimes be working with data sets that exceed typical video game requirements...

SvennoJ said:

Higher resolution does need more memory, native 4K means all screen buffers are 2.25x larger than 1440p. More power is expected to deliver native 4K. Maybe not a lot more comparatively, but more than sticking to lower rendering resolutions.

Not as much as you think.

You only need about 7MB for 720p, 1080p will fit in about 16MB and so on... as the framebuffer size is a function of the resolution of the output signal, colour depth and/or palette size.
So, for example, you will have 24 bits of colour information per pixel, with an alpha channel on top of that.
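A quick back-of-the-envelope calculation (assuming 32 bits per pixel and double buffering, which is what those figures imply):

```python
def framebuffer_mb(width, height, bytes_per_pixel=4, buffers=2):
    # Size of the colour buffer(s) alone - ignores depth, G-buffers, etc.
    return width * height * bytes_per_pixel * buffers / (1024 ** 2)

for name, (w, h) in {"720p": (1280, 720), "1080p": (1920, 1080),
                     "1440p": (2560, 1440), "4K": (3840, 2160)}.items():
    print(f"{name}: {framebuffer_mb(w, h):.1f} MB")
# 720p: 7.0 MB, 1080p: 15.8 MB, 1440p: 28.1 MB, 4K: 63.3 MB
```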

SvennoJ said:

Ugh, no wonder it takes 35 minutes to make a copy of GTS for patching (103 GB). That's about 50 MB/s on PS4 Pro.
(yet that's read and write on the same HDD, so not that bad actually)

Yeah. It's not good. I have a pair of 12 Terabyte 7200rpm drives via USB on the Xbox One X which helps alleviate some of the I/O bottlenecks... But due to how expansive my game library is, it can still take a significant amount of time to build my game database once I have turned the console on.

Next gen... That all goes away.

In saying that, install times are here to stay, because the mechanical hard drives aren't the limiting factor there; it's the optical discs and our internet infrastructure that hold us back.

SvennoJ said:

360 had 512 MB unified RAM though, plus an extra 10MB edram. ps3 had 256 MB system ram, minus what the OS used. Skyrim afaik kept track of all the changes you make in the world (anything you displace gets flagged and remembered in a change file, you know better probably having worked on it) So at first the game runs fine (fresh save) yet the further you get the worse it gets. There must have been memory leaks as well since I had to disable auto save and restart the game every 15 minutes to be able to finish it on ps3. The frame rate would slowly degrade to unplayable.

The Xbox 360's unified memory system is what saved it from the performance degradation in Skyrim, as some of Skyrim's scripts and database recording will chew through significant amounts of memory; the Xbox 360 simply had the superior memory setup.

SvennoJ said:

The SSD could have hosted a swap file to keep track of the changes :)

Hard drives can do that as well. Many games in fact do just that.
Morrowind on the OG Xbox had a database/cache where it would record the location of all objects in the game world and update accordingly... So for example you could kill everyone in Balmora and dedicate every home to a specific item type and go from there.

It does mean you have less throughput to, say... stream textures and meshes, something which became common starting with Modern Warfare 2 and later.
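A toy sketch of that kind of change-tracking (the names and file format are purely illustrative, not Morrowind's or Skyrim's actual save structure):

```python
import json

# Only objects the player has disturbed get recorded; everything else stays
# as the shipped game data. On load, the deltas are replayed over the world.
world_changes = {}  # object_id -> {"cell": ..., "position": (x, y, z)}

def move_object(object_id, cell, position):
    # Record (or overwrite) the delta for this object.
    world_changes[object_id] = {"cell": cell, "position": position}

def save_changes(path="change_file.json"):
    # The change file grows with how much of the world the player has touched,
    # which is why long-running saves get bigger and slower to process.
    with open(path, "w") as f:
        json.dump(world_changes, f)

move_object("iron_longsword_01", "Balmora, Hlaalo Manor", (120.0, 44.0, 8.0))
save_changes()
```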

SvennoJ said:

I have the WiiU as well, played it a lot more than the Switch actually. In docked mode Switch does look better but indeed not really a generational leap. BotW was a bit blurry on my 1080p projector, but still great looking

Yeah. If I were to compare a copy of Breath of the Wild on Wii U and Switch... Basically the Wii U is a match for the Switch's handheld mode; there are a few advantages on the Switch, but you would be hard pressed to tell the difference.

There isn't a generational leap between versions; it's the same game, same assets, same performance targets (for the most part).

The Wii U version is just 480p on the GamePad and 720p on the display, and the Switch is 720p handheld and 900p docked. It's pixel counting differences only.
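Just to put numbers on the pixel counting (output resolutions only, ignoring the dynamic-resolution behaviour of either version):

```python
# Pixel counts for the output modes being compared.
modes = {"480p (854x480)": 854 * 480, "720p (1280x720)": 1280 * 720,
         "900p (1600x900)": 1600 * 900, "1080p (1920x1080)": 1920 * 1080}
for name, pixels in modes.items():
    print(f"{name}: {pixels:,} pixels")
# 720p pushes ~2.25x the pixels of 480p; 900p pushes ~1.56x the pixels of 720p.
```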

SvennoJ said:

Agreed, enough vgcharting, time for some Tlou2!

Enjoy!

zero129 said:
SvennoJ said:

So they spent 12 months moving a slider down for the Switch version...

They must have spent 12 months porting their engine to a lower-end device using mobile hardware.

As you can see, the engine itself, when already running on a device, can scale down to work on a potato PC without any porting, since it's running the exact same engine as the high-end PC version. This is how scaling works when the engine is built to work on multiple devices.

The majority of AAA games are built using engines like CryEngine (with additional forks like Dunia), Unity, Unreal Engine, id Tech, Source (a derivative of Quake/id Tech), Frostbite, IW (a derivative of Quake/id Tech), Anvil and more... And they are, you guessed it, designed to scale across multiple generations and multiple levels of hardware...

Even the engine that powers Spider-Man on the PlayStation 4/PlayStation 5 is based on Insomniac's proprietary engine, which also scales across hardware configurations, having started life with the game called "Fuse" on the Xbox 360 and PlayStation 3.

Game engines typically aren't written from scratch anymore; rather, they are re-used, re-written and overhauled to obtain better results iteratively... And that often has the benefit of scaling between different sets of hardware.
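To make the "same engine, different slider positions" point concrete, here is a purely hypothetical sketch of how a scalable engine might pick a quality tier from the hardware it detects (the names and thresholds are illustrative, not from any real engine):

```python
# Hypothetical quality presets a multi-platform engine might expose.
PRESETS = {
    "potato":   {"render_scale": 0.50, "shadow_res": 512,  "texture_pool_mb": 512},
    "console":  {"render_scale": 0.75, "shadow_res": 1024, "texture_pool_mb": 2048},
    "high_end": {"render_scale": 1.00, "shadow_res": 4096, "texture_pool_mb": 6144},
}

def pick_preset(vram_mb, compute_tflops):
    # Same renderer and same assets either way - only the settings change.
    if vram_mb >= 8192 and compute_tflops >= 8:
        return PRESETS["high_end"]
    if vram_mb >= 3072 and compute_tflops >= 3:
        return PRESETS["console"]
    return PRESETS["potato"]

print(pick_preset(vram_mb=4096, compute_tflops=4))    # console-class settings
print(pick_preset(vram_mb=1024, compute_tflops=0.4))  # potato / mobile-class settings
```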



--::{PC Gaming Master Race}::--