Also, I am pretty certain that no one expected PS4 to do TLOU2 graphics back in 2012 when they released the 7870.
I actually have a high level understanding of the rendering pipeline of TLOU2. It's nothing I didn't expect.
60fps is a CPU problem, not a GPU problem.
False. It is both a CPU and a GPU problem.
If the GPU doesn't have the capability to output at 60fps, it's not going to output at 60fps.
If the CPU doesn't have the capability to feed frames at 60fps, then the GPU is also not going to be able to push out 60fps.
This is the frustrating thing I find with a lot of console gamers who think that framerate is all about the CPU... And that FLOPS are everything. It isn't.
It isn't a conclusion I reached through any fallacy. It is an educated suggestion as to how and why it happened. If you were to tell me that Remedy's devs are incompetent on the tech side, I would call you a liar. This is the only way to explain why the PC version ran so poorly. They would clearly have fixed it given enough time, as they also did with a lot of the issues afterwards. Most Windows Store games were planned to be Xbox One exclusives for a long time until Microsoft changed their tune.
Educated? Feel free to provide some citations.
Most Windows Store games run like ass anyway... For a myriad of reasons. - They also have a ton of technical limitations as well... Shall I educate you on that? More than happy to oblige... That isn't an issue with the PC or PC optimization, as it's not an issue that tends to be reproduced on, say... Steam.
Nor does it mean the Xbox version is the epitome of optimization prowess.
Yep, it has a low-level API for developers that wish to leverage that level of control, but the problem comes in when you want to port it to the PC platform, because then you can't directly port it without significant performance pitfalls. Simply put, some of the hardware intricacies of the Xbox and PlayStation aren't available on ALL PC hardware (keyword here is "intricacies"), so sometimes you can't rely on async compute or some specific number of scheduled wavefronts in your PC games, simply because you never know if the hardware actually has all those ACE units, ROPs, ALUs or whatever.
False. All of it is false.
Elaborate on "gobbling up more RAM than a desktop OS". Most desktop OSes use more RAM depending on how much RAM is available. For instance, I am running 32GB of DDR4 and it uses more than 10 gigs on caching etc. It will free that RAM up if I actually start using it in a game or some other application, but there you have it.
Basically, what you are stating is that the PC is making more efficient use of available resources in order to expedite operations.
That is a good thing.
But when it comes to crunch time, the PC will evict that cached memory in order to give priority to the application.
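That cache-then-evict behaviour can be shown with a toy model (a deliberately simplified sketch; the class, page names and sizes here are made up for illustration and are not how Windows' memory manager is actually implemented):

```python
from collections import OrderedDict

class DiskCacheModel:
    """Toy model: the OS parks disk pages in otherwise-idle RAM, then
    evicts them (oldest first) the moment an application needs the memory."""

    def __init__(self, total_ram_mb):
        self.total = total_ram_mb
        self.app_used = 0
        self.cache = OrderedDict()  # hypothetical page name -> size in MB

    def free(self):
        # RAM not claimed by applications or the disk cache.
        return self.total - self.app_used - sum(self.cache.values())

    def cache_page(self, page, size_mb):
        # Idle RAM gets used for caching: "a good thing".
        if self.free() >= size_mb:
            self.cache[page] = size_mb

    def app_alloc(self, size_mb):
        # Crunch time: evict cached pages until the application's request fits.
        while self.free() < size_mb and self.cache:
            self.cache.popitem(last=False)
        if self.free() < size_mb:
            raise MemoryError("genuinely out of RAM")
        self.app_used += size_mb

ram = DiskCacheModel(total_ram_mb=32_000)
ram.cache_page("game_assets.pak", 10_000)  # OS "eats" 10GB on caching
ram.app_alloc(25_000)                      # a game launches; cache is evicted
print(len(ram.cache), ram.free())          # cache dropped, allocation fits
```

The point the model makes: the "gobbled up" RAM was never unavailable, it was just being put to use until something better came along.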
Again... Windows will happily operate on ~2GB or less of DRAM. - On the consoles you have no choice in the matter; they have a fixed amount in reserve that is unchanging.
Which means... Your original claim that PC OSes are more memory intensive than consoles is blatantly false... You are just unwilling to admit you were incorrect in your prior assertion.
I mean... Windows 7 is happy with 1GB of RAM if the disk (I.E. an SSD) is fast enough.
Elaborate on what you mean when you say "more lean". I didn't imply that it is automatically better than what PC offers. What I am arguing is that utilizing different specific hardware in a specialized way is preferred over using random hardware, as random hardware would have to brute-force a ton of random stuff just to get the same result.
The PC isn't "random hardware". - There are standards that are adhered to...
And there are a fixed number of companies building the same iteratively updated hardware on a yearly cadence... It's a known quantity at this point.
Your claim may have held some water back in the '90s/early 2000s, when there were over a half dozen CPU companies and a half dozen GPU companies.
As an example, emulators are extremely inefficient.
Do you even understand how an emulator even works?
It is a similar principle. The emulation overhead itself can be lightweight, but when it comes to executing code that was built for specific hardware, it becomes much slower.
Emulators work by taking an instruction from the original hardware and attempting to "reinterpret" it as an instruction for different hardware.
Sometimes the emulator needs to take that single instruction, chop it up into multiple instructions, and translate them into instructions the new hardware can understand for execution.
What that essentially means is that... A single instruction done in one clock on the original hardware could take multiple clocks to perform on the different hardware. - That's not because of any special "overheads". - It's just the nature of emulation itself.
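As a rough illustration of that one-to-many translation (the guest instruction and register layout below are hypothetical, modelled loosely on a PowerPC-style fused multiply-add, not any real emulator's code):

```python
def emulate_fmadd(regs, rd, ra, rc, rb):
    """Guest: fmadd rd, ra, rc, rb  ->  rd = ra * rc + rb.
    One instruction (one clock) on the guest hardware, but a host
    without a fused multiply-add unit must split it into steps."""
    tmp = regs[ra] * regs[rc]   # host step 1: multiply
    tmp = tmp + regs[rb]        # host step 2: add
    regs[rd] = tmp              # host step 3: write back the result
    return regs

# Toy register file: register number -> value.
regs = {0: 0.0, 1: 2.0, 2: 3.0, 3: 4.0}
emulate_fmadd(regs, 0, 1, 2, 3)
print(regs[0])  # 2.0 * 3.0 + 4.0 = 10.0
```

Three host operations to reproduce what the guest did in one, and that's before you count fetching and decoding the guest instruction in the first place.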
Microsoft got around that problem somewhat with the Xbox One emulating Xbox 360 games by taking a multitude of approaches.
For starters... The Xbox One has a few Xbox 360 features already baked into its hardware... So that's one overhead reduction.
Microsoft also uses the power of virtualization to virtualize the Xbox 360 environment on the Xbox One... And abstracts the Xbox 360 hardware.
Some things are of course emulated, but not as many as you think.
And then of course... Microsoft repackages the games. - In short... That is how Microsoft was able to emulate 3x 3.2GHz hyperthreaded PowerPC cores on 6x 1.75GHz Jaguar cores.
The typical approach PC emulation takes is brute force... And that would literally be impossible on consoles due to their anemically pathetic CPUs.
However, the Xbox One approach isn't impossible on PC either... In fact, some emulators take that exact approach, as the original platform was an open one, not a closed one... Which means that the emulator developers could get the low-level information they needed to build their emulators. - In fact... One company leverages this to a degree on PC to repackage PC games... I will leave it to you to make an "educated" guess.
A laptop I had from 2013 buckled hard attempting to render some enemies with special specular mapping in the later stages of Persona 4, for instance. It dropped to under 10fps, and it was a laptop with two GT 650Ms in SLI and an i7. Granted, these specs aren't anything special today, but they would still amount to several times the power of a PS2.
I wasn't arguing that PC as a whole cannot achieve these features. I am just saying that developers cannot use these features on PC in any significant way, because they never know if the end user even HAS IT.
Sure they can. And historically there have been many examples where developers have leveraged PC-specific features.
For example... Tessellation is something that only just became prevalent this console generation... However, the PC had that technology during the PlayStation 2 era.
It is actually the reason why abstraction exists... To standardize specifications for developers and expose standard functions.
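That "standard functions over varied hardware" idea can be sketched in a few lines (the API name, backends and method here are invented for illustration; real graphics APIs like Direct3D or Vulkan are far larger but follow the same shape):

```python
from abc import ABC, abstractmethod

class GraphicsAPI(ABC):
    """The abstraction layer: one standard function every vendor's
    driver must expose, however their hardware implements it."""

    @abstractmethod
    def draw_triangles(self, count: int) -> str: ...

class VendorABackend(GraphicsAPI):
    def draw_triangles(self, count: int) -> str:
        # Vendor A's driver maps the standard call to its own hardware.
        return f"vendor-A submits {count} triangles"

class VendorBBackend(GraphicsAPI):
    def draw_triangles(self, count: int) -> str:
        # Vendor B's hardware differs, but the exposed function does not.
        return f"vendor-B submits {count} triangles"

def render(api: GraphicsAPI) -> str:
    # The game codes against the standard function, not the hardware.
    return api.draw_triangles(3)

print(render(VendorABackend()))
print(render(VendorBBackend()))
```

The game's `render` path never changes; only the backend does. That is the whole bargain of abstraction: developers lose direct access to hardware intricacies, and gain code that runs on hardware they've never seen.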
The advantages have mostly shown themselves in the first-party space. Media Molecule's game, for instance, has completely done away with the rasterization hardware (ROPs); they are using a cutting-edge SDF solution for their graphics. If they didn't have those ACE units, they would never have been able to go with this SDF solution. The Tomorrow Children is also a game that uses cascaded voxel cone tracing. No one else was doing that back in 2014. Here is a GDC PDF from early 2015
Remember this on PC?
The takeaway here is that if this game were multiplat, they would most definitely be forced to use more traditional technology. Unless they only wanted to target the very narrow segment of the PC market that had GPUs with those exact features.
No it doesn't.
The PC can do everything the consoles can do... They simply do it better.