Pemalite said:
Because Checkerboard isn't 3840x2160. It is two 1920x1080 frames.
Did you wake up today and feel like being disagreeable just for the sake of being argumentative and presumptuous?
You make a bunch of replies based on your opinions (900p is horrible, 1080p is dated, etc.). Your opinion is valid for you, but sitting way back at a buddy's house looking at a 60" TV, SW Battlefront doesn't look terrible by any means. Your own chart pretty much says this for a 60" TV @ 10'.
On your 4K comments, derp, I've SEEN 4K because I've done it with 1070s on a 43" 60Hz setup at close range. I simply preferred 1440p @ 100+ FPS on G-Sync; the smoothness mattered more to me than the resolution bump. You basically started an argument on this point just to agree with me (e.g., give me high-refresh 4K/5K whenever that happens, but for now 1440p/100+ >> 2160p/60, and ESPECIALLY 2160p/30). It was oddly placed as well, because I was quite clearly referring to consoles pushing 4K, where we know to expect a lot of 30fps games.
Then you make assumptions about why I got the 480. I gifted my 390 to a friend whose card died, and I needed a drop-in for an HTPC that rarely sees games other than for guests, not an upgrade. Unfortunately the 480 was a whiny, coil-whine mess with basically zilch for OC headroom.
Then you raise some weirdly argumentative points about an insanely hypothetical spec list. Of course the human brain doesn't process visual data in "frames per second", but higher FPS appears smoother; this is quite obvious. What's the point in seriously arguing about a hypothetical 1024-bit bus when we're speaking of a vaporware system with 2TB of memory?
It's sort of ironic, because this is far from the most off-the-wall thing on that spec list: http://www.kitguru.net/components/graphic-cards/anton-shilov/samsung-expects-graphics-cards-with-6144-bit-bus-48gb-of-hbm-memory-onboard/
^^ Yes, that's an article about GPU/HBM, not specifically system RAM, but as we've seen with consoles, the two are merging in many cases. So if it makes you feel better, make my ludicrous hypothetical system a single 2TB pool of HBM3 on a 1024-bit interface, lol.
And where exactly did I state that clock speed was the only thing determining performance? Uarch, IPC, scaling: all of this has a ton to do with the end result. An 8GHz gen-1 Intel Atom would be 8GHz trash. An 8GHz Jaguar would be better, but not ideal. An 8GHz Haswell would be pretty damned good. Would you have preferred that I name a particular basis for the core design on a hypothetical level? Fine: a custom 8GHz quad-core Ice Lake, plus 64 small cores on the big.LITTLE concept (64 on-package ARM-style cores to run the OS etc. in the background without notably interfering with the primary gaming threads). Until GPGPU gets more mature, AI in particular, among other things, does better on the CPU.
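The clock-vs-uarch point above can be put as a toy model: single-thread throughput is roughly clock times IPC, so the same 8GHz means very different things on different cores. A minimal sketch; the IPC figures below are made-up illustrative assumptions, not benchmarks:

```python
# Toy model: performance ~ clock * IPC, so clock alone tells you little.
# IPC values are purely illustrative placeholders, NOT real measurements.

def relative_perf(clock_ghz: float, ipc: float) -> float:
    """Very rough single-thread throughput in arbitrary units."""
    return clock_ghz * ipc

# Hypothetical IPC figures for the cores mentioned in the post:
cores = {
    "Atom gen1": 0.5,   # in-order, narrow: 8GHz trash
    "Jaguar":    1.0,   # better, but not ideal
    "Haswell":   2.0,   # pretty damned good
}

for name, ipc in cores.items():
    print(f"{name} @ 8GHz -> {relative_perf(8.0, ipc):.1f} units")
```

Same clock, 4x spread in throughput, which is the whole point.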
I am well aware of core loading and how difficult it is to utilize multicore setups to the max; quite frequently one main core sees 90%+ usage while the others vary widely. Extremely talented devs with a great grasp of a particular setup can do more, as Naughty Dog did with TLOU on PS3 (there are some great making-of videos showing just how well they loaded the Cell's SPEs).
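That "one main core pegged while the others idle" pattern is basically Amdahl's law: the serial share of the work caps how much extra cores can help. A minimal sketch, assuming an illustrative serial fraction (the 0.4 is a made-up number, not a measurement of any real game):

```python
# Amdahl's law: with a fixed serial fraction s, the best possible
# speedup on n cores is 1 / (s + (1 - s) / n).

def amdahl_speedup(serial_fraction: float, n_cores: int) -> float:
    """Upper bound on speedup given the serial share of the workload."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)

# Illustrative: 40% of the frame is stuck on one main thread.
for n in (1, 2, 4, 8):
    print(f"{n} cores -> {amdahl_speedup(0.4, n):.2f}x")
```

Even with infinite cores the speedup here never passes 1/0.4 = 2.5x, which is why shrinking that main-thread share (as ND did by offloading to the SPEs) matters more than adding cores.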
My whole post was meant to be a light-hearted, what-if kind of offhand observation, not an Asperger's episode. I've worked in IT for over a quarter century, and I wouldn't think to bore the crap out of everyone by going into excessive detail about something so utterly meaningless.