Pemalite said:
DonFerrari said:

You haven't seen what CGI posts of his next-gen work... it is a real leap over this one.

Even the stuff CGI works with isn't the ceiling.

We still need to get over the entire "uncanny valley" issue.

CGI-Quality said:

They don’t need unlimited power. Common misconception. It’s not just about specs; it’s about resources, and where they put those in terms of engine work and the like.

It is also about the efficient use of those resources... which is the entire console development paradigm to begin with... Games will max out the hardware early on, but they do so wastefully.
It takes time to allocate your limited resources to obtain the best overall image possible.

There are plenty of examples where games ditched expensive rendering techniques (e.g. HDR lighting in Halo 3) and used that extra processing to bolster image quality elsewhere, moving to baked/pre-calculated lighting in the successor title (Halo: Reach).
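To make that trade-off concrete, here is a toy frame-budget sketch in Python (a minimal illustration only; every millisecond figure below is invented, not a real Halo profile):

# Toy frame-time budget. The point: a console frame is a fixed budget
# (~33.3 ms at 30 fps), so dropping one expensive technique frees time
# you can spend elsewhere. All numbers are made up for illustration.

FRAME_BUDGET_MS = 1000.0 / 30.0  # ~33.3 ms per frame at 30 fps

# Hypothetical per-frame costs (ms) before and after swapping real-time
# HDR lighting for baked/pre-calculated lighting.
costs_before = {"hdr_lighting": 8.0, "geometry": 12.0, "post_fx": 6.0, "other": 7.0}
costs_after = {"baked_lighting": 1.5, "geometry": 12.0, "post_fx": 6.0, "other": 7.0}

def headroom(costs):
    # Frame time left over after the listed passes.
    return FRAME_BUDGET_MS - sum(costs.values())

print("Headroom before: %.1f ms" % headroom(costs_before))  # ~0.3 ms
print("Headroom after:  %.1f ms" % headroom(costs_after))   # ~6.8 ms for AA, resolution, etc.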

Chazore said:

Considering how AMD's previous and current GPUs have fared, I'm not really sure they'll outright crush the 1080 Ti, let alone go 50/50 with Nvidia's next line. Especially while being cheap enough for both consumers and two of the big three to afford in a £400 next-gen system.

AMD's GPUs are very compute driven; there are certain tasks that AMD's hardware will excel at.

Plus, in a console it's really a non-issue; developers will leverage their limited resources to maximum effect anyway.

Trumpstyle said:

About your comment on multi-chip: from everything I've read it sucks for gaming, and I expect a 0% chance we see anything like it on desktop or console. It's more for high-end professional work.

Tell that to Threadripper.
It's essentially AMD taking its multi-GPU, single-card approach, but hopefully making it a little less driver dependent.


Trumpstyle said:

Really dude??? You want a source for the 3x transistor density increase?

https://www.semiwiki.com/forum/content/6713-14nm-16nm-10nm-7nm-what-we-know-now.html

Estimated transistor density for TSMC 16nm: 28.2. For 7nm: 116.7 (this is the MTr/mm2 metric; no clue what that means, but it's supposed to be accurate). 116.7/28.2 = 4.14x increase.

TSMC's own website claims 10nm gives a 2x density increase over 16nm, and 7nm gives a 1.6x increase over 10nm. That's 3.2x in total.

http://www.tsmc.com/english/dedicatedFoundry/technology/logic.htm

Don't get the wrong idea, I am not calling you a liar. I just wanted the source for my own reading.

But thank you.
I shall peruse it when I have free time. :)
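In the meantime, the arithmetic in the quote is easy to sanity check (a quick Python sketch; the figures are just the quoted SemiWiki estimates and TSMC's claimed scaling factors, nothing I have verified independently):

# MTr/mm^2 = millions of transistors per square millimetre.
tsmc_16nm = 28.2   # SemiWiki density estimate for TSMC 16nm
tsmc_7nm = 116.7   # SemiWiki density estimate for TSMC 7nm
print("SemiWiki route: %.2fx" % (tsmc_7nm / tsmc_16nm))  # ~4.14x

gain_16_to_10 = 2.0  # TSMC's claim: 10nm is 2x the density of 16nm
gain_10_to_7 = 1.6   # TSMC's claim: 7nm is 1.6x the density of 10nm
print("TSMC route: %.1fx" % (gain_16_to_10 * gain_10_to_7))  # 3.2x

Either way it lands between 3x and 4x, which covers the figure being argued over.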

 

Trumpstyle said:
Note that not everything scales perfectly in a chip. But GPU cores and CPU cores do, and that is what we care about.

Uh. Not exactly. There are diminishing returns when adding more CPU cores... the main reason being resource contention.
GPUs have the same issue; it just isn't as pronounced, because the workload is ridiculously parallel.
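A minimal sketch of that diminishing-returns point, using Amdahl's law as the standard toy model (the parallel fractions below are assumptions for illustration; real contention only makes the CPU side worse):

def amdahl_speedup(parallel_fraction, cores):
    # Ideal speedup when only parallel_fraction of the work uses all cores.
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

for cores in (2, 4, 8, 16, 32):
    cpu = amdahl_speedup(0.80, cores)  # assume an 80%-parallel game/CPU workload
    gpu = amdahl_speedup(0.99, cores)  # assume a 99%-parallel rendering workload
    print("%2d cores: CPU-ish %4.1fx, GPU-ish %4.1fx" % (cores, cpu, gpu))

At 32 cores the 80%-parallel workload only gets ~4.4x, while the 99%-parallel one gets ~24x. That is the gap between CPU and GPU scaling in one line of maths.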

I didn't mean CGI's work is the ceiling, just that if he saw the occasional posts CGI makes about the next-gen project he is working on, the other guy would see that we still have a generational leap (or several) to make.



duduspace11 "Well, since we are estimating costs, Pokemon Red/Blue did cost Nintendo about $50m to make back in 1996"

http://gamrconnect.vgchartz.com/post.php?id=8808363

Mr Puggsly: "Hehe, I said good profit. You said big profit. Frankly, not losing money is what I meant by good. Don't get hung up on semantics"

http://gamrconnect.vgchartz.com/post.php?id=9008994

Azzanation: "PS5 wouldn't sold out at launch without scalpers."