
"Update" When Can Sony Deliver A True Generational Leap? - Digital Foundry & Gamer NX

 

Do you agree?

Yes: 12 (57.14%)
No: 2 (9.52%)
In between: 4 (19.05%)
See result: 3 (14.29%)
Total: 21
Lafiel said:
CGI-Quality said:
Some of you really don’t expect graphical jumps? You guys have no idea. We aren’t even CLOSE to the ceiling.

At some point designers won't be able to increase the level of detail even if there's unlimited processing power, as they can't put unlimited work hours into a scene. I think it will be tough to trump even GoW in that aspect.

Obviously I'm aware that current-gen lighting models are nowhere near "life-like", yet the improvements real physical lighting/path tracing can offer are pretty subtle at this point (save for reflections). The only thing I really need is more bounce/diffuse light / global illumination, as unnaturally dark areas/corners have a negative influence on gameplay.

https://devblogs.nvidia.com/introduction-nvidia-rtx-directx-raytracing/

It's already here on ... kind of consumer level. 
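To put a bit of concrete shape on "more bounce/diffuse light", here is a minimal toy sketch of a single-bounce diffuse global illumination estimate. It's plain Python with made-up stand-in helpers (sample_hemisphere, incoming_light), not the RTX/DXR API from the link above:

import math, random

def sample_hemisphere(normal):
    # Rejection-sample a unit direction, then flip it into the hemisphere
    # above the surface normal.
    while True:
        d = [random.uniform(-1.0, 1.0) for _ in range(3)]
        if 0.0 < sum(x * x for x in d) <= 1.0:
            break
    length = math.sqrt(sum(x * x for x in d))
    d = [x / length for x in d]
    if sum(a * b for a, b in zip(d, normal)) < 0.0:
        d = [-x for x in d]
    return d

def incoming_light(point, direction):
    # Stand-in for tracing a ray into the scene; here just a constant "sky".
    return 1.0

def diffuse_bounce(point, normal, samples=256):
    # Monte Carlo estimate of one diffuse bounce: average the cosine-weighted
    # light arriving over the hemisphere above the surface point.
    total = 0.0
    for _ in range(samples):
        d = sample_hemisphere(normal)
        cos_term = max(0.0, sum(a * b for a, b in zip(d, normal)))
        total += incoming_light(point, d) * cos_term
    # Uniform hemisphere sampling has pdf 1/(2*pi); with a Lambertian BRDF of
    # albedo/pi the factors reduce to 2 (albedo folded out for brevity).
    return 2.0 * total / samples

print(diffuse_bounce(point=(0.0, 0.0, 0.0), normal=(0.0, 0.0, 1.0)))  # ~1.0

Real engines fire those hemisphere rays into the actual scene; the point of the hardware ray tracing in the link is essentially to make enough of those rays affordable per frame that dark corners pick up believable indirect light.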



Lafiel said:
CGI-Quality said:
Some of you really don’t expect graphical jumps? You guys have no idea. We aren’t even CLOSE to the ceiling.

At some point designers won't be able to increase the level of detail even if there's unlimited processing power, as they can't put unlimited work hours into a scene. I think it will be tough to trump even GoW in that aspect.

Obviously I'm aware that current-gen lighting models are nowhere near "life-like", yet the improvements real physical lighting/path tracing can offer are pretty subtle at this point (save for reflections). The only thing I really need is more bounce/diffuse light / global illumination, as unnaturally dark areas/corners have a negative influence on gameplay.

You haven't seen what CGI posts as works on next gen... it is really a leap over this one.



duduspace11 "Well, since we are estimating costs, Pokemon Red/Blue did cost Nintendo about $50m to make back in 1996"

http://gamrconnect.vgchartz.com/post.php?id=8808363

Mr Puggsly: "Hehe, I said good profit. You said big profit. Frankly, not losing money is what I meant by good. Don't get hung up on semantics"

http://gamrconnect.vgchartz.com/post.php?id=9008994

Azzanation: "PS5 wouldn't sold out at launch without scalpers."

Chazore said:
Trumpstyle said:

Yes, I believe so too. Except I'm getting more positive towards the Navi GPU. I think the Radeon 680 will beat the 1080 Ti from Nvidia and have similar performance to the GeForce 2080. The PS5/Xbox Two GPUs will probably beat the GeForce 1080 Ti by about 10-20% :). So a bit higher than a high-end PC.

So AMD is going to totally crush Nvidia in one fell swoop along with PS5?

Nope, lol. I expect Nvidia's GPU architecture on 7nm transistor technology to be superior to AMD's. But I'm feeling positive that the Navi GPU on 7nm will beat Nvidia's upcoming Turing GPU architecture, which will be released Q3 this year on 12nm. So a midrange Navi GPU on a 200-250 mm² die with GDDR6 will beat the GeForce 1080 Ti by about 20%, maybe even more, and will probably offer similar performance to the GeForce 2080/1180. And we will see this kind of GPU in the next-gen consoles.

I was concerned before that a midrange Navi GPU would suck and be a GeForce 1070/Vega 56 performance card.

Last edited by Trumpstyle - on 17 April 2018

6x master league achiever in starcraft2

Beaten Sigrun on God of war mode

Beaten DOOM ultra-nightmare with NO endless ammo-rune, 2x super shotgun and no decoys on ps4 pro.

1-0 against Grubby in Wc3 frozen throne ladder!!

Trumpstyle said:

Nope, lol. I expect Nvidia's GPU architecture on 7nm transistor technology to be superior to AMD's. But I'm feeling positive that the Navi GPU that will be on 7nm will beat Nvidia's upcoming Turing GPU architecture, which will be released Q3 this year on 12nm. So a midrange Navi GPU on a 200-250 mm² die with GDDR6 will beat the GeForce 1080 Ti by about 20%, maybe even more, and will probably offer similar performance to the GeForce 2080/1180. And we will see this kind of GPU in the next-gen consoles.

I was concerned before that a midrange Navi GPU would suck and be a GeForce 1070/Vega 56 performance card.

Considering how AMD's previous and current GPUs have fared, I'm not really sure they'll outright crush the 1080 Ti, let alone be 50/50 with Nvidia's next line. Especially while being cheap enough for both consumers and 2 of the big 3 to afford making a £400 next-gen system.



Step right up come on in, feel the buzz in your veins, I'm like an chemical electrical right into your brain and I'm the one who killed the Radio, soon you'll all see

So pay up motherfuckers you belong to "V"

Pemalite said:
Trumpstyle said:

Yes, I believe so too. Except I'm getting more positive towards the Navi GPU. I think the Radeon 680 will beat the 1080 Ti from Nvidia and have similar performance to the GeForce 2080. The PS5/Xbox Two GPUs will probably beat the GeForce 1080 Ti by about 10-20% :). So a bit higher than a high-end PC.



That is 64 CUs per chip. You can have more than one chip working together.



Trumpstyle said:

7nm gives about 3x the transistors in the same die size compared to the 16nm in the PS4 Pro.

Source?







About your comment on multichip: from everything I've read it sucks for gaming, and I expect a 0% chance we see anything like it on desktop or console. It's more for high-end professional work.

 

Really dude??? You want a source for the 3x transistor density increase?

 

https://www.semiwiki.com/forum/content/6713-14nm-16nm-10nm-7nm-what-we-know-now.html

Estimated transistor density for TSMC 16nm: 28.2. For 7nm: 116.7 (this is the MTr/mm² metric, i.e. million transistors per mm², and it's supposed to be accurate). 116.7/28.2 = 4.13x increase.

TSMC's own website claims 10nm gives a 2x increase and 7nm gives a further 1.6x increase compared to 10nm. That's 3.2x total.

http://www.tsmc.com/english/dedicatedFoundry/technology/logic.htm

Note that not everything scales perfectly in a chip. But GPU cores and CPU cores do, and that is what we care about.
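For anyone who wants to check the arithmetic, the numbers above chain like this (just the figures quoted from SemiWiki and TSMC, nothing more):

# Density figures quoted above (MTr/mm^2 = million transistors per mm^2).
semiwiki_16nm = 28.2
semiwiki_7nm = 116.7
print(semiwiki_7nm / semiwiki_16nm)   # ~4.1x, the SemiWiki-based estimate

# TSMC's own claims: 16nm -> 10nm is ~2.0x, 10nm -> 7nm is a further ~1.6x.
print(2.0 * 1.6)                      # 3.2x when the two steps are chained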




DonFerrari said:
Lafiel said:

At some point designers won't be able to increase the level of detail even if there's unlimited processing power, as they can't put unlimited work hours into a scene. I think it will be tough to trump even GoW in that aspect.

Obviously I'm aware that current-gen lighting models are nowhere near "life-like", yet the improvements real physical lighting/path tracing can offer are pretty subtle at this point (save for reflections). The only thing I really need is more bounce/diffuse light / global illumination, as unnaturally dark areas/corners have a negative influence on gameplay.

You haven't seen what CGI posts as works on next gen... it is really a leap over this one.

Even the stuff CGI works with isn't the ceiling.

We still need to get over the entire "uncanny valley" issue.

CGI-Quality said:
Lafiel said:

At some point designers won't be able to increase the level of detail even if there's unlimited processing power, as they can't put unlimited work hours into a scene. I think it will be tough to trump even GoW in that aspect.

Obviously I'm aware that current-gen lighting models are nowhere near "life-like", yet the improvements real physical lighting/path tracing can offer are pretty subtle at this point (save for reflections). The only thing I really need is more bounce/diffuse light / global illumination, as unnaturally dark areas/corners have a negative influence on gameplay.

They don’t need unlimited power. That’s a common misconception. It’s not just about specs; it’s about resources and where they put them regarding engine work and the like.

It is also the efficient use of those resources... which is the entire development paradigm of consoles to begin with. Games will max out the hardware early on, but they do so wastefully.
It takes time to allocate your limited resources to obtain the best overall image possible.

There are plenty of examples where games ditched expensive rendering techniques (e.g. HDR lighting in Halo 3) and used that extra processing to bolster image quality elsewhere, whilst using baked/pre-calculated lighting in the successor title (Halo Reach).
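A rough sketch of that trade-off, in toy Python rather than any real engine code: compute the expensive lighting once up front (the "bake"), then each frame only does a cheap lookup, which frees the per-frame budget for other effects.

import time

def expensive_lighting(surface_point):
    # Stand-in for a costly lighting calculation (e.g. many light samples).
    return sum(i * 1e-6 for i in range(2000))

surface_points = [(x, 0, 0) for x in range(200)]

# Dynamic: pay the full cost every frame.
start = time.perf_counter()
frame = [expensive_lighting(p) for p in surface_points]
dynamic_ms = (time.perf_counter() - start) * 1000.0

# Baked: precompute once (offline or at load time), then just look it up.
lightmap = {p: expensive_lighting(p) for p in surface_points}
start = time.perf_counter()
frame = [lightmap[p] for p in surface_points]
baked_ms = (time.perf_counter() - start) * 1000.0

print(f"dynamic per frame: {dynamic_ms:.3f} ms, baked lookup: {baked_ms:.3f} ms")

The catch, of course, is that baked lighting can't react to things that move, which is why it's a trade-off rather than a free win.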

Chazore said:

Considering how AMD's previous and current GPUs have fared, I'm not really sure they'll outright crush the 1080 Ti, let alone be 50/50 with Nvidia's next line. Especially while being cheap enough for both consumers and 2 of the big 3 to afford making a £400 next-gen system.

AMD's GPUs are very compute driven; there are certain tasks that AMD's hardware will excel at.

Plus in a console it's really a non-issue; developers will leverage their limited resources to maximum effect anyway.

Trumpstyle said:

About your comment on multichip: from everything I've read it sucks for gaming, and I expect a 0% chance we see anything like it on desktop or console. It's more for high-end professional work.

Tell that to Threadripper.
It's essentially AMD taking its multi-GPU, single-card approach, but hopefully making it a little less driver dependent.


Trumpstyle said:

Really dude??? You want a source for the 3x transistor density increase?

https://www.semiwiki.com/forum/content/6713-14nm-16nm-10nm-7nm-what-we-know-now.html

Estimated transistor density for TSMC 16nm: 28.2. For 7nm: 116.7 (this is the MTr/mm² metric, i.e. million transistors per mm², and it's supposed to be accurate). 116.7/28.2 = 4.13x increase.

TSMC's own website claims 10nm gives a 2x increase and 7nm gives a further 1.6x increase compared to 10nm. That's 3.2x total.

http://www.tsmc.com/english/dedicatedFoundry/technology/logic.htm

Don't get the wrong idea, I am not calling you a liar. I just wanted the source for my own reading.

But thank you.
I shall peruse it when I have free time. :)

 

Trumpstyle said:
Note that not everything scales perfectly in a chip. But GPU cores and CPU cores do, and that is what we care about.

Uh. Not exactly. There are diminishing returns when adding more CPU cores... the main reason is resource contention.
For GPUs the same issue exists, just not to the same degree, as the workload is ridiculously parallel.
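One back-of-the-envelope way to picture those diminishing returns (my illustration, not something from the thread) is Amdahl's law: whatever fraction of a frame's work is serial or fighting over shared resources caps the speedup, no matter how many cores you add.

def amdahl_speedup(cores, serial_fraction=0.10):
    # Amdahl's law: the serial/contended part of the work caps the speedup.
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for n in (2, 4, 8, 16, 32):
    print(f"{n:>2} cores -> {amdahl_speedup(n):.2f}x")
# With just 10% serial work, 32 cores give roughly 7.8x, nowhere near 32x.
# GPU workloads are far more parallel, so they sit much closer to the ideal.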



--::{PC Gaming Master Race}::--

Pemalite said:
DonFerrari said:

You haven't seen what CGI posts as works on next gen... it is really a leap over this one.

Even the stuff CGI works with isn't the ceiling.

We still need to get over the entire "uncanny valley" issue.

CGI-Quality said:

They don’t need unlimited power. That’s a common misconception. It’s not just about specs; it’s about resources and where they put them regarding engine work and the like.

It is also the efficient use of those resources... which is the entire development paradigm of consoles to begin with. Games will max out the hardware early on, but they do so wastefully.
It takes time to allocate your limited resources to obtain the best overall image possible.

There are plenty of examples where games ditched expensive rendering techniques (e.g. HDR lighting in Halo 3) and used that extra processing to bolster image quality elsewhere, whilst using baked/pre-calculated lighting in the successor title (Halo Reach).

Chazore said:

Considering how AMD's previous and current GPUs have fared, I'm not really sure they'll outright crush the 1080 Ti, let alone be 50/50 with Nvidia's next line. Especially while being cheap enough for both consumers and 2 of the big 3 to afford making a £400 next-gen system.

AMD's GPUs are very compute driven; there are certain tasks that AMD's hardware will excel at.

Plus in a console it's really a non-issue; developers will leverage their limited resources to maximum effect anyway.

Trumpstyle said:

About your comment on multichip: from everything I've read it sucks for gaming, and I expect a 0% chance we see anything like it on desktop or console. It's more for high-end professional work.

Tell that to Threadripper.
It's essentially AMD taking its multi-GPU, single-card approach, but hopefully making it a little less driver dependent.


Trumpstyle said:

Really dude??? You want a source for the 3x transistor density increase?

https://www.semiwiki.com/forum/content/6713-14nm-16nm-10nm-7nm-what-we-know-now.html

Estimated transistor density for TSMC 16nm: 28.2. For 7nm: 116.7 (this is the MTr/mm² metric, i.e. million transistors per mm², and it's supposed to be accurate). 116.7/28.2 = 4.13x increase.

TSMC's own website claims 10nm gives a 2x increase and 7nm gives a further 1.6x increase compared to 10nm. That's 3.2x total.

http://www.tsmc.com/english/dedicatedFoundry/technology/logic.htm

Don't get the wrong idea, I am not calling you a liar. I just wanted the source for my own reading.

But thank you.
I shall peruse it when I have free time. :)

 

Trumpstyle said:
Note that not everything scales perfectly in a chip. But GPU cores and CPU cores do, and that is what we care about.

Uh. Not exactly. There are diminishing returns when adding more CPU cores... the main reason is resource contention.
For GPUs the same issue exists, just not to the same degree, as the workload is ridiculously parallel.

I didn't mean that CGI's work is the ceiling, just that if the other guy saw the occasional posts CGI makes about the next gen he is working on, he would see that we still have a generational leap to make (or several).


