
Forums - Sony Discussion - Linus - I’ve Disappointed and Embarrassed Myself (re: PS5 SSD)

HollyGamer said:

Remember the PS5 has a smaller compute unit count than the Xbox SX, so its RAM bandwidth is enough for that. The Xbox SX also has split memory speeds: 10 GB @ 560 GB/s and 6 GB @ 336 GB/s.

The number of compute units is a non-issue.

If you have 16 compute units @ 2GHz, then in theory they perform exactly the same as 32 compute units @ 1GHz; throughput is a function of clockrate x compute units... And because of that, their bandwidth requirements would be identical.

The thing with the Playstation 5 is that, in general, it has about 20% less GPU capability on many fronts in an ideal situation; it may even be greater than that depending on how the Playstation 5 manages clockrates when the entire system is loaded down.
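Both claims above can be sanity-checked with quick arithmetic. The CU counts and clocks below are the publicly stated figures (36 CUs @ up to 2.23GHz for PS5, 52 CUs @ 1.825GHz for Series X); the 64-shaders-per-CU and 2-ops-per-clock factors are standard assumptions for RDNA-class GPUs, so treat this as a sketch:

```python
# Peak FP32 throughput scales with compute_units x clockrate, so a
# 16 CU @ 2GHz part and a 32 CU @ 1GHz part are theoretically equal.
def peak_tflops(compute_units, clock_ghz, shaders_per_cu=64, ops_per_clock=2):
    """Peak FP32 TFLOPs for an RDNA-style GPU (FMA counts as 2 ops)."""
    return compute_units * shaders_per_cu * ops_per_clock * clock_ghz / 1000.0

assert peak_tflops(16, 2.0) == peak_tflops(32, 1.0)  # both 4.096 TFLOPs

# Public spec-sheet figures for the two consoles:
ps5 = peak_tflops(36, 2.23)   # ~10.28 TFLOPs (at max boost clock)
xsx = peak_tflops(52, 1.825)  # ~12.15 TFLOPs
print(1 - ps5 / xsx)          # ~0.154: PS5 is ~15% below, more if it downclocks
```

Note the deficit is ~15% at the PS5's maximum boost clock, which is why "about 20%, maybe greater" is a fair characterization once variable clocks are considered.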

The Xbox Series X doesn't actually have split memory; it's a single memory pool, but a large part of that pool runs at a slower rate. Developers will need to ensure that bandwidth-intensive parts of a game operate in the faster memory and less demanding parts operate in the slower memory. Microsoft is also using a chunk of that slower memory for the OS/background tasks, so the impact of the slower pool should be lessened for gaming.
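As a rough sketch of why that placement matters, effective bandwidth depends on what fraction of traffic lands in each region. The sizes and speeds are the figures quoted above; the traffic mix is purely an illustrative assumption:

```python
# Series X memory regions (figures from the quoted post).
GPU_OPTIMAL_BW = 560.0  # GB/s, the 10 GB region
STANDARD_BW = 336.0     # GB/s, the 6 GB region

def blended_bandwidth(fast_fraction):
    """Average bandwidth if fast_fraction of accesses hit the 560 GB/s region."""
    return fast_fraction * GPU_OPTIMAL_BW + (1.0 - fast_fraction) * STANDARD_BW

print(blended_bandwidth(1.0))  # 560.0: all traffic in the GPU-optimal region
print(blended_bandwidth(0.8))  # 515.2: 20% of traffic spills to the slower region
```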

Either way, the Xbox Series X does without a doubt have the bandwidth advantage... But this goes back to my point that there is more to both the Playstation 5 and the Xbox Series X than the raw specification-sheet numbers might imply.

HollyGamer said:

Also, it doesn't mean that if the Xbox has more CUs the PS5 cannot do simple lighting effects; the CUs are already powerful enough to do CG rendering on both. The problem is that the more powerful the GPU, the more data you need, especially if you want to make realistic, high-texture assets.

Not really.
The Xbox Series X should be able to include higher-quality implementations of global illumination than the Playstation 5, whilst the Playstation 5 has higher-quality texturing.

So whilst the Playstation 5's GPU is formidable (remember, MPixel fillrate on the PS5's GPU is better than the Xbox Series X's, and probably geometry throughput as well), the Xbox Series X does hold the overall edge.

How much is it going to matter? Like I keep saying, we need to wait on the games and see what developers do with each piece of hardware.

HollyGamer said:

Xbox will do great; it just depends on the game engine developers are optimizing for.

It also depends on the type of game; for example, big open-world games that rely on lots of objects and geometry will probably run best on the Playstation 5.

Games that do a lot of heavy scripting, physics and impressive lighting, like a corridor horror shooter... will probably be best on Xbox Series X hardware.

That's provided developers optimize for the various hardware nuances; there are never any guarantees of that.

HollyGamer said:

I never said there are no impressive games on Switch, but most third-party games on Switch are ported from PS3 and PS4, so their game engines still copy methods built around mechanical data structures, with no real-time data path for assets.

I was using it as an example that not all Switch games are built with mechanical drives in mind; in fact, I would say that no Nintendo exclusive is, as Nintendo has never had a mechanical hard drive in a mainstream console release before.

You had the Nintendo 64DD... But that was more of a peripheral.



--::{PC Gaming Master Race}::--


The problem is that the raw performance and GPU capability of the PS5 and Xbox SX are different. The amount of processing hardware on the PS5 is about 18% smaller than the Xbox's, so it doesn't need more bandwidth than that; the PS5 has more than enough for that class of GPU. Even the Nvidia RTX 2080 has the same RAM bandwidth as the PS5.

Like you said, the Xbox memory is split: one pool for the system/OS and the rest for graphics. They need to ensure the slower pool keeps up with the faster pool and vice versa; in the end the result will be about the same as the PS5's RAM speed.

Global illumination can also be produced on the CPU; in fact it is a software solution rather than a GPU solution. With the custom I/O, the PS5's CPU is freed of workload and can do more calculation on the CPU side. Global illumination is software ray tracing, so it can be done on the CPU. I can see the Xbox doing the same thing, but the Xbox has more GPU compute, so I don't see why it would do the same thing as the PS5 unless they are trying to add more complex physics calculations or use path tracing. I can see the Xbox relying a lot on path tracing rather than GI, and the PS5 relying more on GI.

The benefit of fast asset streaming isn't only applicable to open environments; you can add a lot of detail and high-quality assets to a closed environment without affecting performance (the UE5 inside-the-cave demo). The faster the speed, the more data and assets you can stream/transfer and the faster you can calculate.

The idea of the SSD and custom I/O is that you don't have to do a lot of mapping, rasterization and geometry calculation on the GPU anymore; all rendering has been pre-baked into the virtual geometry.

You can check this research by Tim Sweeney.

The Nintendo GameCube, Wii and Wii U used mechanical disc input, and a lot of their games still use a loading methodology. They should have broken from that method with the Switch, but I think there is more than the SSD behind UE5; I can see the I/O doing most of the work.





HollyGamer said:

The problem is that the raw performance and GPU capability of the PS5 and Xbox SX are different. The amount of processing hardware on the PS5 is about 18% smaller than the Xbox's, so it doesn't need more bandwidth than that; the PS5 has more than enough for that class of GPU. Even the Nvidia RTX 2080 has the same RAM bandwidth as the PS5.

Not in all scenarios. The Playstation 5 might have the edge in geometric capability, and certainly in the MPixel fillrate department (the PS5 has the same number of ROPs as the Xbox Series X, and fillrate is a function of ROPs x clockrate).
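The fillrate point can be made concrete. The 64-ROP figure for both GPUs and the clocks below are commonly cited spec numbers, so treat this as a sketch rather than gospel:

```python
# Peak pixel fillrate is ROPs x clockrate.
def fillrate_gpix(rops, clock_ghz):
    """Peak pixel fillrate in Gigapixels/s."""
    return rops * clock_ghz

ps5 = fillrate_gpix(64, 2.23)   # ~142.7 GPix/s
xsx = fillrate_gpix(64, 1.825)  # ~116.8 GPix/s
assert ps5 > xsx  # same ROP count, higher clock: the PS5 wins this metric
```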

Does it have enough GPU? I would argue that we can never have enough; lighting is still very much an approximation... Even the Unreal Engine 5 global illumination demonstration is a testament to that.

HollyGamer said:

Like you said, the Xbox memory is split: one pool for the system/OS and the rest for graphics. They need to ensure the slower pool keeps up with the faster pool and vice versa; in the end the result will be about the same as the PS5's RAM speed.

The memory is actually a single pool; it's just mapped as different memory regions that get abstracted into an API for developers to leverage.

Will it be the same as the Playstation 5's memory speed? No way. It still has an advantage, especially if you load up the 10GB section with heavy amounts of alpha effects at 4K.

That said... the Playstation 5 will need less bandwidth as it generally has fewer GPU and CPU resources anyway, but I think the main cutback on this front will likely only be resolution and maybe framerate.

HollyGamer said:

Global illumination can also be produced on the CPU; in fact it is a software solution rather than a GPU solution. With the custom I/O, the PS5's CPU is freed of workload and can do more calculation on the CPU side. Global illumination is software ray tracing, so it can be done on the CPU. I can see the Xbox doing the same thing, but the Xbox has more GPU compute, so I don't see why it would do the same thing as the PS5 unless they are trying to add more complex physics calculations or use path tracing. I can see the Xbox relying a lot on path tracing rather than GI, and the PS5 relying more on GI.

Correct, but you don't want to do it on the CPU. CPUs are fantastic at highly complex, serialized tasks, whereas the GPU is very adept at lots of highly parallel "simple" tasks. Granted, GPUs have become better at more complex calculations over the years and CPUs have increased thread counts substantially, but those differences and specializations still exist.

Global illumination is actually an umbrella term for a lighting implementation; it includes software approaches like you alluded to... or it can be done in hardware, like on nVidia's RTX ray-tracing cores.

HollyGamer said:

The benefit of fast asset streaming isn't only applicable to open environments; you can add a lot of detail and high-quality assets to a closed environment without affecting performance (the UE5 inside-the-cave demo). The faster the speed, the more data and assets you can stream/transfer and the faster you can calculate.

You can do all that by dumping it all into memory to start with, completely removing the need for an SSD. But then you need to invest in more expensive RAM, when NAND is far cheaper and comparatively more plentiful. It's basically a cost/performance trade-off.

If we could... we wouldn't have SSDs or RAM; everything would be made of eSRAM/eDRAM/L0/L1/L2/L3/L4/L5 caches.
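A back-of-the-envelope comparison shows the trade-off. The per-GB prices here are made-up illustrative numbers, not real bill-of-materials figures:

```python
# "Hold the whole game in RAM" vs "small RAM + stream from NAND".
GDDR6_PER_GB = 10.0  # USD per GB, assumed for illustration only
NAND_PER_GB = 0.10   # USD per GB, assumed for illustration only

def bom_cost(ram_gb, nand_gb):
    """Memory cost of a console that holds ram_gb in RAM, nand_gb on NAND."""
    return ram_gb * GDDR6_PER_GB + nand_gb * NAND_PER_GB

all_in_ram = bom_cost(100, 0)     # a 100 GB game resident in RAM
ram_plus_ssd = bom_cost(16, 100)  # 16 GB RAM, stream the rest from NAND
print(all_in_ram, ram_plus_ssd)   # the SSD approach is far cheaper
```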

HollyGamer said:

The idea of the SSD and custom I/O is that you don't have to do a lot of mapping, rasterization and geometry calculation on the GPU anymore; all rendering has been pre-baked into the virtual geometry.

You can check this research by Tim Sweeney.

SSDs do NOT feature the required rendering pipelines to perform the appropriate geometry calculations; the GPU still has the geometry processing capabilities... and rather substantial ones at that.

I think the tweeter of that post is taking Sweeney's tweets out of context.

HollyGamer said:

The Nintendo GameCube, Wii and Wii U used mechanical disc input, and a lot of their games still use a loading methodology. They should have broken from that method with the Switch, but I think there is more than the SSD behind UE5; I can see the I/O doing most of the work.

I am talking about mechanical hard drives.

The Gamecube, Wii and Wii U were dumping as much data as they could from optical disc to RAM, side-stepping a mechanical hard drive entirely.

Nintendo has literally never had a mechanical hard drive for developers to build games against, unless you count something like the Nintendo 64DD.

But that brings me back to the Nintendo 64... That system was entirely solid state; the cart could fill the Nintendo 64's RAM 30 or more times per second... Granted, it was limited in other ways, but ask yourself: what benefits did that bring to gaming? What paradigm shifts did it bring to how games were developed and showcased?





Lumen is far ahead of what we have seen this gen compared to pre-baked lighting. I'd say it's equal to path tracing in what we have seen so far, yet it's cheap to implement. If path tracing were used, I bet it would be limited to a few rays; path tracing is good when it has many rays and good denoising, and the difference is very noticeable in small details.

The GPU is more than enough for budget consoles; on a console you have to think about the other costs. The CPU is very powerful this gen, and they needed to balance it against the fast SSD. All the components of the next-gen consoles are top-tier except for the RAM (but then again, RAM is also expensive); they only managed to give twice the amount of RAM compared to last gen.

The GPUs in the PS5/Xbox are equal to a high-end GPU; that is more than enough. The more powerful the GPU, the more RAM/bandwidth and the faster the SSD they need to provide. I don't know what the ratio is, but both are targeting different goals. You are correct, though: more RAM is preferable.

Especially if they use traditional memory mapping, dumping all data from storage to RAM, the RAM will be a limiting factor for a split memory pool, especially on the Xbox SX. It will be hard to manage, because we don't know how much RAM games need on a per-game basis. They also need back-and-forth communication with system RAM for other calculations on the CPU and GPGPU, while keeping pace with the slower system RAM.

The GPU power will probably make the PS5 render at a lower resolution, but with high-quality assets and textures the PS5 will have high image quality and detail. So we will probably see lower resolution on the PS5 but with better IQ and better textures. For frame rates I think developers will stick to 30fps or 60fps, so they will either lower the resolution or cut back some assets and textures.

Yup, I agree about dumping everything into RAM; that's how the current gen works: less streaming data and more mesh, render-target and buffer data. But it makes the GPU work harder and uses more RAM; this is the alternative way for the Xbox.

The Reyes method is the method where culling and geometry are calculated on a micropolygon basis; it's where the idea for geometry virtualization came from. I should give you the link to the blog by Brian Karis (lead of rendering for Unreal 5): http://graphicrants.blogspot.com/2009/01/virtual-geometry-images.html

http://hhoppe.com/proj/gim/

The Wii U, Wii and GameCube skipping an HDD is absurd; it means they dump a small amount of data to RAM and need to re-read the disc again whenever they run a certain level or event. It means the Switch is a far better storage solution than I expected; the cartridge should be able to stream a limited amount of data if developers want to utilize it. But then we also need to consider the heavy hardware decompression workload.

 

Last edited by HollyGamer - on 09 June 2020

HollyGamer said:

Lumen is far ahead of what we have seen this gen compared to pre-baked lighting. I'd say it's equal to path tracing in what we have seen so far, yet it's cheap to implement. If path tracing were used, I bet it would be limited to a few rays; path tracing is good when it has many rays and good denoising, and the difference is very noticeable in small details.

Plenty of games have used full global illumination this generation; I don't think I need to go back and provide the evidence for this again and again, but I'm happy to do so.
But Lumen is not "far ahead" of anything we have seen this generation; granted, most games pre-calculate GI and "bake" it into a scene, but not every game.

Unreal Engine 5 is taking a hybrid approach... It uses screen-space data for small, micro-scale assets (again, nothing new) in conjunction with mesh signed distance fields for medium-scale assets (also nothing new), and then uses voxel-based global illumination for the large-scale assets (aka SVOGI; nothing new).
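The scale-based dispatch described above can be sketched as a simple selector. The size thresholds here are invented purely for illustration; UE5's actual heuristics are internal to the engine:

```python
# Pick a GI technique by feature scale, mirroring the hybrid approach above.
def gi_technique(feature_size_m):
    if feature_size_m < 0.1:    # micro scale: reuse screen-space data
        return "screen-space traces"
    elif feature_size_m < 2.0:  # medium scale
        return "mesh signed distance fields"
    else:                       # large scale (SVOGI-like)
        return "voxel global illumination"

print(gi_technique(0.05))  # screen-space traces
print(gi_technique(10.0))  # voxel global illumination
```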

What is new is leveraging all three techniques at the same time, mostly out of a need to save on processing time.

But if we had the additional horsepower, we could have sidestepped this entirely... And we might still sidestep this entirely once developers play around with the hardware and learn what works best.

The really interesting and groundbreaking part of the UE5 demo, to me, was Nanite... Those micro details in the geometry really made the screen-space lighting pop.

HollyGamer said:

The GPU is more than enough for budget consoles; on a console you have to think about the other costs. The CPU is very powerful this gen, and they needed to balance it against the fast SSD. All the components of the next-gen consoles are top-tier except for the RAM (but then again, RAM is also expensive); they only managed to give twice the amount of RAM compared to last gen.

I am well aware that consoles are cost-sensitive devices... a statement I often make on these forums.

But as a consumer, the cost to build isn't important; more for your dollar is always better from a consumer's point of view.

I agree on the RAM; I don't think its limitations will become readily apparent until the latter half of the 9th gen, though, as the SSDs will go some way towards hiding them.
But I think you can agree that such hard limitations tend to make developers adventurous and produce some impressive technical feats.

HollyGamer said:

The GPUs in the PS5/Xbox are equal to a high-end GPU; that is more than enough. The more powerful the GPU, the more RAM/bandwidth and the faster the SSD they need to provide. I don't know what the ratio is, but both are targeting different goals. You are correct, though: more RAM is preferable.

Not all GPU/CPU tasks are RAM/memory-bandwidth sensitive. A lot are, but not all.
Either way, an SSD is never going to be fast enough to directly feed a GPU the bandwidth modern gaming requires; it can help us make better use of limited RAM pools, though.
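An order-of-magnitude comparison makes the point; these are the public raw (uncompressed) spec-sheet figures for both consoles:

```python
# Even the fastest console SSD is roughly two orders of magnitude
# below GPU memory bandwidth, so it cannot feed the GPU directly.
bandwidth_gb_s = {
    "PS5 SSD (raw)": 5.5,
    "Series X SSD (raw)": 2.4,
    "PS5 GDDR6": 448.0,
    "Series X GDDR6 (fast region)": 560.0,
}
ratio = bandwidth_gb_s["PS5 GDDR6"] / bandwidth_gb_s["PS5 SSD (raw)"]
print(round(ratio, 1))  # ~81.5x: RAM, not the SSD, feeds the GPU
```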

Console manufacturers had to strike a balance... Sony's balance leans slightly in favor of I/O and Microsoft's leans slightly in favor of computation. Who made the right choices? I guess we just need the games to definitively decide that.

HollyGamer said:

The GPU power will probably make the PS5 render at a lower resolution, but with high-quality assets and textures the PS5 will have high image quality and detail. So we will probably see lower resolution on the PS5 but with better IQ and better textures. For frame rates I think developers will stick to 30fps or 60fps, so they will either lower the resolution or cut back some assets and textures.

I think 60fps games will be more common in the 9th gen, afforded by the jump in CPU and GPU capabilities, but there are probably going to be a lot of AAA games that stick to 30fps due to developers prioritizing graphics over framerate, which is fine and entirely expected on consoles.

Resolution is probably going to be less of a priority... This generation the Playstation 4 Pro and Xbox One X were chasing resolution, expending a ton of resources in the process rather than significantly bolstering relative image quality over the base consoles... There is likely going to be a bigger emphasis on frame-reconstruction techniques, rendering games at a lower resolution and reconstructing to a higher one.
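The saving from reconstruction is easy to quantify; 1440p as the internal resolution is just a common example here, not a claim about any specific game:

```python
# Shading cost scales roughly with pixel count: render low, reconstruct high.
def pixels(width, height):
    return width * height

native_4k = pixels(3840, 2160)       # 8,294,400 pixels per frame
internal_1440p = pixels(2560, 1440)  # 3,686,400 pixels per frame
print(native_4k / internal_1440p)    # 2.25: a 2.25x cut in shaded pixels
```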

The PC will likely still run native resolutions without such shenanigans, but if we want those generational increases in visuals... something needs to give.

The Xbox Series X also has more texturing capability than the Playstation 5, afforded by its higher texture mapping unit count and memory bandwidth; where the Playstation 5 should pull ahead is in swapping higher-quality textures in and out more often and using them more extensively.

It's one of those... "Xbox has an advantage here, Playstation an advantage there". - What it means for games though needs to be demonstrated.

HollyGamer said:

Yup, I agree about dumping everything into RAM; that's how the current gen works: less streaming data and more mesh, render-target and buffer data. But it makes the GPU work harder and uses more RAM; this is the alternative way for the Xbox.

The Reyes method is the method where culling and geometry are calculated on a micropolygon basis; it's where the idea for geometry virtualization came from. I should give you the link to the blog by Brian Karis (lead of rendering for Unreal 5): http://graphicrants.blogspot.com/2009/01/virtual-geometry-images.html

http://hhoppe.com/proj/gim/

Occlusion culling isn't actually a new thing... AMD, for example, leverages the Hyper-Z technology that ATI pioneered with the Radeon 7500/R100 series of GPUs back in the year 2000, which culls pixels that are out of view or hidden behind other objects.

With Maxwell, nVidia introduced a tile-based binning rasterizer which did a similar thing via a tiled approach, saving a heap of processing and bandwidth... AMD then pushed the same idea with Vega's Draw Stream Binning Rasterizer (although it was ineffectual there) and refined it further with Navi.
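The binning idea behind those tiled rasterizers can be sketched in a few lines. This is a toy illustration of the concept only, not any vendor's implementation; the 32-pixel tile size is an arbitrary choice:

```python
# Bin triangle bounding boxes into screen tiles so that tiles no
# triangle touches can be skipped entirely.
TILE = 32  # pixels per tile edge, arbitrary

def bin_triangles(triangles):
    """triangles: list of 3-tuples of (x, y) integer pixel coordinates."""
    bins = {}
    for idx, tri in enumerate(triangles):
        xs = [v[0] for v in tri]
        ys = [v[1] for v in tri]
        # Conservatively bin by the triangle's axis-aligned bounding box.
        for tx in range(min(xs) // TILE, max(xs) // TILE + 1):
            for ty in range(min(ys) // TILE, max(ys) // TILE + 1):
                bins.setdefault((tx, ty), []).append(idx)
    return bins

bins = bin_triangles([((0, 0), (40, 0), (0, 40))])
print(sorted(bins))  # [(0, 0), (0, 1), (1, 0), (1, 1)]: only 4 tiles touched
```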

The UE5 approach does things a little differently, without relying on that hardware... I don't see any reason why the Xbox Series X, with its SSD, couldn't do the same as the Playstation 5 with a few extra caveats.

Guess the games will prove it all.

HollyGamer said:

The Wii U, Wii and GameCube skipping an HDD is absurd; it means they dump a small amount of data to RAM and need to re-read the disc again whenever they run a certain level or event. It means the Switch is a far better storage solution than I expected; the cartridge should be able to stream a limited amount of data if developers want to utilize it. But then we also need to consider the heavy hardware decompression workload.

Don't get me wrong, I am not saying Nintendo's approach was the "right" one; just that Nintendo has never had to develop games with mechanical hard drives in mind.

The Switch's cart is capped at around 100MB/s if memory serves; essentially, a Nintendo 64 cart offered twice the performance of a Switch cart. It's still a speedy transfer rate relative to the console's paltry memory bandwidth, though.
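Using the thread's own figures (a N64 cart at twice the Switch's ~100MB/s, against the N64's 4MB of base RAM), the earlier "fills RAM 30 or more times per second" claim checks out:

```python
# How many times per second can storage refill main memory?
def refills_per_second(storage_mb_s, ram_mb):
    return storage_mb_s / ram_mb

print(refills_per_second(200, 4))         # 50.0: N64 cart vs 4 MB RAM
print(refills_per_second(100, 4 * 1024))  # ~0.024: Switch cart vs 4 GB RAM
```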

The Tegra does have hardware decompression logic blocks as well, and it leverages relatively modern mobile technology... so things like tile-based binning rasterization, delta colour compression and so forth are all in, which is why the console seems able to do things beyond what the raw spec-sheet numbers would otherwise imply.

In saying that, I am genuinely excited for the Playstation 5 and Xbox Series X. I have both on pre-order... a first for me.







Pemalite said:
HollyGamer said:

Lumen is far away ahead of what we have seen this gen  compared pre baked. I can say it's equal to path tracing on what we have seen so far,  yet it's cheap to implement. If we use path tracing,  i bet it will just limited to few rays. Path tracing is good when it has many rays and good denoising. and it can look very noticeable on small detail.  

Plenty of games have used full global illumination this generation, don't think I need to go back and provide the evidence for this again and again and again, but happy to do so.
But Lumen is not "far ahead" of anything we have seen this generation, granted most games would do pre-calculated GI and "baked" it into a scene, but not every game.

Unreal Engine 5 is taking a Hybrid approach... It uses screen space data for smaller micro scale assets (Again, nothing new) in conjunction with mesh signed distance fields for medium scale assets (Also nothing new) and then uses voxel based global illumination for the larger based assets. (Aka. SVOGI, nothing new.)

What is new is leveraging all three techniques at the same time, mostly out of a need to save on processing time.

But if we had the additional horsepower, we could have sidestepped this entirely... And we might still sidestep this entirely once developers play around with the hardware and learn what works best.

The real interesting and groundbreaking part to me in the UE5 demo was Nanite... Those micro details in that geometry really made the screen-space lighting really pop.

HollyGamer said:

The GPU is more than enough for budget consoles, on console you have to think about other cost. The CPU is very powerful for this gen they need to balance with the CPU and the fast SSD. All the component of next gen console are above tier except for the RAM (but then again RAM is also expensives ) they can only manages give twice the amount of RAM compared to last gen  

I am well aware of consoles being a cost-sensitive device... A statement I often use on these forums.

But as a consumer, the cost to build isn't important, more for your dollar is always better from a consumers point of view.

I agree on the Ram, I don't think it's limitations will become readily apparent though until the later half of the 9th gen, the SSD's will go to some lengths to hide those limitations.
But I think you can agree that having such hard limitations means developers tend to get adventurous and do some impressive technical feats.

HollyGamer said:

The GPU in PS5/Xbox  are equal to high end GPU,  it is more than enough.  The more power GPU we put, the more RAM/bandwidth and fast SSD they need to provide. I don't know how much the ratio is , but both targeting different goal. But you are correct more RAM are preferable.

Not all GPU/CPU tasks are Ram/Memory Bandwidth sensitive. Allot are, but not all.
An SSD either-way is never going to be fast enough to directly feed a GPU the necessary bandwidth to handle modern gaming, it can help us make better use of limited Ram pools though.

Console manufacturers had to strike a balance... Sony's balanced leans slightly in favor of I/O and Microsoft's balance leans slightly in favor of computation, who made the right choices? I guess we just need the games to definitely decide on that.

HollyGamer said:

The GPU power will probably make the PS5 render at a lower resolution, but with high-quality assets and textures the PS5 will have high image quality and detail. So we will probably see lower resolution on the PS5 but with better IQ and better textures. For frame rates I think developers will stick to 30 fps or 60 fps, so they'll either lower the resolution or cut back on some assets and textures.

I think 60fps games will be more common in the 9th gen, afforded by the jump in CPU and GPU capabilities, but there are probably going to be a lot of AAA games that stick to 30fps due to developers prioritizing graphics over framerate, which is fine and entirely expected from consoles.

Resolution is probably going to be of less importance... This gen the PlayStation 4 Pro and Xbox One X were chasing resolution, expending a ton of resources in the process rather than significantly bolstering image quality relative to the base consoles... There is likely going to be a bigger emphasis on frame-reconstruction techniques, running games at a lower resolution and reconstructing the frame to a higher one.

The PC will likely still run at native resolutions without such shenanigans, but if we want those generational increases in visuals... something needs to give.
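To put numbers on why reconstruction is attractive, here's a quick pixel-count sketch (standard resolutions; the assumption that shading cost scales roughly with pixel count is a simplification):

```python
# Pixel counts: native 4K vs. rendering internally at 1440p and
# reconstructing up to 4K.
native_4k = 3840 * 2160       # 8,294,400 pixels
internal_1440p = 2560 * 1440  # 3,686,400 pixels

savings = 1 - internal_1440p / native_4k
print(f"Rendering internally at 1440p shades ~{savings:.0%} fewer pixels than native 4K")
# If shading cost scales roughly with pixel count, that budget can go to
# lighting, geometry detail, or frame rate instead.
```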

The Xbox Series X also has more texturing capability than the PlayStation 5, afforded by its higher Texture Mapping Unit count and memory bandwidth; where the PlayStation 5 should pull ahead is in swapping higher-quality textures in more often and using them more extensively.

It's one of those... "Xbox has an advantage here, PlayStation an advantage there" situations. What it means for games, though, still needs to be demonstrated.

HollyGamer said:

Yup, agreed on dumping everything into RAM; that's how the current gen works: less streaming data and more mesh, render-target and buffer data. But it makes the GPU work harder and uses more RAM; this is the alternative way for the Xbox.

The Reyes method is the method where culling and geometry are calculated on a micropolygon basis; it's where the idea for geometry virtualization came from. I should give you the link to the blog from Brian Karis (rendering lead for Unreal 5): http://graphicrants.blogspot.com/2009/01/virtual-geometry-images.html

http://hhoppe.com/proj/gim/

Geometry culling isn't actually a new thing... AMD, for example, still leverages the Hyper-Z technology that ATI pioneered with the Radeon 7500/R100-series GPUs back in the year 2000, which would cull geometry that was out of view or hidden behind another object.

With Maxwell, Nvidia introduced tiled rasterization, which did a similar thing via a tile-based approach that saved a heap of processing and bandwidth... AMD then pushed a similar technology, the Draw Stream Binning Rasterizer, with Vega (although it was largely ineffectual) and refined it further with Navi.

The UE5 approach does things a little differently, without relying on dedicated hardware... I don't see any reason why the Xbox Series X, with its SSD, couldn't do the same as the PlayStation 5, with a few extra caveats.
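The core culling idea is simple enough to sketch on the CPU. Here's a toy example that drops objects whose bounding sphere falls entirely behind a single camera-facing plane (real engines test all six frustum planes and also do per-tile depth/occlusion culling in hardware):

```python
# Toy visibility culling: keep only objects whose bounding sphere
# intersects the camera's forward half-space.
def cull(objects, cam_pos, cam_forward):
    visible = []
    for center, radius in objects:
        # Signed distance of the sphere's center along the view direction.
        to_obj = tuple(c - p for c, p in zip(center, cam_pos))
        dist = sum(a * b for a, b in zip(to_obj, cam_forward))
        if dist + radius > 0:  # any part of the sphere is in front
            visible.append((center, radius))
    return visible

scene = [((0, 0, 5), 1.0),     # fully in front
         ((0, 0, -5), 1.0),    # fully behind
         ((0, 0, -0.5), 1.0)]  # straddling the camera plane
print(cull(scene, (0, 0, 0), (0, 0, 1)))  # drops only the fully-behind sphere
```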

Guess the games will prove it all.

HollyGamer said:

The Wii U, Wii and GameCube skipping an HDD is even more absurd; it means they dump a small amount of data to RAM and need to read from the disc again whenever they run a certain level or event. It means the Switch has a far better storage solution than I expected; the cartridge should be able to stream a limited amount of data if developers want to utilize it. But then we also need to consider how heavy a task the decompression is for the hardware.

Don't get me wrong, I am not saying Nintendo's approach was the "right" one, just saying that Nintendo has never had to develop games with mechanical hard drives in mind.

The Switch's cart is capped at around 100MB/s if memory serves; essentially, a Nintendo 64 cart offers twice the performance of a Switch cart. It's still a speedy transfer rate relative to the console's paltry memory bandwidth, though.
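If that ~100MB/s figure is right (it's a commonly cited ballpark rather than an official spec), filling the Switch's RAM from the cart takes on the order of tens of seconds:

```python
# How long to fill the Switch's 4 GB of RAM from a ~100 MB/s game card.
ram_mb = 4 * 1024  # 4 GB expressed in MB
cart_mbps = 100    # assumed game-card read speed, MB/s

fill_seconds = ram_mb / cart_mbps
print(f"~{fill_seconds:.0f} s to fill RAM from the cart")
# ~41 s: fine for background streaming, far too slow for per-frame use.
```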

Tegra has hardware decompression logic blocks as well, and it leverages relatively modern mobile technology... so things like tile-based rasterization, delta color compression and so forth are all in, which is why the console seems to be able to do things beyond what the raw spec-sheet numbers would otherwise imply.

That said, I am genuinely excited for the PlayStation 5 and Xbox Series X. I have both on pre-order... a first for me.




Is texture mapping directly tied to the CU count, or do we have separate info on them?




Pemalite said:
HollyGamer said:

Lumen is far ahead of anything we have seen this gen compared to pre-baked lighting. I'd say it's equal to path tracing in what we've seen so far, yet it's cheap to implement. If we used path tracing, I bet it would be limited to just a few rays. Path tracing is good when it has many rays and good denoising, and noise can be very noticeable in small details.

Plenty of games have used full global illumination this generation; I don't think I need to go back and provide the evidence for this again and again, but I'm happy to do so.
But Lumen is not "far ahead" of anything we have seen this generation; granted, most games pre-calculate GI and "bake" it into a scene, but not every game.

Unreal Engine 5 is taking a hybrid approach... It uses screen-space data for smaller, micro-scale assets (again, nothing new) in conjunction with mesh signed distance fields for medium-scale assets (also nothing new), and then uses voxel-based global illumination for the largest assets (aka SVOGI; nothing new).

What is new is leveraging all three techniques at the same time, mostly out of a need to save on processing time.





The problem with GI is that it's less impressive on low-quality assets; the better the assets and textures, the better GI looks. It's also cheap to implement and far better than normal light mapping or ray tracing with a low ray count. Of course it can't be compared to full ray tracing.

Rather than relying on full path tracing, Lumen, from what I've seen, is good enough for high-quality textures and assets,

and we can make the GPU do other work.

If I could choose, I'd choose more power than what we have, but we're living in reality here; the important factor in building a system is balance, and that balance is measured against what the system is targeting. Cerny explained in his presentation how his philosophy is to balance revolution against evolution.

Software rendering like virtualized geometry rendering is data-dependent: it needs a lot of data streaming, and data streaming needs fast I/O. I'm not saying the Xbox SX can't do it, but the PS5 will do far better in this respect.

Tegra can do hardware decompression, I know, but it still makes the GPU do the work. The PS5/Xbox SX have their own decompression hardware outside the CPU and GPU.



Ok.



DonFerrari said:
Is texture mapping directly tied to the CU count, or do we have separate info on them?

Texture Mapping Units are separate, but their count is usually tied to the CU count. Basically, it's one texture mapping unit for every 16 shader pipelines,
or 4 TMUs per CU. If AMD wanted to, it could deviate from that ratio.
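Using the public CU counts and boost clocks, that ratio works out roughly like this (a back-of-the-envelope sketch; real texel throughput depends on texture format and filtering):

```python
# TMU count and peak bilinear texel fill-rate from CU count and clock.
# RDNA: 64 shader ALUs per CU -> 64 / 16 = 4 TMUs per CU.
def texel_rate(cus, clock_ghz, tmus_per_cu=4):
    tmus = cus * tmus_per_cu
    return tmus, tmus * clock_ghz  # GTexels/s, ideal peak

for name, cus, clock in [("Series X", 52, 1.825), ("PS5", 36, 2.23)]:
    tmus, rate = texel_rate(cus, clock)
    print(f"{name}: {tmus} TMUs, ~{rate:.0f} GTexels/s peak")
```

The Series X's wider GPU wins on raw texel rate even at its lower clock, which is the texturing advantage mentioned earlier.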


Global Illumination is Ray Tracing.

Tegra's decompression is in a fixed-function logic block; it's not making the GPU do extra work.

It would be like not using ray-tracing cores when you want to do a ray-tracing operation. The task still needs to be done; don't let silicon go unused.
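The "GI is ray tracing" point can be illustrated with a toy Monte Carlo estimator: sample directions over the hemisphere above a surface point and test them against an occluder. This is a deliberately minimal sketch of the principle, not any engine's actual implementation:

```python
import math
import random

# Toy sky-visibility estimate at a surface point: fire random rays over
# the upper hemisphere and count how many escape past a spherical
# occluder. This Monte Carlo core is what both "real" path tracing and
# voxel/SDF approximations like Lumen's are estimating.
def occluded(origin, direction, sphere_center, sphere_radius):
    # Ray-sphere intersection (geometric solution, unit direction).
    oc = [o - c for o, c in zip(origin, sphere_center)]
    b = sum(d * e for d, e in zip(direction, oc))
    c = sum(e * e for e in oc) - sphere_radius ** 2
    disc = b * b - c
    return disc >= 0 and (-b - math.sqrt(disc)) > 0

def sky_visibility(point, occluder, radius, samples=2000):
    hits = 0
    for _ in range(samples):
        # Uniform direction on the upper hemisphere (z >= 0).
        z = random.random()
        phi = random.random() * 2 * math.pi
        r = math.sqrt(1 - z * z)
        d = (r * math.cos(phi), r * math.sin(phi), z)
        if occluded(point, d, occluder, radius):
            hits += 1
    return 1 - hits / samples

# A small sphere hovering above the point blocks a sliver of the sky,
# so the estimate comes out a bit above 0.9.
print(f"visibility ≈ {sky_visibility((0, 0, 0), (0, 0, 3), 1.0):.2f}")
```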






