
Forums - Sony Discussion - PS5 Coming at the End of 2020 According to Analyst: High-Spec Hardware for Under $500

 

Poll: Price, SKUs, specs?

- Only Base Model, $399, 9-10TF GPU, 16GB RAM: 24 votes (30.00%)
- Only Base Model, $449, 10-12TF GPU, 16GB RAM: 13 votes (16.25%)
- Only Base Model, $499, 12-14TF GPU, 24GB RAM: 21 votes (26.25%)
- Base Model $399 and Premium $499 specs: 10 votes (12.50%)
- Base Model $399 / Premium $549, >14TF, 24GB RAM: 5 votes (6.25%)
- Base Model $449 / Premium $599, the absolute elite: 7 votes (8.75%)

Total: 80 votes
Nate4Drake said:
DonFerrari said:

You also forgot RAM: both size and bandwidth are necessary to keep the two fed.

There is no reason for a system that has a GPU 4x stronger than the other's to keep all the other components the same. If you want both architectures balanced, the CPU, RAM size, and bandwidth will scale to match. So there is no reason to say a system with a 4x stronger GPU isn't a system "overall about 4x as strong".

On the bottleneck: the ideal is that the whole system hits its limits at the same time, so no component is in excess and none is lacking. Theoretically that is what Sony did with the PS4, so when they just doubled the GPU with minimal improvements to the CPU and RAM, they couldn't really make full use of the GPU, with most of the extra power going to higher resolution or minimal performance gains. They couldn't increase the RAM and CPU much because it also wouldn't have helped much (besides cost and the limits of the architecture and full compatibility).

Yep. I didn't talk about size and memory bandwidth again because I took them for granted; they were already mentioned in the previous post. Totally agreed.

The X1X is a machine that plays games at near-4K (or at least upscaled to it on screen) that were designed around 720p targets on the X1/X1S. It has about 4x the GPU (while most of the other components got smaller increments), but as a machine it is closer to 4x stronger than just a small percentage more.

To think that the X1 doesn't hold it back is silly: if not for the need for compatibility, the CPU could have been much better for a few more bucks, and its games could have been plenty better as well.



duduspace11 "Well, since we are estimating costs, Pokemon Red/Blue did cost Nintendo about $50m to make back in 1996"

http://gamrconnect.vgchartz.com/post.php?id=8808363

Mr Puggsly: "Hehe, I said good profit. You said big profit. Frankly, not losing money is what I meant by good. Don't get hung up on semantics"

http://gamrconnect.vgchartz.com/post.php?id=9008994

Azzanation: "PS5 wouldn't sold out at launch without scalpers."

Nate4Drake said:
Biggerboat1 said:

I'm basing my thoughts on the rumoured leaks of Lockhart & Anaconda sharing the same CPU, having 12GB vs 16GB of ram & of course, the 4 vs 12 TFLOP GPUs. So I'm not sure why you're alluding to a weaker CPU - have there been new leaks?

I'm admittedly a novice in tech specs, but isn't resolution pretty much a GPU-related matter? I.e. the geometry, AI, etc. are the same amount of work for the CPU regardless of whether the GPU is rendering at 1080p or 4K? Frame rate is different & affects both.

The idea of balancing your CPU and GPU concerns bottlenecking: when one component prevents another from performing to its full potential. For example, if you pair an incredibly high-powered graphics card with a mediocre CPU, the graphics card can finish its work faster than the CPU can accept that work and issue the GPU more. At that point, even installing a better graphics card won't improve your computer's performance, because the CPU is already at the limit of what it can do to feed a graphics card. The same applies to an incredibly high-powered CPU that issues tasks to the GPU faster than the graphics card can handle them.
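The balance argument above boils down to "frame time is set by the slower of the two stages". A toy model makes the point concrete (all millisecond figures are invented for illustration, not real hardware numbers):

```python
# Toy model of a CPU/GPU bottleneck: each frame, the CPU prepares work,
# then the GPU renders it. With the stages pipelined, the slower one
# determines the frame time, and thus the framerate.

def frame_time_ms(cpu_ms: float, gpu_ms: float) -> float:
    """Throughput is limited by whichever stage takes longer per frame."""
    return max(cpu_ms, gpu_ms)

def fps(cpu_ms: float, gpu_ms: float) -> float:
    return 1000.0 / frame_time_ms(cpu_ms, gpu_ms)

# A mediocre CPU (10 ms of logic + draw calls per frame) with a fast GPU (5 ms):
print(fps(cpu_ms=10.0, gpu_ms=5.0))   # 100.0 fps, CPU-limited
# Upgrading to an even faster GPU changes nothing while the CPU is the bottleneck:
print(fps(cpu_ms=10.0, gpu_ms=2.5))   # still 100.0 fps
```

The same function shows the reverse case too: with `cpu_ms=5.0, gpu_ms=10.0` the GPU is the limit and a faster CPU wouldn't help.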

Isn't that essentially what the Switch does - it down-clocks the GPU in handheld mode and reduces resolution, but otherwise everything else is the same?

I accept I could be wrong here though and will happily stand to be corrected.

Bottlenecking refers to a limitation of some sort, especially one caused by hardware. When you're playing a game, there are two common bottlenecks (or limitations) on your framerate: the CPU or the GPU. With the GPU so commonly called the most important component for gamers, of course you don't want it held back, right?

Rendering and displaying an image on your screen takes many steps, and the GPU does much of the work. But first it needs to be told what to do, and it needs the required data to do its job in the first place.

At the CPU, API calls are executed. Control passes to the OS and then to the GPU drivers, which translate the API calls into commands. These commands are sent to the GPU, where they sit in a command buffer (of which modern graphics APIs may have several) until they are read and executed (in other words, carried out). Even before this can happen, there's more work: the CPU also has to run the logic that determines what needs to be rendered on screen, based on user input and internal rules. On top of sending the GPU commands with instructions, data, and state changes, the CPU also handles things like user input, AI, physics, and the environment in games. Meanwhile, the GPU is tasked with, as GamersNexus puts it concisely, "drawing the triangles and geometry, textures, rendering lighting, post-processing effects, and dispatching the packaged frame to the display."

Now, here's where bottlenecking comes in. If the CPU isn't sending commands as fast as the GPU can pull them out of the command buffer and execute them, the buffer will spend time empty with the GPU waiting for input, and you're considered CPU limited. If the GPU isn't executing the commands fast enough, you're GPU limited, and the CPU will spend time waiting on the GPU.[source] When you're CPU-limited (also called CPU-bound or CPU-bottlenecked), GPU utilization (time spent not idle) decreases as the bottleneck becomes more severe; when you're GPU-limited (AKA GPU-bound or GPU-bottlenecked), CPU utilization goes down to an extent as the bottleneck becomes more severe.

In an ideal world there would be no bottlenecks. That would require the CPU, PCIe, and every stage of the GPU's pipeline to be equally loaded; or every component would have to be infinitely fast. But this is not an ideal world: something, somewhere, always holds back performance. And that doesn't just go for the CPU and GPU, either.
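The command-buffer dynamic described above can be sketched as a tiny simulation (every timing here is invented for illustration): the CPU finishes submitting one frame's commands every `cpu_ms`, the GPU takes `gpu_ms` to execute a frame's worth, and we measure how long the GPU sits idle waiting on an empty buffer.

```python
# CPU produces frames of commands; GPU consumes them. When the buffer is
# empty, the GPU idles; when the GPU lags, the CPU's output simply queues up.

def gpu_idle_time(cpu_ms: float, gpu_ms: float, frames: int = 100) -> float:
    cpu_ready = 0.0   # when the CPU has finished submitting each frame
    gpu_free = 0.0    # when the GPU finishes executing its current frame
    idle = 0.0
    for _ in range(frames):
        cpu_ready += cpu_ms
        start = max(cpu_ready, gpu_free)     # GPU can't start before commands arrive
        idle += max(0.0, start - gpu_free)   # time the buffer sat empty
        gpu_free = start + gpu_ms
    return idle

# CPU-limited: commands arrive every 10 ms but take only 4 ms to execute,
# so the GPU spends most of its time waiting (its utilization drops):
print(gpu_idle_time(cpu_ms=10.0, gpu_ms=4.0))
# GPU-limited: commands arrive every 4 ms but take 10 ms to execute; after
# the first frame the GPU never waits, and the CPU ends up waiting instead:
print(gpu_idle_time(cpu_ms=4.0, gpu_ms=10.0))
```

This is the utilization effect the paragraph describes: the idle total is large in the CPU-limited case and nearly zero in the GPU-limited one.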

https://pcpartpicker.com/forums/topic/214851-on-cpugpu-bottlenecking-in-games

MS knows one of the mistakes they made with the XB1 was falling below the PS4 on performance, so it seems to me imperative that they not hobble their top SKU by creating a poorly thought-out entry SKU that would meaningfully limit the technical scope of games.

I guess my question would be: are you saying that creating 1080p and 4K SKUs is technically impossible without hobbling the latter SKU? I think that, executed badly, it could, but I don't see why it's technically impossible... or even necessarily that difficult, as long as they don't cheap out on the other components (which isn't the case, based on these rumoured specs).

It's technically possible to squeeze both SKUs, but it would require much more extra work, and devs shouldn't have to develop and conceive the game with the lowest hardware in mind. Theoretically, in some cases, depending on the developers' vision for a particular game, the result could be a more advanced game in areas such as physics, AI, animations, etc., apart from the better graphics and performance of the Elite SKU. Is it feasible? Is it fair to the majority of gamers, who will buy the cheapest SKU?

Now, I'm not a tech guru either, and this is just according to my knowledge, but I was always wondering how scalability can work in areas such as physics, animations, collision, interactions with the environment, AI, and gameplay mechanics. How much more complex is "scalability" in those areas? Can it be taken into account by developers? Is it feasible, or too complex and costly for the majority of developers? This also depends on how devs decide to allocate the extra power of the more powerful CPU.

Now, I'm assuming "Anaconda" will be a balanced piece of hardware, and if its GPU is 3x faster than 'Lockhart's', the CPU should be much faster as well, and the same for memory bandwidth, unless you want another Xbox One X bottlenecked by a weak CPU ;) ...or a 'Lockhart' with an extremely powerful CPU that isn't needed when coupled with a weak GPU.

My only problem with what you're saying is that you are assuming a weaker CPU in Lockhart - which is contrary to the leak. If it was weaker hardware across the board then, yes, I agree that it would hamper the baseline development. But that's simply not what the supposed leak is suggesting. Also, the things you mention (physics, animations, system collision, interactions with the environment, AI and game-play mechanics) - seem to me to be mostly CPU related tasks - so they could be mirrored across both skus - so no need to be scaled back.

Also, this notion that all components being 'balanced' doesn't make sense to me in this instance, because the 2 skus are targeting different resolutions... resolution is GPU intensive, not CPU...

So we're coming at this from 2 different assumptions - I'm going with the rumoured leaked specs - and you're assuming your own set of components...



Nate4Drake said:

So let's talk about developing games for several SKUs: two from MS and one from Sony. This doesn't help at all, and the complexity of scaling will increase if you want to push every single SKU. The reality is that most 3rd-party developers will never spend much in resources and money to do it; the main differences will be in resolution/frame rate, draw distance, and effects, and the weakest SKU will hold down all the others. 1st-party developers, on the other hand, can do more, but with one SKU having a GPU 3x more powerful, together with a faster CPU and RAM than the other, they must still develop with the lowest common denominator in mind and give gamers the same core experience; this is a limiting factor. Sony, on the other hand (if it's true they will release only one SKU), can make its exclusive games shine, taking full advantage of all the available resources however they want, without worrying about the weakest, less capable hardware.

So, what will consumers (those not loyal or tied to any company) choose on day one? A very cheap SKU from MS that is massively underpowered for a next-gen system, compared to a PS5 that will probably have a very competitive price? Or Anaconda, very expensive and more powerful than the PS5? Or the PS5, great hardware at a competitive price? It depends. I'm not sure which is the best strategy.

------------------------------------------------------------------------------------------------------

I think you are exaggerating some things and understating others.

Firstly, scalability is not a problem, it's a solution: a solution necessitated by the vastly different specs in the PC market. The only way a dev can make a game that works on one hardware spec and doesn't totally crash on others is via scalability. It's been in the PC industry for over a decade and, believe it or not, it made its way into consoles last gen. It's not complicating things any more than they already have been. This is all just normal now.

As for what consumers will do? This one is even simpler. Consumers will do everything. There is a market for people willing to spend $500 on the most powerful console in the world. Then there is one for those who feel $400 gets them something that's good enough. And there are those who don't care about 4K and have only $300 to spend. All of them are doing exactly what they want, buoyed by every supporting reason in between.
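"Scalability" as described here is mostly a handful of quality knobs dialed per hardware tier, while the game's core logic stays identical. A minimal sketch (all preset names and values are invented for illustration):

```python
# One game, one codebase: each hardware tier gets its own settings preset,
# but the simulation (AI, physics, rules) never reads these values; only
# the renderer does. That is why the same game can ship on every tier.

BASE = {
    "resolution": (1920, 1080),
    "shadow_quality": "medium",
    "draw_distance_m": 600,
    "target_fps": 30,
}

def scaled_preset(base: dict, **overrides) -> dict:
    """Derive a tier's settings from the base spec without touching gameplay."""
    preset = dict(base)
    preset.update(overrides)
    return preset

PREMIUM = scaled_preset(
    BASE,
    resolution=(3840, 2160),
    shadow_quality="high",
    draw_distance_m=1000,
)

print(PREMIUM["resolution"])   # (3840, 2160)
print(PREMIUM["target_fps"])   # 30 - unchanged, same core experience
```

Note the scale-up direction: the premium preset is derived from the base one, which matches the later point in the thread that games are built for the base system and scaled up, not the reverse.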

Oneeee-Chan!!! said:
Why do people think that the PS5 will be only one SKU?

That's the million-dollar question right there.

Circa 2005/2006, multiple SKUs would have meant different HDD sizes.

2020? It could very well mean 1080p and 4K. But somehow this is not something that sits well with a lot of people.



Biggerboat1 said:
Nate4Drake said:

It's technically possible to squeeze both SKUs, but it would require much more extra work, and devs shouldn't have to develop and conceive the game with the lowest hardware in mind. Theoretically, in some cases, depending on the developers' vision for a particular game, the result could be a more advanced game in areas such as physics, AI, animations, etc., apart from the better graphics and performance of the Elite SKU. Is it feasible? Is it fair to the majority of gamers, who will buy the cheapest SKU?

Now, I'm not a tech guru either, and this is just according to my knowledge, but I was always wondering how scalability can work in areas such as physics, animations, collision, interactions with the environment, AI, and gameplay mechanics. How much more complex is "scalability" in those areas? Can it be taken into account by developers? Is it feasible, or too complex and costly for the majority of developers? This also depends on how devs decide to allocate the extra power of the more powerful CPU.

Now, I'm assuming "Anaconda" will be a balanced piece of hardware, and if its GPU is 3x faster than 'Lockhart's', the CPU should be much faster as well, and the same for memory bandwidth, unless you want another Xbox One X bottlenecked by a weak CPU ;) ...or a 'Lockhart' with an extremely powerful CPU that isn't needed when coupled with a weak GPU.

My only problem with what you're saying is that you are assuming a weaker CPU in Lockhart - which is contrary to the leak. If it was weaker hardware across the board then, yes, I agree that it would hamper the baseline development. But that's simply not what the supposed leak is suggesting. Also, the things you mention (physics, animations, system collision, interactions with the environment, AI and game-play mechanics) - seem to me to be mostly CPU related tasks - so they could be mirrored across both skus - so no need to be scaled back.

Also, this notion that all components being 'balanced' doesn't make sense to me in this instance, because the 2 skus are targeting different resolutions... resolution is GPU intensive, not CPU...

So we're coming at this from 2 different assumptions - I'm going with the rumoured leaked specs - and you're assuming your own set of components...

CPU: custom 8 cores / 16 threads (Zen 2) for both; one could run at 2.4 GHz, the other at 3.4 GHz (just as an example).

The 2.4 GHz CPU could already be well above what is needed for the 4+ TF GPU of Lockhart, while the 3.4 GHz CPU would be enough for the Anaconda architecture. The CPU this time seems to be a massive leap over the previous gen, so it's more than enough to ship the same CPU architecture with a higher clock on "Anaconda". You would also need a bit more RAM and more memory bandwidth.

Depending on the difference in CPU clock speed between the two, "Anaconda" might have extra room for even better physics/AI/animations/etc., apart from the obvious higher res/frame rate/graphics/IQ, if devs want. Too many ifs, though :D

 



”Every great dream begins with a dreamer. Always remember, you have within you the strength, the patience, and the passion to reach for the stars to change the world.”

Harriet Tubman.

Intrinsic said:
Nate4Drake said:

So let's talk about developing games for several SKUs: two from MS and one from Sony. This doesn't help at all, and the complexity of scaling will increase if you want to push every single SKU. The reality is that most 3rd-party developers will never spend much in resources and money to do it; the main differences will be in resolution/frame rate, draw distance, and effects, and the weakest SKU will hold down all the others. 1st-party developers, on the other hand, can do more, but with one SKU having a GPU 3x more powerful, together with a faster CPU and RAM than the other, they must still develop with the lowest common denominator in mind and give gamers the same core experience; this is a limiting factor. Sony, on the other hand (if it's true they will release only one SKU), can make its exclusive games shine, taking full advantage of all the available resources however they want, without worrying about the weakest, less capable hardware.

So, what will consumers (those not loyal or tied to any company) choose on day one? A very cheap SKU from MS that is massively underpowered for a next-gen system, compared to a PS5 that will probably have a very competitive price? Or Anaconda, very expensive and more powerful than the PS5? Or the PS5, great hardware at a competitive price? It depends. I'm not sure which is the best strategy.

------------------------------------------------------------------------------------------------------

I think you are exaggerating some things and understating others.

Firstly, scalability is not a problem, it's a solution: a solution necessitated by the vastly different specs in the PC market. The only way a dev can make a game that works on one hardware spec and doesn't totally crash on others is via scalability. It's been in the PC industry for over a decade and, believe it or not, it made its way into consoles last gen. It's not complicating things any more than they already have been. This is all just normal now.

As for what consumers will do? This one is even simpler. Consumers will do everything. There is a market for people willing to spend $500 on the most powerful console in the world. Then there is one for those who feel $400 gets them something that's good enough. And there are those who don't care about 4K and have only $300 to spend. All of them are doing exactly what they want, buoyed by every supporting reason in between.

Oneeee-Chan!!! said:
Why do people think that the PS5 will be only one SKU?

That's the million-dollar question right there.

Circa 2005/2006, multiple SKUs would have meant different HDD sizes.

2020? It could very well mean 1080p and 4K. But somehow this is not something that sits well with a lot of people.

I guess you didn't understand what was being pointed out as the problem with scalability.

It isn't a magical thing that solves everything. Also, you would hardly make the game for the strongest platform and scale down (accepting whatever result on the weaker system, even a poor one). More likely, you make the game work as intended on the base system and then scale up to the stronger one; that is why we say the base system holds the others down.

If scalability were magic, infinite, or made to work from strongest to weakest instead of the opposite, porting AAA games to the Switch would have been a much easier task, and a game like RDR2 wouldn't have skipped it even with 2 years to be ported over.

There is a point where you can't cut down efficiently anymore, so you have to axe parts of the game and start to lose out, which is usually not desirable. It also takes more and more time as the gap between systems grows.




Nate4Drake said:
Biggerboat1 said:

My only problem with what you're saying is that you are assuming a weaker CPU in Lockhart - which is contrary to the leak. If it was weaker hardware across the board then, yes, I agree that it would hamper the baseline development. But that's simply not what the supposed leak is suggesting. Also, the things you mention (physics, animations, system collision, interactions with the environment, AI and game-play mechanics) - seem to me to be mostly CPU related tasks - so they could be mirrored across both skus - so no need to be scaled back.

Also, this notion that all components being 'balanced' doesn't make sense to me in this instance, because the 2 skus are targeting different resolutions... resolution is GPU intensive, not CPU...

So we're coming at this from 2 different assumptions - I'm going with the rumoured leaked specs - and you're assuming your own set of components...

CPU: custom 8 cores / 16 threads (Zen 2) for both; one could run at 2.4 GHz, the other at 3.4 GHz (just as an example).

The 2.4 GHz CPU could already be well above what is needed for the 4+ TF GPU of Lockhart, while the 3.4 GHz CPU would be enough for the Anaconda architecture. The CPU this time seems to be a massive leap over the previous gen, so it's more than enough to ship the same CPU architecture with a higher clock on "Anaconda". You would also need a bit more RAM and more memory bandwidth.

Depending on the difference in CPU clock speed between the two, "Anaconda" might have extra room for even better physics/AI/animations/etc., apart from the obvious higher res/frame rate/graphics/IQ, if devs want. Too many ifs, though :D

 

When you say "The 2.4 GHz CPU could already be well above what is needed for the 4+ TF GPU of Lockhart, while the 3.4 GHz CPU would be enough for Anaconda", I don't think you're taking into account the different target resolutions...

Here's a video explaining the relationship between resolution and CPU use. The conclusion: CPU usage stays the same regardless of 720p/1080p/4K, so the premise of your statement is flawed; there's no reason to downgrade the CPU in Lockhart just because it outputs at a lower resolution... If the two SKUs were targeting different frame rates then you'd have a point, but for resolution, no...



Biggerboat1 said:
Nate4Drake said:

CPU: custom 8 cores / 16 threads (Zen 2) for both; one could run at 2.4 GHz, the other at 3.4 GHz (just as an example).

The 2.4 GHz CPU could already be well above what is needed for the 4+ TF GPU of Lockhart, while the 3.4 GHz CPU would be enough for the Anaconda architecture. The CPU this time seems to be a massive leap over the previous gen, so it's more than enough to ship the same CPU architecture with a higher clock on "Anaconda". You would also need a bit more RAM and more memory bandwidth.

Depending on the difference in CPU clock speed between the two, "Anaconda" might have extra room for even better physics/AI/animations/etc., apart from the obvious higher res/frame rate/graphics/IQ, if devs want. Too many ifs, though :D

 

When you say "The 2.4 GHz CPU could already be well above what is needed for the 4+ TF GPU of Lockhart, while the 3.4 GHz CPU would be enough for Anaconda", I don't think you're taking into account the different target resolutions...

Here's a video explaining the relationship between resolution and CPU use. The conclusion: CPU usage stays the same regardless of 720p/1080p/4K, so the premise of your statement is flawed; there's no reason to downgrade the CPU in Lockhart just because it outputs at a lower resolution... If the two SKUs were targeting different frame rates then you'd have a point, but for resolution, no...

You are perfectly right on this, but that was not my point. With a faster CPU and GPU you can achieve a higher frame rate at a higher resolution, together with more advanced physics, animations, collision, and better AI. The CPU is mostly stressed by a higher frame rate and all the things we've already mentioned, while the GPU is stressed by higher resolution, heavier geometry (the CPU too), tessellation, effects, etc. That's why I said 2.4 GHz and 3.4 GHz for Anaconda: to get not only higher resolution and better graphics from the more powerful GPU, but also a higher frame rate and better physics, AI, and all the "non-graphics" calculations.

I saw that video; yep, it explains very well what you said. Really cool video, and well done!
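The division of labour being argued here (CPU cost tied to frame rate and simulation, GPU cost tied to resolution) can be put into a rough frame-cost model; every number below is invented for illustration:

```python
# Model: CPU work per frame is resolution-independent; GPU work per frame
# scales with the pixel count being rendered. Framerate is set by the
# slower of the two, so only the GPU side reacts to a resolution change.

PIXELS_1080P = 1920 * 1080
PIXELS_4K = 3840 * 2160   # exactly 4x the pixels of 1080p

def gpu_ms(pixels: int, ms_per_mpixel: float = 2.0) -> float:
    """GPU time per frame, proportional to megapixels rendered."""
    return pixels / 1e6 * ms_per_mpixel

def fps(cpu_ms: float, pixels: int) -> float:
    return 1000.0 / max(cpu_ms, gpu_ms(pixels))

# The same 8 ms of CPU work per frame at both resolutions:
print(fps(8.0, PIXELS_1080P))  # GPU needs ~4.1 ms -> CPU-limited, 125.0 fps
print(fps(8.0, PIXELS_4K))     # GPU needs ~16.6 ms -> GPU-limited, ~60 fps
```

Under this model the 1080p box is CPU-limited and the 4K box is GPU-limited with the *same* CPU, which is why giving the lower-resolution SKU a slower CPU would cut its frame rate rather than save unneeded capacity.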

Last edited by Nate4Drake - on 27 February 2019


Nate4Drake said:
Biggerboat1 said:

When you say "The 2.4 GHz CPU could already be well above what is needed for the 4+ TF GPU of Lockhart, while the 3.4 GHz CPU would be enough for Anaconda", I don't think you're taking into account the different target resolutions...

Here's a video explaining the relationship between resolution and CPU use. The conclusion: CPU usage stays the same regardless of 720p/1080p/4K, so the premise of your statement is flawed; there's no reason to downgrade the CPU in Lockhart just because it outputs at a lower resolution... If the two SKUs were targeting different frame rates then you'd have a point, but for resolution, no...

You are perfectly right on this, but that was not my point. With a faster CPU and GPU you can achieve a higher frame rate at a higher resolution, together with more advanced physics, animations, collision, and better AI. The CPU is mostly stressed by a higher frame rate and all the things we've already mentioned, while the GPU is stressed by higher resolution, heavier geometry (the CPU too), tessellation, effects, etc. That's why I said 2.4 GHz and 3.4 GHz for Anaconda: to get not only higher resolution and better graphics from the more powerful GPU, but also a higher frame rate and better physics, AI, and all the "non-graphics" calculations.

I saw that video; yep, it explains very well what you said. Really cool video, and well done!

I don't see why they'd target different frame rates, as that would cause major issues & needlessly complicate things...

It means you absolutely couldn't have any Anaconda game run at 30fps, as that would result in a slideshow on Lockhart - and sub-30fps doesn't exactly scream next-gen...

If they use the same CPU across both, different GPUs as leaked, a bit less RAM for Lockhart due to smaller 1080p textures, and potentially even a smaller hard drive for Lockhart (though that is contrary to the leak), again due to lighter assets, then you have two boxes that are equally equipped to deliver parity across the two different resolutions. The money saved through a weaker GPU, less RAM, and potentially a smaller hard drive should be enough to offer the lower SKU at a good discount. There's no need to hobble the CPU - that would just cause headaches for developers.



Biggerboat1 said:

Here's a video explaining the relationship between resolution and CPU use. The conclusion: CPU usage stays the same regardless of 720p/1080p/4K, so the premise of your statement is flawed; there's no reason to downgrade the CPU in Lockhart just because it outputs at a lower resolution... If the two SKUs were targeting different frame rates then you'd have a point, but for resolution, no...

Not always though.

For example, something that happened very often during the 7th gen and on rare occasions in the 8th gen: the CPU was tasked with performing a post-processing filter to clean up the image, e.g. morphological anti-aliasing. The higher the resolution, the more work the CPU needs to do.

When it comes to draw calls, that is often the job of the CPU; the higher-performing console will likely have things like longer draw distances and so on, so that is an increase for the CPU too.

The Xbox One X manages to offload some of that to the command processor on the GPU side of the equation... So it will actually be interesting if Microsoft continues down that same path for next-gen.
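The quick arithmetic behind the anti-aliasing point: a per-pixel CPU pass such as MLAA scales with the frame's pixel count, so its cost is anything but resolution-independent (the cost model here is a deliberate simplification):

```python
# A CPU-side post-process that touches every pixel costs time proportional
# to the resolution, unlike game logic, which is resolution-independent.

def pixels(width: int, height: int) -> int:
    return width * height

# 4K has 9x the pixels of 720p, so a per-pixel CPU filter costs roughly
# nine times as much at 4K as it did at 720p:
ratio = pixels(3840, 2160) / pixels(1280, 720)
print(ratio)  # 9.0
```

This is the caveat to the "CPU usage is the same at any resolution" rule: it holds for simulation work, but not for any per-pixel work the CPU takes on.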

Biggerboat1 said:

If they use the same CPU across both, different GPUs as leaked, a bit less RAM for Lockhart due to smaller 1080 textures and potentially even a smaller hard drive for Lockhart (though that is contrary to leak)

Hopefully they don't use 1080P textures... (Yes I know what you meant, but even 7th gen games used textures larger than that. :P)



--::{PC Gaming Master Race}::--

Pemalite said:
Biggerboat1 said:

Here's a video explaining the relationship between resolution and CPU use. The conclusion: CPU usage stays the same regardless of 720p/1080p/4K, so the premise of your statement is flawed; there's no reason to downgrade the CPU in Lockhart just because it outputs at a lower resolution... If the two SKUs were targeting different frame rates then you'd have a point, but for resolution, no...

Not always though.

For example, something that happened very often during the 7th gen and on rare occasions in the 8th gen: the CPU was tasked with performing a post-processing filter to clean up the image, e.g. morphological anti-aliasing. The higher the resolution, the more work the CPU needs to do.

When it comes to draw calls, that is often the job of the CPU; the higher-performing console will likely have things like longer draw distances and so on, so that is an increase for the CPU too.

The Xbox One X manages to offload some of that to the command processor on the GPU side of the equation... So it will actually be interesting if Microsoft continues down that same path for next-gen.

Biggerboat1 said:

If they use the same CPU across both, different GPUs as leaked, a bit less RAM for Lockhart due to smaller 1080 textures and potentially even a smaller hard drive for Lockhart (though that is contrary to leak)

Hopefully they don't use 1080P textures... (Yes I know what you meant, but even 7th gen games used textures larger than that. :P)

Yeah, now you're going way over my head :)

What do you think of the overall proposition, though? Could MS feasibly produce 1080p and 4K SKUs that wouldn't step on each other's toes in regards to development, and wouldn't significantly hamper devs in making the most of the higher SKU, if they made that one of the priorities in their planning?

You could probably save us a few dozen more pages of non-experts (I'm obviously including myself here) going back and forth!