
No Goblin co-founder: Saying PS4 is “More Powerful” Than Xbox One Or Vice Versa Is Really Misrepresenting How Games Work

I don't know. I'm just a person that plays games. Not gonna pretend to be some kind of expert like a lot of people. I'll just keep buying whichever version of any game gives me the best experience.



walsufnir said:
JustBeingReal said:
walsufnir said:
JustBeingReal said:

I think I'll take Ubisoft's own GPU physics simulation over this guy's flawed analysis. GPU simulation has been capable of handling AI since 2009 on AMD GPUs, and the benchmarked difference in GPU compute between PS4 and Xbox One shows the PS4 to be almost 2X more capable in that area, and that's before the recent SDK launch.

GPU compute absolutely offsets, and far exceeds, the marginally faster CPU performance the XB1 apparently has, even though some developers have said that PS4's CPU actually performed faster than XB1's.

Sony have unveiled some details about their SDK 2.0 for PS4, which includes a low-level API for GPU physics simulation. The fact is that PS4 is more capable not just in graphics processing; everything else should also run significantly faster on PS4.

Perhaps it's Sony's fault, because it's only recently that they've started rolling out dev kit updates that actually take advantage of the benefits GPU compute can offer, or it's just not that well optimized yet.

As it stands, many current engines used for game development still seem to focus their physics and AI processing on the CPU rather than the GPU, but moving that work to the GPU would benefit all platforms, so it makes sense that in the not-too-distant future even 3rd-party developers will start updating their engines to use GPGPU code.
With this being standard in Sony's latest SDK, games announced over the next few months should start taking advantage of these features on PS4.


But GPGPU uses the same hardware you use for rendering. The simulation that showed the PS4's advantage was using the GPUs only for GPGPU purposes, so you won't see that full advantage in any game. Yes, PS4 has dedicated hardware for an async approach (rendering and GPGPU at the same time), but it doesn't have additional ALUs for that. Most of the hardware is still shared, so it will be interesting to see how this can be used, but I wouldn't expect wonders from it.

 

Graphics rendering never maxes out 100% of the GPU's time. That downtime is what compute queues are designed to take advantage of; it's all about making sure processing time isn't wasted. What I'm talking about is using the hardware as efficiently as possible, without letting resources that would otherwise go unused be wasted.

An example of this is Assassin's Creed Unity, where Ubisoft stated that if it weren't for the weak CPUs in the PS4 and Xbox One they could run the game at 100FPS. They're actually running the game at less than a third of that speed most of the time, wasting GPU downtime that could otherwise be used for physics and AI.

If we look at Ubisoft's benchmarks and Sony's recent SDK 2.0 slides (http://www.dualshockers.com/2014/11/23/ps4-sdk-2-0-revealed-includes-a-lot-of-interesting-new-tech-and-features-game-developers-can-use/), the CPU can be used for the physics or AI the player directly interacts with (the close-up, specific stuff), while the GPU could easily handle huge crowds filled with either. The GPU can supplement part of the demand, so it can easily be used to make up for the weaker CPU.

PS4 does have additional ALUs compared to the XB1; it also has extra texture mapping units and ROPs, so higher resolutions, better AA, AF and more demanding textures can all be taken advantage of, while even better physics and AI simulation is easily programmable by developers.


But you can't use it if other resources or channels are already congested. I'm not saying it can't be used, but the GPU is restricted by, and dependent on, other resources, and those are also used by the CPU and by GPU contexts, so we will have to see to what extent devs will be able to use it, but I'm not sure it's that much. There will be a trade-off between using the GPU for graphics or for GPGPU; you can't have both without penalizing the other.

 

That's the point of the way the whole APU has been designed: developers can easily queue up compute commands and have them waiting for whenever GPU hardware is free to process them. The GPU is never completely saturated with graphical tasks; there are always plenty of times when stream processors are sitting idle, and that's when AI and physics are handled. Remember I said efficiency?

You can be sure that developers will use this to a great extent, because it's what they asked for when Sony did all of their research into what hardware to put in the PS4.

There's no need for a trade-off here, because using resources as efficiently as possible is part of the system's design.
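To picture what "filling GPU downtime with compute" means, here's a toy Python model. This is purely illustrative, not real console code, and all the numbers are invented: graphics jobs get GPU priority each frame, and queued compute jobs drain into whatever frame time is left over.

```python
# Toy model of async compute (illustrative only, not real console code):
# graphics jobs get GPU priority each frame; queued compute jobs then
# drain into whatever frame time is left over. All numbers are made up.

def schedule(frame_ms, graphics_jobs, compute_queue):
    """Run graphics first, then fill leftover frame time with compute."""
    used = sum(graphics_jobs)          # graphics has priority
    idle = frame_ms - used             # unused GPU time this frame
    done = []
    for job in list(compute_queue):    # drain compute jobs that fit
        if job <= idle:
            idle -= job
            done.append(job)
            compute_queue.remove(job)
    return done, compute_queue

# A 16.6 ms frame where rendering takes 12 ms leaves ~4.6 ms idle:
# enough for two of three queued 2 ms compute jobs; the third waits.
done, remaining = schedule(16.6, [5.0, 4.0, 3.0], [2.0, 2.0, 2.0])
```

The point of the model is the objection walsufnir raises: compute that doesn't fit the idle window has to wait for a later frame, so latency-sensitive work still belongs on the CPU.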



JustBeingReal said:
walsufnir said:


But you can't use it if other resources or channels are already congested. I'm not saying it can't be used, but the GPU is restricted by, and dependent on, other resources, and those are also used by the CPU and by GPU contexts, so we will have to see to what extent devs will be able to use it, but I'm not sure it's that much. There will be a trade-off between using the GPU for graphics or for GPGPU; you can't have both without penalizing the other.

 

That's the point of the way the whole APU has been designed: developers can easily queue up compute commands and have them waiting for whenever GPU hardware is free to process them. The GPU is never completely saturated with graphical tasks; there are always plenty of times when stream processors are sitting idle, and that's when AI and physics are handled. Remember I said efficiency?

You can be sure that developers will use this to a great extent, because it's what they asked for when Sony did all of their research into what hardware to put in the PS4.

There's no need for a trade-off here, because using resources as efficiently as possible is part of the system's design.


Waiting for the GPU hardware to be free to process data? In a game where devs are fighting for milliseconds to get their stuff computed? How would that work? You need your data fast and in a predictable way: what if you need the data now and can't wait another few milliseconds?

I'm not sure of anything until a dev talks freely about it and provides data on how and when the GPU was free enough to process compute work for a decent amount of time without limiting the render tasks, and on what can be accomplished by doing so.

By the way, how would you know to what extent PS4's GPU is currently utilized? There are clearly games that aren't as stable as some would wish, so why wouldn't devs already be using the hardware to 100%? This makes no sense at all.



Yep. He makes a game that uses 10% of the systems' capacity, calls them equal, and implies devs who disagree are idiots. Seems good.




WhiteEaglePL said:
We knew about this for a while now...

But I don't think there's a big difference between PS4 & XBO, or between XBO & Wii U.

PS4 > XB1 >>> Wii U

It's in the specs. There is a decent difference between PS4 and X1, and a bigger one compared to the Wii U. The Wii U suffers from a lack of memory bandwidth (lower than the PS3/360's), a pretty weak CPU, less memory and a weaker GPU.

The lack of bandwidth results in cases like MK8: 720p with no AA. The weak CPU will be an issue for NPC count, AI and physics. It's just a tri-core version of the Wii's CPU, which was itself a beefed-up GameCube CPU, and it can't reach the performance of the X360's Xenon, let alone the much more powerful and modern AMD CPUs in the PS4/X1.

About the OP: it isn't misrepresenting anything. It's like saying a GTX 770 isn't worse than a 980. It's in the specs, and the games just reflect that. The only game that didn't reflect it was AC Unity, but that was a disaster from the beginning and the publisher publicly stated they would hold the PS4 version to parity. The X1's CPU difference is just an extra overclock, something Sony could match with a firmware update, and it's only about a 10% difference, compared with a 50% advantage in GPU compute units.
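The napkin math behind that 10%-vs-50% comparison, using the publicly reported Jaguar CPU clocks (1.6 GHz vs 1.75 GHz) and GCN compute-unit counts (18 vs 12):

```python
# Napkin math behind the "10% CPU vs 50% GPU" comparison above, using
# the publicly reported Jaguar clocks and GCN compute-unit counts.
ps4_cpu_ghz, xb1_cpu_ghz = 1.6, 1.75
ps4_cus, xb1_cus = 18, 12

cpu_adv = (xb1_cpu_ghz - ps4_cpu_ghz) / ps4_cpu_ghz   # XB1's clock edge
gpu_adv = (ps4_cus - xb1_cus) / xb1_cus               # PS4's CU edge

print(f"XB1 CPU clock advantage: {cpu_adv:.1%}")      # 9.4%
print(f"PS4 compute-unit advantage: {gpu_adv:.1%}")   # 50.0%
```

So the "10%" is really a ~9.4% clock bump on an identical CPU core, while the CU gap is a hardware difference no firmware update can close.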



torok said:
WhiteEaglePL said:
We knew about this for a while now...

But I don't think there's a big difference between PS4 & XBO, or between XBO & Wii U.

PS4 > XB1 >>> Wii U

It's in the specs. There is a decent difference between PS4 and X1, and a bigger one compared to the Wii U. The Wii U suffers from a lack of memory bandwidth (lower than the PS3/360's), a pretty weak CPU, less memory and a weaker GPU.

The lack of bandwidth results in cases like MK8: 720p with no AA. The weak CPU will be an issue for NPC count, AI and physics. It's just a tri-core version of the Wii's CPU, which was itself a beefed-up GameCube CPU, and it can't reach the performance of the X360's Xenon, let alone the much more powerful and modern AMD CPUs in the PS4/X1.

About the OP: it isn't misrepresenting anything. It's like saying a GTX 770 isn't worse than a 980. It's in the specs, and the games just reflect that. The only game that didn't reflect it was AC Unity, but that was a disaster from the beginning and the publisher publicly stated they would hold the PS4 version to parity. The X1's CPU difference is just an extra overclock, something Sony could match with a firmware update, and it's only about a 10% difference, compared with a 50% advantage in GPU compute units.

This is not possible. The Wii U couldn't run the games it does with lower memory bandwidth than the PS3/360 have.

On the second point, I think people underestimate the importance of SDKs. Microsoft knows its software, and my understanding is that developers universally praise the Microsoft dev kits. Sony I'm not sure about, but given the discrepancy in hardware specs I'd say they must be behind on the software side. I can't see any other explanation, and I can't see developers intentionally sabotaging their own work for parity's sake.



walsufnir said:
JustBeingReal said:
walsufnir said:
 


But you can't use it if other resources or channels are already congested. I'm not saying it can't be used, but the GPU is restricted by, and dependent on, other resources, and those are also used by the CPU and by GPU contexts, so we will have to see to what extent devs will be able to use it, but I'm not sure it's that much. There will be a trade-off between using the GPU for graphics or for GPGPU; you can't have both without penalizing the other.

 

That's the point of the way the whole APU has been designed: developers can easily queue up compute commands and have them waiting for whenever GPU hardware is free to process them. The GPU is never completely saturated with graphical tasks; there are always plenty of times when stream processors are sitting idle, and that's when AI and physics are handled. Remember I said efficiency?

You can be sure that developers will use this to a great extent, because it's what they asked for when Sony did all of their research into what hardware to put in the PS4.

There's no need for a trade-off here, because using resources as efficiently as possible is part of the system's design.


Waiting for the GPU hardware to be free to process data? In a game where devs are fighting for milliseconds to get their stuff computed? How would that work? You need your data fast and in a predictable way: what if you need the data now and can't wait another few milliseconds?

I'm not sure of anything until a dev talks freely about it and provides data on how and when the GPU was free enough to process compute work for a decent amount of time without limiting the render tasks, and on what can be accomplished by doing so.

By the way, how would you know to what extent PS4's GPU is currently utilized? There are clearly games that aren't as stable as some would wish, so why wouldn't devs already be using the hardware to 100%? This makes no sense at all.

 

It works because even in instances where the GPU is under heavy load there are always stream processors free; developers plan for this during the development process.

It's part of the GPU hardware, not all of it in its entirety. What you have to remember is that it's never the whole GPU doing one task; the GPU is running multiple tasks simultaneously. Games are designed around all of their aspects, from graphics to compute. You're acting like it's a one-or-the-other situation and completely missing the point that graphics never take up all of the GPU's time. It's all a matter of proper resource management when the game is being developed.

You can be sure this is a great thing; if you do the research it becomes self-evident.

This is simple: it's efficient use of resources, not wasting them when they could be put to use.

The reason I know this stuff is research and common sense. The reason games aren't as stable as they could be is a lack of optimization, both in the design of the games and in how mature the development tools are at this point. The new SDK directly addresses sharing the physics compute load between the CPU and GPU.



MoHasanie said:

No Goblin’s Roundabout is a game that never takes itself seriously, and that is precisely why it is turning out to be an extremely enjoyable title. For those living under a rock, Roundabout is a partially open-world game where players drive passengers in a rotating limousine while solving puzzles along the way.

 

GamingBolt recently chatted with No Goblin co-founder Dan Teasdale to learn more about how the game is shaping up. Roundabout will be released on the PlayStation 4 and Xbox One, and we found it interesting to hear Dan’s thoughts on the differences between the two consoles. Details such as the Xbox One’s lower GPU compute-unit count and memory issues are fairly common knowledge now, but according to Dan these are arbitrary numbers and not the ideal way to compare the consoles.

 

“Saying that the PS4 is “more powerful” than the Xbox One or vice versa as a blanket statement is really misrepresenting how games work,” Dan said to GamingBolt.

 

“If all you’re doing is rendering polygons, yes, the PS4 is a little faster at that. If all you’re doing is gameplay and simulation, the Xbox One is a little faster. Video games need both, so it’s not as simple as comparing core counts or memory speed to determine which platform is the “fastest” or “easiest”.”

 

He further states that the two consoles are mostly similar, and that getting the game certification-ready is a bigger priority for developers than optimizing for one specific platform.

 

“At the end of the day, it doesn’t matter. Both platforms are in the same relative ballpark, so the real work for releasing is getting your game certification-ready on each platform instead of crazy optimizations on one specific platform.”


Dan definitely has a point here. Instead of comparing the technical specs of the two consoles, players should enjoy the games that developers are building for them. Roundabout is currently available on Steam and will hit the PS4 and Xbox One early next year. Stay tuned for our full interview with Dan Teasdale in the coming days.

 

http://gamingbolt.com/saying-ps4-is-more-powerful-than-xbox-one-or-vice-versa-is-really-misrepresenting-how-games-work#FLP586RumVsc0Wam.99


Going "all focus on graphics" and therefore "let the framerate suffer to get even better graphics" is also completely misinterpreting how games work.

So what do devs expect when they indoctrinate the consumers with nonsense?

Dear devs: #DealWithIt



The best part is that Naughty Dog will develop for the more powerful one and will optimize like crazy.




The way multiplatform games are developed nowadays, you won't see much of an early difference in game performance. Games ship with long lists of known bugs and performance problems. The biggest problems are tackled first, so the console struggling the most gets the most attention.

For example, in GTA5, dips to 26fps aren't a priority on PS4. On Xbox One it probably dipped lower in the countryside areas, so grass was removed in places. Heavy deadlines ensure parity more than any moneyhatting deal would.

Even if some nice optimization is made on one platform for the current game, it's unlikely to be ported over to the other if that one is already doing 'good enough'. If it ain't broke, don't fix it; introducing new bugs under tight deadlines is always a big risk. That doesn't mean both consoles won't benefit from each other's optimizations later in life. For example, the new AA techniques developed on the PS3 out of necessity made it to the 360 as well in later titles.

The gap will likely grow this gen as engines mature and get optimized for each platform. There is simply more to be gotten from the 6 extra compute units in the PS4.
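A rough sketch of where those 6 extra compute units matter: on GCN each compute unit has 64 shader ALUs doing 2 FLOPs per cycle (FMA), so the reported CU counts and clocks give the familiar peak figures. Back-of-envelope only:

```python
# Back-of-envelope FP32 peak throughput from the CU counts mentioned above.
# GCN: 64 shader ALUs per compute unit, 2 FLOPs per ALU per cycle (FMA).
def gflops(cus, clock_ghz, alus_per_cu=64, flops_per_alu=2):
    return cus * alus_per_cu * flops_per_alu * clock_ghz

ps4 = gflops(18, 0.800)   # ~1843 GFLOPS (the familiar 1.84 TFLOPS figure)
xb1 = gflops(12, 0.853)   # ~1310 GFLOPS (~1.31 TFLOPS)
```

That's roughly a 40% raw-throughput gap; how much of it shows up on screen depends on how well engines actually feed those extra CUs, which is the whole debate above.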