
Xbox One vs. PS4 Graphics Showdown: The Rematch

drkohler said:
g911turbo said:
Ssliasil said:
There is so much wrong with this post that I would have to rewrite the entire thing for you in order to make all the needed corrections. And quite frankly, I don't particularly want to do that.

I was thinking the same thing.

I particularly liked the "I have a computer science degree and I am a software engineer" bit, and then he shows his memory throughput graphics, which puts him instantly into the "complete moron" category.

This user said it best:

Timorous 

 

Man, are you wrong, so let's go through it bit by bit.

I agree with Item 1, and Item 2 seems up in the air, so solid conclusions are hard to draw; ultimately I would call both of these items a wash.

Item 3 is wrong on several fronts though, so let's go through them.

1) DDR3 is faster than GDDR5 in some circumstances. Assuming you are talking about memory latency here, I have to disagree with this statement. GDDR5 memory latency is no worse, and possibly better, than DDR3 latency.

Page 51 for DDR3 latency http://www.hynix.com/datasheet... - ACT to ACT is 45ns.

Page 133 for GDDR5 latency http://www.hynix.com/datasheet... - ACT to ACT is 40ns.
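To put those ACT-to-ACT numbers in perspective, here's a quick sketch (my own arithmetic, using the datasheet figures above and the GPU clocks quoted later in the thread) converting them into GPU clock cycles:

```python
# Convert the quoted ACT-to-ACT latencies into GPU clock cycles.
# Latencies are from the Hynix datasheets linked above; clocks are the
# console GPU clocks quoted elsewhere in this thread.
latencies_ns = {"DDR3": 45, "GDDR5": 40}
gpu_clocks_mhz = {"Xbox One": 853, "PS4": 800}

for mem, ns in latencies_ns.items():
    for console, mhz in gpu_clocks_mhz.items():
        cycles = ns * mhz / 1000  # cycles = ns * (cycles per ns)
        print(f"{mem}: {ns} ns is about {cycles:.0f} {console} GPU cycles")
```

Either way it works out to a few dozen GPU cycles, so the "GDDR5 has horrible latency" line does not hold up.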

2) You cannot just add the bandwidths together like that, because the job of the DDR3 and the move engines is to keep data flowing into the eSRAM as much as possible. That means the GPU will be communicating with the eSRAM most of the time, but not all of the time. The true memory bandwidth will vary frame to frame depending on whether the data is available in the eSRAM. We have no way of knowing what this will average out to, but I doubt it will be much less than the PS4's.
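As a toy model of why simple addition doesn't work (my placeholder figures: roughly 68 GB/s for the DDR3 pool and roughly 190-200 GB/s peak for the eSRAM, which is where the 260 GB/s headline number comes from), the effective figure is closer to a weighted average of the two, driven by how often the GPU's traffic actually lands in the eSRAM:

```python
# Toy model (my assumption, not from any spec sheet): effective bandwidth is
# weighted by the fraction of GPU traffic served out of the 32 MB eSRAM,
# not the sum of the two peaks.
ESRAM_BW = 192.0  # GB/s, commonly quoted peak (placeholder)
DDR3_BW = 68.0    # GB/s, commonly quoted peak (placeholder)

def effective_bandwidth(esram_hit_rate):
    return esram_hit_rate * ESRAM_BW + (1 - esram_hit_rate) * DDR3_BW

for hit_rate in (0.25, 0.50, 0.75):
    print(f"{hit_rate:.0%} of traffic in eSRAM -> ~{effective_bandwidth(hit_rate):.0f} GB/s effective")
```

The real behaviour is messier than a weighted average, but the point stands: the "DDR3 + eSRAM" sum is a ceiling you only approach when the working set fits in the eSRAM.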

I would call the memory situation, from a bandwidth standpoint, a draw at the moment because we do not have enough information. It will require more developer work to get the best out of the Xbox One's memory system though, so I would give the edge to the PS4 overall.

Item 4 is totally wrong.

Yes, the Xbox One has received a clock bump to 853 MHz and has closed the gap, but there is still a gap. The Xbox One has 853 * 2 * 768 = 1.31 TFLOPS of compute performance (clock speed * operations per clock * shader processors). The PS4 has 800 * 2 * 1152 = 1.84 TFLOPS of compute performance. The PS4 also has more texture units, more colour ROPs and more Z/stencil ROPs. The GPUs in the two consoles are a class apart, and that will show, either through higher frame rates on the PS4 leading to smoother gameplay or through slightly improved graphics.
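Here's the same arithmetic as a quick script, in case anyone wants to check it (the 2 is the fused multiply-add each shader can issue per clock):

```python
# Theoretical single-precision throughput = clock * ops per clock (FMA = 2) * shader count.
def tflops(clock_mhz, shaders, ops_per_clock=2):
    return clock_mhz * 1e6 * ops_per_clock * shaders / 1e12

print(f"Xbox One: {tflops(853, 768):.2f} TFLOPS")   # ~1.31
print(f"PS4:      {tflops(800, 1152):.2f} TFLOPS")  # ~1.84
```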

Conclusion

Your conclusion shows that you are a liar. The GPU architecture is called GCN (Graphics Core Next); it defines the basic building blocks used to create the cards in the 7xxx series of GPUs.

There is no such thing as a 'number of GCNs'.

Considering we are talking about gaming consoles, the most important part is the GPU, so trying to play off that difference as minor when the other stuff has less of a performance impact is pretty ignorant.



Adinnieken said:
errorpwns said:
Adinnieken said:
The problem with this blog post is it doesn't acknowledge the differences we know of in the GPU. While it talks about the frequency the GPU is running at, it doesn't mention the potential benefits to the PS4 that the number of shaders could have.

My big problem with this article is that, unlike last generation, no one technically knowledgeable has looked at the specs and spelled out, in technical terms, how each console will perform based on what we do know. What we have is people looking at specs and saying "This number is bigger, therefore this is better!"

The Cell processor could smoke the Xenon processor in the PS3 and Xbox 360, yet the technical superiority of one over the other was made moot by differences elsewhere in the console. I don't know if it's because there is no one capable of providing the rundown we saw last generation, or if it's simply a matter of not having enough data. I would much prefer, though, to have a clear understanding of the capabilities of each system as a whole operating unit rather than of which one has the bigger specs.

This blog post does nothing to address that, nor does it advance the discussion any further than it has already gone.

The Cell had power, but the rest of the console was one big joke. 512MB of memory, and essentially only 256MB of it was available for graphics. The 360 had more available graphics memory, and in some multiplatform games that showed. The graphics chip and memory bottlenecked the Cell. It's all about bottlenecks. I could put a couple of Titans into my system, but in the end my Athlon II X4 at 2.8GHz would bottleneck the graphics chips; I'd need to upgrade to a faster processor to see frame-rate increases in some games. See why consoles have 8GB of memory now? They don't want to run into that terrible bottleneck. Even in '06, systems dedicated to gaming were getting 2-4GB of memory, so putting 512MB in was flawed from the beginning, especially when they planned a 7-8 year console cycle.

The thing that has changed between this generation and the previous one is that the bottlenecks have become smaller and less obvious. Finding them requires a greater understanding of the systems and how data is used on those systems, rather than just looking at numbers.

What's the point of having GDDR5 memory when your memory block is 256Mb wide and you only have 4Mb to fill that space? A block is an addressable area; only one thing can fit into a single block. So having a larger block size, as GDDR5 does, becomes a disadvantage when you're dealing with smaller files. Soon, most of your memory is consumed by small files, because these 256Mb blocks are now occupied by sub-256Mb files. Memory density, if you will. DDR3 memory has a smaller block size, I think 64Mb? So while it may take more blocks for a larger file, I'm not wasting as much memory on a smaller file. That 4Mb file is only wasting 60Mb of memory in a block, not 252Mb.

Likewise, as I said before, GDDR and DDR memory work differently. In GDDR memory you need to flush the memory before writing to it; you can't append to it, whereas in DDR memory you can. Which is why I believe the PS4 relies on virtual memory (an HDD cache file). The cache file is used to read and write active data from/to memory as it's being used.

The thing is, the memory bandwidth / amount of memory only comes into play when you bump up to a higher resolution. The Xbox One has more than enough capability at 1080p. Considering that the current gen cosnoels will most likely never exceed 1080p for games, it is safe to assume that both consoles will hit a peak eventually once optimization and so on is factored in. Neither console needs more than 4GB of memory dedicated to video; anything more than 3 is absurd for games running at 1080p. Which is why it would be a joke to bring up the fact that one has more usable memory than the other, since in the end IT DOES NOT MATTER AT 1080P. PC benchmarks would verify that perfectly. The fact that 2GB of GDDR5 seems to handle 1080p in games perfectly fine should be more than enough proof of that. I don't get why people think the way they do about hardware. The fact that one produces more flops than the other has no real standing in games. Drivers have increased graphics card performance by large amounts, so if Microsoft can out-write Sony driver-wise, their graphics chip could end up being the more powerful one in practice, even though on paper the one in the PS4 is more powerful. People have severely underestimated what a graphics driver rewrite can mean for the Xbox One. It has been rewritten, and that alone could boost the Xbox One's performance; the 50MHz bump, not as much.
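Some rough numbers on why 1080p itself isn't the memory hog (my back-of-the-envelope figures, nothing more):

```python
# Rough back-of-the-envelope: raw render-target cost at 1080p (my illustration).
width, height = 1920, 1080
bytes_per_pixel = 4  # one 32-bit RGBA or depth target

one_target_mb = width * height * bytes_per_pixel / 2**20
print(f"One 32-bit 1080p target: ~{one_target_mb:.1f} MB")

# Even a deferred setup with half a dozen targets plus double buffering is well
# under 100 MB; it's textures and geometry that actually fill video memory.
targets = 6
print(f"{targets} such targets: ~{targets * one_target_mb:.0f} MB")
```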



Egann said:

I don't know why we're still having this discussion. We *still* haven't seen an actual gameplay screenshot from either system, because the demos for BOTH systems were actually running on high-end PCs.

But here's something to put in your craw. All sources agree that the PS4 and Xbox One have GPUs in the 1.*something* TFLOPS range, although until we actually get our hands on the hardware we won't know exactly where each system stands.

We know for a fact the Wii U runs at a third or less of this, at 0.35 TFLOPS.

Actual comparison screenshots (Wii U vs. PS4/X1 images not reproduced here).

Yes, I did steal S.T.A.G.E.'s sig because I'm just that lazy.

Conclusion: Whatever power difference exists between the PS4 and XB1, it's probably irrelevant. The Wii U is somewhere between a factor of three and eight weaker than either console, and yet the graphics they actually pump out are only one or two notches better at best.
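A quick ratio check, using the TFLOPS figures quoted earlier in the thread (so take it for what it's worth):

```python
# Ratio check using figures quoted in this thread; treat them as approximate.
wii_u_tflops = 0.35
others = {"Xbox One": 1.31, "PS4": 1.84}

for name, tflops in others.items():
    print(f"{name}: ~{tflops / wii_u_tflops:.1f}x the Wii U in raw shader throughput")
```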

The graphical difference between the PS4/Xbone and the Wii U is actually pretty huge :/



Wow, your maths about eSRAM shows you really know your way around hardware.



errorpwns said:
[...] Considering that the current gen cosnoels will most likely never exceed 1080p for games it is safe to assume that both consoles will hit a peak eventually after optimization and stuff is factored in. [...]

Just found that typo really funny. Like the typo itself is dyslexic. xD

cosnoels 




I don't understand why some people keep adding the eSRAM bandwidth to the DDR3 bandwidth to get a total; if they know nothing about the technology, they shouldn't do something so stupid.



And I did so well staying away from poorly written and factually misleading articles/blogs like this for the past month and a half. I see I need to stay away a bit longer.



I am the Playstation Avenger.

   

CGI-Quality said:
JayWood2010 said:

Developers have been saying for months and months that these two consoles are nearly identical (which could change down the road), and so does this article (whether it has any merit is a different story), but the fans are the ones making it a war. And it is the fans who understand little about hardware, optimization, and the operating system. It is a little annoying to have people arguing about this for months and months when the majority of them are not certified in engineering or development.

http://www.pushsquare.com/news/2013/07/xbox_one_developer_concedes_that_ps4_is_more_powerful

http://www.gamespot.com/news/just-cause-dev-ps4-more-powerful-than-xbox-one-right-now-6408781

http://www.computerandvideogames.com/408484/ps4-currently-beats-xbox-one-in-terms-of-raw-power-says-avalanche-technical-officer/

http://gamingeverything.com/oddworld-developer-says-ps4-is-a-bit-stronger-than-xbo-overall-multiplatform-games-could-be-slightly-better-on-sonys-machine/

I tried to find devs from different months, but like a lot of people, most devs I hear from have been clear that the PS4 is more powerful. And you don't have to take their word for it: the GPUs alone are 1.84 TFLOPS vs 1.29 TFLOPS. Plus, 8GB of GDDR5 is stronger than 8GB of DDR3, no matter how it's spun. None of it should be that big of a deal, but when an article gets posted with blatant fallacies, things should be corrected.
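For the memory point, peak bandwidth is just bus width times transfer rate; using the commonly quoted specs (256-bit buses on both, GDDR5 at 5500 MT/s, DDR3 at 2133 MT/s; figures from memory, so double-check them):

```python
# Peak bandwidth = bus width in bytes * transfer rate. Specs are the commonly
# quoted ones, from memory, so treat them as approximate.
def peak_gb_per_s(bus_width_bits, transfers_per_second):
    return bus_width_bits / 8 * transfers_per_second / 1e9

print(f"PS4 8GB GDDR5:     ~{peak_gb_per_s(256, 5500e6):.0f} GB/s")  # ~176
print(f"Xbox One 8GB DDR3: ~{peak_gb_per_s(256, 2133e6):.0f} GB/s")  # ~68
```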


Thread should have ended here.

Well said CGI.

How on earth the Xbone has 260GB/s of bandwidth is beyond me.



Intel Core i7 3770K [3.5GHz]|MSI Big Bang Z77 Mpower|Corsair Vengeance DDR3-1866 2 x 4GB|MSI GeForce GTX 560 ti Twin Frozr 2|OCZ Vertex 4 128GB|Corsair HX750|Cooler Master CM 690II Advanced|

Only read the front page of comments plus the OP. As many have said, there's so much wrong info there based on abstract conclusions. I only had to look at the graph that indicates the speed of the RAM and I just skipped that section completely. Funny that he put his credentials up front only to make himself look silly.

The gap is bigger this time than the PS360 gap, folks.

This is like saying 7770/7790 and 7850/7870 are the same.
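For reference, the desktop figures I have in my head for those parts (from memory, so treat them as approximate):

```python
# Desktop GCN parts mentioned above: (shader count, reference clock in MHz).
# Figures are from memory; double-check before quoting them anywhere.
cards = {
    "HD 7770": (640, 1000),
    "HD 7790": (896, 1000),
    "HD 7850": (1024, 860),
    "HD 7870": (1280, 1000),
}

for name, (shaders, clock_mhz) in cards.items():
    tflops = clock_mhz * 1e6 * 2 * shaders / 1e12  # 2 ops per clock (FMA)
    print(f"{name}: {shaders} shaders @ {clock_mhz} MHz = ~{tflops:.2f} TFLOPS")
```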



e=mc^2

Gaming on: PS4 Pro, Switch, SNES Mini, Wii U, PC (i5-7400, GTX 1060)

dsgrue3 said:
Why the fuck would someone go to school for CS if they wanted to concern themselves with hardware? That's straight-up computer engineering.


Obviously, CS curricula differ a lot depending on the country where you study.