Multimedialover said:

http://arstechnica.com/gaming/2013/05/microsoft-talks-about-xbox-ones-internals-while-disclosing-nothing/

Microsoft claims the One has 8 times the graphics capability of the 360, and the 360 had 12 times that of the original Xbox. So I'm happy with that. There's also the potential of the recent news that devs can, if they choose to, run things like AI and physics through Microsoft's cloud, freeing up memory and CPU for other things.

Interesting.


Yeah, but even so... the Xbox 360 and PS3 look like utter crap today from a graphics perspective, and the PS4 and Xbox One will have the same hardware for their entire lives. By the time the consoles are 4-5 years old the same thing will happen as this generation: games will just look dated, which means the multi-platform titles that also release on PC will look horrible too.

Plus, last generation had relatively high-end graphics compared to the PCs of the day; this time around the Xbox One will have mid/low-end graphics, the PS4 mid-range, and the Wii U the lowest of the low.

I just want games to look great and push the PC harder, not stagnate.

drkohler said:

Because it is only 32MByte? How much can you do with 32MByte? The first batch of games will use the eSRAM as a framebuffer and call it a day. What if the game uses 1GB of textures? The better developers will start to use parts of the eSRAM as a data cache and for variable storage, but it will take time to figure out how to do this, and the developers will have to program the technique themselves and manage all the data.
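To put the 32MB in perspective, here is a back-of-the-envelope sketch; the render target formats are my own illustrative assumptions, not anything Microsoft has confirmed:

    #include <stdio.h>

    /* Back-of-the-envelope: how fast does 32MB fill up at 1080p?
       The render target formats below are illustrative assumptions. */
    int main(void) {
        const int w = 1920, h = 1080;
        const long color = (long)w * h * 4;   /* RGBA8 color target  */
        const long depth = (long)w * h * 4;   /* D24S8 depth/stencil */
        printf("color: %.1f MB, depth: %.1f MB, total: %.1f of 32 MB\n",
               color / 1048576.0, depth / 1048576.0,
               (color + depth) / 1048576.0);
        /* ~15.8MB for a single color+depth pair; add MSAA or a few
           G-buffer targets and the 32MB is gone, never mind textures. */
        return 0;
    }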

The latency problem... oh my, how many times has this horse been beaten to death now? Do people really think that Sony engineers weren't aware of the higher GDDR latencies? The not so surprising answer is: of course they were. That is why they added lots of transistors to the SoC. Some of the added features we know (direct command bus), some we don't. And fortunately for us, some engineer had a bright moment years ago and invented something called a "cache" and a "cache controller". The end result is roughly the following: a) for code fetches, the latency problem simply does not exist at all; b) for data fetches, we might sometimes see the latency problem. How often depends on how well the programmers have defined their data structures. With some good thinking, I'd guess we might see a latency hit on less than 1-5% of all fetches. It is clear, assuming the unveiled specs are correct, that the 8GB GDDR5 outperforms the 8GB DDR3 + 32MB eSRAM.
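A rough illustration of what "well-defined data structures" means in practice (the particle example is mine, not from any console SDK): lay the data out so hot loops touch contiguous memory, and the cache and prefetcher hide most of the DRAM latency.

    #include <stddef.h>

    /* Illustration of how data layout decides how often GDDR5
       latency is actually felt. */

    /* Array-of-structures: a position-only pass drags velocities and
       colors through the cache as well, causing extra misses. */
    struct ParticleAoS { float px, py, pz, vx, vy, vz; unsigned color; };

    /* Structure-of-arrays: the same pass reads contiguous memory, so
       the cache and prefetcher hide almost all of the DRAM latency. */
    struct ParticlesSoA {
        float *px, *py, *pz;
        float *vx, *vy, *vz;
    };

    void integrate(struct ParticlesSoA *p, size_t n, float dt) {
        for (size_t i = 0; i < n; i++) {
            p->px[i] += p->vx[i] * dt;  /* sequential, prefetch-friendly */
            p->py[i] += p->vy[i] * dt;
            p->pz[i] += p->vz[i] * dt;
        }
    }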

The GPUs were chosen to do what they are supposed to do: deliver 1080p games (or tvtvtvtvtvsportssportssportstvtvtv for the Xbox One). They contain the best parts available for the money allotted to graphics. Anybody expecting a GeForce GTX 680 in a console lives in a dreamworld.

And one last request: please stop the "the cloud will save it" messages. Not.going.to.happen.


It's clear you don't know how the eSRAM works.
Microsoft will split the rendering frame into smaller "tiles", which is perfect: tiling doesn't need much memory to achieve good results, and each tile fits comfortably inside that paltry 32MB, as the sketch below shows.
The Xbox 360 did something similar: to fit the frame inside the EDRAM, it was cut up into smaller pieces so the EDRAM die could work on one piece at a time.
The assumption that you are going to cram gigabytes of information into it is just silly; that's not how it functions or what its purpose was to begin with.
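A minimal sketch of that tiling arithmetic; the 256-pixel tile size and 8 bytes per pixel for color + depth are my own illustrative assumptions, not the actual Xbox One scheme:

    #include <stdio.h>

    /* Sketch: carve a 1080p frame into tiles small enough that each
       one's working set stays resident in the 32MB eSRAM.
       Tile size and pixel formats are assumptions for illustration. */
    int main(void) {
        const int W = 1920, H = 1080;
        const int TILE = 256;                 /* hypothetical tile edge        */
        const int BPP  = 8;                   /* color + depth bytes per pixel */
        int tiles_x = (W + TILE - 1) / TILE;  /* 8 */
        int tiles_y = (H + TILE - 1) / TILE;  /* 5 */
        long tile_bytes = (long)TILE * TILE * BPP;
        printf("%d tiles of %ld KB each -- one tile at a time fits easily\n",
               tiles_x * tiles_y, tile_bytes / 1024);
        for (int ty = 0; ty < tiles_y; ty++)
            for (int tx = 0; tx < tiles_x; tx++) {
                /* bin geometry for this tile, render it in eSRAM,
                   then resolve the finished tile out to DDR3 */
            }
        return 0;
    }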



--::{PC Gaming Master Race}::--