
Digital Foundry vs. the Xbox One architects... or the end of the secret sauce craziness

Ashadian said:

Yes and No!

According to MS employees it's Yes!

For the rest of the world it's a NO!

In fact, MS never said that... they said Xbone games will run great and look gorgeous, and you won't see differences between the games.

I agree multiplatform games will run close on both consoles, maybe 1080p on PS4 and 900p upscaled to 1080p on Xbone... most people won't see a difference at all.
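For reference, the raw pixel counts behind that 1080p vs. 900p comparison work out as follows; this is plain arithmetic and says nothing about whether the difference is actually visible on a given screen:

```python
# Plain arithmetic behind the 1080p vs. 900p comparison above.
pixels_1080p = 1920 * 1080   # 2,073,600 pixels rendered natively
pixels_900p  = 1600 * 900    # 1,440,000 pixels rendered, then upscaled to fill a 1080p screen
print(f"1080p renders {pixels_1080p / pixels_900p:.2f}x as many pixels as 900p")  # ~1.44x
```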



kirby007 said:
Still nothing about the cloud and its untapped potential. I find these guys are seriously misinforming people on such an "important issue".

MS stopped talking about the Cloud because they are clearly losing that argument.

Instead of showing what can be done with the cloud and demonstrating its benefits for latency-tolerant tasks, they tried to say the Xbone will be 40x more powerful with the Cloud.

So now they are focusing on what really matters.

The Cloud has benefits, but it won't make your console more powerful in graphics terms.

Ironically, Sony is selling the Cloud better than MS right now.



ethomaz said:
kirby007 said:
Still nothing about the cloud and its untapped potential. I find these guys are seriously misinforming people on such an "important issue".

MS stopped talking about the Cloud because they are clearly losing that argument.

Instead of showing what can be done with the cloud and demonstrating its benefits for latency-tolerant tasks, they tried to say the Xbone will be 40x more powerful with the Cloud.

So now they are focusing on what really matters.

The Cloud has benefits, but it won't make your console more powerful in graphics terms.

Ironically, Sony is selling the Cloud better than MS right now.

That's actually not true. NVidia has a video out on cloud computing demonstrating the power of off-loading real-time graphical work, and that's with 200 ms of latency. Mind you, Microsoft isn't even talking about that; they're only talking about off-loading some computation. NVidia seems to be using servers with extremely beefy GPUs.

I don't think it is as impossible as people want to believe. For some, yeah, it will be, but it looks as though you could see it being used in the next 5 years, possibly 10, to provide enhanced graphical fidelity.



Adinnieken said:

That's actually not true. NVidia has a video out on cloud computing demonstrating the power of off-loading real-time graphical work, and that's with 200 ms of latency. Mind you, Microsoft isn't even talking about that; they're only talking about off-loading some computation. NVidia seems to be using servers with extremely beefy GPUs.

I don't think it is as impossible as people want to believe. For some, yeah, it will be, but it looks as though you could see it being used in the next 5 years, possibly 10, to provide enhanced graphical fidelity.

That was not rendering graphics... that was a lighting simulation to be used later in the game.

My point still stands.

And nVidia's approach is not possible on any Cloud service right now... the servers are too weak, and getting them ready would be cost-prohibitive for the next 10-20 years... it will take that long before the internet has low enough latency to support this tech for graphics.
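To make the latency point concrete, here is a minimal sketch of the kind of offloading both sides are describing. The function names are entirely hypothetical (this is not any real console SDK): the frame loop never waits on the network, it just keeps using the last lighting result it received, which is why a ~200 ms round trip is tolerable for this class of work but would be fatal for per-frame rendering on a ~16 ms budget.

```python
# Toy sketch (hypothetical names, not a real SDK): the game loop keeps rendering
# with the most recent lighting data it has, while a cloud request for updated
# lighting runs in the background. Nothing in the frame waits on the request.
import concurrent.futures
import time

def simulate_cloud_lighting(scene_state):
    """Stand-in for an expensive, latency-tolerant job (e.g. a lighting/GI update)."""
    time.sleep(0.2)                      # pretend network round trip + server compute ~200 ms
    return f"lighting computed for {scene_state}"

def render_frame(frame, lighting):
    print(f"frame {frame:02d} rendered with: {lighting}")

pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
current_lighting = "baked fallback lighting"      # always have a local result to fall back on
pending = pool.submit(simulate_cloud_lighting, "sun at noon")

for frame in range(30):                           # ~half a second of a 60 fps loop
    if pending is not None and pending.done():
        current_lighting = pending.result()       # swap in the cloud result when it arrives
        pending = None
    render_frame(frame, current_lighting)
    time.sleep(1 / 60)                            # stay on the ~16 ms frame budget

pool.shutdown()
```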



ethomaz said:

Adinnieken said:

That's actually not true. NVidia has a video out on cloud computing demonstrating the power of off-loading real-time graphical work, and that's with 200 ms of latency. Mind you, Microsoft isn't even talking about that; they're only talking about off-loading some computation. NVidia seems to be using servers with extremely beefy GPUs.

I don't think it is as impossible as people want to believe. For some, yeah, it will be, but it looks as though you could see it being used in the next 5 years, possibly 10, to provide enhanced graphical fidelity.

That was not rendering graphics... that was a lighting simulation to be used later in the game.

My point still stands.

And nVidia's approach is not possible on any Cloud service right now... the servers are too weak, and getting them ready would be cost-prohibitive for the next 10-20 years... it will take that long before the internet has low enough latency to support this tech for graphics.

Ehm, how do you know the servers are weak?

And yes, you will need a stable internet connection for it, but it is entirely possible; it's not really that big of a problem nowadays...



 "I think people should define the word crap" - Kirby007

Join the Prediction League http://www.vgchartz.com/predictions

Instead of seeking to convince others, we can be open to changing our own minds, and seek out information that contradicts our own steadfast point of view. Maybe it’ll turn out that those who disagree with you actually have a solid grasp of the facts. There’s a slight possibility that, after all, you’re the one who’s wrong.




My question is, from a programming standpoint, will using the ESRAM be as easy as they say? I've seen one developer say it's kind of a pain, which seems likely when compared to unified memory, but is it significant? Will it possibly result in title versions that differ between the two systems? Microsoft says it's easy and that developers had no problems with the 360 setup, which makes sense, but that was in comparison with the convoluted PS3 architecture.

The impression I get is that ESRAM gives quite a boost--if it's used perfectly--but I suspect that in real-world scenarios, Sony's setup is closer to ideal.

I guess my real question is, will this mean variable results on the XO based on developer time/skill? Will this be something that studios will need time to master or will experience with the 360 translate to developer familiarity with the process? In short, will there again be "lead system" differences because of developers having to go back and deal with split/unified memory differences? I was hoping for completely concurrent development, which I'm sure is largely still going to happen, but the less optimization that needs to be done, the better, in terms of development cost, time, and quality.
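A toy sketch of the kind of bookkeeping being asked about here, assuming a 32 MB scratchpad budget (the Xbox One's ESRAM size) and purely hypothetical render-target names and sizes. With a unified pool none of this placement logic exists, which is roughly the extra work in question:

```python
# Toy illustration (not a real SDK): with one unified pool you just allocate;
# with a 32 MB fast scratchpad you have to decide which render targets live
# there and which spill to the slower main RAM. Target list is hypothetical.
ESRAM_BUDGET_MB = 32

def target_size_mb(width, height, bytes_per_pixel):
    return width * height * bytes_per_pixel / (1024 * 1024)

# (name, width, height, bytes per pixel) -- invented for illustration
render_targets = [
    ("color",        1920, 1080, 4),
    ("depth",        1920, 1080, 4),
    ("normals",      1920, 1080, 8),
    ("hdr_lighting", 1920, 1080, 8),
    ("shadow_map",   2048, 2048, 4),
]

# Greedy placement: biggest targets first until the scratchpad is full.
remaining = ESRAM_BUDGET_MB
for name, w, h, bpp in sorted(render_targets,
                              key=lambda t: target_size_mb(*t[1:]), reverse=True):
    size = target_size_mb(w, h, bpp)
    if size <= remaining:
        remaining -= size
        print(f"{name:12s} {size:5.1f} MB -> ESRAM")
    else:
        print(f"{name:12s} {size:5.1f} MB -> main DDR3 (didn't fit)")
print(f"ESRAM left over: {remaining:.1f} MB")
```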



I'm tired of the balance argument. As if Sony didn't balance theirs.



drkohler said:
adriane23 said:
What I took from this is that they felt using ESRAM was a more energy- and cost-efficient approach than GDDR5.

Unfortunately most people still think that powerof(8GB fast DDR3 + fast ESRAM) < powerof(8GB low-power GDDR5). In truth it is exactly the opposite.
MS was simply caught off guard by 8GB of GDDR5 being available in 2013. They bet on GDDR5 capacities staying too low in 2013/14 and went with DDR3, since they always planned for 8GB (Sony planned for 4GB).


Are you speaking in terms of power consumption or power capabilities?

If the former, I encourage you to clarify with supporting evidence for your claim. The impression I got from this article is that MS believes their combo is more energy efficient, but you declare the opposite.

Surely these technical fellows have credentials far superior to yours and those of these forum-goers?
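For anyone who wants the back-of-envelope numbers behind both sides of that claim, here is the peak-bandwidth arithmetic using the publicly quoted specs (the ESRAM figures are the ones cited in coverage of the Digital Foundry interview at the time; treat this as a rough sketch, not a benchmark). The raw sum can exceed the GDDR5 figure, but only for working sets that fit in the 32 MB of ESRAM:

```python
# Rough peak-bandwidth arithmetic from publicly quoted specs (not measured numbers).
# Peak bandwidth = effective transfer rate x bus width in bytes.
def peak_gb_s(mega_transfers_per_s, bus_bits):
    return mega_transfers_per_s * 1e6 * (bus_bits / 8) / 1e9

ps4_gddr5 = peak_gb_s(5500, 256)   # ~176 GB/s, one unified 8 GB pool
xb1_ddr3  = peak_gb_s(2133, 256)   # ~68 GB/s for the 8 GB of DDR3
xb1_esram = 204.0                  # GB/s peak quoted for the 32 MB ESRAM
                                   # (read+write combined; ~140-150 GB/s cited as realistic)

print(f"PS4 GDDR5:        {ps4_gddr5:6.1f} GB/s")
print(f"XB1 DDR3 alone:   {xb1_ddr3:6.1f} GB/s")
print(f"XB1 DDR3 + ESRAM: {xb1_ddr3 + xb1_esram:6.1f} GB/s peak, but only for data held in 32 MB")
```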



So, this really ended up at the same point. Either you believe it or you don't. It doesn't matter what proof either side shows; the fanboys on each side will say the other side is either wrong or lying. So, more information = the same comments (either they don't believe it or they can't agree).