
Major Nelson on X1 Power: "I can't wait for the truth to come out."

Shinobi-san said:
Machiavellian said:
Yes, the difference in the GPUs is big, but that's the point I raise.  MS has about 6 co-processors they are not talking about.  Without knowing what these processors do, who knows how much graphics work can be offloaded on the X1 to relieve the GPU and even the CPU of their tasks.

The problem with only comparing the GPU and not the entire system is that people who are not coding for both platforms do not know whether other parts of the hardware play a role or not.  Co-processors are a way for custom designs to offload processing from the CPU and GPU.  This is why it's not yet evident exactly how this might play out.  MS might have felt it's more efficient to offload specific intensive processing to specialized hardware.

As for memory, it wasn't the expense of the memory but the timing.  MS designed their system around 8GB well before 8GB of GDDR5 could fit within the console.  Do not forget that Sony was rumored to be shipping 4GB before Samsung was able to produce 512MB chips, which allowed Sony to increase the memory to 8GB.  MS had already designed their system for 8GB and needed ESRAM to fill in the bandwidth part.  Since this was already done, there was no changing the system once 8GB of GDDR5 became possible.  The latency part will come into play for anything that cannot be offloaded to the CUs.  Since the CPUs are weak anyway, latency can become a problem if there are too many CPU tasks that require low latency rather than bandwidth.
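[Editor's note: for rough context on the bandwidth side of this trade-off, here is a back-of-the-envelope sketch using the publicly reported memory configurations (DDR3-2133 plus 32 MB of ESRAM on the X1 versus 5500 MT/s GDDR5 on the PS4, both on 256-bit buses). These are theoretical peaks, not measurements.]

```python
# Back-of-the-envelope peak-bandwidth comparison (theoretical peaks, not measurements).
# Peak GB/s for a DRAM interface = transfer rate (MT/s) * bus width (bytes) / 1000.

def peak_bandwidth_gbs(transfer_rate_mts: float, bus_width_bits: int) -> float:
    """Theoretical peak bandwidth in GB/s."""
    return transfer_rate_mts * (bus_width_bits / 8) / 1000

x1_ddr3   = peak_bandwidth_gbs(2133, 256)  # ~68 GB/s main memory (DDR3-2133, 256-bit)
ps4_gddr5 = peak_bandwidth_gbs(5500, 256)  # ~176 GB/s unified GDDR5 (5500 MT/s, 256-bit)
x1_esram  = 109                            # GB/s reported for the 32 MB ESRAM at 853 MHz
                                           # (MS quoted ~204 GB/s with mixed read/write)

print(f"X1 DDR3:   {x1_ddr3:.0f} GB/s")
print(f"X1 ESRAM:  ~{x1_esram} GB/s (32 MB scratchpad only)")
print(f"PS4 GDDR5: {ps4_gddr5:.0f} GB/s")
```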

The NDA stuff concerns the rumor going around about the dGPU.  I only mentioned it because people kept saying that MS would be telling the world they had this chip, but the rumor already covered why this was not happening.

As for what Albert is talking about, he is stating that raw physical numbers (mainly the GPU) do not tell the whole picture.  There are other parts within the X1 that make up the difference.  The only way we will see that is in the games.

@bolded: Fair point. I didn't really think of it that way.

Although I can't see co-processors making that much of a difference (just an opinion/hunch). Calling something a co-processor isn't actually telling us much. But we all know that GPUs are the best components for graphics, effects, etc. So in that sense, the Xbone is limited to its GPU's theoretical power for certain GPU-specific tasks. So is the PS4, and I'll say it again that the PS4 also has co-processors; we just don't know how many it has. Cerny spoke a lot about offloading tasks as well... freeing up CPU and GPU resources.

I don't see either system offloading GPU-specific tasks to the co-processors, but rather CPU tasks to assist the weak CPU.

My point about the memory, though, was that if MS could have gone with GDDR5 and cost was not an issue, or if there were no issues at all, they would have gone with it.

I think there's a bit of PR spin to what Albert is saying, to be honest... I don't see how they can close a 0.5 TFLOP gap in performance with a few co-processors. At the end of the day these aren't fully fledged GPU cores, or CPU cores, as far as I understand. If they are then it's another story, but as far as I know they're not. And apart from PR spin, I think MS is just explaining how they are making their console more efficient, explaining the design decisions, etc. It's not like MS went and added co-processors just to compete with Sony. If MS wanted a beefier GPU they would have added it in. Raw computational performance just didn't seem to be the game plan, really.

Remember, the Cell SPUs are considered co-processors, and they were used to offload tasks away from the PS3 GPU.  You already see the results of how well that worked out for Sony (at least for their 1st party). Depending on what tasks the co-processors are used for, they can make a solid difference in performance.  Without any info on what the processors do, I will not speculate as to the difference they can make, only that the possibility exists.

Let's break down the difference between the X1 GPU and the PS4's.  I will limit this to just the CUs, or shader cores, as that's where the big difference in TFlops comes into play.  The X1 has 12 CUs and the PS4 has 18.  CUs are generally used to process shaders.  Since that's a parallel process, the PS4 can execute more shader work at one time than the X1.  Now, from the Hot Chips conference some interesting things came out, and I have been processing the information and thinking about the design.  MS has made it so that all parts of the system know what is happening to a segment of code in memory.  If that is so, then MS can leverage specialized co-processors to handle specific gaming code where it would be more processing-intensive for the GPU or CPU to handle it.
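[Editor's note: to put the CU counts in TFLOPS terms, a minimal sketch of the standard peak-throughput arithmetic for GCN GPUs (64 ALUs per CU, 2 ops per ALU per clock via FMA), using the publicly reported 853 MHz and 800 MHz clocks. It reproduces the headline figures, not real-world performance.]

```python
# Peak single-precision throughput for a GCN-style GPU:
#   FLOPS = CUs * 64 ALUs/CU * 2 ops per ALU per clock (FMA) * clock (Hz)

def peak_tflops(compute_units: int, clock_mhz: float) -> float:
    """Theoretical peak TFLOPS for a GCN GPU."""
    return compute_units * 64 * 2 * clock_mhz * 1e6 / 1e12

print(f"Xbox One GPU (12 CUs @ 853 MHz): {peak_tflops(12, 853):.2f} TFLOPS")  # ~1.31
print(f"PS4 GPU      (18 CUs @ 800 MHz): {peak_tflops(18, 800):.2f} TFLOPS")  # ~1.84
```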

 

Do not forget that GPUs are designed as an add-on to a system, not as the main CPU.  What I am getting at is that, for efficiency within a closed box, some of the things the GPU does might make more sense to handle with specialized hardware than within the GPU.

As for Albert's statements, he is a PR guy, so you will always take his comments with a grain of salt.  Interestingly enough, he is commenting on NeoGAF, where he knows his comments are going to get a lot of pushback.  Most PR people know where to pick their battles, so him making his statements on GAF says that either he loves contention or MS has a few tricks up their sleeve.

As for TFlops, did you know that the PS3 is stated as having 2.1 TFlops compared to the 1.8 TFlops of the PS4?  As a measuring tool for the performance of these two devices, the TFlop number really might not be the difference maker.




Hi

I've just registered on this forum.

I'm going to get banned for saying this, but none of this post is false in any way. I am not an Xbox fanboy; I do not own an Xbox and never have. The last console I bought was a launch PS3, which I stopped using after MGS4. I do, however, work in the tech industry, and my job involves the production of silicon wafers (not for consoles).

On the 29th of September you will realize that the rumors that have been flying around regarding a dGPU are entirely true.

A very reliable source at a well-known first-party developer (who are creating a title for the Xbox One's launch), whom I've known for a while, mentioned that Microsoft had granted them a higher level of access to the devkits than almost all other developers. When combined with the APU, the amount of performance in TFLOPs rises from 1.41 to somewhere closer to 3.2 (ish). It's not quite the 5.31 TF+ that Misterxmedia is predicting, but some of what he says is true.

He also told me that the dGPU is the reason the console is fairly large. That one die needed massive cooling to begin with. The CPU was also initially clocked at 1.35GHz in the first dev kits to be issued in 'console' form, to offset heat production from the GPU components of the APU and the dGPU. A massive cooling solution was used to control temps, and that's why vents were placed directly above it, to ensure good airflow. It turns out after testing that the heatsink/fan was more than capable of cooling the entire die at '100% theoretical load', something that applications would hardly ever demand. That's what warranted the increased clocks.

I also learned that while the 53MHz+ bump was applied to the GPU core in the APU, it was not applied to the dGPU. This was because the dGPU was already running at 853MHz. What's odd is that the dGPU is granted higher bandwidth to the ESRAM than the GPU core on the APU. No reason was given as to why, and I don't want to pretend like I know.

The NDA is pretty much spot on. But please, don't take my word for it. You'll find out pretty much from the source on the 29th (or possibly the day after, depending on how Msoft and mjrnelson or penello want to announce it!).

Look forward to it!



Machiavellian said:
As for TFlops, did you know that the PS3 is stated as having 2.1 TFlops compared to the 1.8 TFlops of the PS4?  As a measuring tool for the performance of these two devices, the TFlop number really might not be the difference maker.


I should point out that the PS3 can't even produce 0.35 TF. The 2 TF figure was based on projections from E3 2005, long before the console went into production.

The PS4 will actually produce 1.84 TF. That's not a projection.
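[Editor's note: for reference, the ~2 TFLOPS PS3 number widely cited from E3 2005 was the sum of marketing figures (roughly 0.218 TFLOPS for Cell plus 1.8 TFLOPS for RSX, the latter counting fixed-function work rather than programmable shader math), whereas the PS4's 1.84 TFLOPS falls straight out of its GPU configuration. A minimal sketch of that arithmetic:]

```python
# PS3 "2 TFLOPS" (E3 2005 marketing): Cell + RSX figures added together; the RSX
# number counted fixed-function work, so it is not comparable to GCN shader FLOPS.
ps3_marketing_tflops = 0.218 + 1.8                # ~2.0 "TFLOPS"

# PS4 1.84 TFLOPS: straight from the GPU configuration.
ps4_peak_tflops = 18 * 64 * 2 * 800e6 / 1e12      # 18 CUs * 64 ALUs * 2 ops * 800 MHz

print(f"PS3 (2005 marketing figure): ~{ps3_marketing_tflops:.1f} TFLOPS")
print(f"PS4 (from GPU specs):        {ps4_peak_tflops:.2f} TFLOPS")
```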



misterymedia said:





JoeTheBro said:




Imagination is a powerful thing. None of that is involved here. You don't have to take my word for it, in fact, if I were you, I probably wouldn't myself!

But everything I've said is true. I actually booked tickets to Eurogamer on the 29th so I might actually get to see some of the reactions as the news spreads live!



Machiavellian said:
As for TFlops, did you know that the PS3 is stated as having 2.1 TFlops compared to the 1.8 TFlops of the PS4?  As a measuring tool for the performance of these two devices, the TFlop number really might not be the difference maker.


The TFLOPS number becomes more reliable and worth comparing when the systems have near-identical hardware...

We're not comparing an Nvidia GPU core to an AMD core... they both have GCN cores. They both have the exact same CPU. Those are the two core components of the system.

And yes, we don't know that much about the co-processors, but again, they can't replace conventional CPUs or GPUs... using the Cell as an example is not exactly a positive thing. The SPUs were only good and well utilised under very strict circumstances. When devs started coding on the PS3 they flat out ignored the SPUs. These co-processors seem to have a set function... so they don't give the devs much flexibility either.

But again, I feel like you're trying to argue that these co-processors will have a great impact on increasing overall system performance, whereas I see them more as a way to maximise the efficiency of the system. And when comparing the two systems... the PS4 also has co-processors, or is that a non-issue? I've said this three times now but you never actually address it. Do you not agree, or?



Intel Core i7 3770K [3.5GHz]|MSI Big Bang Z77 Mpower|Corsair Vengeance DDR3-1866 2 x 4GB|MSI GeForce GTX 560 ti Twin Frozr 2|OCZ Vertex 4 128GB|Corsair HX750|Cooler Master CM 690II Advanced|

I hope it's true, mainly because the meltdowns would be the best. That said, I'm thinking that this dude is someone trying to fuck with people for his own lulz.



I LOVE paying for Xbox Live! I also love that my love for it pisses off so many people.

misterymedia said:
JoeTheBro said:




Imagination is a powerful thing. None of that is involved here. You don't have to take my word for it, in fact, if I were you, I probably wouldn't myself!

But everything I've said is true. I actually booked tickets to Eurogamer on the 29th so I might actually get to see some of the reactions as the news spreads live!

Dude, you signed up a throwaway account on a random video game forum just to "leak" this. You're full of ballsicles.



toadslayer72 said:
I hope it's true, mainly because the meltdowns would be the best. That said, I'm thinking that this dude is someone trying to fuck with people for his own lulz.


I'm not trying to 'fuck' with anybody. I've only just recently gotten back into gaming (currently running through Skyrim on my gtx460!) after a long hiatus. I won't be buying either of the next gen consoles immediately as I've got a family vacation to pay for and I'm not paid that well. I genuinely do work in silicon, and I genuinely do have contacts.

I don't really care if there are meltdowns. All I'm saying is that you'll see on the 29th that everything I've said is true. I was told other things too, but they were just rumours that he'd heard in passing, so we dismissed them.



JoeTheBro said:
misterymedia said:
JoeTheBro said:





Dude, you signed up a throw away account on a random video game forum just to "leak" this. You're full of ballsicles.


There's nothing I can say to convince you, but I am most certainly not full of 'ballsicles'. As I said, you'll see on the 29th.