
[UPDATE] Xbox One Could Possibly Feature a Powerful Discrete GPU + APU rumor - New Sources and Info!

petalpusher said:


There's no eDRAM in the Xbox One, and that's a deliberate choice, compared to the Wii U for example (or the X360's eDRAM daughter die). Having eSRAM means you can get the better process for the whole die (and let's keep in mind the goal is to have only one die).

Every other memory cell in the CPU and GPU is SRAM. "eSRAM" is just a name to indicate it's an additional memory chunk (the embedded 32 MB), but it's exactly the same type of memory cell you'll find elsewhere on the die. That's why, in fact, they add up all the SRAM chunks together to get 47 MB: because it's all the same kind.

You should check AMD's white papers. For example, an HD 7870 features about 7 MB of SRAM on its own; an HD 7970, 12 MB; and so on.

http://www.amd.com/us/Documents/GCN_Architecture_whitepaper.pdf

"A high-end GPU like the AMD Radeon™ HD 7970 has over 12MB of SRAM and register files spread throughout the CUs and caches"


The die IS 363 mm2. Contrary to what one delusional person in the world is seeing, it's pretty simple to double-check it with the pictures, using the right reference measurement. The Micron DDR3 chips are 14x9 mm (126 mm2), and you can fit almost exactly three of them on the apparent die (3x126 = 378), so it's a little bit smaller at 363 mm2. A nice ballpark figure.

 


Anyone used to evaluating die size from pictures, who knows how to do it right, can make the same measurement. There's no controversy here, unless you want to create one badly enough to defy standard logic: there's no 510 mm2 die, not even close, nor some stacked-up dGPU hidden underneath.


You're still combining chunks of memory, in this case the eSRAM with the L1/L2 etc. caches.
It doesn't work like that.
In a best-case scenario, the eSRAM will be treated as a sort of L4 cache, meaning some data may end up being duplicated in both the eSRAM and the Lx caches.

The way you're thinking would be akin to me combining my system RAM with my GPU RAM and calling them the same thing. (The GPU can read/write to system memory, after all, with some data sets staged there ready to send to the GDDR5.)
However, they have very different levels of bandwidth, latency and associativity; it's really not comparable.



--::{PC Gaming Master Race}::--

drkohler said:
Adinnieken said:

There is 175 mm^2 of space for something in that 363 mm^2 SoC. That's a lot of room for nothing.

OK... after some searching, I found where you pulled that number: from this MisterX guy's blog (partially; the eye and brain strain gets to be too much after a while).

Firstly, I'd like to note that this guy is in serious need of help, and I don't say that as a joke or a derogatory comment: this guy needs psychological help. Then, reading through his various posts, this is what is in his mind concerning his magical Xbox One:

He started off by proclaiming there is a super dGPU inside the APU (taking the leaked "What should be the next Xbox" document from early this year), using weird logic and simply incorrect measurements of the die size. At some point he realised he was wrong, so he tried the 3D-stacking idea, which didn't really get him anywhere. His current theory is surprising, though: the Xbox One presented at the Hot Chips talk and everywhere else is just a cable box; it is not the "real Xbox One". This real box has apparently been withheld from 80% of all developers, and will be a supercharged APU, apparently with all his wet dreams incorporated, using the occasional voodoo technology he isn't able to unveil to us mere mortals. (There are about 200 logical fallacies and a few Nobel-prize-winning discoveries in his posts which I left out for brevity, but you should get the idea.)

This is approximately his current mindset. I'm sure he will add other features to his blog over the next few days, until 9/29 when the evil Sony empire will collapse.

No, not my source, but the source material may or may not have been his blog.

Your presumption is that the rumor started with MisterX.  It didn't.  The rumor of a discrete GPU began in 2009. 

It was later given legs by VG247 in a rumor that included features that were verified as true when the Xbox One was revealed. 

So, whether MisterX perpetuates the rumor or not, the rumor has existed since 2009 and was perpetuated by other sources before him. Now, if VG247's source was capable of providing solid enough information that the majority of the rumor they presented about the Xbox One has so far proven true, with the only feature mentioned that has been neither verified nor discredited being a dGPU, then it stands to reason that it is still entirely possible.

The wafer-to-wafer design, what you call 3D stacking, is what Microsoft revealed in the Hot Chips discussion. It isn't something he came up with. The Xbox One's SoC is a wafer-to-wafer design. You can discredit his claim all you want, but the fact is the SoC is stacked. According to Microsoft it's eSRAM on top of the APU on top of the GPU. Any number of sites have shown the graphic from the slide Microsoft showed at the presentation.

There are a number of features that Microsoft hadn't revealed until recently, the CPU speed and the 8 GB of NAND memory for instance. If it happens to be true, it happens to be true. I don't personally know what a second GPU would actually be able to do, other than possibly rendering one image at the same time as another is being shown. However, it is entirely possible that a dGPU is one of the move engines and that distinction just hasn't been made yet. We'll know whether this rumor had any legs on the 30th. And if it doesn't, so what?

What we've wasted all this time on is debating whether it is true or not, and what I personally would like to understand is: if it is true, what would it mean? The Xbox One still has only 8 GB of memory, five of which are available to games. I don't think Microsoft is incorporating a dGPU for TV purposes. The only purpose for two GPUs that I know of is driving multiple monitors. How would dual GPUs work with one output?



Pemalite said:
petalpusher said:


There's no eDRAM in the Xbox One, and that's a deliberate choice, compared to the Wii U for example (or the X360's eDRAM daughter die). Having eSRAM means you can get the better process for the whole die (and let's keep in mind the goal is to have only one die).

Every other memory cell in the CPU and GPU is SRAM. "eSRAM" is just a name to indicate it's an additional memory chunk (the embedded 32 MB), but it's exactly the same type of memory cell you'll find elsewhere on the die. That's why, in fact, they add up all the SRAM chunks together to get 47 MB: because it's all the same kind.

You should check AMD's white papers. For example, an HD 7870 features about 7 MB of SRAM on its own; an HD 7970, 12 MB; and so on.

http://www.amd.com/us/Documents/GCN_Architecture_whitepaper.pdf

"A high-end GPU like the AMD Radeon™ HD 7970 has over 12MB of SRAM and register files spread throughout the CUs and caches"


The die IS 363 mm2. Contrary to what one delusional person in the world is seeing, it's pretty simple to double-check it with the pictures, using the right reference measurement. The Micron DDR3 chips are 14x9 mm (126 mm2), and you can fit almost exactly three of them on the apparent die (3x126 = 378), so it's a little bit smaller at 363 mm2. A nice ballpark figure.

 


Anyone used to evaluating die size from pictures, who knows how to do it right, can make the same measurement. There's no controversy here, unless you want to create one badly enough to defy standard logic: there's no 510 mm2 die, not even close, nor some stacked-up dGPU hidden underneath.


You're still combining chunks of memory, in this case the eSRAM with the L1/L2 etc. caches.
It doesn't work like that.
In a best-case scenario, the eSRAM will be treated as a sort of L4 cache, meaning some data may end up being duplicated in both the eSRAM and the Lx caches.

The way you're thinking would be akin to me combining my system RAM with my GPU RAM and calling them the same thing. (The GPU can read/write to system memory, after all, with some data sets staged there ready to send to the GDDR5.)
However, they have very different levels of bandwidth, latency and associativity; it's really not comparable.

 

You are off topic again: it's the same type of memory cell as far as the die is concerned. That's why they add all the chunks together to give a total amount of e/SRAM, 47 MB. We don't care about the individual bandwidths here, that's not the subject (and bandwidth is defined by the bus which connects each chunk, by the way).

47 MB of SRAM is over 2.2 billion transistors, which leaves "only" 2.8 billion for the CPU, GPU, all the dedicated processors, and all the logic and buses between them.

It leaves about 200 mm2 of die area for all this. We know a Bonaire GPU is 166 mm2; minus 2 CUs you are at about 150 mm2, plus 2x Jaguar at 25 mm2 each... minus their own SRAM. All the logic and the internal SoC bus system take quite some space, as do the dedicated hardware units, audio, display planes, etc.: very standard pieces that you have to have in any system.

Everything falls into place with the known system specs as expected, and you get a nice "System on a Chip" die.
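As a rough sanity check of the figures in this post, here is a minimal back-of-envelope sketch (assuming standard 6-transistor SRAM cells, the roughly 5 billion total transistors Microsoft has quoted for the SoC, and the 14x9 mm Micron DDR3 package as the photo reference; exact cell overhead and array efficiency aren't public):

```python
# Back-of-envelope check, not official figures. Assumptions: 6T SRAM cells,
# ~5 billion total transistors for the SoC, and 14x9 mm DDR3 packages used
# as the visual yardstick in the board photos.

ddr3_package_mm2 = 14 * 9                    # ~126 mm^2 reference rectangle
die_area_estimate = 3 * ddr3_package_mm2     # ~3 packages cover the apparent die
print(f"Die area from photo: ~{die_area_estimate} mm^2 (stated: 363 mm^2)")

total_transistors = 5_000_000_000            # publicly quoted SoC total
sram_bits = 47 * 1_000_000 * 8               # ~47 MB of SRAM across the die
sram_transistors = sram_bits * 6             # 6 transistors per bit, no overhead
print(f"SRAM transistors: ~{sram_transistors / 1e9:.1f} billion")
print(f"Remaining for CPU/GPU/logic: ~{(total_transistors - sram_transistors) / 1e9:.1f} billion")
```

That lands at roughly 378 mm2 from the photo, ~2.3 billion transistors for the SRAM, and ~2.7 billion for everything else, in the same ballpark as the 2.2/2.8 split quoted above.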



drkohler said:

When did the SoC increase to 393mm^2? Where did you get the 175mm^2 from? You are obviously still living in the land of delusion, so here are a few minor points. This mistertroll fabulates about a dgpu with around 2000-2300 units (of whatever), made on 20nm, with a w2w connection to the SoC.

a) The only company that gets anything as complex as a cpu out near 20nm is Intel, with its new 22nm processor lines. Yields are not great, so there are limited numbers of these things. Process-technology-wise, Intel is ahead of TSMC by about 1-1.5 years and ahead of AMD by about 2-2.5 years. Tell us, who do you think makes these miracle dgpus (it certainly isn't Intel)?

b) 20nm chips use a different technology than 28nm chips. Tell us, which company is capable of making a chip that incorporates two different process technologies into one die?

c) Tell us, where is the memory for this magical dgpu? You certainly need fast memory for this super dgpu.

d) Why would there be a primary gpu at all? This magical dgpu is faster than anything on the PC market, so why waste money on a "measly gpu" at all?

e) How would you feel as a developer if you had been developing on developer units for years, only to be told one month before release, "It was all a joke, we have a completely different gpu in our box"?

You're discussing MisterX's rumor.  I'm not specifically talking about his rumor.  The rumor of a discrete GPU (dGPU) has existed for the past three years.  MisterX isn't the only one to propose it. 

Logically, one would assume it's a 28nm chip. I believe 23 or 22nm is the next process node being attempted anyway.

I don't know. I don't know what a dGPU would do here. I suppose what might be possible is to have both GPUs working on the same rendering in order to display an image more quickly; possibly it could handle more complex images, or it could render two images.

I doubt it would be more powerful than the included GPU. I'm not sure it would need to be, if you think about it, nor would you necessarily want a more powerful one. If the dGPU had the same capabilities as the onboard GPU, it would still be capable of assisting the primary GPU in rendering. And equally, it would be capable of rendering two different images, though I'm not sure Microsoft has suggested this capability.

As for memory, I don't know. You could provide a pipe through the silicon to the eSRAM, the MMU, and the north bridge. Or it's possible they could embed the DRAM into the silicon. If there were a pipe between the existing memory and the dGPU, it would provide coherency, making it just as responsive.

The idea isn't all that foreign in computers today. Motherboards, depending on the design, can have upwards of 11 layers, each layer having unique traces for circuits, with "pipes" (vias) interconnecting the layers, the majority of which we never see. However, there are areas where the layers come together that you can see; the grounding points (screw holes) are a good example. There are also test points, which can pipe between layers to allow for testing of a buried circuit.



If this author wants to be taken seriously he should find a proof reader. I refuse to believe a leaker would leak to someone with such poor grammar. It was so bad i couldnt make it past the first paragraph



Max King of the Wild said:
It was so bad i couldnt make it past the first paragraph

Speaking of.

"I" not i and "couldn't" not couldnt, also sentences end in punctuation.  For example, "It was so bad I couldn't make it past the first paragraph."

Should we apply your rationale for not taking the rumor seriously to your own comments from now on, then?



Wow. I just read the whole thread. There is much debate here.

Although I think it is very unlikely, I must admit that this is possible.

And before some people attack me by saying that I am a fanboy who believes in this, or that I will be so disappointed at the end of the month, I'll just say I'm here to watch, and the Watch takes no part in the wars of the Seven (er, two) Kingdoms.



Adinnieken said:
Max King of the Wild said:
It was so bad i couldnt make it past the first paragraph

Speaking of.

"I" not i and "couldn't" not couldnt, also sentences end in punctuation.  For example, "It was so bad I couldn't make it past the first paragraph."

Should we apply your rationale for not taking the rumor seriously to your own comments from now on, then?

Am I authoring an article trying to convince people I have anonymous sources leaking inside information to me? No, I'm not. Your point is moot.



Adinnieken said:

Logically, one would assume it's a 28nm chip. I believe 23 or 22nm is the next process node being attempted anyway.

I don't know. I don't know what a dGPU would do here. I suppose what might be possible is to have both GPUs working on the same rendering in order to display an image more quickly; possibly it could handle more complex images, or it could render two images.

There are many misconceptions floating around in your head that mess up what you think the XBox apu has. So for you and all interested readers, let me clarify and correct a few of them (this is going to be a lengthy read):

1. Is there a dgpu in the XBox SoC?

First we have to explain what "dgpu" actually means. It is short for "Discrete Graphics Processing Unit". The key word here is "discrete", which in plain English means that it is a gpu on its own. In a PC, it is the chip on your graphics card stuck in the PCIe slot. There cannot be a dgpu in a SoC; we would call that "two gpus in a SoC" (see 4. for more). So there is NO dgpu in the XBox One; at best there would be two gpus in the SoC.

2. Where does the dgpu rumour come from?

This actually came from early speculation on the PS4 architecture. It was speculated that the PS4 would have a SoC with a 7760-type gpu and an additional 7760-type discrete gpu (with some unspecified connection to the SoC). These two gpus could work together in a quasi-CrossFire mode. There are tests floating around with such systems, and the results were unconvincing performance-wise, to put it mildly. This is the reason the PS4 has only the gpu in the SoC (apart from price considerations).

The second point is the photo of the SoC area of the XBox that shows cpu and gpu labels between the apu socket and the five power FETs. MisterX, clueless as ever, ruled that this is a sign of the apu getting its power from the "cpu label area" and the dgpu getting its power from the "gpu label area". In reality, this is just the conventional power supply circuit: two phases feed the cpu and three phases feed the gpu (exactly as the power requirements very obviously demand for the specified cpu and gpu; there is another, smaller "feeder" component visible in the photo that is not identifiable, but we must also feed the esram...).

3. Does the XBox One apu use 3D-stacking (or whatever you want to call it)?

This rumour started from the picture you see in the first post, which shows three coloured squares labelled esram, cpu and gpu, graphically placed like stacked planes. Note to all: this is a graphic design choice made by the person who designs nicely readable slides; it has NOTHING to do with engineering reality. Let's do an approximate estimation of die sizes (some numbers are more or less correct, some are estimates using equivalent circuitry):

a) cpu: 2 Jaguar modules: 55mm^2

b) gpu: Bonaire-equivalent chip (minus 2 CUs, minus video circuitry, plus more address/data buses): 160mm^2

c) 32 MByte esram: 100mm^2 (estimated from known Intel/IBM cell sizes, +/- some fudging; we don't really know who designed it)

d) Audio circuitry, 4 dma controllers: 30mm^2 (from analogous parts offering similar functionality)

e) Unknown stuff: 30mm^2

That gives us roughly 375mm^2 of die area, which is perfectly in line with the 363mm^2 specified by MS. So this estimate fits nicely onto one layer; there is no need for stacking at all (particularly since the price of your SoC would explode with stacking).
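For readers who want to check the arithmetic, here is a minimal sketch that just sums the per-block estimates above (the individual figures are the rough estimates from this post, not measured or official values):

```python
# Sum of the per-block die area estimates listed above (all values in mm^2).
# These are rough estimates from this post, not measurements.
blocks = {
    "cpu (2 Jaguar modules)": 55,
    "gpu (Bonaire-equivalent, minus 2 CUs)": 160,
    "32 MB esram": 100,
    "audio + 4 dma controllers": 30,
    "unknown / misc": 30,
}
total = sum(blocks.values())
print(f"Estimated total: {total} mm^2 (MS-specified die: 363 mm^2)")  # ~375 mm^2
```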

4. Who would put two gpus into a SoC?

Short answer: only a braindead engineer. There is not a single positive reason to put two gpus into one SoC, compared to putting in a single, bigger gpu that does the job, particularly when that gpu can do gpgpu. Having two distinct gpus in a SoC would be an engineering nightmare, as every problem you have with one gpu connecting to the outside world would simply be doubled. No need for further discussion; simply a nightmarish thought for a console.

5. Would MS do 4. , anyway, just to beat Sony?

Back in February, when Sony basically opened the seals on the PS4's innards (a big mistake in my opinion; they should have waited longer), MS realised they were badly beaten in the gpu department. Could MS have changed the SoC design at that late stage? Did they have better/other designs in the back-up drawer? It is possible, although it would cost a f*ckton of money to elbow themselves into the chip factories with an unannounced design change on short notice. They spent 100 million on a piece of plastic, so who knows what happened after February?



That's a good summary.