Captain_Yuri said:
IBM Reveals Next-Generation IBM POWER10 Processor

https://newsroom.ibm.com/2020-08-17-IBM-Reveals-Next-Generation-IBM-POWER10-Processor

DigiTimes: Memory Prices to Fall 10% in Q4 of 2020

https://www.techpowerup.com/271067/digitimes-memory-prices-to-fall-10-in-q4-of-2020

NAND prices going down is good for everyone. Hopefully that drop reaches us customers too.

Captain_Yuri said:

Also, Microsoft is going to have the Xbox Series X AMD architecture deep dive at Hot Chips 2020 today at 6:30 or 7 PM PST. It could give some insight into RDNA 2 and AMD's next-gen APUs.

They have already given out some slides on what to expect from the presentation.

https://www.tomshardware.com/news/microsoft-xbox-series-x-architecture-deep-dive

" That GPU section is, not surprisingly, massive. The full chip is 360.4mm square, with 15.3 billion transistors. Doing some quick image analysis, the GPU takes up roughly half of the die (47.5% if you want a more precise estimate)."

"A Zen 2 CPU chiplet measures 74mm square (with four times the L3 cache compared to the Xbox Series X APU), and then tack on a GPU that has more features and shader cores than Navi 10 (RX 5700 XT), which measures 251mm square. That's 325mm square without the enhanced Navi 2x cores and 12 additional CUs."

"While the chip size of the Xbox Series X is in line with previous console hardware (375mm square for the Xbox One in 2013, 367mm square for the Xbox One X in 2017), and transistor counts have more than doubled relative to the Xbox One X (6.6 billion to 15.4 billion), the die cost is higher. Microsoft doesn't specify how much higher, but lists "$" as the cost on the Xbox One and Xbox One S, "$+" for the Xbox One X, and "$++" for the Xbox Series X. As we've noted elsewhere, while TSMC's 7nm lithography is proving potent, the cost per wafer is substantially higher than at 12nm."

(This one is shorter to quote than the other one.)
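
Just to tie the quoted figures together, here is a quick back-of-the-envelope check in Python. Every number comes from the quotes above; the 47.5% GPU share is Tom's Hardware's own rough image-analysis estimate, not an official figure.

# Sanity check of the die-area figures quoted from Tom's Hardware.
xsx_die_mm2 = 360.4        # full Xbox Series X SoC
gpu_fraction = 0.475       # Tom's Hardware's rough estimate of the GPU share
zen2_chiplet_mm2 = 74.0    # desktop Zen 2 CPU chiplet (4x the L3 of the console APU)
navi10_mm2 = 251.0         # Navi 10 (RX 5700 XT) die

gpu_area = xsx_die_mm2 * gpu_fraction
discrete_sum = zen2_chiplet_mm2 + navi10_mm2

print(f"GPU portion of the XSX die: ~{gpu_area:.0f} mm^2")                   # ~171 mm^2
print(f"Zen 2 chiplet + Navi 10 as separate dies: {discrete_sum:.0f} mm^2")  # 325 mm^2

So roughly half of the 360.4 mm^2 die is GPU, and the 325 mm^2 figure in the quote is simply the sum of the two discrete dies before adding the RDNA 2 extras and the 12 additional CUs.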

I'll wait until someone posts a simplified summary so I can understand it. But I've noticed something: in the Turing architecture deep dive they did at AnandTech, they mention that the 2080 Ti is capable of 10 GigaRays/second (page 7). The slide here says that the custom Xbox Series X solution can do up to 380 G/sec ray-box peak or 95 G/sec ray-tri peak. I don't know how comparable the two metrics are, but this does seem to confirm that Turing won't age well for ray tracing tasks.
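
For what it's worth, I don't think the two metrics can be compared directly: NVIDIA's GigaRays/s figure counts complete rays traced through a scene, while the Xbox slide counts individual ray-box and ray-triangle intersection tests, and a single ray needs many of those while it walks the BVH. Here is a rough, purely illustrative sketch in Python; the tests-per-ray counts are made-up assumptions, not figures from either vendor.

# Peak rates from the Hot Chips slide quoted above.
xsx_ray_box_per_s = 380e9   # ray-box intersection tests per second (peak)
xsx_ray_tri_per_s = 95e9    # ray-triangle intersection tests per second (peak)

# ASSUMED per-ray BVH traversal cost (hypothetical, heavily scene-dependent).
box_tests_per_ray = 30
tri_tests_per_ray = 6

implied_rays_box_limited = xsx_ray_box_per_s / box_tests_per_ray
implied_rays_tri_limited = xsx_ray_tri_per_s / tri_tests_per_ray

print(f"Box-test limited:      ~{implied_rays_box_limited / 1e9:.1f} GigaRays/s")  # ~12.7
print(f"Triangle-test limited: ~{implied_rays_tri_limited / 1e9:.1f} GigaRays/s")  # ~15.8

With those assumed counts the implied throughput lands in the same rough ballpark as the 10 GigaRays/s NVIDIA quoted for the 2080 Ti, so any direct comparison between the two slides should be taken with a big grain of salt.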



Please excuse my bad English.

Currently gaming on a PC with an i5-4670K @ stock (for now), 16 GB RAM at 1600 MHz and a GTX 1070

Steam / Live / NNID : jonxiquet    Add me if you want, but I'm a single player gamer.