
Prediction: Xbox Scarlet will use Nvidia GPU

Pemalite said:

Bofferbrauer2 said:
Doubt it

AMD can bring them an all-in-one concept, with motherboard, GPU and CPU all from a single source. Nvidia, however, is stuck with ARM for the CPU, which can't keep up with x86 CPUs (and would seriously bottleneck a 4K console); plus Nvidia is generally more expensive. An Intel/Nvidia or even IBM/Nvidia combination (with the new POWER9 CPUs) is technically possible, but would be much more expensive and/or more complicated to port to.

ARM can best x86 if you make the core wide and fast enough; it's just that no company has seen a compelling business case for it.
Apple has made some very large and fast ARM cores that could probably give Intel's Core and Ryzen a run for their money at the same frequency.

In Dhrystone, which is pure integer, they do. But add in floating-point calculations (Whetstone) and ARM is beaten by leaps and bounds. Considering video games are very reliant on floating point (Whetstone), x86 still easily trumps ARM in gaming workloads.

CoreMark is almost totally Dhrystone, hence why ARM seems to keep up well with x86 in those benchmarks. But anything with heavy floating-point usage will drag ARM scores down compared to x86.
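To make the integer/float split concrete, here's a rough C sketch in the spirit of those benchmarks (the loop bodies and iteration count are made up for illustration; these are not the real Dhrystone or Whetstone kernels):

    /* Toy integer-vs-float timing in the spirit of Dhrystone/Whetstone.
     * Loop bodies and iteration counts are arbitrary illustrations. */
    #include <stdio.h>
    #include <time.h>

    int main(void) {
        const long N = 100000000L;
        clock_t t0;

        volatile unsigned long acc_i = 1;      /* integer-heavy loop */
        t0 = clock();
        for (long i = 1; i <= N; i++)
            acc_i = acc_i * 3 + (unsigned long)i;
        printf("int:   %.2fs\n", (double)(clock() - t0) / CLOCKS_PER_SEC);

        volatile double acc_f = 1.0;           /* float-heavy loop */
        t0 = clock();
        for (long i = 1; i <= N; i++)
            acc_f = acc_f * 1.0000001 + (double)i;
        printf("float: %.2fs\n", (double)(clock() - t0) / CLOCKS_PER_SEC);
        return 0;
    }

If the argument above holds, the float loop falls further behind the integer loop on a mobile ARM core than it does on a desktop x86 chip.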




The past has definitely shown that console makers will never go with high-quality hardware and will first and foremost look at money before features or performance. I wouldn't say it's impossible, though. It would give them a great excuse not to support backwards compatibility.



If you demand respect or gratitude for your volunteer work, you're doing volunteering wrong.

AMD has said that they and Sony are working hand in hand on the PS5... If they didn't mention the next Xbox, I don't think that's an accident.

If Microsoft has decided to have the more powerful hardware, Nvidia is the obvious choice.

I also think Nvidia is the best solution for Microsoft; with WGA and Direct3D it's not a problem.



HoloDust said:
Pemalite said:


nVidia doesn't need one.

Indeed it doesn't - yet I don't see why either MS or Sony would move away from x86 now that they've adopted it, especially since it makes third parties' lives much easier.

Code morphing, binary translation and so on make that mostly redundant anyway... There is a reason why Intel CPUs are able to run ARM-compiled apps.
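For anyone who hasn't seen binary translation before, here's a deliberately tiny C sketch of the core idea (the guest opcodes and registers are hypothetical, and real translators cache translated code blocks rather than re-decoding every instruction):

    /* Toy binary-translation/interpreter loop: decode a made-up guest
     * ISA and execute equivalent host operations. Purely illustrative. */
    #include <stdio.h>

    enum { OP_ADD, OP_MUL, OP_HALT };          /* hypothetical guest opcodes */
    struct ins { int op, dst, src; };

    int main(void) {
        long reg[4] = {2, 3, 4, 5};            /* guest register file */
        struct ins program[] = {               /* tiny guest program */
            {OP_ADD, 0, 1}, {OP_MUL, 0, 2}, {OP_HALT, 0, 0}
        };
        for (struct ins *pc = program; ; pc++) {
            switch (pc->op) {                  /* map guest op -> host op */
            case OP_ADD: reg[pc->dst] += reg[pc->src]; break;
            case OP_MUL: reg[pc->dst] *= reg[pc->src]; break;
            case OP_HALT: printf("r0 = %ld\n", reg[0]); return 0; /* r0 = 20 */
            }
        }
    }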

Besides, for gaming, ARM is giving x86 a run for its money in software support anyway; mobile gaming is absolutely massive, so the support is certainly already there... Unlike with PowerPC or MIPS, which were pretty alien ISAs for developing games on at one point.

JRPGfan said:
The reason they use a single-chip solution (an APU) is because it's cheaper, and has a small power-saving benefit as well.

It is only cheaper up to a certain point; once a chip starts to get extremely large, yields tend to decrease... And then having several smaller chips actually becomes cheaper.
There is a reason why AMD has multiple CPU dies on a single package with Threadripper rather than a single monstrosity.
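The yield argument is easy to see with the simple Poisson yield model Y = e^(-area x defect density); the defect density and die sizes below are made-up figures, not foundry data:

    /* Poisson yield model: the good-die fraction falls exponentially
     * with die area. All numbers are illustrative assumptions. */
    #include <stdio.h>
    #include <math.h>

    int main(void) {
        double d0    = 0.2;          /* assumed defects per cm^2 */
        double big   = 6.0;          /* one large monolithic die, cm^2 */
        double small = big / 4.0;    /* four-chiplet alternative */

        printf("monolithic yield:  %.0f%%\n", exp(-big * d0) * 100);   /* ~30% */
        printf("per-chiplet yield: %.0f%%\n", exp(-small * d0) * 100); /* ~74% */
        return 0;
    }

Under this model a failed chiplet only throws away a quarter of the silicon (and partially defective dies can be binned into cut-down parts), which is exactly the Threadripper play.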

JRPGfan said:

Going with an AMD/Intel CPU plus an Nvidia GPU on a second chip will be more expensive to produce.

Well, Intel and nVidia tend to prefer larger, more lucrative profit margins than AMD. So that's a thing.

JRPGfan said:

Also, MS put all that work into making their games backwards compatible using an AMD x86 CPU... they are not going to stop using x86 now.
So they would have to get an x86 Intel or AMD CPU (a Tegra ARM CPU just isn't enough, and would break their backwards compatibility).

I think it highly unlikely they don't use AMD next gen as well, for both CPU and GPU.

You falsely assume that backwards compatibility is tied to the CPU ISA.
You need to study up on how Microsoft is actually achieving backwards compatibility; then you might realize it's a brilliant approach that is extremely flexible regardless of hardware.

Bofferbrauer2 said:
Pemalite said:


ARM can best x86 if you make the core wide and fast enough; it's just that no company has seen a compelling business case for it.
Apple has made some very large and fast ARM cores that could probably give Intel's Core and Ryzen a run for their money at the same frequency.

In Dhrystone, which is pure integer, they do. But add in floating-point calculations (Whetstone) and ARM is beaten by leaps and bounds. Considering video games are very reliant on floating point (Whetstone), x86 still easily trumps ARM in gaming workloads.

CoreMark is almost totally Dhrystone, hence why ARM seems to keep up well with x86 in those benchmarks. But anything with heavy floating-point usage will drag ARM scores down compared to x86.

Also depends on the precision of the integers and floats.

With that in mind... we need to remember that Apple's ARM chips are still pretty potent, all things considered, especially when we take into account the TDP these chips run at.
Remove that restriction... and put them on a process with transistors optimized for frequency, and it's an entirely different ball game.

Still, I wouldn't underestimate ARM's capability to scale upwards; we just haven't seen a company take the ISA and drive it into the super high end yet.

Akeos said:
If Microsoft has decided to have the more powerful hardware, Nvidia is the obvious choice.

How do you know next-gen nVidia GPU hardware is going to be more powerful? Have you seen legitimate benchmarks?



--::{PC Gaming Master Race}::--

Pemalite said: 
Bofferbrauer2 said:

In Dhrystone, which is pure integer, they do. But add in floating-point calculations (Whetstone) and ARM is beaten by leaps and bounds. Considering video games are very reliant on floating point (Whetstone), x86 still easily trumps ARM in gaming workloads.

CoreMark is almost totally Dhrystone, hence why ARM seems to keep up well with x86 in those benchmarks. But anything with heavy floating-point usage will drag ARM scores down compared to x86.

Also depends on the precision of the integers and floats.

With that in mind... we need to remember that Apple's ARM chips are still pretty potent, all things considered, especially when we take into account the TDP these chips run at.
Remove that restriction... and put them on a process with transistors optimized for frequency, and it's an entirely different ball game.

Still, I wouldn't underestimate ARM's capability to scale upwards; we just haven't seen a company take the ISA and drive it into the super high end yet.

It doesn't matter much whether it's FP16, 32, or 64, as ARM is built with integer in mind, hence why all the official benchmarks were Dhrystone. FP was thought to be the (almost exclusive) domain of coprocessors until the Cortex line, and still isn't used as prominently, as it needs much more complex chips and instructions (which in turn would also strongly increase power consumption).

I agree, on a per-watt basis ARM should trump x86. However, the architecture runs into a TDP wall around 2.5 GHz, meaning that at 3 GHz or more an x86 chip would probably consume less than an ARM processor. To get past 3 GHz they would need to lengthen the pipeline, which risks costing some IPC if it has to be lengthened too much (essentially what happened with the Pentium 4).
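The shape of that wall falls out of the standard dynamic-power relation, P = C * V^2 * f: higher clocks usually demand higher voltage, and power grows with the square of voltage. A quick C sketch with made-up capacitance and voltage figures:

    /* Dynamic power P = C * V^2 * f. The capacitance and the voltage
     * assumed necessary for each clock are illustrative, not measured. */
    #include <stdio.h>

    int main(void) {
        double C = 1.0e-9;                          /* switched capacitance, F */
        double f[] = {2.0e9, 2.5e9, 3.0e9, 3.5e9};  /* clock, Hz */
        double v[] = {0.80,  0.90,  1.05,  1.20};   /* assumed voltage, V */

        for (int i = 0; i < 4; i++)
            printf("%.1f GHz @ %.2f V -> %.2f W\n",
                   f[i] / 1e9, v[i], C * v[i] * v[i] * f[i]);
        return 0;
    }

With these invented numbers, a 75% clock increase costs roughly 4x the power, which is the kind of wall being described.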

I wouldn't say it's impossible to make a high-end ARM chip, but it would need some hefty changes; changes which would kill its edge in the smartphone and tablet markets.

https://www.techspot.com/review/1599-windows-on-arm-performance/page2.html

While in those tests ARM had to emulate x86, costing some performance, it can still only compete with the Atom N3450, which is also clocked lower than the Snapdragon 835 (Atom: 1.1 GHz base, 2.2 GHz turbo; Snapdragon: 1.9 GHz little cluster, 2.45 GHz big cluster, and both can potentially work together for 8 threads total against only 4 in the Atom), and gets trounced by a Core m3-6Y30 (900 MHz base, 2.2 GHz max single-core turbo, 3.8 W TDP), even in the native, non-emulated tests.

In other words, ARM still has a long way to go until it can keep up with x86 in performance. But let's see how the new Cortex-A75 and especially A76 cores will perform; the Kryo 280 cores in the Snapdragon 835 are still based on the by-now slightly outdated Cortex-A73.



Bofferbrauer2 said:

It doesn't matter much whether it's FP16, 32, or 64, as ARM is built with integer in mind, hence why all the official benchmarks were Dhrystone. FP was thought to be the (almost exclusive) domain of coprocessors until the Cortex line, and still isn't used as prominently, as it needs much more complex chips and instructions (which in turn would also strongly increase power consumption).


CPUs in general are built primarily with integers at the forefront.
That goes back even to the old Cyrix M2 chips... or to something more recent like AMD's Bulldozer architecture, where AMD shared one floating-point unit between two cores but gave each core its own integer cluster.

Bofferbrauer2 said:

I agree, on a per-watt basis ARM should trump x86. However, the architecture runs into a TDP wall around 2.5 GHz, meaning that at 3 GHz or more an x86 chip would probably consume less than an ARM processor.

The TDP wall is also a manufacturing limitation.
ARM chip manufacturers tend to opt for transistors with better power characteristics at the expense of clock speed, which is fair enough.

Bofferbrauer2 said:

To get past 3 GHz they would need to lengthen the pipeline, which risks costing some IPC if it has to be lengthened too much (essentially what happened with the Pentium 4).


You don't need to lengthen the pipeline.
NetBurst is probably a bad example considering what we have now anyway.
Willamette had a 20-stage pipeline and never went past 2 GHz; Coffee Lake has a 19-stage pipeline and can clock to 5 GHz.

Prescott ended up lengthening the pipeline to 31 stages, yet on the IPC front it was just as good as or better than the Willamette Pentium 4. Why is that? Because pipeline length isn't everything.
When an instruction travelling down the pipeline stalls, the turnaround is much quicker the shorter the pipeline is; a 10-stage pipeline should, in theory, recover 3x faster than a 30-stage pipeline when there is a stall and data has to be fetched again.
But it's never that simple.

If that 10-stage pipeline doesn't have the data in cache, then it has to spend an inordinate number of cycles fetching it from RAM; it doesn't matter how many stages you have, it's going to be terrible.
So large and fast caches are vital.
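You can see the cache effect on any desktop with a sketch like this (buffer size and timings are arbitrary and machine-dependent): walking the same buffer sequentially is prefetch-friendly, while a dependent random pointer chase pays something close to full memory latency on every step.

    /* Sequential walk vs dependent random pointer chase over one buffer. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N ((size_t)1 << 23)                /* 8M elements, ~64 MB */

    int main(void) {
        size_t *next = malloc(N * sizeof *next);
        if (!next) return 1;
        for (size_t i = 0; i < N; i++) next[i] = i;

        /* Sattolo shuffle: guarantees one big cycle for the chase. */
        srand(1);
        for (size_t i = N - 1; i > 0; i--) {
            size_t j = (((size_t)rand() << 16) | (size_t)rand()) % i;
            size_t t = next[i]; next[i] = next[j]; next[j] = t;
        }

        volatile size_t sink = 0;
        clock_t t0 = clock();
        for (size_t i = 0; i < N; i++) sink += next[i];   /* sequential */
        clock_t t1 = clock();
        size_t p = 0;
        for (size_t i = 0; i < N; i++) p = next[p];       /* pointer chase */
        clock_t t2 = clock();
        sink += p;

        printf("sequential: %.2fs\n", (double)(t1 - t0) / CLOCKS_PER_SEC);
        printf("chase:      %.2fs\n", (double)(t2 - t1) / CLOCKS_PER_SEC);
        free(next);
        return 0;
    }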

Another aspect is, of course, branch prediction, where the chip guesses which path the code will take ahead of time and gets the data ready; how effective this is depends on how good your predictor is, and Intel tends to have the advantage on this front compared to most others in the industry.
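The classic way to demonstrate the predictor (numbers vary by CPU, and a clever compiler may convert the branch to a conditional move, so treat this as a sketch): run the identical branch over shuffled and then sorted data.

    /* Same branch, same values, different order: sorted input makes the
     * branch predictable, shuffled input defeats the predictor. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (1 << 22)

    static int cmp(const void *a, const void *b) {
        return (*(const int *)a > *(const int *)b) -
               (*(const int *)a < *(const int *)b);
    }

    static void run(const int *v, const char *label) {
        volatile long sum = 0;
        clock_t t0 = clock();
        for (int pass = 0; pass < 20; pass++)
            for (int i = 0; i < N; i++)
                if (v[i] >= 128) sum += v[i];   /* the branch under test */
        printf("%s: %.2fs\n", label, (double)(clock() - t0) / CLOCKS_PER_SEC);
    }

    int main(void) {
        int *v = malloc(N * sizeof *v);
        if (!v) return 1;
        srand(1);
        for (int i = 0; i < N; i++) v[i] = rand() % 256;

        run(v, "shuffled");              /* ~50% mispredict rate */
        qsort(v, N, sizeof *v, cmp);
        run(v, "sorted  ");              /* near-perfect prediction */
        free(v);
        return 0;
    }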

Same with Hyper-Threading: not every stage of a pipeline is being utilized at all times, so firing up a second thread that can use those idle stages helps bolster performance.

...And so much more.
In fact, a commanding share of die space on a chip isn't actually dedicated to processing; it's dedicated to keeping the processor fed.

Bofferbrauer2 said:

While in those tests ARM had to emulate x86, costing some performance, it can still only compete with the Atom N3450, which is also clocked lower than the Snapdragon 835 (Atom: 1.1 GHz base, 2.2 GHz turbo; Snapdragon: 1.9 GHz little cluster, 2.45 GHz big cluster, and both can potentially work together for 8 threads total against only 4 in the Atom), and gets trounced by a Core m3-6Y30 (900 MHz base, 2.2 GHz max single-core turbo, 3.8 W TDP), even in the native, non-emulated tests.

In other words, ARM still has a long way to go until it can keep up with x86 in performance. But let's see how the new Cortex-A75 and especially A76 cores will perform; the Kryo 280 cores in the Snapdragon 835 are still based on the by-now slightly outdated Cortex-A73.

Of course there is going to be a performance penalty.
But it is what it is.

Project Denver was actually going to be both an x86 and an ARM chip, but the x86 licensing couldn't be obtained. The way it worked was that the chip would reinterpret instructions from either x86 or ARM and translate them into its own internal instructions for processing.

Still... the Cortex-A75/A73 etc. aren't the fastest ARM cores anyway; Apple actually beats them out.



--::{PC Gaming Master Race}::--

Sony wins by default due to pricing; AMD is way more affordable. Unless MS eats the cost of going with Intel, or makes a weaker console to at least reach price parity with an AMD-based PS5.



Pemalite said:

1) Er, consoles use already-established hardware design libraries as a basis for their SoCs.
It's not like these chips are being built from the ground up for these machines.


2) AMD can also do Ray Tracing.
Games today are leveraging Ray Tracing.
Games from the Xbox 360/PlayStation 3 era were leveraging Ray Tracing.
100% Ray Tracing will not be feasible even next gen.

3) You don't need the same CPU and GPU architecture to retain backwards compatibility.
The PC alone has been doing it for a quarter of a century.

On the CPU side you have binary translation; on the GPU side you can abstract.

4) nVidia isn't going to happen.
Why? Price.

5) The rumor is flat out wrong.

Navi is Graphics Core Next-based, an architecture that has been on the market since 2011; this is just an iterative update.
And prior to 2011, Graphics Core Next would have spent years in development, so we could be looking at the 6th-gen console era when AMD began working on Graphics Core Next. But somehow it was designed for the PlayStation 5? Not buying it.



1) Not really; every manufacturer orders chips with a specific number of components (shader units, ROPs, memory, etc.) for their needs. For example, you can't find the PS4 and Xbox One APUs anywhere else; they are only used in those consoles.

2) Of course. But Nvidia claims that the RTX 2080 will be able to do real-time ray tracing right now, when it releases. We'll see more about it during the Gamescom reveal. And it is rumoured to be a 16 TF card. I don't see why it would not be possible to have this in consoles, at least by the 9.5-gen upgrades.

3) Agreed. That's why I mentioned that I believe it wouldn't be a problem for Microsoft.

4) It happened with the Xbox, PS3 and Switch; I don't see why it cannot happen with the next Xbox. The problem with Nvidia is that they overprice their GPUs using their dominance of the market; it's not that they cost a lot more to produce than AMD's. It is possible that Microsoft will be able to agree on the right price with them.

5) But with every GCN generation, AMD adds new blocks to it. For example, shader engines weren't a thing before GCN 2. The point here is that they are working with Sony on the next iteration of GCN specifically for the needs of the PS5.

Azzanation said:

Well, if Xbox has their budget streaming box, then their premium model can have whatever they want, I guess. It's been rumoured that the disc console won't be cheap, and with the power of the cloud, going big will only make it even better. I am down for an Nvidia-driven console. They make the best GPUs. Why not?

Also, considering PC gaming is huge with Nvidia, it makes more sense, since Xbox seems to want to cater to that audience more, meaning more support for PC ports etc. Not saying AMD isn't big on PC or that porting would be an issue either way they go.

You forget that the streaming box won't be the main SKU, because it can't be due to internet connectivity. Betting on it as the main SKU would mean losing the gen. So they can't go too hard with the specs of the traditional console; it still has to be affordable, somewhere around $400-500, to succeed.

JRPGfan said:
The reason they use a single-chip solution (an APU) is because it's cheaper, and has a small power-saving benefit as well.

Going with an AMD/Intel CPU plus an Nvidia GPU on a second chip will be more expensive to produce.
Also, MS put all that work into making their games backwards compatible using an AMD x86 CPU... they are not going to stop using x86 now.
So they would have to get an x86 Intel or AMD CPU (a Tegra ARM CPU just isn't enough, and would break their backwards compatibility).

I think it highly unlikely they don't use AMD next gen as well, for both CPU and GPU.

There also was a rumour before E3 that Sony might use a dedicated GPU in the PS5. So it is possible that both will get rid of APUs next gen.



 

I would expect Microsoft to go with something like AMD or Intel. They are going to want their console architecture to be as close to a PC as possible.



loy310 said:
Sony wins by default due to pricing; AMD is way more affordable. Unless MS eats the cost of going with Intel, or makes a weaker console to at least reach price parity with an AMD-based PS5.

And that's why there's a rumoured streaming box.

 

Azzanation said:

Well, if Xbox has their budget streaming box, then their premium model can have whatever they want, I guess. It's been rumoured that the disc console won't be cheap, and with the power of the cloud, going big will only make it even better. I am down for an Nvidia-driven console. They make the best GPUs. Why not?

Also, considering PC gaming is huge with Nvidia, it makes more sense, since Xbox seems to want to cater to that audience more, meaning more support for PC ports etc. Not saying AMD isn't big on PC or that porting would be an issue either way they go.

You forget that the streaming box won't be the main SKU, because it can't be due to internet connectivity. Betting on it as the main SKU would mean losing the gen. So they can't go too hard with the specs of the traditional console; it still has to be affordable, somewhere around $400-500, to succeed.

Define losing?

Xbox has always been designed around North America. If they can make a streaming box that's dirt cheap, then it will take the NA market by storm, and that's the biggest gaming market in the world.

There will be a disc system, but at a premium price; it won't be the major focus point.