
Forums - Sony Discussion - PCWatch: PS4 Technical Overview @ GDC 2013


I was banned yesterday and couldn't show you guys all the new stuff about the PS4 architecture I found on the web (thanks to Beyond3D for the translations and some explanations).

THE SOURCE

http://pc.watch.impress.co.jp/docs/column/kaigai/20130329_593760.html

ARCHITECTURE OVERVIEW

Nothing new, but there are some open questions... the CPU clock (>1.8GHz?), the ROP count (16 or 32?)... the rest of the picture is already known to everybody.

CPU

Nothing new.

The 4-core Jaguar module (the PS4 uses an 8-core version).

And the Jaguar overview.

 

MEMORY

However, in the PS4's case, the number of DRAM chips can later be cut in half, to eight. Because GDDR5 supports both x32 and x16 modes, the chip count can be reduced while keeping the same interface width. On the APU's 256-bit interface, the current configuration connects sixteen 4-Gbit GDDR5 chips in x16 mode, making 8GB from 16 chips in total. When 8-Gbit GDDR5 parts come out, this can be switched to a configuration of eight chips in x32 mode. Incidentally, had the PS3's XDR DRAM been carried forward, switching the main memory to faster XDR2 DRAM could have reduced the chip count to two.

Yep, there is a discussion about whether Sony will use the 4Gb GDDR5 modules (512MB each), in mass production since early 2013, or the new 8Gb GDDR5 modules (1GB each), not yet in production... I think the first PS4 will use the 4Gb modules and later the 8Gb modules to reduce costs... the difference is 16 modules for 8GB versus 8 modules for 8GB (a lot cheaper).
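The chip-count arithmetic from the article can be sketched in a few lines. This is just an illustration of the math described above (256-bit bus, x16 vs x32 chip modes, 4Gb vs 8Gb densities); the function names are mine, not anything official.

```python
# Sketch of the GDDR5 configuration arithmetic described above.
# A 256-bit memory interface can be filled either by sixteen x16 chips
# or by eight x32 chips; total capacity = chip count * density per chip.

BUS_WIDTH_BITS = 256

def chips_needed(chip_io_width_bits):
    """Number of GDDR5 chips required to fill the 256-bit bus."""
    return BUS_WIDTH_BITS // chip_io_width_bits

def total_capacity_gb(chip_count, density_gbit):
    """Total capacity in gigabytes (8 Gbit = 1 GB)."""
    return chip_count * density_gbit / 8

# Launch configuration: sixteen 4-Gbit chips in x16 mode
assert chips_needed(16) == 16
assert total_capacity_gb(16, 4) == 8.0   # 8 GB

# Future configuration: eight 8-Gbit chips in x32 mode
assert chips_needed(32) == 8
assert total_capacity_gb(8, 8) == 8.0    # still 8 GB, half the chips
```

Same 8GB either way; the 8-chip board is just cheaper to build.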

GPU

Now the most important and coolest stuff... the new customized GPU, with tweaks and features that will be released on PC in AMD's GCN 2.0 late this year (or early 2014).

For compute on the GPU, if the compute tasks are relatively small, a large number of tasks has to be issued and controlled at fine granularity.

Therefore, in the PS4, the ACEs have 64 queues, so up to 64 compute tasks can be issued and controlled at once. This is the major difference from normal AMD GPUs: at this point, compute on the PS4 is flexible enough that the GPU cores can be used in a fine-grained way. It becomes easier to do the kind of thing we saw on PS3, where separate threads ran on each individual SPU.
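The "many small tasks across many queues" idea above can be sketched like this. To be clear, this is not a real PS4 or GNM API, just a toy model of spreading fine-grained compute work over 64 independent queues so no single queue becomes a bottleneck.

```python
# Toy model (not real PS4 APIs): fine-grained compute tasks spread
# across 64 independent queues, as the ACE description suggests.
from queue import Queue

NUM_QUEUES = 64
queues = [Queue() for _ in range(NUM_QUEUES)]

def submit(task_id, payload):
    # Round-robin across queues so small tasks don't serialize
    # behind one long queue.
    queues[task_id % NUM_QUEUES].put((task_id, payload))

for t in range(256):
    submit(t, {"work": t})

# Each queue ends up with an even share of the fine-grained tasks
assert all(q.qsize() == 256 // NUM_QUEUES for q in queues)
```

The point is the issue-side granularity: with 64 queues, lots of tiny compute jobs can be in flight at once instead of waiting behind the graphics queue.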

And the compute part of the GPU? That is the most amazing feature AMD put in the PS4 GPU... GPGPU work can run in parallel with graphics without losing performance... that is magical.

API

With the "wrapped" API you can use Microsoft Visual Studio with DirectX or OpenGL... that is easy, and it is translated to the low-level API automatically... so a developer can create a game for PC with DirectX and just build it for PS4.

But the magic for graphics is in the low-level API... this is used only on PS4, to use all the power behind the architecture... you can expect the first-party exclusive games using this API to make games look on another graphical level.

MACRO COMPARISON

To finish, a macro comparison between all the PlayStation consoles.

 

 

 




Good read... the GPU stuff is like GOD tech.



Good thread, but pretty much what we already knew. Powerful, but not PC LEVEL. GPU looks very, very good though.

Should be a beast though. Can the games push it though?



 

Here lies the dearly departed Nintendomination Thread.

Conegamer said:
Good thread, but pretty much what we already knew. Powerful, but not PC LEVEL.

Should be a beast though. Can the games push it though?

Well, the GPU is above current PC level if AMD delivers what they are talking about.

Graphics + compute in parallel = two GPUs on PC... one for compute and the other for graphics, with the disadvantage that the GPUs are not unified (no shared memory, and a PCI-E bus to communicate over).

The PS4 GPU seems like 2x GPU... one for graphics and another for compute, put to work together sharing the same resources (memory, etc.) without communication delay.



ethomaz said:
Conegamer said:
Good thread, but pretty much what we already knew. Powerful, but not PC LEVEL.

Should be a beast though. Can the games push it though?

Well, the GPU is above current PC level if AMD delivers what they are talking about.

Graphics + compute in parallel = two GPUs on PC... one for compute and the other for graphics, with the disadvantage that the GPUs are not unified (no shared memory, and a PCI-E bus to communicate over).

The PS4 GPU seems like 2x GPU... one for graphics and another for compute, put to work together sharing the same resources (memory, etc.) without communication delay.

I sorta discounted the GPU, as it seems pretty good (if true, of course). It was more the CPU which I was referring to. If it pans out, of course.

 

It won't matter if it costs more than $500, which fortunately it shouldn't.



 



Conegamer said:

I sorta discounted the GPU, as it seems pretty good (if true, of course). It was more the CPU which I was referring to. If it pans out, of course.

 

It won't matter if it costs more than $500, which fortunately it shouldn't.

I think it's the GPU doing compute tasks in parallel with graphics that makes it possible to use a weaker CPU... because a lot of work can be put on the GPU instead of the CPU.

In any case, games on PC are rarely parallelized beyond 2 cores... if the code is made specifically for 8 cores, I guess the CPU will be better than any high-end dual-core CPU on PC.

8 cores @ 1.8GHz+ (I hope Sony can reach 2.0GHz) is a really good CPU for games if the code is parallelized.
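The "parallelize across 8 cores" point can be sketched with a toy frame update. This is only an illustration of splitting per-entity work into one chunk per core (the function names and workload are made up; real game engines would use native threads or job systems, not Python):

```python
# Sketch: splitting a frame's worth of per-entity work into chunks,
# one per core, instead of leaving it all on one or two cores.
from concurrent.futures import ThreadPoolExecutor

def simulate_entities(chunk):
    # Stand-in for per-entity game logic (physics, AI, ...)
    return sum(e * e for e in chunk)

def frame_update(entities, cores):
    # Divide the entity list into `cores` roughly equal chunks
    size = (len(entities) + cores - 1) // cores
    chunks = [entities[i:i + size] for i in range(0, len(entities), size)]
    with ThreadPoolExecutor(max_workers=cores) as pool:
        return sum(pool.map(simulate_entities, chunks))

entities = list(range(1000))
# Same result whether the work is split 2 ways or 8 ways; the win is
# that 8 chunks can actually occupy 8 Jaguar cores at once.
assert frame_update(entities, 2) == frame_update(entities, 8)
```

Eight slow cores only beat two fast ones if the work actually divides like this, which is exactly the "if the code is parallelized" caveat.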



ethomaz said:
Conegamer said:
Good thread, but pretty much what we already knew. Powerful, but not PC LEVEL.

Should be a beast though. Can the games push it though?

Well, the GPU is above current PC level if AMD delivers what they are talking about.

Graphics + compute in parallel = two GPUs on PC... one for compute and the other for graphics, with the disadvantage that the GPUs are not unified (no shared memory, and a PCI-E bus to communicate over).

The PS4 GPU seems like 2x GPU... one for graphics and another for compute, put to work together sharing the same resources (memory, etc.) without communication delay.

Uh? Where are you getting this magical duplication?

Graphics and compute in parallel means more granularity in sharing the CUs, be that because of the increased number of queues as shown by the VGLeaks diagram of a few weeks ago, or by something akin to hyperthreading in CUs, i.e. hardware-assisted context switching.

The overall global limit is still 1.8 TFLOPS; these improvements will make sure it can be used with an efficiency closer to 100%, but they will not magically duplicate it.
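WereKitten's point reduces to simple arithmetic: better scheduling raises utilization of a fixed peak, it doesn't add FLOPS. The utilization figures below are illustrative assumptions, not measured numbers:

```python
# Back-of-the-envelope model: scheduling improvements raise the
# *utilization* of the fixed 1.8 TFLOPS peak; the peak never moves.

PEAK_TFLOPS = 1.8

def effective_tflops(utilization):
    """Throughput actually achieved at a given utilization (0..1)."""
    assert 0.0 <= utilization <= 1.0
    return PEAK_TFLOPS * utilization

# CUs idling while graphics and compute wait on each other
# (70% utilization is an assumed, illustrative figure)
baseline = effective_tflops(0.70)   # 1.26 TFLOPS actually used

# Finer-grained graphics+compute scheduling keeps CUs busy
# (95% is likewise an assumption)
improved = effective_tflops(0.95)   # 1.71 TFLOPS actually used

# The ceiling is still the hardware peak, not 2x the hardware
assert baseline < improved <= PEAK_TFLOPS
```

So "graphics + compute in parallel" closes the gap between delivered and peak throughput; it doesn't behave like a second GPU's worth of FLOPS.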



"All you need in life is ignorance and confidence; then success is sure." - Mark Twain

"..." - Gordon Freeman

Great thread, nice architecture overview, but I think the Jaguar cores are clocked at 2GHz.



WereKitten said:

Uh? Where are you getting this magical duplication?

Graphics and compute in parallel means more granularity in sharing the CUs, be that because of the increased number of queues as shown by the VGLeaks diagram of a few weeks ago, or by something akin to hyperthreading in CUs, i.e. hardware-assisted context switching.

The overall global limit is still 1.8 TFLOPS; these improvements will make sure it can be used with an efficiency closer to 100%, but they will not magically duplicate it.

Read here... the CUs can do both at the same time without losing performance... all 1.8 TFLOPS can be used for graphics and compute at the same time... on PC you have to choose one or the other in each CU... the PS4 GPU has the ability to do both in each CU at the same time... that seems like duplication magic to me.

Or I'm getting it all wrong.



The guys on Beyond3D are reading it the same way as me... and they are impressed, if that parallel thing is true, but it is still a little confusing.

Maybe it's just utilization, like you said @WereKitten.