
Cerny talks about PS4 hardware and specs, confirms all the talk so far

I am surprised nobody posted it yet but here we go:

 

http://www.gamasutra.com/view/feature/191007/inside_the_playstation_4_with_mark_.php

 

I won't paste anything from it; it's for your reading pleasure.

 

Overall there is nothing really new, but Cerny talking about PS4 specs in this way is somewhat new for Sony, and it confirms that Sony developed the PS4 to be dev-friendly this gen, which is a good sign for devs and games in general.

Finally, with the next-gen consoles, game engines can evolve!




It all sounds very promising; next gen is sounding good =) Happy devs make happy gamers.



I really appreciate that the guy knows what he's talking about. He says the consequences of a certain piece of tech will be a certain capability, not that it'll make everything realer than real.

 

"by the middle of the PlayStation 4 console lifetime, asynchronous compute is a very large and important part of games technology"is not GPU COMPUTE WILL MAKE PS4 THE BEST EVER



I am pretty sure I read this weeks ago on GAF, so it was probably posted already.



Netyaroze said:
I am pretty sure I read this weeks ago on GAF, so it was probably posted already.

No, it was posted today; it just takes a click on the link.



walsufnir said:
Netyaroze said:
I am pretty sure I read this weeks ago on GAF, so it was probably posted already.

No, it was posted today; it just takes a click on the link.


Yes, I saw that it was posted today, but it's still old. AFAIR, this interview was given to a Japanese site and translated on GAF. I will look for the thread; maybe I'll find it.



Nothing new? A lot of new.

Those of you saying the PS4 is nothing more than a mid-range PC need to read this article. It is so much more.



Netyaroze said:
walsufnir said:
Netyaroze said:
I am pretty sure I read this weeks ago on GAF, so it was probably posted already.

No, it was posted today; it just takes a click on the link.


Yes, I saw that it was posted today, but it's still old. AFAIR, this interview was given to a Japanese site and translated on GAF. I will look for the thread; maybe I'll find it.

This is an interview with Gamasutra. Cerny has given many interviews in recent weeks, so if some of the info is the same, that's why it seems so familiar. It's a different interview, though. Read it. It's really informative.



I don't think we've heard much about the specialized hardware before. Here it is for the technically minded, with a few toy sketches below to make the ideas concrete.

 

The three "major modifications" Sony did to the architecture to support this vision are as follows, in Cerny's words:

  • "First, we added another bus to the GPU that allows it to read directly from system memory or write directly to system memory, bypassing its own L1 and L2 caches. As a result, if the data that's being passed back and forth between CPU and GPU is small, you don't have issues with synchronization between them anymore. And by small, I just mean small in next-gen terms. We can pass almost 20 gigabytes a second down that bus. That's not very small in today’s terms -- it’s larger than the PCIe on most PCs!
  • "Next, to support the case where you want to use the GPU L2 cache simultaneously for both graphics processing and asynchronous compute, we have added a bit in the tags of the cache lines, we call it the 'volatile' bit. You can then selectively mark all accesses by compute as 'volatile,' and when it's time for compute to read from system memory, it can invalidate, selectively, the lines it uses in the L2. When it comes time to write back the results, it can write back selectively the lines that it uses. This innovation allows compute to use the GPU L2 cache and perform the required operations without significantly impacting the graphics operations going on at the same time -- in other words, it radically reduces the overhead of running compute and graphics together on the GPU."
  • Thirdly, said Cerny, "The original AMD GCN architecture allowed for one source of graphics commands, and two sources of compute commands. For PS4, we’ve worked with AMD to increase the limit to 64 sources of compute commands -- the idea is if you have some asynchronous compute you want to perform, you put commands in one of these 64 queues, and then there are multiple levels of arbitration in the hardware to determine what runs, how it runs, and when it runs, alongside the graphics that's in the system."

"The reason so many sources of compute work are needed is that it isn’t just game systems that will be using compute -- middleware will have a need for compute as well. And the middleware requests for work on the GPU will need to be properly blended with game requests, and then finally properly prioritized relative to the graphics on a moment-by-moment basis."

This concept grew out of the software Sony created, called SPURS, to help programmers juggle tasks on the CELL's SPUs -- but on the PS4, it's being accomplished in hardware.

The team, to put it mildly, had to think ahead. "The time frame when we were designing these features was 2009, 2010. And the timeframe in which people will use these features fully is 2015? 2017?" said Cerny.

"Our overall approach was to put in a very large number of controls about how to mix compute and graphics, and let the development community figure out which ones they want to use when they get around to the point where they're doing a lot of asynchronous compute."

Cerny expects developers to run middleware -- such as physics, for example -- on the GPU. Using the system he describes above, you can run at peak efficiency, he said.

"If you look at the portion of the GPU available to compute throughout the frame, it varies dramatically from instant to instant. For example, something like opaque shadow map rendering doesn't even use a pixel shader, it’s entirely done by vertex shaders and the rasterization hardware -- so graphics aren't using most of the 1.8 teraflops of ALU available in the CUs. Times like that during the game frame are an opportunity to say, 'Okay, all that compute you wanted to do, turn it up to 11 now.'"

Sounds great -- but how do you handle doing that? "There are some very simple controls where on the graphics side, from the graphics command buffer, you can crank up or down the compute," Cerny said. "The question becomes, looking at each phase of rendering and the load it places on the various GPU units, what amount and style of compute can be run efficiently during that phase?"
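And here's a rough sketch of what "cranking compute up or down" per rendering phase could look like from a game's point of view. The phase names and budget numbers are entirely made up; the point is just that phases which barely touch the ALUs, like the opaque shadow map rendering Cerny cites, leave headroom for asynchronous compute:

```cpp
// Hypothetical frame loop: dial async compute up during render phases that
// leave the ALUs idle. Phase names and budgets are invented for illustration.
#include <cstdio>

struct RenderPhase {
    const char* name;
    double alu_used;  // assumed fraction of ALU the graphics work occupies
};

int main() {
    const RenderPhase frame[] = {
        {"opaque shadow maps", 0.10},  // vertex shaders + rasterizer only
        {"g-buffer fill",      0.70},
        {"lighting",           0.95},
        {"post-processing",    0.60},
    };
    for (const auto& p : frame) {
        double compute_budget = 1.0 - p.alu_used;  // "turn it up to 11" when idle
        std::printf("%-20s -> async compute budget: %2.0f%% of ALU\n",
                    p.name, compute_budget * 100.0);
        // A real engine would feed this budget into the GPU's compute
        // controls here, per Cerny's description.
    }
}
```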



VGKing said:
Netyaroze said:
walsufnir said:
Netyaroze said:
I am pretty sure I read this weeks ago on GAF, so it was probably posted already.

No, it was posted today; it just takes a click on the link.


Yes, I saw that it was posted today, but it's still old. AFAIR, this interview was given to a Japanese site and translated on GAF. I will look for the thread; maybe I'll find it.

This is an interview with Gamasutra. Cerny has given many interviews in recent weeks, so if some of the info is the same, that's why it seems so familiar. It's a different interview, though. Read it. It's really informative.

 

Oh right, Gamasutra, not NeoGAF.

A lot of the parts are identical to the Gamasutra one. All his quotes I googled occurred earlier somewhere else, but since you say it's worth reading, I will do that.