The whole purpose of using eSRAM as a framebuffer over the DDR3 is speed; as such, when you speculated about using the cloud to expand the eSRAM, I stopped reading entirely.


JazzB1987 said:
|
Streaming can push through a shitload of data; hell, if the connection is lost, it could probably reboot to internal res and AA and whatever else the One can pull off.
"I think people should define the word crap" - Kirby007
Join the Prediction League http://www.vgchartz.com/predictions
Instead of seeking to convince others, we can be open to changing our own minds, and seek out information that contradicts our own steadfast point of view. Maybe it’ll turn out that those who disagree with you actually have a solid grasp of the facts. There’s a slight possibility that, after all, you’re the one who’s wrong.
| JazzB1987 said: Little question for dummies. Every file can be compressed (even if it looks/sounds/feels lossless). Can't they just come up with a super algorithm and decrease filesize by 50%? I know it will take a hit on CPU/GPU performance, but still. Can this be the reason for the 64 fits into 32? |
Cutting the framebuffer size in half would be easy even with lossless compression; hell, the lossless DEFLATE algorithm used in PNGs can reduce a RAW image by 75-90%. But you still need the source data before you can compress it, at least if you are going to do efficient compression. And you have to decompress the data again before you can manipulate it. GPUs actually already use compression schemes for z-buffers and textures.
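As a rough illustration (a toy sketch, not anything a GPU actually does), here's zlib's DEFLATE, the same lossless family PNG uses, run on a synthetic 1080p RGBA buffer; a flat test image like this compresses far better than a real frame would:

```python
import zlib

# Synthetic 1080p RGBA framebuffer: 4 bytes per pixel, ~8.3 MB of raw data.
# A single repeated color is a best case; real frames compress much worse.
width, height = 1920, 1080
raw = bytes([30, 60, 90, 255]) * (width * height)

compressed = zlib.compress(raw, 6)  # DEFLATE, the algorithm behind PNG/zlib
print(f"raw: {len(raw)} bytes, compressed: {len(compressed)} bytes "
      f"({100 * len(compressed) / len(raw):.2f}%)")
```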
TL;DR: there will be 1080p games, like Forza 5, but, like Forza 5, overall detail and polish will suffer to achieve it. Microsoft dun goofed with the hardware proportionality; I suppose you could say their first 180 was to 180 on their vision of "balance".
| the-pi-guy said: He was kidding. |
If only he had read just one more line!
"Even if the update had Microsoft technicians go to every single owners house and install bunches of RAM, it wouldn't really matter."
I don't know why MS bothered including a CPU/GPU at all.
Just hook up that Ethernet connection and BOOM!
That's the Power of the Cloud, baby!
A new SDK will not only help with the eSRAM... and cleaner code is always possible... So I disagree: a 1080p/60fps Witcher 3 should be achievable with the proper tools and clean work... If a rushed launch game, built on a beta if not alpha SDK for the biggest part of its dev cycle, can come close to it, the others won't have excuses later on... Once a few prove it is feasible, I'll just blame it on the devs and will not buy their games until they're on sale, if they are still somewhat good games...

| JazzB1987 said: Little question for dummies. Every file can be compressed (even if it looks/sounds/feels lossless). Can't they just come up with a super algorithm and decrease filesize by 50%? I know it will take a hit on CPU/GPU performance, but still. Can this be the reason for the 64 fits into 32? |
That has been the standard practice for textures for a long time. For screenbuffers it's not so trivial, as they need to be a fixed size for easy access to individual pixels. A "super" lossless algorithm is not an option because you don't know where each pixel ends up, and you'd have to unpack a large part of the buffer to be able to work on it. That's no problem for textures, as they are processed linearly; screenbuffers, however, need random access.
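To see why random access matters (a hypothetical snippet, not any engine's real code): with fixed-size pixels, finding any pixel is pure arithmetic; with variable-length compressed data there is no such formula, so you'd have to decode everything before the pixel you want.

```python
# With a fixed number of bytes per pixel, the location of any pixel in the
# buffer is a single multiply-add. Entropy-coded (variable-length) data
# offers no equivalent formula.
def pixel_offset(x, y, width, bytes_per_pixel=4):
    return (y * width + x) * bytes_per_pixel

print(pixel_offset(100, 50, 1920))  # byte offset of pixel (100, 50) in a 1080p RGBA buffer
```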
You can pack them into a bitstream, which is fixed-size compression: you throw away high-frequency detail and pack the rest together with as few bits as possible. (Which is what this thread is about: http://gamrconnect.vgchartz.com/thread.php?id=179094&page=1)
To give a very simple example, say you need to store 2 values ranging from 0 to 31. Normally you would use 1 byte each, or 16 bits total. But you decide that dividing them by 2 and only storing 0 to 15 is good enough for what you're trying to achieve (basically rounding to an even number). Those can be stored in 4 bits each. Multiply the second value by 16, add both together, and you get a final value ranging between 0 and 255, which fits in 1 byte. 50% saved.
Now you use half the memory and can still easily calculate where each individual value is located. However, every time you want to access or update the real value you need to unpack it from the bitstream. It's not a lot of overhead, but since you access the screenbuffer constantly, it does add up.
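As a quick illustration, here's that packing scheme as a Python sketch (the function names are mine, purely for illustration):

```python
# The 50% packing example above: two 0-31 values quantized to 0-15 and
# stored in one byte (low nibble + high nibble).
def pack(a, b):
    a4 = a // 2              # 0-31 -> 0-15, dropping the lowest bit
    b4 = b // 2
    return b4 * 16 + a4      # second value goes in the high nibble

def unpack(byte):
    a4 = byte % 16           # low nibble
    b4 = byte // 16          # high nibble
    return a4 * 2, b4 * 2    # back to the 0-31 range, precision lost

packed = pack(21, 9)
print(packed, unpack(packed))  # -> 74 (20, 8): half the memory, halved precision
```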
Short answer: Yes.
But it's no super algorithm nor anything new, and it does take a hit on performance. That performance hit might be less than the benefit of being able to fit everything in the faster eSRAM. A 1080p game that previously used tiling might be optimized that way to run faster, at a small loss of fine detail. A 900p game that already fit in eSRAM will still take a performance hit: maybe less than tiling would cost, but more than simply going from 900p to 1080p costs on PS4.
Really bad analogy. Functional but bad.
Xbox fans all go on about parity, but they don't care what this entails. If at any point there is real visual parity, Sony and its fans have lost, because then the game industry will have decided that parity is more important than making the best version of a game possible.