Packing data into bit streams to save memory is nothing new. It's a kind of fixed lossy compression, the same way pretty much all consumer video is stored as a 4:2:0 chroma-subsampled 12bpp stream before any further compression (color resolution reduced to 25%, since the eye is less sensitive to color detail than to brightness). It's also the same idea as running reflections, lights and shadow maps at a quarter of the resolution, which is quite normal in games.
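
Just to put rough numbers on that (an illustration of my own, not from the original post), here's the size of a single 1080p frame with full-resolution chroma versus 4:2:0, assuming 8 bits per sample:

```c
/* Rough illustration: frame size for full-res chroma (4:4:4)
   vs. 4:2:0 subsampled chroma at 1920x1080, 8 bits per sample. */
#include <stdio.h>

int main(void) {
    const size_t w = 1920, h = 1080;

    /* 4:4:4 -> one luma + two full-res chroma samples per pixel = 24 bpp */
    size_t full = w * h * 3;

    /* 4:2:0 -> full-res luma plus two quarter-res chroma planes = 12 bpp */
    size_t sub = w * h                   /* Y plane        */
               + (w / 2) * (h / 2) * 2;  /* Cb + Cr planes */

    printf("4:4:4: %zu bytes (%.1f MB)\n", full, full / 1048576.0);
    printf("4:2:0: %zu bytes (%.1f MB)\n", sub,  sub  / 1048576.0);
    return 0;
}
```

Same pixel count, half the memory, purely from storing the color at lower resolution.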
That also makes moving data around a bit faster, since there's less of it to move. But you pay a little overhead for unpacking and repacking the bitstream whenever you need to work on it, and you might see some color banding in smooth gradients. So it's still 1080p, but just as with video, not all 1080p content is created equal.
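
A toy sketch of that pack/unpack overhead, using 16-bit RGB565 as a stand-in for whatever packed format is actually used: the shifts and masks are the extra per-access work, and the dropped low bits are where the banding comes from.

```c
/* Toy sketch: squeezing 8-bit RGB into a 16-bit RGB565 word and back.
   RGB565 here is just an example format, not what any particular title uses. */
#include <stdint.h>
#include <stdio.h>

static uint16_t pack_rgb565(uint8_t r, uint8_t g, uint8_t b) {
    /* Keep only the top 5/6/5 bits of each channel. */
    return (uint16_t)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}

static void unpack_rgb565(uint16_t p, uint8_t *r, uint8_t *g, uint8_t *b) {
    /* Expand back to 0-255; the discarded low bits are gone for good. */
    *r = (uint8_t)(((p >> 11) & 0x1F) * 255 / 31);
    *g = (uint8_t)(((p >> 5)  & 0x3F) * 255 / 63);
    *b = (uint8_t)(( p        & 0x1F) * 255 / 31);
}

int main(void) {
    uint8_t r, g, b;
    unpack_rgb565(pack_rgb565(200, 201, 202), &r, &g, &b);
    /* Neighboring input values collapse onto the same quantized level,
       which is exactly what shows up as banding in smooth gradients. */
    printf("round-trip: %u %u %u\n", r, g, b);
    return 0;
}
```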
It's always a tight balancing act: reduce the memory footprint by throwing more processor cycles at the problem, or spend more memory to reduce processor demand. There's no single universal solution.
Or to put it another way, what the OP is proposing is a more sophisticated form of upscaling: reducing the fidelity of individual elements instead of the whole frame buffer.











