
The games industry is currently stuck in a rut: graphics. Gamers demand better and faster graphics, but the cost of producing them is skyrocketing and the time needed is becoming unreasonable. Yet we are not much closer to photorealism than we were 10 years ago. What will happen to game graphics in the future, and why is it so expensive to get realistic graphics into games?

To arrive at the answers, we need to understand how a similar situation played out in the past. Remember when audio was a factor in buying games?

Sound is still important in modern games, but better audio alone won't make you buy one game or console over another. This is because:

1. There is little difference in audio quality between games

2. There isn't much room for improvement in audio quality on any game

This came about because it gradually became easier to make good music and sound effects: simply record real sounds from the environment, and real music from the same people who compose and play chart music. In addition, we no longer need specialised software or hardware to play back those sounds – a few lines of code, common to every game, make great-sounding audio just work.
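The "few lines of code" point is nearly literal: with recorded audio, loading a clip needs no special hardware or engine. A minimal sketch using Python's standard-library `wave` module (the generated 440 Hz test tone stands in for a real recording):

```python
import math
import struct
import wave

# Write a short recorded-style clip to a WAV file, then read it back.
# This is roughly all the "audio engine" a modern game needs: load the
# samples and hand them to the platform's sound API.
with wave.open("tone.wav", "wb") as out:
    out.setnchannels(1)
    out.setsampwidth(2)        # 16-bit samples
    out.setframerate(44100)
    frames = b"".join(
        struct.pack("<h", int(32000 * math.sin(2 * math.pi * 440 * i / 44100)))
        for i in range(4410)   # 0.1 seconds of a 440 Hz tone
    )
    out.writeframes(frames)

with wave.open("tone.wav", "rb") as clip:
    n_frames = clip.getnframes()
```

From there, handing the frames to the operating system's mixer is a one-line call in most platform APIs; contrast that with the per-chip driver work described below.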

Both of these factors are complementary, and both are due to advances in computing power. In the earliest days of game consoles, whole recordings of songs could not be used because memory space and processing power were limited. To work around this, console manufacturers used special, proprietary sound chips that simulated real sounds by combining simple sound waves. It took a lot of work in software and hardware design to make a sound resemble a real-world instrument, and manufacturers improved the number, fidelity and range of these instruments each year, persuading gamers and console builders to upgrade every few years for discernibly better audio. Given the cost and the size of the potential market, companies protected their IP and competition was intense. Game developers had to employ staff trained solely in creating good music within the harsh constraints of a chip's capabilities. Good audio, in fact, was expensive and a drain on the budget. However, gamers demanded better and better audio, so it was worth doing.
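The wave-combining idea can be sketched in a few lines. This is an illustrative additive-synthesis example, not any particular chip's method: it builds a square-wave-like timbre by summing sine harmonics, in the same spirit as chips that assembled instrument-like sounds from simple waveforms. The harmonic count and scaling are arbitrary choices for the demo.

```python
import math

SAMPLE_RATE = 44100

def square_wave_additive(freq, duration, harmonics=10):
    """Approximate a square wave by summing odd sine harmonics.

    Early sound chips combined simple waveforms like this to fake the
    timbre of real instruments, because storing a recording was too
    expensive in memory.
    """
    n_samples = int(SAMPLE_RATE * duration)
    samples = []
    for i in range(n_samples):
        t = i / SAMPLE_RATE
        # A square wave's Fourier series: odd harmonics at amplitude 1/k.
        value = sum(
            math.sin(2 * math.pi * freq * k * t) / k
            for k in range(1, 2 * harmonics, 2)
        )
        samples.append(value * 4 / math.pi)
    return samples

tone = square_wave_additive(440.0, 0.01)
```

Getting from a buzzy square wave to something resembling a piano took exactly the kind of painstaking hand-tuning the article describes; a recording skips all of it.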

Soon, however, the CPUs and memory used in consoles became good enough to stream pre-recorded music from the cartridge or disc. Unlike before, game audio began to get cheaper as specialised hardware, software and staff were no longer needed. With less competition over audio, developers could share sound resources and code, cutting costs further. Today, sound is neither a major development expense nor a major game or system selling point. There is still room to produce better audio, but the cost doesn't justify the returns.

So, why is this important? It's because this same cycle will apply to graphics.

Today, graphics are a major selling point for games and systems. The primary expense on any 'blockbuster' game is graphics – a serious developer must:

1. Create or license an expensive engine with computer-jargon capabilities like anti-aliasing, anisotropy, dynamic lighting, tessellation, bump mapping and occlusion culling.

2. Hire up to hundreds of artists who will spend potentially five years creating realistic-looking models using expensive tools.

3. Use complex code to decide which features the user's specialised graphics chip can support, and then do anything it can't on the CPU instead (a duplication of coding effort).

Why must they do this? Why can't graphics just be like the real world, without all of this expense? It's because, just like the audio example above, current graphics are an instruction-based approximation of the real thing. All of the technical terms in point 1 are just clever, complicated ways of fooling us into seeing phenomena that are very simple in the real world, like reflection and refraction. The reason we can't 'just do' the simple stuff is that, as with audio, we don't have enough computational power to do it at the scale and speed a 3D game needs. We tried to improve the speed by using a special chip (the graphics chip is just like a sound chip), which takes years to design and even longer to supply with working software drivers. The fact that graphics cards have different features at all causes the duplication of effort in point 3. Finally, the reason point 2 is the single biggest drain on a game's budget is that the artists aren't modelling real tables or real monsters. They're using the instruction-based format – what MIDI is to music. This requires special training and complex tools, and what we get at the other end is a bad approximation at best, just as MIDI music is compared to MP3 music.
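The capability check in point 3 might look something like this sketch. The `GpuDevice` class and the feature names are invented for illustration, but the shape of the problem is real: every feature that can land in the "cpu" column represents a second, duplicated implementation the developer must write and maintain.

```python
class GpuDevice:
    """Hypothetical stand-in for a driver's capability-report API."""

    def __init__(self, supported_features):
        self.supported = set(supported_features)

    def supports(self, feature):
        return feature in self.supported

def choose_renderers(device, required_features):
    """Map each required feature to 'gpu' or 'cpu' depending on support.

    Every 'cpu' entry means a software fallback path that duplicates
    work the graphics chip does elsewhere in hardware.
    """
    return {
        feature: "gpu" if device.supports(feature) else "cpu"
        for feature in required_features
    }

device = GpuDevice({"anti_aliasing", "bump_mapping"})
plan = choose_renderers(device, ["anti_aliasing", "tessellation", "bump_mapping"])
# plan == {"anti_aliasing": "gpu", "tessellation": "cpu", "bump_mapping": "gpu"}
```

Multiply this by every card on the market and every feature in the engine, and the duplication cost becomes clear.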

So, the solution to this problem – and the problem is hurting the industry through unsatisfied customers and ever-increasing expense – is to do what we did with audio: improve the raw computational power of our general-purpose chips, then throw away the specialised hardware, drivers, engines, tools and staff that become obsolete.

The first step towards this is a technique called ray-tracing. Instead of using anti-aliasing, anisotropy and the rest to simulate light hitting the camera, we simulate real light beams and follow their path from the camera back to the source (a light bulb). By doing this millions of times per second, we build up a picture of the world as it really is – each object is already the correct colour and lustre without any post-rendering effects. Current programmers would describe this as “free” anti-aliasing, but I prefer not to, because anti-aliasing and the others were never necessary in the first place: they were illusions; hacks. Ray-tracing takes care of the specialised hardware (graphics chips will evolve into general calculating chips, much like the CPU but better suited to massively-parallel processing) and also eliminates the need for complex, proprietary game graphics engines.
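A single traced ray reduces to a small amount of geometry. This minimal sketch (a toy, not a production renderer) follows one ray from the camera, finds where it hits a sphere, and shades the hit point by its angle to a light. A real ray tracer repeats this for millions of rays and adds reflection, refraction and shadows, but the core is this simple:

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance along the ray to the nearest sphere hit, or None.

    `direction` is assumed to be a unit vector, so the quadratic's
    leading coefficient is 1.
    """
    oc = [o - c for o, c in zip(origin, center)]
    b = 2 * sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4 * c
    if disc < 0:
        return None          # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

def trace(origin, direction, sphere_center, sphere_radius, light_pos):
    """Follow one ray; if it hits the sphere, shade by angle to the light."""
    t = ray_sphere_hit(origin, direction, sphere_center, sphere_radius)
    if t is None:
        return 0.0           # background: no light reaches the camera
    hit = [o + t * d for o, d in zip(origin, direction)]
    normal = [(h - c) / sphere_radius for h, c in zip(hit, sphere_center)]
    to_light = [l - h for l, h in zip(light_pos, hit)]
    norm = math.sqrt(sum(v * v for v in to_light))
    to_light = [v / norm for v in to_light]
    # Lambertian shading: brightness falls off with the cosine of the
    # angle between the surface normal and the direction to the light.
    return max(0.0, sum(n * v for n, v in zip(normal, to_light)))

# One ray fired straight down the z-axis at a sphere in front of the camera:
brightness = trace([0, 0, 0], [0, 0, 1], [0, 0, 5], 1.0, [0, 5, 0])
```

Notice that colour, shading and edge softness all fall out of the physics; nothing here resembles the bolt-on effects listed in point 1.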

However, this isn't good enough on its own. We are still using the poor-approximation 3D models, and so must still pay for specialised staff. The solution to this is further in the future, but has even greater revolutionary potential than ray-tracing. Just as the best game music is played on live instruments and recorded to the game medium, game graphics will be taken from the real world. Real objects will be acquired or constructed and then 'scanned in' by a machine that records every detail of the colour, lustre and other visual and physical properties of the entire object. Though reproducing this in-game will require incredible amounts of memory and resources, creating new graphics will be very fast, and they will be almost as good in visual quality as the real world with no additional effort. Combined with ray-tracing, an in-game room under certain lighting conditions will look effectively identical to the real room. Once this happens, creating further rooms is very simple because there is no need to keep the machine and code secret or proprietary. Even small developers will have literally photorealistic graphics.

Once this occurs, graphics will become like audio – no longer a selling point, and no longer expensive to create. To differentiate their games, developers will have to focus on truly creative aspects like level design and gameplay. Since this process can apply to every aspect of the game experience, not just sound and vision, it is not hard to imagine virtual reality games that are completely convincing yet built on very simple underlying principles.

In conclusion, the future of game graphics is photorealism, but cheap and widely available. All of the current effort on graphics chips, graphics APIs like Direct3D and OpenGL and graphics 'wars' between consoles will be largely irrelevant once processing power reaches the required level. We will return, like in the NES days, to appreciating gameplay and level design as the most important aspects of games.