
The future of game graphics

The games industry is currently stuck in a rut: graphics. Gamers demand better and faster graphics, but the cost of producing them is skyrocketing and the time needed is becoming unreasonable. Yet we are not much closer to photorealism than we were 10 years ago. What will happen to game graphics in the future, and why is it so expensive to get realistic graphics into games?

To arrive at the answers, we need to look at how the industry handled a similar situation in the past. Remember when audio was a factor in buying games?

Yes, sound is still important in recent games, but better audio alone won't make you buy another game or console. This is because:

1. There is little difference in audio quality between games

2. There isn't much room for improvement in audio quality in any game

This came about because it gradually became easier to make good music and sound effects, simply by recording real sounds from the environment and real music from the same people who compose and play chart music. In addition, we no longer need specialised software or hardware to play back those sounds – a few lines of code, common to every game, make great-sounding audio just work.

Both of these factors are complementary and due to advances in computing power. In the earliest days of game consoles, whole recordings of songs could not be used because memory space and processing power were limited. To work around this, console manufacturers used special, proprietary sound chips that simulated real sounds by combining basic types of sound waves. It took a lot of work in software and hardware design to make a sound resemble a real-world instrument, and each year manufacturers improved the number, fidelity and range of these simulated instruments, giving gamers and console builders a reason to upgrade every few years to discernibly better audio. Because of the cost and the size of the potential market, companies protected their IP and there was intense competition. Game developers had to employ staff trained solely in creating good music within the harsh constraints of a chip's capabilities. Good audio, in short, was expensive and a drain on the budget. However, gamers demanded better and better audio, so it was worth doing.
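
As a rough, modern-language illustration of that 'combine simple wave types' approach (the waveform mix and all constants here are invented, not taken from any particular chip):

    // Build one second of an A4 "instrument" note from simple waveforms
    // instead of playing back a recording, in the spirit of old sound chips.
    #include <cmath>
    #include <cstdio>
    #include <vector>

    int main() {
        const double pi = 3.14159265358979323846;
        const int sample_rate = 22050;          // samples per second
        const double freq = 440.0;              // A4
        std::vector<double> note(sample_rate);  // one second of audio
        for (int i = 0; i < sample_rate; ++i) {
            double t = static_cast<double>(i) / sample_rate;
            // A square wave carries the pitch; a quieter triangle wave an
            // octave up adds a little timbre.
            double square = std::sin(2 * pi * freq * t) >= 0 ? 0.6 : -0.6;
            double triangle = 0.2 * (2 / pi) * std::asin(std::sin(2 * pi * 2 * freq * t));
            note[i] = square + triangle;
        }
        std::printf("synthesised %zu samples\n", note.size());
    }

Real chips did this in hardware with a handful of fixed channels; the point is only that the 'instrument' is a formula, not a recording.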

Soon, however, the CPU and memory in consoles became good enough to stream pre-recorded music from the cartridge or disc. Game audio then began to get cheaper, as specialised hardware, software and staff were no longer needed. With the audio competition reduced, developers could share sound resources and code, cutting costs further. Today, sound is not a major development expense, nor is it a major game or system selling point. There is still room to produce better audio, but the cost doesn't justify the returns.

So, why is this important? It's because this same cycle will apply to graphics.

Today, graphics are a major selling point for games and systems. The primary expense on any 'blockbuster' game is graphics – a serious developer must:

1. Create or license an expensive engine with computer-jargon capabilities like anti-aliasing, anisotropy, dynamic lighting, tessellation, bump mapping and occlusion culling.

2. Hire up to hundreds of artists who will spend potentially five years creating realistic-looking models using expensive tools.

3. Use complex code to decide which features the user's specialised graphics chip supports, and then do anything it can't handle on the CPU instead (a duplication of coding effort; see the sketch after the next paragraph).

Why must they do this? Why can't graphics just be like the real world without all of this expense? It's because, just like the audio example above, current graphics are an instruction-based approximation to the real thing. All of those technical terms in point 1 are just clever and complicated ways of fooling us into seeing phenomena that are very simple in the real world, like reflection and refraction. The reason we can't 'just do' the simple stuff is that, as with audio, we haven't got enough computational power to do it at the scale and speed a 3D game needs. We tried to improve the speed by using a special chip (the graphics chip is just like a sound chip), which takes years to design and even longer to write working software drivers for. The fact that graphics cards have different feature sets at all is what causes the duplication of effort in point 3. Finally, the reason point 2 is the single biggest drain on a game's budget is that the artists aren't modelling real tables or real monsters. They're using the instruction-based format, much as MIDI is to music. This requires special training and complex tools, and what we get at the other end is a bad approximation at best, just as MIDI music is compared to MP3 music.
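
A minimal sketch of the point-3 duplication, with a made-up GpuCaps query standing in for a real driver call (nothing here is from an actual graphics API):

    // The same feature has to exist twice: once for hardware that supports
    // it, and once as a slower CPU fallback for hardware that doesn't.
    #include <iostream>

    struct GpuCaps {
        bool hardware_tessellation;
    };

    // Hypothetical capability query; a real engine would ask the driver.
    GpuCaps query_gpu_caps() { return GpuCaps{false}; }

    void tessellate_on_gpu() { std::cout << "tessellating on the GPU\n"; }
    void tessellate_on_cpu() { std::cout << "tessellating on the CPU (slow path)\n"; }

    int main() {
        GpuCaps caps = query_gpu_caps();
        if (caps.hardware_tessellation)
            tessellate_on_gpu();
        else
            tessellate_on_cpu();   // written, tested and maintained separately
    }

Multiply that by every optional feature and every family of graphics chip, and the duplication adds up quickly.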

So, the solution to this problem – and the problem is destroying the industry through unsatisfied customers and ever-increasing expense – is to do the same as audio. We must improve the raw computational power of our general-purpose chips and then retire the specialised hardware, drivers, engines, tools and staff roles that become obsolete.

The first step towards this is a technique called ray-tracing. Instead of using anti-aliasing, anisotropy, etc. to simulate light hitting the camera, we will simulate real light beams and follow their path from the camera back to the source (a light bulb). By doing this millions of times per second, we build up a picture of the world as it really is – each object will already be the correct colour and lustre without the need for those post-rendering effects. Current programmers would describe it as “free” anti-aliasing, but I prefer not to, because anti-aliasing and the others weren't necessary in the first place: they were illusions; hacks. Ray-tracing takes care of the specialised hardware (graphics chips will evolve into general calculating chips, much like the CPU but better suited to massively parallel processing) and also eliminates the need for complex, proprietary game graphics engines.
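
To make that concrete, here is a minimal sketch of the idea under heavily simplified assumptions: one primary ray per pixel traced from the camera, a single hard-coded sphere, one point light, and the result written to a PPM image. All names, constants and the scene itself are invented for illustration; a real renderer would trace far more rays and handle shadows, reflection and refraction too.

    // Trace one ray per pixel from the camera into a scene containing one
    // sphere, and shade each hit by its angle to a point light.
    #include <cmath>
    #include <fstream>

    struct Vec { double x, y, z; };
    Vec operator+(Vec a, Vec b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
    Vec operator-(Vec a, Vec b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    Vec operator*(Vec a, double s) { return {a.x * s, a.y * s, a.z * s}; }
    double dot(Vec a, Vec b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
    Vec norm(Vec a) { return a * (1.0 / std::sqrt(dot(a, a))); }

    // Nearest positive hit distance of a ray against a sphere, or -1 for a miss.
    double hit_sphere(Vec centre, double radius, Vec origin, Vec dir) {
        Vec oc = origin - centre;
        double b = 2.0 * dot(oc, dir);
        double c = dot(oc, oc) - radius * radius;
        double disc = b * b - 4.0 * c;
        if (disc < 0) return -1.0;
        double t = (-b - std::sqrt(disc)) / 2.0;
        return t > 0 ? t : -1.0;
    }

    int main() {
        const int W = 256, H = 256;
        Vec camera{0, 0, 0}, sphere{0, 0, -3}, light{2, 2, 0};
        std::ofstream out("sphere.ppm");
        out << "P3\n" << W << " " << H << "\n255\n";
        for (int y = 0; y < H; ++y) {
            for (int x = 0; x < W; ++x) {
                // Primary ray through this pixel, heading into the scene.
                Vec dir = norm(Vec{(x - W / 2.0) / W, (H / 2.0 - y) / H, -1.0});
                double t = hit_sphere(sphere, 1.0, camera, dir);
                int shade = 0;
                if (t > 0) {
                    // The surface's brightness falls out of the light
                    // simulation itself, with no post-process tricks.
                    Vec p = camera + dir * t;
                    Vec n = norm(p - sphere);
                    double diffuse = std::fmax(0.0, dot(n, norm(light - p)));
                    shade = static_cast<int>(255 * diffuse);
                }
                out << shade << " " << shade << " " << shade << "\n";
            }
        }
    }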

However, this isn't good enough. We would still be using the poor-approximation 3D models and so still paying for specialised staff. The solution to this is further in the future, but it has even greater revolutionary potential than ray-tracing. Just as the best game music is played on live instruments and recorded to the game medium, game graphics will be taken from the real world. Real objects will be acquired or constructed and then 'scanned in' by a machine that records every detail of the colour, lustre and other visual and physical properties of the entire object. Though reproducing this in-game will require incredible amounts of memory and resources, creating new graphics will be very fast, and they will be almost as good in visual quality as the real world with no additional effort. Combined with ray-tracing, an in-game room under given lighting conditions will look effectively identical to the real room. Once this happens, it is very simple to create further rooms, because there is no need to keep the machine and code secret or proprietary. Even small developers will have literally photorealistic graphics.

Once this occurs, graphics will become like audio – no longer a selling point or expensive to create. To differentiate games, developers must focus on truly creative aspects like level design and gameplay. Since this process can apply to every aspect of the game experience, not just sound and vision, it is not hard to imagine virtual reality games which are completely convincing but have very simple underlying principles.

In conclusion, the future of game graphics is photorealism, but cheap and widely available. All of the current effort on graphics chips, graphics APIs like Direct3D and OpenGL, and graphics 'wars' between consoles will be largely irrelevant once processing power reaches the required level. We will return, as in the NES days, to appreciating gameplay and level design as the most important aspects of games.

 




Raytracing actually follows light beams backwards: from the camera to the light source.

This isn't actually a nitpick. It doesn't affect the light rays that reach the camera at all, but it has other consequences that need to be accounted for through other algorithms.



Complexity is not depth. Machismo is not maturity. Obsession is not dedication. Tedium is not challenge. Support gaming: support the Wii.

Be the ultimate ninja! Play Billy Vs. SNAKEMAN today! Poisson Village welcomes new players.

What do I hate about modern gaming? I hate tedium replacing challenge, complexity replacing depth, and domination replacing entertainment. I hate the outsourcing of mechanics to physics textbooks, art direction to photocopiers, and story to cheap Hollywood screenwriters. I hate the confusion of obsession with dedication, style with substance, new with gimmicky, old with obsolete, new with evolutionary, and old with time-tested.
There is much to hate about modern gaming. That is why I support the Wii.

Millennium said:
Raytracing actually follows light beams backwards: from the camera to the light source.

This isn't actually a nitpick. It doesn't affect the light rays that reach the camera at all, but it has other consequences that need to be accounted for through other algorithms.

Fixed. Thanks. Still, it requires far less time and less specialised development of graphics algorithms than the current approach. That was my point.

 



but the cost to produce them is skyrocketing and the time needed is becoming unreasonable


This is often mentioned but IMO simply not true, at least put like that. It's more like an arms race. Big-budget games can produce amazing graphics with today's technology and small developers cannot compete with that. It's similar to the movie industry, where big-budget titles have all the amazing, expensive computer effects and movies with smaller budgets look weird.

Halo 3, GTA4, Call of Duty 4 or MGS4 have huge budgets, amazing graphics and bring in big profits, but smaller developers have problems following the graphical standards that are set by these huge games.

One mitigating factor will be the proliferation of more and more middleware – the Unreal Engine, Havok, middleware for facial animation and movement, and vegetation-creation technology like that used in Oblivion or provided by Crysoft – which could make it easier and cheaper to develop games.





This is a very interesting thread. I was thinking about something similar in terms of the future of game development. I used to be in the game industry, so I can confirm quite a bit of what you are saying. One major concern I have is that games are becoming more expensive for the reasons you stated. This is causing some developers to outsource asset creation outside the country. I have heard this from multiple colleagues. Hell, even the movie biz is outsourcing work to Asia.

The demand for high-fidelity graphics and photorealism is a challenge to the industry. I am not trying to be an alarmist, just stating my observations of some trends. Your solution is an interesting one.



Soleron said:

The first step towards this is a technique called ray-tracing. Instead of using anti-aliasing, anisotropy, etc. to simulate light hitting the camera, we will simulate real light beams and follow their path from the camera to the source (a light bulb).

I agree that in the future most 3D models will be created by scanning an object, which means that prop artists, make-up artists and costume makers will start to replace some of the existing artists. Games set in realistic, present-day settings might have lower art costs than, say, a sci-fi game.

FYI - Anti-aliasing and anisotropic filtering are actually still needed when ray-tracing. Aliasing occurs when transferring continuous data (say, an image of a tree) into a discrete data structure (like the resolution of a monitor or a frame buffer). Anisotropic filtering is needed to clean up artifacts caused by viewing surfaces that are far away at a steep angle. Two discrete samples on a surface that are projected onto two adjacent pixels in the frame buffer may actually be 10m apart in the game world. All of the color data in between these points is lost. i.e. If a few red jelly beans were on a football field and it was viewed from a distance, there might be an (incredibly small) chance that some (or all) pixels of the frame buffer would appear red. Move the camera slightly and they would appear green.
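
To make the single-sample problem concrete, here is a tiny numeric sketch (the scene and all numbers are invented): a red stripe far thinner than a pixel sits on a green field; with one sample per pixel the whole pixel flips between red and green as the camera shifts slightly, while averaging many stratified samples per pixel stays near the stripe's true coverage. That averaging is exactly the anti-aliasing a ray-tracer still has to do.

    // Compare 1 sample per pixel against 64 stratified samples per pixel
    // when a sub-pixel red stripe crosses one pixel of a green field.
    #include <cmath>
    #include <cstdio>

    // "Scene": green everywhere except a red stripe 0.01 units wide at u = 0.5.
    bool is_red(double u) { return std::fabs(u - 0.5) < 0.005; }

    // Fraction of samples inside the pixel [left, left + width) that land on red.
    double red_coverage(double left, double width, int samples) {
        int hits = 0;
        for (int i = 0; i < samples; ++i)
            if (is_red(left + width * (i + 0.5) / samples)) ++hits;
        return static_cast<double>(hits) / samples;
    }

    int main() {
        const double pixel_width = 1.0;    // the stripe is 1% of a pixel wide
        for (int frame = 0; frame < 4; ++frame) {
            double shift = frame * 0.003;  // tiny camera movement each frame
            std::printf("frame %d: 1 sample -> %.2f red, 64 samples -> %.2f red\n",
                        frame, red_coverage(shift, pixel_width, 1),
                        red_coverage(shift, pixel_width, 64));
        }
    }

With one sample the pixel reads 1.00 (all red) for the first two frames and 0.00 afterwards – the jelly-bean flicker described above – while 64 samples stay within a couple of percent of the true coverage.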

OT: OMG, Chrome does not have a spell checker.

 



"However, we are not much closer to photorealism than 10 years ago."

We aren't closer to photo-real graphics than we were 10 years ago?

I'd say we are closer! All we had back then were petty little sprites.



I hope my 360 doesn't RRoD
         "Suck my balls!" - Tag courtesy of Fkusmot

Bravo. An excellent read. One of the longest posts that I've actually been able to read the whole way through.

Another point you could make about the future is that texture sizes wouldn't need to be huge, thanks to technologies such as procedural renderers (I believe that's the term, though I could be incorrect).
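
As a rough sketch of the procedural idea (the pattern and constants below are arbitrary, not any real renderer's): instead of storing a large bitmap, each texel is computed on demand from a small formula, so the 'texture' costs a few bytes of code rather than megabytes of data.

    // Compute a marble-ish texel value on the fly for coordinates (u, v),
    // rather than looking it up in a stored image.
    #include <cmath>
    #include <cstdio>

    double procedural_texel(double u, double v) {
        double stripes = std::sin(40.0 * u + 8.0 * std::sin(10.0 * v));
        return 0.5 + 0.5 * stripes;   // remap from [-1, 1] to [0, 1]
    }

    int main() {
        // Print a few texels; a renderer would call this per shaded point.
        for (double v = 0.0; v < 1.0; v += 0.25) {
            for (double u = 0.0; u < 1.0; u += 0.25)
                std::printf("%.2f ", procedural_texel(u, v));
            std::printf("\n");
        }
    }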



I think that, at some point, the industry will look at the consumer and realize that photorealism is not what games need.

This might happen after, oh, say the Wii starts selling like crazy. Oh wait!


Raytracing in real-time is just plain not a decent solution. You still have to model (and texture) things for it to work decently -- there's just no point. Don't get me wrong, raytracing rules for easily defined mathematical objects like spheres (woo!). Take a wild guess at what kind of assets/data would need to be produced to make ray-tracing a 3D scene possible though -- textured polygons!

Raytracing is a good lighting solution only, and although you could claim that lighting is a large part of what makes a scene "realistic", there are plenty of good techniques already available for decent lighting and shadow-casting. Raytracing would not make game budgets cheaper in any way, shape, or form. You'd be better off spending the huge amount of extra CPU power it needs on more complex animations and animated skeletons. Raytracing is outdated, pure and simple. It's almost useless these days, except for rendering "more" perfect mathematical surfaces, like clean spheres, surface patches, etc. Those aren't "organic" modelling techniques, so even though it's a good way to light something, the "something" isn't going to look like a natural object in the first place.



I can't agree with this pipe dream of a thread.
I agree that we're no better off than 10 years ago; 10 years ago we had the first preview videos of "FF: The Spirits Within", which is the only completely computer-generated movie of its kind.
But I can't agree with the rest. It is a pipe dream, because it assumes that the job of video games is to reproduce reality. But it's not, like, not at all.
The job of a videogame is to entertain, not to break some technology barrier.
Years of having to break limiting factors in video games have led people to believe that video games are about technology. They're not.
There's no need for raytracing or any nonsense like that in videogames. If it comes without any additional cost, then no problem; but über graphics are not a selling point for games, as I'm sure this generation will teach us, and with the current additional costs involved, video games will not be what pushes graphics forward.

For now, the movie industry is pushing the graphics, and even there, they advance cautiously. Stupid Sakaguchi wanted to go too fast and nearly killed Squaresoft. Pixar is advancing slowly but surely.