I don't think we're even close to seeing the end of graphics improvement. Simulating light is so insanely complex that current real-time graphics hasn't even begun to approach the problem in a way that is viable long term. That is, we're still rasterizing triangles, and that leads to a dead end. For the time being, though, it gives the best results for the computational power we have available. Give us a 1000-fold increase in power and I think we'll still be rasterizing triangles, because it would still give the best result for the power available.

Not until we stop doing this will we be on track to pursue solutions that give real results, such as Monte Carlo forward raytracing (a toy sketch of what that looks like is below). We are very far from this. When we can render a crystal vase in a full environment at 8K x 4K, with correct handling of light at different wavelengths to get proper rainbow effects and caustics (not situational cheap tricks), without breaking a sweat, maybe then we're close to not seeing any improvements in graphics. I think we're 40-50 years from that level of realism.

Will a part of the video game industry keep pursuing the latest and greatest graphics within consumer cost range? I think so.
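For anyone wondering what "forward raytracing with correct wavelength handling" even means in practice, here's a deliberately tiny 2D sketch, not production code. It traces photons from the light instead of from the camera: each photon gets a random visible wavelength, refracts through a glass surface whose index of refraction depends on that wavelength (Cauchy's equation, with rough crown-glass coefficients), and gets binned where it lands. The histogram spread is the dispersion that produces rainbow caustics. All the specific numbers (angles, landing range, bin counts) are arbitrary choices for illustration:

```python
import math
import random

def refractive_index(wavelength_nm: float) -> float:
    """Cauchy's equation n = A + B/lambda^2; coefficients roughly crown glass."""
    wl_um = wavelength_nm / 1000.0
    return 1.5046 + 0.00420 / (wl_um * wl_um)

def refract_angle(theta_in: float, n1: float, n2: float):
    """Snell's law. Returns the refracted angle, or None on total internal reflection."""
    s = (n1 / n2) * math.sin(theta_in)
    if abs(s) > 1.0:
        return None
    return math.asin(s)

def trace_caustic(num_photons: int = 100_000, bins: int = 40):
    """Forward (light) tracing: emit photons at sampled wavelengths, refract
    them into glass, and bin where they land on a floor one unit below."""
    histogram = [0] * bins
    x_min, x_max = 0.30, 0.38  # landing range covered by the histogram (arbitrary)
    for _ in range(num_photons):
        wavelength = random.uniform(380.0, 780.0)            # sample visible spectrum
        theta_in = math.radians(random.uniform(28.0, 32.0))  # jittered incidence angle
        theta_out = refract_angle(theta_in, 1.0, refractive_index(wavelength))
        if theta_out is None:
            continue
        # Shorter wavelengths see a higher index and bend more, so photons
        # of different colors land at different positions: dispersion.
        x = math.tan(theta_out)  # landing position on the floor (unit depth)
        idx = int((x - x_min) / (x_max - x_min) * bins)
        if 0 <= idx < bins:
            histogram[idx] += 1
    return histogram

if __name__ == "__main__":
    for i, count in enumerate(trace_caustic()):
        print(f"bin {i:2d}: {'#' * (count // 500)}")
```

The point of the toy: even this stripped-down version needs per-wavelength physics and brute-force photon counts to get a caustic right, and a real renderer has to do it for every surface, every bounce, every frame. That's the gap between "rasterize triangles and fake it" and actually simulating light.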







