czecherychestnut said:

http://www.beyond3d.com/content/articles/94/ Good article on why ray-tracing isn't necessarily the way of the future. And before anyone slags off the site, these guys are some of the best analysts of graphics architectures and techniques around.

OT, x86 fails on GFlops per mm2, which indirectly becomes GFlops per $. PowerPC is much more cost-effective, being able to cram more power into each mm2 of silicon. For a company like Nintendo, I'd say that matters more than having the bleeding edge. Furthermore, in many ways IBM's fabs are more advanced than Intel's, keeping its processors cooking on up-to-date processes.

And lastly, Larrabee is currently vapourware. The first version never saw the light of day, and there is no guarantee the second version will get out the door. The only reason Intel focuses on raytracing so much is that, by its very design, Larrabee 1 sucked at rasterisation; the main justification Intel gave for shitcanning Larrabee 1 was that its performance just wasn't competitive with contemporary GPUs for the amount of silicon required. If we think the Wii 2 will come out in 2012-13, the design will need to be finalised by mid-2011 at the latest, and there has been no indication from Intel that Larrabee 2 would be close to ready by then.

Personally, I think there is more chance Nintendo will get a Cell-lite from IBM than x86 from Intel. That way compatibility is maintained, performance is massively increased, and, more importantly, project risk is reduced. Going forward I see consoles relying less and less on raw CPU power and more and more on GPU power, in line with PCs.
Slagging off realtime raytracing has become popular in the last few years because A) a lot of people are invested in the current way of doing things, and B) rasterisation has been able to keep improving via more and more convoluted hacks, though at the cost of a massive increase in development costs. The article, for example, points out that realtime raytracing is bad at subsurface scattering. OK, there are good approximations of the full Monte Carlo approach, but even if there weren't, rasterisation doesn't do subsurface scattering at all.
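To make "full Monte Carlo" concrete: a subsurface light path is literally a random walk through the medium. Here's a minimal toy sketch of that walk through a homogeneous slab; the scattering and absorption coefficients are made-up illustrative values, not anything from the article:

```cpp
// Toy Monte Carlo subsurface scattering: isotropic random walks through a
// homogeneous slab. Coefficients are illustrative assumptions, not real data.
#include <cmath>
#include <cstdio>
#include <random>

int main() {
    const double sigma_s = 10.0;              // scattering events per unit length (assumed)
    const double sigma_a = 0.5;               // absorption per unit length (assumed)
    const double sigma_t = sigma_s + sigma_a; // extinction coefficient
    const double albedo  = sigma_s / sigma_t; // survival probability per event
    const double depth   = 1.0;               // slab thickness

    std::mt19937 rng(42);
    std::uniform_real_distribution<double> u(0.0, 1.0);

    const int N = 100000;
    int transmitted = 0, reflected = 0;
    for (int i = 0; i < N; ++i) {
        double z = 0.0, dz = 1.0;             // enter the slab heading straight in
        for (;;) {
            z += dz * -std::log(1.0 - u(rng)) / sigma_t; // sample free-flight distance
            if (z < 0.0)   { ++reflected;   break; }     // walked back out the front
            if (z > depth) { ++transmitted; break; }     // made it through the slab
            if (u(rng) > albedo) break;                  // absorbed inside the medium
            dz = 2.0 * u(rng) - 1.0;                     // isotropic scatter: new direction cosine
        }
    }
    std::printf("reflected %.3f, transmitted %.3f\n",
                (double)reflected / N, (double)transmitted / N);
}
```

The good approximations (Jensen-style dipole diffusion models, for instance) replace that per-sample walk with a closed-form profile, which is why they're cheap; a raytracer can still fall back to the full walk where the approximation breaks down, while rasterisation has no native equivalent of either.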
Given infinite hardware resources, raytracing will always beat rasterisation; that's just a fact. Now, the article implicitly argues that, given current realistic hardware constraints, realtime raytracing has to be superior in every way (not just with shiny balls, glass, and water) for rasterisation to be supplanted. I would argue that it just has to be good enough, given how much it would cut development costs.
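On the "shiny balls" point, it's worth seeing how little code reflection costs a raytracer. A minimal grayscale sketch; Vec3, Sphere, trace, and every value here are illustrative placeholders, not anyone's actual renderer:

```cpp
// Why shiny balls are raytracing's home turf: reflection is just recursion.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 {
    double x, y, z;
    Vec3 operator+(Vec3 b) const { return {x + b.x, y + b.y, z + b.z}; }
    Vec3 operator-(Vec3 b) const { return {x - b.x, y - b.y, z - b.z}; }
    Vec3 operator*(double s) const { return {x * s, y * s, z * s}; }
    double dot(Vec3 b) const { return x * b.x + y * b.y + z * b.z; }
};

struct Sphere { Vec3 center; double radius; double reflectivity; };

// Closest ray/sphere intersection distance (unit-length dir), negative on miss.
double hit(const Sphere& s, Vec3 origin, Vec3 dir) {
    Vec3 oc = origin - s.center;
    double b = oc.dot(dir);
    double disc = b * b - (oc.dot(oc) - s.radius * s.radius);
    return disc < 0 ? -1.0 : -b - std::sqrt(disc);
}

double trace(const std::vector<Sphere>& scene, Vec3 origin, Vec3 dir, int depth) {
    if (depth == 0) return 0.0;                 // recursion budget exhausted
    const Sphere* nearest = nullptr;
    double tMin = 1e30;
    for (const Sphere& s : scene) {
        double t = hit(s, origin, dir);
        if (t > 1e-4 && t < tMin) { tMin = t; nearest = &s; } // 1e-4 avoids self-hits
    }
    if (!nearest) return 0.1;                   // background radiance (assumed)
    Vec3 p = origin + dir * tMin;
    Vec3 n = (p - nearest->center) * (1.0 / nearest->radius);
    Vec3 r = dir - n * (2.0 * dir.dot(n));      // mirror direction about the normal
    double local = std::max(0.0, n.dot({0, 1, 0})); // toy shading: overhead light
    // The entire "effect" is one line: follow the reflected ray recursively.
    return (1.0 - nearest->reflectivity) * local
         + nearest->reflectivity * trace(scene, p, r, depth - 1);
}

int main() {
    std::vector<Sphere> scene = {{{0, 0, 3}, 1.0, 0.8}}; // one shiny ball (assumed)
    std::printf("radiance: %.3f\n", trace(scene, {0, 0, 0}, {0, 0, 1}, 4));
}
```

Glass and water are the same recursive move with a Snell's-law refraction direction instead of the mirror one. In a rasteriser, each of those effects is its own separate hack (environment maps, planar reflection passes, screen-space tricks), which is exactly where the development-cost argument bites.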







