LordTheNightKnight said: bugmenot said: So far as I can tell, the lack of programmable shaders shouldn't be a problem if programmers tried using their imaginations and brains a bit. Factor 5 achieved normal mapping on the GC (on the graphics chip) by dropping down to low-level machine code.
GC was capable of hardware specularity, environment mapping and bump mapping without any particular coaxing.
Shadow of the Colossus either achieved or cleverly faked fur shading, self-shadowing, soft shadowing, vector-based motion blur, HDR lighting, bloom lighting, a couple of specularity-like effects, volumetric fog, and even a kind of sub-surface-scattering effect, ON THE FREAKING PS2. Sure, the PS2 had vector units alongside the CPU, but the GPU didn't even have hardware transform and lighting. My point is, if developers tried, they could do versions of all these effects, and add normal mapping and depth of field for good measure, on the Wii.
As it is, developers are treating the Wii like it was an N64. I've heard developers talk about how difficult it is to do environment mapping on the Wii. NINTENDO DID IT ON THE FIRST N64 GAME!! EA said it was impossible to do inverse-kinematics-style animation until the Xbox 360 came along. NINTENDO DID IT ON WAVE RACE 64!
Like all the problems with Wii games to date, I put the current limits of displayed graphics down to laziness. The Wii doesn't appear to be less powerful than the Xbox in any particular area. It just seems to be less flexible (read: developers can't drop in their usual shader code without thinking). |
I have to add that the N64 was built to be heavily programmable. Unfortunately, that programmability was mainly used for getting around buses that were too slow or too narrow. Also, I don't think Mario 64 had normal mapping. It had trilinear filtering, but that's a different effect from normal mapping, which is a more advanced version of bump mapping (in that it uses a full-color overlay to simulate a rough surface, instead of a monochrome one). As for the Wii, I'd like to know what its processor can do. Most of the shading and mapping on the PS2 was thanks to its processor, and the Wii has those five execution units. What do they do? Also, did the GC have programmable shading? If it didn't, it didn't seem to hurt its graphics. |
Sorry for my late reply, but I have to correct you. If you read the post you quoted, I said that Mario 64 had environment mapping (Metal Mario), not normal mapping, which didn't become common in games until around the time Far Cry came along. For those who don't know, environment mapping is a trick for faking reflective surfaces. A fake texture representing an image of the surroundings (or at least a guess at it) is pasted onto a surface, and the system "slides" the texture across the surface in response to the changing angle between the surface and the camera, much as the image in a mirror moves when the mirror rotates. This creates the optical illusion of a mirrored surface. True reflections can be achieved with shaders by calculating the angle of the surface relative to the camera, tracing the line of incidence of light back to the original surface, and rendering that. This is extremely power-hungry, but it has the advantage of producing a true reflection that actually matches the surroundings. Clever programmers use an intermediate version: rendering a low-grade version of the environment onto a box in place of the shiny object, then using that texture box as a traditional environment map.
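To make the "sliding texture" idea concrete, here's a minimal Python sketch of classic sphere-style environment mapping (the same family of trick as Metal Mario). This is an illustration, not actual N64/GC code; the function names and the use of view-space vectors are my assumptions.

```python
import math

def reflect(incident, normal):
    # standard reflection formula: r = i - 2*(i . n)*n
    d = sum(i * n for i, n in zip(incident, normal))
    return tuple(i - 2.0 * d * n for i, n in zip(incident, normal))

def sphere_map_uv(view_dir, normal):
    """Classic sphere-map texture coordinates from a view-space normal.

    As the surface rotates relative to the camera, the (u, v) lookup
    slides across the pre-made environment texture -- creating the
    illusion of a reflective surface without tracing any real rays.
    """
    rx, ry, rz = reflect(view_dir, normal)
    m = 2.0 * math.sqrt(rx * rx + ry * ry + (rz + 1.0) ** 2)
    return (rx / m + 0.5, ry / m + 0.5)
```

A surface facing the camera head-on samples the centre of the map, e.g. `sphere_map_uv((0.0, 0.0, -1.0), (0.0, 0.0, 1.0))` gives `(0.5, 0.5)`; tilt the normal and the lookup slides toward the texture's edge.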
To elaborate on your explanation of normal mapping: a traditional bump map is a hidden texture map describing a differential response to light. That means, for instance, a dark spot on a bump map will cause that part to remain dark unless the light source is shining brightly from almost directly above, while a light spot will respond brightly across a wide variety of light angles and intensities. The former represents a valley in the surface and the latter a peak. Because only one-dimensional data (height) is encoded, the map is stored as a greyscale image.
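The valley/peak behaviour falls out of basic Lambert shading once the greyscale heights tilt the surface normal. A rough Python sketch, assuming a small 2-D grid of 0..1 grey values (nothing here is console-specific):

```python
import math

def bump_shade(height, x, y, light_dir):
    """Lambert shading with a normal perturbed by a greyscale bump map.

    The slope of the height field tilts a flat surface normal, so a
    texel on the dark side of a valley only brightens when the light
    is nearly overhead, while a peak catches light from many angles.
    """
    # finite-difference slopes of the height field at (x, y)
    dx = height[y][x + 1] - height[y][x]
    dy = height[y + 1][x] - height[y][x]
    # perturb the flat normal (0, 0, 1) by the slopes and renormalise
    nx, ny, nz = -dx, -dy, 1.0
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    n = (nx / length, ny / length, nz / length)
    # Lambertian response, clamped so back-facing light reads as black
    return max(0.0, sum(a * b for a, b in zip(n, light_dir)))
```

On a flat patch with the light straight overhead this returns full brightness; any slope in the heights pulls the result down unless the light lines up with the tilted normal.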
A normal map, by comparison, encodes the angular and height offset between the "high-grade" version of the surface that the developer pre-computed offline and the low-grade version that's actually being displayed. Because these offsets are in three dimensions (x, y and z), representing vertical rotation, horizontal rotation and height, three channels are required to encode them, and developers conveniently assign red, green and blue to the three. The result is a much more realistic response to light, with surfaces reacting to the angle of the light as well as its proximity.
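The red/green/blue encoding above can be sketched in a few lines of Python. The standard remapping is from byte values 0..255 back to components in [-1, 1] (which is why flat normal maps look uniformly light blue, around (128, 128, 255)); the function names here are illustrative:

```python
import math

def decode_normal(rgb):
    """Unpack an RGB normal-map texel (0..255 per channel) into a unit vector.

    Each channel stores one axis of the offset between the high-poly
    and low-poly surface, remapped from [-1, 1] to [0, 255].
    """
    n = tuple(c / 255.0 * 2.0 - 1.0 for c in rgb)
    length = math.sqrt(sum(a * a for a in n))
    return tuple(a / length for a in n)

def normal_map_shade(rgb, light_dir):
    # Lambert term against the decoded per-texel normal: the surface
    # responds to the light's angle, not just its proximity.
    n = decode_normal(rgb)
    return max(0.0, sum(a * b for a, b in zip(n, light_dir)))
```

A flat texel like (128, 128, 255) lit from directly above shades to (almost exactly) full brightness, while a texel whose normal leans away drops off, even though the underlying low-poly geometry never changed.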