Hephaestos said:
greenmedic88 said:

It sounds like you don't have to physically move the camera; the lens physically adjusts to accommodate multiple viewing angles within the same frame of reference, but from a fixed point of perspective.

For those who aren't clear on what that means, think of "bullet time" photography in reverse. Instead of taking multiple shots of a single frame of reference from multiple reference points in sequence, the camera takes multiple shots from a single reference point (you don't physically move the camera), much as normal vision does.

The upside is that you can "turn" an image shot this way on any 2D display. In other words, your single frame of reference effectively contains multiple viewing angles. I've seen QuickTime VR files that simulate the same effect.

That data should be enough to produce a 3D image when viewed on a 3D display.
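
To make that concrete, here's a minimal sketch of the idea, assuming the camera records a 4D light field indexed by aperture position (the array names, grid size, and resolution below are made up purely for illustration):

```python
import numpy as np

# Hypothetical 4D light field: lf[u, v, y, x] holds the image seen
# through aperture position (u, v). A plenoptic camera records all
# of these views in a single exposure, from one physical position.
U, V, H, W = 9, 9, 480, 640              # assumed 9x9 grid of viewpoints
lf = np.zeros((U, V, H, W), dtype=np.float32)

def sub_aperture_view(lf, u, v):
    """Pick one viewpoint out of the recorded bundle of rays.

    Changing (u, v) "turns" the scene slightly even though the
    camera itself never moved -- the effect described above.
    """
    return lf[u, v]

left_eye = sub_aperture_view(lf, 4, 1)   # view from the left side of the lens
right_eye = sub_aperture_view(lf, 4, 7)  # view from the right side of the lens
# Feeding left_eye/right_eye to a stereo (3D) display would give depth,
# and sweeping v from 0 to 8 lets you "turn" the image on a 2D display.
```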


One-eyed dudes don't see 3D... and normal vision has two origin points.

Put one hand over one of your eyes. Now tell me you can't see depth, because I can. Let me explain why: I can tell where one object ends and another begins, and from that my brain can estimate the size of the object. Knowing that object's size, I can then estimate the distance of all the other objects relative to it. Other cues, like light, also play a part. And when I move my head (like this camera), I see the object from different angles. Sure, with one eye closed it's slightly harder to tell how far away an object is than with both eyes, but to say that you can't see depth with one eye just isn't true.
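
To put rough numbers on that size-to-distance reasoning, here's a back-of-the-envelope sketch using a simple pinhole model (the focal length and the sizes are made-up values, just for illustration):

```python
# The "known size" depth cue with a pinhole camera model.
# All numbers below are invented for the example.
focal_length_mm = 50.0   # assumed focal length of the eye/camera
person_height_m = 1.8    # we roughly *know* how tall people are

def distance_from_apparent_size(real_size_m, image_size_mm, focal_mm):
    # Pinhole geometry: image_size / focal = real_size / distance,
    # so distance = real_size * focal / image_size.
    return real_size_m * focal_mm / image_size_mm

# A person whose image spans 9 mm on the sensor/retina:
print(distance_from_apparent_size(person_height_m, 9.0, focal_length_mm))  # 10.0 m
# The same person spanning only 3 mm must be three times farther away:
print(distance_from_apparent_size(person_height_m, 3.0, focal_length_mm))  # 30.0 m
```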

Well, I suppose they wouldn't be able to see a 3D image on a screen, but everything else I'm sure they can see.



correct me if I am wrong
stop me if I am biased
I love a good civilised debate (but only if we can learn something).