shams said:

Here is what I would like to see in the Wii 2.0 - due for release around Xmas 2010:

 - 2.0GHz CPU
 - new "Raycasting" GPU tech (completely new gfx solution, non-poly based), running up to 720p. Supports native shadowing, lighting, complex models, etc... (***)
 - advanced Wiimote (includes camera slot, mic, etc.), better position detection
 - 256MB RAM
 - built-in "Millipede" storage (*)
 - cartridge slot (**)
 - form factor about 1/4 the size of the current Wii

Launch price: US$199 (with game)

...

* - a Millipede drive would provide an immense amount of on-board storage (1TB and up). No idea if the tech will be ready and commercially available by 2010 - but if so, it would be cool. If not, a fast USB/memory card slot would allow for large (expandable) quantities of on-board storage.

http://researchweb.watson.ibm.com/journal/rd/443/vettiger.html

** - no CD-based drive. Removal of the drive will result in a much smaller/simpler unit, higher xfer rates, lower power consumption, improved reliability - and a much, much lower cost. By using a cartridge, the console gets direct access to all of the data on the card - giving it an effective memory space of gigabytes. By 2010, cheap cartridge media should reach the 10GB (or higher) range.
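To make the "direct access" point concrete, here's a toy sketch in Python - a memory-mapped file standing in for a cartridge, a plain seek-and-read standing in for an optical drive. The file names are made up for illustration:

```python
import mmap

# Toy analogy: a cartridge mapped into the address space behaves like
# a memory-mapped file - any byte is addressable immediately - while
# an optical disc behaves like a stream you must seek and then read.

# "Cartridge": direct access to any offset, no seek penalty.
with open("cartridge.bin", "rb") as f:
    cart = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    texture = cart[0x400000:0x410000]   # 64KB slice, fetched on demand
    cart.close()

# "Disc": physical head seek (tens of ms on real hardware), then stream.
with open("disc.iso", "rb") as f:
    f.seek(0x400000)
    texture = f.read(0x10000)
```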

*** - I'd love to see Ninty experiment with new gfx technologies. Even though resolutions have improved, image quality is not even close to 'TV' level. Whether some form of 'line-rendering, hardware raycasting' is feasible, I don't know. It may even be unnecessary, or dumb from a business point of view (it would make it hard for companies to port existing next-gen games to the platform).

...

I know one thing for sure. This "lag" business model is a great idea. When the Wii 2.0 comes out, there will be a compelling business reason for companies to port 360/PS3 titles to the new console (just the same way PS2 games are being ported to the Wii now).

 


I believe you mean raytracing, not raycasting (ray casting predates scanline polygon rendering and was used to produce early pseudo-3D games like Wolfenstein 3D) ...
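For anyone unsure of the distinction, here's a rough toy sketch in Python (mine, purely illustrative - nothing to do with any real console hardware): ray casting marches one ray per screen column through a 2D grid, while ray tracing intersects one ray per pixel against actual 3D geometry and, in a full tracer, recurses for shadows and reflections.

```python
import math

# Ray casting (Wolfenstein-style): ONE ray per screen column, marched
# through a 2D tile map until it hits a wall. No real 3D geometry.
GRID = ["1111",
        "1001",
        "1001",
        "1111"]   # '1' = wall, '0' = empty floor

def cast_column(px, py, angle, step=0.01, max_dist=20.0):
    """March a single ray from (px, py); return distance to the first wall."""
    dx, dy = math.cos(angle), math.sin(angle)
    dist = 0.0
    while dist < max_dist:
        x, y = px + dx * dist, py + dy * dist
        if GRID[int(y)][int(x)] == '1':
            return dist            # on-screen wall height ~ 1 / dist
        dist += step
    return max_dist

# Ray tracing: ONE ray per pixel, intersected analytically with 3D
# geometry (here a single sphere); a full tracer would also recurse
# for shadows, reflections and refraction.
def trace_pixel(origin, direction, center, radius):
    """Return the hit distance of a normalized ray against a sphere, or None."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    b = 2 * (ox * direction[0] + oy * direction[1] + oz * direction[2])
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * c           # quadratic discriminant (|direction| = 1)
    if disc < 0:
        return None                # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None
```

For example, `trace_pixel((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)` returns 4.0 - the ray hits the sphere's surface one unit short of its centre.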

You may actually be correct, though:

"A working prototype of this hardware architecture has been developed based on FPGA technology. The ray tracing performance of the FPGA prototype running at 66 MHz is comparable to the OpenRT ray tracing performance of a Pentium 4 clocked at 2.6 GHz, despite the available memory bandwith to our RPU prototype is only about 350 MB/s. These numbers show the efficiency of the design, and one might estimate the performance degrees reachable with todays high end ASIC technology. High end graphics cards from NVIDIA provide 23 times more programmable floating point performance and 100 times more memory bandwidth as our prototype. The prototype can be parallelized to several FPGAs, each holding a copy of the scene. A setup with two FPGAs delivering twice the performance of a single FPGA is running in our lab. Scalability to up to 4 FPGA has been tested."

http://graphics.cs.uni-sb.de/~woop/rpu/rpu.html 

This was hardware that was built and tested in 2004 and compared directly against the hardware of the day (the paper was published in 2005). The graphics it could produce were roughly PlayStation/N64 level in terms of geometry (much higher in terms of texturing), and the prototype was fabricated well below the cutting-edge process of the time; on a bleeding-edge process it could have managed PS2/GameCube/Xbox levels of geometry (with much better textures) and potentially some impressive texture effects. By 2011 or so, as graphics becomes far more about lighting and shading effects than raw geometry, hardware raytracing could easily become far more impressive than conventional raster scan conversion.
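To put rough numbers on that, a quick back-of-envelope in Python using only the figures from the quote above - the ASIC clock multiplier and chip count are my own assumptions, not the paper's claims:

```python
# Back-of-envelope from the numbers quoted above.
p4_equiv_ghz    = 2.6   # 66 MHz FPGA ~ OpenRT on a 2.6 GHz Pentium 4 (quote)
asic_clock_mult = 6     # ASSUMPTION: an ASIC port clocking ~400 MHz vs 66 MHz
chips           = 4     # scaling tested to 4 FPGAs, roughly linear (quote)

equiv = p4_equiv_ghz * asic_clock_mult * chips
print(f"roughly OpenRT on a {equiv:.0f} GHz Pentium 4")   # ~62 GHz-equivalent
```

Even with generous error bars, that's a software-raytracing equivalent no single CPU of the era could touch - which is why the idea doesn't seem crazy.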