Khuutra said:
...

This sounds like it involves a lot more high-level knowledge than I would be able to intelligibly level at it! Or level at it at all, I suppose.

Does that mean we're potentially coming up against a physical limitation concerning the power and scale of classic computing? Like.... within the next few decades?

The 'end of Moore's law' has been predicted by analysts to be about ten years away for as long as consumer CPUs have existed, but the rate of progress has yet to slow down.

It will actually be cost, not technology, that limits advancement. At the next generation (22nm), only the big three of Intel, TSMC and GlobalFoundries will be able to afford to convert a fab to the new node and still make the money back. Other chip designers will have to use one of those three for fabrication, and the costs will only climb faster in the future.

Further shrinkage past 16nm or so will require rethinking what it means to make a transistor (as in this seven-atom example, or using electron spin or photons instead of electron charge to record state). But classical computing will endure regardless of what's going on at the micro level, because most of the computational tasks we're familiar with don't map well onto quantum logic.

As an example of what I think will happen: WereKitten is right that quantum effects will interfere with the accuracy and certainty of calculations. I think we'll see chips that run any classical task, say, ten times in parallel and then take the most common result. Still, though: would any business use a CPU that is only 99.9999% accurate per operation, considering how many individual operations are required (i.e. trillions)?
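To make the arithmetic concrete, here's a rough Python sketch of that majority-vote idea. The error rate, the replica count, and names like flaky_op/voted_op are illustrative assumptions on my part, not any real chip design.

import random
from collections import Counter

PER_OP_ERROR = 1e-6   # a "99.9999% accurate" operation (assumed rate)
REPLICAS = 10         # run each operation ten times in parallel

def flaky_op(correct_result=42):
    """One operation that occasionally returns a wrong answer."""
    if random.random() < PER_OP_ERROR:
        return correct_result + random.randint(1, 100)  # garbage result
    return correct_result

def voted_op():
    """Run the operation REPLICAS times and keep the most common result."""
    results = [flaky_op() for _ in range(REPLICAS)]
    return Counter(results).most_common(1)[0][0]

# Why a single 99.9999%-accurate core isn't enough: over a trillion
# operations the chance of getting through with no error at all is
# (1 - 1e-6) ** 1e12, which is effectively zero.
ops = 1e12
print("P(no error, no voting):", (1 - PER_OP_ERROR) ** ops)

# With 10-way voting, a final result is only wrong if some wrong answer
# outnumbers the correct one, i.e. most replicas fail the same way,
# which is astronomically unlikely at this error rate.
print("voted result:", voted_op())

The point of the sketch is just that per-operation accuracy has to be weighed against the sheer number of operations, and that redundancy plus voting is one plausible way to buy that accuracy back.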