pokoko said: The key concept, as far as I can see, is the idea of self-awareness, which then forms into conscious and unconscious self-improvement. When a computer system looks at a new problem, recognizes it as a new problem, and then goes about understanding the parameters of that new problem before ultimately solving it and archiving it internally, that's the point when "artificial intelligence" will drop the "artificial" and simply become a new form of intelligence. Tens of millions of years from now, this new dominant form of intelligence will probably consider humanity in much the same way we consider the first single-celled organisms.
The concern is that it wouldn't take tens of millions of years. The article I linked suggests that intelligence growth is recursive, much like Moore's Law (I initially wrote Godwin's Law, and I have no fucking idea why). What made me think about this most is that people often look back at human progress and assume it's a flat line, and the author does a fairly good job of showing it's not.
The worry is that a human-level AI could become an ASI (Artificial Superintelligence) very quickly, without our knowing about it. The anecdote about 'Turry' in the second part of the article is an interesting example.