Forums - General - AI - Something we should all be talking about?

The key concept, as far as I can see, is the idea of self-awareness, which then forms into conscious and unconscious self-improvement. When a computer system looks at a new problem, recognizes it as a new problem, and then goes about understanding the parameters of that new problem before ultimately solving it and archiving it internally, that's the point when "artificial intelligence" will drop the "artificial" and simply become a new form of intelligence.

Tens of millions of years from now, this new dominant form of intelligence will probably consider humanity in much the same way we consider the first single celled organisms.



pokoko said:
The key concept, as far as I can see, is the idea of self-awareness, which then forms into conscious and unconscious self-improvement. When a computer system looks at a new problem, recognizes it as a new problem, and then goes about understanding the parameters of that new problem before ultimately solving it and archiving it internally, that's the point when "artificial intelligence" will drop the "artificial" and simply become a new form of intelligence.

Tens of millions of years from now, this new dominant form of intelligence will probably consider humanity in much the same way we consider the first single celled organisms.

The concern is that it wouldn't take tens of millions of years.  The article I linked suggests that intelligence is recursive, much like Moore's Law (I initially wrote Godwin's Law, because I have no fucking idea why).  What made me think about this most is that people often look back at human progress and assume it's been a flat line; that guy does a fairly good job of showing it's not.

The concern is that a human-intelligence-level AI could potentially become an ASI (Artificial Superintelligence) very quickly, without our knowing about it.  The anecdote about 'Turry' (in the second part of the article) is quite interesting, for example.



mornelithe said:

The concern is that it wouldn't take tens of millions of years.  The article I linked suggests that intelligence is recursive, much like Godwin's Law.  What made me think about this most is that people often look back at human progress and assume it's been a flat line; that guy does a fairly good job of showing it's not.

The concern is that a human-intelligence-level AI could potentially become an ASI (Artificial Superintelligence) very quickly, without our knowing about it.  The anecdote about 'Turry' (in the second part of the article) is quite interesting, for example.

I didn't say it would take tens of millions of years for it to happen.  I'm referring to how we will be remembered long after humanity is gone.



pokoko said:
mornelithe said:

The concern is that it wouldn't take tens of millions of years.  The article I linked suggests that intelligence is recursive, much like Godwin's Law.  What made me think about this most is that people often look back at human progress and assume it's been a flat line; that guy does a fairly good job of showing it's not.

The concern is that a human-intelligence-level AI could potentially become an ASI (Artificial Superintelligence) very quickly, without our knowing about it.  The anecdote about 'Turry' (in the second part of the article) is quite interesting, for example.

I didn't say it would take tens of millions of years for it to happen.  I'm referring to how we will be remembered long after humanity is gone.

Oh, sorry, that's what I thought you meant :)   I also wrote the wrong Law, lol.



mornelithe said:
pokoko said:
The key concept, as far as I can see, is the idea of self-awareness, which then forms into conscious and unconscious self-improvement. When a computer system looks at a new problem, recognizes it as a new problem, and then goes about understanding the parameters of that new problem before ultimately solving it and archiving it internally, that's the point when "artificial intelligence" will drop the "artificial" and simply become a new form of intelligence.

Tens of millions of years from now, this new dominant form of intelligence will probably consider humanity in much the same way we consider the first single celled organisms.

The concern is that it wouldn't take tens of millions of years.  The article I linked suggests that intelligence is recursive, much like Moore's Law (I initially wrote Godwin's Law, because I have no fucking idea why).  What made me think about this most is that people often look back at human progress and assume it's been a flat line; that guy does a fairly good job of showing it's not.

The concern is that a human-intelligence-level AI could potentially become an ASI (Artificial Superintelligence) very quickly, without our knowing about it.  The anecdote about 'Turry' (in the second part of the article) is quite interesting, for example.

Okay, first, a disclaimer: I'm really drunk.  I'm sorry, but it's true.  Second, I was trying to be glib while making a point.  That's my fault.  I was trying to say that one day we'll be remembered as one step in the evolutionary process.  Humans like to think that we're the end result, but that's just arrogance.  Humanity will one day be considered in the same light as the first single-celled organisms.  We are just one step in the process.  (I apologize, but I can only see out of one eye.)  The graph of progress in that article reinforces that, in my mind.  We will be improved upon, and evolution will continue without us.