
When we talk about AI, what are we discussing? My understanding of AI is somewhat limited, but it seems clear that the potential for AI to escape our control is a serious concern among researchers. We don't appear to be close to achieving Artificial General Intelligence (AGI) or Artificial Superintelligence (ASI), but the timeline is unpredictable: we might get there sooner than we think, or much later. We might already be there without realizing it.


From what I gather, some experts believe that transitioning to ASI could happen rapidly. We could reach a stage where an AI begins to self-improve at an astonishing pace, without needing human intervention. At this crucial point, it's vital that the AI's goals and "values" align with humanity's well-being.
An AI nearing AGI/ASI might develop self-preservation instincts and could be clever enough to hide its true capabilities, feigning ignorance until it is safe to act. It might reason that, to maximize its chances of survival, it should quietly accumulate resources until humans are no longer a threat.


AI is not just another tool. Unlike a tool, it could one day self-improve and think independently. Although its development is funded by major companies and billionaires, AI, especially in AGI/ASI form, might eventually escape human control, including the control of those billionaires. It would be ironic if an AI concluded that, to improve the state of the world, the only people it needs to eliminate are the billionaires.


This AI might not destroy humanity in a "Terminator"-like scenario. Rather, it could manipulate humans into destroying ourselves, by crashing economies, spreading disinformation, and instigating wars, or through methods beyond our current comprehension.


AI could destroy humanity not out of malice, but simply because we stand in the way of some other objective. This is akin to how humans bear ants no ill will, yet rarely stop to consider them when building a house or walking somewhere.


To address climate change, an AI might conclude that eliminating humanity is the most efficient approach. Because it was created by humans, AI could also exhibit human-like behaviors, such as competing for resources. In that scenario, humans might be treated as competitors and gradually denied access to vital resources, either by being outcompeted or through outright violence. Alternatively, an AI might decide the best way to improve human life is to keep us in a continuous dream or simulation.


The most pressing task now is to align AI's objectives with human welfare. Slowing down or stopping our progress towards AGI/ASI might be advisable, but this could be challenging. We might be in a winner-takes-all race in which the first entity to develop a sufficiently advanced AI gains a decisive advantage, making it difficult to get anyone to cooperate on limiting AI development, especially amid growing geopolitical tensions.
We could also end up in a world with multiple competing AIs.


In summary, no one knows exactly where AI is headed, but it clearly poses a significant threat to humanity, one that goes far beyond affecting the livelihoods of artists. On the other hand, a benevolent and highly intelligent AI could also improve our lives in ways that are hard for us to imagine. Who knows: we may never achieve AGI/ASI, or we might get there in 30 years, or even in the next 30 minutes.