
Soundwave said:
sc94597 said:

What exists today and the research being done today set the precedent and determine the range of possibilities for what exists 20-30 years from now.

We're not even accounting for AI itself designing chips, technology, and even other AI, which will probably happen at some point if you keep pouring billions or trillions of dollars into its development and have all these megacorporations hyper-incentivized to create something better and better.

Who is "we"? I can guarantee you that people actually doing AI research do think about these topics all of the time. 

The logical conclusion of your argument here is that all technological and scientific advancements have the potential to lead us to our destruction if we extend the timeline far enough, and therefore we should stop pursuing any of them.

After all, when 19th century physicists were studying the ultraviolet catastrophe, they couldn't have predicted that the physics that resolved it (quantum mechanics and nuclear physics) would eventually lead to the invention of nuclear bombs.

That conclusion is nonsensical to anybody who isn't an anarcho-primitivist. You become concerned about x-risks when there is sufficient evidence that they exist, not because they might exist at some indeterminate point in the future when you can't explain how the current technology leads there.