LegitHyperbole said:
Nobody thought LLMs could do what they do, and do it as well as they do it. Thing is, we don't know when and at what point in development the singularity is going to happen, and it being a singularity, we don't know anything about what will happen or how it'll take shape. It could happen while they are training GPT-6 and take ten minutes, for all we know, as an extreme example. I'm sorry, but saying this tech is not a threat is burying your head in the sand; it's more dangerous than the atomic bomb even before ASI is reached. Competent low-level AGI that never evolves past that point would be so disruptive it'd change everything about society. Look how much the Internet has changed things in ten years, and sure, that's only faster telecommunications.
Big difference between LLMs and AGI. AGI is bordering on sci-fi. I never really doubted LLMs.
We're still nowhere near AGI, and we're investing tens of billions into it; how long are investors going to be okay with that?
AI tech is a threat through things like deepfakes. I do believe AI will be used for good in some areas, and I also believe it will be used for bad. But at this stage I have no reason to believe it's going to revolutionise and completely change the world, and that's the problem with companies selling it like that: once investors realise it's not going to change the world, they'll realise they've been fed a lie and stocks will plummet.
We've often had tech that was hyped to the moon and amounted to nothing, or at least not as much as claimed.
AGI could happen, and it equally may not; there's no need to jump to doomsday scenarios over something we're so unsure about.
Last edited by Ryuu96 - on 27 August 2024