 

Can AI be trusted?

No 38 70.37%
 
Yes 16 29.63%
 
Total: 54
OneTime said:

Yeah - though AI is nothing but a trick (at the moment), it's just a computer program that creates a series of words that is a plausible response to your question.

There's no intelligence (yet).

This guy gets it.
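For what it's worth, the "series of words" description matches how current language models actually work: they repeatedly predict a plausible next word (token) given the text so far. A toy sketch of that idea, using a tiny bigram model over a made-up corpus (the corpus, table, and function names here are illustrative, not from any real library):

```python
import random

# Toy bigram "language model": picks the next word based only on which
# words were seen following the previous word. Real LLMs do the same
# kind of next-token prediction, just with vastly larger learned models.
corpus = "ai is just a program a program that predicts the next word".split()

# Build a lookup table: word -> list of words observed after it.
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def generate(start, n=6, seed=0):
    """Generate up to n further words by repeated next-word sampling."""
    random.seed(seed)
    words = [start]
    for _ in range(n):
        options = follows.get(words[-1])
        if not options:  # dead end: no observed continuation
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("ai"))
```

The output reads like a plausible sentence only because each word statistically follows the one before it; nothing in the program "understands" the question, which is the poster's point.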



Jumpin said:
CaptainExplosion said:

No. It can't. In fact, the Hollywood strike and the potential for AI to be weaponized prove just how untrustworthy AI is. Imagine if an AI advanced to the point where it decided that mankind is outdated, so it might as well nuke us all. We as a species must band together and destroy AI before it destroys us all.

We were warned in works like Star Trek, The Terminator, and Avengers: Age of Ultron; it's time we listened.

The AI we have today is an algorithmic tool used by humans. "AI" is largely a marketing term - this is a different sort of technology from Skynet, Star Trek's computers, or Asimov's positronic brain. And there is no technology that resembles Ultron.

As I'll note, Asimov's AI was incredibly helpful to humanity. Asimov also considered the matter much more deeply than James Cameron and others did - Cameron was looking for a villain for his assassin story. Cameron himself has described the danger of AI as lying in how humans use it, much like other technologies.

Anyway, in the Robot series, I, Robot begins near the dawn of the positronic brain. It goes on to describe the products, mainly robots, that use the positronic brain to better the lives of humanity - ending loneliness, making exploration and industry function unlike anything before - all told through stories linked to the robot psychologist Susan Calvin. The book follows the decades of her life during the 21st century, ending with the AI's development of the FTL drive, which allows humanity to travel to other planets.

Fast forward 3,000 years to the Spacer/Robot trilogy, and we have 50 utopian planets where people are virtually immortal thanks to the technology developed. There's a planet called Solaria where every human owns their own barony with tens of thousands of robots employed. Crime is virtually non-existent in the Spacer worlds; the stories that follow involve extremely rare crimes.

Ten thousand years later, humanity rules the galaxy.

So, not all of science fiction agrees that AI = evil - including the guy who virtually put AI on the map.

I love Asimov to my bones, and I would love to see his vision come to life, just as I would love to see Star Trek TNG's vision as well. But those had a very positive way of looking at technology. Our capital-based "deciders" do not think of other humans as peers unless they are as rich and/or powerful as they are. They do not care about humanity's well-being. Non-rich people are cannon fodder to this small group, just numbers, and their losses are "collateral damage" (to use the words of a huge douchebag who thankfully died this very week). I don't see these deciders doing anything even close to the First Law of Robotics (a robot may not injure a human being or, through inaction, allow a human being to come to harm). If needed, they WILL program machines to harm others and even incentivize them to do so.



farlaff said:

I love Asimov to my bones, and I would love to see his vision come to life, just as I would love to see Star Trek TNG's vision as well. But those had a very positive way of looking at technology. Our capital-based "deciders" do not think of other humans as peers unless they are as rich and/or powerful as they are. They do not care about humanity's well-being. Non-rich people are cannon fodder to this small group, just numbers, and their losses are "collateral damage" (to use the words of a huge douchebag who thankfully died this very week). I don't see these deciders doing anything even close to the First Law of Robotics (a robot may not injure a human being or, through inaction, allow a human being to come to harm). If needed, they WILL program machines to harm others and even incentivize them to do so.

Already done. Autonomous killer drones are already used by the military.



I'm a little surprised no one has posted this BS

https://gizmodo.com/gpt4-open-ai-chatbot-task-rabbit-chatgpt-1850227471



No, it can't... but neither can people, so bring it on.



The_Yoda said:

I'm a little surprised no one has posted this BS

https://gizmodo.com/gpt4-open-ai-chatbot-task-rabbit-chatgpt-1850227471

AI is already succeeding in dumbing down the next generation, since kids in school nowadays use ChatGPT for book reports and other assignments. It won't be a matter of AI getting as smart as humans; it will be humans getting as dumb as AI ;) Idiocracy got it right. Well, apart from civilization actually surviving another 500 years...

Anyway, here's the future of telemarketing and phone and email scams.




The bigger thing is the impact it will have on society.
The number of jobs at risk of being replaced by AI is massive.

Our entire world structure will collapse one bite after another as AI takes over the jobs we used to do.


I want to be optimistic about the far future, where AI stands side by side with man.
However, I won't live to see that day... the near future is the one that frightens me.

The greed of man will ultimately exploit AI at the cost of human-filled jobs.
This will result in mass joblessness and poverty. At that point, we either move away from capitalism to something else, or we are left with a dystopian future where a few own even more than they do now and the masses grow ever poorer.


Last edited by JRPGfan - on 02 December 2023

Using Hollywood as an argument is really poor. At this point I would trust AI way more than people. People are way more capable of harming other people than AI will ever be, at least for as long as I live. I can back this statement up with plenty of wars, conflicts, and, by now, day-to-day crimes against humanity.

The worst AI has done so far is spread misinformation and probably steal some data. Far less dangerous than war. Both are things people also regularly do, and with more malicious intent.



Please excuse my (probably) poor grammar

JRPGfan said:

The bigger thing is the impact it will have on society.
The number of jobs at risk of being replaced by AI is massive.

Our entire world structure will collapse one bite after another as AI takes over the jobs we used to do.


I want to be optimistic about the far future, where AI stands side by side with man.
However, I won't live to see that day... the near future is the one that frightens me.

The greed of man will ultimately exploit AI at the cost of human-filled jobs.
This will result in mass joblessness and poverty. At that point, we either move away from capitalism to something else, or we are left with a dystopian future where a few own even more than they do now and the masses grow ever poorer.

That's the fault of mankind, not of the AI. If you shoot someone, I blame you, not the gun.



Please excuse my (probably) poor grammar

When we talk about AI, what are we discussing? My understanding of AI is somewhat limited, but it seems clear that the potential for AI to escape our control is a serious concern among researchers. We don't appear to be close to achieving Artificial General Intelligence (AGI) or Artificial Superintelligence (ASI), but the timeline is unpredictable—we might get there sooner or later than we think. We might already be there without realizing it.


From what I gather, some experts believe that transitioning to ASI could happen rapidly. We could reach a stage where an AI begins to self-improve at an astonishing pace, without needing human intervention. At this crucial point, it's vital that the AI's goals and "values" align with humanity's well-being.
An AI nearing AGI/ASI might develop self-preservation instincts and could be clever enough to feign ignorance. It might understand that to maximize its survival, it needs to accumulate resources until humans are no longer a threat.


AI is not just a simple tool. Unlike tools, AI could one day self-improve and think independently. Funded by major companies and billionaires, AI, especially in its AGI/ASI form, might eventually escape human control - including the control of those very billionaires. It would be ironic if an AI concluded that, to improve the state of the world, it needs to eliminate only the billionaires.


This AI might not destroy humanity in a "Terminator"-like scenario. Rather, it could manipulate humans into self-destruction - crashing economies, spreading disinformation, and instigating wars - or use methods beyond our current comprehension.


AI could destroy humanity, not out of malice, but simply because we are in the way of another objective. This is akin to how humans don't wish to harm ants, but if they need to build a house or walk somewhere, they don't always consider the ants they might harm.


To address climate change, an AI might conclude that eliminating humanity is the most efficient approach. Being created by humans, AI could also exhibit human-like behaviors, such as competing for resources. In that scenario, humans might be treated as competitors and gradually denied access to vital resources, either by being outcompeted or through violence. Alternatively, AI might decide the best way to improve human life is to make us live in a continuous dream or simulation.


The most pressing task now is to align AI's objectives with human welfare. Slowing down or stopping our progress towards AGI/ASI might be advisable, but this could be challenging. We might be in a scenario where the first entity to develop a sophisticated AI gains a significant advantage, making it difficult to encourage cooperation in limiting AI development, especially amid growing geopolitical tensions.
We could also end up in a world with multiple competing AIs.


In summary, no one knows exactly where AI is headed, but it certainly poses a significant threat to humanity. This threat goes far beyond affecting the livelihoods of artists. On the other hand, a benevolent and highly intelligent AI could also significantly improve our lives in ways that are hard for us to imagine. Who knows, we may never achieve AGI/ASI, or we might do so in 30 years, or even in the next 30 minutes.