Ryuu96 said:
Mnementh said:

But for that I think it is also essential to treat AIs as equals. There is much talk about AGI in regard to threats to humanity or technological possibility. But we have to consider that a true AI is also basically a person. We need to talk about the moral implications, and that is something I don't see happening yet.

I'm fairly certain investors will get bored and move on to the next thing before we come close to "AGI", if it's even possible. It happens all the time: companies hype up the "new amazing thing", investors pump in money, it doesn't change the world, and they move on to hyping up the next "new amazing thing". I've said it before that I think AI will provide some good and some very bad things and thus needs heavy regulation, but it won't change the world like corporations are hyping it up to, and there's a decent chance it will eventually "crash" and hurt a number of companies: Microsoft, Meta, Google, Nvidia, etc.

These corps are hyping up AI like a space idiot saying we'll send a man to Pluto in the next 20 years, only for the space idiot to barely reach the Moon. I mean, that's still an achievement, but it's a far cry from the promise made. These corporations are hyping up AI to ridiculous levels, pumping in billions upon billions (and destroying all their climate pledges in the process, but that's another issue) and telling investors, "Don't worry, just another $30bn, AI will make us filthy rich eventually." At some point investors are going to get bored, as they always do.

So we're at the point now where LLMs (Claude 3.5 Sonnet) are able to control desktops in virtual environments and perform basic office tasks.

For example, I work as an MLE/Data Scientist in the healthcare industry. The company I work for has already cut costs by incorporating LLMs into its IVR systems and is now able to expand its contracts because of this. We (I work on an analytics/reporting team) use LLMs to help parse unstructured data and to help with provider enrollment form processing. LLMs (combined with strict rules engines that reduce hallucinations to <1%) are also helping expedite prior authorizations.
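To give a flavor of the pattern (not our actual production code, and the schema and field names here are made up), the idea is roughly: the LLM does the fuzzy extraction, and a deterministic rules engine decides whether to trust the result or kick it to a human:

```python
import json
import re
import anthropic  # assumes the Anthropic Python SDK is installed

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Hypothetical schema for a provider enrollment form; field names are made up.
REQUIRED_FIELDS = {"npi", "provider_name", "specialty"}
NPI_PATTERN = re.compile(r"^\d{10}$")  # US NPIs are 10 digits

def extract_enrollment_fields(raw_text: str) -> dict:
    """Ask the model to pull structured fields out of unstructured form text."""
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=512,
        messages=[{
            "role": "user",
            "content": "Extract npi, provider_name, and specialty as a bare "
                       "JSON object from this enrollment form:\n\n" + raw_text,
        }],
    )
    # Production code needs more robust parsing than this; a sketch only.
    return json.loads(response.content[0].text)

def validate(record: dict) -> dict:
    """Deterministic rules engine: reject anything the model may have invented."""
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        raise ValueError(f"missing fields: {missing}")  # route to human review
    if not NPI_PATTERN.fullmatch(str(record["npi"])):
        raise ValueError("NPI failed format check")     # route to human review
    return record
```

The point is that the model never gets the final say: anything that fails the hard checks goes to a person, which is how you get the effective hallucination rate that low.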

A peer team of ours that builds RPA (Robotic Process Automation) processes is already experimenting with Claude's desktop control for light automations. Basically, you can ask Claude to do things in a virtual desktop environment, and even though it is slower than a human, it (unlike a human) doesn't sleep or need breaks. A lot of tasks can be automated in the virtual desktop environment this way, and this will likely be scaled up in a year or so.
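For anyone curious what that looks like under the hood, the desktop control runs through Anthropic's computer-use beta. A minimal sketch (the prompt and screen dimensions are placeholders, and your own harness has to actually execute the returned actions against the VM):

```python
import anthropic  # assumes the Anthropic Python SDK with the computer-use beta

client = anthropic.Anthropic()

# Ask the model for its next desktop action; the "computer" tool definition
# describes the virtual screen it is allowed to control.
response = client.beta.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    tools=[{
        "type": "computer_20241022",
        "name": "computer",
        "display_width_px": 1024,
        "display_height_px": 768,
    }],
    messages=[{"role": "user", "content": "Open the spreadsheet on the desktop "
                                          "and export it as CSV."}],
    betas=["computer-use-2024-10-22"],
)

# The model replies with tool_use blocks (mouse moves, clicks, keystrokes).
# A real agent loop executes each one in the VM, screenshots the result,
# and sends it back, repeating until the task is done.
for block in response.content:
    if block.type == "tool_use":
        print(block.name, block.input)
```

It's slow because every step is a round trip of screenshot in, action out, but that's exactly the kind of thing that gets faster with each model generation.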

On AWS, we use Amazon Transcribe (their managed speech-to-text service), combined with classical ML tooling, to automate insights and analytics pipelines built on unstructured text data in our company. We use that data for some of our reports. It's also being piloted as a general internal transcription service.
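The pipeline side of that is pretty mundane. Kicking off a job looks roughly like this (bucket names, job names, and region are placeholders, not ours):

```python
import boto3

transcribe = boto3.client("transcribe", region_name="us-east-1")

# Start an asynchronous transcription job for one recorded call.
transcribe.start_transcription_job(
    TranscriptionJobName="call-2024-11-01-0001",
    Media={"MediaFileUri": "s3://example-bucket/calls/call-0001.wav"},
    MediaFormat="wav",
    LanguageCode="en-US",
    OutputBucketName="example-bucket-transcripts",
)

# Poll for completion; the JSON transcript then lands in the output bucket
# and feeds the downstream analytics/reporting jobs.
job = transcribe.get_transcription_job(
    TranscriptionJobName="call-2024-11-01-0001"
)
print(job["TranscriptionJob"]["TranscriptionJobStatus"])
```

Nothing exotic, which is sort of the point: the LLM parts slot into ordinary batch pipelines that any data team already knows how to run.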

Reporting productivity has increased by about 20% with the use of coding assistants, and the same goes for classical ML model production for prediction and classification tasks.

This is with the current "dumb" models in a famously slow, highly regulated industry. Everything is just going to get smarter from here on out. 

This hope that AI research isn't going to have a big effect on industry, and that it is the next cryptocurrency or metaverse, is understandable given what that would mean, but let's not be blind here. We're looking at something that will probably be, at minimum, as impactful as the internet and big-data paradigm shifts were. Both had their bear markets and slowdowns, but they still fundamentally changed the global economy and society, sometimes for the better and sometimes for the worse. Even if AI has its dot-com bubble moment (which it probably will), it is going to keep progressing at a rapid rate, just like the internet did after 2000.