
The Artificial Intelligence thread

 

AGI/ASI will be...

A great advancement: 7 (33.33%)
A poor advancement: 6 (28.57%)
Like summoning a demon: 5 (23.81%)
Like summoning God: 1 (4.76%)
No opinion on what AGI/ASI will be like: 2 (9.52%)

Total: 21
haxxiy said:

I think it's plausible that the marginal cost of labor decreases instead of increasing as jobs are automated, if employers see an AI productivity boost as mostly independent of employee quality, for instance.

At the same time, most labor (excluding the jobs that are redundant/automatable already) is full of small moments of zero-shot learning and analogical reasoning that even a pretty general and flexible AI model (say, one that saturates ~all current benchmarks by 2030) will likely struggle with in the short to mid term. So the automation of labor will take a long time, perhaps long enough to deal with the societal and economic concerns around it.

Yeah, the hope is for a prolonged transition: fast enough for the AI bubble not to remain a bubble, slow enough for the job market and job training to adapt. The other problem with quickly replacing entry-level jobs is that those entry-level positions are where people learn and gain the experience to advance to higher-level jobs. Without junior positions, you'll need to invest in training for 'senior' positions.

The hiring freeze on people coming out of school mentioned in the video will eventually lead to an ageing pool of experienced workers with no replacements. Or AI will advance to take over the higher-level jobs as well. But, for example, would people be OK with the entire legal system eventually being replaced by AI? AI can make good on "justice is blind", I guess, yet it will lead to AI setting/deciding policy, etc. Maybe it will lead to a fairer world, maybe to an even more corrupt system. AI can be manipulated as well, a lot more easily than bribing tons of people in power.



LegitHyperbole said:
SvennoJ said:

Kyle is still as narrow-minded as he was a decade ago. He still has the fundamental belief that everything works in the political sphere and everything is controlled by those in power or the 1%, which is clearly not true; they have fuck-all of an idea most of the time. In this video he is completely blind to simple things like IT people's propensity to want to build and advance things, and the power that humanity's collective actions have; the man needs to learn about game theory and the concept of Moloch. He is so annoying when he takes every problem and circles it around to blame a politician or 1%er. It's mind-bogglingly short-sighted that he hasn't gained any awareness that sometimes large groups of people just do things that hurt us, or commit to things that are beneficial but have severe consequences that are out of our control and that we can't stop, with no one having a finger on the button. I bet he thinks climate change is within the control of governments. This is the first video I've seen of him in years, and the last. My God, he is so narrow-minded and hyper-focused on his little political sphere, like it's the cause of and solution to all the universe's problems, but I guess it keeps the paychecks rolling in. Hypocrite.

You're just jealous of his hyperbole :p

Yeah, he takes everything to the extreme and does a lot of shouting, typical YouTuber. But look at the sources he quotes; that's the real story. Banning regulation (like marking AI-generated content as AI) and a hiring freeze on people coming out of school are not good developments.



Concerns are legit, and consequences are clear. And it's not just the disappearance of career paths, but the reduction in workforce necessity. But my take is that a crisis should also be seen as an opportunity for society to take the next step, rather than saying "we're not ready" and backing down. Some of the changes might also contribute to solutions for other (even more) desperate problems we're currently facing in the world, problems many people just refuse to look at because they won't affect them this year.

I think the creativity problem could be solved by changing up the economy. Give every creative the ability to sustain themselves, plus access to educational material on the creative process and dissemination, and they'll bring something great. Remove the bottom of Maslow's pyramid of needs, and meaningful art will become the chief motivational factor for creatives. It will also cut down on suicide… all while adding to the value of the community.

Our economy is outdated considering our productive capacity.
It's not so much a function of salaries and wealth (although that's a symptom) as of ownership of assets, and of a current economic system that allows the wealthy to exploit resources while returning the most minimal share to the community and the environment (fines and taxes are more an appeasement of the public than a solution to the structural issues).

In short, AI isn’t anymore a problem than previous automation breakthroughs. The problem is the current socioeconomic jacket we’re wearing on this new muscle bulk… It’s a bad fit, and we could do so much better.



SvennoJ said:
And just like blue-collar jobs were outsourced to cheap-labor countries, AI can just as easily be outsourced to cheaper systems... Neither quality nor 'loyalty' was ever a priority for capitalism, profits are, so AI jobs will also go to the cheapest bidder, and that shift is a lot easier than building factories in China and offices in India.

So if AI succeeds in taking over jobs, cheaper AI can just as well succeed in taking over from the US AI built at enormous investment. It's like building the first ultra-expensive car factory in Detroit, only to be replaced a year later, or even before opening, by overseas production. No time to recoup the investments.

I think it depends a lot on how the "agents" are achieved. What seems to be happening currently is that software-engineering ecosystems and tooling are being developed to perform tasks on top of base models, and it is this that is making the models useful and productionizable. DeepSeek and Qwen are really good at training base models and have performed a lot of excellent fundamental research, but I think they're lagging behind on product ecosystems, and there is a reason why you don't see them being used much in "agent" toolsets, despite the cheaper costs.
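
To make "tooling on top of a base model" concrete, here is a minimal, hypothetical sketch of such an agent loop in Python. Everything in it (call_model, the JSON tool-call convention, the run_tests tool) is illustrative, not any particular vendor's SDK:

    import json
    import subprocess

    def run_tests(path: str) -> str:
        """Example tool: run a test suite and return the tail of its output."""
        result = subprocess.run(["pytest", path], capture_output=True, text=True)
        return result.stdout[-2000:]  # keep only the tail to fit a context window

    TOOLS = {"run_tests": run_tests}

    def call_model(messages: list[dict]) -> str:
        """Placeholder for any chat-completion API. The model is prompted to
        reply either with plain text (a final answer) or with a JSON tool
        call such as {"tool": "run_tests", "args": {"path": "tests/"}}."""
        raise NotImplementedError("wire this up to your model provider")

    def agent_loop(task: str, max_steps: int = 10) -> str:
        messages = [{"role": "user", "content": task}]
        for _ in range(max_steps):
            content = call_model(messages)
            try:
                call = json.loads(content)   # model asked to use a tool
            except ValueError:
                return content               # plain text = final answer
            output = TOOLS[call["tool"]](**call["args"])
            messages.append({"role": "assistant", "content": content})
            messages.append({"role": "user", "content": "tool output:\n" + output})
        return "step budget exhausted"

The point being: most of the engineering value lives in loops, tools, and guardrails like these, not in the weights themselves.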

The U.S. is just really good at software development and at productionizing and marketing things. Marketing and productionization are probably among the few things the U.S. has ever consistently been great at. China is a good second (Tencent and ByteDance have shown us), but it is still quite far behind.

I suppose having the best base models can act as an accelerant, but I don't think anybody is going to jump very far ahead of anyone else by doing this.  

There is also the matter that using Chinese models can be a data-security concern. I work for a large healthcare company, and it pretty much only uses Anthropic models on AWS for batch-inference tasks because that setup is HIPAA-compliant, plus OpenAI enterprise for day-to-day chat/coding. I am not confident that the U.S. industry won't use regulatory capture to push out Chinese solutions, especially since we're in a post-neoliberal world when it comes to global markets.

Day-to-day, people will be using Chinese models regardless, but they aren't the main consumers.



Jumpin said:

Concerns are legit, and consequences are clear. And it's not just the disappearance of career paths, but the reduction in workforce necessity. But my take is that a crisis should also be seen as an opportunity for society to take the next step, rather than saying "we're not ready" and backing down. Some of the changes might also contribute to solutions for other (even more) desperate problems we're currently facing in the world, problems many people just refuse to look at because they won't affect them this year.

I think the creativity problem could be solved by changing up the economy. Give every creative the ability to sustain themselves, plus access to educational material on the creative process and dissemination, and they'll bring something great. Remove the bottom of Maslow's pyramid of needs, and meaningful art will become the chief motivational factor for creatives. It will also cut down on suicide… all while adding to the value of the community.

Our economy is outdated considering our productive capacity.
It's not so much a function of salaries and wealth (although that's a symptom) as of ownership of assets, and of a current economic system that allows the wealthy to exploit resources while returning the most minimal share to the community and the environment (fines and taxes are more an appeasement of the public than a solution to the structural issues).

In short, AI isn’t anymore a problem than previous automation breakthroughs. The problem is the current socioeconomic jacket we’re wearing on this new muscle bulk… It’s a bad fit, and we could do so much better.

I agree with all of that, I'm just not that optimistic that AI is going to bring a path to Gene Roddenberry's Star Trek 'utopia'. The current trend is the opposite of 'giving' people the means to live. Greed remains the main problem; everything you said could already be done today without AI.

As for "allowing the wealthy to exploit resources with the most minimal share back into the community and overall environment": AI is only accelerating that for now. Some kind of shake-up is needed to change the current system.

Just as the internet didn't reduce inequality, I doubt AI will without intervention.



SvennoJ said:

Yeah, the hope is for a prolonged transition: fast enough for the AI bubble not to remain a bubble, slow enough for the job market and job training to adapt. The other problem with quickly replacing entry-level jobs is that those entry-level positions are where people learn and gain the experience to advance to higher-level jobs. Without junior positions, you'll need to invest in training for 'senior' positions.

The hiring freeze on people coming out of school mentioned in the video will eventually lead to an ageing pool of experienced workers with no replacements. Or AI will advance to take over the higher-level jobs as well. But, for example, would people be OK with the entire legal system eventually being replaced by AI? AI can make good on "justice is blind", I guess, yet it will lead to AI setting/deciding policy, etc. Maybe it will lead to a fairer world, maybe to an even more corrupt system. AI can be manipulated as well, a lot more easily than bribing tons of people in power.

I suspect we'll see AI complementing productivity and R&D in a few years, while almost everyone still has a job, and once it's able to do almost everything without supervision, there will be laws requiring a human to approve or review its output in crucial sectors (plus the impact of late, conservative adopters, etc.) for quite some time.

People like Altman and Musk know the massive disruptive potential of general AI and defend some form of UBI out of self-interest, and out of the optimistic view that economic output will increase so much that taxing them 10% or so for UBI will be an easy consensus and more than enough. The thing is, an economic system with a linear component (humans) and an exponential one (AI), as described above, will still be essentially linear as a whole, so I'm not sure how much increased growth we'll see at first.
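
One way to make that intuition concrete (a sketch with made-up numbers, not from the post above): if human and AI inputs are complements rather than substitutes, output behaves like Y = min(a·H, b·A). Take a = b = 1, H(t) = 100 + t (linear) and A(t) = 2^t (exponential). By t = 10, A = 1024, but Y = min(110, 1024) = 110. Once AI capacity overtakes the human bottleneck, total output is pinned to the linear component, however fast the exponential one grows.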




Just met with an old college friend whom I haven't seen in a decade. He got his PhD in physics last December and is now part of an organization building and using a very interesting MCP (called Ax Prover) that acts as a proof assistant (and eventually a formalization tool, to build out a database of formalized proofs) using the Lean language/proof assistant. He's helping extend the library to formalize quantum-mechanics proofs.

The idea is that it can expedite (exponentially) math and mathematical science research. 

All it incorporates is an MCP. There is no distillation or fine-tuning. 
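
For a rough idea of what an MCP proof-assistant server might look like, here's a hypothetical sketch using the official mcp Python SDK's FastMCP helper; this is illustrative, not Ax Prover's actual code:

    import pathlib
    import subprocess
    import tempfile

    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("lean-prover")

    @mcp.tool()
    def check_proof(lean_source: str) -> str:
        """Type-check a Lean snippet; return 'ok' or the compiler's errors."""
        with tempfile.TemporaryDirectory() as tmp:
            src = pathlib.Path(tmp) / "Proof.lean"
            src.write_text(lean_source)
            result = subprocess.run(["lean", str(src)],
                                    capture_output=True, text=True)
            return "ok" if result.returncode == 0 else result.stderr

    if __name__ == "__main__":
        mcp.run()  # serve over stdio so an LLM client can call check_proof

The appealing design here is that the LLM only proposes proof steps; the deterministic Lean checker is the source of truth.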

Eventually these extended databases can be used to improve reinforcement learning in the vein of AlphaGeometry or AlphaProof. 

These are the sorts of tools we're heading toward: less about the base model and more about neuro-symbolic systems with LLMs as sub-components.
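
For a flavor of what "formalized" means here, a toy Lean 4 / Mathlib statement (my own trivial example, nothing to do with the quantum-mechanics library itself):

    import Mathlib.Data.Real.Basic

    -- Machine-checked: the sum of two real squares is non-negative.
    theorem sum_of_squares_nonneg (a b : ℝ) : 0 ≤ a ^ 2 + b ^ 2 :=
      add_nonneg (sq_nonneg a) (sq_nonneg b)

A database of such machine-checkable proofs, at far greater depth, is what the project described above is accumulating.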

https://arxiv.org/abs/2510.12787