| SvennoJ said:
|
Just want to point out that "AI" today can be fully deterministic, in the sense of producing the exact same output for a given input, by setting the temperature hyper-parameter to 0 (or even a small value with greedy decoding). Many AI workloads can run on local hardware: I train smallish CNNs and ViT-CNN hybrids on my local PC, with training sessions taking around 10 hours, as an example; online learning is also a broad research field; and local LLMs are pretty popular and capable relative to proprietary models, even if they don't top the benchmarks. I'd actually argue that open source is in a much better relative position in the 2020s than in the 1990s, when even things like compilers were proprietary and compute was even more centralized. The big difference is that the share of the overall economy these companies make up is much larger now than it was then.
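To illustrate the determinism point, here's a toy sketch (not any particular library's API) of temperature-scaled sampling: as temperature goes to 0, the softmax collapses to argmax, so every run picks the same token regardless of the random seed.

```python
import math
import random

def sample(logits, temperature, rng):
    """Sample a token index from logits at a given temperature."""
    # Temperature 0 collapses sampling to argmax: fully deterministic.
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)                        # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    r = rng.random()
    acc = 0.0
    for i, e in enumerate(exps):
        acc += e / total
        if r < acc:
            return i
    return len(logits) - 1

logits = [1.0, 3.0, 2.0]
# At temperature 0, twenty differently-seeded runs all return the argmax.
picks = {sample(logits, 0, random.Random(seed)) for seed in range(20)}
print(picks)  # → {1}
```

At temperature 1 the same call would return different indices across seeds, which is the usual source of run-to-run variation in LLM outputs.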
I also think "brute-force search" isn't quite what is happening. Reinforcement learning isn't brute force, and that is how LLMs have been post-trained since late 2024: in RL, paths are pruned based on the reward function rather than traversed exhaustively. I also don't think the heuristics learned by the pre-trained base models are really "brute force" either. Having said that, I don't think LLMs are human-like intelligence. "AGI" will probably be achieved through an array of causal models like LEWM that interact with generative models in yet-to-be-known ways, and there will probably have to be some online learner involved as well.
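The pruning-vs-exhaustive distinction can be made concrete with a toy search problem (the reward function here is hypothetical, just for illustration, and beam search stands in for reward-guided exploration generally, not for any specific RLHF algorithm):

```python
from itertools import product

def reward(seq):
    """Toy reward: count adjacent token pairs that differ (prefers alternation)."""
    return sum(1 for a, b in zip(seq, seq[1:]) if a != b)

VOCAB, LENGTH, BEAM = [0, 1, 2], 6, 2

# Exhaustive ("brute-force") search traverses every possible path.
exhaustive_best = max(product(VOCAB, repeat=LENGTH), key=reward)
n_exhaustive = len(VOCAB) ** LENGTH  # 3^6 = 729 paths

# Reward-guided beam search keeps only the top-scoring prefixes,
# pruning low-reward branches instead of visiting them all.
beams = [()]
visited = 0
for _ in range(LENGTH):
    candidates = [b + (t,) for b in beams for t in VOCAB]
    visited += len(candidates)
    beams = sorted(candidates, key=reward, reverse=True)[:BEAM]

# Both reach the maximal reward, but pruned search visits far fewer paths.
print(reward(exhaustive_best), reward(beams[0]))  # → 5 5
print(n_exhaustive, visited)                      # → 729 33
```

The point is only that reward-guided methods search a tiny, pruned slice of the space, which is qualitatively different from exhaustive enumeration.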
