Mnementh said:
So I started to skim the links you provided, thank you for that. Neuro-symbolic machines in particular align with my thinking. At university (or college, as it's called in the US, I guess) I learned Prolog in my computer science studies. It was the classical approach to symbolic reasoning, an earlier product of AI research. But it was tedious: you had to model every piece of knowledge manually, and you ran into edge cases pretty fast.

With the recent success of LLMs, and seeing their particular limitations, my thought was: what if an LLM during training is instructed to translate its summation of the training data into a symbolic model akin to Prolog (sorry, Prolog is the one I know; there are probably better symbolic languages out there)? This has many advantages. A symbolic world model holds context better than textual reference, so it could help an LLM stay focused on the task instead of "forgetting" earlier parts of the conversation. A world model also helps maintain a general understanding of things. And while a big LLM can build a world model inside the neural net, that comes at high computational cost and requires massive training data; a symbolic representation of a world model can be cheaper and computationally faster.

LLMs are powerful because they operate on text. We have had text/writing for a few thousand years and have developed it into a powerful tool used in many contexts. For instance, modern math is interfaced with specialized text (mathematical symbols), which LLMs obviously can work with. We have textual notations for many games like chess or go. And most programming languages operate with text. LLMs can interface with and operate on all of this. And yes, that includes symbolic reasoning, as Prolog shows. So yes, combining an LLM with a symbolic reasoning system that has a textual interface, which the LLM can query while processing requests, could be an improvement, I think.
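To make the idea concrete, here is a rough sketch of the loop I mean: the LLM emits Prolog clauses as text, a host process asserts them into a Prolog engine, and later turns query the engine instead of re-reading the whole conversation. This assumes SWI-Prolog with the pyswip Python bindings; the helper names are just placeholders I made up.

```python
# Rough sketch: the LLM writes Prolog clauses as plain text, we assert
# them into a Prolog engine, and later LLM turns query the engine
# instead of re-reading the whole conversation history.
# Assumes SWI-Prolog is installed along with the pyswip bindings;
# update_world_model / query_world_model are hypothetical helper names.
from pyswip import Prolog

prolog = Prolog()

def update_world_model(clauses):
    """Assert LLM-generated Prolog clauses into the symbolic store."""
    for clause in clauses:
        prolog.assertz(clause)

def query_world_model(goal):
    """Run a Prolog query; return variable bindings as text for the LLM."""
    return list(prolog.query(goal))

# Suppose the LLM has summarized part of the conversation as facts and a rule:
update_world_model([
    "parent(tom, bob)",
    "parent(bob, ann)",
    "grandparent(X, Z) :- parent(X, Y), parent(Y, Z)",
])
print(query_world_model("grandparent(tom, Who)"))  # -> [{'Who': 'ann'}]
```

The point is that both directions of the interface are just text, which is exactly what an LLM is good at producing and consuming.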
Prolog is still the most common general-purpose logic programming language (although it has many dialects now). I took a Knowledge Representation and Reasoning course earlier this year, and we used Prolog. We also used a system called Clingo for answer-set programming.
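For anyone curious what answer-set programming looks like next to Prolog, here's a tiny default-reasoning sketch run through clingo's Python API (assumes the `clingo` package is installed; the "birds fly unless they're penguins" rules are the textbook example, not anything from the course):

```python
# Minimal answer-set programming example via the clingo Python API.
from clingo import Control

ctl = Control()
ctl.add("base", [], """
    bird(tweety). bird(sam).
    penguin(sam).
    % Birds fly unless known not to -- default reasoning, which ASP
    % expresses more directly than plain Prolog.
    flies(X) :- bird(X), not -flies(X).
    -flies(X) :- penguin(X).
""")
ctl.ground([("base", [])])
ctl.solve(on_model=print)  # answer set includes flies(tweety), -flies(sam)
```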
To an extent, LLMs (especially now that they incorporate search in the form of "reasoning tokens") are already neuro-symbolic hybrid systems. We've also seen the success of neuro-symbolic systems in the Alpha series of models, where rule-sets are made explicit and reinforcement learning is then used to traverse those rule-sets optimally. These models are super-human in their narrow fields. So it is definitely a powerful combo.