

I’ve been wondering if you could combine LLMs with a logic programming language like Prolog. The latter can actually reason through things; you “just” have to express them as Prolog facts and rules.
Well, judging from a quick online search, I’m most certainly not the first person to think of this, which does not surprise me at all…
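To make the “facts and rules” idea concrete, here is a minimal sketch, in Python rather than Prolog, of the kind of deduction a Prolog engine performs: forward-chaining over a set of facts and Horn-style rules until no new facts can be derived. The family facts, the `grandparent` rule, and all names are illustrative assumptions, not anything from the original post.

```python
# Facts are tuples; variables are strings starting with an uppercase
# letter, following Prolog convention. Example data is made up.
facts = {("parent", "tom", "bob"), ("parent", "bob", "ann")}

# Each rule is (conclusion, [premises]):
# grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
rules = [
    (("grandparent", "X", "Z"),
     [("parent", "X", "Y"), ("parent", "Y", "Z")]),
]

def is_var(term):
    return term[0].isupper()

def match(pattern, fact, env):
    """Try to unify a flat pattern with a fact under bindings env."""
    if len(pattern) != len(fact):
        return None
    env = dict(env)
    for p, f in zip(pattern, fact):
        if is_var(p):
            if p in env and env[p] != f:
                return None  # conflicting binding
            env[p] = f
        elif p != f:
            return None      # constant mismatch
    return env

def substitute(template, env):
    return tuple(env.get(t, t) if is_var(t) else t for t in template)

def derive(facts, rules):
    """Forward-chain: apply rules until a fixpoint is reached."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            envs = [{}]
            for premise in body:
                envs = [e2 for e in envs for f in facts
                        if (e2 := match(premise, f, e)) is not None]
            for env in envs:
                new_fact = substitute(head, env)
                if new_fact not in facts:
                    facts.add(new_fact)
                    changed = True
    return facts
```

Running `derive(facts, rules)` adds `("grandparent", "tom", "ann")` to the fact set. A real Prolog system works backwards from a query instead, but the appeal is the same: the conclusions follow mechanically from the stated facts and rules, which is exactly the part LLMs cannot guarantee on their own.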
Okay, but just to be clear, the problem is not that it can’t do a timer. The problem is that it claims it can, and even produces a result that looks plausible. That means you cannot trust it with anything you can’t easily verify. If they could fix that overconfidence within a year, it would be a big improvement.