z5h 5 days ago

i've come to appreciate, over the past 2 years of heavy Prolog use, that all coding should (eventually) be done in Prolog.

It's one of the few languages that is simultaneously a standalone logical formalism and a standalone representation of computation. (With caveats and exceptions, I know.) So a Prolog program can stand in as a document of all the facts, rules and relations that a person/organization understands/declares to be true. Even if AI writes code for us, we should expect to have it presented and manipulated as a logical formalism.
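
To make that concrete, here's the classic toy sketch - the same text is simultaneously a knowledge document and a runnable program:

  parent(tom, bob).                       % facts: things declared true
  parent(bob, ann).
  ancestor(X, Y) :- parent(X, Y).         % rules: relations over the facts
  ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y).

  % The same clauses answer queries in several modes:
  % ?- ancestor(tom, ann).   -> true
  % ?- ancestor(tom, Who).   -> Who = bob ; Who = ann.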

Now if someone cares to argue that some other language/compiler is better at generating more performant code on certain architectures, then that person can declare their arguments in a logical formalism (Prolog) and we can use Prolog to translate between language representations, compile, optimize, etc.

  • xelxebar 4 days ago

    > over the past 2 years of heavy Prolog use

    Oh, cool. Mind if I pick your brain a bit?

    Recently, there was an HN post[0] of a paper that makes a case against pure logic languages in favor of "functional logic" ones, which they exhibit with Curry[1]. The setup argument is that Prolog's spec mandates backtracking, which strongly limits it relative to full SLD resolution, causing fatally sharp edges in real-world usage.

    Being fairly naive to the paradigm, my interpretation is that writing real Prolog programs involves carefully thinking about and controlling the resolution algorithm, which feels very different from straight knowledge declaration. I believe the cut (!/0) is the go-to example. Is that your experience with Prolog in practice?

    The real meat of the paper, however, is in its case that functional logic languages fully embed Prolog with almost 1-to-1 expressivity, while also providing more refined tools for externalizing knowledge about the intended search space of solutions.

    Thoughts? How are you using Prolog, logic, or constraint programming? What languages and tooling in this arena do you reach for? What is some of your most hard-earned knowledge? Any lesser-known, but golden, websites, books, or materials you'd like to share?

    Cheers!

    [0]:https://news.ycombinator.com/item?id=41816545

    [1]:https://www.curry-language.org/

    • z5h 4 days ago

      > What is some of your most hard-earned knowledge?

      1. If you find yourself straying too often from coding in relations, and instead coding in instructive steps, you're going to end up with problems.

      2. Use DCGs to create a DSL for any high level operations performed on data structures. The bi-directionality of Prolog's clauses means you can use this DSL to generate an audit trail of "commands executed" when Prolog solves a problem for you, but you can also use the audit trail and modify it to execute those commands on other data.
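
      A tiny made-up sketch of the idea (the command names here are invented for illustration):

        % ops//2 relates a start state and an end state (a stack here) to
        % the list of commands that takes you from one to the other.
        ops(S, S)      --> [].
        ops(S0, S)     --> [push(X)], ops([X|S0], S).
        ops([X|S0], S) --> [pop(X)], ops(S0, S).

        % Forwards: solve, and the audit trail falls out.
        % ?- phrase(ops([], [b,a]), Trail).
        % Trail = [push(a), push(b)] .
        % Backwards: replay a (possibly edited) trail on other data.
        % ?- phrase(ops([z], S), [push(a), pop(a)]).
        % S = [z].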

      • javcasas 4 days ago

        How do you debug DCGs? I get "false." instead of "syntax error at line 23", which is unacceptable for bigger inputs.

        Also DCGs for high level operations? Do you mean "use DCGs to parse strings that contain instructions" or do you parse things other than strings with DCGs? I'm assuming you take the parsed instructions and run them through some kind of interpreter that does the execution and audit trail.

        • YeGoblynQueenne 4 days ago

          (Not OP)

          >> How do you debug DCGs? I get "false." instead of "syntax error at line 23", which is unacceptable for bigger inputs.

          You need to include exception handling in your DCG rules. For example, in Prolog-like pseudocode:

            pink_apples([A|As]) --> [A],
                { red_apple(A) -> throw(error(type_error(pink_apple,red_apple),_)) ; true },
                pink_apples(As).
            pink_apples([]) --> [].
          
            % Raises type error:
            ERROR: Type error: `pink_apple' expected, found `red_apple' (an atom)
            ERROR: In:
            ERROR:   [12] throw(error(type_error(pink_apple,red_apple),_10070))
          
          Called from a source file, the error output will list the line in the source file where the exception was raised. There are more tools to debug the error:

          https://www.swi-prolog.org/pldoc/man?section=exception

          DCGs parse lists, not strings, as such. So the input can be anything you can put in the form of a list.
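
          For instance, here's a toy DCG over a list of integer tokens rather than a character string:

            evens([])     --> [].
            evens([E|Es]) --> [E], { integer(E), 0 =:= E mod 2 }, evens(Es).

            % ?- phrase(evens(Es), [2,4,6]).
            % Es = [2, 4, 6].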

          • upghost 4 days ago

            And this is the person who saved me from nearly dropping Prolog!!

            • YeGoblynQueenne 4 days ago

              Hope you don't end up cursing me down the line :P

        • z5h 4 days ago

          > How do you debug DCGs? I get "false." instead of "syntax error at line 23", which is unacceptable for bigger inputs.

          I also sympathize. "false" as the default failure mode is a challenge with Prolog. Most Prologs I've used have good debugging/stepping features (see the spy and trace predicates); logical debugging of pure monotonic Prolog can often help (as explained by Markus Triska); and you can easily write (or use existing) meta-predicates that assert that a called predicate must not fail, and otherwise throw an exception. For example, below the ./ is supposed to look like a checkmark: `./ true` succeeds, while `./ false` throws an exception.

            % './'(Goal) succeeds if Goal has at least one solution,
            % otherwise it reports an assertion failure:
            :- op(920, fy, './').
            :- meta_predicate './'(0).
            './'(X) :- call(X) *-> true ; prolog_debug:assertion_failed(fail, X).

          • upghost 4 days ago

            Ahh nice, I like this one!

        • upghost 4 days ago

          I sympathize. I nearly dropped Prolog for this reason until I learned about term_expansion/2 and goal_expansion/2.

          If you consider what Prolog is doing to be incorrect, _make it_ incorrect.
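
          For instance, a sketch of that idea with goal_expansion/2 (must_succeed/1 is a wrapper name I'm making up here):

            % Expand must_succeed(G) at load time, so a silent failure of G
            % becomes a thrown error instead:
            user:goal_expansion(must_succeed(G),
                ( call(G) -> true ; throw(error(goal_failed(G), _)) )).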

          DCGs can be used to convert any data structure to a sequence. Actually, they are capable of any graph-to-graph transformation, so they could produce a sequence of commands.

          The oft-cited Markus Triska has some great work on this:

            https://youtu.be/vdabv9EkYrY?feature=shared
          
            https://www.metalevel.at/zurg/
          
          You can also use Prolog as a coordinator on external systems.

          See https://github.com/mthom/scryer-prolog/blob/master/src/lib/p...

          • rdtsc 4 days ago

            I was watching some of the videos and in the intro Markus mentioned some of the most popular Prolog systems https://www.youtube.com/watch?v=8XUutFBbUrg

            SICStus, GNU Prolog, XSB, Ciao, Scryer, Trealla, Tau, but does not mention SWI-Prolog at all. I remember using SWI-Prolog back in the day; did it somehow fall out of favor, or is there some animosity between implementations such that Markus is just not in the SWI-Prolog camp?

            • upghost 4 days ago

              My 2 cents, it's hard for me to believe there is any animosity there.

              You don't have to take my word for it; you can just see what he says and does. I've seen nearly every one of his videos and blog posts and read through many of his previous comments on HN, and there's not a single disparaging remark in any of them. In fact he even makes his philosophy clear in this post [1]:

                Things that I, at least, always do without even thinking about it. For instance, when I mention Scryer Prolog in Internet discussions, I always try to mention at least one other Prolog system too in a flattering way, out of respect and admiration for other systems and also to encourage more cooperation between systems.
              
              What I gather is Markus strongly advocates for ISO-compliant Prolog implementations and especially open source ones, because that's where his heart is right now. But one thing to remember is that Markus has also contributed tens of thousands of lines of code to SWI (check out the author tag in many of the SWI libraries) and has co-authored many papers with Jan Wielemaker, and there is plenty of professional respect there. This is like trying to understand the nuanced reasons why Stephen Hawking disagrees with Einstein (or whoever) on something. Probably in agreement about 99.999% of most things, but in strong disagreement on a 0.001% that probably doesn't matter much to you and me, about whether or not black holes are Humperdink-Blazensort compliant or... waves hands stuff.

              As you've seen from the comments though, SWI is an extremely successful, established system. Nearly every book and example you will read uses SWI. It has great libraries, great IDEs, FFI, embedding, support, documentation, all kinds of great stuff -- SWI is already REALLY successful, and for very good reasons!

              So my guess is he's just trying to give a voice to the little guys who are up and coming and are philosophically in line with his beliefs about the language and open source. I personally would describe that as "support"; I don't think I would use the word "animosity".

              [1]: https://github.com/mthom/scryer-prolog/discussions/2605

              • rdtsc 4 days ago

                > This is like trying to understand the nuanced reasons why Stephen Hawking disagrees with Einstein (or whoever) on something.

                I like the analogy!

                > So my guess is he's just trying to give a voice to the little guys who are up and coming and are philosophically in line with his beliefs about the language and open source.

                That makes perfect sense. Thanks for answering! It's an interesting insight into the world of "Prologs" for a complete outsider. I had only used SWI-Prolog at university and remember it having a variety of modules, web development libraries and other such things, so at least superficially, to a novice, that was pretty impressive.

    • z5h 4 days ago

      So first, let's keep in mind that with no execution model, Prolog is still a "syntax" for Horn clauses. It's still a way to document knowledge. Add SLD resolution and we can compute. The paper (intentionally I presume) orders clauses of a simple predicate to illustrate (cause) a problem in Prolog.

      But what I actually find is the more time spent in Prolog, the more natural it is to express things in a way that is clear, logical and performant. As with any language/paradigm, there are a few gotchas to be experienced. But generally speaking, SLD resolution has never once been an obstacle to coding (in the past 2 years).

      The general execution model of Prolog is pretty simple. The lack of functions actually makes meta-programming much clearer and simpler. A term is just data, unless it's stated as a goal. It's only a valid goal if you've already defined its meaning.
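
      A toy example of what I mean (SWI transcript, approximately):

        ?- T = member(X, [a,b]).            % T is just data; nothing happens
        T = member(X, [a, b]).

        ?- T = member(X, [a,b]), call(T).   % now T is a goal
        T = member(a, [a, b]), X = a .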

      So I'd be concerned that Curry gives up the simplicity of Prolog's execution model, and ease of meta-programming. I struggle with the lack of types in Prolog, but also know I can (at least in theory) use Prolog to solve correctness problems in Prolog code.

      I'm currently using SWI-Prolog. Performance is excellent, it has excellent high-level concurrency primitives[0] (when was the last time you pegged all your cores solving a problem?), and many libraries. I might be one of the few people who has committed to using the integrated editor (PceEmacs) despite being a Vim person. PceEmacs is just too good at syntax highlighting and error detection.

      At the same time, I'm a huge fan of Markus Triska. His YouTube[1] stuff is mind-expanding (watch all of it, even if you never write Prolog). He has an excellent book online[2]. I admire the way he explains and advances pure monotonic Prolog, and I appreciate the push for ISO conformance and his support for Prologs that do the same (SWI is not on that list).

      If you want to learn Prolog, watch all of Markus Triska's videos, read his book, and learn what Prolog could be in a perfect world. Then download SWI-Prolog, and maybe break some rules while getting things done at a blazing speed. Eventually you'll gravitate to what makes sense for you.

      The Art of Prolog is a classic "must have". Clause and Effect is a good "hit the ground running" (on page 70 you're into symbolic differentiation via term rewriting).

      0 https://www.swi-prolog.org/pldoc/man?section=thread

      1 https://www.youtube.com/@ThePowerOfProlog

      2 https://www.metalevel.at/prolog

      • YeGoblynQueenne 4 days ago

        >> So I'd be concerned that Curry gives up the simplicity of Prolog's execution model, and ease of meta-programming.

        That was my concern with the paper listed above also. Functional syntax is to my mind needlessly over-complicated. Types are useful but you can roll your own if you need them, and Prolog makes that easy enough.
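
        E.g. a hand-rolled runtime type check might look something like this (a sketch; the predicate name is my own):

          list_of(Type, List) :-
              (   is_list(List),
                  maplist(Type, List)
              ->  true
              ;   throw(error(type_error(list_of(Type), List), _))
              ).

          % ?- list_of(integer, [1,2,3]).   succeeds
          % ?- list_of(integer, [1,x]).     throws a type error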

        >> I'm currently using SWI-Prolog. Performance is excellent, it has excellent high-level concurrency primitives[0] (when was the last time you pegged all your cores solving a problem?), and many libraries. I might be one of the few people who has committed to using the integrated editor (PceEmacs) despite being a Vim person. PceEmacs is just too good at syntax highlighting and error detection.

        Hah! Hello fellow PCEmacs >> vim user :D

        Markus Triska's stuff is good and he's undeniably an expert in Prolog programming, I mean duh, but he and a couple of others have needlessly caused a rift in the Prolog community (mainly between themselves and everyone else) by being so stroppy about the ISO non-conformance of SWI-Prolog. I hope Markus is reading this. The Prolog community is very small and dwindling and we can't afford such drama. We need more Prolog compilers, yes; ISO conformance is good, yes; but SWI-Prolog is a robust and battle-tested implementation, and it is the ISO standard that should be leaning on SWI's experience as a real-world Prolog used by real programmers, not just the other way around.

        • z5h 4 days ago

          > Hah! Hello fellow PCEmacs >> vim user :D

          That made my day! Hello!

      • danieldk 4 days ago

        I'm currently using SWI-Prolog.

        Still checking every now and then whether SICStus has gone open source. I used Prolog daily during my PhD and SICStus had such nice features. E.g. it could raise an exception when no more heap space could be allocated or when a 'call' would not finish within a given time. These features made it much easier to use Prolog in real-world systems (ours was a parsing system, and when parsing a very large corpus this was highly preferable to simply crashing the interpreter).

        Maybe things have changed, but this wasn't possible with SWI at the time. Even worse, most C extensions would use malloc directly, making it impossible to track allocations done by extensions.

        • YeGoblynQueenne 4 days ago

          >> Still checking every now and then if SICStus has open sourced.

          There used to be a free (though proprietary) version of Quintus, the predecessor of SICStus. It might still be around somewhere if you look hard enough.

      • fao_ 4 days ago

        > The Art of Prolog is a classic "must have".

        I figured it would be a good introduction to Prolog, but to date there doesn't seem to be any Prolog interpreter that lets me copy the examples in the book and play with them.

      • upghost 4 days ago

        Strongly 2nd checking out Markus Triska's work. It's practically poetry.

        It's really surprising to see someone at his level, a software reliability researcher no less, retain that level of passion about something. Usually folks with that level of experience are really grumpy and lose the evangelistic flair. I tend to maintain that level of evangelism about things I'm passionate for but no one would ever accuse me of being a "reliability researcher", lol.

        In fact, I was/am a hardcore lisp hacker, and never in my life did I think I'd find something that would even come close to lisp for me, because it's FUN! But I'll be damned if Prolog isn't turning out to be even more fun in ways I didn't expect.

        In fact the only thing I really miss about lisp at this point is structural editing (paredit and such).

        And it's not like it's either/or, you can use both. But learning pure Prolog is really delightful if you follow the breadcrumbs (gold nuggets, honestly) Markus left.

        If it doesn't kill you, that is!

  • larodi 4 days ago

    Been shouting here and many places for quite a while that CoT and all similar stuff eventually leads to logic programming. So happy I’m not crazy.

    • bbor 4 days ago

      You’re in good company — the most influential AI academic of all time, the kooky grandfather of AI who picked up right where (when!) Turing left off, the man hated by both camps yet somehow in charge of them, agrees with you. I’m talking about Marvin Minsky, of course. See: Logical vs. Analogical (Minsky, 1991) https://ojs.aaai.org/aimagazine/index.php/aimagazine/article...

        …the limitations of current machine intelligence largely stem from seeking unified theories or trying to repair the deficiencies of theoretically neat but conceptually impoverished ideological positions. 
        Our purely numeric connectionist networks are inherently deficient in abilities to reason well; our purely symbolic logical systems are inherently deficient in abilities to represent the all-important heuristic connections between things—the uncertain, approximate, and analogical links that we need for making new hypotheses. The versatility that we need can be found only in larger-scale architectures that can exploit and manage the advantages of several types of representations at the same time.
        Then, each can be used to overcome the deficiencies of the others. To accomplish this task, each formally neat type of knowledge representation or inference must be complemented with some scruffier kind of machinery that can embody the heuristic connections between the knowledge itself and what we hope to do with it.
      
      He phrases it backwards here in comparison to what you’re talking about (probably because no one in their right mind would have predicted the feasibility of LLMs), but I think the parallel argument should be clear. Talking about “human reasoning” like Simon & Newell or LeCun & Hinton do in terms of one single paradigm is like talking about “human neurons”. There’s tons of different neuronal architectures at play in our brains, and only through the ad-hoc minimally-centralized combination of all of them do we find success.

      Personally, I’m a big booster of the term Unified Artificial Intelligence (UAI) for this paradigm; isn’t it fetch? ;)

      • dleink 4 days ago

        Just throwing this out there for someone, "Scruffier Kind of Machinery" is a good name for a book, company or band.

        • bbor 4 days ago

          Heh, well spotted. SSKOM will be overtaking KGATLW any day now…

      • larodi 2 days ago

        UAI seems reasonable, as we need both discrete and non-discrete systems to start talking in a meaningful way.

        Was wondering recently - given that a lot can be done with predicate logic, and given that DNA is a sort of grammar, is there anything more powerful than these formalisms in math in general that is actually put to work somewhere, anywhere?

      • YeGoblynQueenne 4 days ago

        Minsky is right. The "rift" between symbolic and sub-symbolic, or learning and reasoning has only impeded progress. The problem is it's very hard to be an expert on both at once, and it's getting harder and harder as more and more work is done on both.

        Since you mentioned Hinton, he has worked hard to entrench the idea that symbolic AI failed. I was listening to a lecture he gave [1] and he went on and On and ON about how symbolic AI was a stupid idea and it was conclusively proved wrong by neural nets. He went on for so long bashing symbolic AI to the ground that at some point I started wondering whether he's deep down worried that there might be a resurgence of it right on time to address the limitations of neural nets with respect to reasoning, before he and his mates have the chance to figure out how to overcome it. Which, well, good luck with that.

        _________________

        [1] https://youtu.be/N1TEjTeQeg0?si=OIYY3wnDbi7rzJt3

        • larodi 2 days ago

          You’d be astonished how correct this is. I know several top PhD ppl well versed in ML who openly admit they know nothing about SQL, which implies they also know very little Prolog and very likely are ignorant about everything to do with grammars and state automata.

          Top surprise was when my high school classmate, who went on to win two gold medals at the IMO and has been doing quant mathematics for finance for more than 16 years, openly admitted he knew nothing about grammars and was like ‘is this useful at all…’. I was amazed that this is even possible. But it is - he went the probabilistic and symbolic way, I went the discrete and graph way.

          On the other side, I’m completely oblivious to what people use complex analysis for, even though I know a little DSP, some electronics, even some nano opto-electronics, and can also explain the Fourier transform to people. Even though I know what derivatives, nabla, and vector fields are, I can’t put them to work for me…

          Science is never done in isolation, and the whole LLM thing seems from another planet to many people because it was devised in an ML silo and also an enterprise silo.

          • YeGoblynQueenne 2 days ago

            >> You’d be astonished how correct this is.

            No I know it first hand :)

            I'm also not a cross-disciplinary expert, to be clear. When I say it's hard, it's because I find it hard! My strength is in discrete maths and logic. I can deal with continuous maths because I need to keep abreast of the latest statistical machine learning developments, but I don't think I would ever be able to contribute directly to, say, neural networks research, unless I turned it into a logic-based approach (as has been done in the past). To be perfectly honest, if deep learning didn't happen to be the dominant approach to machine learning, which forces me to pay attention to it, I doubt I would have followed my own advice and looked far beyond my narrow band of expertise.

            But, that's why we're supposed to have collaborations, right? I can pair up with an expert on neural nets and we can make something new together that's more than what we can each do on our own. In the process we can learn from each other. That stuff works, I've seen that, too, in practice. I'm working with some roboticists now and I've learned a hell lot about their discipline and hopefully they're learning something about mine. I am convinced that in order to make progress in AI we need broad and wide collaborations, and not just between symbolists and connectionists, but also between computer scientists and biologists, cognitive scientists, whoever has any idea about what we're trying to achieve. After all, a computer scientist can only tell you something about the "artificial" in "artificial intelligence". We study computation, not intelligence. If we're going to create artificial intelligence we need to collaborate with someone who understands what that is.

            The hard part is to kick people out of their comfort zone and to convince them that the other experts are also, well, experts, and that they have useful knowledge and skills that can improve your own results. And seen from the other side of the coin, from my point of view, it's very difficult for me, as an early career researcher, to convince anyone that I have useful knowledge and skills and something to contribute. It takes time and you have to make your name somehow otherwise nobody will want to work with you. But that's how academia works.

            It's just not a great mechanism to ensure knowledge is shared and reused, unfortunately.

        • infinite8s 3 days ago

          Completely agree. The next revolution in AI will be the person who can successfully combine both symbolic and connectionist architectures.

    • ianandrich 4 days ago

      You are not crazy. Logic programming is the future

  • eru 4 days ago

    Prolog was a neat exercise, but for practical programming you might want to combine both logical and functional programming. I think 'Curry' does that.

  • machiaweliczny 4 days ago

    Has anyone tried to mine an LLM's world model via sampling, to extract all the relations it believes to be true (with, say, 99%+ certainty) into Prolog-like clauses? I think this is a way to achieve reliable world/domain models in the logical sense (non-probabilistic). Probably the brain doesn't do it, but it could be cool anyway. It seems like a good sampler could mine this info by asking things like "I believe that X is true or false" for all X imaginable, then trying to generate relations for these, etc.

  • dmead 5 days ago

    It's taken ages for anything from functional programming to penetrate general use. Do you think uptake of logic stuff will be any faster?

    • johnnyjeans 5 days ago

      Prolog (and logic programming in general) is much older than you think. In fact, if we take modern functional programming to have been born with John Backus' Turing Award presentation[1], then it even predates it.

      Many advancements to functional programming were implemented on top of Prolog! Erlang's early versions were built on top of a Prolog-derived language whose name escapes me. It's the source of Erlang's syntax, which is unfamiliar to less experienced programmers. It's very much like writing Prolog if you had return values and no cuts or complex terms.

      As for penetrating general use, probably not without a major shift in the industry. But it's a very popular language just on the periphery, even to this day.

      [1] - https://dl.acm.org/doi/10.1145/359576.359579

      • dragonwriter 4 days ago

        > Erlang's early versions were built on top of a Prolog-derived language who's name escapes me.

        AFAIK, Erlang was originally implemented in Prolog and the original VM was inspired by the Warren Abstract Machine targeted by some Prolog implementations. It was also inspired by PLEX, but PLEX wasn't a Prolog derivative.

      • dmead 4 days ago

        Did you just answer me with chatgpt?

        • exe34 4 days ago

          > top of a Prolog-derived language who's name escapes me

          are you saying we've made a huge leap in LLMs - that they can now admit when they don't know something?

          • diggan 4 days ago

            With the right prompting you can get LLMs to output pretty much anything :)

            > What is the airspeed velocity of an unladen Eurasian blue tit?

            > I’m not sure. The specific airspeed velocity of an unladen Eurasian blue tit hasn't been studied or widely documented in the same way that birds like swallows have been. It would likely depend on many factors like the bird’s weight, wing shape, and wind conditions. If you’re looking for general information about bird flight or blue tits, I can help with that!

            /GPT-4o

            • YeGoblynQueenne 4 days ago

              GPT-4o is a fan of Monty Python's O.o

              • diggan 4 days ago

                Surprised me initially too, but then I remembered it's trained on web content, makes a lot of sense then.

        • hydrolox 4 days ago

          definitely not how chat gpt writes

  • tomcam 5 days ago

    Is it your thought that for the average programmer Prolog is easier to read and maintain than say Go, C#, or Java?

    • z5h 4 days ago

      I'm surprised at how readable Prolog is.

      I've played with and seriously used many languages in my career. My experience is that pure functional (done Elm style) is productive and scales well to a larger team. Dynamic stuff like Ruby/Javascript always has more bugs than you think, even with "full" test coverage. I'm not smart enough to make sense of my own Scheme meta-programming when I revisit it months later. I have loads (but dated) experience with Java and it (and peers) are relatively easy to read and maintain.

      Prolog is very surprising, because it is homoiconic and immensely powerful in metaprogramming, BUT ... the declarative style and execution model reins in the complexity/readability. A term is just a term. Nothing happens when you create a term. If/when a term is a goal, then you match it with the head of an existing predicate (something you've already coded). So it never gets too messy. Now, the biggest problem with Prolog is that it's so flexible, you'll perpetually be realizing that you could have coded something much more cleanly. So you do that, have less code, it's nicer, etc. Doing this on a large team might not scale without effort.

      • YeGoblynQueenne 4 days ago

        >> I'm not smart enough to make sense of my own Scheme meta-programming when I revisit it months later.

        Then be smart enough to comment your code :P

        >> Prolog is very surprising, because it is homoiconic and immensely powerful in metaprogramming, BUT ... the declarative style and execution model reins in the complexity/readability.

        Iiiish? This is from one of yesterday's commit messages:

            * New look and look-around actions in the Basic Sim Environment allow
              for looking ahead in eight direction. This does get a liiittle bit
              complicated, or rather there's the usual millefeuille of abstraction
              layers on top of abstraction layers all the way down pou that mou ta
              spasei kapoia stigmi but OK.
        
        The Greek-lish interjection says approximately "that is going to bite me in the ass down the line". Because it will. The better I get with Prolog the more I worry nobody will be able to maintain my code but myself, and my future self will hate me with deep, burning passion.

        • infinite8s 3 days ago

          >> The Greek-lish interjection says approximately "that is going to bite me in the ass down the line". Because it will. The better I get with Prolog the more I worry nobody will be able to maintain my code but myself, and my future self will hate me with deep, burning passion.

          Isn't that similar to the GPs "I'm not smart enough to make sense of my own Scheme meta-programming when I revisit it months later."? Commenting your prolog won't help?

          • YeGoblynQueenne 3 days ago

            It's similar. Commenting my code (which I do, almost religiously) helps. It still taxes my brain to follow it.

            There's a certain kind of abstraction that is easier to write than read and understand. My current project is full of it, not least because a set of low-level predicates performing "primitive" operations on a foundational data structure are automatically generated and then everything else is built on top of them with a concentrated effort to avoid code duplication. There's a bunch of actions that move an agent around a map or look from the agent's position around the map, in discrete directions (currently) and the easiest way to implement these would be to implement one, say "step_north" or "look_north", and then copy/paste it with small changes however many times I need. Instead I opted to have parameterised "step" and "look" actions that I instantiate as I need. It's kind of the obvious thing to do, but starting from a "step" action, in my zeal to DRY (or NRM, I guess) I ended up creating a chain of predicates six or seven links deep that makes it harder to trace the execution of a top-level (step or look) action, just because I have to keep in mind each link in the chain and what exactly it does; and that's not obvious because some links compose new predicates from their arguments so I need to have a clear model of how that happens always in my mind. I could keep the chain shorter by using a higher level of abstraction but that would just make it even harder to debug.

            Prolog makes it easy, even pleasant, to program like that, but it doesn't make it any easier to read and maintain that kind of code than any other language as far as I can tell.

            Maybe the solution is to not do any metaprogramming, forget DRY, and just copy/paste and get it over with. But I find that this, too, makes it harder to debug because after a while all the instances of the copy/pasted code blur together into one smudgy fudge that has a downright hypnotic effect.

    • nsxwolf 4 days ago

      I found it completely impenetrable in college for all but the simplest problems. I tried to re-read the textbook recently and didn't do much better.

      • YeGoblynQueenne 3 days ago

        Which textbook was that? I think that Bratko ("Prolog Programming for Artificial Intelligence") is probably the friendliest to beginning programmers with a background in more mainstream languages.

    • pjmlp 4 days ago

      As someone that went through a degree where Prolog and LP were cherished, I would say yes; however, LP might be even weirder to get into than FP.

      Many folks on our degree couldn't have been happier when they didn't have to see Prolog ever again, while I and others went on to take our chances in the national LP challenge across universities.

      Tarski's World was a good way back then to dive into what LP is all about, without being programming language specific.

      https://www.gradegrinder.net/Products/tw-index.html

      • YeGoblynQueenne 4 days ago

        Out of curiosity, in which university did you study for your degree?

        I did my MSc at Sussex Uni, which was one of the centers in the UK where logic programming was developed, but when I got there in 2014 there was no trace of that history. From conversations with professors and past students it seems that Sussex tried to ram Prolog hard down students' throats and that caused a furious backlash, so that nobody wanted to hear about it anymore after the '90s to early 2000s.

  • dmichulke 4 days ago

    > It's one of few languages that is simultaneously a standalone logical formalism, and a standalone representation of computation. (With caveats and exceptions, I know).

    Would you be able to formulate all those "caveats and exceptions" in Prolog?

    • z5h 17 hours ago

      For example: the logical core of Prolog, along with its resolution model, is non-deterministic (something can have no, one, or many solutions), but only one solution is explored at a time. So it's a "meta-logical" thing to express something like "the set of solutions for ...". Given that the core of Prolog is Turing complete, you can still get Prolog to compute anything; you might just not have a nice way of declaring it in pure Prolog.
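
      The meta-logical escape hatch here is findall/3 and friends, which reify "the set of solutions for ..." as an ordinary term:

        ?- findall(X, member(X, [a,b,c]), Xs).
        Xs = [a, b, c].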

      Prolog has an interesting history of people discovering ways to express things that are simple, powerful and elegant. And yet despite the simplicity, these ways of expressing things were not immediately evident. DCGs are a prime example.

  • UltraSane 4 days ago

    I'm curious what your thoughts are on this paper: [0] Lessons from the evolution of the Batfish configuration analysis tool. Initially they used Datalog but have since migrated to binary decision diagrams for performance reasons, the ability to more accurately model semantics, and more deterministic execution.

    [0]:https://dl.acm.org/doi/10.1145/3603269.3604866

  • hollerith 4 days ago

    What is your explanation for Prolog's lack of uptake so far?

    It has been around for 52 years and got a lot of buzz about 40 years ago because of its usefulness in creating expert systems.

    • tannhaeuser 4 days ago

      Not GP, but if Prolog has been around for 52 years and has many implementations, old (SICStus/Quintus, SWI, YAP, Ciao, XSB, ECLiPSe, many more) and new (Scryer, Quantum, Strawberry, Tau, Trealla, ...), then it follows it's not doing so badly after all.

      I have to say the "expert system" quote comes from a lack of insight and perhaps outdated CompSci lecture notes: even at the height of "expert systems" around 1990 or before, Prolog's backward chaining was seen as the opposite of an expert system's forward chaining.

      It may have been the case that what was understood as an "AI language" was anathema following the AI crash around 1988/89 (the original "AI winter").

      What also may have contributed to the impression of Prolog becoming less used around 2000-2012 or so is the effect of W3C's/TBL's "Semantic Web" and description logics efforts capturing a portion of the academic and commercial attention in graph and logic databases, as well as in other applied logic domains such as formal verification (in a quite direct sense, considering research grants for OWL, etc.).

    • upghost 4 days ago

      Early systems were largely closed source, proprietary, licensed commercial systems. In some ways Prolog is 52 years old and in some ways it's 5 years old.

      ...I can relate...

  • cmrdporcupine 5 days ago

    So why Prolog in particular and not another logic language like Mercury or Oz/Mozart etc?

    • jfengel 4 days ago

      "Prolog" is like Lisp, a wide array of superficially similar languages that actually are quite diverse.

      Mind you, in that sense, Java and C# are more or less the same language, which has Prolog programmers nodding their heads and Java and C# developers screaming.

      • tannhaeuser 4 days ago

        Nope. Prolog has been an ISO-standardized language since 1995, and the spec was updated in 2012. Where older "legacy" Prolog implementations such as SWI, YAP, and SICStus deviate from the standard is generally pretty well-known to Prolog practitioners, and the convener of ISO 13211 actually can verify claims of ISO conformance; for example, [1] is a link to the ISO certification of Quantum Prolog (the web app at [2]).

        It's true however that people are quick to conflate Prolog with constraint-logic programming libs, "expert systems" (RETE-style forward-chaining systems and other "rule engines"), or random "functional-logic" programming languages. The misunderstanding of Prolog and logic by Lisp programmers has been ongoing since the 1980s, probably because at one point Prolog and Lisp were seen as competing "languages for AI" for some reason, even though they have very little in common.

        [1]: http://www.complang.tuwien.ac.at/ulrich/quantum-prolog/

        [2]: https://quantumprolog.sgml.net

        • anthk 4 days ago

          Also, on the Lisp side there's Common Lisp with ANSI (and maybe closer-mop from Quicklisp) standardizing the cores, and then Scheme is its own universe of compilers with official (SRFI) and implementation-specific extensions and libraries.

        • samatman 4 days ago

          You are referring to 'Prolog', replying to a post referring to '"Prolog"'.

          Your constraint system is missing a clause ;-)

        • YeGoblynQueenne 4 days ago

          I think the OP may be referring to significantly divergent "Prologs" like Visual Prolog (a.k.a. Turbo Prolog) and Ciao.

    • infradig 5 days ago

      "It's not meant to be taken literally; it refers to any language of logic programming." Apologies to Monty Python.

    • Avshalom 4 days ago

      Probably because neither of them has much in the way of library or community support.

gorkempacaci 5 days ago

The generated programs are only technically Prolog programs. They use CLP(FD), which makes them constraint programs. Plain Prolog programs are quite a bit trickier, with termination issues. I wouldn't have nitpicked if it weren't in the title.

Also, the experiment method has some flaws. Problems are hand-picked out of a random subset of the full set. Why not run the full set?

  • YeGoblynQueenne 4 days ago

    >> Why not run the full set?

    Most likely cherry-picking. The approach is only going to work well in domains where Prolog is commonly used to write solutions to problems, like logical puzzles or constraint problems etc.

  • bbor 4 days ago

    Yeah I’m a huge proponent of this general philosophy, but after being introduced to Prolog itself for a third of a semester back in undergrad I decided to stay far, far away. The vision never quite came through as clearly as it did for the other wacky languages, namely the functional family (Lisp and Haskell in my case). I believe you on the fundamental termination issues, but just basic phrasing seemed unnecessarily convoluted…

    Since you seem like an expert: is there a better technology for logical/constraint programming? I loved predicate calculus in school so it seems like there should be something out there for me, but so far no dice. This seems kinda related to the widely-discussed paradigm of “Linear Programming”, but I’ve also failed to find much of interest there behind all the talk of “Management Theory” and detailed mathematical efficiency comparisons.

    I guess Curry (from above) might be the go-to these days?

    • gorkempacaci 4 days ago

      Curious to know what part of syntax you found convoluted. If you remember any examples I’d appreciate it. Maybe you want a constraint programming environment instead. As example check out Conjure from St Andrews: https://conjure.readthedocs.io/en/latest/tutorials-notebook....

      More generally there are the theorem provers like Coq, etc., but their use cases are even more specific.

fsndz 5 days ago

This is basically the LLM-modulo approach recommended by Prof. Subbarao Kambhampati. Interesting, but it mostly only works for problems that have some math/first-order logic puzzle at their heart. It will fail at improving performance on ARC-AGI, for example... It's difficult to mimic reasoning by basic trial and error and then hoping for the best: https://www.lycee.ai/blog/why-sam-altman-is-wrong

pjmlp 5 days ago

So we are back to Japanese Fifth Generation plan from 1980's. :)

  • linguae 5 days ago

    This time around we have all sorts of parallel processing capabilities in the form of GPUs. If I recall correctly, the Fifth Generation project envisioned highly parallel machines performing symbolic AI. From a hardware standpoint, those researchers were way ahead of their time.

    • nxobject 5 days ago

      And they had a self-sustaining video game industry too... if only someone had had the wild thought of implementing perceptrons and tensor arithmetic on the same hardware!

  • tokinonagare 5 days ago

    Missing some LISP but yeah it's funny how old things are new again (same story with wasm, RISC archs, etc.)

luke_galea 4 days ago

Super cool. I dig generating rules from within the LLM, but I'm not sure Prolog is the right choice in 2024.

I love Prolog and had the opportunity to use it "in anger" years ago to handle temporal logic in a scheduling app. Great experience, but I've found that more modern rules engines like Drools (anything using the Rete algorithm) are a MUCH better fit for most use cases these days.

If you are into this stuff, you might like the talk I gave on rules engines, prolog and how it led to erlang & elixir. https://www.youtube.com/watch?v=mDnntrhk-8g&t=1s

  • OutOfHere 4 days ago

    The choice is limited to the languages that LLMs already know really well. Fwiw, here is GPT's self-rating out of 10:

    Python: 9, Prolog: 7, Datalog: 6, Mercury: 6, Curry: 5, Drools: 4

    This is not even the full set of what the LLM might like to use. It may also like pyDatalog, SymPy, Haskell, Clingo ASP, ECLiPSe CLP, etc.

  • YeGoblynQueenne 4 days ago

    Drools is a rules engine, but Prolog is a fully-fledged, general-purpose language, yes? For example SWI-Prolog has a bunch of http libraries and can be used as a web development language (using Prolog's clause database itself in place of some SQL). I don't think that'd be a sensible use case for Drools.
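
    For a flavour, a minimal SWI-Prolog web endpoint looks something like this (the libraries are real; the handler is a toy of mine):

      :- use_module(library(http/thread_httpd)).
      :- use_module(library(http/http_dispatch)).
      :- use_module(library(http/html_write)).

      :- http_handler(root(hello), hello_page, []).

      hello_page(_Request) :-
          reply_html_page(title('Hello'),
                          [ p('Served straight from the clause database.') ]).

      % Start with: ?- http_server(http_dispatch, [port(8080)]).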

    • riku_iki 3 days ago

      You can probably use Drools from Java as a library.

a1j9o94 5 days ago

I tried an experiment with this using a Prolog interpreter with GPT-4 to try to answer complex logic questions. I found that it was really difficult because the model didn't seem to know Prolog well enough to write a description of any complexity.

It seems like you used an interpreter in the loop, which is likely to help. I'd also be interested to see how o1 would do on a task like this, or whether it even makes sense to use something like Prolog if the models can backtrack during the "thinking" phase.

  • hendler 5 days ago

    I also wrote an LLM-to-Prolog interpreter for a hackathon, called "Logical". With a few hours' effort I'm sure it could be improved.

    https://github.com/Hendler/logical

    I think while LLMs may approach completeness here, it's good to have an interpretable system to audit/verify and reproduce results.

  • lukasb 5 days ago

    I bet one person could probably build a pretty good synthetic NL->Prolog dataset. The ROI for paying that person would be high if you were building a foundation model (i.e. benefits beyond being able to output Prolog).

    • mcswell 4 days ago

      I'm not exactly sure what you're referring to, but Fernando Pereira's dissertation included a natural language (English) program for querying a "database". Both the NLP part and the database were written in Prolog. Mid-1980s, I think. Of course both parts were "toy" in the sense that they would need to be hugely expanded to be of real world use, but they did handle some interesting things (like quantifiers, graded adjectives etc.).

TyrianPurple 4 days ago

Ah Prolog. So the full circle back to expert systems is complete now, yeah?

  • namaria 4 days ago

    They tried brute forcing it, now it's back to programming it directly.

UniverseHacker 5 days ago

I think this general idea is going to be the key to really making LLMs widely useful for solving real problems.

I’ve been playing with using GPT-4 together with the Wolfram Alpha plugin, and the combo of the two can reliably solve difficult quantitative problems that neither can individually by working together, much like a human using a calculator.

DeborahWrites 5 days ago

You're telling me the seemingly arbitrary 6 weeks of Prolog on my comp sci course 11yrs ago is suddenly about to be relevant? I did not see this one coming . . .

  • aitchnyu 4 days ago

    I rolled my eyes when iterating a list meant splitting it into first and rest and recursing on the rest. I would do side projects in Scala a few years later out of my own interest.

  • fullstackwife 5 days ago

    Is there any need to look at this generated Prolog code?

nonamepcbrand1 5 days ago

Is this why GitHub CodeQL and Copilot assistance work better for everyone? Basically, CodeQL uses a variant of Prolog (Datalog) to query source code to generate better results.

baq 5 days ago

Patiently waiting for Z3-guided generation, but this is a welcome, if obvious, development. Results are a bit surprising and sound too optimistic, though.

de6u99er 5 days ago

I always thought that Prolog is great for reasoning in the semantic web. It doesn't surprise me that LLM people stumble on it.

arjun_khamkar 5 days ago

Would creating a Prolog dataset be beneficial, so that future LLMs can be trained on it and would then be able to output Prolog code?

  • shchegrikovich 4 days ago

    I discussed this a few weeks back. The idea is to take a Python dataset, as Python is the most popular language, and write a transpiler to Prolog with the help of LLMs. So, creating this synthetic dataset is not a huge problem.

ianbicking 5 days ago

I made a pipeline using Z3 (another prover language) to get LLMs to solve very specific puzzle problems: https://youtu.be/UjSf0rA1blc (and a presentation: https://youtu.be/TUAmfi8Ws1g)

Some thoughts:

1. Getting an LLM to model a problem accurately is a significant prompting exercise. Bridging casual logical statements and formal logic is difficult. E.g., "or" statements in English usually mean "xor" in logic.

2. Domains usually have their own language expectations. I was doing Zebra puzzles (https://en.wikipedia.org/wiki/Zebra_Puzzle) and they have a very specific pattern and language. I don't think it's fair to really call it intuitive or even entirely unambiguous, it's something you have to learn. The LLM has to learn it too. They have seen this kind of puzzle (and I think most can reproduce the original Zebra puzzle from memory), but they lack a really firm familiarity.

3. Arguably some of the familiarity is about contextualizing the problem, which is itself a prompting task. People don't naturally solve Zebra puzzles that we find organically, it's something we encounter in specific contexts (like a puzzle book) which is not so dissimilar from prompting.

4. Incidentally Claude Sonnet 3.5 has a substantial lead. And GPT o1 is not much better than GPT 4o. In some sense I think o1 is a kind of self-prompting, an attempt to create its own context; so if you already have a well-worded prompt with instructions then o1 isn't that good at improving performance over 4o.

5. A lot of the prompting is really intended to slow down the LLM, to keep it from jumping to conclusions or solving a task too quickly (and incorrectly). Which again is a case of the prompt doing what o1 tries to do generally.

6. I'm not sure what tasks call for this kind of logical reasoning. Not that I don't think they exist, I just don't know how to recognize them. Planning tasks? Highly formalized and artificially constructed problems don't seem all that interesting... and the whole point of adding an LLM to the process is to formalize the informal.

7. Perhaps it's hard to see because real-world problems seldom have conveniently exact solutions. But that's not a blocker... Prolog (and Z3) can take constraints as a form of elimination, providing lists of possible answers, and maybe just reducing the search space is enough to move forward on some kinds of problems. (There's a small sketch of this after the list.)

8. For instance when I give my pipeline really hard Zebra problems it usually doesn't succeed; one bug in one rule will kill the whole thing. Also I think the LLMs have a hard time keeping track of large problems; a context size problem, even though the problems don't approach their formal context limits. But I can imagine building the pipeline so it also tries to mark low-confidence rules. Given that I can imagine removing those rules, sampling the resulting (non-unique, sometimes incorrect) answers and using that to revisit and perhaps correct some of those rules.
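
Here's the small sketch I mentioned in point 7, using SWI-Prolog's CLP(FD) (a toy, not from my pipeline):

  :- use_module(library(clpfd)).

  % Each constraint eliminates candidates before any search happens:
  % ?- X in 1..10, X #\= 3, X #> 5, fd_dom(X, Dom).
  % Dom = 6..10.
  % Enumerate whatever survives:
  % ?- X in 1..10, X #\= 3, X #> 5, findall(X, label([X]), Xs).
  % Xs = [6, 7, 8, 9, 10].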

Really I'd be most interested to hear thoughts on where this logic programming might actually be applied... artificial puzzles are an interesting exercise, but I can't really motivate myself to go too deep.

  • gbanfalvi 4 days ago

    > 6. I'm not sure what tasks call for this kind of logical reasoning

    Basically any tasks that fulfill legal or business requirements? Both companies and governments are rushing to put LLMs into anything they can to avoid paying people. It’s vital to ascertain that, say, a benefits application is assessed properly and the LLM doesn’t hallucinate its way into an incorrect decision.

    I’d question if we really need LLMs in many of the places we’re sticking them at all (or if it’ll even be cheaper), but that’s more a flawed human decision.

    • ianbicking 4 days ago

      I think the best way to verify if a task could benefit from this is to try it by hand. What would a logical representation of legal requirements look like? If we can't model it by hand then we shouldn't expect an LLM to be able to do it either.

      Probably 15 years ago I recall talking with someone at Columbia University who was working on an NLP project to help defendants understand the full implications of a plea deal. For instance if you plead guilty to a misdemeanor, what are all the implications elsewhere for someone who has that on their record. Or if you are on parole and plead guilty, the plea could have an effect on your parole.

      The result might look a little more like search than logic. Like "rule 39.21 applies when (logical condition)". But then I can imagine growing the logical conditions... maybe you start with the most obvious formal definitions like crime severity, but then start to pull in other definitions as formal logical states as you see where that leads.

  • slig 4 days ago

    Thanks, really interesting. I'm going to watch your videos and learn something, since I'm on the other side of the fence: I'm creating Zebra Puzzles algorithmically [1] and then feeding their raw and generic rules ("The person who has Dog=Doberman lives next to the person who has House=Green") to an LLM to get the English representation of a thematic puzzle.

    [1]: https://www.zebrapuzzles.com/

  • shchegrikovich 4 days ago

    I think another problem which we can try to solve with this approach is factual verification.

lynx23 4 days ago

I implemented Prolog and Z3 as a function tool for my little OpenAI Assistants API client. Z3 (SMT-LIB, actually) seemed even more promising. The model speaks both languages. However, I haven't found any convincing use cases yet. But it seems a logical extension of the idea to provide a programming language via a function tool to solve problems. So you start thinking "What else has text input and output, and a powerful engine in between?"

treetalker 4 days ago

Does anyone know why US attorneys and law firms are not using Prolog-based apps to automate the low-hanging fruit of issue-spotting?

  • samatman 4 days ago

    Law was particularly badly burned by the hype wave of "expert systems" in the 1970s and 80s, many of them coded in Prolog.

    https://en.wikipedia.org/wiki/Expert_system

    The Wiki link is nearly hagiographic in its studied avoidance of the topic of how the field crashed and burned, and the term "expert system" fell into disrepute, but these are things which happened.

    Which isn't to say that the legal field can't benefit from software which uses Prolog, in fact, I have a strong hunch that the number of such products currently in use is not zero.

    But if you wrote a new one, you would do well to make sure that no senior partners hear the word "Prolog" in the sales pitch.

  • sgdfhijfgsdfgds 4 days ago

    Because Prolog is difficult, and expressing fuzzy real-world facts and nuances in it is harder.

bytebach 5 days ago

An application I am developing for a customer needed to read constraints around clinical trials and essentially build a query from them. Constraints involve prior treatments, biomarkers, type of disease (cancers) etc.

Using just an LLM did not produce reliable queries, despite trying many many prompts, so being an old Prolog hacker I wondered if using it might impose more 'logic' on the LLM. So we precede the textual description of the constraints with the following prompt:

-------------

Now consider the following Prolog predicates:

biomarker(Name, Status) where Status will be one of the following integers -

Wildtype = 0 Mutated = 1 Methylated = 2 Unmethylated = 3 Amplified = 4 Deleted = 5 Positive = 6 Negative = 7

tumor(Name, Status) where Status will be one of the following integers if known, else left unbound -

Newly diagnosed = 1 Recurrence = 2 Metastasized = 3 Progression = 4

chemo(Name)

surgery(Name) Where Name may be an unbound variable

other_treatment(Name)

radiation(Name) Where Name may be an unbound variable

Assume you are given predicate atMost(T, N) where T is a compound term and N is an integer. It will return true if the number of 'occurrences' of T is less than or equal to N, else it will fail.

Assume you are given a predicate atLeastOneOf(L) where L is a list of compound terms. It will succeed if at least one of the compound terms, when executed as a predicate returns true.

Assume you are given a predicate age(Min, Max) which will return true if the patient's age is in between Min and Max.

Assume you have a predicate not(T) which returns true if predicate T evaluates false and vice versa. i.e. rather than '\\+ A' use not(A).

Do not implement the above helper functions.

VERY IMPORTANT: Use 'atLeastOneOf()' whenever you would otherwise use ';' to represent 'OR'. i.e. rather than 'A ; B' use atLeastOneOf([A, B]).

EXAMPLE INPUT: Patient must have recurrent GBM, methylated MGMT and wildtype EGFR. Patient must not have mutated KRAS.

EXAMPLE OUTPUT: tumor('gbm', 2), biomarker('MGMT', 2), biomarker('EGFR', 0), not(biomarker('KRAS', 1))

------------------

The Prolog predicates, when evaluated, generate the required underlying query (of course the Prolog is itself a form of query).

Anyway - the upshot was a vast improvement in the accuracy of the generated query (I've yet to see a bad one). Somewhere in its bowels, being told to generate Prolog 'focused' the LLM. Perhaps LLMs are happier with declarative languages rather than imperative ones (I know I am :) ).
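
For reference, the helper predicates the prompt assumes could be sketched along these lines (illustrative guesses, not our production code):

  % atLeastOneOf(+Goals): succeeds if at least one goal in the list succeeds.
  atLeastOneOf([G|Gs]) :- ( call(G) -> true ; atLeastOneOf(Gs) ).

  % atMost(+T, +N): succeeds if T has at most N solutions.
  atMost(T, N) :- findall(T, call(T), Sols), length(Sols, Count), Count =< N.

  % not/1 is essentially the standard \+/1 under a friendlier name.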

  • int_19h 4 days ago

    I find that having GPT-4 write SQL queries to query the data source as needed to solve a complex task step-by-step also works pretty well (and you can give it the schema in the form of CREATE TABLE statements). It's not exactly good at writing fast queries, but it can do some hella complex ones with nesting and joins to get exactly what it needs in one go.

  • shchegrikovich 4 days ago

    I agree; we can do much more with LLMs if we focus on declarative-style languages.

  • worldsayshi 4 days ago

    This sounds like a very important observation.

johnisgood 4 days ago

ChatGPT seems to do awfully with Prolog. Do you guys have any experience with using an LLM to write Prolog?

  • tannhaeuser 4 days ago

    See my top-level comment for an example. Though different, there's also casual experimentation described in [1] (and additional posts linking to academic research on the SWI Prolog forum).

    Do you mind sharing your experience with ChatGPT (which version)?

    [1]: https://swi-prolog.discourse.group/t/chatgpt-prompts-prolog-...

    • johnisgood 4 days ago

      I do not think that I have the conversation / chat anymore, but I was trying to get it to make a monthly schedule for N workers with specific constraints[1]. I thought Prolog was suitable for this (is it?), but the generated code got stuck / hung up.

      [1] https://news.ycombinator.com/item?id=41756679
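
      (It is a good fit, for what it's worth, via constraint programming. A minimal scheduling sketch using SWI-Prolog's clpfd library, with made-up shift codes and a toy coverage constraint:)

          :- use_module(library(clpfd)).

          % Toy model: one shift per worker per day; 0 = off, 1 = day, 2 = night.
          schedule(Workers, Days, Rows) :-
              length(Rows, Workers),
              maplist(worker_row(Days), Rows),
              transpose(Rows, Columns),        % one column per day
              maplist(covered, Columns),
              append(Rows, Vars),
              labeling([], Vars).

          worker_row(Days, Row) :-
              length(Row, Days),
              Row ins 0..2.

          % Every day needs at least one day-shift and one night-shift worker.
          covered(Shifts) :-
              maplist(flag(1), Shifts, DayFlags),
              sum(DayFlags, #>=, 1),
              maplist(flag(2), Shifts, NightFlags),
              sum(NightFlags, #>=, 1).

          flag(Shift, X, B) :- X #= Shift #<==> B.

      A query like ?- schedule(4, 7, Rows). then enumerates week-long rosters for four workers; real-world constraints (rest periods, fairness, max consecutive nights) are added in the same style.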

shchegrikovich 4 days ago

The draft version of this blog was called 'Renaissance of Programming Languages'. With so many hidden gems, we are re-discovering the full potential of formal languages.

timonoko 4 days ago

Interest in Prolog always ends with the "!". It is ugly, like a smack in the head: "you are thinking too much".

  • sgdfhijfgsdfgds 4 days ago

    The course I did at uni, decades ago now, set us a Prolog assessment where we were not allowed to use the cut operator.

    Code that backtracks is hard to reason about.
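
    (For readers who haven't met it: "!" is the cut, which throws away backtracking choice points. A classic minimal example of why it earns that reputation:)

        % max/3 with a "red" cut: commit to the first clause once X >= Y.
        max(X, Y, X) :- X >= Y, !.
        max(_, Y, Y).

        % ?- max(5, 3, M) gives only M = 5: the cut prunes the bogus M = 3
        % that the second clause would otherwise offer on backtracking.
        % But ?- max(5, 3, 3) still wrongly succeeds, because the first
        % clause's head fails to unify before the cut is ever reached.
        % Reasoning about cuts means reasoning about control, not just logic.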

mise_en_place 5 days ago

I really enjoyed tinkering with languages like Prolog and Coq. Interactive theorem proving with LLMs would be awesome to try out, if possible.

YeGoblynQueenne 5 days ago

That's not going to work. Garbage in - Garbage out is success-set equivalent to Garbage in - Prolog out.

Garbage is garbage and failure to reason is failure to reason no matter the language. If your LLM can't translate your problem to a Prolog program that solves your problem, Prolog can't solve your problem.

  • Philpax 5 days ago

    This is a shallow critique that does not engage with the core idea. Specifying the problem is not the same as solving the problem.

    • YeGoblynQueenne 4 days ago

      I've programmed in Prolog for ~13 years and my PhD thesis is in machine learning of Prolog programs. How deep would you like me to go?

      • Philpax 4 days ago

        As deep as is required to actually make your argument!

        • YeGoblynQueenne 4 days ago

          You'll have to be more specific than that. For me what I point out is obvious: Prolog is not magick. Your program won't magickally reason if you write it in Prolog, much less reason correctly. If an LLM translates a problem to the wrong Prolog program, Prolog won't magickally turn it into a correct program. And that's just rephrasing what I said in my comment above. There's really not much more to say.

          Here's just one more observation: the problems where translating reasoning to Prolog will work best are problems for which there are a lot of examples of Prolog to be found on the web, e.g. wolf-cabbage-goat problems and the like. With problems like that it is much easier for an LLM to generate a correct translation of the problem to Prolog and get a correct solution, just because there are lots of examples. But if you choose a problem that's rarely attacked with Prolog code, like, say, some mathematical problem that arises in nuclear physics, then an LLM will be much more likely to generate garbage Prolog, while e.g. Fortran would be a better target language. From what I can see, the papers linked in the article above concentrate on the Prolog-friendly kind of problem, like logical puzzles and the like. That smells like cherry picking to me, or just good old confirmation bias.

          Again, Prolog is not magick. The article above and the papers it links to seem to take this attitude of "just add Prolog" and that will make LLMs suddenly magickally reason with fairy dust on top. Ain't gonna happen.

          • sgdfhijfgsdfgds 4 days ago

            > Again, Prolog is not magick. The article above and the papers it links to seem to take this attitude of "just add Prolog" and that will make LLMs suddenly magickally reason with fairy dust on top. Ain't gonna happen.

            It frightens me that HN is so popular with people who will strain credulity in this regard. It's as if a whole decade of cosmic-ordering wishes about crypto has led the same people to wish for new things, as if the wishes themselves were evidence of future outcomes.

          • tannhaeuser 3 days ago

            Of course Prolog is no magic, but I'm sure you know the arguments in favor of having LLMs generate Prolog are based on the observation that prompting or otherwise making an LLM perform chain-of-thought reasoning results in demonstrable experimental improvements, with [1] being the canonical paper; using Prolog, with its unique characteristics and roots in NLP and logic, is an extension of that idea termed program-as-thought. OpenAI's latest o1 model makes heavy use of CoT internally until it returns an answer to the user.

            [1]: https://arxiv.org/abs/2201.11903

            • YeGoblynQueenne 3 days ago

              My thoughts on CoT and the extent to which it "elicits reasoning" align almost perfectly with the criticism of Rao Kambhampati and his students:

              https://arxiv.org/abs/2405.04776

              Their argument is that CoT can only improve performance of LLMs in reasoning tasks when the prompter already knows the answer and can somehow embed it in their prompt. The paper I link above supports this intuition with empirical results, summarised in the abstract as follows:

              While our problems are very simple, we only find meaningful performance improvements from chain of thought prompts when those prompts are exceedingly specific to their problem class, and that those improvements quickly deteriorate as the size n of the query-specified stack grows past the size of stacks shown in the examples. We also create scalable variants of three domains commonly studied in previous CoT papers and demonstrate the existence of similar failure modes. Our results hint that, contrary to previous claims in the literature, CoT's performance improvements do not stem from the model learning general algorithmic procedures via demonstrations but depend on carefully engineering highly problem specific prompts.

              And if that's the case, and I need to know the solution to a reasoning task before I can prompt an LLM to solve it, then why do I need to prompt an LLM to solve it? Or, if I'm just asking an LLM to generate the Prolog code I could write myself, then what's the point of that? As I argue in another comment, an LLM will only do well in generating correct code if it has seen a sufficient number of examples of the code I'm asking it to generate anyway. So I don't think that CoT, used to generate Prolog, really adds anything to my capability to solve problems by coding in Prolog.

              I have no idea how o1 works internally and I prefer not to speculate, but it doesn't seem to be some silver bullet that will make LLMs capable of reasoning.

      • MrLeap 4 days ago

        I'm excited for the possibility of an escalation after reading this.

      • upghost 4 days ago

        can you link your thesis? That sounds awesome!!

        • YeGoblynQueenne 4 days ago

          I can't seem to find it on the web. I don't think it's published online yet.

          • upghost 4 days ago

            that's ok (T_T)

            I'll just imagine what it could've been like! I bet it was awesome!

            • YeGoblynQueenne 3 days ago

              It's a PhD thesis... It was put to me that nobody will ever read it and nobody cares what's in it, but it's my thesis and it matters to me :)

              Edit: you can see some of the results here btw:

              https://github.com/stassa/louise

              • upghost 3 days ago

                Amazing!! There was a chapter on this in my Prolog AI book [1], but I quickly realized it was a superset of the difficulty of high-level Prolog programming, high-level symbolic AI, and complex composition, and I knew it would have to wait until I graduated from white-belt Prolog, so I am incredibly excited to see a real project using this technique!! I remember when I went for my master's degree in AI/ML, the industry was just moving away from decision trees into NNs, but in general we were already well into the shift to subsymbolic "function approximator" style "AI". In fact, the term "AI" was generally poo-pooed in favor of the more technical term "machine learning". I was heartbroken, because I was really disappointed to see that AI was less about beautiful programs and more about cleaning and cramming data into a network.

                So in a lot of ways, seeing this restores some faith in humanity. Great work, and thanks for giving me a chance to look at it!

                With all the hot news in Prolog these days I'd think you should submit this! But also I hate submitting any of my own work and prefer to live in the comment section so I'd understand if you feel the same way.

                [1] https://books.google.com/books/about/Prolog_Programming_for_...

                • YeGoblynQueenne 2 days ago

                  Oh, I didn't remember that Bratko had a chapter on ILP. I've met him at a couple of ILP conferences, so I knew he'd published work in ILP [1]. The techniques described in the book are quite a bit older and, to be honest, rather limited, in particular with respect to learning recursion. There's a New Wave of ILP nowadays, however, and a flourishing of new approaches that followed from MIL, which kinda threw open the gates.

                  There's a recent synopsis of the latest advances in ILP here:

                  https://arxiv.org/pdf/2102.10556

                  >> With all the hot news in Prolog these days I'd think you should submit this!

                  You mean to HN? I guess I could. I tend to think HN will not find it particularly interesting. Go ahead and submit it yourself though if you feel like it :)

                  _________________

                  [1] One of those times Bratko told me that I'm good with Prolog. I replied that I'm going to be saying he said that to everyone who will listen for the rest of my life XD

                  • upghost 18 hours ago

                    YAY!! Thank you! I feel like there is "so much Prolog" that is not really widely known about; it's not like it's burning up the blogosphere, unfortunately. I had no idea the ILP methods in the Bratko book were already so dated. Are these Prolog-specific conferences you go to, or general ML ones...? Where do all the cool kids hang out??

    • sgdfhijfgsdfgds 4 days ago

      It's actually pretty concise: Prolog isn't all that easy! That's why people don't use it.

      Competent CS students fail Prolog courses all the time. A lot of Prolog on the internet will either be wrong, or it will be so riddled with unnecessary/unclear backtracking that an LLM won't be able to make any more sense of it than it does of plain words.

  • OutOfHere 4 days ago

    Why do you think that the LLM cannot translate the problem into a program? Granted, it has been said that the Curry or Mercury languages may be better than Prolog at times with their functional logic programming features. Ultimately it's best if the LLM has the freedom to decide what's best to use for the problem from what it knows.

    • YeGoblynQueenne 4 days ago

      >> Why do you think that the LLM cannot translate the problem into a program?

      No, the question is why do you think I think that? I never said anything like that.

  • mountainriver 4 days ago

    Agree, reasoning has to come from within the model. These are hacks that only work in specific use cases.

    • YeGoblynQueenne 4 days ago

      Yep. Specifically, ones for which there are lots of examples of Prolog programs on the web, so an LLM can learn the correct translation more easily.

sgdfhijfgsdfgds 4 days ago

This is magical thinking. If an LLM can’t reason, it isn’t going to be able to express itself clearly in Prolog.

Suggesting otherwise is intellectually on the same level as trying to make up a small consistent per-sale loss with volume.

  • sgdfhijfgsdfgds 4 days ago

    I know we're not supposed to comment on downvotes, but I really question the logic of anyone who thinks that a thing that cannot reason can write a Prolog program that is really going to be much more successful.

    Prolog is actually pretty difficult to do right, even if you are skilled. It actually requires reasoning. You don't just write out facts and have the system do the work. And many of the examples in the training set will be wrong, simplistic, or full of backtracking that is itself difficult for a person to comprehend at a glance; why should an LLM be better at it? There can't even be that much data in the training set.
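
    (A stock illustration of the point, as a hypothetical two-clause example assuming some parent/2 facts: the clauses below are a perfectly reasonable logical definition of ancestry, yet as a Prolog program they loop forever, because depth-first resolution hits the left recursion before the base case.)

        % Declaratively correct, procedurally broken: the recursive call
        % comes first, so Prolog's depth-first search never terminates.
        ancestor(X, Y) :- ancestor(X, Z), parent(Z, Y).
        ancestor(X, Y) :- parent(X, Y).

        % Reordering clauses and goals fixes it; that reordering is
        % exactly the kind of reasoning the facts alone don't give you.
        ancestor2(X, Y) :- parent(X, Y).
        ancestor2(X, Y) :- parent(X, Z), ancestor2(Z, Y).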

    Ultimately, though: stop believing in magical solutions to fundamental problems. This is nuts.

    • shchegrikovich 4 days ago

      I have another example: only a few people believed that you could apply 'a simple next-token prediction algorithm' and achieve what we now know as LLMs. From my perspective, in the past few years we've tried a lot of different approaches to improve LLM reasoning; some of them were good, others not so good. We need to keep trying and researching. 'Prolog + LLM' is not the answer to all questions, but it looks like a good step to move us forward.

      • sgdfhijfgsdfgds 4 days ago

        > 'Prolog + LLM' is not the answer to all questions, but it looks like a good step to move us forward.

        Or it's a thing people can write papers about, and chase reproducibility on afterwards, as the shell game of claiming LLM reasoning continues.

tannhaeuser 4 days ago

While the paper is about helping an LLM by using Prolog, to give an idea of a realistic application for generating Prolog, here are the zero-shot prompt and response of a basic local 8B Instruct LLM, quantized to run in 12GB, on a classic 1998 AI Planning Competition "Logistics" problem re-formulated in English from PDDL. The prompt even contains typos, and the system prompt portion is not shown. The result is quite impressive: the generated state and action predicates can be run right away in the browser on e.g. Quantum Prolog, much like the container planning problem [1], with the main clause replaced by a generic STRIPS-like indeterministic/backtracking planning routine (a sketch of that routine follows the response below). Additional training and/or in-context techniques and prompt reformulation can improve the result further.

Since the action predicates use retract/assert primitives to change state, Quantum Prolog's vardb feature [2] can also be put to good use by transforming the resulting program into an equivalent form that can be backtracked over, even in parallel, to perform combinatorial search (whereas assertz/retract in Prolog is destructive and cannot be "undone" automatically to search new states).

But as you can also see from the plan description text, a natural-language description of the problem that an LLM can comprehend only helps so much, since it's highly repetitive in stating hundreds of facts about e.g. current and desired package locations. This is where Prolog comes in, since you can state those ground facts in clause format, or a DSL format, or a database (as in realistic applications, where Prolog facts are generated or Prolog is deployed alongside a backend for integration with other services, e.g. messaging in a logistics app).

Still, the combination of an LLM (excelling at language tasks) and Prolog (for actually searching a combinatorial space spanned by clauses extracted in a natural way), much as a human reaches for a pocket calculator for certain tasks, sure is powerful, and has also been explored recently in one form or another in academia, e.g. [3], also referenced elsewhere in this thread. And this isn't even using tool calling, iterative re-prompting, etc.

[1]: https://quantumprolog.sgml.net/container-planning-demo/part1...

[2]: https://quantumprolog.sgml.net/container-planning-demo/part2...

[3]: "LLMs Can't Plan, But Can Help Planning in LLM-Modulo Frameworks" at <https://arxiv.org/abs/2402.01817>

PROMPT (submitted as-is, including typos):

Write a Prolog program to solve the following logistics problem: Packages must be transported from and to locations inside the six cities named city1, city2, city3, city4, city5, and city6) using trucks, and between those citiesusing airplanes. Each city has two locations: city1 has locations city11 and city12, city2 has locations city21 and city22, city3 has locations city31 and city32, city4 has locations city41 and city42, city5 has locations city51 and city52, andcity6 has locations city61 and city62. Initially, package1 is located at city21, package2 is located at city12, package3 is located in city11, package4 is located in city11, package5 is located in city42, and package6 is located in city31. The aim is to move package1 to city21, package2 to city62, package3 to city61, package4 to city32, package5 to city62, and package6 to city12. In each city, at the locations city11, city21, city31, city41, city51, and city61, there's a truck to transport packages to other locations within the city. There are moreover two airplanes, named plane1 and plane2, located at the airport of city1 at location11 and at the airport of city2 at location22. The airports of the other cities have the locations location32 in city3, location42 in city4, location52 in city5, and location62 in city6. To move packages, a sequence of actions as follows must be performed until the packages have reached their destination. The action load_truck will place a package in a truck at a given location. The action unload_truck will unload a package from a truck at a given location. The action fly_airplane will fly an airplane from one airport to another. The action unload_airplane will unload a package from an airplane at a given location. The action load_airplane will load an airplane with a package at a given location.

RESPONSE:

    % State facts
    
    % Initial state of the packages
    package(package1, city21).
    package(package2, city12).
    package(package3, city11).
    package(package4, city11).
    package(package5, city42).
    package(package6, city31).
    
    % Initial state of the trucks
    truck(city11, city11).
    truck(city21, city21).
    truck(city31, city31).
    truck(city41, city41).
    truck(city51, city51).
    truck(city61, city61).
    
    % Initial state of the airplanes
    airplane(plane1, city11).
    airplane(plane2, city22).
    
    % Action clauses
    % load_truck :- load a package into a truck at a given location
    load_truck :- 
        % If the given location is valid for loading a truck
        location(Loc, City), 
        package(Pkg, Loc), 
        truck(T, Loc),
        % Remove the package from its current location
        retract(package(Pkg, Loc)), 
        % Add the package to the truck
        assertz(package(Pkg, T)), 
        % Assert the new state
        assertz(truck(T, Loc)).
    
    % unload_truck :- unload a package from a truck at a given location
    unload_truck :-
        % If the given location is valid for unloading a truck
        location(Loc, City), 
        package(Pkg, T), 
        truck(T, Loc),
        % Remove the package from the truck
        retract(package(Pkg, T)), 
        % Add the package to its destination location
        assertz(package(Pkg, Loc)),
        % Assert the new state
        assertz(truck(T, Loc)).
    
    % fly_airplane :- fly an airplane from one airport to another
    fly_airplane :-
        % If the airplane is at a valid airport
        airplane(plane, Loc),
        % If there is a valid destination airport
        airport(Dest, DestCity),
        % Remove the airplane from its current location
        retract(airplane(plane, Loc)), 
        % Add the airplane to the destination airport
        assertz(airplane(plane, Dest)).
    
    % unload_airplane :- unload a package from an airplane at a given location
    unload_airplane :-
        % If the given location is valid for unloading an airplane
        location(Loc, City), 
        package(Pkg, A),
        airplane(plane, A),
        % Remove the package from the airplane
        retract(package(Pkg, A)), 
        % Add the package to its destination location
        assertz(package(Pkg, Loc)), 
        % Assert the new state
        assertz(airplane(plane, Loc)).
    
    % load_airplane :- load a package into an airplane at a given location
    load_airplane :-
        % If the given location is valid for loading an airplane
        location(Loc, City), 
        package(Pkg, Loc),
        airplane(plane, A),
        % Remove the package from its current location
        retract(package(Pkg, Loc)),
        % Add the package to the airplane
        assertz(package(Pkg, A)),
        % Assert the new state
        assertz(airplane(plane, Loc)).
    
    % Target state
    target :-
        package(package1, city22),
        package(package2, city62),
        package(package3, city61),
        package(package4, city32),
        package(package5, city62),
        package(package6, city12).
    
    % Main predicate to solve the problem
    solve :- ...
    % BOGUS MAIN AND HELPER CLAUSES OMITTED
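
For a sense of the "generic STRIPS-like indeterministic/backtracking planning routine" mentioned above, here is a minimal sketch (not Quantum Prolog's actual routine) that threads an explicit state term instead of using destructive assertz/retract; goal/1 and action/3 are hypothetical domain hooks:

    % Iterative deepening over plan length: length/2 enumerates ever-longer
    % plan skeletons, so the shortest plan is found first.
    plan(State0, Plan) :-
        length(Plan, _),
        steps(State0, Plan).

    steps(State, []) :-
        goal(State).                        % hypothetical goal test
    steps(State0, [Action|Actions]) :-
        action(Action, State0, State1),     % hypothetical action relation
        steps(State1, Actions).

Because the state is an ordinary term, backtracking undoes "state changes" for free; this is the same problem the vardb transformation addresses for assertz/retract-style programs.
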
anthk 5 days ago

Use Constraint Satisfaction Problem solvers. That comes easily with Common Lisp.