• Prolog totally missed the AI Boom

    From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sat Feb 22 13:05:41 2025
    From Newsgroup: comp.lang.prolog


    Inductive logic programming at 30
    https://arxiv.org/abs/2102.10556

    The paper contains not a single reference to autoencoders!
    Still they show this example:

    Fig. 1 ILP systems struggle with structured examples that
    exhibit observational noise. All three examples clearly
    spell the word "ILP", with some alterations: 3 noisy pixels,
    shifted and elongated letters. If we were to learn a
    program that simply draws "ILP" in the middle of the picture,
    without noisy pixels and elongated letters, that would
    be a correct program.

    I guess ILP is 30 years behind the AI boom. An early
    autoencoder turned into a transformer was already reported
    here (*):

    SERIAL ORDER, Michael I. Jordan - May 1986 https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf

    Well, ILP might have its merits; maybe we should not ask
    for a marriage of LLMs and Prolog, but of autoencoders and
    ILP. But it's tricky, I am still trying to decode the
    da Vinci code of things like stacked tensors: are they
    related to k-literal clauses?
    The paper I referenced is found in this excellent video:

    The Making of ChatGPT (35 Year History) https://www.youtube.com/watch?v=OFS90-FX6pg

    --- Synchronet 3.20c-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sat Feb 22 22:51:53 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    One idea I had was that autoencoders would
    become kind of invisible, and work under the hood
    to compress Prolog facts. Take these facts:

    % standard _, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
    data(seg7, [0,0,0,0,0,0,0], [0,0,0,0,0,0,0]).
    data(seg7, [1,1,1,1,1,1,0], [1,1,1,1,1,1,0]).
    data(seg7, [0,1,1,0,0,0,0], [0,1,1,0,0,0,0]).
    data(seg7, [1,1,0,1,1,0,1], [1,1,0,1,1,0,1]).
    data(seg7, [1,1,1,1,0,0,1], [1,1,1,1,0,0,1]).
    data(seg7, [0,1,1,0,0,1,1], [0,1,1,0,0,1,1]).
    data(seg7, [1,0,1,1,0,1,1], [1,0,1,1,0,1,1]).
    data(seg7, [1,0,1,1,1,1,1], [1,0,1,1,1,1,1]).
    data(seg7, [1,1,1,0,0,0,0], [1,1,1,0,0,0,0]).
    data(seg7, [1,1,1,1,1,1,1], [1,1,1,1,1,1,1]).
    data(seg7, [1,1,1,1,0,1,1], [1,1,1,1,0,1,1]).
    % alternatives 9, 7, 6, 1
    data(seg7, [1,1,1,0,0,1,1], [1,1,1,1,0,1,1]).
    data(seg7, [1,1,1,0,0,1,0], [1,1,1,0,0,0,0]).
    data(seg7, [0,0,1,1,1,1,1], [1,0,1,1,1,1,1]).
    data(seg7, [0,0,0,0,1,1,0], [0,1,1,0,0,0,0]).

    https://en.wikipedia.org/wiki/Seven-segment_display

    Or more visually, 9 7 6 1 have variants trained:

    :- show.
    [ASCII rendering of the seven-segment glyphs for _, 0-9 and the variants of 9, 7, 6, 1]

    The autoencoder would create a latent space, an
    encoder, and a decoder. And we could basically query
    ?- data(seg7, X, Y), with X as input and Y as output.
    The variants of 9, 7, 6 and 1 were corrected:

    :- random2.
    [output: the noisy variants are mapped back to the
    standard seven-segment glyphs for 9, 7, 6, 1]

    The autoencoder might also tolerate errors in the
    input that are not in the data, giving it some inferential
    capability. And it might then choose an output that is
    again not in the data, giving it some generative
    capability.
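
    To make this concrete, here is a minimal hand-written sketch
    of such a query interface, as a standalone program (not meant
    to be loaded next to the facts above). It is not a learned
    autoencoder: the latent space is simply the digit identity
    0-9, the "encoder" picks the nearest canonical segment
    pattern by Hamming distance, and the "decoder" emits that
    canonical pattern. The canonical/2 table is the standard rows
    above; the predicate names are made up for illustration.

    % canonical seven-segment patterns, order [a,b,c,d,e,f,g]
    canonical(0, [1,1,1,1,1,1,0]).
    canonical(1, [0,1,1,0,0,0,0]).
    canonical(2, [1,1,0,1,1,0,1]).
    canonical(3, [1,1,1,1,0,0,1]).
    canonical(4, [0,1,1,0,0,1,1]).
    canonical(5, [1,0,1,1,0,1,1]).
    canonical(6, [1,0,1,1,1,1,1]).
    canonical(7, [1,1,1,0,0,0,0]).
    canonical(8, [1,1,1,1,1,1,1]).
    canonical(9, [1,1,1,1,0,1,1]).

    % Hamming distance between two 0/1 lists
    hamming([], [], 0).
    hamming([A|As], [B|Bs], D) :-
        hamming(As, Bs, D0),
        ( A =:= B -> D = D0 ; D is D0 + 1 ).

    % "encoder": noisy segment list -> latent digit
    encode(X, H) :-
        findall(D-Digit, (canonical(Digit, C), hamming(X, C, D)), Pairs),
        sort(Pairs, [_-H|_]).

    % "decoder": latent digit -> canonical segment list
    decode(H, Y) :-
        canonical(H, Y).

    % the queried relation: noisy input X, corrected output Y
    data(seg7, X, Y) :-
        encode(X, H),
        decode(H, Y).

    For example ?- data(seg7, [1,1,1,0,0,1,0], Y). yields
    Y = [1,1,1,0,0,0,0], i.e. the alternative 7 is mapped back to
    the standard 7, and inputs that never occur in the facts are
    handled the same way, which is the inferential capability
    alluded to above.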

    Bye

    See also:

    What is Latent Space in Deep Learning? https://www.geeksforgeeks.org/what-is-latent-space-in-deep-learning/

    --- Synchronet 3.20c-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sun Feb 23 18:33:51 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Somebody wrote:

    It’s a self-supervised form of ILP.
    No autoencoders anywhere at all.

    And this only proves my point: ILP doesn't solve the
    problem of making autoencoders and transformers available
    directly in Prolog, which was the issue I posted at the
    top of this thread.

    Subsequently, I would not look into ILP for Prolog
    autoencoders and transformers; that is exactly my point,
    because most likely ILP is unaware of the concept of latent
    space. Latent space has quite some advantages:

    - *Dimensionality Reduction:* It captures the essential
    structure of high-dimensional data in a more
    compact form.

    - *Synthetic Data:* Instead of modifying raw data, you can
    use the latent space to generate variations for
    further learning.

    - *Domain Adaptation:* A well-structured latent space can
    help transfer knowledge from abundant domains to
    underrepresented ones.

    If you don’t mention autoencoders and transformers at
    all, you are possibly also not aware of the above advantages
    and other properties of autoencoders and transformers.

    In ILP, most likely the concept of latent space is dormant
    or blurred, since the stance is: well, we invent predicates,
    ergo relations. There is no attempt to break down relations
    further:

    https://www.v7labs.com/blog/autoencoders-guide

    Basically autoencoders and transformers, by imposing some
    hidden layer, are further structuring relations into an
    encoder and a decoder. So a relation is seen as a join.

    The H is the bottleneck on purpose:

    relation(X, Y) :- encoder(X, H), decoder(H, Y).

    The values of H range over the latent space, which is
    invented during the learning process. It is not simply
    the input or output space.

    This design has some very interesting repercussions.

    Bye

    --- Synchronet 3.20c-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Mar 7 18:16:25 2025
    From Newsgroup: comp.lang.prolog


    The problem I am trying to address was
    already addressed here:

    ILP and Reasoning by Analogy
    Intuitively, the idea is to use what is already
    known to explain new observations that appear similar
    to old knowledge. In a sense, it is opposite of induction,
    where to explain the observations one comes up with
    new hypotheses/theories.
    Vesna Poprcova et al. - 2010
    https://www.researchgate.net/publication/220141214

    The problem is that ILP doesn't try to learn and apply
    analogies, whereas autoencoders and transformers typically
    try to "grok" analogies, so that with less training they
    can perform well in certain domains. They will do some
    inferencing on the part of the encoder, also for unseen
    input data. And they will do some generation on the part
    of the decoder, also for unseen latent space configurations
    from unseen input data. By unseen data I mean data not in
    the training set. The full context window may tune the
    inferencing and generation, which appeals to:

    Analogy as a Search Procedure
    Rumelhart and Abrahamson showed that when presented
    with analogy problems like monkey:pig::gorilla:X, with
    rabbit, tiger, cow, and elephant as alternatives for X,
    subjects rank the four options following the
    parallelogram rule.
    Matías Osta-Vélez - 2022
    https://www.researchgate.net/publication/363700634
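
    As a toy illustration of the parallelogram rule, here is a
    small Prolog sketch. The 2-D feature vectors are made-up
    placeholders (they are not from the cited study) and all
    predicate names are invented; the point is only the search
    for the candidate closest to C + (B - A).

    % hypothetical feature vectors (size, ferocity), illustrative only
    feature(monkey,   [2, 3]).
    feature(pig,      [3, 1]).
    feature(gorilla,  [5, 4]).
    feature(rabbit,   [1, 1]).
    feature(tiger,    [5, 5]).
    feature(cow,      [5, 1]).
    feature(elephant, [6, 2]).

    vec_add([], [], []).
    vec_add([A|As], [B|Bs], [C|Cs]) :- C is A + B, vec_add(As, Bs, Cs).

    vec_sub([], [], []).
    vec_sub([A|As], [B|Bs], [C|Cs]) :- C is A - B, vec_sub(As, Bs, Cs).

    sq_dist([], [], 0).
    sq_dist([A|As], [B|Bs], D) :-
        sq_dist(As, Bs, D0),
        D is D0 + (A - B) * (A - B).

    % parallelogram rule: A:B :: C:X picks the candidate X whose
    % feature vector is closest to C + (B - A)
    analogy(A, B, C, Candidates, X) :-
        feature(A, VA), feature(B, VB), feature(C, VC),
        vec_sub(VB, VA, Delta),
        vec_add(VC, Delta, Target),
        findall(D-Cand,
                ( member(Cand, Candidates),
                  feature(Cand, V),
                  sq_dist(V, Target, D) ),
                Pairs),
        sort(Pairs, [_-X|_]).

    With these toy vectors, ?- analogy(monkey, pig, gorilla,
    [rabbit, tiger, cow, elephant], X). gives X = elephant, the
    candidate closest to gorilla + (pig - monkey).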

    There are learning methods that work similarly to ILP,
    in that they are based on positive and negative samples.
    And the statistics can involve bilinear forms, similar to
    what is seen in the "Attention Is All You Need" paper.
    But I do not yet have a good implementation of this
    envisioned marriage of autoencoders and ILP, and
    I am still researching the topic.

    --- Synchronet 3.20c-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Wed Mar 19 20:58:47 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    I first wanted to use a working title:

    "new frontiers in logic programming"

    But upon reflection, and because of fElon,
    here is another idea for a working title:

    "neuro infused logic programming" (NILP)

    What could it mean? Or does it have some
    alternative phrasing already?

    Try this paper:

    Compositional Neural Logic Programming
    Son N. Tran - 2021
    The combination of connectionist models for low-level
    information processing and logic programs for high-level
    decision making can offer improvements in inference
    efficiency and prediction performance.

    https://www.ijcai.org/proceedings/2021/421

    Browsing through the bibliography I find:

    [Cohen et al., 2017]
    Tensorlog: Deep learning meets probabilistic

    [Donadello et al., 2017]
    Logic tensor networks

    [Larochelle and Murray, 2011]
    The neural autoregressive distribution estimator

    [Manhaeve et al., 2018]
    Neural probabilistic logic programming

    [Mirza and Osindero, 2014]
    Conditional generative adversarial nets

    [Odena et al., 2017]
    auxiliary classifier GANs

    [Pierrot et al., 2019]
    compositional neural programs

    [Reed and de Freitas, 2016]
    Neural programmer-interpreters

    [Riveret et al., 2020]
    Neuro-Symbolic Probabilistic Argumentation Machines

    [Serafini and d’Avila Garcez, 2016]
    logic tensor networks.

    [Socher et al., 2013]
    neural tensor networks

    [Towell and Shavlik, 1994]
    Knowledge-based artificial neural networks

    [Tran and d’Avila Garcez, 2018]
    Deep logic networks

    [Wang et al., 2019]
    compositional neural information fusion


    --- Synchronet 3.20c-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Tue Mar 25 12:22:53 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    A software engineering analysis why Prolog fails
    ================================================

    You would also get more done if Prolog had some
    well-designed plug-and-play machine learning libraries.
    Currently most SWI-Prolog packages are just GitHub dumps:

    (Python) Problem ---> import solver ---> Solution

    (SWI) Problem ---> install pack ---> Problem

    Python shows more success in the practitioners' domain,
    since it has more libraries that have stood the test of
    time in practical use. Whereas Prolog is still in its
    infancy in many domains: you don't arrive at the same
    level of convenience and breadth as Python if you are
    only offered fire-and-forget dumps from PhD projects
    where software engineering is secondary.

    I don't know exactly why Prolog has so many problems
    with software engineering. Python has object orientation,
    but Logtalk didn't make the situation better. SWI-Prolog
    has modules, but they are never used. For example, this
    here is a big monolith:

    This module performs learning over Logic Programs
    https://github.com/friguzzi/liftcover/blob/main/prolog/liftcover.pl

    It's more designed towards providing some command line
    control. But if you look into it, it has EM algorithms,
    a gradient algorithm, and who knows what. These building
    blocks are not exposed, not made for reuse or for
    improvement by swapping in 3rd-party alternatives. Most
    likely a design flaw inside the pack mechanism itself,
    since it assumes a single main module?

    So the pack mechanism works if a unit pack imports a
    clp(BNR) pack, since it uses the single entry point of
    clp(BNR). But it is never on par with the richness of
    Python packages, which have more of a hierarchical
    structure of many modules in their packs.

    --- Synchronet 3.20c-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Thu Mar 27 11:42:22 2025
    From Newsgroup: comp.lang.prolog

    I have retracted those posts that had Python-first
    in them; I am not sure whether my analysis of some
    projects was watertight. I only made the Python example
    to illustrate the idea of a variation point. I do not
    think programming language trench wars are a good idea,
    and one should put software engineering first, as an
    abstract computer science discipline. Not doing so
    is only a distraction from the real issues at hand.
    Variation points were defined quite vaguely on purpose:

    Ivar Jacobson defines a variation point as follows:
    A variation point identifies one or more locations at
    which the variation will occur.

    Variation points can come in many shades; for example,
    ProbLog-based approaches take the viewpoint of a Prolog
    text with a lot of configuration flags and predicate
    annotations. This is quite different from the autoencoder
    or transformer component approach I suggested here. In
    particular, a component-oriented approach could be more
    flexible and dynamic, when it allows programmatic
    configuration of components. The drawback is that you
    cannot understand what the program does by looking at a
    simply structured Prolog text. Although I expect the
    situation is not that bad, and one could do something
    similar to a table/1 directive, i.e. some directive that
    says: look, this predicate is an autoencoder or transformer:

    One idea I had was that autoencoders would become
    kind of invisible, and work under the hood to compress
    Prolog facts. Take these facts:

    % standard _, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
    data(seg7, [0,0,0,0,0,0,0], [0,0,0,0,0,0,0]).

    So to instruct the Prolog system to do what is sketched,
    one would possibly need a new directive autoencoder/1:

    :- autoencoder data/3.
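
    A minimal sketch of how such a directive could at least be
    parsed and recorded, using the standard op/3, dynamic/1 and
    assertz/1; the training machinery behind the declaration is
    not shown, and the predicate names are hypothetical:

    :- op(1150, fx, autoencoder).   % lets  :- autoencoder data/3.  parse
    :- dynamic autoencoder_decl/1.

    % executing the directive just records which predicate
    % indicator should be backed by an autoencoder rather than
    % by plain clause retrieval
    autoencoder(PI) :-
        assertz(autoencoder_decl(PI)).

    A later term_expansion/2 or goal_expansion/2 hook could then
    reroute calls to the declared predicate into a trained
    encoder/decoder pair; that part is left open here.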

    --- Synchronet 3.20c-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Thu Mar 27 11:43:58 2025
    From Newsgroup: comp.lang.prolog


    But even with such a directive there are many
    challenges, which ProbLog also suffers from. Consider
    this transformer pipeline, with the component of
    type g occurring twice:

         +----+      +----+      +----+
         |    |----->|  g |----->|    |
         |    |      +----+      |    |
    x -->|  f |                  |  h |--> y
         |    |      +----+      |    |
         |    |----->|  g |----->|    |
         +----+      +----+      +----+

    With common subexpressions, i.e. computing
    f only once, I can write the forward pass
    as follows:

    p, q = f(x)
    y = h(g(p), g(q))

    But the above doesn't show the learnt parameters.
    Will g and g be siamese neural networks, learning
    one set of parameters, or will they learn
    two sets of parameters? See also:

    Siamese neural network
    https://en.wikipedia.org/wiki/Siamese_neural_network
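
    Here is a toy Prolog rendering of that question, a sketch
    only: the parameters are plain numbers, the apply_* predicates
    are made-up stand-ins for real layers, and the only point is
    whether the two g applications read one shared parameter term
    or two separate ones.

    % hypothetical learnt parameters, here just numbers
    theta(f, 2).
    theta(g, 3).
    theta(g1, 3).
    theta(g2, 7).
    theta(h, 5).

    % made-up stand-ins for the actual components
    apply_f(Tf, X, P, Q) :- P is X * Tf, Q is X + Tf.
    apply_g(Tg, In, Out) :- Out is In * Tg.
    apply_h(Th, A, B, Y) :- Y is A + B + Th.

    % siamese variant: g occurs twice, both calls share theta(g, _)
    forward_shared(X, Y) :-
        theta(f, Tf), apply_f(Tf, X, P, Q),
        theta(g, Tg),
        apply_g(Tg, P, P1),
        apply_g(Tg, Q, Q1),
        theta(h, Th), apply_h(Th, P1, Q1, Y).

    % non-siamese variant: two separate parameter sets g1 and g2
    forward_separate(X, Y) :-
        theta(f, Tf), apply_f(Tf, X, P, Q),
        theta(g1, T1), apply_g(T1, P, P1),
        theta(g2, T2), apply_g(T2, Q, Q1),
        theta(h, Th), apply_h(Th, P1, Q1, Y).

    A probabilistic or neural logic programming language would
    need some surface syntax to say which of the two programs
    is meant.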

    If I am not mistaken, in ProbLog one can use
    variables to indicate probability annotations.
    An example of such a variable is seen here:

    % intensional probabilistic fact with flexible probability:
    P::pack(Item) :- weight(Item,Weight), P is 1.0/Weight.

    But one might need something either to create siamese
    components or to keep them separate, depending on what
    the default modus operandi of the probabilistic
    logic programming language is.

    --- Synchronet 3.20c-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Mon Jun 23 16:37:54 2025
    From Newsgroup: comp.lang.prolog

    Concerning library(portray_text), which is in limbo:

    Libraries are (often) written for either codes or chars,
    and thus the libraries make the choice.

    But who writes these libraries? The SWI-Prolog
    community. And who doesn't improve these libraries,
    but instead floods the web with workaround tips?
    The SWI-Prolog community.

    Conclusion: the SWI-Prolog community has trapped itself
    in an ancient status quo, creating an island. It cannot
    improve its own tooling, and is not willing to support
    code from elsewhere that uses chars.

    Same with the missed AI Boom.

    (*) Code from elsewhere is dangerous: people
    might use other Prolog systems than only SWI-Prolog,
    like for example Trealla Prolog and Scryer Prolog.

    (**) Keeping the status quo is comfy. No need to
    think in terms of program code. It's like biology
    teachers versus pathology staff: biology teachers
    do not see opened corpses every day.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Mon Jun 23 16:47:14 2025
    From Newsgroup: comp.lang.prolog


    Technically SWI-Prolog doesn't prefer codes.
    Library `library(pure_input)` might prefer codes.
    But this is again an issue of improving the
    library by some non-existent SWI-Prolog community.

    The ISO core standard is silent about a flag
    back_quotes, but has a lot of API requirements
    that support both codes and chars, for example it
    requires atom_codes/2 and atom_chars/2.
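
    For example, a short session (SWI-Prolog toplevel style,
    but only standard built-ins are involved):

    ?- atom_codes(hello, Cs).
    Cs = [104, 101, 108, 108, 111].

    ?- atom_chars(hello, Cs).
    Cs = [h, e, l, l, o].

    ?- char_code(C, 0'h).
    C = h.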

    Implementation-wise there can be an issue:
    one might decide to implement atoms
    of length 1 more efficiently, since with Unicode
    there is now an explosion of them.

    Not sure whether Trealla Prolog and Scryer
    Prolog thought about this problem, that the
    atom table gets quite large. Whereas codes don't
    eat into the atom table. Maybe they forbid predicates
    that have an atom of length 1 in the head:

    h(X) :-
        write('Hello '), write(X), write('!'), nl.

    Does this still work?

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Mon Jun 23 17:03:38 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    The most radical approach is Novacore from
    Dogelog Player. It consists of the following
    major incisions in the ISO core standard:

    - We do not forbid chars, like for example
    using lists of the form [a,b,c]; we also
    provide the char_code/2 predicate bidirectionally.

    - We do not provide any _chars built-in
    predicates, and there is nothing _strings either. The
    Prolog system is clever enough not to put
    every atom it sees into an atom table. There
    is only a predicate table.

    - Some host languages have garbage collection that
    deduplicates Strings. For example some Java
    versions have an option to do that. But we
    do not make any effort to deduplicate atoms,
    which are simply plain strings.

    - Some languages have constant pools. For example
    the Java byte code format includes a constant
    pool in every class header. We do not do that
    during transpilation, but we could of course.
    But it begs the question, why only deduplicate
    strings and not other constant expressions as well?

    - We are totally happy that we have only codes;
    there are chances that the host languages use
    tagged pointers to represent them. So they
    are represented similarly to the tagged pointers
    in SWI-Prolog, which work for small integers.

    - But the tagged pointer argument is moot,
    since length-1 atoms can also be
    represented as tagged pointers, and some
    programming languages do that. Dogelog Player
    would use such tagged pointers without
    polluting the atom table.

    - What else?

    Bye

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Mon Jun 23 18:43:16 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Even the SWI-Prolog master is not wide awake,
    doing some day-sleeping.

    I don't know whether they realised that you
    cannot meaningfully support both codes and chars in the
    same system, and surely not in the same application.

    Maybe you didn't notice this nifty detail.
    That's all you need:

    The ISO core standard is silent about a flag back_quotes

    It's more a naming problem: have two libraries,
    library(portray_codes) and library(portray_chars),
    or one library(portray_text).

    Just add one more rule:

    user:portray(Chars) :-
        portray_text_option(enabled, true),
        '$skip_list'(Length, Chars, _Tail),
        portray_text_option(min_length, MinLen),
        Length >= MinLen,
        mostly_chars(Chars, 0.9),
        portray_text_option(ellipsis, IfLonger),
        quote2(C),
        put_code(C),
        maplist(char_code, Chars, Codes),
        (   Length > IfLonger
        ->  First is IfLonger - 5,
            Skip is Length - 5,
            skip_first(Skip, Codes, Rest),
            put_n_codes(First, Codes, C),
            format('...', [])
        ;   Rest = Codes
        ),
        put_var_codes(Rest, C),
        put_code(C).

    The use of maplist/3 is elegant, and works since we do
    not print open lists, right?

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Mon Jun 23 18:44:49 2025
    From Newsgroup: comp.lang.prolog


    Since it has a double hook, it works fine in both modes
    simultaneously:

    ?- set_portray_text(enabled, false).
    true.

    ?- X = [a,b,c].
    X = [a, b, c].

    ?- X = [0'a,0'b,0'c].
    X = [97, 98, 99].

    And then:

    ?- set_prolog_flag(double_quotes, codes).
    true.

    ?- set_prolog_flag(back_quotes, chars).
    true.

    ?- set_portray_text(enabled, true).
    true.

    ?- X = [a,b,c].
    X = `abc`.

    ?- X = [0'a,0'b,0'c].
    X = "abc".

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Mon Jun 23 19:01:48 2025
    From Newsgroup: comp.lang.prolog

    Full source code here:

    swi2.pl.log https://github.com/SWI-Prolog/swipl-devel/issues/1373#issuecomment-2997214639

    Since it has a dual-use hook, it works fine in both modes
    simultaneously:

    ?- set_portray_text(enabled, false).
    true.

    ?- X = [a,b,c].
    X = [a, b, c].

    ?- X = [0'a,0'b,0'c].
    X = [97, 98, 99].

    And then:

    ?- set_prolog_flag(double_quotes, codes).
    true.

    ?- set_prolog_flag(back_quotes, chars).
    true.

    ?- set_portray_text(enabled, true).
    true.

    ?- X = [a,b,c].
    X = `abc`.

    ?- X = [0'a,0'b,0'c].
    X = "abc".

    Mild Shock schrieb:
    Hi,

    Even the SWI-Prolog master not wide awake,
    doing day-sleeping.

    I don’t know whether they realised that you
    cannot meaningfully support both in the same
    system and surely not in the same application.

    Maybe you didn’t notice this nifty detail.
    Thats all you need:

    The ISO core standard is silent about a flag back_quotes

    Its more a naming problem. Have two libraries
    library(portray_codes) and library(portray_chars),
    Or one library(portray_text).

    Just add one more rule:

    user:portray(Chars) :-
        portray_text_option(enabled, true),
        '$skip_list'(Length, Chars, _Tail),
        portray_text_option(min_length, MinLen),
        Length >= MinLen,
        mostly_chars(Chars, 0.9),
        portray_text_option(ellipsis, IfLonger),
        quote2(C),
        put_code(C),
        maplist(char_code, Chars, Codes),
        (   Length > IfLonger
        ->  First is IfLonger - 5,
            Skip is Length - 5,
            skip_first(Skip, Codes, Rest),
            put_n_codes(First, Codes, C),
            format('...', [])
        ;   Rest = Codes
        ),
        put_var_codes(Rest, C),
        put_code(C).

    The use of maplist/3 is elegant, and works since we do
    not print open lists, right?

    Mild Shock schrieb:
    Hi,

    The most radical approach is Novacore from
    Dogelog Player. It consists of the following
    major incisions in the ISO core standard:

    - We do not forbid chars, like for example
       using lists of the form [a,b,c], we also
       provide char_code/2 predicate bidirectionally.

    - We do not provide and _chars built-in
       predicates also there is nothing _strings. The
       Prolog system is clever enough to not put
       every atom it sees in an atom table. There
       is only a predicate table.

    - Some host languages have garbage collection that
       deduplicates Strings. For example some Java
       versions have an options to do that. But we
       do not have any efforts to deduplicate atoms,
       which are simply plain strings.

    - Some languages have constant pools. For example
       the Java byte code format includes a constant
       pool in every class header. We do not do that
       during transpilation , but we could of course.
       But it begs the question, why only deduplicate
       strings and not other constant expressions as well?

    - We are totally happy that we have only codes,
       there are chances that the host languages use
       tagged pointers to represent them. So they
       are represented similar to the tagged pointers
       in SWI-Prolog which works for small integers.

    - But the tagged pointer argument is moot,
       since atom length=1 entities can be also
       represented as tagged pointers, and some
       programming languages do that. Dogelog Player
       would use such tagged pointers without
       poluting the atom table.

    - What else?

    Bye

    Mild Shock schrieb:

    Technically SWI-Prolog doesn't prefer codes.
    Library `library(pure_input)` might prefer codes.
    But this is again an issue of improving the
    library by some non existent SWI-Prolog community.

    The ISO core standard is silent about a flag
    back_quotes, but has a lot of API requirements
    that support both codes and chars, for example it
    requires atom_codes/2 and atom_chars/2.

    Implementation wise there can be an issue,
    like one might decide to implement the atoms
    of length=1 more efficiently, since with Unicode
    there is now an explosion.

    Not sure whether Trealla Prolog and Scryer
    Prolog thought about this problem, that the
    atom table gets quite large. Whereas codes don't
    eat the atom table. Maybe they forbit predicates

    that have an atom of length=1 head:

    h(X) :-
         write('Hello '), write(X), write('!'), nl.

    Does this still work?

    Mild Shock schrieb:
    Concerning library(portray_text) which is in limbo:

    Libraries are (often) written for either
    and thus the libraries make the choice.

    But who writes these libraries? The SWI Prolog
    community. And who doesn’t improve these libraries,
    instead floods the web with workaround tips?
    The SWI Prolog community.

    Conclusion the SWI-Prolog community has itself
    trapped in an ancient status quo, creating an island.
    Cannot improve its own tooling, is not willing
    to support code from else where that uses chars.

    Same with the missed AI Boom.

    (*) Code from elsewhere is dangerous, People
    might use other Prolog systems than only SWI-Prolog,
    like for exampe Trealla Prolog and Scryer Prolog.

    (**) Keeping the status quo is comfy. No need to
    think in terms of programm code. Its like biology
    teachers versus pathology staff, biology teachers
    do not everyday see opened corpses.


    Mild Shock schrieb:

    Inductive logic programming at 30
    https://arxiv.org/abs/2102.10556

    The paper contains not a single reference to autoencoders!
    Still they show this example:

    Fig. 1 ILP systems struggle with structured examples that
    exhibit observational noise. All three examples clearly
    spell the word "ILP", with some alterations: 3 noisy pixels,
    shifted and elongated letters. If we would be to learn a
    program that simply draws "ILP" in the middle of the picture,
    without noisy pixels and elongated letters, that would
    be a correct program.

    I guess ILP is 30 years behind the AI boom. An early autoencoder
    turned into transformer was already reported here (*):

    SERIAL ORDER, Michael I. Jordan - May 1986
    https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf

    Well ILP might have its merits, maybe we should not ask
    for a marriage of LLM and Prolog, but Autoencoders and ILP.
    But its tricky, I am still trying to decode the da Vinci code of

    things like stacked tensors, are they related to k-literal clauses?
    The paper I referenced is found in this excellent video:

    The Making of ChatGPT (35 Year History)
    https://www.youtube.com/watch?v=OFS90-FX6pg






    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Mon Jun 23 19:17:33 2025
    From Newsgroup: comp.lang.prolog

    Using my super-powered library(portray_text) again:

    ?- set_prolog_flag(double_quotes, codes).
    true.

    ?- set_prolog_flag(back_quotes, chars).
    true.

    ?- set_portray_text(enabled, true).
    true.

    ?- maplist(char_code, `abc`, X).
    X = "abc".

    ?- maplist(char_code, X, "abc").
    X = `abc`.

    So if you have a Prolog system that has chars, you
    could bootstrap as follows:

    atom_codes(X, Y) :-
        var(X), !,
        maplist(char_code, Z, Y),   % codes -> chars
        atom_chars(X, Z).
    atom_codes(X, Y) :-
        atom_chars(X, Z),
        maplist(char_code, Z, Y).   % chars -> codes

    Or if you have a Prolog system that has codes, you
    could bootstrap as follows:

    atom_chars(X, Y) :-
        var(X), !,
        maplist(char_code, Y, Z),   % chars -> codes
        atom_codes(X, Z).
    atom_chars(X, Y) :-
        atom_codes(X, Z),
        maplist(char_code, Y, Z).   % chars -> codes
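
    A quick sanity check that the two directions agree (shown here with
    portraying disabled, so the plain lists stay visible):

    ?- atom_codes(abc, Cs), atom_chars(abc, Chs), maplist(char_code, Chs, Cs).
    Cs = [97, 98, 99],
    Chs = [a, b, c].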

    Mild Shock schrieb:
    Full source code here:

    swi2.pl.log https://github.com/SWI-Prolog/swipl-devel/issues/1373#issuecomment-2997214639


    Since it has a dual use hook, works fine simultaneously:

    ?- set_portray_text(enabled, false).
    true.

    ?- X = [a,b,c].
    X = [a, b, c].

    ?- X = [0'a,0'b,0'c].
    X = [97, 98, 99].

    And then:

    ?- set_prolog_flag(double_quotes, codes).
    true.

    ?- set_prolog_flag(back_quotes, chars).
    true.

    ?- set_portray_text(enabled, true).
    true.

    ?- X = [a,b,c].
    X = `abc`.

    ?- X = [0'a,0'b,0'c].
    X = "abc".

    Mild Shock schrieb:
    Hi,

    Even the SWI-Prolog master not wide awake,
    doing day-sleeping.

    I don’t know whether they realised that you
    cannot meaningfully support both in the same
    system and surely not in the same application.

    Maybe you didn’t notice this nifty detail.
    Thats all you need:

    The ISO core standard is silent about a flag back_quotes

    Its more a naming problem. Have two libraries
    library(portray_codes) and library(portray_chars),
    Or one library(portray_text).

    Just add one more rule:

    user:portray(Chars) :-
         portray_text_option(enabled, true),
         '$skip_list'(Length, Chars, _Tail),
         portray_text_option(min_length, MinLen),
         Length >= MinLen,
         mostly_chars(Chars, 0.9),
         portray_text_option(ellipsis, IfLonger),
         quote2(C),
         put_code(C),
         maplist(char_code, Chars, Codes),
         (   Length > IfLonger
         ->  First is IfLonger - 5,
             Skip is Length - 5,
             skip_first(Skip, Codes, Rest),
             put_n_codes(First, Codes, C),
             format('...', [])
         ;   Rest = Codes
         ),
         put_var_codes(Rest, C),
         put_code(C).

    The use of maplist/3 is elegant, and works since we do
    not print open lists, right?

    Mild Shock schrieb:
    Hi,

    The most radical approach is Novacore from
    Dogelog Player. It consists of the following
    major incisions in the ISO core standard:

    - We do not forbid chars, like for example
       using lists of the form [a,b,c], we also
       provide char_code/2 predicate bidirectionally.

    - We do not provide and _chars built-in
       predicates also there is nothing _strings. The
       Prolog system is clever enough to not put
       every atom it sees in an atom table. There
       is only a predicate table.

    - Some host languages have garbage collection that
       deduplicates Strings. For example some Java
       versions have an options to do that. But we
       do not have any efforts to deduplicate atoms,
       which are simply plain strings.

    - Some languages have constant pools. For example
       the Java byte code format includes a constant
       pool in every class header. We do not do that
       during transpilation , but we could of course.
       But it begs the question, why only deduplicate
       strings and not other constant expressions as well?

    - We are totally happy that we have only codes,
       there are chances that the host languages use
       tagged pointers to represent them. So they
       are represented similar to the tagged pointers
       in SWI-Prolog which works for small integers.

    - But the tagged pointer argument is moot,
       since atom length=1 entities can be also
       represented as tagged pointers, and some
       programming languages do that. Dogelog Player
       would use such tagged pointers without
       poluting the atom table.

    - What else?

    Bye

    Mild Shock schrieb:

    Technically SWI-Prolog doesn't prefer codes.
    Library `library(pure_input)` might prefer codes.
    But this is again an issue of improving the
    library by some non existent SWI-Prolog community.

    The ISO core standard is silent about a flag
    back_quotes, but has a lot of API requirements
    that support both codes and chars, for example it
    requires atom_codes/2 and atom_chars/2.

    Implementation wise there can be an issue,
    like one might decide to implement the atoms
    of length=1 more efficiently, since with Unicode
    there is now an explosion.

    Not sure whether Trealla Prolog and Scryer
    Prolog thought about this problem, that the
    atom table gets quite large. Whereas codes don't
    eat the atom table. Maybe they forbit predicates

    that have an atom of length=1 head:

    h(X) :-
         write('Hello '), write(X), write('!'), nl.

    Does this still work?

    Mild Shock schrieb:
    Concerning library(portray_text) which is in limbo:

    Libraries are (often) written for either
    and thus the libraries make the choice.

    But who writes these libraries? The SWI Prolog
    community. And who doesn’t improve these libraries,
    instead floods the web with workaround tips?
    The SWI Prolog community.

    Conclusion the SWI-Prolog community has itself
    trapped in an ancient status quo, creating an island.
    Cannot improve its own tooling, is not willing
    to support code from else where that uses chars.

    Same with the missed AI Boom.

    (*) Code from elsewhere is dangerous, People
    might use other Prolog systems than only SWI-Prolog,
    like for exampe Trealla Prolog and Scryer Prolog.

    (**) Keeping the status quo is comfy. No need to
    think in terms of programm code. Its like biology
    teachers versus pathology staff, biology teachers
    do not everyday see opened corpses.


    Mild Shock schrieb:

    Inductive logic programming at 30
    https://arxiv.org/abs/2102.10556

    The paper contains not a single reference to autoencoders!
    Still they show this example:

    Fig. 1 ILP systems struggle with structured examples that
    exhibit observational noise. All three examples clearly
    spell the word "ILP", with some alterations: 3 noisy pixels,
    shifted and elongated letters. If we would be to learn a
    program that simply draws "ILP" in the middle of the picture,
    without noisy pixels and elongated letters, that would
    be a correct program.

    I guess ILP is 30 years behind the AI boom. An early autoencoder
    turned into transformer was already reported here (*):

    SERIAL ORDER, Michael I. Jordan - May 1986
    https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf


    Well ILP might have its merits, maybe we should not ask
    for a marriage of LLM and Prolog, but Autoencoders and ILP.
    But its tricky, I am still trying to decode the da Vinci code of

    things like stacked tensors, are they related to k-literal clauses?
    The paper I referenced is found in this excellent video:

    The Making of ChatGPT (35 Year History)
    https://www.youtube.com/watch?v=OFS90-FX6pg







    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Mon Jun 23 19:31:58 2025
    From Newsgroup: comp.lang.prolog

    So the SWI-Prolog master wrote:

    I wouldn’t call it an “ancient status”.

    It is an ancient status quo because it is not
    Unicode-ready. That is, it doesn't account for the
    new universal atoms and strings in SWI-Prolog.

    Historical note: universal strings marked the
    transition from Python 2.x to Python 3.x.

    - we might be able to use the current locale
    to include the appropriate code page.
    (Does that really make sense?)

    https://www.swi-prolog.org/pldoc/doc_for?object=is_text_code/1

    But anyway, I have retracted my swi2.pl.log
    from GitHub and blocked Jan W. for the first
    time in my life. I have really lost all hope
    concerning SWI-Prolog and have given up

    once and for ever...

    Mild Shock schrieb:
    Using again my super powered library(portray_text):

    ?- set_prolog_flag(double_quotes, codes).
    true.

    ?- set_prolog_flag(back_quotes, chars).
    true.

    ?- set_portray_text(enabled, true).
    true.

    ?- maplist(char_code, `abc`, X).
    X = "abc".

    ?- maplist(char_code, X, "abc").
    X = `abc`.

    So if you have a Prolog system that has chars, you
    could bootstrap as follows:

    atom_codes(X, Y) :-
      var(X), !,
      atom_chars(Z, Y),
      maplist(char_code, X, Z).
    atom_codes(X, Y) :-
      atom_chars(X, Z),
      maplist(char_code, Z, Y).

    Or if you have a Prolog system that has codes, you
    could bootstrap as follows:

    atom_chars(X, Y) :-
      var(X), !,
      atom_codes(Z, Y),
      maplist(char_code, Z, X).
    atom_chars(X, Y) :-
      atom_codes(X, Z),
      maplist(char_code, Y, Z).

    Mild Shock schrieb:
    Full source code here:

    swi2.pl.log
    https://github.com/SWI-Prolog/swipl-devel/issues/1373#issuecomment-2997214639


    Since it has a dual use hook, works fine simultaneously:

    ?- set_portray_text(enabled, false).
    true.

    ?- X = [a,b,c].
    X = [a, b, c].

    ?- X = [0'a,0'b,0'c].
    X = [97, 98, 99].

    And then:

    ?- set_prolog_flag(double_quotes, codes).
    true.

    ?- set_prolog_flag(back_quotes, chars).
    true.

    ?- set_portray_text(enabled, true).
    true.

    ?- X = [a,b,c].
    X = `abc`.

    ?- X = [0'a,0'b,0'c].
    X = "abc".

    Mild Shock schrieb:
    Hi,

    Even the SWI-Prolog master not wide awake,
    doing day-sleeping.

    I don’t know whether they realised that you
    cannot meaningfully support both in the same
    system and surely not in the same application.

    Maybe you didn’t notice this nifty detail.
    Thats all you need:

    The ISO core standard is silent about a flag back_quotes

    Its more a naming problem. Have two libraries
    library(portray_codes) and library(portray_chars),
    Or one library(portray_text).

    Just add one more rule:

    user:portray(Chars) :-
         portray_text_option(enabled, true),
         '$skip_list'(Length, Chars, _Tail),
         portray_text_option(min_length, MinLen),
         Length >= MinLen,
         mostly_chars(Chars, 0.9),
         portray_text_option(ellipsis, IfLonger),
         quote2(C),
         put_code(C),
         maplist(char_code, Chars, Codes),
         (   Length > IfLonger
         ->  First is IfLonger - 5,
             Skip is Length - 5,
             skip_first(Skip, Codes, Rest),
             put_n_codes(First, Codes, C),
             format('...', [])
         ;   Rest = Codes
         ),
         put_var_codes(Rest, C),
         put_code(C).

    The use of maplist/3 is elegant, and works since we do
    not print open lists, right?

    Mild Shock schrieb:
    Hi,

    The most radical approach is Novacore from
    Dogelog Player. It consists of the following
    major incisions in the ISO core standard:

    - We do not forbid chars, like for example
       using lists of the form [a,b,c], we also
       provide char_code/2 predicate bidirectionally.

    - We do not provide and _chars built-in
       predicates also there is nothing _strings. The
       Prolog system is clever enough to not put
       every atom it sees in an atom table. There
       is only a predicate table.

    - Some host languages have garbage collection that
       deduplicates Strings. For example some Java
       versions have an options to do that. But we
       do not have any efforts to deduplicate atoms,
       which are simply plain strings.

    - Some languages have constant pools. For example
       the Java byte code format includes a constant
       pool in every class header. We do not do that
       during transpilation , but we could of course.
       But it begs the question, why only deduplicate
       strings and not other constant expressions as well?

    - We are totally happy that we have only codes,
       there are chances that the host languages use
       tagged pointers to represent them. So they
       are represented similar to the tagged pointers
       in SWI-Prolog which works for small integers.

    - But the tagged pointer argument is moot,
       since atom length=1 entities can be also
       represented as tagged pointers, and some
       programming languages do that. Dogelog Player
       would use such tagged pointers without
       poluting the atom table.

    - What else?

    Bye

    Mild Shock schrieb:

    Technically SWI-Prolog doesn't prefer codes.
    Library `library(pure_input)` might prefer codes.
    But this is again an issue of improving the
    library by some non existent SWI-Prolog community.

    The ISO core standard is silent about a flag
    back_quotes, but has a lot of API requirements
    that support both codes and chars, for example it
    requires atom_codes/2 and atom_chars/2.

    Implementation wise there can be an issue,
    like one might decide to implement the atoms
    of length=1 more efficiently, since with Unicode
    there is now an explosion.

    Not sure whether Trealla Prolog and Scryer
    Prolog thought about this problem, that the
    atom table gets quite large. Whereas codes don't
    eat the atom table. Maybe they forbit predicates

    that have an atom of length=1 head:

    h(X) :-
         write('Hello '), write(X), write('!'), nl.

    Does this still work?

    Mild Shock schrieb:
    Concerning library(portray_text) which is in limbo:

    Libraries are (often) written for either
    and thus the libraries make the choice.

    But who writes these libraries? The SWI Prolog
    community. And who doesn’t improve these libraries,
    instead floods the web with workaround tips?
    The SWI Prolog community.

    Conclusion the SWI-Prolog community has itself
    trapped in an ancient status quo, creating an island.
    Cannot improve its own tooling, is not willing
    to support code from else where that uses chars.

    Same with the missed AI Boom.

    (*) Code from elsewhere is dangerous, People
    might use other Prolog systems than only SWI-Prolog,
    like for exampe Trealla Prolog and Scryer Prolog.

    (**) Keeping the status quo is comfy. No need to
    think in terms of programm code. Its like biology
    teachers versus pathology staff, biology teachers
    do not everyday see opened corpses.


    Mild Shock schrieb:

    Inductive logic programming at 30
    https://arxiv.org/abs/2102.10556

    The paper contains not a single reference to autoencoders!
    Still they show this example:

    Fig. 1 ILP systems struggle with structured examples that
    exhibit observational noise. All three examples clearly
    spell the word "ILP", with some alterations: 3 noisy pixels,
    shifted and elongated letters. If we would be to learn a
    program that simply draws "ILP" in the middle of the picture,
    without noisy pixels and elongated letters, that would
    be a correct program.

    I guess ILP is 30 years behind the AI boom. An early autoencoder
    turned into transformer was already reported here (*):

    SERIAL ORDER, Michael I. Jordan - May 1986
    https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf


    Well ILP might have its merits, maybe we should not ask
    for a marriage of LLM and Prolog, but Autoencoders and ILP.
    But its tricky, I am still trying to decode the da Vinci code of
    things like stacked tensors, are they related to k-literal clauses?
    The paper I referenced is found in this excellent video:

    The Making of ChatGPT (35 Year History)
    https://www.youtube.com/watch?v=OFS90-FX6pg








    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Mon Jun 23 20:33:07 2025
    From Newsgroup: comp.lang.prolog

    What is holy is only for Dogelog Player!

    Do not give dogs what is holy, and do not
    throw your pearls before pigs, lest they
    trample them underfoot and turn to attack you.
    -- Matthew 7:6
    https://www.biblegateway.com/passage/?search=Matthew%207%3A6

    I have deleted my posts and the swi2.pl.log proposal:

    between(C, 0'0, 0'9), Digit is C-0'0.

    Just rewrite it to:

    0'0 =< [Digit], [Digit] =< 0'9, [Digit] is C-0'0.

    The [X] in an evaluation is dual use again:

    ?- X is [a].
    X = 97.

    ?- X is [0'a].
    X = 97.


    Mild Shock schrieb:

    Inductive logic programming at 30
    https://arxiv.org/abs/2102.10556

    The paper contains not a single reference to autoencoders!
    Still they show this example:

    Fig. 1 ILP systems struggle with structured examples that
    exhibit observational noise. All three examples clearly
    spell the word "ILP", with some alterations: 3 noisy pixels,
    shifted and elongated letters. If we would be to learn a
    program that simply draws "ILP" in the middle of the picture,
    without noisy pixels and elongated letters, that would
    be a correct program.

    I guess ILP is 30 years behind the AI boom. An early autoencoder
    turned into transformer was already reported here (*):

    SERIAL ORDER, Michael I. Jordan - May 1986 https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf

    Well ILP might have its merits, maybe we should not ask
    for a marriage of LLM and Prolog, but Autoencoders and ILP.
    But its tricky, I am still trying to decode the da Vinci code of

    things like stacked tensors, are they related to k-literal clauses?
    The paper I referenced is found in this excellent video:

    The Making of ChatGPT (35 Year History) https://www.youtube.com/watch?v=OFS90-FX6pg


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Mon Jun 23 20:38:51 2025
    From Newsgroup: comp.lang.prolog

    Oops, it should read:

    0'0 =< [C], [C] =< 0'9, Digit is [C]-0'0.

    Mild Shock schrieb:
    What is holy is only for Dogelog Player!

    Do not give dogs what is holy, and do not
    throw your pearls before pigs, lest they
    trample them underfoot and turn to attack you.
    -- Matthew 7:6
    https://www.biblegateway.com/passage/?search=Matthew%207%3A6

    I have deleted my posts and the swi2.pl.log proposal:

    between(C, 0'0, 0'9), Digit is C-0'0.`

    Just rewrite it to:

    0'0 =< [Digit], [Digit] =< 0'9, [Digit] is C-0'0`.

    The [X] in an evaluation is dual use again:

    ?- X is [a].
    X = 97.

    ?- X is [0'a].
    X = 97.


    Mild Shock schrieb:

    Inductive logic programming at 30
    https://arxiv.org/abs/2102.10556

    The paper contains not a single reference to autoencoders!
    Still they show this example:

    Fig. 1 ILP systems struggle with structured examples that
    exhibit observational noise. All three examples clearly
    spell the word "ILP", with some alterations: 3 noisy pixels,
    shifted and elongated letters. If we would be to learn a
    program that simply draws "ILP" in the middle of the picture,
    without noisy pixels and elongated letters, that would
    be a correct program.

    I guess ILP is 30 years behind the AI boom. An early autoencoder
    turned into transformer was already reported here (*):

    SERIAL ORDER, Michael I. Jordan - May 1986
    https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf

    Well ILP might have its merits, maybe we should not ask
    for a marriage of LLM and Prolog, but Autoencoders and ILP.
    But its tricky, I am still trying to decode the da Vinci code of

    things like stacked tensors, are they related to k-literal clauses?
    The paper I referenced is found in this excellent video:

    The Making of ChatGPT (35 Year History)
    https://www.youtube.com/watch?v=OFS90-FX6pg



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Mon Jun 23 21:16:47 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    What could WG17 do to prevent segregation?
    It could specify:

    - The back_quotes flag. Not really something
    new; most Prolog systems have it already.

    - The [X] evaluable function. Not really something
    new; most Prolog systems have it already. For
    example, DEC-10 Prolog (10 November 1982) had it
    already. The new thing for some Prolog systems
    would be its non-strict evaluation strategy
    and the dual use (a small emulation sketch follows
    after the quoted manual text below):

    [X] (a list of just one element) evaluates to X if X is an
    integer. Since a quoted string is just a list of integers,
    this allows a quoted character to be used in place of its
    ASCII code; e.g. "A" behaves within arithmetic expressions
    as the integer 65.

    https://userweb.fct.unl.pt/~lmp/publications/online-papers/DECsystem-10%20PROLOG%20USER%27S%20MANUAL.pdf
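
    Here is the promised sketch of how the dual use could be emulated on
    top of an ISO system that lacks the evaluable function; eval1/2 and
    digit_value/2 are hypothetical names of my own, not from any
    existing system:

    % eval1(+ListOfOne, -Code): [0'a] and [a] both evaluate to 97.
    eval1([X], C) :-
        (   integer(X) -> C = X
        ;   atom(X), atom_length(X, 1) -> char_code(X, C)
        ).

    % The digit test from the earlier post, written with the emulation:
    digit_value(C, Digit) :-
        eval1([C], V),
        0'0 =< V, V =< 0'9,
        Digit is V - 0'0.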

    Instead what is WG17 doing?

    - Introducing a notation for open strings:

    [a, b, c|X] = "abc" || X

    With a new separator ||, this would possibly give
    Prolog system implementors much more headache than
    a flag and an evaluable function would.

    Bye

    Mild Shock schrieb:
    Oops should read:

    0'0 =< [C], [C] =< 0'9, Digit is [C]-0'0`.

    Mild Shock schrieb:
    What is holy is only for Dogelog Player!

    Do not give dogs what is holy, and do not
    throw your pearls before pigs, lest they
    trample them underfoot and turn to attack you.
    -- Matthew 7:6
    https://www.biblegateway.com/passage/?search=Matthew%207%3A6

    I have deleted my posts and the swi2.pl.log proposal:

    between(C, 0'0, 0'9), Digit is C-0'0.`

    Just rewrite it to:

    0'0 =< [Digit], [Digit] =< 0'9, [Digit] is C-0'0`.

    The [X] in an evaluation is dual use again:

    ?- X is [a].
    X = 97.

    ?- X is [0'a].
    X = 97.


    Mild Shock schrieb:

    Inductive logic programming at 30
    https://arxiv.org/abs/2102.10556

    The paper contains not a single reference to autoencoders!
    Still they show this example:

    Fig. 1 ILP systems struggle with structured examples that
    exhibit observational noise. All three examples clearly
    spell the word "ILP", with some alterations: 3 noisy pixels,
    shifted and elongated letters. If we would be to learn a
    program that simply draws "ILP" in the middle of the picture,
    without noisy pixels and elongated letters, that would
    be a correct program.

    I guess ILP is 30 years behind the AI boom. An early autoencoder
    turned into transformer was already reported here (*):

    SERIAL ORDER, Michael I. Jordan - May 1986
    https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf
    Well ILP might have its merits, maybe we should not ask
    for a marriage of LLM and Prolog, but Autoencoders and ILP.
    But its tricky, I am still trying to decode the da Vinci code of

    things like stacked tensors, are they related to k-literal clauses?
    The paper I referenced is found in this excellent video:

    The Making of ChatGPT (35 Year History)
    https://www.youtube.com/watch?v=OFS90-FX6pg




    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Mon Jun 23 22:19:23 2025
    From Newsgroup: comp.lang.prolog


    I don’t know; I would try to get out of the
    cornering that Scryer Prolog and Trealla Prolog
    try to do with cheap tricks like this one,
    which library(portray_text) will probably

    /* Scryer Prolog 0.9.4-411 */
    ?- "٢١٠" = [H|T]. /* [0x0662, 0x0661, 0x0660] */
    H = '٢', T = "١٠".

    never attempt for codes, but might easily do
    for chars. It has currently only implemented:

    /* SWI-Prolog 9.3.24 */
    text_code(Code) :-
        is_text_code(Code),
        !.
    text_code(9).      % horizontal tab, \t
    text_code(10).     % newline \n
    text_code(13).     % carriage return \r
    text_code(C) :-    % space to tilde (127 is DEL)
        between(32, 126, C).

    And a greater range might really start getting in the
    way when working with lists that carry numbers.
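
    To illustrate the concern with a hypothetical example (this is not
    current behaviour): if the printable range were simply widened, a
    list that is meant as plain numbers would suddenly be portrayed
    as text:

    ?- X = [945, 946, 947].   % three Greek letter codes, meant as numbers
    X = "αβγ".                % instead of X = [945, 946, 947]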

    My guess is that SWI-Prolog could position itself as dual use.

    Mild Shock schrieb:
    Hi,

    What WG17 could do to prevent segregation.
    It could specify:

    - The back_quotes flag. Not really something
      new , most Prolog systems have it already.

    - The [X] evaluable function. Not really something
      new , most Prolog systems have it already. For
      example DEC-10 Prolog (10 November 1982) had it
      already, The new thing for some Prolog systems
      would be its non-strict evaluation strategy
      and the dual use:

     [X] (a list of just one element) evaluates to X if X is an
     integer. Since a quoted string is just a list of integers,
     this allows a quoted character to be used in place of its
     ASCII code; e.g. "A" behaves within arithmetic expressions
     as the integer 65.

    https://userweb.fct.unl.pt/~lmp/publications/online-papers/DECsystem-10%20PROLOG%20USER%27S%20MANUAL.pdf


    Instead what is WG17 doing?

    - Introducing a notation for open strings:

       [a, b, c|X] = "abc" || X

      With a new separator ||, giving possibly much more
      headache to Prolog system implementors than a flag
      and an evaluable function.

    Bye

    Mild Shock schrieb:
    Oops should read:

    0'0 =< [C], [C] =< 0'9, Digit is [C]-0'0`.

    Mild Shock schrieb:
    What is holy is only for Dogelog Player!

    Do not give dogs what is holy, and do not
    throw your pearls before pigs, lest they
    trample them underfoot and turn to attack you.
    -- Matthew 7:6
    https://www.biblegateway.com/passage/?search=Matthew%207%3A6

    I have deleted my posts and the swi2.pl.log proposal:

    between(C, 0'0, 0'9), Digit is C-0'0.`

    Just rewrite it to:

    0'0 =< [Digit], [Digit] =< 0'9, [Digit] is C-0'0`.

    The [X] in an evaluation is dual use again:

    ?- X is [a].
    X = 97.

    ?- X is [0'a].
    X = 97.


    Mild Shock schrieb:

    Inductive logic programming at 30
    https://arxiv.org/abs/2102.10556

    The paper contains not a single reference to autoencoders!
    Still they show this example:

    Fig. 1 ILP systems struggle with structured examples that
    exhibit observational noise. All three examples clearly
    spell the word "ILP", with some alterations: 3 noisy pixels,
    shifted and elongated letters. If we would be to learn a
    program that simply draws "ILP" in the middle of the picture,
    without noisy pixels and elongated letters, that would
    be a correct program.

    I guess ILP is 30 years behind the AI boom. An early autoencoder
    turned into transformer was already reported here (*):

    SERIAL ORDER, Michael I. Jordan - May 1986
    https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf

    Well ILP might have its merits, maybe we should not ask
    for a marriage of LLM and Prolog, but Autoencoders and ILP.
    But its tricky, I am still trying to decode the da Vinci code of

    things like stacked tensors, are they related to k-literal clauses?
    The paper I referenced is found in this excellent video:

    The Making of ChatGPT (35 Year History)
    https://www.youtube.com/watch?v=OFS90-FX6pg





    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Mon Jun 23 22:20:36 2025
    From Newsgroup: comp.lang.prolog


    My guess is that SWI-Prolog could position itself as
    dual use. If it does not go dual use, then as a
    Prolog system that is supposed to lead the way

    in teaching, it might only add to the segregation
    and confusion among Prolog systems. Of course people
    like Markus Triska try to position themselves

    as teachers and messiahs of Prolog. Why tuck the tail?

    Mild Shock schrieb:

    I don’t know, I would try to get out of the
    cornering that Scryer Prolog and Trealla Prolog
    tries to do with cheap tricks like this here,
    which library(portray_text) will probably

    /* Scryer Prolog 0.9.4-411 */
    ?- "٢١٠" = [H|T]. /* [0x0662, 0x0661, 0x0660] */
       H = '٢', T = "١٠".

    never attempt for codes, but might easily do
    for chars. It has currently only implemented:

    /* SWI-Prolog 9.3.24 */
    text_code(Code) :-
        is_text_code(Code),
        !.
    text_code(9).      % horizontal tab, \t
    text_code(10).     % newline \n
    text_code(13).     % carriage return \r
    text_code(C) :-    % space to tilde (127 is DEL)
        between(32, 126, C).

    And a greater range might really start getting into the
    way in working with lists that carry numbers.

    My guess SWI-Prolog could position its self as dual use.

    Mild Shock schrieb:
    Hi,

    What WG17 could do to prevent segregation.
    It could specify:

    - The back_quotes flag. Not really something
       new , most Prolog systems have it already.

    - The [X] evaluable function. Not really something
       new , most Prolog systems have it already. For
       example DEC-10 Prolog (10 November 1982) had it
       already, The new thing for some Prolog systems
       would be its non-strict evaluation strategy
       and the dual use:

      [X] (a list of just one element) evaluates to X if X is an
      integer. Since a quoted string is just a list of integers,
      this allows a quoted character to be used in place of its
      ASCII code; e.g. "A" behaves within arithmetic expressions
      as the integer 65.

    https://userweb.fct.unl.pt/~lmp/publications/online-papers/DECsystem-10%20PROLOG%20USER%27S%20MANUAL.pdf


    Instead what is WG17 doing?

    - Introducing a notation for open strings:

        [a, b, c|X] = "abc" || X

       With a new separator ||, giving possibly much more
       headache to Prolog system implementors than a flag
       and an evaluable function.

    Bye

    Mild Shock schrieb:
    Oops should read:

    0'0 =< [C], [C] =< 0'9, Digit is [C]-0'0`.

    Mild Shock schrieb:
    What is holy is only for Dogelog Player!

    Do not give dogs what is holy, and do not
    throw your pearls before pigs, lest they
    trample them underfoot and turn to attack you.
    -- Matthew 7:6
    https://www.biblegateway.com/passage/?search=Matthew%207%3A6

    I have deleted my posts and the swi2.pl.log proposal:

    between(C, 0'0, 0'9), Digit is C-0'0.`

    Just rewrite it to:

    0'0 =< [Digit], [Digit] =< 0'9, [Digit] is C-0'0`.

    The [X] in an evaluation is dual use again:

    ?- X is [a].
    X = 97.

    ?- X is [0'a].
    X = 97.


    Mild Shock schrieb:

    Inductive logic programming at 30
    https://arxiv.org/abs/2102.10556

    The paper contains not a single reference to autoencoders!
    Still they show this example:

    Fig. 1 ILP systems struggle with structured examples that
    exhibit observational noise. All three examples clearly
    spell the word "ILP", with some alterations: 3 noisy pixels,
    shifted and elongated letters. If we would be to learn a
    program that simply draws "ILP" in the middle of the picture,
    without noisy pixels and elongated letters, that would
    be a correct program.

    I guess ILP is 30 years behind the AI boom. An early autoencoder
    turned into transformer was already reported here (*):

    SERIAL ORDER, Michael I. Jordan - May 1986
    https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf

    Well ILP might have its merits, maybe we should not ask
    for a marriage of LLM and Prolog, but Autoencoders and ILP.
    But its tricky, I am still trying to decode the da Vinci code of

    things like stacked tensors, are they related to k-literal clauses?
    The paper I referenced is found in this excellent video:

    The Making of ChatGPT (35 Year History)
    https://www.youtube.com/watch?v=OFS90-FX6pg






    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Jun 27 13:21:06 2025
    From Newsgroup: comp.lang.prolog

    The official replacement character is 0xFFFD:

    Replacement Character
    https://www.compart.com/de/unicode/U+FFFD

    Well, that is what people did in the past: replace
    non-printables by one and the same code, instead of
    using ‘\uXXXX’ notation. I have studied

    library(portray_text) extensively. And my conclusion
    is still that it is extremely ancient.

    For example I find:

    mostly_codes([H|T], Yes, No, MinFactor) :-
        integer(H),
        H >= 0,
        H =< 0x1ffff,
        [...]
       ;   catch(code_type(H, print),error(_,_),fail),
        [...]

    https://github.com/SWI-Prolog/swipl-devel/blob/eddbde61be09b95eb3ca2e160e73c2340744a3d2/library/portray_text.pl#L235

    Why even 0x1ffff and not 0x10ffff? This is a bug;
    do you want to starve is_text_code/1? The official
    Unicode range is 0x0 to 0x10ffff. Ulrich Neumerkel

    often confused the range in some of his code snippets,
    maybe based on a limited interpretation of Unicode.
    But if one switched to chars, one could easily

    support any Unicode code point, even without
    knowing the range. Just do this:

    mostly_chars([H|T], Yes, No, MinFactor) :-
        atom(H),
        atom_length(H, 1),
        [...]
       ;   /* printable check not needed */
        [...]
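
    For completeness, a minimal self-contained sketch of such a chars-based
    test (my own wording, not the library code; the [...] parts above stay
    as they are in the library):

    mostly_chars(Chars, Factor) :-
        length(Chars, Len),
        Len > 0,
        include(one_char_atom, Chars, Ones),
        length(Ones, N),
        N >= Factor * Len.

    one_char_atom(X) :- atom(X), atom_length(X, 1).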

    Mild Shock schrieb:
    Hi,

    The most radical approach is Novacore from
    Dogelog Player. It consists of the following
    major incisions in the ISO core standard:

    - We do not forbid chars, like for example
      using lists of the form [a,b,c], we also
      provide char_code/2 predicate bidirectionally.

    - We do not provide and _chars built-in
      predicates also there is nothing _strings. The
      Prolog system is clever enough to not put
      every atom it sees in an atom table. There
      is only a predicate table.

    - Some host languages have garbage collection that
      deduplicates Strings. For example some Java
      versions have an options to do that. But we
      do not have any efforts to deduplicate atoms,
      which are simply plain strings.

    - Some languages have constant pools. For example
      the Java byte code format includes a constant
      pool in every class header. We do not do that
      during transpilation , but we could of course.
      But it begs the question, why only deduplicate
      strings and not other constant expressions as well?

    - We are totally happy that we have only codes,
      there are chances that the host languages use
      tagged pointers to represent them. So they
      are represented similar to the tagged pointers
      in SWI-Prolog which works for small integers.

    - But the tagged pointer argument is moot,
      since atom length=1 entities can be also
      represented as tagged pointers, and some
      programming languages do that. Dogelog Player
      would use such tagged pointers without
      poluting the atom table.

    - What else?

    Bye

    Mild Shock schrieb:

    Technically SWI-Prolog doesn't prefer codes.
    Library `library(pure_input)` might prefer codes.
    But this is again an issue of improving the
    library by some non existent SWI-Prolog community.

    The ISO core standard is silent about a flag
    back_quotes, but has a lot of API requirements
    that support both codes and chars, for example it
    requires atom_codes/2 and atom_chars/2.

    Implementation wise there can be an issue,
    like one might decide to implement the atoms
    of length=1 more efficiently, since with Unicode
    there is now an explosion.

    Not sure whether Trealla Prolog and Scryer
    Prolog thought about this problem, that the
    atom table gets quite large. Whereas codes don't
    eat the atom table. Maybe they forbit predicates

    that have an atom of length=1 head:

    h(X) :-
         write('Hello '), write(X), write('!'), nl.

    Does this still work?

    Mild Shock schrieb:
    Concerning library(portray_text) which is in limbo:

    Libraries are (often) written for either
    and thus the libraries make the choice.

    But who writes these libraries? The SWI Prolog
    community. And who doesn’t improve these libraries,
    instead floods the web with workaround tips?
    The SWI Prolog community.

    Conclusion the SWI-Prolog community has itself
    trapped in an ancient status quo, creating an island.
    Cannot improve its own tooling, is not willing
    to support code from else where that uses chars.

    Same with the missed AI Boom.

    (*) Code from elsewhere is dangerous, People
    might use other Prolog systems than only SWI-Prolog,
    like for exampe Trealla Prolog and Scryer Prolog.

    (**) Keeping the status quo is comfy. No need to
    think in terms of programm code. Its like biology
    teachers versus pathology staff, biology teachers
    do not everyday see opened corpses.


    Mild Shock schrieb:

    Inductive logic programming at 30
    https://arxiv.org/abs/2102.10556

    The paper contains not a single reference to autoencoders!
    Still they show this example:

    Fig. 1 ILP systems struggle with structured examples that
    exhibit observational noise. All three examples clearly
    spell the word "ILP", with some alterations: 3 noisy pixels,
    shifted and elongated letters. If we would be to learn a
    program that simply draws "ILP" in the middle of the picture,
    without noisy pixels and elongated letters, that would
    be a correct program.

    I guess ILP is 30 years behind the AI boom. An early autoencoder
    turned into transformer was already reported here (*):

    SERIAL ORDER, Michael I. Jordan - May 1986
    https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf

    Well ILP might have its merits, maybe we should not ask
    for a marriage of LLM and Prolog, but Autoencoders and ILP.
    But its tricky, I am still trying to decode the da Vinci code of

    things like stacked tensors, are they related to k-literal clauses?
    The paper I referenced is found in this excellent video:

    The Making of ChatGPT (35 Year History)
    https://www.youtube.com/watch?v=OFS90-FX6pg





    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Jun 27 13:22:33 2025
    From Newsgroup: comp.lang.prolog

    Somebody wrote:

    It seems that it reads in as ðŸ‘\u008D but writes out as ðŸ‘\\x8D\\.

    Can one then do ‘\uXXXX’ in 100% Prolog as
    well? Even including surrogates? Of course;
    here is a DCG generator snippet from Dogelog

    Player which is 100% Prolog. This is from the
    Java backend, because I didn’t introduce ‘\uXXXX’
    in my Prolog system, since it is not part of the

    ISO core standard. The ISO core standard would want '\xXX\':

    crossj_escape_code2(X) --> {X =< 0xFFFF}, !,
       {atom_integer(J, 16, X), atom_codes(J, H),
       length(H, N), M is 4-N}, [0'\\, 0'u],
       cross_escape_zeros(M),
       cross_escape_codes2(H).
    crossj_escape_code2(X) --> {crossj_high_surrogate(X, Y),
       crossj_low_surrogate(X, Z)},
       crossj_escape_code2(Y),
       crossj_escape_code2(Z).

    crossj_high_surrogate(X, Y) :- Y is (X >> 10) + 0xD7C0.

    crossj_low_surrogate(X, Y) :- Y is (X /\ 0x3FF) + 0xDC00.
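
    A quick check of the surrogate arithmetic (my own worked example),
    e.g. for U+1F600:

    ?- X = 0x1F600,
       High is (X >> 10) + 0xD7C0,
       Low is (X /\ 0x3FF) + 0xDC00,
       format('~16r ~16r~n', [High, Low]).
    d83d de00
    X = 128512,
    High = 55357,
    Low = 56832.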

    Mild Shock schrieb:
    The official replacement character is 0xFFFD:

    Replacement Character
    https://www.compart.com/de/unicode/U+FFFD

    Well that is what people did in the past, replace
    non-printables by the ever same code, instead of
    using ‘\uXXXX’ notation. I have studied the

    library(portray_text) extensively. And my conclusion
    is still that it extremly ancient.

    For example I find:

    mostly_codes([H|T], Yes, No, MinFactor) :-
        integer(H),
        H >= 0,
        H =< 0x1ffff,
        [...]
       ;   catch(code_type(H, print),error(_,_),fail),
        [...]

    https://github.com/SWI-Prolog/swipl-devel/blob/eddbde61be09b95eb3ca2e160e73c2340744a3d2/library/portray_text.pl#L235


    Why even 0x1ffff and not 0x10ffff, this is a bug,
    do you want to starve is_text_code/1 ? The official
    Unicode range is 0x0 to 0x10ffff. Ulrich Neumerkel

    often confused the range in some of his code snippets,
    maybe based on a limited interpretation of Unicode.
    But if one would switch to chars one could easily

    support any Unicode code point even without
    knowing the range. Just do this:

    mostly_chars([H|T], Yes, No, MinFactor) :-
        atom(H),
        atom_length(H, 1),
        [...]
       ;  /* printable check not needed */
        [...]

    Mild Shock schrieb:
    Hi,

    The most radical approach is Novacore from
    Dogelog Player. It consists of the following
    major incisions in the ISO core standard:

    - We do not forbid chars, like for example
       using lists of the form [a,b,c], we also
       provide char_code/2 predicate bidirectionally.

    - We do not provide and _chars built-in
       predicates also there is nothing _strings. The
       Prolog system is clever enough to not put
       every atom it sees in an atom table. There
       is only a predicate table.

    - Some host languages have garbage collection that
       deduplicates Strings. For example some Java
       versions have an options to do that. But we
       do not have any efforts to deduplicate atoms,
       which are simply plain strings.

    - Some languages have constant pools. For example
       the Java byte code format includes a constant
       pool in every class header. We do not do that
       during transpilation , but we could of course.
       But it begs the question, why only deduplicate
       strings and not other constant expressions as well?

    - We are totally happy that we have only codes,
       there are chances that the host languages use
       tagged pointers to represent them. So they
       are represented similar to the tagged pointers
       in SWI-Prolog which works for small integers.

    - But the tagged pointer argument is moot,
       since atom length=1 entities can be also
       represented as tagged pointers, and some
       programming languages do that. Dogelog Player
       would use such tagged pointers without
       poluting the atom table.

    - What else?

    Bye

    Mild Shock schrieb:

    Technically SWI-Prolog doesn't prefer codes.
    Library `library(pure_input)` might prefer codes.
    But this is again an issue of improving the
    library by some non existent SWI-Prolog community.

    The ISO core standard is silent about a flag
    back_quotes, but has a lot of API requirements
    that support both codes and chars, for example it
    requires atom_codes/2 and atom_chars/2.

    Implementation wise there can be an issue,
    like one might decide to implement the atoms
    of length=1 more efficiently, since with Unicode
    there is now an explosion.

    Not sure whether Trealla Prolog and Scryer
    Prolog thought about this problem, that the
    atom table gets quite large. Whereas codes don't
    eat the atom table. Maybe they forbit predicates

    that have an atom of length=1 head:

    h(X) :-
         write('Hello '), write(X), write('!'), nl.

    Does this still work?

    Mild Shock schrieb:
    Concerning library(portray_text) which is in limbo:

    Libraries are (often) written for either
    and thus the libraries make the choice.

    But who writes these libraries? The SWI Prolog
    community. And who doesn’t improve these libraries,
    instead floods the web with workaround tips?
    The SWI Prolog community.

    Conclusion the SWI-Prolog community has itself
    trapped in an ancient status quo, creating an island.
    Cannot improve its own tooling, is not willing
    to support code from else where that uses chars.

    Same with the missed AI Boom.

    (*) Code from elsewhere is dangerous, People
    might use other Prolog systems than only SWI-Prolog,
    like for exampe Trealla Prolog and Scryer Prolog.

    (**) Keeping the status quo is comfy. No need to
    think in terms of programm code. Its like biology
    teachers versus pathology staff, biology teachers
    do not everyday see opened corpses.


    Mild Shock schrieb:

    Inductive logic programming at 30
    https://arxiv.org/abs/2102.10556

    The paper contains not a single reference to autoencoders!
    Still they show this example:

    Fig. 1 ILP systems struggle with structured examples that
    exhibit observational noise. All three examples clearly
    spell the word "ILP", with some alterations: 3 noisy pixels,
    shifted and elongated letters. If we would be to learn a
    program that simply draws "ILP" in the middle of the picture,
    without noisy pixels and elongated letters, that would
    be a correct program.

    I guess ILP is 30 years behind the AI boom. An early autoencoder
    turned into transformer was already reported here (*):

    SERIAL ORDER, Michael I. Jordan - May 1986
    https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf

    Well ILP might have its merits, maybe we should not ask
    for a marriage of LLM and Prolog, but Autoencoders and ILP.
    But its tricky, I am still trying to decode the da Vinci code of

    things like stacked tensors, are they related to k-literal clauses?
    The paper I referenced is found in this excellent video:

    The Making of ChatGPT (35 Year History)
    https://www.youtube.com/watch?v=OFS90-FX6pg






    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Jun 27 13:36:14 2025
    From Newsgroup: comp.lang.prolog


    Attention: Java is an example that doesn’t
    understand \UXXXXXXXX, so one has to be careful
    when introducing \uXXXX and \UXXXXXXXX at the same time.

    Although in Python this works:

    emoji = "\U0001F600" # 😀 GRINNING FACE

    Python universal strings can even distinguish between
    the original code point and the surrogate translation, since
    the strings can be made of up to 32-bit words. Java

    only accepts the surrogate translation for the grinning
    face, since its strings are 16-bit words:

    String emoji = "\uD83D\uDE00"; // 😀 GRINNING FACE

    Mild Shock schrieb:
    Somebody wrote:

    It seems that it reads in as ðŸ‘\u008D but writes out as ðŸ‘\\x8D\\.

    Can one then do ‘\uXXXX’ in 100% Prolog as
    well? Even including surrogates? Of course,
    here some DCG generator snippet from Dogelog

    Player which is 100% Prolog. This is from the
    Java backend, because I didn’t introduce ‘\uXXXX’
    in my Prolog system, because it is not part of

    ISO core standard. The ISO core standard would want '\xXX':

    crossj_escape_code2(X) --> {X =< 0xFFFF}, !,
       {atom_integer(J, 16, X), atom_codes(J, H),
       length(H, N), M is 4-N}, [0'\\, 0'u],
       cross_escape_zeros(M),
       cross_escape_codes2(H).
    crossj_escape_code2(X) --> {crossj_high_surrogate(X, Y),
       crossj_low_surrogate(X, Z)},
       crossj_escape_code2(Y),
       crossj_escape_code2(Z).

    crossj_high_surrogate(X, Y) :- Y is (X >> 10) + 0xD7C0.

    crossj_low_surrogate(X, Y) :- Y is (X /\ 0x3FF) + 0xDC00.

    Mild Shock schrieb:
    The official replacement character is 0xFFFD:

    Replacement Character
    https://www.compart.com/de/unicode/U+FFFD

    Well that is what people did in the past, replace
    non-printables by the ever same code, instead of
    using ‘\uXXXX’ notation. I have studied the

    library(portray_text) extensively. And my conclusion
    is still that it extremly ancient.

    For example I find:

    mostly_codes([H|T], Yes, No, MinFactor) :-
         integer(H),
         H >= 0,
         H =< 0x1ffff,
         [...]
        ;   catch(code_type(H, print),error(_,_),fail),
         [...]

    https://github.com/SWI-Prolog/swipl-devel/blob/eddbde61be09b95eb3ca2e160e73c2340744a3d2/library/portray_text.pl#L235


    Why even 0x1ffff and not 0x10ffff, this is a bug,
    do you want to starve is_text_code/1 ? The official
    Unicode range is 0x0 to 0x10ffff. Ulrich Neumerkel

    often confused the range in some of his code snippets,
    maybe based on a limited interpretation of Unicode.
    But if one would switch to chars one could easily

    support any Unicode code point even without
    knowing the range. Just do this:

    mostly_chars([H|T], Yes, No, MinFactor) :-
         atom(H),
         atom_length(H, 1),
         [...]
        ;  /* printable check not needed */
         [...]

    Mild Shock schrieb:
    Hi,

    The most radical approach is Novacore from
    Dogelog Player. It consists of the following
    major incisions in the ISO core standard:

    - We do not forbid chars, like for example
       using lists of the form [a,b,c], we also
       provide char_code/2 predicate bidirectionally.

    - We do not provide and _chars built-in
       predicates also there is nothing _strings. The
       Prolog system is clever enough to not put
       every atom it sees in an atom table. There
       is only a predicate table.

    - Some host languages have garbage collection that
       deduplicates Strings. For example some Java
       versions have an options to do that. But we
       do not have any efforts to deduplicate atoms,
       which are simply plain strings.

    - Some languages have constant pools. For example
       the Java byte code format includes a constant
       pool in every class header. We do not do that
       during transpilation, but we could of course.
       But it begs the question: why only deduplicate
       strings and not other constant expressions as well?

    - We are totally happy that we have only codes,
       there are chances that the host languages use
       tagged pointers to represent them. So they
       are represented similar to the tagged pointers
       in SWI-Prolog which works for small integers.

    - But the tagged pointer argument is moot,
       since atom length=1 entities can be also
       represented as tagged pointers, and some
       programming languages do that. Dogelog Player
       would use such tagged pointers without
       polluting the atom table.

    - What else?

    Bye

    Mild Shock schrieb:

    Technically SWI-Prolog doesn't prefer codes.
    Library `library(pure_input)` might prefer codes.
    But this is again an issue of improving the
    library by some non-existent SWI-Prolog community.

    The ISO core standard is silent about a flag
    back_quotes, but has a lot of API requirements
    that support both codes and chars, for example it
    requires atom_codes/2 and atom_chars/2.

    Implementation wise there can be an issue,
    like one might decide to implement the atoms
    of length=1 more efficiently, since with Unicode
    there is now an explosion.

    Not sure whether Trealla Prolog and Scryer
    Prolog thought about this problem, that the
    atom table gets quite large. Whereas codes don't
    eat the atom table. Maybe they forbid predicates

    that have an atom of length=1 as head:

    h(X) :-
         write('Hello '), write(X), write('!'), nl.

    Does this still work?

    Mild Shock schrieb:
    Concerning library(portray_text) which is in limbo:

    Libraries are (often) written for either codes or chars,
    and thus the libraries make the choice.

    But who writes these libraries? The SWI Prolog
    community. And who doesn’t improve these libraries,
    instead floods the web with workaround tips?
    The SWI Prolog community.

    Conclusion the SWI-Prolog community has itself
    trapped in an ancient status quo, creating an island.
    Cannot improve its own tooling, is not willing
    to support code from else where that uses chars.

    Same with the missed AI Boom.

    (*) Code from elsewhere is dangerous: people
    might use other Prolog systems than only SWI-Prolog,
    like for example Trealla Prolog and Scryer Prolog.

    (**) Keeping the status quo is comfy. No need to
    think in terms of program code. It's like biology
    teachers versus pathology staff: biology teachers
    do not see opened corpses every day.


    Mild Shock schrieb:

    Inductive logic programming at 30
    https://arxiv.org/abs/2102.10556

    The paper contains not a single reference to autoencoders!
    Still they show this example:

    Fig. 1 ILP systems struggle with structured examples that
    exhibit observational noise. All three examples clearly
    spell the word "ILP", with some alterations: 3 noisy pixels,
    shifted and elongated letters. If we would be to learn a
    program that simply draws "ILP" in the middle of the picture,
    without noisy pixels and elongated letters, that would
    be a correct program.

    I guess ILP is 30 years behind the AI boom. An early autoencoder
    turned into transformer was already reported here (*):

    SERIAL ORDER, Michael I. Jordan - May 1986
    https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf


    Well ILP might have its merits, maybe we should not ask
    for a marriage of LLM and Prolog, but Autoencoders and ILP.
    But its tricky, I am still trying to decode the da Vinci code of

    things like stacked tensors, are they related to k-literal clauses?
    The paper I referenced is found in this excellent video:

    The Making of ChatGPT (35 Year History)
    https://www.youtube.com/watch?v=OFS90-FX6pg







    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sun Jun 29 13:32:25 2025
    From Newsgroup: comp.lang.prolog


    Those that use a large part pay a pretty
    high price in terms of memory and currently
    also time for code points > 0xffff

    Emojis are typically above 0xffff. And from this
    announcement it seems Emojis are a big part of
    keeping up with the AI Boom:

    :rocket: Call for Papers: Integrating Logical
    Reasoning & Large Language Models (LLMs) :brain:

    https://swi-prolog.discourse.group/t/9065

    But it would cost you nothing to support this here in library(portray_text):

    /* SWI-Prolog 9.3.24 */
    ?- X = [a,b,c].
    X = `abc`

    It is extremely trivial to implement, it's not really
    rocket science. It doesn't need much brains and
    it also works for Emojis:

    /* Scryer Prolog 0.9.4-411 */
    ?- X = [a,b,c].
    X = "abc".
    ?- X = ['🚀', a, '🧠', b, c].
    X = "🚀a🧠bc".

    In Scryer Prolog it shows double quotes and not
    back quotes, because of the different default settings
    of the Prolog flags double_quotes and back_quotes.
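
    Just to illustrate, a minimal sketch of such a display rule
    (this is not the actual library(portray_text) code; it uses the
    SWI-Prolog specific user:portray/1 hook, and char_list/1 is a
    helper made up for this example):

    :- multifile user:portray/1.

    % Portray a non-empty proper list of one-char atoms as "...".
    user:portray(Chars) :-
        is_list(Chars),
        Chars = [_|_],
        char_list(Chars),
        atom_chars(Atom, Chars),
        format('"~w"', [Atom]).

    char_list([]).
    char_list([C|Cs]) :-
        atom(C),
        atom_length(C, 1),
        char_list(Cs).

    With that loaded, ?- print(['🚀',a,'🧠',b,c]). should show
    "🚀a🧠bc" instead of the raw list.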

    Mild Shock schrieb:

    Inductive logic programming at 30
    https://arxiv.org/abs/2102.10556

    The paper contains not a single reference to autoencoders!
    Still they show this example:

    Fig. 1 ILP systems struggle with structured examples that
    exhibit observational noise. All three examples clearly
    spell the word "ILP", with some alterations: 3 noisy pixels,
    shifted and elongated letters. If we would be to learn a
    program that simply draws "ILP" in the middle of the picture,
    without noisy pixels and elongated letters, that would
    be a correct program.

    I guess ILP is 30 years behind the AI boom. An early autoencoder
    turned into transformer was already reported here (*):

    SERIAL ORDER, Michael I. Jordan - May 1986 https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf

    Well ILP might have its merits, maybe we should not ask
    for a marriage of LLM and Prolog, but Autoencoders and ILP.
    But its tricky, I am still trying to decode the da Vinci code of

    things like stacked tensors, are they related to k-literal clauses?
    The paper I referenced is found in this excellent video:

    The Making of ChatGPT (35 Year History) https://www.youtube.com/watch?v=OFS90-FX6pg


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sun Jun 29 13:36:33 2025
    From Newsgroup: comp.lang.prolog

    Bonus in Trealla Prolog, which has a different
    tokenizer for Prolog texts, different
    from Scryer Prolog and more Unicode-embracing;

    there you can even do:

    /* Trealla Prolog 2.77.9-1 */
    ?- X = [🚀, a, 🧠, b, c].
    X = "🚀a🧠bc".

    Mild Shock schrieb:

    Those that use a large part pay a pretty
    high price in terms of memory and currently
    also time for code points > 0xffff

    Emojis are typically above 0xffff. And from this
    announcement its seem, Emojis are a big part with
    keeping up with the AI Boom:

    :rocket: Call for Papers: Integrating Logical
    Reasoning & Large Language Models (LLMs) :brain:

    https://swi-prolog.discourse.group/t/9065

    But it would cost you nothing to support this here in
    library(portray_text):

    /* SWI-Prolog 9.3.24 */
    ?- X = [a,b,c]
    X = `abc`

    It is extremly trivial to implement, its not really
    rocket science. It doesn need much brains and
    it works also for Emojis:

    /* Scryer Prolog 0.9.4-411 */
    ?- X = [a,b,c].
       X = "abc".
    ?- X = ['🚀', a, '🧠', b, c].
       X = "🚀a🧠bc".

    In Scryer Prolog it shows double quotes and not
    back quotes, because of the different default settings
    of the Prolog flags double_quotes and back_quotes.

    Mild Shock schrieb:

    Inductive logic programming at 30
    https://arxiv.org/abs/2102.10556

    The paper contains not a single reference to autoencoders!
    Still they show this example:

    Fig. 1 ILP systems struggle with structured examples that
    exhibit observational noise. All three examples clearly
    spell the word "ILP", with some alterations: 3 noisy pixels,
    shifted and elongated letters. If we would be to learn a
    program that simply draws "ILP" in the middle of the picture,
    without noisy pixels and elongated letters, that would
    be a correct program.

    I guess ILP is 30 years behind the AI boom. An early autoencoder
    turned into transformer was already reported here (*):

    SERIAL ORDER, Michael I. Jordan - May 1986
    https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf

    Well ILP might have its merits, maybe we should not ask
    for a marriage of LLM and Prolog, but Autoencoders and ILP.
    But its tricky, I am still trying to decode the da Vinci code of

    things like stacked tensors, are they related to k-literal clauses?
    The paper I referenced is found in this excellent video:

    The Making of ChatGPT (35 Year History)
    https://www.youtube.com/watch?v=OFS90-FX6pg



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sun Jun 29 16:35:37 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    How it started, total humbug:

    Quantentheorie und "Ich" - Gary B. Schmid (2017) https://cropfm.at/archive/show/quantenich

    How it's going, a little better:

    A Science without Free Will - Robert M. Sapolsky (2023) https://www.amazon.de/dp/0525560971

    Bye

    Mild Shock schrieb:

    Inductive logic programming at 30
    https://arxiv.org/abs/2102.10556

    The paper contains not a single reference to autoencoders!
    Still they show this example:

    Fig. 1 ILP systems struggle with structured examples that
    exhibit observational noise. All three examples clearly
    spell the word "ILP", with some alterations: 3 noisy pixels,
    shifted and elongated letters. If we would be to learn a
    program that simply draws "ILP" in the middle of the picture,
    without noisy pixels and elongated letters, that would
    be a correct program.

    I guess ILP is 30 years behind the AI boom. An early autoencoder
    turned into transformer was already reported here (*):

    SERIAL ORDER, Michael I. Jordan - May 1986 https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf

    Well ILP might have its merits, maybe we should not ask
    for a marriage of LLM and Prolog, but Autoencoders and ILP.
    But its tricky, I am still trying to decode the da Vinci code of

    things like stacked tensors, are they related to k-literal clauses?
    The paper I referenced is found in this excellent video:

    The Making of ChatGPT (35 Year History) https://www.youtube.com/watch?v=OFS90-FX6pg


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Wed Jul 9 01:55:04 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    That Prolog missed the AI Boom is quite amazing,
    given that neural networks have a lot to do
    with physics, and there were even Prologers with

    a physics PhD, well almost, if there weren't a typo:

    Name: Bart Demoen
    Dissertation: Stability and Equilibrium for Clasical infinite Systems
    Advisor: Andre Frans Maria Verbeure
    https://www.genealogy.math.ndsu.nodak.edu/id.php?id=70951

    What does Clasical mean? But then there is a famous
    test case, which can melt Bart Demoen's brain:

    Gap in Section 7.6.2 and some Insecurity Arising from it

    ?- call((Z=!, a(X), Z)).
    Z = !
    X = 1 ?;
    Z = !
    X = 2
    yes

    ?- findall(Z-X,call((Z=!, a(X), Z)),L).
    L = [!-1]

    https://groups.google.com/g/comp.lang.prolog/c/FNou9Z-A_Zs/m/NSSTmbx4E4wJ

    Bart Demoen's memorable reaction in 04.11.2011, 23:06:47 was:

    PLEASE, DO NOT ALARM THIS NEWSGROUP ABOUT "FUNNY" RESULTS
    WITHOUT TELLING US WHICH SYSTEM CAUSED THE RESULTS

    LoL

    Bye

    Mild Shock schrieb:

    Inductive logic programming at 30
    https://arxiv.org/abs/2102.10556

    The paper contains not a single reference to autoencoders!
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Wed Jul 9 02:08:24 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Although Paulo Moura followed the lead of Bart
    Demoen and started mobbing me, many people were
    rather shaking their heads over Bart Demoen.

    When Bart Demoen posted:

    Programming vs. Specification

    ?- X = a, setof(Y, p(X, Y), S).
    and
    ?- setof(Y, p(X, Y), S), X = a.

    not for the following definition of p/2:

    p(b,1) :- ! .
    p(_,2) .

    Fernando Pereira sighed:

    Sigh... We were discussing *logical* advantages.
    Of course setof cannot give a declarative reading
    to a program that doesn't have one to start with.

    https://groups.google.com/g/comp.lang.prolog/c/-oerQs4l2Zw/m/v0kjwLIwuI0J

    Bye

    Mild Shock schrieb:
    Hi,

    That Prolog missed the AI Boom is quite amazing,
    given that neural networks have a lot to do
    with physics, and there were even Prologers with

    a physics PhD, well almost if there werent a typo:

    Name: Bart Demoen
    Dissertation: Stability and Equilibrium for Clasical infinite Systems
    Advisor: Andre Frans Maria Verbeure
    https://www.genealogy.math.ndsu.nodak.edu/id.php?id=70951

    What does Clasical mean? But then there is a famous
    test case, which can melt Bart Demoen's brain:

    Gap in Section 7.6.2 and some Insecurity Arising from it

    ?- call((Z=!, a(X), Z)).
    Z = !
    X = 1 ?;
    Z = !
    X = 2
    yes

    ?- findall(Z-X,call((Z=!, a(X), Z)),L).
    L = [!-1]


    https://groups.google.com/g/comp.lang.prolog/c/FNou9Z-A_Zs/m/NSSTmbx4E4wJ

    Bart Demoen's memorable reaction in 04.11.2011, 23:06:47 was:

    PLEASE, DO NOT ALARM THIS NEWSGROUP ABOUT "FUNNY" RESULTS
    WITHOUT TELLING US WHICH SYSTEM CAUSED THE RESULTS

    LoL

    Bye

    Mild Shock schrieb:

    Inductive logic programming at 30
    https://arxiv.org/abs/2102.10556

    The paper contains not a single reference to autoencoders!

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Wed Jul 9 02:12:46 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Ok, one last sample, barty boy in full swing:

    Oh, but you did much better: you tried to ridicule
    my research, my home country and the size of my
    Prolog programs. I can't match that.

    https://groups.google.com/g/comp.lang.prolog/c/uh_HUytRGJE/m/tXc7euv1KngJ

    So he got into a struggle with industry? I would
    add not only small programs, but also a micro
    penis and an empty head.

    LoL

    Bye

    Mild Shock schrieb:
    Hi,

    Although Paulo Moura followed the lead of Bart
    Demoen and started mobbing me. Many people were
    rather shaking their head over Bart Demoen.

    When Bart Demoen posted:

    Programming vs. Specification

    ?- X = a, setof(Y, p(X, Y), S).
    and
    ?- setof(Y, p(X, Y), S), X = a.

    not for the following definition of p/2:

    p(b,1) :- ! .
    p(_,2) .

    Fernando Pereira, sighted:

    Sigh... We were discussing *logical* advantages.
    Of course setof cannot give a declarative reading
    to a program that doesn't have one to start with.


    https://groups.google.com/g/comp.lang.prolog/c/-oerQs4l2Zw/m/v0kjwLIwuI0J

    Bye

    Mild Shock schrieb:
    Hi,

    That Prolog missed the AI Boom is quite amazing,
    given that neural networks have a lot to do
    with physics, and there were even Prologers with

    a physics PhD, well almost if there werent a typo:

    Name: Bart Demoen
    Dissertation: Stability and Equilibrium for Clasical infinite Systems
    Advisor:  Andre Frans Maria Verbeure
    https://www.genealogy.math.ndsu.nodak.edu/id.php?id=70951

    What does Clasical mean? But then there is a famous
    test case, which can melt Bart Demoen's brain:

    Gap in Section 7.6.2 and some Insecurity Arising from it
    ?- call((Z=!, a(X), Z)).
    Z = !
    X = 1 ?;
    Z = !
    X = 2
    yes
    ?- findall(Z-X,call((Z=!, a(X), Z)),L).
    L = [!-1]

    https://groups.google.com/g/comp.lang.prolog/c/FNou9Z-A_Zs/m/NSSTmbx4E4wJ

    Bart Demoen's memorable reaction in 04.11.2011, 23:06:47 was:

    PLEASE, DO NOT ALARM THIS NEWSGROUP ABOUT "FUNNY" RESULTS
    WITHOUT TELLING US WHICH SYSTEM CAUSED THE RESULTS

    LoL

    Bye

    Mild Shock schrieb:

    Inductive logic programming at 30
    https://arxiv.org/abs/2102.10556

    The paper contains not a single reference to autoencoders!


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Wed Jul 9 02:23:02 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    I hope he doesn't get a heart attack.
    But he really made it too easy for me:

    Isaiah 30:8 (NIV): “Go now, write it
    on a tablet for them, inscribe it on a
    scroll, that for days to come it may
    be an everlasting witness.”

    Enjoy your free time:

    Bart Demoen retired ...
    I retired on 1 october 2018.
    https://people.cs.kuleuven.be/~bart.demoen/

    It's not that comp.lang.prolog would somehow
    track people and their biography. This
    was my little contribution.

    Bye

    Mild Shock schrieb:
    Hi,

    Ok, one last sample, barty boy in full swing:

    Oh, but you did much better: you tried to ridicule
    my research, my home country and the size of my
    Prolog programs. I can't match that.


    https://groups.google.com/g/comp.lang.prolog/c/uh_HUytRGJE/m/tXc7euv1KngJ

    So he got into struggle with industry? I would
    add not only small programms, but also a micro
    penis and an empty head.

    LoL

    Bye

    Mild Shock schrieb:
    Hi,

    Although Paulo Moura followed the lead of Bart
    Demoen and started mobbing me. Many people were
    rather shaking their head over Bart Demoen.

    When Bart Demoen posted:

    Programming vs. Specification
    ?- X = a, setof(Y, p(X, Y), S).
    and
    ?- setof(Y, p(X, Y), S), X = a.
    not for the following definition of p/2:
    p(b,1) :- ! .
    p(_,2) .

    Fernando Pereira, sighted:

    Sigh... We were discussing *logical* advantages.
    Of course setof cannot give a declarative reading
    to a program that doesn't have one to start with.

    https://groups.google.com/g/comp.lang.prolog/c/-oerQs4l2Zw/m/v0kjwLIwuI0J

    Bye

    Mild Shock schrieb:
    Hi,

    That Prolog missed the AI Boom is quite amazing,
    given that neural networks have a lot to do
    with physics, and there were even Prologers with

    a physics PhD, well almost if there werent a typo:

    Name: Bart Demoen
    Dissertation: Stability and Equilibrium for Clasical infinite Systems
    Advisor: Andre Frans Maria Verbeure
    https://www.genealogy.math.ndsu.nodak.edu/id.php?id=70951

    What does Clasical mean? But then there is a famous
    test case, which can melt Bart Demoen's brain:

    Gap in Section 7.6.2 and some Insecurity Arising from it
    ?- call((Z=!, a(X), Z)).
    Z = !
    X = 1 ?;
    Z = !
    X = 2
    yes
    ?- findall(Z-X,call((Z=!, a(X), Z)),L).
    L = [!-1]

    https://groups.google.com/g/comp.lang.prolog/c/FNou9Z-A_Zs/m/NSSTmbx4E4wJ

    Bart Demoen's memorable reaction in 04.11.2011, 23:06:47 was:

    PLEASE, DO NOT ALARM THIS NEWSGROUP ABOUT "FUNNY" RESULTS
    WITHOUT TELLING US WHICH SYSTEM CAUSED THE RESULTS

    LoL

    Bye

    Mild Shock schrieb:

    Inductive logic programming at 30
    https://arxiv.org/abs/2102.10556

    The paper contains not a single reference to autoencoders!



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Thu Jul 10 19:17:12 2025
    From Newsgroup: comp.lang.prolog


    Henri Poincaré believed that mathematical
    and scientific creativity came from a deep,
    unconscious intuition that could not be

    captured by mechanical reasoning or formal
    systems. He famously wrote about how insights
    came not from plodding logic but from sudden

    illuminations — leaps of creative synthesis.

    But now we have generative AI — models like GPT — that:

    - produce poetry, proofs, stories, and code,

    - combine ideas in novel ways,

    - and do so by processing patterns in massive
    datasets, without conscious understanding.

    And that does seem to contradict Poincaré's belief
    that true invention cannot come from automation.

    Mild Shock schrieb:
    Hi,

    But I shouldn't waste too much time.
    One shouldn't punish people for just
    being plain stupid.

    Like for example this clueless French
    philosopher who had a lot of trouble
    with non-classical logic.

    His brain tried to eliminate non-classical
    logic, it was keen on avoiding non-classical
    logic. A typical species of a human with

    an extremely small brain, again working
    in the wrong place!

    Bye

    P.S.: Maybe this is a Poincaré thingy? Poincaré
    was a strong critic of logicism (as championed
    by Frege and Russell) and of Hilbert’s
    formalist program.

    But, he did not formally use or promote systems
    like intuitionistic logic, modal logic, or
    relevance logic. His logical framework remained
    within the bounds of classical logic,

    though he was skeptical of excessive formalism.
    He thought formal systems could miss the creative
    and synthetic nature of mathematical
    invention.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Thu Jul 10 21:22:00 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    This is nothing for Bart Demoen, Physics PhD,
    academic fraud. The ideal choice point can
    be formulated as a logical formula, involving

    an existential quantifier. Assume we have
    a query and already these answers, and the
    Prolog system is prompting the interactive user:

    ?- p(X).
    X = a1 ;
    X = a2 ;
    ...
    X = ak-1 ;
    X = ak

    A mathematical oracle that could indicate whether
    it is even necessary to prompt the user could be:

    ∃X ( p(X) & X =\= a1 & ... & X =\= ak)

    It doesn't match 100% Prolog, since Prolog might
    give duplicate answers or non-ground answers,
    but assume for the moment that the query p(X)

    produces only distinct and ground results.
    A nice existential FOL formula we have in the above.
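
    A minimal sketch of that oracle in plain Prolog, under the same
    assumption of distinct, ground answers (more_answers/3 is a
    hypothetical helper; it uses term equality via memberchk/2 from
    library(lists) instead of the =\= of the formula):

    % Succeeds iff Goal has an answer binding X that is not in Seen,
    % without leaving any bindings behind (a pure test).
    more_answers(X, Goal, Seen) :-
        \+ \+ ( call(Goal),
                \+ memberchk(X, Seen) ).

    % E.g. after the answers a1 and a2 have been shown:
    % ?- more_answers(X, p(X), [a1, a2]).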

    Bye

    Mild Shock schrieb:

    Henri Poincaré believed that mathematical
    and scientific creativity came from a deep,
    unconscious intuition that could not be

    captured by mechanical reasoning or formal
    systems. He famously wrote about how insights
    came not from plodding logic but from sudden

    illuminations — leaps of creative synthesis.

    But now we have generative AI — models like GPT — that:

    - produce poetry, proofs, stories, and code,

    - combine ideas in novel ways,

    - and do so by processing patterns in massive
      datasets, without conscious understanding.

    And that does seem to contradict Poincaré's belief
    that true invention cannot come from automation.

    Mild Shock schrieb:
    Hi,

    But I shouldn't waste too much time.
    One shouldn't punish people for just
    being plain stupid.

    Like for example this clueless french
    philosopher who had a lot of troubles
    with non-classical logic.

    His brain tried to eliminate non-classical
    logic, it was keen on avoiding non-classical
    logic. A typical species of a human with

    an extremly small brain, again working
    in the wrong place!

    Bye

    P.S.: Maybe this a Poincaré thingy? Poincaré
    was a strong critic of logicism (as championed
    by Frege and Russell) and of Hilbert’s
    formalist program.

    But, he did not formally use or promote systems
    like intuitionistic logic, modal logic, or
    relevance logic. His logical framework remained
    within the bounds of classical logic,

    though he was skeptical of excessive formalism.
    He thought formal systems could miss the creative
    and synthetic nature of mathematical
    invention.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Thu Jul 10 21:30:35 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Now what does a Prolog system do? Well when
    it prompts the end-user it has somewhere
    a list of the current query choice points:

    CPs = [CP1, CP2, .., CPn]

    Which choice points a system creates is implementation
    specific; the ISO core standard also shows a machine
    in its more procedural explanation

    that depicts something that has choice points
    somewhere. Since it is implementation specific,
    a Prolog System A and Prolog System B might

    use different choice points:

    System A:
    CPs = [CP1, CP2, .., CPn]

    System B:
    CP's = [CP'1, CP'2, .., CP'n]

    We say a System B could eliminate a choice point CP,
    relative to a System A, if we have:

    System A:
    CP ∈ CPs

    System B:
    CP ∉ CPs

    So System B might have an advantage over System A,
    since it will not backtrack over CP.

    When it comes to answer substitution display,
    it is now very common that a Prolog system checks
    its own choice points, and when it finds that

    CP = []

    it knows that the query left no choice points:
    either because there never were any, because
    there was no branching in the executed code, or

    because a cut removed branching, or because
    they were eliminated somehow, like through
    some index analysis.
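
    A tiny illustration (the exact prompting behaviour is of course
    system specific; the transcript below is roughly what e.g.
    SWI-Prolog shows for these two toy predicates):

    p(a, 1).
    p(b, 2).

    q(X, 1) :- X = a.
    q(X, 2) :- X = b.

    ?- p(a, Y).
    Y = 1.          % clause indexing left no choice point, no prompt

    ?- q(a, Y).
    Y = 1 ;         % a choice point remains, so the user is prompted
    false.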

    Bye

    Mild Shock schrieb:
    Hi,

    This is nothing for Bart Demoen, Physics PhD,
    academic fraud. The ideal choice point can
    be formulated as a logical formula, involving

    an existential quantifier. Assume we have
    a query and already these answers, and the
    Prolog system is prompting the interactive user:

    ?- p(X).
    X = a1 ;
    X = a2 ;
    ...
    X = ak-1 ;
    X = ak

    A mathematical oracle that could indicate whether
    it is even necessary to prompt the user could be:

       ∃X ( p(X) & X =\= a1 & ... & X =\= ak)

    It doesn't match 100% Prolog since Prolog might
    give duplicate answers or non-ground answers,
    but assume for the moment the query q(X),

    produces only distinct and ground results.
    Nice existential FOL formula we have in the above.

    Bye

    Mild Shock schrieb:

    Henri Poincaré believed that mathematical
    and scientific creativity came from a deep,
    unconscious intuition that could not be

    captured by mechanical reasoning or formal
    systems. He famously wrote about how insights
    came not from plodding logic but from sudden

    illuminations — leaps of creative synthesis.

    But now we have generative AI — models like GPT — that:

    - produce poetry, proofs, stories, and code,

    - combine ideas in novel ways,

    - and do so by processing patterns in massive
       datasets, without conscious understanding.

    And that does seem to contradict Poincaré's belief
    that true invention cannot come from automation.

    Mild Shock schrieb:
    Hi,

    But I shouldn't waste too much time.
    One shouldn't punish people for just
    being plain stupid.

    Like for example this clueless french
    philosopher who had a lot of troubles
    with non-classical logic.

    His brain tried to eliminate non-classical
    logic, it was keen on avoiding non-classical
    logic. A typical species of a human with

    an extremly small brain, again working
    in the wrong place!

    Bye

    P.S.: Maybe this a Poincaré thingy? Poincaré
    was a strong critic of logicism (as championed
    by Frege and Russell) and of Hilbert’s
    formalist program.

    But, he did not formally use or promote systems
    like intuitionistic logic, modal logic, or
    relevance logic. His logical framework remained
    within the bounds of classical logic,

    though he was skeptical of excessive formalism.
    He thought formal systems could miss the creative
    and synthetic nature of mathematical
    invention.



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Thu Jul 10 21:35:27 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Now one might ask, if we have a Prolog system
    that anyway juggles with choice points, why
    would we need a logical formula for choice points?

    Well there is a funny correctness criterion,
    for example in the top-level, if the top-level
    doesn't prompt the end user anymore in such a scenario:

    ?- p(X).
    X = a1 ;
    X = a2 ;
    ...
    X = ak-1 ;
    X = ak

    So the end user is not prompted because the
    Prolog system finds CP = []. This is licensed
    by this correctness statement for any choice

    point elimination:

    CP = [] => ~∃X ( p(X) & X =\= a1 & ... & X =\= ak)

    Have Fun!

    Bye

    Mild Shock schrieb:
    Hi,

    Now what does a Prolog system do? Well when
    it prompts the end-user it has somewhere
    a list of the current query choice points:

    CPs = [CP1, CP2, .., CPn]

    This is implementation specific, what choice
    points a system creates, also the ISO core standard
    shows a machine in its more procedural explanation,

    that depicts something that has also somewhere
    choice points. Since it is implementation specific
    a Prolog System A and Prolog System B might

    use different choice points:

    System A:
    CPs = [CP1, CP2, .., CPn]

    System B:
    CP's = [CP'1, CP'2, .., CP'n]

    We say a System B could eliminate a choice point CP,
    relative to a System A, if we have:

    System A:
    CP ∈ CPs

    System B:
    CP ∉ CPs

    So System B might have an advantage over System A,
    since it will not backtrack over CP.

    When it comes to answer substitution display,
    it is now very common, that a Prolog system checks
    its own choice points, and when it finds that

    CP = []

    It knows that the query left no choice points,
    either because there were never any, because
    there was no branching in the executed code, or

    because a cut removed branching, or because
    they were eliminated somehow. Like through
    some index analysis.

    Bye

    Mild Shock schrieb:
    Hi,

    This is nothing for Bart Demoen, Physics PhD,
    academic fraud. The ideal choice point can
    be formulated as a logical formula, involving

    an existential quantifier. Assume we have
    a query and already these answers, and the
    Prolog system is prompting the interactive user:

    ?- p(X).
    X = a1 ;
    X = a2 ;
    ...
    X = ak-1 ;
    X = ak

    A mathematical oracle that could indicate whether
    it is even necessary to prompt the user could be:

        ∃X ( p(X) & X =\= a1 & ... & X =\= ak)

    It doesn't match 100% Prolog since Prolog might
    give duplicate answers or non-ground answers,
    but assume for the moment the query q(X),

    produces only distinct and ground results.
    Nice existential FOL formula we have in the above.

    Bye

    Mild Shock schrieb:

    Henri Poincaré believed that mathematical
    and scientific creativity came from a deep,
    unconscious intuition that could not be

    captured by mechanical reasoning or formal
    systems. He famously wrote about how insights
    came not from plodding logic but from sudden

    illuminations — leaps of creative synthesis.

    But now we have generative AI — models like GPT — that:

    - produce poetry, proofs, stories, and code,

    - combine ideas in novel ways,

    - and do so by processing patterns in massive
       datasets, without conscious understanding.

    And that does seem to contradict Poincaré's belief
    that true invention cannot come from automation.

    Mild Shock schrieb:
    Hi,

    But I shouldn't waste too much time.
    One shouldn't punish people for just
    being plain stupid.

    Like for example this clueless french
    philosopher who had a lot of troubles
    with non-classical logic.

    His brain tried to eliminate non-classical
    logic, it was keen on avoiding non-classical
    logic. A typical species of a human with

    an extremly small brain, again working
    in the wrong place!

    Bye

    P.S.: Maybe this a Poincaré thingy? Poincaré
    was a strong critic of logicism (as championed
    by Frege and Russell) and of Hilbert’s
    formalist program.

    But, he did not formally use or promote systems
    like intuitionistic logic, modal logic, or
    relevance logic. His logical framework remained
    within the bounds of classical logic,

    though he was skeptical of excessive formalism.
    He thought formal systems could miss the creative
    and synthetic nature of mathematical
    invention.




    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Thu Jul 10 21:43:22 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Today I had an idea of some semi-deep Prolog
    argument indexing. Just because choice point
    elimination is so important and has so many

    benefits for performance and the end user
    experience, as well as debugging. And because it is
    tied to indexing. An index yields a resulting clause

    list, which can always be checked for having reached
    its end. This gives look-ahead information to the
    Prolog system, which answers this oracle concerning

    clause instantiation:

    ∃X ( p(X) & X =\= a1 & ... & X =\= ak)

    So the idea of semi-deep Prolog argument indexing
    would be a hybrid between Scryer Prolog and
    SWI-Prolog, taking the best of both worlds.

    It would adopt skip indexes from Scryer Prolog
    and deep indexing of SWI-Prolog, but deep indexing
    through a Key computation trick. The Key computation

    trick is quickly explained.

    Normal Key Computations:

    p(a, ..) ~~> Computed Key: a/0 or sometimes a alone
    p(b(x,y), ..) ~~> Computed Key: b/2 or sometimes b alone
    Etc..

    Semi Deep Key Computation:

    p(a, ..) ~~> Computed Key: 'a'
    p([a, ..], ..) ~~> Computed Key: '.a'
    Etc..

    Got it?

    The Scryer Prolog skip index is needed because
    in a DCG the interesting arguments are usually
    not the first argument.
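
    A minimal sketch of such a key computation (semi_deep_key/2 is a
    hypothetical helper, not code from any of the mentioned systems;
    it keys on the first argument of a clause head):

    semi_deep_key(A, K) :-
        (   var(A)      -> K = none                    % unbound: not indexable
        ;   atom(A)     -> K = A                       % p(a, ..)       ~~> 'a'
        ;   A = [H|_],
            atom(H)     -> atom_concat('.', H, K)      % p([a, ..], ..) ~~> '.a'
        ;   compound(A) -> functor(A, N, Ar), K = N/Ar % fallback: normal key
        ;   K = A                                      % numbers key on themselves
        ).

    % ?- semi_deep_key([a,b,c], K).  gives K = '.a', so two DCG
    % clauses starting with different terminals get different keys.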

    Bye

    Mild Shock schrieb:
    Hi,

    Now one might ask, if we have a Prolog system
    that anyway juggles with choice points, why
    would we need a logical formula for choice points?

    Well there is a funny correctness criteria,
    for example in the top-level, if the top-level
    doesn't prompt the end user anymore in such a scenario:

    ?- p(X).
    X = a1 ;
    X = a2 ;
    ...
    X = ak-1 ;
    X = ak

    So the end user is not prompted because the
    Prolog system founds CP = []. This is licensed
    by this correctness statement for any choice

    point elimination:

    CP = [] => ~∃X ( p(X) & X =\= a1 & ... & X =\= ak)

    Have Fun!

    Bye

    Mild Shock schrieb:
    Hi,

    Now what does a Prolog system do? Well when
    it prompts the end-user it has somewhere
    a list of the current query choice points:

    CPs = [CP1, CP2, .., CPn]

    This is implementation specific, what choice
    points a system creates, also the ISO core standard
    shows a machine in its more procedural explanation,

    that depicts something that has also somewhere
    choice points. Since it is implementation specific
    a Prolog System A and Prolog System B might

    use different choice points:

    System A:
    CPs = [CP1, CP2, .., CPn]

    System B:
    CP's = [CP'1, CP'2, .., CP'n]

    We say a System B could eliminate a choice point CP,
    relative to a System A, if we have:

    System A:
    CP ∈ CPs

    System B:
    CP ∉ CPs

    So System B might have an advantage over System A,
    since it will not backtrack over CP.

    When it comes to answer substitution display,
    it is now very common, that a Prolog system checks
    its own choice points, and when it finds that

    CP = []

    It knows that the query left no choice points,
    either because there were never any, because
    there was no branching in the executed code, or

    because a cut removed branching, or because
    they were eliminated somehow. Like through
    some index analysis.

    Bye

    Mild Shock schrieb:
    Hi,

    This is nothing for Bart Demoen, Physics PhD,
    academic fraud. The ideal choice point can
    be formulated as a logical formula, involving

    an existential quantifier. Assume we have
    a query and already these answers, and the
    Prolog system is prompting the interactive user:

    ?- p(X).
    X = a1 ;
    X = a2 ;
    ...
    X = ak-1 ;
    X = ak

    A mathematical oracle that could indicate whether
    it is even necessary to prompt the user could be:

        ∃X ( p(X) & X =\= a1 & ... & X =\= ak)

    It doesn't match 100% Prolog since Prolog might
    give duplicate answers or non-ground answers,
    but assume for the moment the query q(X),

    produces only distinct and ground results.
    Nice existential FOL formula we have in the above.

    Bye

    Mild Shock schrieb:

    Henri Poincaré believed that mathematical
    and scientific creativity came from a deep,
    unconscious intuition that could not be

    captured by mechanical reasoning or formal
    systems. He famously wrote about how insights
    came not from plodding logic but from sudden

    illuminations — leaps of creative synthesis.

    But now we have generative AI — models like GPT — that:

    - produce poetry, proofs, stories, and code,

    - combine ideas in novel ways,

    - and do so by processing patterns in massive
       datasets, without conscious understanding.

    And that does seem to contradict Poincaré's belief
    that true invention cannot come from automation.

    Mild Shock schrieb:
    Hi,

    But I shouldn't waste too much time.
    One shouldn't punish people for just
    being plain stupid.

    Like for example this clueless french
    philosopher who had a lot of troubles
    with non-classical logic.

    His brain tried to eliminate non-classical
    logic, it was keen on avoiding non-classical
    logic. A typical species of a human with

    an extremly small brain, again working
    in the wrong place!

    Bye

    P.S.: Maybe this a Poincaré thingy? Poincaré
    was a strong critic of logicism (as championed
    by Frege and Russell) and of Hilbert’s
    formalist program.

    But, he did not formally use or promote systems
    like intuitionistic logic, modal logic, or
    relevance logic. His logical framework remained
    within the bounds of classical logic,

    though he was skeptical of excessive formalism.
    He thought formal systems could miss the creative
    and synthetic nature of mathematical
    invention.





    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Thu Jul 10 21:58:20 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Let's say this semi-deep Prolog argument indexing
    will really work. Then I could roll back some
    uses of ROK's trick in my DCG based code base,

    where I massaged the DCG to have a terminal in
    the first argument, and the DCG was somehow degraded
    into only doing the concatenative stuff, through its

    monad rewriting. This would lead to elegant code.
    But it will not perform on a couple of Prolog systems
    that don't have deep indexing. I suspect the more

    elegant code will not perform on these Prolog systems
    (a sketch of the rewrite follows after the second list below):

    - GNU Prolog
    - Scryer Prolog
    - Trealla Prolog
    -

    I didn't check ECLiPSe Prolog towards deep indexing,
    and I also didn't check Ciao Prolog towards deep
    indexing yet. The elegant code should show good performance on:

    - SWI-Prolog
    - Dogelog Player (if I add semi-deep and skip there)
    - Jekejeke Runtime (if I add semi-deep there, it has already skip)
    -
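
    For reference, here is a minimal sketch of what is meant above by
    ROK's trick, on a made-up toy grammar (the two versions define
    separately named predicates so both can be loaded side by side):

    % 1) Plain DCG: whether the leading terminal ends up in the clause
    %    head or in a body unification depends on the system's DCG
    %    translation, so plain first-argument indexing may see nothing.
    stmt_dcg --> [if],    name.
    stmt_dcg --> [while], name.
    stmt_dcg --> [skip].

    name --> [N], { atom(N) }.

    % 2) Hand-expanded with ROK's trick: the terminal sits in the
    %    head's first argument, so (semi-deep) indexing can pick the
    %    clause directly from the list head.
    stmt_rok([if|S0],    S) :- name(S0, S).
    stmt_rok([while|S0], S) :- name(S0, S).
    stmt_rok([skip|S],   S).

    % ?- phrase(stmt_dcg, [if, x]).   and   ?- stmt_rok([if, x], []).
    % both succeed.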

    Bye

    Mild Shock schrieb:
    Hi,

    Today I had an idea, of some semi-deep Prolog
    argument indexing. Just because choice point
    elimination is so important and has so many

    benefits for performance and the end user
    experience, like also debugging. And because it is
    tied to indexing. An index and the resulting clause

    list, which can be always checked for having reached
    its end. This gives a look-ahead information to the
    Prolog system  which answers this oracle, concering

    clause instantiation:

    ∃X ( p(X) & X =\= a1 & ... & X =\= ak)

    So the idea of semi-deep Prolog argument indexing
    would be a hybrid between Scryer Prolog and
    SWI-Prolog taking the best of both worls.

    It would adopt skip indexes from Scryer Prolog
    and deep indexing of SWI-Prolog, but deep indexing
    through a Key computation trick. The Key computation

    trick is quickly explained.

    Normal Key Computations:

    p(a, ..)      ~~>  Computed Key: a/0 or sometimes a alone
    p(b(x,y), ..) ~~>  Computed Key: b/2 or sometimes b alone
    Etc..

    Semi Deep Key Computation:

    p(a, ..)        ~~>  Computed Key: 'a'
    p([a, ..], ..)  ~~>  Computed Key: '.a'
    Ect..

    Got it?

    The Scryer Prolog skip index is needed because
    in a DCG the interesting arguments are usually
    not the first argument.

    Bye

    Mild Shock schrieb:
    Hi,

    Now one might ask, if we have a Prolog system
    that anyway juggles with choice points, why
    would we need a logical formula for choice points?

    Well there is a funny correctness criteria,
    for example in the top-level, if the top-level
    doesn't prompt the end user anymore in such a scenario:

    ?- p(X).
    X = a1 ;
    X = a2 ;
    ...
    X = ak-1 ;
    X = ak

    So the end user is not prompted because the
    Prolog system founds CP = []. This is licensed
    by this correctness statement for any choice

    point elimination:

    CP = [] => ~∃X ( p(X) & X =\= a1 & ... & X =\= ak)

    Have Fun!

    Bye

    Mild Shock schrieb:
    Hi,

    Now what does a Prolog system do? Well when
    it prompts the end-user it has somewhere
    a list of the current query choice points:

    CPs = [CP1, CP2, .., CPn]

    This is implementation specific, what choice
    points a system creates, also the ISO core standard
    shows a machine in its more procedural explanation,

    that depicts something that has also somewhere
    choice points. Since it is implementation specific
    a Prolog System A and Prolog System B might

    use different choice points:

    System A:
    CPs = [CP1, CP2, .., CPn]

    System B:
    CP's = [CP'1, CP'2, .., CP'n]

    We say a System B could eliminate a choice point CP,
    relative to a System A, if we have:

    System A:
    CP ∈ CPs

    System B:
    CP ∉ CPs

    So System B might have an advantage over System A,
    since it will not backtrack over CP.

    When it comes to answer substitution display,
    it is now very common, that a Prolog system checks
    its own choice points, and when it finds that

    CP = []

    It knows that the query left no choice points,
    either because there were never any, because
    there was no branching in the executed code, or

    because a cut removed branching, or because
    they were eliminated somehow. Like through
    some index analysis.

    Bye

    Mild Shock schrieb:
    Hi,

    This is nothing for Bart Demoen, Physics PhD,
    academic fraud. The ideal choice point can
    be formulated as a logical formula, involving

    an existential quantifier. Assume we have
    a query and already these answers, and the
    Prolog system is prompting the interactive user:

    ?- p(X).
    X = a1 ;
    X = a2 ;
    ...
    X = ak-1 ;
    X = ak

    A mathematical oracle that could indicate whether
    it is even necessary to prompt the user could be:

        ∃X ( p(X) & X =\= a1 & ... & X =\= ak)

    It doesn't match 100% Prolog since Prolog might
    give duplicate answers or non-ground answers,
    but assume for the moment the query q(X),

    produces only distinct and ground results.
    Nice existential FOL formula we have in the above.

    Bye

    Mild Shock schrieb:

    Henri Poincaré believed that mathematical
    and scientific creativity came from a deep,
    unconscious intuition that could not be

    captured by mechanical reasoning or formal
    systems. He famously wrote about how insights
    came not from plodding logic but from sudden

    illuminations — leaps of creative synthesis.

    But now we have generative AI — models like GPT — that:

    - produce poetry, proofs, stories, and code,

    - combine ideas in novel ways,

    - and do so by processing patterns in massive
       datasets, without conscious understanding.

    And that does seem to contradict Poincaré's belief
    that true invention cannot come from automation.

    Mild Shock schrieb:
    Hi,

    But I shouldn't waste too much time.
    One shouldn't punish people for just
    being plain stupid.

    Like for example this clueless french
    philosopher who had a lot of troubles
    with non-classical logic.

    His brain tried to eliminate non-classical
    logic, it was keen on avoiding non-classical
    logic. A typical species of a human with

    an extremly small brain, again working
    in the wrong place!

    Bye

    P.S.: Maybe this a Poincaré thingy? Poincaré
    was a strong critic of logicism (as championed
    by Frege and Russell) and of Hilbert’s
    formalist program.

    But, he did not formally use or promote systems
    like intuitionistic logic, modal logic, or
    relevance logic. His logical framework remained
    within the bounds of classical logic,

    though he was skeptical of excessive formalism.
    He thought formal systems could miss the creative
    and synthetic nature of mathematical
    invention.






    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Thu Jul 10 22:03:10 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Does the new DCG standard [2025] say something about
    (Semi-)Deep indexing? Are these people even aware
    that practically no Prolog system can execute DCGs

    in a satisfactory way? And it's not a problem of
    being declarative or something, or having barred
    (\+)/3 or (!)/2 or something.

    It's just that (Semi-)Deep indexing is rare, and without
    deep indexing DCGs degenerate into linearly scanning
    their clause set. So 20 years elaborating a DCG standard.

    And still only SWI-Prolog has deep indexing and
    can practically tackle DCG parsing?

    That is extremely cringe...

    Bye

    P.S.: Disclaimer: I have to double check Ciao Prolog
    and ECLiPSe Prolog, which could also be candidates for
    deep indexing. Or maybe SICStus Prolog as well.

    Mild Shock schrieb:
    Hi,

    Lets say these semi-deep Prolog argument indexing
    will really work. Then I could rollback some
    uses of ROKs trick in my DCG based code base,

    where I massaged the DCG to have a terminal in
    the first argument, and DCG was somehow degraded
    in only doing  the concatenative stuff, through its

    monad rewriting. This would lead to elegant code.
    But it will not perform on a couple of Prolog systems,
    that don't have deep indexing. I suspect the more

    elegant code will not perform on these Prolog system:

    - GNU Prolog
    - Scryer Prolog
    - Trealla Prolog
    -

    I didn't check ECLiPSe Prolog towards deep indexing,
    and also I didn't check Ciao Prolog towards deep
    indexing yet. It will show good performance:

    - SWI-Prolog
    - Dogelog Player (if I add semi-deep and skip there)
    - Jekejeke Runtime (if I add semi-deep there, it has already skip)
    -

    Bye

    Mild Shock schrieb:
    Hi,

    Today I had an idea, of some semi-deep Prolog
    argument indexing. Just because choice point
    elimination is so important and has so many

    benefits for performance and the end user
    experience, like also debugging. And because it is
    tied to indexing. An index and the resulting clause

    list, which can be always checked for having reached
    its end. This gives a look-ahead information to the
    Prolog system  which answers this oracle, concering

    clause instantiation:

    ∃X ( p(X) & X =\= a1 & ... & X =\= ak)

    So the idea of semi-deep Prolog argument indexing
    would be a hybrid between Scryer Prolog and
    SWI-Prolog taking the best of both worls.

    It would adopt skip indexes from Scryer Prolog
    and deep indexing of SWI-Prolog, but deep indexing
    through a Key computation trick. The Key computation

    trick is quickly explained.

    Normal Key Computations:

    p(a, ..)      ~~>  Computed Key: a/0 or sometimes a alone
    p(b(x,y), ..) ~~>  Computed Key: b/2 or sometimes b alone
    Etc..

    Semi Deep Key Computation:

    p(a, ..)        ~~>  Computed Key: 'a'
    p([a, ..], ..)  ~~>  Computed Key: '.a'
    Ect..

    Got it?

    The Scryer Prolog skip index is needed because
    in a DCG the interesting arguments are usually
    not the first argument.

    Bye

    Mild Shock schrieb:
    Hi,

    Now one might ask, if we have a Prolog system
    that anyway juggles with choice points, why
    would we need a logical formula for choice points?

    Well there is a funny correctness criteria,
    for example in the top-level, if the top-level
    doesn't prompt the end user anymore in such a scenario:

    ?- p(X).
    X = a1 ;
    X = a2 ;
    ...
    X = ak-1 ;
    X = ak

    So the end user is not prompted because the
    Prolog system founds CP = []. This is licensed
    by this correctness statement for any choice

    point elimination:

    CP = [] => ~∃X ( p(X) & X =\= a1 & ... & X =\= ak)

    Have Fun!

    Bye

    Mild Shock schrieb:
    Hi,

    Now what does a Prolog system do? Well when
    it prompts the end-user it has somewhere
    a list of the current query choice points:

    CPs = [CP1, CP2, .., CPn]

    This is implementation specific, what choice
    points a system creates, also the ISO core standard
    shows a machine in its more procedural explanation,

    that depicts something that has also somewhere
    choice points. Since it is implementation specific
    a Prolog System A and Prolog System B might

    use different choice points:

    System A:
    CPs = [CP1, CP2, .., CPn]

    System B:
    CP's = [CP'1, CP'2, .., CP'n]

    We say a System B could eliminate a choice point CP,
    relative to a System A, if we have:

    System A:
    CP ∈ CPs

    System B:
    CP ∉ CPs

    So System B might have an advantage over System A,
    since it will not backtrack over CP.

    When it comes to answer substitution display,
    it is now very common, that a Prolog system checks
    its own choice points, and when it finds that

    CP = []

    It knows that the query left no choice points,
    either because there were never any, because
    there was no branching in the executed code, or

    because a cut removed branching, or because
    they were eliminated somehow. Like through
    some index analysis.

    Bye

    Mild Shock schrieb:
    Hi,

    This is nothing for Bart Demoen, Physics PhD,
    academic fraud. The ideal choice point can
    be formulated as a logical formula, involving

    an existential quantifier. Assume we have
    a query and already these answers, and the
    Prolog system is prompting the interactive user:

    ?- p(X).
    X = a1 ;
    X = a2 ;
    ...
    X = ak-1 ;
    X = ak

    A mathematical oracle that could indicate whether
    it is even necessary to prompt the user could be:

        ∃X ( p(X) & X =\= a1 & ... & X =\= ak)

    It doesn't match 100% Prolog since Prolog might
    give duplicate answers or non-ground answers,
    but assume for the moment the query q(X),

    produces only distinct and ground results.
    Nice existential FOL formula we have in the above.

    Bye

    Mild Shock schrieb:

    Henri Poincaré believed that mathematical
    and scientific creativity came from a deep,
    unconscious intuition that could not be

    captured by mechanical reasoning or formal
    systems. He famously wrote about how insights
    came not from plodding logic but from sudden

    illuminations — leaps of creative synthesis.

    But now we have generative AI — models like GPT — that:

    - produce poetry, proofs, stories, and code,

    - combine ideas in novel ways,

    - and do so by processing patterns in massive
       datasets, without conscious understanding.

    And that does seem to contradict Poincaré's belief
    that true invention cannot come from automation.

    Mild Shock schrieb:
    Hi,

    But I shouldn't waste too much time.
    One shouldn't punish people for just
    being plain stupid.

    Like for example this clueless french
    philosopher who had a lot of troubles
    with non-classical logic.

    His brain tried to eliminate non-classical
    logic, it was keen on avoiding non-classical
    logic. A typical species of a human with

    an extremly small brain, again working
    in the wrong place!

    Bye

    P.S.: Maybe this a Poincaré thingy? Poincaré
    was a strong critic of logicism (as championed
    by Frege and Russell) and of Hilbert’s
    formalist program.

    But, he did not formally use or promote systems
    like intuitionistic logic, modal logic, or
    relevance logic. His logical framework remained
    within the bounds of classical logic,

    though he was skeptical of excessive formalism.
    He thought formal systems could miss the creative
    and synthetic nature of mathematical
    invention.







    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sun Jul 13 15:17:11 2025
    From Newsgroup: comp.lang.prolog


    BTW: I see what you did here:

    doge.pl: $(PROG)
    $(file >$@,false :- \+true. ?- ['$<'],$(MAIN).)

    https://github.com/hurufu/prolog-all/blob/main/rules.mk

    Yes, I do not yet have a -g option.

    Maybe I should change that... The issue is a
    little tricky. Only recently I managed to handle
    some stuff that is tied to the command line
    after the Novacore has been loaded.

    For example the top-level is now entered after
    the Novacore is loaded, and the top-level itself
    loads library(session) etc.. To have a -g option
    there is a dependency on

    library(charsio), to convert a string into a term,
    which is not part of Novacore itself. So maybe I could
    do the same for a -g option, so that I can keep
    the Novacore small, load

    library(charsio) depending on the command line.
    Just yesterday I did something to make the Novacore
    smaller. And handling a -g option this way could
    be a viable way to keep it small.
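
    A sketch of the shape this could take (current_argv/1 and
    read_term_from_chars/2 are stand-ins here, not claimed to be the
    actual Dogelog Player or library(charsio) API; append/3 is the
    usual one from library(lists)):

    % Only load library(charsio) when a -g goal is actually present.
    handle_goal_option :-
        current_argv(Argv),
        (   append(_, ['-g', GoalAtom|_], Argv)
        ->  ensure_loaded(library(charsio)),
            atom_chars(GoalAtom, Chars),
            read_term_from_chars(Chars, Goal),
            call(Goal)
        ;   true
        ).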

    Mild Shock schrieb:
    Hi,

    The most radical approach is Novacore from
    Dogelog Player. It consists of the following
    major incisions in the ISO core standard:

    - We do not forbid chars, like for example
      using lists of the form [a,b,c], we also
      provide char_code/2 predicate bidirectionally.

    - We do not provide and _chars built-in
      predicates also there is nothing _strings. The
      Prolog system is clever enough to not put
      every atom it sees in an atom table. There
      is only a predicate table.

    - Some host languages have garbage collection that
      deduplicates Strings. For example some Java
      versions have an options to do that. But we
      do not have any efforts to deduplicate atoms,
      which are simply plain strings.

    - Some languages have constant pools. For example
      the Java byte code format includes a constant
      pool in every class header. We do not do that
      during transpilation , but we could of course.
      But it begs the question, why only deduplicate
      strings and not other constant expressions as well?

    - We are totally happy that we have only codes,
      there are chances that the host languages use
      tagged pointers to represent them. So they
      are represented similar to the tagged pointers
      in SWI-Prolog which works for small integers.

    - But the tagged pointer argument is moot,
      since atom length=1 entities can be also
      represented as tagged pointers, and some
      programming languages do that. Dogelog Player
      would use such tagged pointers without
      poluting the atom table.

    - What else?

    Bye

    Mild Shock schrieb:

    Technically SWI-Prolog doesn't prefer codes.
    Library `library(pure_input)` might prefer codes.
    But this is again an issue of improving the
    library by some non existent SWI-Prolog community.

    The ISO core standard is silent about a flag
    back_quotes, but has a lot of API requirements
    that support both codes and chars, for example it
    requires atom_codes/2 and atom_chars/2.

    Implementation wise there can be an issue,
    like one might decide to implement the atoms
    of length=1 more efficiently, since with Unicode
    there is now an explosion.

    Not sure whether Trealla Prolog and Scryer
    Prolog thought about this problem, that the
    atom table gets quite large. Whereas codes don't
    eat the atom table. Maybe they forbit predicates

    that have an atom of length=1 head:

    h(X) :-
         write('Hello '), write(X), write('!'), nl.

    Does this still work?

    Mild Shock schrieb:
    Concerning library(portray_text) which is in limbo:

    Libraries are (often) written for either
    and thus the libraries make the choice.

    But who writes these libraries? The SWI Prolog
    community. And who doesn’t improve these libraries,
    instead floods the web with workaround tips?
    The SWI Prolog community.

    Conclusion the SWI-Prolog community has itself
    trapped in an ancient status quo, creating an island.
    Cannot improve its own tooling, is not willing
    to support code from else where that uses chars.

    Same with the missed AI Boom.

    (*) Code from elsewhere is dangerous, People
    might use other Prolog systems than only SWI-Prolog,
    like for exampe Trealla Prolog and Scryer Prolog.

    (**) Keeping the status quo is comfy. No need to
    think in terms of programm code. Its like biology
    teachers versus pathology staff, biology teachers
    do not everyday see opened corpses.


    Mild Shock schrieb:

    Inductive logic programming at 30
    https://arxiv.org/abs/2102.10556

    The paper contains not a single reference to autoencoders!
    Still they show this example:

    Fig. 1 ILP systems struggle with structured examples that
    exhibit observational noise. All three examples clearly
    spell the word "ILP", with some alterations: 3 noisy pixels,
    shifted and elongated letters. If we would be to learn a
    program that simply draws "ILP" in the middle of the picture,
    without noisy pixels and elongated letters, that would
    be a correct program.

    I guess ILP is 30 years behind the AI boom. An early autoencoder
    turned into transformer was already reported here (*):

    SERIAL ORDER, Michael I. Jordan - May 1986
    https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf

    Well ILP might have its merits, maybe we should not ask
    for a marriage of LLM and Prolog, but Autoencoders and ILP.
    But its tricky, I am still trying to decode the da Vinci code of

    things like stacked tensors, are they related to k-literal clauses?
    The paper I referenced is found in this excellent video:

    The Making of ChatGPT (35 Year History)
    https://www.youtube.com/watch?v=OFS90-FX6pg





    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sun Jul 13 15:19:09 2025
    From Newsgroup: comp.lang.prolog


    One problem is that the -g option is not really
    compatible with the scripting mode of a Prolog
    system, since in scripting mode the assumption is
    that after the script, command line options

    appear that are handled by the Prolog application.
    So introducing a -g option can possibly only properly
    be done by also introducing a -l option, so that
    we end up with this here, only it will not

    be SWI-Prolog, but Dogelog Player:

    swipl -l $< -g '$(MAIN),halt'

    This is surely doable, but wasn't a priority so far.
    In SWI-Prolog there is still a choice: you could
    combine -g and -g, or -g and -t. You could do the
    following, documented on the SWI-Prolog website:

    swipl -l $< -g '$(MAIN)' -g halt

    Or this one, also documented on the SWI-Prolog website:

    swipl -l $< -g '$(MAIN)' -t halt

    I don't like this ambiguity, and have to research what
    the -e command line option found in other Prolog systems
    does. Although it's tempting to have the version/0 call
    and the prolog/0 call separately customizable, with

    an '-e' option I could also do:

    swipl -l $< -e alt_version,alt_prolog

    The crucial requirement is only that complex terms are
    accepted on the command line, and that's why I
    need library(charsio).

    Mild Shock schrieb:

    BTW: I see what you did here:

    doge.pl: $(PROG)
        $(file >$@,false :- \+true. ?- ['$<'],$(MAIN).)

    https://github.com/hurufu/prolog-all/blob/main/rules.mk

    Yes, I do not yet have a -g option.

    Maybe should change that... The issue is a
    little tricky. Only recently I managed to handle
    some stuff that is tied to to the command line
    after the Novacore has been loaded.

    For example the top-level is now entered after
    the Novacore is loaded, and the top-level loads
    in itself library(session) etc.. To have a -g option
    there is a dependency on

    library(charsio), to convert a string into a term,
    which is not part of Novacore itself. So maybe I could
    do the same for a -g option, so that I can keep
    the Novacore small, load

    library(charsio) depending on the command line.
    I just did yesterday something to make the Novacore
    smaller. And handling a -g option this way could
    be a viable way to keep it small.

    Mild Shock schrieb:
    Hi,

    The most radical approach is Novacore from
    Dogelog Player. It consists of the following
    major incisions in the ISO core standard:

    - We do not forbid chars, like for example
       using lists of the form [a,b,c], we also
       provide char_code/2 predicate bidirectionally.

    - We do not provide and _chars built-in
       predicates also there is nothing _strings. The
       Prolog system is clever enough to not put
       every atom it sees in an atom table. There
       is only a predicate table.

    - Some host languages have garbage collection that
       deduplicates Strings. For example some Java
       versions have an options to do that. But we
       do not have any efforts to deduplicate atoms,
       which are simply plain strings.

    - Some languages have constant pools. For example
       the Java byte code format includes a constant
       pool in every class header. We do not do that
       during transpilation , but we could of course.
       But it begs the question, why only deduplicate
       strings and not other constant expressions as well?

    - We are totally happy that we have only codes,
       there are chances that the host languages use
       tagged pointers to represent them. So they
       are represented similar to the tagged pointers
       in SWI-Prolog which works for small integers.

    - But the tagged pointer argument is moot,
       since atom length=1 entities can be also
       represented as tagged pointers, and some
       programming languages do that. Dogelog Player
       would use such tagged pointers without
       poluting the atom table.

    - What else?

    Bye

    Mild Shock schrieb:

    Technically SWI-Prolog doesn't prefer codes.
    Library `library(pure_input)` might prefer codes.
    But this is again an issue of improving the
    library by some non existent SWI-Prolog community.

    The ISO core standard is silent about a flag
    back_quotes, but has a lot of API requirements
    that support both codes and chars, for example it
    requires atom_codes/2 and atom_chars/2.

    Implementation wise there can be an issue,
    like one might decide to implement the atoms
    of length=1 more efficiently, since with Unicode
    there is now an explosion.

    Not sure whether Trealla Prolog and Scryer
    Prolog thought about this problem, that the
    atom table gets quite large. Whereas codes don't
    eat the atom table. Maybe they forbit predicates

    that have an atom of length=1 head:

    h(X) :-
         write('Hello '), write(X), write('!'), nl.

    Does this still work?

    Mild Shock schrieb:
    Concerning library(portray_text) which is in limbo:

    Libraries are (often) written for either
    and thus the libraries make the choice.

    But who writes these libraries? The SWI Prolog
    community. And who doesn’t improve these libraries,
    instead floods the web with workaround tips?
    The SWI Prolog community.

    Conclusion the SWI-Prolog community has itself
    trapped in an ancient status quo, creating an island.
    Cannot improve its own tooling, is not willing
    to support code from else where that uses chars.

    Same with the missed AI Boom.

    (*) Code from elsewhere is dangerous, People
    might use other Prolog systems than only SWI-Prolog,
    like for exampe Trealla Prolog and Scryer Prolog.

    (**) Keeping the status quo is comfy. No need to
    think in terms of programm code. Its like biology
    teachers versus pathology staff, biology teachers
    do not everyday see opened corpses.


    Mild Shock schrieb:

    Inductive logic programming at 30
    https://arxiv.org/abs/2102.10556

    The paper contains not a single reference to autoencoders!
    Still they show this example:

    Fig. 1 ILP systems struggle with structured examples that
    exhibit observational noise. All three examples clearly
    spell the word "ILP", with some alterations: 3 noisy pixels,
    shifted and elongated letters. If we would be to learn a
    program that simply draws "ILP" in the middle of the picture,
    without noisy pixels and elongated letters, that would
    be a correct program.

    I guess ILP is 30 years behind the AI boom. An early autoencoder
    turned into transformer was already reported here (*):

    SERIAL ORDER, Michael I. Jordan - May 1986
    https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf

    Well ILP might have its merits, maybe we should not ask
    for a marriage of LLM and Prolog, but Autoencoders and ILP.
    But its tricky, I am still trying to decode the da Vinci code of

    things like stacked tensors, are they related to k-literal clauses?
    The paper I referenced is found in this excellent video:

    The Making of ChatGPT (35 Year History)
    https://www.youtube.com/watch?v=OFS90-FX6pg






    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Mon Jul 14 15:55:49 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Will the world build on American Stacks?
    Or is the American dream over?

    How it started, 1 month ago:

    Nvidia CEO Jensen Huang on AI, Musk and Trump https://www.youtube.com/watch?v=c-XAL2oYelI

    How it's going, now:

    Are you still talking about Jeffrey Epstein? https://www.bbc.com/news/articles/cm2m879neljo

    Bye

    Mild Shock schrieb:

    Inductive logic programming at 30
    https://arxiv.org/abs/2102.10556

    The paper contains not a single reference to autoencoders!
    Still they show this example:

    Fig. 1 ILP systems struggle with structured examples that
    exhibit observational noise. All three examples clearly
    spell the word "ILP", with some alterations: 3 noisy pixels,
    shifted and elongated letters. If we would be to learn a
    program that simply draws "ILP" in the middle of the picture,
    without noisy pixels and elongated letters, that would
    be a correct program.

    I guess ILP is 30 years behind the AI boom. An early autoencoder
    turned into transformer was already reported here (*):

    SERIAL ORDER, Michael I. Jordan - May 1986 https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf

    Well ILP might have its merits, maybe we should not ask
    for a marriage of LLM and Prolog, but Autoencoders and ILP.
    But its tricky, I am still trying to decode the da Vinci code of

    things like stacked tensors, are they related to k-literal clauses?
    The paper I referenced is found in this excellent video:

    The Making of ChatGPT (35 Year History) https://www.youtube.com/watch?v=OFS90-FX6pg


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Tue Jul 15 20:55:47 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    So StackOverflow has already fallen, and
    GitHub will be the next one. StackOverflow
    was eclectic, insinuated a high signal
    quality but repelled its newcomers with
    strict language rules and deletionism.

    StackOverflow is supplanted by ChatGPT, etc..
    They are more tolerant and can deliver
    excellent signals, much better than
    StackOverflow. ChatGPT and other assistants
    flipped the model: No downvotes.

    No “duplicate question” shaming. Conversational,
    exploratory, and often faster than Googling +
    scanning SO threads. Most importantly: they don’t
    punish incomplete knowledge, which is where
    most human learning happens.

    LLMs give a more forgiving learning curve.

    Bye

    Mild Shock schrieb:

    Henri Poincaré believed that mathematical
    and scientific creativity came from a deep,
    unconscious intuition that could not be

    captured by mechanical reasoning or formal
    systems. He famously wrote about how insights
    came not from plodding logic but from sudden

    illuminations — leaps of creative synthesis.

    But now we have generative AI — models like GPT — that:

    - produce poetry, proofs, stories, and code,

    - combine ideas in novel ways,

    - and do so by processing patterns in massive
      datasets, without conscious understanding.

    And that does seem to contradict Poincaré's belief
    that true invention cannot come from automation.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Tue Jul 15 21:15:02 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Why have even a classification of code
    into Python, JavaScript, Java, etc.. ?
    They are just some form. Not content.

    But what is content? The last 30 years
    saw that repositories spurred
    collaboration, because I can go home,
    forget everything, and next morning

    first do a repo refresh, pick a
    ticket and dig into something. So the
    no-code companion repository will do
    the same, like it does now.

    It will not be solely intent based.
    Goals can be highly ambiguous. I do
    not think the future AGI based repos
    will be intent based. They have to grasp

    the full BDI, the belief, desire and intention
    of their users. So with a cognitive twin,
    and some trust, I might even delegate some
    work completely or multiply my ego a 100

    times and delegate. Delegating to a swarm
    was always my dream.

    Bye

    Mild Shock schrieb:
    Hi,

    So StackOverflow has already fallen, and
    GitHub will be the next one. StackOverflow
    was eclectic, insuinated a high Signal
    quality but repelled its newcomers by
    strick language rules and deletism.

    StackOverlow is suplaned by ChatGPT, etc..
    They are more tolerant and can deliver
    excellent Signals, much beter than
    StackOverflow. ChatGPT and other assistants
    flipped the model: No downvotes.

    No “duplicate question” shaming. Conversational,
    exploratory, and often faster than Googling +
    scanning SO threads. Most importantly: they don’t
    punish incomplete knowledge, which is where
    most human learning happens.

    LLMs give a more forgiving learning curve.

    Bye

    Mild Shock schrieb:

    Henri Poincaré believed that mathematical
    and scientific creativity came from a deep,
    unconscious intuition that could not be

    captured by mechanical reasoning or formal
    systems. He famously wrote about how insights
    came not from plodding logic but from sudden

    illuminations — leaps of creative synthesis.

    But now we have generative AI — models like GPT — that:

    - produce poetry, proofs, stories, and code,

    - combine ideas in novel ways,

    - and do so by processing patterns in massive
       datasets, without conscious understanding.

    And that does seem to contradict Poincaré's belief
    that true invention cannot come from automation.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Thu Jul 17 12:00:27 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Spotting Trojan Horses is a nice example
    of creativity that also needs ground truth.
    Gian-Carlo Rota was famous for this truth:

    "The lack of understanding of the simplest
    facts of mathematics among philosophers
    is appalling."

    You can extend it to GitHub acrobats,
    paper mill ballerinas and internet trolls.
    But mathematics itself had a hard time,

    allowing objects other than numbers:

    - Blissard's symbolic method
    He was primarily an applied mathematician and
    school inspector. His symbolic method was a way
    to represent and manipulate sequences algebraically
    using formal symbols.

    - Gian-Carlo Rota (in the 1970s)
    He gave Blissard’s symbolic method
    a rigorous algebraic foundation. Rota
    admired the symbolic reasoning of 19th-century mathematicians
    and often described it as having a “magical” or “mystical”
    elegance — again hinting at interpretive, almost poetic, qualities.

    - Umbral calculus
    Modern formalization of this method, often involving
    linear operators and algebraic structures. "Umbral"
    means “shadow” — the power-like expressions are
    symbolic shadows of actual algebra.
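
    For a one-line taste of that symbolic method, take the
    classical umbral recurrence for the Bernoulli numbers,
    with the index-lowering reading B^k -> B_k:

    (B + 1)^n = B^n   for n >= 2,   i.e.   sum(k=0..n) C(n,k)*B_k = B_n,
    e.g. n = 2:   B_0 + 2*B_1 + B_2 = B_2,   hence B_1 = -1/2.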

    Bye

    Mild Shock schrieb:

    Henri Poincaré believed that mathematical
    and scientific creativity came from a deep,
    unconscious intuition that could not be

    captured by mechanical reasoning or formal
    systems. He famously wrote about how insights
    came not from plodding logic but from sudden

    illuminations — leaps of creative synthesis.

    But now we have generative AI — models like GPT — that:

    - produce poetry, proofs, stories, and code,

    - combine ideas in novel ways,

    - and do so by processing patterns in massive
      datasets, without conscious understanding.

    And that does seem to contradict Poincaré's belief
    that true invention cannot come from automation.

    Mild Shock schrieb:
    Hi,

    But I shouldn't waste too much time.
    One shouldn't punish people for just
    being plain stupid.

    Like for example this clueless french
    philosopher who had a lot of troubles
    with non-classical logic.

    His brain tried to eliminate non-classical
    logic, it was keen on avoiding non-classical
    logic. A typical species of a human with

    an extremly small brain, again working
    in the wrong place!

    Bye

    P.S.: Maybe this a Poincaré thingy? Poincaré
    was a strong critic of logicism (as championed
    by Frege and Russell) and of Hilbert’s
    formalist program.

    But, he did not formally use or promote systems
    like intuitionistic logic, modal logic, or
    relevance logic. His logical framework remained
    within the bounds of classical logic,

    though he was skeptical of excessive formalism.
    He thought formal systems could miss the creative
    and synthetic nature of mathematical
    invention.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Thu Jul 17 12:14:47 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Rota often celebrated symbolic, analogical, and
    conceptual understanding over brute calculation.
    This philosophy has come full circle in modern AI:

    - Large Language Models (LLMs) like GPT-4 don't
    just store facts — they recognize patterns,
    make analogies, and generate new structures
    from old ones.

    - Rota’s work in combinatorics, symbolic logic, and
    operator theory is essentially pattern-based
    manipulation — exactly the kind of reasoning LLMs
    aim to emulate at scale.

    Rota had a clear aesthetic. He valued clean formalisms,
    symbolic beauty, and well-defined structures. Rota wanted
    mathematics to mean something — to be not just correct,
    but intelligible and expressive.

    In contrast, modern AI (especially LLMs like GPT) thrives
    on the messy, including: noisy data, inconsistency,
    uncertainty, contradiction. AI engineers today are mining
    meaning from noise.

    What counts as “structure” is often just the best
    pragmatic/effective description available at that moment.

    Bye

    Mild Shock schrieb:
    Hi,

    Will the world build on American Stacks?
    Or is the american dream over?

    How it started, 1 month go:

    Nvidia CEO Jensen Huang on AI, Musk and Trump https://www.youtube.com/watch?v=c-XAL2oYelI

    How its going, now:

    Are you still talking about Jeffrey Epstein? https://www.bbc.com/news/articles/cm2m879neljo

    Bye

    Mild Shock schrieb:

    Inductive logic programming at 30
    https://arxiv.org/abs/2102.10556

    The paper contains not a single reference to autoencoders!
    Still they show this example:

    Fig. 1 ILP systems struggle with structured examples that
    exhibit observational noise. All three examples clearly
    spell the word "ILP", with some alterations: 3 noisy pixels,
    shifted and elongated letters. If we would be to learn a
    program that simply draws "ILP" in the middle of the picture,
    without noisy pixels and elongated letters, that would
    be a correct program.

    I guess ILP is 30 years behind the AI boom. An early autoencoder
    turned into transformer was already reported here (*):

    SERIAL ORDER, Michael I. Jordan - May 1986
    https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf

    Well ILP might have its merits, maybe we should not ask
    for a marriage of LLM and Prolog, but Autoencoders and ILP.
    But its tricky, I am still trying to decode the da Vinci code of

    things like stacked tensors, are they related to k-literal clauses?
    The paper I referenced is found in this excellent video:

    The Making of ChatGPT (35 Year History)
    https://www.youtube.com/watch?v=OFS90-FX6pg



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Thu Jul 17 14:33:06 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    An example of human intelligence is of course the
    name "Rational Term" for cyclic terms set forth by
    Alain Colmerauer. Since it plays with "Rational Numbers".

    A subset of cyclic terms can indeed represent
    rational numbers, and they give a nice counterexample
    to transitivity:

    ?- problem(X,Y,Z).
    X = _S1-7-9-1, % where
    _S1 = _S1-6-8-0-6-2-8,
    Y = _S2-1-6-1-5-4-6-1, % where
    _S2 = _S2-0-9-2,
    Z = _S3-3-0, % where
    _S3 = _S3-8-1

    The Fuzzer 2 from 2025 does just what I did in 2023,
    expanding rational numbers into rational terms:

    % fuzzy(-Term)
    fuzzy(X) :-
       random_between(1,100,A),
       random_between(1,100,B),
       random_between(1,10,M),
       fuzzy_chunk(M,A,B,C,X,Y),
       random_between(1,10,L),
       fuzzy_chunk(L,C,B,_,Y,Z),
       Z = Y.

    % fuzzy_chunk(+Integer,+Integer,+Integer,-Integer,+Term,-Term)
    fuzzy_chunk(0, A, _, A, X, X) :- !.
    fuzzy_chunk(N, A, B, C, Y-D, X) :-
       M is N-1,
       D is A // B,
       H is 10*(A - B*D),
       fuzzy_chunk(M, H, B, C, Y, X).
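
    The chunk builder can also be driven deterministically.
    A small query, only reusing fuzzy_chunk/6 from above,
    that expands 1/7 = 0.(142857) into a rational term
    (the comments show the bindings it constructs):

    ?- fuzzy_chunk(1, 1, 7, C, X, Y),   % X = Y-0, C = 10 carries the remainder
       fuzzy_chunk(6, C, 7, _, Y, Z),   % Y = Z-7-5-8-2-4-1, the period digits
       Z = Y.                           % tie the knot: Y = Y-7-5-8-2-4-1

    Reading the printed term right to left gives the digits
    in the order they were produced.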

    Bye

    Mild Shock schrieb:
    Hi,

    Rota often celebrated symbolic, analogical, and
    conceptual understanding over brute calculation.
    This philosophy has come full circle in modern AI:

    - Large Language Models (LLMs) like GPT-4 don't
      just store facts — they recognize patterns,
      make analogies, and generate new structures
      from old ones.

    - Rota’s work in combinatorics, symbolic logic, and
      operator theory is essentially pattern-based
      manipulation — exactly the kind of reasoning LLMs
      aim to emulate at scale.

    Rota had a clear aesthetic. He valued clean formalisms,
    symbolic beauty, and well-defined structures. Rota wanted
    mathematics to mean something — to be not just correct,
    but intelligible and expressive.

    In contrast, modern AI (especially LLMs like GPT) thrives
    on the messy, including: Noisy data , Inconsistency ,
    Uncertainty, Contradiction. AI engineers today are mining
    meaning from noise.

    What counts as “structure” is often just the best
    pragmatic/effective description available at that moment.

    Bye

    Mild Shock schrieb:
    Hi,

    Will the world build on American Stacks?
    Or is the american dream over?

    How it started, 1 month go:

    Nvidia CEO Jensen Huang on AI, Musk and Trump
    https://www.youtube.com/watch?v=c-XAL2oYelI

    How its going, now:

    Are you still talking about Jeffrey Epstein?
    https://www.bbc.com/news/articles/cm2m879neljo

    Bye

    Mild Shock schrieb:

    Inductive logic programming at 30
    https://arxiv.org/abs/2102.10556

    The paper contains not a single reference to autoencoders!
    Still they show this example:

    Fig. 1 ILP systems struggle with structured examples that
    exhibit observational noise. All three examples clearly
    spell the word "ILP", with some alterations: 3 noisy pixels,
    shifted and elongated letters. If we would be to learn a
    program that simply draws "ILP" in the middle of the picture,
    without noisy pixels and elongated letters, that would
    be a correct program.

    I guess ILP is 30 years behind the AI boom. An early autoencoder
    turned into transformer was already reported here (*):

    SERIAL ORDER, Michael I. Jordan - May 1986
    https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf

    Well ILP might have its merits, maybe we should not ask
    for a marriage of LLM and Prolog, but Autoencoders and ILP.
    But its tricky, I am still trying to decode the da Vinci code of

    things like stacked tensors, are they related to k-literal clauses?
    The paper I referenced is found in this excellent video:

    The Making of ChatGPT (35 Year History)
    https://www.youtube.com/watch?v=OFS90-FX6pg




    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Thu Jul 17 14:57:04 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Ok, I have to correct myself: "Rational Term" was less
    common, what was more in use was "Rational Trees",
    but they might have also talked about finitely

    represented infinite trees. Rational trees are themselves
    probably an echo of Dmitry Mirimanoff's
    (1861–1945) “extraordinaire” sets.

    Dmitry Semionovitch Mirimanoff (Russian:
    Дми́трий Семёнович Мирима́нов; 13 September 1861, Pereslavl-Zalessky, Russia – 5 January 1945, Geneva,
    Switzerland) was a member of the Moscow Mathematical
    Society in 1897.[1] And later became a doctor of
    mathematical sciences in 1900, in Geneva, and
    taught at the universities of Geneva and Lausanne. https://en.wikipedia.org/wiki/Dmitry_Mirimanoff

    This year we can again celebrate another researcher,
    Peter Aczel R.I.P., who died in 2023 and who likewise
    made some thoughtful deviations from orthodoxy.

    Peter Aczel Memorial Conference on 10th September 2025.
    Logic Colloquium will take place at the University
    of Manchester (UK) from 11th to 12th September 2025 https://sites.google.com/view/blc2025/home

    Have Fun!

    Bye

    Mild Shock schrieb:
    Hi,

    An example of human intelligence, is of course the
    name "Rational Term" for cyclic terms set forth by
    Alain Colmerauer. Since it plays with "Rational Numbers".

    A subset of cyclic terms can indeed represent
    rational numbers, and they give a nice counter
    example to transitivity:

    ?- problem(X,Y,Z).
    X = _S1-7-9-1, % where
        _S1 = _S1-6-8-0-6-2-8,
    Y = _S2-1-6-1-5-4-6-1, % where
        _S2 = _S2-0-9-2,
    Z = _S3-3-0, % where
        _S3 = _S3-8-1

    The Fuzzer 2 from 2025 does just what I did in 2023,
    expanding rational numbers into rational terms:

    % fuzzy(-Term)
    fuzzy(X) :-
       random_between(1,100,A),
       random_between(1,100,B),
       random_between(1,10,M),
       fuzzy_chunk(M,A,B,C,X,Y),
       random_between(1,10,L),
       fuzzy_chunk(L,C,B,_,Y,Z),
       Z = Y.

    % fuzzy_chunk(+Integer,+Integer,+Integer,-Integer,+Term,-Term)
    fuzzy_chunk(0, A, _, A, X, X) :- !.
    fuzzy_chunk(N, A, B, C, Y-D, X) :-
       M is N-1,
       D is A // B,
       H is 10*(A - B*D),
       fuzzy_chunk(M, H, B, C, Y, X).

    Bye

    Mild Shock schrieb:
    Hi,

    Rota often celebrated symbolic, analogical, and
    conceptual understanding over brute calculation.
    This philosophy has come full circle in modern AI:

    - Large Language Models (LLMs) like GPT-4 don't
       just store facts — they recognize patterns,
       make analogies, and generate new structures
       from old ones.

    - Rota’s work in combinatorics, symbolic logic, and
       operator theory is essentially pattern-based
       manipulation — exactly the kind of reasoning LLMs
       aim to emulate at scale.

    Rota had a clear aesthetic. He valued clean formalisms,
    symbolic beauty, and well-defined structures. Rota wanted
    mathematics to mean something — to be not just correct,
    but intelligible and expressive.

    In contrast, modern AI (especially LLMs like GPT) thrives
    on the messy, including: Noisy data , Inconsistency ,
    Uncertainty, Contradiction. AI engineers today are mining
    meaning from noise.

    What counts as “structure” is often just the best
    pragmatic/effective description available at that moment.

    Bye

    Mild Shock schrieb:
    Hi,

    Will the world build on American Stacks?
    Or is the american dream over?

    How it started, 1 month go:

    Nvidia CEO Jensen Huang on AI, Musk and Trump
    https://www.youtube.com/watch?v=c-XAL2oYelI

    How its going, now:

    Are you still talking about Jeffrey Epstein?
    https://www.bbc.com/news/articles/cm2m879neljo

    Bye

    Mild Shock schrieb:

    Inductive logic programming at 30
    https://arxiv.org/abs/2102.10556

    The paper contains not a single reference to autoencoders!
    Still they show this example:

    Fig. 1 ILP systems struggle with structured examples that
    exhibit observational noise. All three examples clearly
    spell the word "ILP", with some alterations: 3 noisy pixels,
    shifted and elongated letters. If we would be to learn a
    program that simply draws "ILP" in the middle of the picture,
    without noisy pixels and elongated letters, that would
    be a correct program.

    I guess ILP is 30 years behind the AI boom. An early autoencoder
    turned into transformer was already reported here (*):

    SERIAL ORDER, Michael I. Jordan - May 1986
    https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf

    Well ILP might have its merits, maybe we should not ask
    for a marriage of LLM and Prolog, but Autoencoders and ILP.
    But its tricky, I am still trying to decode the da Vinci code of

    things like stacked tensors, are they related to k-literal clauses?
    The paper I referenced is found in this excellent video:

    The Making of ChatGPT (35 Year History)
    https://www.youtube.com/watch?v=OFS90-FX6pg





    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Thu Jul 17 23:17:45 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    I am trying to verify my hypothesis
    that Rocq is a dead horse. Dead
    horses can come in different forms,

    for example a project that just
    imitates what was already done by
    its precursor is most likely a

    dead horse. For example MetaRocq,
    verifying a logic framework inside
    some strong enough set theory,

    is not novel. Maybe they get more
    out of doing MetaRocq:

    MetaRocq is a project formalizing Rocq in Rocq https://github.com/MetaRocq/metarocq#papers

    #50 Nicolas Tabareau
    https://www.youtube.com/watch?v=8kwe24gvigk

    Bye

    Mild Shock schrieb:
    Hi,

    Ok I have to correct "Rational Term" was less
    common, what was more in use "Rational Trees",
    but they might have also talked about finitely

    represented infinite tree. Rational trees itself
    probably an echo from Dmitry Mirimanoffs
    (1861–1945) “extraordinaire” sets.

    Dmitry Semionovitch Mirimanoff (Russian:
    Дми́трий Семёнович Мирима́нов; 13 September 1861, Pereslavl-Zalessky, Russia – 5 January 1945, Geneva,
    Switzerland) was a member of the Moscow Mathematical
    Society in 1897.[1] And later became a doctor of
    mathematical sciences in 1900, in Geneva, and
    taught at the universities of Geneva and Lausanne. https://en.wikipedia.org/wiki/Dmitry_Mirimanoff

    This year we can again celebrate another researcher,
    who died in 2023, Peter Aczel R.I.P., who made
    as well some thoughtful deviance from orthodoxy.

    Peter Aczel Memorial Conference on 10th September 2025.
    Logic Colloquium will take place at the University
    of Manchester  (UK) from 11th to 12th September 2025 https://sites.google.com/view/blc2025/home

    Have Fun!

    Bye

    Mild Shock schrieb:
    Hi,

    An example of human intelligence, is of course the
    name "Rational Term" for cyclic terms set forth by
    Alain Colmerauer. Since it plays with "Rational Numbers".

    A subset of cyclic terms can indeed represent
    rational numbers, and they give a nice counter
    example to transitivity:

    ?- problem(X,Y,Z).
    X = _S1-7-9-1, % where
         _S1 = _S1-6-8-0-6-2-8,
    Y = _S2-1-6-1-5-4-6-1, % where
         _S2 = _S2-0-9-2,
    Z = _S3-3-0, % where
         _S3 = _S3-8-1

    The Fuzzer 2 from 2025 does just what I did in 2023,
    expanding rational numbers into rational terms:

    % fuzzy(-Term)
    fuzzy(X) :-
        random_between(1,100,A),
        random_between(1,100,B),
        random_between(1,10,M),
        fuzzy_chunk(M,A,B,C,X,Y),
        random_between(1,10,L),
        fuzzy_chunk(L,C,B,_,Y,Z),
        Z = Y.

    % fuzzy_chunk(+Integer,+Integer,+Integer,-Integer,+Term,-Term)
    fuzzy_chunk(0, A, _, A, X, X) :- !.
    fuzzy_chunk(N, A, B, C, Y-D, X) :-
        M is N-1,
        D is A // B,
        H is 10*(A - B*D),
        fuzzy_chunk(M, H, B, C, Y, X).

    Bye

    Mild Shock schrieb:
    Hi,

    Rota often celebrated symbolic, analogical, and
    conceptual understanding over brute calculation.
    This philosophy has come full circle in modern AI:

    - Large Language Models (LLMs) like GPT-4 don't
       just store facts — they recognize patterns,
       make analogies, and generate new structures
       from old ones.

    - Rota’s work in combinatorics, symbolic logic, and
       operator theory is essentially pattern-based
       manipulation — exactly the kind of reasoning LLMs
       aim to emulate at scale.

    Rota had a clear aesthetic. He valued clean formalisms,
    symbolic beauty, and well-defined structures. Rota wanted
    mathematics to mean something — to be not just correct,
    but intelligible and expressive.

    In contrast, modern AI (especially LLMs like GPT) thrives
    on the messy, including: Noisy data , Inconsistency ,
    Uncertainty, Contradiction. AI engineers today are mining
    meaning from noise.

    What counts as “structure” is often just the best
    pragmatic/effective description available at that moment.

    Bye

    Mild Shock schrieb:
    Hi,

    Will the world build on American Stacks?
    Or is the american dream over?

    How it started, 1 month go:

    Nvidia CEO Jensen Huang on AI, Musk and Trump
    https://www.youtube.com/watch?v=c-XAL2oYelI

    How its going, now:

    Are you still talking about Jeffrey Epstein?
    https://www.bbc.com/news/articles/cm2m879neljo

    Bye

    Mild Shock schrieb:

    Inductive logic programming at 30
    https://arxiv.org/abs/2102.10556

    The paper contains not a single reference to autoencoders!
    Still they show this example:

    Fig. 1 ILP systems struggle with structured examples that
    exhibit observational noise. All three examples clearly
    spell the word "ILP", with some alterations: 3 noisy pixels,
    shifted and elongated letters. If we would be to learn a
    program that simply draws "ILP" in the middle of the picture,
    without noisy pixels and elongated letters, that would
    be a correct program.

    I guess ILP is 30 years behind the AI boom. An early autoencoder
    turned into transformer was already reported here (*):

    SERIAL ORDER, Michael I. Jordan - May 1986
    https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf

    Well ILP might have its merits, maybe we should not ask
    for a marriage of LLM and Prolog, but Autoencoders and ILP.
    But its tricky, I am still trying to decode the da Vinci code of

    things like stacked tensors, are they related to k-literal clauses?
    The paper I referenced is found in this excellent video:

    The Making of ChatGPT (35 Year History)
    https://www.youtube.com/watch?v=OFS90-FX6pg






    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Thu Jul 17 23:36:06 2025
    From Newsgroup: comp.lang.prolog

    So, we had the formal revolution:

    Frege (1879): Predicate logic formalized
    Peano (1889): Axioms of arithmetic
    Hilbert (1899–1930): Formal axiomatic method, Hilbert's Program
    Zermelo (1908): Axiomatic set theory (ZF, later ZFC)
    Gödel (1931): Incompleteness theorems end Hilbert’s
    dream of a complete formal system

    Then the mechanized formal revolution:

    Automath (1967): The first real proof assistant,
    laying the conceptual groundwork.
    Mizar (1970s–1980s): Building a readable,
    structured formal language and large libraries.
    Isabelle (1980s): Developing a generic proof framework, making
    formalization more flexible.
    Coq (early 1990s): Fully fledged dependent type theory and tactic
    language emerge.
    HOL family (1980s–2000s): Focus on classical higher-order logic with applications in hardware/software verification.
    Lean + mathlib (late 2010s): Community-driven scaling,
    large libraries, easier onboarding.

    So we practically landed on the moon.

    Next steps:
    - Mars Orbit (Now–2030), AI-augmented theorem proving.
    - Mars Surface — AGI-Based Proving (2030s?)
    - Mars Camp - The Hub of Next-Gen Mathematics and Reasoning
    quantum computers, distributed supercomputers, and even alien
    intelligences (hypothetically)

    Mild Shock schrieb:
    Hi,

    I am trying to verify my hypothesis
    that Rocq is a dead horse. Dead
    horses can come in different forms,

    for example a project that just
    imitates what was already done by
    the precursor, is most likely a

    Dead horse. For example MetaRocq,
    verifying a logic framework inside
    some strong enough set theory,

    is not novell. Maybe they get more
    out of doing MetaRocq:

    MetaRocq is a project formalizing Rocq in Rocq https://github.com/MetaRocq/metarocq#papers

    #50 Nicolas Tabareau
    https://www.youtube.com/watch?v=8kwe24gvigk

    Bye

    Mild Shock schrieb:
    Hi,

    Ok I have to correct "Rational Term" was less
    common, what was more in use "Rational Trees",
    but they might have also talked about finitely

    represented infinite tree. Rational trees itself
    probably an echo from Dmitry Mirimanoffs
    (1861–1945) “extraordinaire” sets.

    Dmitry Semionovitch Mirimanoff (Russian:
    Дми́трий Семёнович Мирима́нов; 13 September 1861, Pereslavl-Zalessky, Russia – 5 January 1945, Geneva,
    Switzerland) was a member of the Moscow Mathematical
    Society in 1897.[1] And later became a doctor of
    mathematical sciences in 1900, in Geneva, and
    taught at the universities of Geneva and Lausanne.
    https://en.wikipedia.org/wiki/Dmitry_Mirimanoff

    This year we can again celebrate another researcher,
    who died in 2023, Peter Aczel R.I.P., who made
    as well some thoughtful deviance from orthodoxy.

    Peter Aczel Memorial Conference on 10th September 2025.
    Logic Colloquium will take place at the University
    of Manchester  (UK) from 11th to 12th September 2025
    https://sites.google.com/view/blc2025/home

    Have Fun!

    Bye

    Mild Shock schrieb:
    Hi,

    An example of human intelligence, is of course the
    name "Rational Term" for cyclic terms set forth by
    Alain Colmerauer. Since it plays with "Rational Numbers".

    A subset of cyclic terms can indeed represent
    rational numbers, and they give a nice counter
    example to transitivity:

    ?- problem(X,Y,Z).
    X = _S1-7-9-1, % where
         _S1 = _S1-6-8-0-6-2-8,
    Y = _S2-1-6-1-5-4-6-1, % where
         _S2 = _S2-0-9-2,
    Z = _S3-3-0, % where
         _S3 = _S3-8-1

    The Fuzzer 2 from 2025 does just what I did in 2023,
    expanding rational numbers into rational terms:

    % fuzzy(-Term)
    fuzzy(X) :-
        random_between(1,100,A),
        random_between(1,100,B),
        random_between(1,10,M),
        fuzzy_chunk(M,A,B,C,X,Y),
        random_between(1,10,L),
        fuzzy_chunk(L,C,B,_,Y,Z),
        Z = Y.

    % fuzzy_chunk(+Integer,+Integer,+Integer,-Integer,+Term,-Term)
    fuzzy_chunk(0, A, _, A, X, X) :- !.
    fuzzy_chunk(N, A, B, C, Y-D, X) :-
        M is N-1,
        D is A // B,
        H is 10*(A - B*D),
        fuzzy_chunk(M, H, B, C, Y, X).

    Bye

    Mild Shock schrieb:
    Hi,

    Rota often celebrated symbolic, analogical, and
    conceptual understanding over brute calculation.
    This philosophy has come full circle in modern AI:

    - Large Language Models (LLMs) like GPT-4 don't
       just store facts — they recognize patterns,
       make analogies, and generate new structures
       from old ones.

    - Rota’s work in combinatorics, symbolic logic, and
       operator theory is essentially pattern-based
       manipulation — exactly the kind of reasoning LLMs
       aim to emulate at scale.

    Rota had a clear aesthetic. He valued clean formalisms,
    symbolic beauty, and well-defined structures. Rota wanted
    mathematics to mean something — to be not just correct,
    but intelligible and expressive.

    In contrast, modern AI (especially LLMs like GPT) thrives
    on the messy, including: Noisy data , Inconsistency ,
    Uncertainty, Contradiction. AI engineers today are mining
    meaning from noise.

    What counts as “structure” is often just the best
    pragmatic/effective description available at that moment.

    Bye

    Mild Shock schrieb:
    Hi,

    Will the world build on American Stacks?
    Or is the american dream over?

    How it started, 1 month go:

    Nvidia CEO Jensen Huang on AI, Musk and Trump
    https://www.youtube.com/watch?v=c-XAL2oYelI

    How its going, now:

    Are you still talking about Jeffrey Epstein?
    https://www.bbc.com/news/articles/cm2m879neljo

    Bye

    Mild Shock schrieb:

    Inductive logic programming at 30
    https://arxiv.org/abs/2102.10556

    The paper contains not a single reference to autoencoders!
    Still they show this example:

    Fig. 1 ILP systems struggle with structured examples that
    exhibit observational noise. All three examples clearly
    spell the word "ILP", with some alterations: 3 noisy pixels,
    shifted and elongated letters. If we would be to learn a
    program that simply draws "ILP" in the middle of the picture,
    without noisy pixels and elongated letters, that would
    be a correct program.

    I guess ILP is 30 years behind the AI boom. An early autoencoder
    turned into transformer was already reported here (*):

    SERIAL ORDER, Michael I. Jordan - May 1986
    https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf


    Well ILP might have its merits, maybe we should not ask
    for a marriage of LLM and Prolog, but Autoencoders and ILP.
    But its tricky, I am still trying to decode the da Vinci code of

    things like stacked tensors, are they related to k-literal clauses?
    The paper I referenced is found in this excellent video:

    The Making of ChatGPT (35 Year History)
    https://www.youtube.com/watch?v=OFS90-FX6pg







    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sun Jul 20 13:39:42 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    I didn't expect the topic to be that rich.

    The big challenge in a top-level display is
    the "interleaving" of equations and their
    factorization, as well as the "inlining" of
    equations with existing variable names.

    Trying hard to do exactly that, I was looking
    at these test cases:

    ?- [user].
    p(X,Y) :- X = f(f(f(X))), Y = f(f(Y)).
    p(X,Y) :- X = a(f(X,Y)), Y = b(g(X,Y)).
    p(X,Y) :- X = s(s(X,Y),_), Y = s(Y,X).

    Using cycle detection via (==)/2 I get:

    /* Dogelog Player 1.3.5 */
    ?- p(X,Y).
    X = f(X), Y = X;
    X = a(f(X, Y)), Y = b(g(X, Y));
    X = s(s(X, Y), _), Y = s(Y, X).

    Using cycle detection via same_term/2 I get:

    /* Dogelog Player 1.3.5 */
    ?- p(X,Y).
    X = f(f(f(X))), Y = f(f(Y));
    X = a(f(X, Y)), Y = b(g(X, Y));
    X = s(s(X, Y), _), Y = s(Y, X).

    Cool!
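
    As a side note, the difference between the two modes can
    be sketched with two visited-checks (a minimal illustration,
    not the actual Dogelog traversal; same_term/2 as in SWI-Prolog):

    % seen_eq(+Term, +Ancestors): Term is == to some ancestor,
    % i.e. structurally equal as a rational tree.
    seen_eq(T, [A|_])  :- T == A, !.
    seen_eq(T, [_|As]) :- seen_eq(T, As).

    % seen_same(+Term, +Ancestors): Term is the very same cell
    % as some ancestor, so only a real revisit counts.
    seen_same(T, [A|_])  :- same_term(T, A), !.
    seen_same(T, [_|As]) :- seen_same(T, As).

    With X = f(f(f(X))) the argument one level down is already
    == to X, since both denote the infinite tree f(f(f(...))),
    so the (==)/2 check folds the answer to X = f(X). The
    same_term/2 check only fires when the walk returns to the
    original cell after three steps, which keeps the spelling
    X = f(f(f(X))).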

    Bye

    Mild Shock schrieb:
    Hi,

    The most radical approach is Novacore from
    Dogelog Player. It consists of the following
    major incisions in the ISO core standard:

    - We do not forbid chars, like for example
      using lists of the form [a,b,c], we also
      provide char_code/2 predicate bidirectionally.

    - We do not provide and _chars built-in
      predicates also there is nothing _strings. The
      Prolog system is clever enough to not put
      every atom it sees in an atom table. There
      is only a predicate table.

    - Some host languages have garbage collection that
      deduplicates Strings. For example some Java
      versions have an options to do that. But we
      do not have any efforts to deduplicate atoms,
      which are simply plain strings.

    - Some languages have constant pools. For example
      the Java byte code format includes a constant
      pool in every class header. We do not do that
      during transpilation , but we could of course.
      But it begs the question, why only deduplicate
      strings and not other constant expressions as well?

    - We are totally happy that we have only codes,
      there are chances that the host languages use
      tagged pointers to represent them. So they
      are represented similar to the tagged pointers
      in SWI-Prolog which works for small integers.

    - But the tagged pointer argument is moot,
      since atom length=1 entities can be also
      represented as tagged pointers, and some
      programming languages do that. Dogelog Player
      would use such tagged pointers without
      poluting the atom table.

    - What else?

    Bye

    Mild Shock schrieb:

    Technically SWI-Prolog doesn't prefer codes.
    Library `library(pure_input)` might prefer codes.
    But this is again an issue of improving the
    library by some non existent SWI-Prolog community.

    The ISO core standard is silent about a flag
    back_quotes, but has a lot of API requirements
    that support both codes and chars, for example it
    requires atom_codes/2 and atom_chars/2.

    Implementation wise there can be an issue,
    like one might decide to implement the atoms
    of length=1 more efficiently, since with Unicode
    there is now an explosion.

    Not sure whether Trealla Prolog and Scryer
    Prolog thought about this problem, that the
    atom table gets quite large. Whereas codes don't
    eat the atom table. Maybe they forbit predicates

    that have an atom of length=1 head:

    h(X) :-
         write('Hello '), write(X), write('!'), nl.

    Does this still work?

    Mild Shock schrieb:
    Concerning library(portray_text) which is in limbo:

    Libraries are (often) written for either
    and thus the libraries make the choice.

    But who writes these libraries? The SWI Prolog
    community. And who doesn’t improve these libraries,
    instead floods the web with workaround tips?
    The SWI Prolog community.

    Conclusion the SWI-Prolog community has itself
    trapped in an ancient status quo, creating an island.
    Cannot improve its own tooling, is not willing
    to support code from else where that uses chars.

    Same with the missed AI Boom.

    (*) Code from elsewhere is dangerous, People
    might use other Prolog systems than only SWI-Prolog,
    like for exampe Trealla Prolog and Scryer Prolog.

    (**) Keeping the status quo is comfy. No need to
    think in terms of programm code. Its like biology
    teachers versus pathology staff, biology teachers
    do not everyday see opened corpses.


    Mild Shock schrieb:

    Inductive logic programming at 30
    https://arxiv.org/abs/2102.10556

    The paper contains not a single reference to autoencoders!
    Still they show this example:

    Fig. 1 ILP systems struggle with structured examples that
    exhibit observational noise. All three examples clearly
    spell the word "ILP", with some alterations: 3 noisy pixels,
    shifted and elongated letters. If we would be to learn a
    program that simply draws "ILP" in the middle of the picture,
    without noisy pixels and elongated letters, that would
    be a correct program.

    I guess ILP is 30 years behind the AI boom. An early autoencoder
    turned into transformer was already reported here (*):

    SERIAL ORDER, Michael I. Jordan - May 1986
    https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf

    Well ILP might have its merits, maybe we should not ask
    for a marriage of LLM and Prolog, but Autoencoders and ILP.
    But its tricky, I am still trying to decode the da Vinci code of

    things like stacked tensors, are they related to k-literal clauses?
    The paper I referenced is found in this excellent video:

    The Making of ChatGPT (35 Year History)
    https://www.youtube.com/watch?v=OFS90-FX6pg





    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sun Jul 20 13:43:45 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    SWI-Prolog has a small glitch in the clause
    compilation, which can be compensated for by using:

    ?- [user].
    p(X,Y) :- call((X = f(f(f(X))), Y = f(f(Y)))).
    p(X,Y) :- call((X = a(f(X,Y)), Y = b(g(X,Y)))).
    p(X,Y) :- call((X = s(s(X,Y),_), Y = s(Y,X))).

    I then get these results:

    /* SWI-Prolog 9.3.25 */
    ?- p(X,Y).
    X = Y, Y = f(f(Y)) ; /* ordering dependent */
    X = _S1, % where
    _S1 = a(f(_S1, _S2)),
    _S2 = b(g(_S1, _S2)),
    Y = b(g(_S1, _S2)) ; /* could use _S2 */
    X = _S1, % where
    _S1 = s(s(_S1, _S2), _),
    _S2 = s(_S2, _S1),
    Y = s(_S2, _S1). /* could use _S2 */

    And for Ciao Prolog I get these results:

    /* Ciao Prolog 1.25.0 */
    ?- p(X,Y).
    X = f(X),
    Y = f(X) ? ; /* too big! */
    X = a(f(X,b(g(X,Y)))), /* too big! */
    Y = b(g(a(f(X,Y)),Y)) ? ; /* too big! */
    X = s(s(X,s(Y,X)),_A), /* too big! */
    Y = s(Y,s(s(X,Y),_A)) ? ; /* too big! */

    Bye

    Mild Shock schrieb:
    Hi,

    I didn't expect the topic to be that rich.

    The big challenge in a top-level display is
    the "interleaving" of equations and their
    factorization as well as the "inlining" of
    equations with existing variable names,

    trying hard to do exactly that, was looking
    at these test cases:

    ?- [user].
    p(X,Y) :- X = f(f(f(X))), Y = f(f(Y)).
    p(X,Y) :- X = a(f(X,Y)), Y = b(g(X,Y)).
    p(X,Y) :- X = s(s(X,Y),_), Y = s(Y,X).

    Using cycle detection via (==)/2 I get:

    /* Dogelog Player 1.3.5 */
    ?- p(X,Y).
    X = f(X), Y = X;
    X = a(f(X, Y)), Y = b(g(X, Y));
    X = s(s(X, Y), _), Y = s(Y, X).

    Using cycle detection via same_term/2 I get:

    /* Dogelog Player 1.3.5 */
    ?- p(X,Y).
    X = f(f(f(X))), Y = f(f(Y));
    X = a(f(X, Y)), Y = b(g(X, Y));
    X = s(s(X, Y), _), Y = s(Y, X).

    Cool!

    Bye

    Mild Shock schrieb:
    Hi,

    The most radical approach is Novacore from
    Dogelog Player. It consists of the following
    major incisions in the ISO core standard:

    - We do not forbid chars, like for example
       using lists of the form [a,b,c], we also
       provide char_code/2 predicate bidirectionally.

    - We do not provide and _chars built-in
       predicates also there is nothing _strings. The
       Prolog system is clever enough to not put
       every atom it sees in an atom table. There
       is only a predicate table.

    - Some host languages have garbage collection that
       deduplicates Strings. For example some Java
       versions have an options to do that. But we
       do not have any efforts to deduplicate atoms,
       which are simply plain strings.

    - Some languages have constant pools. For example
       the Java byte code format includes a constant
       pool in every class header. We do not do that
       during transpilation , but we could of course.
       But it begs the question, why only deduplicate
       strings and not other constant expressions as well?

    - We are totally happy that we have only codes,
       there are chances that the host languages use
       tagged pointers to represent them. So they
       are represented similar to the tagged pointers
       in SWI-Prolog which works for small integers.

    - But the tagged pointer argument is moot,
       since atom length=1 entities can be also
       represented as tagged pointers, and some
       programming languages do that. Dogelog Player
       would use such tagged pointers without
       poluting the atom table.

    - What else?

    Bye

    Mild Shock schrieb:

    Technically SWI-Prolog doesn't prefer codes.
    Library `library(pure_input)` might prefer codes.
    But this is again an issue of improving the
    library by some non existent SWI-Prolog community.

    The ISO core standard is silent about a flag
    back_quotes, but has a lot of API requirements
    that support both codes and chars, for example it
    requires atom_codes/2 and atom_chars/2.

    Implementation wise there can be an issue,
    like one might decide to implement the atoms
    of length=1 more efficiently, since with Unicode
    there is now an explosion.

    Not sure whether Trealla Prolog and Scryer
    Prolog thought about this problem, that the
    atom table gets quite large. Whereas codes don't
    eat the atom table. Maybe they forbit predicates

    that have an atom of length=1 head:

    h(X) :-
         write('Hello '), write(X), write('!'), nl.

    Does this still work?

    Mild Shock schrieb:
    Concerning library(portray_text) which is in limbo:

    Libraries are (often) written for either
    and thus the libraries make the choice.

    But who writes these libraries? The SWI Prolog
    community. And who doesn’t improve these libraries,
    instead floods the web with workaround tips?
    The SWI Prolog community.

    Conclusion the SWI-Prolog community has itself
    trapped in an ancient status quo, creating an island.
    Cannot improve its own tooling, is not willing
    to support code from else where that uses chars.

    Same with the missed AI Boom.

    (*) Code from elsewhere is dangerous, People
    might use other Prolog systems than only SWI-Prolog,
    like for exampe Trealla Prolog and Scryer Prolog.

    (**) Keeping the status quo is comfy. No need to
    think in terms of programm code. Its like biology
    teachers versus pathology staff, biology teachers
    do not everyday see opened corpses.


    Mild Shock schrieb:

    Inductive logic programming at 30
    https://arxiv.org/abs/2102.10556

    The paper contains not a single reference to autoencoders!
    Still they show this example:

    Fig. 1 ILP systems struggle with structured examples that
    exhibit observational noise. All three examples clearly
    spell the word "ILP", with some alterations: 3 noisy pixels,
    shifted and elongated letters. If we would be to learn a
    program that simply draws "ILP" in the middle of the picture,
    without noisy pixels and elongated letters, that would
    be a correct program.

    I guess ILP is 30 years behind the AI boom. An early autoencoder
    turned into transformer was already reported here (*):

    SERIAL ORDER, Michael I. Jordan - May 1986
    https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf >>>>>

    Well ILP might have its merits, maybe we should not ask
    for a marriage of LLM and Prolog, but Autoencoders and ILP.
    But its tricky, I am still trying to decode the da Vinci code of

    things like stacked tensors, are they related to k-literal clauses?
    The paper I referenced is found in this excellent video:

    The Making of ChatGPT (35 Year History)
    https://www.youtube.com/watch?v=OFS90-FX6pg






    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Wed Jul 23 13:57:20 2025
    From Newsgroup: comp.lang.prolog

    Looks like sorting of rational trees
    needs an existential type, if we go full “logical”.
    If I use my old code from 2023 which computes
    a finest (*), i.e. non-monster, bisimulation

    pre-quotient (**) in prefix order:

    factorize(T, _, T) --> {var(T)}, !.
    factorize(T, C, V) --> {compound(T), member(S-V, C), S == T}, !.
    factorize(T, C, V) --> {compound(T)}, !,
        [V = S],
        {T =.. [F|L]},
        factorize_list(L, [T-V|C], R),
        {S =.. [F|R]}.
    factorize(T, _, T) --> [].
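
    (The clauses call factorize_list//3, which is not shown above;
    the obvious companion definition, my guess rather than the
    original helper, just maps factorize//3 over the argument list:)

    factorize_list([], _, []) --> [].
    factorize_list([T|L], C, [S|R]) -->
        factorize(T, C, S),
        factorize_list(L, C, R).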

    I see that it always generates new
    intermediate variables:

    ?- X = f(f(X)), factorize(X, [], T, L, []), write(L-T), nl.
    [_8066=f(_8066)]-_8066

    ?- X = f(f(X)), factorize(X, [], T, L, []), write(L-T), nl.
    [_10984=f(_10984)]-_10984

    What would be swell is if it generated an
    existential quantifier, something like T^([T = f(T)]-T)
    in the above case. Then, using alpha conversion,
    different factorization runs would be equal

    when they only differ in the introduced
    intermediate variables. But Prolog has no alpha
    conversion; only λ-Prolog has such things.
    So what can we do, how can we produce a

    representation that can be used for sorting?

    (*) Why finest and not coarsest? Because it uses
    non-monster instructions and not monster
    instructions.

    (**) Why only pre-quotient? Because a
    XXX_with_stack algorithm does not fully
    deduplicate the equations; one would
    probably need a XXX_with_memo algorithm.

    Mild Shock schrieb:

    Inductive logic programming at 30
    https://arxiv.org/abs/2102.10556

    The paper contains not a single reference to autoencoders!
    Still they show this example:

    Fig. 1 ILP systems struggle with structured examples that
    exhibit observational noise. All three examples clearly
    spell the word "ILP", with some alterations: 3 noisy pixels,
    shifted and elongated letters. If we would be to learn a
    program that simply draws "ILP" in the middle of the picture,
    without noisy pixels and elongated letters, that would
    be a correct program.

    I guess ILP is 30 years behind the AI boom. An early autoencoder
    turned into transformer was already reported here (*):

    SERIAL ORDER, Michael I. Jordan - May 1986 https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf

    Well ILP might have its merits, maybe we should not ask
    for a marriage of LLM and Prolog, but Autoencoders and ILP.
    But its tricky, I am still trying to decode the da Vinci code of

    things like stacked tensors, are they related to k-literal clauses?
    The paper I referenced is found in this excellent video:

    The Making of ChatGPT (35 Year History) https://www.youtube.com/watch?v=OFS90-FX6pg


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Wed Jul 23 14:03:42 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    So do we see a new wave of interest in bisimulation,
    especially in computing existential types for all
    kinds of things? It seems so, quite a fascinating find:

    BQ-NCO: Bisimulation Quotienting for Efficient
    Neural Combinatorial Optimization
    https://arxiv.org/abs/2301.03313

    Has none other than Jean-Marc Andreoli on the
    author list. Possibly the same guy from the earlier
    work on Focusing and Linear Logic, who was associated with

    ECRC Munich in the 1990s, but is now working for naverlabs.com.

    Bye

    Mild Shock schrieb:
    Looks like sorting of rational trees
    needs an existential type, if we go full “logical”.
    If I use my old code from 2023 which computes
    a finest (*), i.e. non-monster, bisimulation

    pre-quotient (**) in prefix order:

    factorize(T, _, T) --> {var(T)}, !.
    factorize(T, C, V) --> {compound(T), member(S-V, C), S == T}, !.
    factorize(T, C, V) --> {compound(T)}, !,
       [V = S],
       {T =.. [F|L]},
       factorize_list(L, [T-V|C], R),
       {S =.. [F|R]}.
    factorize(T, _, T) --> [].

    I see that it always generates new
    intermediate variables:

    ?- X = f(f(X)), factorize(X, [], T, L, []), write(L-T), nl. [_8066=f(_8066)]-_8066

    ?- X = f(f(X)), factorize(X, [], T, L, []), write(L-T), nl. [_10984=f(_10984)]-_10984

    What would be swell if it would generate an
    existential quantifier, something like T^([T = f(T)]-T)
    in the above case. Then using alpha conversion
    different factorization runs would be equal,

    when they only differ by the introduced
    intermediate variables. But Prolog has no alpha
    conversion, only λ-Prolog has such things.
    So what can we do, how can we produce a

    representation, that can be used for sorting?

    (*) Why finest and not corsets? Because it uses
    non-monster instructions and not monster
    instructions

    (**) Why only pre-quotient? Because a
    XXX_with_stack algorithm does not fully
    deduplicate the equations, would
    probably need a XXX_with_memo algorithm.

    Mild Shock schrieb:

    Inductive logic programming at 30
    https://arxiv.org/abs/2102.10556

    The paper contains not a single reference to autoencoders!
    Still they show this example:

    Fig. 1 ILP systems struggle with structured examples that
    exhibit observational noise. All three examples clearly
    spell the word "ILP", with some alterations: 3 noisy pixels,
    shifted and elongated letters. If we would be to learn a
    program that simply draws "ILP" in the middle of the picture,
    without noisy pixels and elongated letters, that would
    be a correct program.

    I guess ILP is 30 years behind the AI boom. An early autoencoder
    turned into transformer was already reported here (*):

    SERIAL ORDER, Michael I. Jordan - May 1986
    https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf

    Well ILP might have its merits, maybe we should not ask
    for a marriage of LLM and Prolog, but Autoencoders and ILP.
    But its tricky, I am still trying to decode the da Vinci code of

    things like stacked tensors, are they related to k-literal clauses?
    The paper I referenced is found in this excellent video:

    The Making of ChatGPT (35 Year History)
    https://www.youtube.com/watch?v=OFS90-FX6pg



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Wed Jul 23 15:18:31 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    To do bi-simulation, you don't need to wear
    this t-shirt, bi-simulation doesn't refer to
    any sexual orientation, although you could give

    it a game theoretic touch with Samson and Delilah:

    Why Are You Geh T-Shirt https://www.amazon.co.uk/Why-Are-You-Gay-T-Shirt/dp/B0DJMZFQN8

    “bi-simulation equivalent” is sometimes simply
    called “bi-similar”. There is a nice paper by Manuel
    Carro which gives a larger bisimilarity example:

    An Application of Rational Trees in a Logic
    Programming Interpreter for a Procedural Language
    Manuel Carro - 2004
    https://arxiv.org/abs/cs/0403028v1

    He makes the case for “goto” in a programming
    language, where labels are not needed; simply
    rational tree sharing and looping can be used.

    The case from Figure 5, threading the code into
    a rational tree, uses in its result the simpler
    bisimilarity and doesn’t need that much of a more
    elaborate bisimulation later.

    You can use dicts (not SWI-Prolog dicts, but
    some table operations) as a lookup to create the
    rational tree. But I guess you can also use dicts
    (again table operations) for the reverse: find

    some factorization of a rational tree and
    recreate the labels and jumps.
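
    To make the threading idea concrete, here is a minimal sketch of
    my own (not Carro’s actual Figure 5; thread/3 and the instruction
    names are made up): a tiny instruction list with a backward goto
    is turned into a rational tree, so the label/jump pair disappears
    into a cycle of the term. The caller supplies a table with one
    fresh variable per label.

    % thread(+Instructions, +LabelTable, -Code)
    thread([], _, stop).
    thread([label(L)|Is], Tab, Code) :-
        member(L-Code, Tab),
        thread(Is, Tab, Code).
    thread([print(X)|Is], Tab, seq(print(X), Code)) :-
        thread(Is, Tab, Code).
    thread([goto(L)|_], Tab, Code) :-
        member(L-Code, Tab).

    ?- thread([label(l), print(x), goto(l)], [l-_], Code).
    Code = seq(print(x), Code).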

    Bye

    Mild Shock schrieb:
    Hi,

    So do we see a new wave in interst in bismulation,
    especially in computing existential types for all
    kind of things? It seems so, quite facinating find:

    BQ-NCO: Bisimulation Quotienting for Efficient
    Neural Combinatorial Optimization
    https://arxiv.org/abs/2301.03313

    Has nobody less than Jean-Marc Andreoli on the
    author list. Possibly the same guy from earlier
    Focusing and Linear Logic, who was associated with

    ECRC Munich in 1990’s, but now working for naverlabs.com.

    Bye

    Mild Shock schrieb:
    Looks like sorting of rational trees
    needs an existential type, if we go full “logical”.
    If I use my old code from 2023 which computes
    a finest (*), i.e. non-monster, bisimulation

    pre-quotient (**) in prefix order:

    factorize(T, _, T) --> {var(T)}, !.
    factorize(T, C, V) --> {compound(T), member(S-V, C), S == T}, !.
    factorize(T, C, V) --> {compound(T)}, !,
        [V = S],
        {T =.. [F|L]},
        factorize_list(L, [T-V|C], R),
        {S =.. [F|R]}.
    factorize(T, _, T) --> [].

    I see that it always generates new
    intermediate variables:

    ?- X = f(f(X)), factorize(X, [], T, L, []), write(L-T), nl.
    [_8066=f(_8066)]-_8066

    ?- X = f(f(X)), factorize(X, [], T, L, []), write(L-T), nl.
    [_10984=f(_10984)]-_10984

    What would be swell if it would generate an
    existential quantifier, something like T^([T = f(T)]-T)
    in the above case. Then using alpha conversion
    different factorization runs would be equal,

    when they only differ by the introduced
    intermediate variables. But Prolog has no alpha
    conversion, only λ-Prolog has such things.
    So what can we do, how can we produce a

    representation, that can be used for sorting?

    (*) Why finest and not corsets? Because it uses
    non-monster instructions and not monster
    instructions

    (**) Why only pre-quotient? Because a
    XXX_with_stack algorithm does not fully
    deduplicate the equations, would
    probably need a XXX_with_memo algorithm.

    Mild Shock schrieb:

    Inductive logic programming at 30
    https://arxiv.org/abs/2102.10556

    The paper contains not a single reference to autoencoders!
    Still they show this example:

    Fig. 1 ILP systems struggle with structured examples that
    exhibit observational noise. All three examples clearly
    spell the word "ILP", with some alterations: 3 noisy pixels,
    shifted and elongated letters. If we would be to learn a
    program that simply draws "ILP" in the middle of the picture,
    without noisy pixels and elongated letters, that would
    be a correct program.

    I guess ILP is 30 years behind the AI boom. An early autoencoder
    turned into transformer was already reported here (*):

    SERIAL ORDER, Michael I. Jordan - May 1986
    https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf >>>
    Well ILP might have its merits, maybe we should not ask
    for a marriage of LLM and Prolog, but Autoencoders and ILP.
    But its tricky, I am still trying to decode the da Vinci code of

    things like stacked tensors, are they related to k-literal clauses?
    The paper I referenced is found in this excellent video:

    The Making of ChatGPT (35 Year History)
    https://www.youtube.com/watch?v=OFS90-FX6pg




    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Wed Jul 23 19:10:09 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    you do need a theory of terms, and a specific one

    You could pull an Anti-Ackermann. Negate the
    infinity axiom like Ackermann did here, where
    he also kept the regularity axiom:

    Die Widerspruchsfreiheit der allgemeinen Mengenlehre
    Ackermann, Wilhelm - 1937 https://www.digizeitschriften.de/id/235181684_0114%7Clog23

    But instead of Ackermann, you get an Anti(-Foundation)
    Ackermann if you drop the regularity axiom. As a result, you
    get a lot of exotic sets, among which are also the

    famous Quine atoms:

    x = {x}

    Funny that in the setting I just described, where
    there is the negation of the infinity axiom, i.e.
    all sets are finite, contrary to the usual vulgar
    view, x = {x} is a finite object. Just like in Prolog,

    X = f(X) is in principle a finite object: it has
    only one subtree, or what Alain Colmerauer
    already postulated:

    Definition: a "rational" tree is a tree which
    has a finite set of subtrees.
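
    A quick way to see the single subtree, as a minimal query sketch
    (assuming a system with rational trees, such as SWI-Prolog): the
    first argument of X is X itself.

    ?- X = f(X), arg(1, X, Y), X == Y.
    X = Y, Y = f(Y).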

    Bye

    Mild Shock schrieb:
    Hi,

    Ok, I have to correct myself: "Rational Term" was less
    common; what was more in use was "Rational Trees",
    but they might have also talked about finitely

    represented infinite trees. Rational trees itself is
    probably an echo of Dmitry Mirimanoff's
    (1861–1945) “extraordinaire” sets.

    Dmitry Semionovitch Mirimanoff (Russian:
    Дми́трий Семёнович Мирима́нов; 13 September 1861, Pereslavl-Zalessky, Russia – 5 January 1945, Geneva,
    Switzerland) was a member of the Moscow Mathematical
    Society in 1897.[1] And later became a doctor of
    mathematical sciences in 1900, in Geneva, and
    taught at the universities of Geneva and Lausanne. https://en.wikipedia.org/wiki/Dmitry_Mirimanoff

    This year we can again celebrate another researcher,
    who died in 2023, Peter Aczel R.I.P., who likewise
    made some thoughtful deviations from orthodoxy.

    Peter Aczel Memorial Conference on 10th September 2025.
    Logic Colloquium will take place at the University
    of Manchester  (UK) from 11th to 12th September 2025 https://sites.google.com/view/blc2025/home

    Have Fun!

    Bye

    Mild Shock schrieb:
    Hi,

    An example of human intelligence is of course the
    name "Rational Term" for cyclic terms, set forth by
    Alain Colmerauer, since it plays with "Rational Numbers".

    A subset of cyclic terms can indeed represent
    rational numbers, and they give a nice counterexample
    to transitivity:

    ?- problem(X,Y,Z).
    X = _S1-7-9-1, % where
         _S1 = _S1-6-8-0-6-2-8,
    Y = _S2-1-6-1-5-4-6-1, % where
         _S2 = _S2-0-9-2,
    Z = _S3-3-0, % where
         _S3 = _S3-8-1

    The Fuzzer 2 from 2025 does just what I did in 2023,
    expanding rational numbers into rational terms:

    % fuzzy(-Term)
    fuzzy(X) :-
        random_between(1,100,A),
        random_between(1,100,B),
        random_between(1,10,M),
        fuzzy_chunk(M,A,B,C,X,Y),
        random_between(1,10,L),
        fuzzy_chunk(L,C,B,_,Y,Z),
        Z = Y.

    % fuzzy_chunk(+Integer,+Integer,+Integer,-Integer,+Term,-Term)
    fuzzy_chunk(0, A, _, A, X, X) :- !.
    fuzzy_chunk(N, A, B, C, Y-D, X) :-
        M is N-1,
        D is A // B,
        H is 10*(A - B*D),
        fuzzy_chunk(M, H, B, C, Y, X).

    Bye

    Mild Shock schrieb:
    Hi,

    Rota often celebrated symbolic, analogical, and
    conceptual understanding over brute calculation.
    This philosophy has come full circle in modern AI:

    - Large Language Models (LLMs) like GPT-4 don't
       just store facts — they recognize patterns,
       make analogies, and generate new structures
       from old ones.

    - Rota’s work in combinatorics, symbolic logic, and
       operator theory is essentially pattern-based
       manipulation — exactly the kind of reasoning LLMs
       aim to emulate at scale.

    Rota had a clear aesthetic. He valued clean formalisms,
    symbolic beauty, and well-defined structures. Rota wanted
    mathematics to mean something — to be not just correct,
    but intelligible and expressive.

    In contrast, modern AI (especially LLMs like GPT) thrives
    on the messy, including: Noisy data , Inconsistency ,
    Uncertainty, Contradiction. AI engineers today are mining
    meaning from noise.

    What counts as “structure” is often just the best
    pragmatic/effective description available at that moment.

    Bye

    Mild Shock schrieb:
    Hi,

    Will the world build on American Stacks?
    Or is the american dream over?

    How it started, 1 month ago:

    Nvidia CEO Jensen Huang on AI, Musk and Trump
    https://www.youtube.com/watch?v=c-XAL2oYelI

    How its going, now:

    Are you still talking about Jeffrey Epstein?
    https://www.bbc.com/news/articles/cm2m879neljo

    Bye

    Mild Shock schrieb:

    Inductive logic programming at 30
    https://arxiv.org/abs/2102.10556

    The paper contains not a single reference to autoencoders!
    Still they show this example:

    Fig. 1 ILP systems struggle with structured examples that
    exhibit observational noise. All three examples clearly
    spell the word "ILP", with some alterations: 3 noisy pixels,
    shifted and elongated letters. If we would be to learn a
    program that simply draws "ILP" in the middle of the picture,
    without noisy pixels and elongated letters, that would
    be a correct program.

    I guess ILP is 30 years behind the AI boom. An early autoencoder
    turned into transformer was already reported here (*):

    SERIAL ORDER, Michael I. Jordan - May 1986
    https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf >>>>>

    Well ILP might have its merits, maybe we should not ask
    for a marriage of LLM and Prolog, but Autoencoders and ILP.
    But its tricky, I am still trying to decode the da Vinci code of

    things like stacked tensors, are they related to k-literal clauses?
    The paper I referenced is found in this excellent video:

    The Making of ChatGPT (35 Year History)
    https://www.youtube.com/watch?v=OFS90-FX6pg






    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Wed Jul 23 19:14:13 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    But you might then experience the problem
    that the usual extensionality axiom of
    set theory is not enough: there could
    be two Quine atoms y = {y} and x = {x}
    with x ≠ y.

    On the other hand SWI-Prolog is convinced
    that X = [X] and Y = [Y] are the same,
    it can even apply member/2 to it since
    it has built-in rational trees:

    /* SWI-Prolog 9.3.25 */
    ?- X = [X], Y = [Y], X == Y.
    X = Y, Y = [Y].

    ?- X = [X], member(X, X).
    X = [X].

    But Peter Aczel’s original AFA statement was
    only uniqueness of solutions to graph equations,
    whereas today we would talk about equality as

    the existence of a bisimulation relation.

    Bye

    Hi,

    you do need a theory of terms, and a specific one

    You could pull an Anti Ackerman. Negate the
    infinity axiom like Ackerman did here, where
    he also kept the regularity axiom:

    Die Widerspruchsfreiheit der allgemeinen Mengenlehre
    Ackermann, Wilhelm - 1937 https://www.digizeitschriften.de/id/235181684_0114%7Clog23

    But instead of Ackermann, you get an Anti (-Foundation)
    Ackermann if you drop the regularity axiom. Result, you
    get a lot of exotic sets, among which are also the

    famous Quine atoms:

    x = {x}

    Funny that in the setting I just described , where
    there is the negation of the infinity axiom, i.e.
    all sets are finite, contrary to the usually vulgar
    view, x = {x} is a finite object. Just like in Prolog

    X = f(X) is in principle a finite object, it has
    only one subtree, or what Alain Colmerauer
    already postulated:

    Definition: a "rational" tre is a tree which
    has a finite set of subtrees.

    Bye


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Jul 25 21:27:28 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    That is extremely embarrassing. I don’t know
    what you are bragging about when you wrote
    the below. You are wrestling with a ghost!
    Maybe you didn’t follow my superb link:

    seemingly interesting paper. In stead
    particular, his final coa[l]gebra theorem

    The link behind Hopcroft and Karp (1971) that I
    gave, which is a Bisimulation and Equirecursive
    Equality hand-out, has a coalgebra example
    which I used to derive pairs.pl from:

    https://www.cs.cornell.edu/courses/cs6110/2014sp/Lectures/lec35a.pdf
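
    For readers who don’t want to open the hand-out, here is a minimal
    sketch in the same spirit (my own toy code, not pairs.pl and not
    the hand-out’s algorithm): two possibly cyclic terms are accepted
    as bisimilar by descending into the arguments while remembering
    which pairs are already assumed equal. Assumes a Prolog with
    rational-tree unification, e.g. SWI-Prolog.

    bisimilar(X, Y) :- bisim(X, Y, []).

    bisim(X, Y, _) :- X == Y, !.
    bisim(X, Y, Seen) :- member(A-B, Seen), A == X, B == Y, !.
    bisim(X, Y, Seen) :-
        compound(X), compound(Y),
        X =.. [F|Xs], Y =.. [F|Ys],
        bisim_args(Xs, Ys, [X-Y|Seen]).

    bisim_args([], [], _).
    bisim_args([X|Xs], [Y|Ys], Seen) :-
        bisim(X, Y, Seen),
        bisim_args(Xs, Ys, Seen).

    ?- X = f(X), Y = f(f(Y)), bisimilar(X, Y).
    X = f(X),
    Y = f(f(Y)).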

    Bye

    Mild Shock schrieb:

    Inductive logic programming at 30
    https://arxiv.org/abs/2102.10556

    The paper contains not a single reference to autoencoders!
    Still they show this example:

    Fig. 1 ILP systems struggle with structured examples that
    exhibit observational noise. All three examples clearly
    spell the word "ILP", with some alterations: 3 noisy pixels,
    shifted and elongated letters. If we would be to learn a
    program that simply draws "ILP" in the middle of the picture,
    without noisy pixels and elongated letters, that would
    be a correct program.

    I guess ILP is 30 years behind the AI boom. An early autoencoder
    turned into transformer was already reported here (*):

    SERIAL ORDER, Michael I. Jordan - May 1986 https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf

    Well ILP might have its merits, maybe we should not ask
    for a marriage of LLM and Prolog, but Autoencoders and ILP.
    But its tricky, I am still trying to decode the da Vinci code of

    things like stacked tensors, are they related to k-literal clauses?
    The paper I referenced is found in this excellent video:

    The Making of ChatGPT (35 Year History) https://www.youtube.com/watch?v=OFS90-FX6pg


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Jul 25 21:38:59 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    My beloved Logic professor introduced Non-Wellfoundedness
    in the form of library cards, here rendered in English
    from his German:

    Imagine a card index on whose cards other cards
    of the same card index are in turn listed. An
    example of such a card index would be the following:
    we have three cards a, b, c; a lists a and b, b
    the cards a and c, c the card b, i.e. a = (a, b),
    b = (a, c), c = (b). Corresponding to the sets
    which do not contain themselves as an element,
    we ask for the cards which do not list
    themselves. The card a is the only one
    that lists itself; b and c are thus
    the cards that do not list themselves.
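
    The three-card example is small enough to run; a minimal sketch
    of my own encoding (lists_on/2 is a made-up name, not from the
    paper), with one fact per card giving the cards it lists:

    lists_on(a, [a, b]).
    lists_on(b, [a, c]).
    lists_on(c, [b]).

    % the cards that do not list themselves
    ?- lists_on(C, L), \+ member(C, L).
    C = b, L = [a, c] ;
    C = c, L = [b].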

    He then concludes that Non-Wellfoundedness still has the
    Russell paradox, and hence also the productive form of it:

    Thus in every card index there is a
    totality G of cards for which there is no card
    that lists exactly those of G. (For finite
    card indexes this is fairly self-evident,
    but we also want to take infinite card
    indexes into consideration.) This theorem
    does of course not rule out that it is always
    possible to produce a card listing exactly the
    cards of G and to place it into the card index.
    Only we must, with the possi-

    What is your opinion? Excerpt from:

    **DIE ANTINOMIEN DER MENGENLEHRE**
    E. Specker, Dialectica, Vol. 8, No. 3 (15. 9. 1954) https://www.jstor.org/stable/42964119?seq=7

    Bye

    Mild Shock schrieb:
    Hi,

    That is extremly embarassing. I don’t know
    what you are bragging about, when you wrote
    the below. You are wrestling with a ghost!
    Maybe you didn’t follow my superbe link:

    seemingly interesting paper. In stead
    particular, his final coa[l]gebra theorem

    The link behind Hopcroft and Karp (1971) I
    gave, which is a Bisimulation and Equirecursive
    Equality hand-out, has a coalgebra example,
    I used to derive pairs.pl from:

    https://www.cs.cornell.edu/courses/cs6110/2014sp/Lectures/lec35a.pdf

    Bye

    Mild Shock schrieb:

    Inductive logic programming at 30
    https://arxiv.org/abs/2102.10556

    The paper contains not a single reference to autoencoders!
    Still they show this example:

    Fig. 1 ILP systems struggle with structured examples that
    exhibit observational noise. All three examples clearly
    spell the word "ILP", with some alterations: 3 noisy pixels,
    shifted and elongated letters. If we would be to learn a
    program that simply draws "ILP" in the middle of the picture,
    without noisy pixels and elongated letters, that would
    be a correct program.

    I guess ILP is 30 years behind the AI boom. An early autoencoder
    turned into transformer was already reported here (*):

    SERIAL ORDER, Michael I. Jordan - May 1986
    https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf

    Well ILP might have its merits, maybe we should not ask
    for a marriage of LLM and Prolog, but Autoencoders and ILP.
    But its tricky, I am still trying to decode the da Vinci code of

    things like stacked tensors, are they related to k-literal clauses?
    The paper I referenced is found in this excellent video:

    The Making of ChatGPT (35 Year History)
    https://www.youtube.com/watch?v=OFS90-FX6pg



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Jul 25 23:03:45 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Take this exercise. Exercise 4.1 Draw the tree
    represented by the term n1(n2(n4),n3(n5,n6)). https://book.simply-logical.space/src/text/2_part_ii/4.1.html

    Maybe there was a plan that SWISH could draw trees,
    and it could be that something was implemented as well.

    But I don't see anything dynamic working on the
    above website link. Next challenge for Simply Logical,

    in another life: draw a rational tree.
    The Prolog system has them:

    /* SWI-Prolog 9.3.26 */
    ?- X = a(Y,_), Y = b(X,_).
    X = a(b(X, _A), _),
    Y = b(X, _A).
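
    For the finite case of Exercise 4.1 drawing is easy; here is a
    minimal sketch of my own (draw_tree/1 is a made-up name, not from
    the book) that prints a ground tree term one node per line with
    indentation. On a rational tree it would of course descend
    forever, which is exactly the challenge above.

    draw_tree(T) :- draw_tree(T, 0).

    draw_tree(T, D) :-
        T =.. [F|Args],
        tab(D), write(F), nl,
        E is D + 2,
        forall(member(A, Args), draw_tree(A, E)).

    ?- draw_tree(n1(n2(n4),n3(n5,n6))).
    n1
      n2
        n4
      n3
        n5
        n6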

    Bye

    Mild Shock schrieb:
    Hi,

    My beloved Logic professor introduced Non-Wellfounded
    in the form of library cards, sorry only German:

    Wir denken uns dazu eine Kartothek, auf deren
    Karten wieder Karten derselben Kartothek
    aufgeführt sind. Ein Beispiel einer solchen
    Kartothek wäre etwa das folgende : wir haben
    drei Karten a, b, c; a führt a und b auf, b
    die Karten a und c, c die Karte b a = (a, b),
    b = (a, c), c = (b). Entsprechend den sich
    nicht selbst als Element enthaltenden Mengen
    fragen wir nach den Karten, die sich nicht
    selbst aufführen. Die Karte a ist die einzige,
    die sich selbst aufführt ; b und c sind somit
    die sich nicht selbst aufführenden Karten.

    He then concludes that Non-Wellfounded has still the
    Russell Paradox, and hence also the productive form of it:

    Es gibt somit in jeder Kartothek eine
    Gesamtheit G von Karten, zu der es keine Karte
    gibt, die genau jene aus G aufführt. (Für endliche
    Kartotheken ist dies ziemlich selbstverständlich,
    doch wollen wir auch unendliche Kartotheken in
    Betracht ziehen.) Dieser Satz schliesst aber
    natürlich nicht aus, dass es stets möglich ist,
    eine genau die Karten aus G aufführende Karte
    herzustellen und diese in die Kartothek zu legen.
    Nur müssen wir mit der Möglich-

    What is your opinion? Excerpt from:

    **DIE ANTINOMIEN DER MENGENLEHRE**
    E. Specker, Dialectica, Vol. 8, No. 3 (15. 9. 1954) https://www.jstor.org/stable/42964119?seq=7

    Bye

    Mild Shock schrieb:
    Hi,

    That is extremly embarassing. I don’t know
    what you are bragging about, when you wrote
    the below. You are wrestling with a ghost!
    Maybe you didn’t follow my superbe link:

    seemingly interesting paper. In stead
    particular, his final coa[l]gebra theorem

    The link behind Hopcroft and Karp (1971) I
    gave, which is a Bisimulation and Equirecursive
    Equality hand-out, has a coalgebra example,
    I used to derive pairs.pl from:

    https://www.cs.cornell.edu/courses/cs6110/2014sp/Lectures/lec35a.pdf

    Bye

    Mild Shock schrieb:

    Inductive logic programming at 30
    https://arxiv.org/abs/2102.10556

    The paper contains not a single reference to autoencoders!
    Still they show this example:

    Fig. 1 ILP systems struggle with structured examples that
    exhibit observational noise. All three examples clearly
    spell the word "ILP", with some alterations: 3 noisy pixels,
    shifted and elongated letters. If we would be to learn a
    program that simply draws "ILP" in the middle of the picture,
    without noisy pixels and elongated letters, that would
    be a correct program.

    I guess ILP is 30 years behind the AI boom. An early autoencoder
    turned into transformer was already reported here (*):

    SERIAL ORDER, Michael I. Jordan - May 1986
    https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf >>>
    Well ILP might have its merits, maybe we should not ask
    for a marriage of LLM and Prolog, but Autoencoders and ILP.
    But its tricky, I am still trying to decode the da Vinci code of

    things like stacked tensors, are they related to k-literal clauses?
    The paper I referenced is found in this excellent video:

    The Making of ChatGPT (35 Year History)
    https://www.youtube.com/watch?v=OFS90-FX6pg




    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sat Jul 26 16:10:59 2025
    From Newsgroup: comp.lang.prolog


    I guess there is a bug in preparing flat terms vector

    I give you a gold medal 🥇 if you can prove
    correct a compare_index/3 that uses this rule. It
    was already shown impossible by Matt Carlson.

    There are alternative approaches that can reach
    transitivity, but they do not use the below step
    inside some compare_index/3.

    compare_term_args(I, C, X, Y, A, H) :-
        arg(I, X, K),
        arg(I, Y, L),
        !,
        compare_index(D, K, L, A, H),
        (   D = (=) ->
            I0 is I + 1,
            compare_term_args(I0, C, X, Y, A, H)
        ;   C = D
        ).
    compare_term_args(_, =, _, _, _, _).

    Maybe there is a grain of truth in invoking the
    Axiom of Choice (AC) in some previous posts.
    Although the Axiom of Choice is not needed for

    finite sets, they have some choice anyway.

    BTW: When Peter Aczel writes ZFC-, he then
    means ZFC without AC, right? But he doesn’t
    show some compare/3.

    Mild Shock schrieb:
    Hi,

    Take this exercise. Exercise 4.1 Draw the tree
    represented by the term n1(n2(n4),n3(n5,n6)). https://book.simply-logical.space/src/text/2_part_ii/4.1.html

    Maybe there was a plan that SWISH can draw trees,
    and it could be that something was implemented as well.

    But I don't see anything dynamic working on the
    above web site link. Next challenge for Simply Logical,

    in another life. Draw a rational tree.
    The Prolog system has them:

    /* SWI-Prolog 9.3.26 */
    ?- X = a(Y,_), Y = b(X,_).
    X = a(b(X, _A), _),
    Y = b(X, _A).

    Bye



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sat Jul 26 16:17:42 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Did the old-school logicians waste time
    with compare/3? I guess not:

    Ernst Specker, my beloved professor, and
    Dana Scott made only a partial order. A
    partial order might leave transitivity

    of (<') lacking:

    "Scott's model construction is in fact
    closely related to Specker's but there
    is a subtle difference in the notion of
    tree that they use. In fact neither of
    them formulate their notion of tree in
    terms of graphs but rather in terms of
    what it will be convenient here to
    call tree-partial-orderings."

    See here:

    NON-WELL-FOUNDED SETS
    Peter Aczel - 1988 https://les-mathematiques.net/vanilla/uploads/editor/fh/v4pi6qyxfbel.pdf

    There is also the notion of co-well-
    foundedness, something like Noetherian but
    upside down, i.e. certain ascending
    chains stabilizing.

    Bye

    Mild Shock schrieb:

    I guess there is a bug in preparing flat terms vector

    I give you a gold medal 🥇, if you can prove a
    compare_index/3 correct that uses this rule. It
    was already shown impossible by Matt Carlson.

    There are alternative approaches that can reach
    transitivity, but do not use the below step
    inside some compare_index/3.

    compare_term_args(I, C, X, Y, A, H):-
            arg(I, X, K),
            arg(I, Y, L),
            !,
            compare_index(D, K, L, A, H),
            (    D = (=) ->
                I0 is I + 1,
                compare_term_args(I0, C, X, Y, A, H)
            ;    C = D
            ).
    compare_term_args(_ ,= , _, _, _, _).

    Maybe there is a grain of salt of invoking the
    Axiom of Choice (AC) in some previous posts.
    Although the Axiom of Choice is not needed for

    finite sets, they have anyway some choice.

    BTW: When Peter Aczel writes ZFC-, he then
    means ZFC without AC, right? But he doesn’t
    show some compare/3 .

    Mild Shock schrieb:
    Hi,

    Take this exercise. Exercise 4.1 Draw the tree
    represented by the term n1(n2(n4),n3(n5,n6)).
    https://book.simply-logical.space/src/text/2_part_ii/4.1.html

    Maybe there was a plan that SWISH can draw trees,
    and it could be that something was implemented as well.

    But I don't see anything dynamic working on the
    above web site link. Next challenge for Simply Logical,

    in another life. Draw a rational tree.
    The Prolog system has them:

    /* SWI-Prolog 9.3.26 */
    ?- X = a(Y,_), Y = b(X,_).
    X = a(b(X, _A), _),
    Y = b(X, _A).

    Bye




    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sat Jul 26 16:36:35 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Is compare/3 for rational trees a sunflower
    study subject? With one publication from
    the University of Tanzania? Who knows?

    Bye

    Mild Shock schrieb:
    Hi,

    Did the old School Logicians waste time
    with compare/3 ? I guess no:

    Ernst Specker, my beloved Professor, and
    Dana Scott made only a partial order. A
    partial order might have transitivity

    of (<') lacking:

    "Scott's model construction is in fact
    closely related to Specker's but there
    is a subtle difference in the notion of
    tree that they use. In fact neither of
    them formulate their notion of tree in
    terms of graphs but rather in terms of
    what it will be convenient here to
    call tree-partial-orderings."

    See here:

    NON-WELL-FOUNDED SETS
    Peter Aczel - 1988 https://les-mathematiques.net/vanilla/uploads/editor/fh/v4pi6qyxfbel.pdf

    There is also the notion of co-well-
    foundedness, something like Noetherian but
    up side down, i.e. certain ascending
    chains stabilizing.

    Bye

    Mild Shock schrieb:

    I guess there is a bug in preparing flat terms vector

    I give you a gold medal 🥇, if you can prove a
    compare_index/3 correct that uses this rule. It
    was already shown impossible by Matt Carlson.

    There are alternative approaches that can reach
    transitivity, but do not use the below step
    inside some compare_index/3.

    compare_term_args(I, C, X, Y, A, H):-
             arg(I, X, K),
             arg(I, Y, L),
             !,
             compare_index(D, K, L, A, H),
             (    D = (=) ->
                 I0 is I + 1,
                 compare_term_args(I0, C, X, Y, A, H)
             ;    C = D
             ).
    compare_term_args(_ ,= , _, _, _, _).

    Maybe there is a grain of salt of invoking the
    Axiom of Choice (AC) in some previous posts.
    Although the Axiom of Choice is not needed for

    finite sets, they have anyway some choice.

    BTW: When Peter Aczel writes ZFC-, he then
    means ZFC without AC, right? But he doesn’t
    show some compare/3 .

    Mild Shock schrieb:
    Hi,

    Take this exercise. Exercise 4.1 Draw the tree
    represented by the term n1(n2(n4),n3(n5,n6)).
    https://book.simply-logical.space/src/text/2_part_ii/4.1.html

    Maybe there was a plan that SWISH can draw trees,
    and it could be that something was implemented as well.

    But I don't see anything dynamic working on the
    above web site link. Next challenge for Simply Logical,

    in another life. Draw a rational tree.
    The Prolog system has them:

    /* SWI-Prolog 9.3.26 */
    ?- X = a(Y,_), Y = b(X,_).
    X = a(b(X, _A), _),
    Y = b(X, _A).

    Bye





    --- Synchronet 3.21a-Linux NewsLink 1.2