• Prolog Education Group clueless about the AI Boom?

    From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Mon Mar 3 14:18:40 2025

    Concerning this boring nonsense:

    https://book.simply-logical.space/src/text/2_part_ii/5.3.html#

    Funny idea that anybody would be interested just now, in
    the year 2025, in things like teaching breadth-first
    search versus depth-first search, or even be “mystified”
    by such stuff. It's extremely trivial stuff:

    Insert your favorite tree traversal pictures here.

    It's not even artificial intelligence, nor does it have
    anything to do with mathematical logic; rather it belongs
    to computer science and discrete mathematics, which you
    get in 1st-year university courses, making it moot to
    call it “simply logical”. It reminds me of the idea of
    teaching how wax candles work to dumb down students,
    when light bulbs have just been invented. If this is
    the outcome of the Prolog Education Group 2.0, then
    good night.

    --- Synchronet 3.20c-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Mon Mar 3 14:20:14 2025


    My suspicion: teaching WalkSAT as an alternative
    to DPLL, and showing its limitations, would maybe
    give more bang. We are currently entering an era
    that already started at the end of the 1990s, when
    some new probabilistic complexity classes were
    defined. Many machine learning techniques also
    have such an aspect, and it will only get worse
    with Quantum Computing.

    Having a grip on these things also helps in
    distinguishing when an AI acts by chance, and when
    it deviates from chance and shows some excellent
    adaptation to the problem domain at hand. Like here,
    anybody have an idea what they mean by “above chance”?

    Intuitive physics understanding emerges from
    self-supervised pretraining on natural videos
    https://arxiv.org/abs/2502.11831
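
    Coming back to the WalkSAT remark: below is a minimal sketch of
    the random-walk core of WalkSAT, without the greedy break-count
    move or restarts, so it is not the full algorithm, just enough to
    contrast with DPLL and to exhibit the incompleteness (it simply
    fails when the flip budget runs out). The clause encoding and all
    predicate names are made up for the sketch; it assumes SWI-Prolog's
    library(lists) and library(random) for member/2, memberchk/2,
    selectchk/3 and random_member/2.

    % Clauses are lists of literals pos(V)/neg(V); an assignment
    % is a list of Var-Bool pairs. walksat/4 does at most MaxFlips
    % random-walk steps and fails if no model is found by then.

    walksat(Clauses, Vars, MaxFlips, Assign) :-
        random_assignment(Vars, Assign0),
        walk(MaxFlips, Clauses, Assign0, Assign).

    random_assignment([], []).
    random_assignment([V|Vs], [V-B|Bs]) :-
        random_member(B, [true, false]),
        random_assignment(Vs, Bs).

    walk(_, Clauses, Assign, Assign) :-
        unsat_clauses(Clauses, Assign, []), !.      % all clauses satisfied
    walk(N, Clauses, Assign0, Assign) :-
        N > 0,
        unsat_clauses(Clauses, Assign0, Unsat),
        random_member(Clause, Unsat),               % pick an unsatisfied clause
        random_member(Lit, Clause),                 % pick a literal in it
        literal_var(Lit, V),
        flip(V, Assign0, Assign1),                  % flip that variable
        M is N - 1,
        walk(M, Clauses, Assign1, Assign).

    unsat_clauses([], _, []).
    unsat_clauses([C|Cs], Assign, Unsat) :-
        (  sat_clause(C, Assign)
        -> Unsat = Unsat1
        ;  Unsat = [C|Unsat1]
        ),
        unsat_clauses(Cs, Assign, Unsat1).

    sat_clause(Clause, Assign) :-
        member(Lit, Clause),
        sat_literal(Lit, Assign), !.

    sat_literal(pos(V), Assign) :- memberchk(V-true, Assign).
    sat_literal(neg(V), Assign) :- memberchk(V-false, Assign).

    literal_var(pos(V), V).
    literal_var(neg(V), V).

    flip(V, Assign0, [V-B1|Rest]) :-
        selectchk(V-B0, Assign0, Rest),
        (  B0 == true -> B1 = false ;  B1 = true  ).

    % Example run, e.g. (p v q), (~p v q), (~q v r):
    % ?- walksat([[pos(p),pos(q)], [neg(p),pos(q)], [neg(q),pos(r)]],
    %            [p,q,r], 1000, A).
    % A = [p-false, q-true, r-true]   (or some other model)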

    Mild Shock schrieb:
    Concerning this boring nonsense:

    https://book.simply-logical.space/src/text/2_part_ii/5.3.html#

    Funny idea that anybody would be interested just now, in
    the year 2025, in things like teaching breadth-first
    search versus depth-first search, or even be “mystified”
    by such stuff. It's extremely trivial stuff:

    Insert your favorite tree traversal pictures here.

    It's not even artificial intelligence, nor does it have
    anything to do with mathematical logic; rather it belongs
    to computer science and discrete mathematics, which you
    get in 1st-year university courses, making it moot to
    call it “simply logical”. It reminds me of the idea of
    teaching how wax candles work to dumb down students,
    when light bulbs have just been invented. If this is
    the outcome of the Prolog Education Group 2.0, then
    good night.


    --- Synchronet 3.20c-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Mon Mar 3 18:03:11 2025

    Hi,

    Salary Templates if you "Grok" ML / AI [PhDs negotiate salaries]:

    Tendency to go towards 500'000.- USD per year https://x.com/chiefaioffice/status/1734329284821672270

    But I guess you need to have the talent to train a ChatGPT.
    Still very rare; I had interesting discussions on the internet.

    LoL

    Bye

    Company               Final Annual    Initial         Negotiated
                          Compensation    Compensation    Delta
    OpenAI                $865K           $665K           30%
    Anthropic             $855K           $855K           0%
    Inflection*           $825K           NA              NA
    Tesla                 $780K           $702K           11%
    Amazon                $719K           $520K           38%
    Google Brain          $695K           $590K           17%
    TikTok                $605K           $430K           40%
    FAIR                  $556K           $480K           15%
    Google Research       $549K           $310K           77%
    Waymo                 $530K           $385K           37%
    Deepmind              $515K           $452K           13%
    Bloomberg AI          $460K           $318K           44%
    Apple                 $450K           $337K           33%
    Microsoft Research    $449K           $270K           66%
    Salesforce Research   $441K           $355K           24%
    Toyota Research       $410K           $370K           10%
    Twitter               $409K           $359K           13%
    NVIDIA                $390K           $340K           14%
    IBM Research          $377K           $262K           43%
    Allen Institute       $350K           $310K           12%
    Samsung Research      $285K           $240K           18%
    Hugging Face          $238K           $185K           27%

    Mild Shock schrieb:

    My suspicion: teaching WalkSAT as an alternative
    to DPLL, and showing its limitations, would maybe
    give more bang. We are currently entering an era
    that already started at the end of the 1990s, when
    some new probabilistic complexity classes were
    defined. Many machine learning techniques also
    have such an aspect, and it will only get worse
    with Quantum Computing.

    Having a grip on these things also helps in
    distinguishing when an AI acts by chance, and when
    it deviates from chance and shows some excellent
    adaptation to the problem domain at hand. Like here,
    anybody have an idea what they mean by “above chance”?

    Intuitive physics understanding emerges from
    self-supervised pretraining on natural videos https://arxiv.org/abs/2502.11831

    Mild Shock schrieb:
    Concerning this boring nonsense:

    https://book.simply-logical.space/src/text/2_part_ii/5.3.html#

    Funny idea that anybody would be interested just now, in
    the year 2025, in things like teaching breadth-first
    search versus depth-first search, or even be “mystified”
    by such stuff. It's extremely trivial stuff:

    Insert your favorite tree traversal pictures here.

    It's not even artificial intelligence, nor does it have
    anything to do with mathematical logic; rather it belongs
    to computer science and discrete mathematics, which you
    get in 1st-year university courses, making it moot to
    call it “simply logical”. It reminds me of the idea of
    teaching how wax candles work to dumb down students,
    when light bulbs have just been invented. If this is
    the outcome of the Prolog Education Group 2.0, then
    good night.



    --- Synchronet 3.20c-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Mar 7 23:58:10 2025

    The first deep learning breakthrough was
    AlexNet by Alex Krizhevsky, Ilya Sutskever
    and Geoffrey Hinton:

    In 2011, Geoffrey Hinton started reaching out
    to colleagues about “What do I have to do to
    convince you that neural networks are the future?” https://en.wikipedia.org/wiki/AlexNet

    Meanwhile ILP is still dreaming of higher order logic:

    We pull it out of thin air. And the job that does
    is, indeed, that it breaks up relations into
    sub-relations or sub-routines, if you prefer.

    You mean this here:

    Background knowledge (Second Order)
    -----------------------------------
    (Chain) ∃.P,Q,R ∀.x,y,z: P(x,y)← Q(x,z),R(z,y)

    https://github.com/stassa/vanilla/tree/master/lib/poker
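
    As a made-up illustration (not taken from the poker library):
    instantiating Chain with P = grandparent and Q = R = parent gives
    the textbook decomposition into sub-relations:

    grandparent(X, Y) :- parent(X, Z), parent(Z, Y).

    parent(abraham, isaac).
    parent(isaac, jacob).

    % ?- grandparent(abraham, W).
    % W = jacob.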

    That's too general; it doesn't address
    analogical reasoning.

    Mild Shock schrieb:
    Concerning this boring nonsense:

    https://book.simply-logical.space/src/text/2_part_ii/5.3.html#

    Funny idea that anybody would be interested just now, in
    the year 2025, in things like teaching breadth-first
    search versus depth-first search, or even be “mystified”
    by such stuff. It's extremely trivial stuff:

    Insert your favorite tree traversal pictures here.

    It's not even artificial intelligence, nor does it have
    anything to do with mathematical logic; rather it belongs
    to computer science and discrete mathematics, which you
    get in 1st-year university courses, making it moot to
    call it “simply logical”. It reminds me of the idea of
    teaching how wax candles work to dumb down students,
    when light bulbs have just been invented. If this is
    the outcome of the Prolog Education Group 2.0, then
    good night.


    --- Synchronet 3.20c-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sat Mar 8 00:00:34 2025

    You are probably aiming at the decomposition
    of an autoencoder or transformer into an encoder
    and a decoder, making the split automatically
    from a more general ILP framework.

    The H is the bottleneck on purpose:

    relation(X, Y) :- encoder(X, H), decoder(H, Y).

    OK, you missed the point. Let's assume for the
    moment that the H is, on purpose, not something that
    happens accidentally through a more general
    learning algorithm, but is a design feature
    of how we want to learn. Can we incorporate
    analogical reasoning, the parallelogram?

    Yeah, relatively simple, just add more input
    and output layers. The new parameter K indicates
    how the representation was chosen:

    relation(X, Y) :-
        similar(X, A, K),
        encoder(A, H),
        decoder(H, B),
        similar(Y, B, K).

    It's again an autoencoder or a transformer, respectively,
    with a bigger latent space. Prominent additional input
    layers that work here are convolutional neural networks.

    Things like max pooling or self-attention pooling:

    relation(X, Y) :-
        encoder2(X, J),
        decoder2(J, Y).

    encoder2(X, [K|H]) :-
        similar(X, A, K),
        encoder(A, H).

    decoder2([K|H], Y) :-
        decoder(H, B),
        similar(Y, B, K).

    You can learn the decoder2/2 as a whole in your
    autoencoder and transformer learning framework,
    provided it can deal with many layers, i.e.
    if it has deep learning techniques.
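
    As a toy, purely symbolic instantiation of the relation/2 scheme
    above (every predicate, word and number here is invented for the
    sketch): words are 2-D points [Royalty, Gender], the encoder keeps
    only the royalty coordinate (the bottleneck H), the decoder
    reconstructs the gender-0 representative, and similar/3 records
    the gender shift K that is carried around the bottleneck:

    embed(king,  [1, 0]).
    embed(queen, [1, 1]).
    embed(man,   [0, 0]).
    embed(woman, [0, 1]).

    % similar(X, A, K): A is the gender-0 counterpart of X, K the shift
    similar(X, A, K) :-
        embed(X, [R, GX]),
        embed(A, [R, 0]),
        K is GX.

    encoder(A, R) :- embed(A, [R, _]).   % project away the gender coordinate
    decoder(R, B) :- embed(B, [R, 0]).   % reconstruct the gender-0 word

    relation(X, Y) :-
        similar(X, A, K),
        encoder(A, R),
        decoder(R, B),
        similar(Y, B, K).

    % ?- relation(queen, Y).
    % Y = queen.    % via queen -> king -> latent 1 -> king -> queen

    The latent space only has to cover half of the vocabulary; the
    parameter K carries the analogy (king : queen :: man : woman)
    around it, which is the parallelogram in miniature.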

    Mild Shock schrieb:
    The first deep learning breakthrough was
    AlexNet by Alex Krizhevsky, Ilya Sutskever
    and Geoffrey Hinton:

    In 2011, Geoffrey Hinton started reaching out
    to colleagues about “What do I have to do to
    convince you that neural networks are the future?” https://en.wikipedia.org/wiki/AlexNet

    Meanwhile ILP is still dreaming of higher order logic:

    We pull it out of thin air. And the job that does
    is, indeed, that it breaks up relations into
    sub-relations or sub-routines, if you prefer.

    You mean this here:

    Background knowledge (Second Order)
    -----------------------------------
    (Chain) ∃.P,Q,R ∀.x,y,z: P(x,y)← Q(x,z),R(z,y)

    https://github.com/stassa/vanilla/tree/master/lib/poker

    That's too general; it doesn't address
    analogical reasoning.

    Mild Shock schrieb:
    Concerning this boring nonsense:

    https://book.simply-logical.space/src/text/2_part_ii/5.3.html#

    Funny idea that anybody would be interested just now, in
    the year 2025, in things like teaching breadth-first
    search versus depth-first search, or even be “mystified”
    by such stuff. It's extremely trivial stuff:

    Insert your favorite tree traversal pictures here.

    It's not even artificial intelligence, nor does it have
    anything to do with mathematical logic; rather it belongs
    to computer science and discrete mathematics, which you
    get in 1st-year university courses, making it moot to
    call it “simply logical”. It reminds me of the idea of
    teaching how wax candles work to dumb down students,
    when light bulbs have just been invented. If this is
    the outcome of the Prolog Education Group 2.0, then
    good night.



    --- Synchronet 3.20c-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Tue Aug 12 18:37:13 2025


    Struggling to understand Hopcroft and Karp (1971)?
    No worries, Philip Zucker is the man:

    Co-Egraphs: Streams, Unification, PEGs, Rational Lambdas
    You can make egraphs support stream-like things / rational terms. https://www.philipzucker.com/coegraph/

    Strangely, I came up with the same stuff over the last
    few weeks while discussing with @kuniaki.mukai.

    Let's say 2025 is the year of rational trees!
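
    For readers who have not met them: a rational tree is an infinite
    term with only finitely many distinct subtrees, the classic example
    being the solution of X = s(X). In a Prolog that implements
    rational-tree unification (SWI-Prolog, for instance), a query like
    the following terminates and succeeds:

    ?- X = s(X), Y = s(s(Y)), X = Y.   % succeeds: both denote s(s(s(...)))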

    Mild Shock schrieb:
    Concerning this boring nonsense:

    https://book.simply-logical.space/src/text/2_part_ii/5.3.html#

    Funny idea that anybody would be interested just now, in
    the year 2025, in things like teaching breadth-first
    search versus depth-first search, or even be “mystified”
    by such stuff. It's extremely trivial stuff:

    Insert your favorite tree traversal pictures here.

    It's not even artificial intelligence, nor does it have
    anything to do with mathematical logic; rather it belongs
    to computer science and discrete mathematics, which you
    get in 1st-year university courses, making it moot to
    call it “simply logical”. It reminds me of the idea of
    teaching how wax candles work to dumb down students,
    when light bulbs have just been invented. If this is
    the outcome of the Prolog Education Group 2.0, then
    good night.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Thu Aug 14 12:31:55 2025

    Hi,

    Using Hopcroft & Karp (HK) everywhere
    clearly raises the bar. I decided to use
    HK also in compare/3, just to have it
    detect structure sharing. But here is a little
    test comparing with WebPL, only (=)/2 because
    I didn't find compare/3:

    /* WebPL May 2025 */
    test(25).
    True
    (1040.4ms)
    https://webpl.whenderson.dev/

    /* Dogelog Player 1.3.6 */
    :- version.
    :- time(test(25)).
    Dogelog Spieler, Prolog zum Mond, 1.3.6 (02.08.2025)
    (c) 1985-2025, XLOG Technologies AG, Schweiz
    % Zeit 1 ms, GC 0 ms, Lips 125000, Uhr 14.08.2025 12:30 https://www.dogelog.ch/littab/doclet/live/07_basic/example01/package.html

    SWI-Prolog WASM can also do it fast. The
    old Trealla WASM version cannot do it fast.

    Bye

    P.S.: The test code:

    hydra(0,_) :- !.
    hydra(N,s(X,X)) :- M is N-1, hydra(M,X).

    test(N) :-
        hydra(N,X), hydra(N,Y), X = Y.
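
    Just to spell out why this is a structure-sharing test, unfolding
    the definition above and nothing more:

    % hydra(1, T):  T = s(X0, X0),  X0 unbound
    % hydra(2, T):  T = s(X1, X1),  X1 = s(X0, X0)
    % hydra(3, T):  T = s(X2, X2),  X2 = s(X1, X1),  X1 = s(X0, X0)
    %
    % i.e. N shared s/2 cells, but 2^N - 1 cells when unfolded into a
    % plain tree. So the goal X = Y in test/1 is cheap for a comparison
    % that detects the sharing, and costly for one that walks the
    % unfolded tree.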

    Mild Shock schrieb:

    Struggling to understand Hopcroft and Karp (1971)?
    No worries, Philip Zucker is the man:

    Co-Egraphs: Streams, Unification, PEGs, Rational Lambdas
    You can make egraphs support stream-like things / rational terms. https://www.philipzucker.com/coegraph/

    Strangely, I came up with the same stuff over the last
    few weeks while discussing with @kuniaki.mukai.

    Let's say 2025 is the year of rational trees!

    Mild Shock schrieb:
    Concerning this boring nonsense:

    https://book.simply-logical.space/src/text/2_part_ii/5.3.html#

    Funny idea that anybody would be interested just now, in
    the year 2025, in things like teaching breadth-first
    search versus depth-first search, or even be “mystified”
    by such stuff. It's extremely trivial stuff:

    Insert your favorite tree traversal pictures here.

    It's not even artificial intelligence, nor does it have
    anything to do with mathematical logic; rather it belongs
    to computer science and discrete mathematics, which you
    get in 1st-year university courses, making it moot to
    call it “simply logical”. It reminds me of the idea of
    teaching how wax candles work to dumb down students,
    when light bulbs have just been invented. If this is
    the outcome of the Prolog Education Group 2.0, then
    good night.



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Thu Aug 14 14:31:32 2025

    Hi,

    This is a nice confusion of the highest order.

    observations on the decidability

    Well, in summary, the existence of a pretest (==)/2
    makes Mercio's definition decidable. Equality on rational
    trees is decidable, even if it comes in disguise as
    Mercio's mathematical definition, which gives a total
    order that is unfortunately not a natural order:

    A=B :⟺ A' =lex B'

    A<B :⟺ A' <lex B'

    Natural order of rational trees?
    https://math.stackexchange.com/a/210730

    Why is Mercio's mathematical definition of equality
    decidable? Because it is semantically equivalent to
    bisimulation as implemented in (==)/2, and decidability
    is a semantic notion:

    Decidability (logic)
    In logic, a true/false decision problem is decidable if there
    exists an effective method for deriving the correct answer.
    Decidability (logic) - Wikipedia

    So if you talk about semi-decidable, decidable, etc.,
    it's always about the existence of an algorithm in
    the large space of effective methods, according to
    the Church hypothesis.

    If you want to talk about a particular algorithm,
    you would use the terminology “incomplete”. For
    example, DFS algorithms are “incomplete” compared to
    BFS algorithms on certain problems.

    Bye

    Mild Shock schrieb:
    Hi,

    Using Hopcroft & Karp (HK) everywhere
    clearly raises the bar. I decided to use
    HK also in compare/3, just to have it
    detect structure sharing. But here is a little
    test comparing with WebPL, only (=)/2 because
    I didn't find compare/3:

    /* WebPL May 2025 */
    test(25).
    True
    (1040.4ms)
    https://webpl.whenderson.dev/

    /* Dogelog Player 1.3.6 */
    :- version.
    :- time(test(25)).
    Dogelog Spieler, Prolog zum Mond, 1.3.6 (02.08.2025)
    (c) 1985-2025, XLOG Technologies AG, Schweiz
    % Zeit 1 ms, GC 0 ms, Lips 125000, Uhr 14.08.2025 12:30 https://www.dogelog.ch/littab/doclet/live/07_basic/example01/package.html

    SWI-Prolog WASM can also do it fast. The
    old Trealla WASM version cannot do it fast.

    Bye

    P.S.: The test code:

    hydra(0,_) :- !.
    hydra(N,s(X,X)) :- M is N-1, hydra(M,X).

    test(N) :-
       hydra(N,X), hydra(N,Y), X = Y.

    Mild Shock schrieb:

    Struggling to understand Hopcroft and Karp (1971)?
    No worries, Philip Zucker is the man:

    Co-Egraphs: Streams, Unification, PEGs, Rational Lambdas
    You can make egraphs support stream-like things / rational terms.
    https://www.philipzucker.com/coegraph/

    Strangely, I came up with the same stuff over the last
    few weeks while discussing with @kuniaki.mukai.

    Let's say 2025 is the year of rational trees!

    Mild Shock schrieb:
    Concerning this boring nonsense:

    https://book.simply-logical.space/src/text/2_part_ii/5.3.html#

    Funny idea that anybody would be interested just now, in
    the year 2025, in things like teaching breadth-first
    search versus depth-first search, or even be “mystified”
    by such stuff. It's extremely trivial stuff:

    Insert your favorite tree traversal pictures here.

    It's not even artificial intelligence, nor does it have
    anything to do with mathematical logic; rather it belongs
    to computer science and discrete mathematics, which you
    get in 1st-year university courses, making it moot to
    call it “simply logical”. It reminds me of the idea of
    teaching how wax candles work to dumb down students,
    when light bulbs have just been invented. If this is
    the outcome of the Prolog Education Group 2.0, then
    good night.




    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Thu Aug 14 15:14:11 2025

    Hi,

    DFS algorithms can nevertheless gain some completeness
    power on rational trees when paired with loop checking,
    like @kuniaki.mukai was using in compare_with_stack/3.
    They might not be as good as HK though.
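
    For concreteness, here is a minimal sketch of that idea: plain DFS
    over two terms, with a visited set of pairs so that revisiting a
    pair closes the loop instead of descending forever. This is neither
    compare_with_stack/3 nor HK, just the naive loop-checking version,
    and it assumes a Prolog where (==)/2, (=..)/2 and friends are safe
    on cyclic terms, as in SWI-Prolog.

    eq_rational(X, Y) :- eq_rational(X, Y, []).

    % variables and atomic terms: ordinary (==)/2 decides
    eq_rational(X, Y, _) :-
        ( var(X) ; atomic(X) ), !,
        X == Y.
    % a pair of subterms we have already descended into: close the loop
    eq_rational(X, Y, Seen) :-
        member(X0-Y0, Seen),
        X0 == X, Y0 == Y, !.
    % otherwise X is compound: decompose both and compare argument-wise
    eq_rational(X, Y, Seen) :-
        compound(Y),
        X =.. [F|Xs], Y =.. [F|Ys],
        eq_args(Xs, Ys, [X-Y|Seen]).

    eq_args([], [], _).
    eq_args([X|Xs], [Y|Ys], Seen) :-
        eq_rational(X, Y, Seen),
        eq_args(Xs, Ys, Seen).

    The visited set makes it terminate on rational trees, but the
    linear member/2 scan is exactly what HK's union-find replaces to
    get the near-linear behaviour.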

    You find a DFA subset and an equality relation here,
    implemented via DFS algorithms using a visited set:

    def find_equiv_counterexample(dfa_a, dfa_b):
        # Symmetric difference: strings accepted by exactly one DFA
        return find_word(dfa_a ^ dfa_b)

    def find_subset_counterexample(smaller, bigger):
        # Intersection of "smaller" with the complement of "bigger"
        return find_word(~bigger & smaller)

    https://pypi.org/project/dfa/

    The latter results in ordering DFAs by a partial order
    and not by a total order. The project also has a DFA
    minimization algorithm, but I haven't yet got my
    head around implementing it,

    and I don’t know whether @kuniaki.mukai implemented
    the DFA minimization algorithm that has supposedly
    O(N log(N)) complexity.

    Bye

    Mild Shock schrieb:
    Hi,

    This is a nice confusion of the highest order.

    observations on the decidability

    Well, in summary, the existence of a pretest (==)/2
    makes Mercio's definition decidable. Equality on rational
    trees is decidable, even if it comes in disguise as
    Mercio's mathematical definition, which gives a total
    order that is unfortunately not a natural order:

    A=B :⟺ A' =lex B'

    A<B :⟺ A' <lex B'

    Natural order of rational trees?
    https://math.stackexchange.com/a/210730

    Why is Mercio's mathematical definition of equality
    decidable? Because it is semantically equivalent to
    bisimulation as implemented in (==)/2, and decidability
    is a semantic notion:

    Decidability (logic)
    In logic, a true/false decision problem is decidable if there
    exists an effective method for deriving the correct answer.
    Decidability (logic) - Wikipedia

    So if you talk about semi-decidable, decidable, etc.,
    it's always about the existence of an algorithm in
    the large space of effective methods, according to
    the Church hypothesis.

    If you want to talk about a particular algorithm,
    you would use the terminology “incomplete”. For
    example, DFS algorithms are “incomplete” compared to
    BFS algorithms on certain problems.

    Bye
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sat Aug 16 11:22:12 2025

    There is a difference between this:

    /* Version I */
    fun(X, Y, Z, A) :-
        X =< Y, !,
        Z = A.
    fun(X, Y, Z, A) :-
        X1 is X - 1,
        fun(X1, Y, Z, A1),
        Y1 is Y - 1,
        fun(Y1, Z, X, A2),
        Z1 is Z - 1,
        fun(Z1, X, Y, A3),
        fun(A1, A2, A3, A).

    And this in Dogelog Player:

    /* Version II */
    fun(X, Y, Z, A) :-
        X =< Y, !,
        Z = A.
    fun(X, Y, Z, A) :-
        X1 is X - 1,
        Y1 is Y - 1,
        Z1 is Z - 1,
        fun(X1, Y, Z, A1),
        fun(Y1, Z, X, A2),
        fun(Z1, X, Y, A3),
        fun(A1, A2, A3, A).

    Usually I am benchmarking Version I, but Version II
    gives Dogelog Player the opportunity to do more
    static Variable Shunting, which is seen in the
    performance. Note: it's JavaScript, not WASM:

    /* Version I */
    % Zeit 1723 ms, GC 6 ms, Lips 4863960, Uhr 16.08.2025 11:21

    /* Version II */
    % Zeit 1489 ms, GC 2 ms, Lips 5628343, Uhr 16.08.2025 11:21

    Was testing:

    mtak :- between(1,31,_), fun(18, 12, 6, _), fail.
    mtak.

    :- time(mtak).
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sat Aug 16 11:34:09 2025

    Hi,

    It seems the gist of WebPL is to examine
    dynamic variable shunting as part of a
    garbage collection strategy, as found here:

    Variable Shunting for the WAM
    Sahlin, D. and Carlsson, M. - 1991 https://www.diva-portal.org/smash/get/diva2:1041385/FULLTEXT01.pdf

    Does it pay off? I am testing
    Version II of mtak:

    /* WebPL GC Rust-WAMS */
    True
    (1875.2ms)
    No more results

    /* Trealla Prolog C-WASM */
    True
    (1269.5ms)
    No more results

    /* SWI-Prolog C-WASM */
    True
    (1368.9ms)
    No more results

    The 1875.2ms are worse than my 1489 ms. The
    1269.5ms and 1368.9ms are only slightly better
    than my 1489 ms.

    I would say static shunting is the secret ingredient
    of Dogelog Player that makes it fast sometimes!
    Just think about it: in the above, JavaScript is
    compared to WASM. What's going on? Of course
    one should repeat the test with the newest
    SWI-Prolog and Trealla Prolog. I don't know
    whether there is a site that does all the updates.

    Bye

    BTW: For between/3 I was using:

    between(Lo, Lo, R) :- !, Lo = R.
    between(Lo, _, Lo).
    between(Lo, Hi, X) :- Lo2 is Lo+1, between(Lo2, Hi, X).
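
    Just unfolding that definition: it enumerates the integers from
    Lo to Hi, and the cut in the first clause makes the last solution
    deterministic.

    % ?- between(1, 3, X).
    % X = 1 ;
    % X = 2 ;
    % X = 3.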

    Mild Shock schrieb:
    There is a difference between this:

    /* Version I */
    fun(X, Y, Z, A) :-
        X =< Y, !,
        Z = A.
    fun(X, Y, Z, A) :-
        X1 is X - 1,
        fun(X1, Y, Z, A1),
        Y1 is Y - 1,
        fun(Y1, Z, X, A2),
        Z1 is Z - 1,
        fun(Z1, X, Y, A3),
        fun(A1, A2, A3, A).

    And this in Dogelog Player:

    /* Version II */
    fun(X, Y, Z, A) :-
        X =< Y, !,
        Z = A.
    fun(X, Y, Z, A) :-
        X1 is X - 1,
        Y1 is Y - 1,
        Z1 is Z - 1,
        fun(X1, Y, Z, A1),
        fun(Y1, Z, X, A2),
        fun(Z1, X, Y, A3),
        fun(A1, A2, A3, A).

    Usually I am benchmarking Version I, but Version II
    gives Dogelog Player the opportunity to do more
    static Variable Shunting, which is seen in the
    performance. Note: it's JavaScript, not WASM:

    /* Version I */
    % Zeit 1723 ms, GC 6 ms, Lips 4863960, Uhr 16.08.2025 11:21

    /* Version II */
    % Zeit 1489 ms, GC 2 ms, Lips 5628343, Uhr 16.08.2025 11:21

    Was testing:

    mtak :- between(1,31,_), fun(18, 12, 6, _), fail.
    mtak.

    :- time(mtak).

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sat Aug 16 11:46:33 2025

    Hi,

    BTW: Here is a test without WASM:

    /* Trealla Prolog 2.82.12 WSL2 */
    ?- time(mtak).
    % Time elapsed 0.734s, 8873504 Inferences, 12.091 MLips
    true.

    /* SWI Prolog 9.0.4 WSL2 */
    ?- time(mtak).
    % 3,943,790 inferences, 0.210 CPU in 0.217 seconds
    (97% CPU, 18777906 Lips)
    true.

    /* Dogelog Player 1.3.6 Windows 11 */
    ?- time(mtak).
    % Zeit 412 ms, GC 0 ms, Lips 20341271, Uhr 16.08.2025 11:40
    true.

    /* Jekejeke Prolog 1.7.3 Windows 11 */
    ?- time(mtak).
    % Zeit 664 ms, GC 3 ms, Uhr 16.08.2025 11:41
    true.

    I didn't test Scryer Prolog because it has no GC.
    I keep testing Jekejeke Prolog because it has
    reference counting. But Dogelog Player, without
    reference counting and true garbage collection,
    seems to be speedier: not as fast as SWI-Prolog,
    but faster than Trealla Prolog.

    So SWI-Prolog is still the Original Gangster (OG)
    of garbage collection (GC). Some Prolog systems
    like Ciao Prolog, ECLiPSe Prolog or SICStus Prolog
    might be even faster, since it seems to me they use
    native backend compilation schemes, AOT or JIT. The
    tested systems above are all more of the
    interpreter-type Prolog systems.

    Bye

    Mild Shock schrieb:
    Hi,

    It seems the gist of WebPL is to examine
    dynamic variable shunting as part of a
    garbage collection strategy, as found here:

    Variable Shunting for the WAM
    Sahlin, D. and Carlsson, M. - 1991 https://www.diva-portal.org/smash/get/diva2:1041385/FULLTEXT01.pdf

    Does it pay off? I am testing
    Version II of mtak:

    /* WebPL GC Rust-WAMS */
    True
    (1875.2ms)
    No more results

    /* Trealla Prolog C-WASM */
    True
    (1269.5ms)
    No more results

    /* SWI-Prolog C-WASM */
    True
    (1368.9ms)
    No more results

    The 1875.2ms are worse than my 1489 ms. The
    1269.5ms and 1368.9ms are only slightly better
    than my 1489 ms.

    I would say static shunting is the secret ingredient
    of Dogelog Player that makes it fast sometimes!
    Just think about it: in the above, JavaScript is
    compared to WASM. What's going on? Of course
    one should repeat the test with the newest
    SWI-Prolog and Trealla Prolog. I don't know
    whether there is a site that does all the updates.

    Bye

    BTW: For between/3 I was using:

    between(Lo, Lo, R) :- !, Lo = R.
    between(Lo, _, Lo).
    between(Lo, Hi, X) :- Lo2 is Lo+1, between(Lo2, Hi, X).

    Mild Shock schrieb:
    There is a difference between this:

    /* Version I */
    fun(X, Y, Z, A) :-
         X =< Y, !,
         Z = A.
    fun(X, Y, Z, A) :-
         X1 is X - 1,
         fun(X1, Y, Z, A1),
         Y1 is Y - 1,
         fun(Y1, Z, X, A2),
         Z1 is Z - 1,
         fun(Z1, X, Y, A3),
         fun(A1, A2, A3, A).

    And this in Dogelog Player:

    /* Version II */
    fun(X, Y, Z, A) :-
         X =< Y, !,
         Z = A.
    fun(X, Y, Z, A) :-
         X1 is X - 1,
         Y1 is Y - 1,
         Z1 is Z - 1,
         fun(X1, Y, Z, A1),
         fun(Y1, Z, X, A2),
         fun(Z1, X, Y, A3),
         fun(A1, A2, A3, A).

    Usually I am benchmarking Version I, but Version II
    gives Dogelog Player the opportunity to do more
    static Variable Shunting, which is seen in the
    performance. Note: it's JavaScript, not WASM:

    /* Version I */
    % Zeit 1723 ms, GC 6 ms, Lips 4863960, Uhr 16.08.2025 11:21

    /* Version II */
    % Zeit 1489 ms, GC 2 ms, Lips 5628343, Uhr 16.08.2025 11:21

    Was testing:

    mtak :- between(1,31,_), fun(18, 12, 6, _), fail.
    mtak.

    :- time(mtak).


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sat Aug 16 11:59:26 2025

    Hi,

    In the newest test results below, Cyclic Term
    Unification of Clause Heads is already accounted
    for. That means I tested versions that have Cyclic
    Term Unification deep in the Prolog interpreter.
    Why does it not slow down the Prolog system?
    Because Cyclic Term Unification doesn't kick in
    for certain Prolog system designs under certain
    conditions:

    - If the head is linear, no cyclic unification
    happens, since the machine code for head
    unification is executed in write-unification
    mode. All variables are "first_var" instructions.

    - Even if the calling goal is smaller than
    the head, meaning a switch happens to
    read-unification mode, again cyclic unification
    doesn't happen for linear patterns.

    You can check yourself whether a head is linear
    or not. Just define:

    term_linear(X) :-
        term_variables(X, L),
        term_singletons(X, R),
        L == R.

    ?- term_linear(fun(X, Y, Z, A)).
    true.
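
    For contrast, a head with a repeated variable, such as the hydra
    clause head from the earlier test, is not linear:

    ?- term_linear(hydra(N, s(X, X))).
    false.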

    The bad thing is that this holds only for Dogelog Player,
    which has the smarter head unification, while it
    doesn't hold for the former Jekejeke Prolog,
    which has an ancient PLM design. So Jekejeke Prolog
    indeed got a little slower with Cyclic Term Unification,
    and I don't really know what I could do against it.

    I had old versions of Jekejeke Prolog which did
    indeed do special linear head compilation, and linear
    head unification execution, as some experiment in
    occurs check. But I won't resurrect this special
    compilation. I am rather preparing the final graveyard
    for Jekejeke Prolog. But there are still a few things
    to harvest from the former Jekejeke Prolog,
    so the final stab might come in 1-2 years.

    Bye

    Mild Shock schrieb:
    Hi,

    BTW: Here is a test without WASM:

    /* Trealla Prolog 2.82.12 WSL2 */
    ?- time(mtak).
    % Time elapsed 0.734s, 8873504 Inferences, 12.091 MLips
       true.

    /* SWI Prolog 9.0.4 WSL2 */
    ?- time(mtak).
    % 3,943,790 inferences, 0.210 CPU in 0.217 seconds
    (97% CPU, 18777906 Lips)
    true.

    /* Dogelog Player 1.3.6 Windows 11 */
    ?- time(mtak).
    % Zeit 412 ms, GC 0 ms, Lips 20341271, Uhr 16.08.2025 11:40
    true.

    /* Jekejeke Prolog 1.7.3 Windows 11 */
    ?- time(mtak).
    % Zeit 664 ms, GC 3 ms, Uhr 16.08.2025 11:41
    true.

    I didn't test Scryer Prolog because it has no GC.
    I keep testing Jekejeke Prolog because it has
    reference counting. But Dogelog Player, without
    reference counting and true garbage collection,
    seems to be speedier: not as fast as SWI-Prolog,
    but faster than Trealla Prolog.

    So SWI-Prolog is still the Original Gangster (OG)
    of garbage collection (GC). Some Prolog systems
    like Ciao Prolog, ECLiPSe Prolog or SICStus Prolog
    might be even faster, since it seems to me they use
    native backend compilation schemes, AOT or JIT. The
    tested systems above are all more of the
    interpreter-type Prolog systems.

    Bye

    Mild Shock schrieb:
    Hi,

    It seems the gist of WebPL is to examine
    dynamic variable shunting as part of a
    garbage collection strategy, as found here:

    Variable Shunting for the WAM
    Sahlin, D. and Carlsson, M. - 1991
    https://www.diva-portal.org/smash/get/diva2:1041385/FULLTEXT01.pdf

    Does it pay off? I am testing
    Version II of mtak:

    /* WebPL GC Rust-WAMS */
    True
    (1875.2ms)
    No more results

    /* Trealla Prolog C-WASM */
    True
    (1269.5ms)
    No more results

    /* SWI-Prolog C-WASM */
    True
    (1368.9ms)
    No more results

    The 1875.2ms are worse than my 1489 ms. The
    1269.5ms and 1368.9ms are only slightly better
    than my 1489 ms.

    I would say static shunting is the secret ingredient
    of Dogelog Player that makes it fast sometimes!
    Just think about it: in the above, JavaScript is
    compared to WASM. What's going on? Of course
    one should repeat the test with the newest
    SWI-Prolog and Trealla Prolog. I don't know
    whether there is a site that does all the updates.

    Bye

    BTW: For between/3 I was using:

    between(Lo, Lo, R) :- !, Lo = R.
    between(Lo, _, Lo).
    between(Lo, Hi, X) :- Lo2 is Lo+1, between(Lo2, Hi, X).

    Mild Shock schrieb:
    There is a difference between this:

    /* Version I */
    fun(X, Y, Z, A) :-
         X =< Y, !,
         Z = A.
    fun(X, Y, Z, A) :-
         X1 is X - 1,
         fun(X1, Y, Z, A1),
         Y1 is Y - 1,
         fun(Y1, Z, X, A2),
         Z1 is Z - 1,
         fun(Z1, X, Y, A3),
         fun(A1, A2, A3, A).

    And this in Dogelog Player:

    /* Version II */
    fun(X, Y, Z, A) :-
         X =< Y, !,
         Z = A.
    fun(X, Y, Z, A) :-
         X1 is X - 1,
         Y1 is Y - 1,
         Z1 is Z - 1,
         fun(X1, Y, Z, A1),
         fun(Y1, Z, X, A2),
         fun(Z1, X, Y, A3),
         fun(A1, A2, A3, A).

    Usually I am benchmarking Version I, but Version II
    gives Dogelog Player the opportunity to do more
    static Variable Shunting, which is seen in the
    performance. Note: it's JavaScript, not WASM:

    /* Version I */
    % Zeit 1723 ms, GC 6 ms, Lips 4863960, Uhr 16.08.2025 11:21

    /* Version II */
    % Zeit 1489 ms, GC 2 ms, Lips 5628343, Uhr 16.08.2025 11:21

    Was testing:

    mtak :- between(1,31,_), fun(18, 12, 6, _), fail.
    mtak.

    :- time(mtak).



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sun Aug 17 18:00:02 2025

    Hi,

    The paper by Sahlin and Carlsson from 1991
    did not yet ring a bell. But it suggests testing
    something with primes and freeze. Let's do
    primes as suggested, but without freeze. SWI-Prolog
    seems not to be the OG of GC. Putting aside Sahlin
    and Carlsson, it's a typical example of a lot of
    intermediate results that can be discarded by
    a garbage collection. Every candidate number that
    is not a prime number can be removed from the
    trail; such numbers become unreachable in the first
    clause of search/3.

    Besides this obvious unreachability task, I don't
    have statistics and don't see immediately where
    large variable instantiation chains are supposed
    to be created. At least not in my Prolog system,
    since a result variable is passed without binding
    it to a local variable; this "shunting" happens
    independently of neck tests and the "shunting"
    there. The result variable passing is extremely
    simple to implement and could be what is effective
    here, besides the reachability thingy. At least the
    1 ms GC time in Dogelog Player shows that the
    reachability thingy is the minor effort or
    optimization to get nice performance:


    /* WebPL GC */
    (1846.1ms)

    /* Dogelog Player 1.3.6 for JavaScript (16.08.2025) */
    % Zeit 2992 ms, GC 1 ms, Lips 2514202, Uhr 17.08.2025 17:44

    /* SWI-Prolog WASM */
    (4204.2ms)

    /* Trealla Prolog WASM */
    (23568.9ms)

    The test code was:

    test :-
        len(L, 1000),
        primes(L, _).

    primes([], 1).
    primes([J|L], J) :-
        primes(L, I),
        K is I+1,
        search(L, K, J).

    search(L, I, J) :-
        mem(X, L),
        I mod X =:= 0, !,
        K is I+1,
        search(L, K, J).
    search(_, I, I).

    mem(X, [X|_]).
    mem(X, [_|Y]) :-
        mem(X, Y).

    len([], 0) :- !.
    len([_|L], N) :-
        N > 0,
        M is N-1,
        len(L, M).
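
    For reference, unfolding the definitions above on a small instance
    (nothing beyond what the code says): the list gets filled with the
    first N primes in descending order, and the second argument of
    primes/2 carries the largest one.

    % ?- len(L, 4), primes(L, P).
    % L = [7, 5, 3, 2],
    % P = 7.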

    Bye
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sun Aug 17 18:24:15 2025

    Hi,

    Here are some test results when testing
    on the desktop and not on the Web. With desktop
    Prolog versions I find:

    /* SWI-Prolog 9.3.28 */
    % 7,506,637 inferences, 0.578 CPU in 0.567 seconds

    /* Dogelog Player 1.3.6 for Java (16.08.2025) */
    % Zeit 803 ms, GC 0 ms, Lips 9367988, Uhr 17.08.2025 18:03

    /* Scryer Prolog 0.9.4-592 */
    % CPU time: 0.838s, 7_517_613 inferences

    /* Trealla Prolog 2.82.12 */
    % Time elapsed 2.315s, 11263917 Inferences, 4.866 MLips

    Bye

    Mild Shock schrieb:
    Hi,

    The paper by Sahlin and Carlsson from 1991
    did not yet ring a bell. But it suggests testing
    something with primes and freeze. Let's do
    primes as suggested, but without freeze. SWI-Prolog
    seems not to be the OG of GC. Putting aside Sahlin
    and Carlsson, it's a typical example of a lot of
    intermediate results that can be discarded by
    a garbage collection. Every candidate number that
    is not a prime number can be removed from the
    trail; such numbers become unreachable in the first
    clause of search/3.

    Besides this obvious unreachability task, I don't
    have statistics and don't see immediately where
    large variable instantiation chains are supposed
    to be created. At least not in my Prolog system,
    since a result variable is passed without binding
    it to a local variable; this "shunting" happens
    independently of neck tests and the "shunting"
    there. The result variable passing is extremely
    simple to implement and could be what is effective
    here, besides the reachability thingy. At least the
    1 ms GC time in Dogelog Player shows that the
    reachability thingy is the minor effort or
    optimization to get nice performance:


    /* WebPL GC */
    (1846.1ms)

    /* Dogelog Player 1.3.6 for JavaScript (16.08.2025) */
    % Zeit 2992 ms, GC 1 ms, Lips 2514202, Uhr 17.08.2025 17:44

    /* SWI-Prolog WASM */
    (4204.2ms)

    /* Trealla Prolog WASM */
    (23568.9ms)

    The test code was:

    test :-
       len(L, 1000),
       primes(L, _).

    primes([], 1).
    primes([J|L], J) :-
       primes(L, I),
       K is I+1,
       search(L, K, J).

    search(L, I, J) :-
       mem(X, L),
       I mod X =:= 0, !,
       K is I+1,
       search(L, K, J).
    search(_, I, I).

    mem(X, [X|_]).
    mem(X, [_|Y]) :-
       mem(X, Y).

    len([], 0) :- !.
    len([_|L], N) :-
       N > 0,
       M is N-1,
       len(L, M).

    Bye

    --- Synchronet 3.21a-Linux NewsLink 1.2