• Book Project: WomanLogic (Re: USA is shitting its pants)

    From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Thu Nov 14 22:25:22 2024
    From Newsgroup: comp.lang.prolog

    Hi,

    Now I had the idea of a new book project,
    termed "WomanLogic". It would be an
    introduction to logic and computational

    thinking, by means of Prolog. Especially
    taylored and respecting the needs of a
    female brain. Just in the spirit of

    extending the "WomanSphere":

    "She Will Be in the Shop": Women's Sphere of
    Trade in Eighteenth-Century Philadelphia and New York
    Author(s): Patricia Cleary
    Source: The Pennsylvania Magazine of History and Biography,
    Vol. 119, No. 3 (Jul., 1995), pp. 181-202
    doi: 10.2307/20092959

    "WomanLogic" would enabling woman to participate
    in Web 3.0, like doing crypto trading or
    program AI robot traders?

    Bye

    Mild Shock schrieb:

    You know USA has a problem,
    when Oracle enters the race:

To source the 131,072-GPU AI "supercluster,"
Larry Ellison appealed directly to Jensen Huang,
    during a dinner joined by Elon Musk at Nobu.
    "I would describe the dinner as me and Elon
    begging Jensen for GPUs. Please take our money.
    We need you to take more of our money. Please!” https://twitter.com/benitoz/status/1834741314740756621

    Meanwhile a contender in Video GenAI
    FLUX.1 from Germany, Hurray! With Open Source:

    OK. Now I'm Scared... AI Better Than Reality https://www.youtube.com/watch?v=cvMAVWDD-DU
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Thu Nov 14 22:26:03 2024
    From Newsgroup: comp.lang.prolog


OK, I used the help of the eloquent ChatGPT:

    "If robots were to take over traditional
    domestic tasks, the human brain, including
    the female brain, wouldn't fundamentally
    change in terms of structure or biology.
    However, there could be shifts in behavior,
    focus, and skills due to lifestyle and
    societal changes.

    Cognitive and Skill Adaptation: With less
    need to manage household chores, there might
    be an emphasis on developing other skills,
    like problem-solving, creative thinking, or
    leadership.
    The brain’s plasticity allows it to adapt to
    new challenges and environments, so women
    might direct more cognitive resources
    toward career or personal interests.

    Emotional and Social Impacts: The psychological
    identity tied to caregiving roles may evolve.
    If traditional expectations are transformed,
    there could be changes in how women view
    themselves and their purpose within
    families or society.

    Focus on New Activities: Freed from repetitive
    tasks, women's brains could be stimulated in
    new ways, perhaps through increased social
    interaction, lifelong learning, or
    creative pursuits.

Overall, while the physical structure of the
brain wouldn’t change, the way women think,
work, and engage with the world could evolve
to reflect new societal roles and opportunities."

    Mild Shock schrieb:
    Hi,

    Now I had the idea of a new book project,
    termed "WomanLogic". It would be an
    introduction to logic and computational

    thinking, by means of Prolog. Especially
    taylored and respecting the needs of a
    female brain. Just in the spirit of

    extending the "WomanSphere":

    "She Will Be in the Shop": Women's Sphere of
    Trade in Eighteenth-Century Philadelphia and New York
    Author(s): Patricia Cleary
    Source: The Pennsylvania Magazine of History and Biography,
    Vol. 119, No. 3 (Jul., 1995), pp. 181-202
    doi: 10.2307/20092959

    "WomanLogic" would enabling woman to participate
    in Web 3.0, like doing crypto trading or
    program AI robot traders?

    Bye

    Mild Shock schrieb:

    You know USA has a problem,
    when Oracle enters the race:

    To source the 131,072 GPU Al "supercluster,"
    Larry Ellison, appealed directly to Jensen Huang,
    during a dinner joined by Elon Musk at Nobu.
    "I would describe the dinner as me and Elon
    begging Jensen for GPUs. Please take our money.
    We need you to take more of our money. Please!”
    https://twitter.com/benitoz/status/1834741314740756621

    Meanwhile a contender in Video GenAI
    FLUX.1 from Germany, Hurray! With Open Source:

    OK. Now I'm Scared... AI Better Than Reality
    https://www.youtube.com/watch?v=cvMAVWDD-DU

    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sun Dec 1 00:56:19 2024
    From Newsgroup: comp.lang.prolog


That's a funny quote:

    "Once you have a truly massive amount of information
    integrated as knowledge, then the human-software
    system will be superhuman, in the same sense that
    mankind with writing is superhuman compared to
    mankind before writing."

    https://en.wikipedia.org/wiki/Douglas_Lenat#Quotes

    Mild Shock schrieb:
I told you so, not worth a dime:

I have something to share with you. After much reflection,
I have made the difficult decision to leave OpenAI. https://twitter.com/miramurati/status/1839025700009030027

    Who is stepping in with the difficult task, Sam Altman himself?

    The Intelligence Age
    September 23, 2024
    https://ia.samaltman.com/

    Mild Shock schrieb:
    Hi,

    The blue are AfD, the green are:

    German greens after losing badly
    https://www.dw.com/en/german-greens-suffer-major-loss-of-votes-in-eu-elections-nina-haase-reports/video-69316755


    Time to start a yellow party, the first party
    with an Artificial Intelligence Ethics agenda?

    Bye

P.S.: Here I tried some pig-wrestling with
ChatGPT, demonstrating that Mira Murati is just
a nice face. But ChatGPT is just like a child,

spamming me with large bullet lists from
its huge lexical memory, without any deep
understanding. But it also gave me an interesting

list of potential high-calibre AI critics. Any new
Greta Thunberg of Artificial Intelligence
Ethics among them?

    Mira Murati Education Background
    https://chatgpt.com/c/fbc385d4-de8d-4f29-b925-30fac75072d4


    Mild Shock schrieb:
What bullshit:

    Another concern is the potential for AI to displace
    jobs and exacerbate economic inequality. A recent
    study by McKinsey estimates that up to 800 million
    jobs could be automated by 2030. While Murati believes
    that AI will ultimately create more jobs than it
    displaces, she acknowledges the need for policies to
    support workers through the transition, such as job
    retraining programs and strengthened social safety nets.
    https://expertbeacon.com/mira-murati-shaping-the-future-of-ai-ethics-and-innovation-at-openai/


Let's say there is a wine valley. All workers
are replaced by AI robots. Where do they go?
In some cultures you don't find people over
30 who are lifelong learners. What should they

learn? In another valley where they harvest
oranges, they also replaced everybody with AI
robots. And so on in the next valley, and the
next valley. We need NGOs and a Greta Thunberg

for AI ethics, not a nice face from OpenAI.


    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sun Dec 1 01:08:32 2024
    From Newsgroup: comp.lang.prolog

    Hi,

Let's say I have to choose between pig-wrestling with a
grammar-nazi Stack Overflow user with 100k reputation, or
interacting with ChatGPT, which puts a lot of

effort into understanding the least cue I give and isn't
shut into English only; you can also use it in
German, Turkish, etc., whatever.

Whom do I use as a programming companion, Stack Overflow
or ChatGPT? I think ChatGPT is the clear winner;
it doesn't feature the abomination of a virtual

prison like Stack Overflow does. Or, as Cycorp, Inc. already
put it decades ago:

    Common Sense Reasoning – From Cyc to Intelligent Assistant
    Doug Lenat et al. - August 2006
    2 The Case for an Ambient Research Assistant
    2.3 Components of a Truly Intelligent Computational Assistant
    Natural Language:
    An assistant system must be able to remember
    questions, statements, etc. from the user, and
    what its own response was, in order to understand
    the kinds of language ‘shortcuts’ people normally use
    in context.
    https://www.researchgate.net/publication/226813714

    Bye

    Mild Shock schrieb:

    Thats a funny quote:

    "Once you have a truly massive amount of information
    integrated as knowledge, then the human-software
    system will be superhuman, in the same sense that
    mankind with writing is superhuman compared to
    mankind before writing."

    https://en.wikipedia.org/wiki/Douglas_Lenat#Quotes

    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sun Dec 1 11:42:44 2024
    From Newsgroup: comp.lang.prolog

    Hi,

ChatGPT is definitely unreliable.

So is Stack Overflow; you have no guarantee
of getting a reliable answer. Sometimes you even
get biased nonsense, due to the over-representation

of certain communities on Stack Overflow.
But with ChatGPT it is easier to re-iterate
a problem and explore solutions;

you don't get punished for sloppy questions,
or for changing topic mid-flight, exploring
corner cases, digging deeper and deeper.

ChatGPT certainly beats Stack Overflow.

Also, Stack Overflow is extremely hysterical about
keeping every comment trail, and has very
slow garbage collection. I think Stack Overflow

automatically deletes an answer with negative
votes after a while. On the other hand, ChatGPT
keeps a sidebar with all the interactions,

and you can delete an interaction any time you
want to do so. There is no maniacal idea of
keeping interactions forever. Stack Overflow possibly

only keeps these interactions to be able to
send people to its virtual prison. Basically
they have become a perverted para-governmental

institution that exercises violence.

    Mild Shock schrieb:
    Hi,

    Lets say I have to chose between pig wrestle with a
    grammar nazi stackoverflow user with 100k reputation, or
    to interact with ChatGPT that puts a lot of

    effort to understand the least cue I give, isn't
    shot in to english only, you can also use it with
    german, turkish, etc.. what ever.

    Who do I use as a programmimg companion, stackoverflow
    or ChatGPT. I think ChatGPT is the clear winner,
    it doesn't feature the abomination of a virtual

    prison like stackoverflow. Or as Cycorp, Inc has put
    it already decades ago:

    Common Sense Reasoning – From Cyc to Intelligent Assistant
    Doug Lenat et al. - August 2006
    2 The Case for an Ambient Research Assistant
    2.3 Components of a Truly Intelligent Computational Assistant
    Natural Language:
    An assistant system must be able to remember
    questions, statements, etc. from the user, and
    what its own response was, in order to understand
    the kinds of language ‘shortcuts’ people normally use
    in context.
    https://www.researchgate.net/publication/226813714

    Bye

    Mild Shock schrieb:

    Thats a funny quote:

    "Once you have a truly massive amount of information
    integrated as knowledge, then the human-software
    system will be superhuman, in the same sense that
    mankind with writing is superhuman compared to
    mankind before writing."

    https://en.wikipedia.org/wiki/Douglas_Lenat#Quotes


    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sun Dec 1 14:34:22 2024
    From Newsgroup: comp.lang.prolog

    Hi,

Given that Scryer Prolog is dead,
this made me smile: traces of Scryer Prolog

are found in the FLOPS 2024 proceedings:

7th International Symposium, FLOPS 2024,
Kumamoto, Japan, May 15–17, 2024, Proceedings
https://www.cs.ox.ac.uk/jeremy.gibbons/flops2024.pdf

So why did it flop? Missing garbage collection
in the Prolog system? Or is it to be expected
that ChatGPT will also kill Scryer Prolog?

Or simply a problem with using Rust as the
underlying host language?

    Bye

    Mild Shock schrieb:

The biggest flop in logic programming
history: Scryer Prolog is dead. The poor
thing is a Prolog system without garbage

collection, not very useful. So how will
Austria get out of all this?
With 50 PhDs and 10 postdocs?

    "To develop its foundations, BILAI employs a
    Bilateral AI approach, effectively combining
    sub-symbolic AI (neural networks and machine learning)
    with symbolic AI (logic, knowledge representation,
    and reasoning) in various ways."

    https://www.bilateral-ai.net/jobs/.

    LoL

    Mild Shock schrieb:

    You know USA has a problem,
    when Oracle enters the race:

    To source the 131,072 GPU Al "supercluster,"
    Larry Ellison, appealed directly to Jensen Huang,
    during a dinner joined by Elon Musk at Nobu.
    "I would describe the dinner as me and Elon
    begging Jensen for GPUs. Please take our money.
    We need you to take more of our money. Please!”
    https://twitter.com/benitoz/status/1834741314740756621

    Meanwhile a contender in Video GenAI
    FLUX.1 from Germany, Hurray! With Open Source:

    OK. Now I'm Scared... AI Better Than Reality
    https://www.youtube.com/watch?v=cvMAVWDD-DU

    Mild Shock schrieb:

    The carbon emissions of writing and illustrating
    are lower for AI than for humans
    https://www.nature.com/articles/s41598-024-54271-x

    Perplexity CEO Aravind Srinivas says that the cost per
    query in AI models has decreased by 100x in the past
    2 years and quality will improve as hallucinations
    decrease 10x per year
    https://twitter.com/tsarnick/status/1830045611036721254

Disclaimer: Can't verify the latter claim... need to find a paper.


    Mild Shock schrieb:

    Your new Scrum Master is here! - ChatGPT, 2023
    https://www.bbntimes.com/companies/ai-will-make-agile-coaches-and-scrum-masters-redundant-in-less-than-2-years


    LoL

Thomas Alva Edison wrote on Tuesday, July 10, 2018 at 15:28:05
UTC+2:
    Prolog Class Signpost - American Style 2018
    https://www.youtube.com/watch?v=CxQKltWI0NA




    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Jan 24 17:57:17 2025
    From Newsgroup: comp.lang.prolog

    Hi,

Just noticed that SICStus Prolog says that
their mode declaration is a dummy declaration
and does nothing. Now I tried whether I can force

SWI-Prolog to accept two differently hand-compiled clauses:

test1(X,Y) :- Y = j(C,D), g(C) = A, h(D) = B, f(A,B) = X.

test2(X,Y) :- X = f(A,B), A = g(C), B = h(D), j(C,D) = Y.

Difficult to achieve in SWI-Prolog, since it
orders unifications on its own; test1/2 and test2/2
will behave the same, since they are essentially the same:

    /* SWI-Prolog 9.3.19 */
    ?- listing(test1/2), listing(test2/2).
test1(f(A, B), j(C, D)) :-
    A=g(C),
    B=h(D).

test2(f(A, B), j(C, D)) :-
    A=g(C),
    B=h(D).

But maybe that is not necessary, since SWI-Prolog has an
advanced instruction set and an advanced representation
of Prolog logical variables?
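
Whether the hand-ordering survives at the machine level can be
checked directly; a sketch, assuming SWI-Prolog's developer
predicate vm_list/1 is available in this build:

/* dump the virtual machine code of both predicates; if the head
   unifications compile to the same instruction sequence, the
   source-level ordering makes no difference */
?- vm_list(test1/2), vm_list(test2/2).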

    Bye

    Mild Shock schrieb:
    Hi,

    Given that Scryer Prolog is dead.
    This made me smile, traces of Scryer Prolog

    are found in FLOPs 2024 proceedings:

    7th International Symposium, FLOPS 2024,
    Kumamoto, Japan, May 15–17, 2024, Proceedings https://www.cs.ox.ac.uk/jeremy.gibbons/flops2024.pdf

    So why did it flop? Missing garbage collection
    in the Prolog System? Or did or is it to estimate
    that ChatGPT will also kill Scryer Prolog?

    Or simply a problem of using Rust as the
    underlying host language?

    Bye


    --- Synchronet 3.20c-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Jan 24 17:58:06 2025
    From Newsgroup: comp.lang.prolog

    Hi,

Would need more testing, but the
present example is immune:

    /* SWI-Prolog 9.3.19 */
    ?- X = f(g(1),h(2)), time((between(1,1000000,_), test1(X, Y), fail; true)).
    % 1,999,998 inferences, 0.094 CPU in 0.100 seconds (93% CPU, 21333312 Lips)

    ?- X = f(g(1),h(2)), time((between(1,1000000,_), test2(X, Y), fail; true)).
    % 1,999,998 inferences, 0.094 CPU in 0.100 seconds (93% CPU, 21333312 Lips)

    ?- Y = j(1,2), time((between(1,1000000,_), test1(X, Y), fail; true)).
    % 1,999,998 inferences, 0.109 CPU in 0.100 seconds (109% CPU, 18285696 Lips)

    ?- Y = j(1,2), time((between(1,1000000,_), test2(X, Y), fail; true)).
    % 1,999,998 inferences, 0.094 CPU in 0.102 seconds (92% CPU, 21333312 Lips)

    Not all Prolog systems are that lucky:

    /* Scryer Prolog 0.9.4-286 */
    ?- X = f(g(1),h(2)), time((between(1,1000000,_), test1(X, Y), fail; true)).
    % CPU time: 1.163s, 11_000_108 inferences

    ?- X = f(g(1),h(2)), time((between(1,1000000,_), test2(X, Y), fail; true)).
    % CPU time: 1.248s, 11_000_131 inferences

    ?- Y = j(1,2), time((between(1,1000000,_), test1(X, Y), fail; true)).
    % CPU time: 0.979s, 11_000_131 inferences

    ?- Y = j(1,2), time((between(1,1000000,_), test2(X, Y), fail; true)).
    % CPU time: 1.338s, 11_000_131 inferences

    Bye

    Mild Shock schrieb:
    Hi,

    Just noticed that SICStus Prolog says that
    their mode declaration is a dummy declaration,
    does nothing. Now I tried whether I can force

    SWI Prolog to accept different manually compiled clauses:

    test1(X,Y) :- Y = j(C,D), g(C) = A, h(D) = B, f(A,B) = X.

    test2(X,Y) :- X = f(A,B), A = g(C), B = h(D), j(C,D) = Y.

    Difficult to archive in SWI-Prolog, since it
    orders unification on its own, test1/2 and test2/2
    will behave the same, since they are essentially the same:

    /* SWI-Prolog 9.3.19 */
    ?- listing(test1/2), listing(test2/2).
    test1(f(A, B), j(C, D)) :-
        A=g(C),
        B=h(D).

    test2(f(A, B), j(C, D)) :-
        A=g(C),
        B=h(D).

    But maybe not necessary since SWI-Prolog has an
    advanced instruction set and advanced Prolog
    logical variable representation?

    Bye

    Mild Shock schrieb:
    Hi,

    Given that Scryer Prolog is dead.
    This made me smile, traces of Scryer Prolog

    are found in FLOPs 2024 proceedings:

    7th International Symposium, FLOPS 2024,
    Kumamoto, Japan, May 15–17, 2024, Proceedings
    https://www.cs.ox.ac.uk/jeremy.gibbons/flops2024.pdf

    So why did it flop? Missing garbage collection
    in the Prolog System? Or did or is it to estimate
    that ChatGPT will also kill Scryer Prolog?

    Or simply a problem of using Rust as the
    underlying host language?

    Bye



    --- Synchronet 3.20c-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sat Jan 25 14:13:33 2025
    From Newsgroup: comp.lang.prolog


    Possibly it does round away from zero and not half-even:

    /* Trealla Prolog 2.63.33 & SWI-Prolog 9.0.4 */
    ?- format('~0f ~0f', [1.5, 2.5]), nl.
    2 2

    /* Scryer Prolog 0.9.4-286 */
    ?- format("~0f ~0f", [1.5, 2.5]), nl.
    1 2

A pity there is no Prolog Improvement Proposal (PIP) for format/2.

    https://prolog-lang.org/ImplementersForum/PIPs
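
For reference, a minimal sketch (my own illustration, not taken
from any of the systems above) of the two rounding policies in
plain Prolog, so the differing ~0f outputs can be reproduced
explicitly for non-negative floats:

round_away_from_zero(X, R) :-      % 1.5 -> 2, 2.5 -> 3
    R is floor(X + 0.5).

round_half_even(X, R) :-           % 1.5 -> 2, 2.5 -> 2
    F is floor(X),
    D is X - F,
    (   D < 0.5 -> R = F
    ;   D > 0.5 -> R is F + 1
    ;   0 =:= F mod 2 -> R = F     % tie: take the even neighbour
    ;   R is F + 1
    ).

?- round_half_even(2.5, R).
R = 2.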
    --- Synchronet 3.20c-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sat Jan 25 20:44:02 2025
    From Newsgroup: comp.lang.prolog

    Hi,

Let's say there are at least two unification-spilling
rewriting techniques in a Prolog system
that would eliminate a (=)/2 call:

/* Left Spilling into the Head */
p(V, Q) :- V = T, ...         ~~>   p(T, Q) :- ...

/* Right Spilling into a Goal */
..., V = T, p(V, Q), ...      ~~>   ..., p(T, Q), ...

Maybe the head movement and the indexing benefit
in SWI-Prolog were discovered because of DCG translation
and not in order to eliminate mode-directed compilation.

    Take this DCG rule:

    p --> [a], !, [b].

    I find that SWI-Prolog does left spilling:

    /* SWI-Prolog 9.3.19 */
    ?- listing(p/2).
p([a|A], B) :-
    !,
    C=A,
    C=[b|B].
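
To make the idea concrete, here is a minimal sketch (my own
illustration, not the code of any of the systems mentioned) of
right spilling: a fresh variable bound by (=)/2 is substituted
directly into the single argument position where it occurs in
the following goal:

spill_right((V = T, Goal0), Goal) :-
    var(V),
    Goal0 =.. [F|Args0],
    replace_var(V, T, Args0, Args),
    !,
    Goal =.. [F|Args].
spill_right(Body, Body).

/* replace the one argument that is literally the variable V by T */
replace_var(V, T, [A|As], [T|As]) :- A == V, !.
replace_var(V, T, [A|As], [A|Bs]) :- replace_var(V, T, As, Bs).

?- spill_right((X = f(1), p(X, Y)), G).
G = p(f(1), Y).

A real compiler would additionally check that V occurs nowhere
else, otherwise the rewrite is not meaning-preserving.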

    Bye

    Mild Shock schrieb:
    Hi,

    Would need more testing but the
    present example is immune:

    /* SWI-Prolog 9.3.19 */
    ?- X = f(g(1),h(2)), time((between(1,1000000,_), test1(X, Y), fail; true)).
    % 1,999,998 inferences, 0.094 CPU in 0.100 seconds (93% CPU, 21333312 Lips)

    ?- X = f(g(1),h(2)), time((between(1,1000000,_), test2(X, Y), fail; true)).
    % 1,999,998 inferences, 0.094 CPU in 0.100 seconds (93% CPU, 21333312 Lips)

    ?- Y = j(1,2), time((between(1,1000000,_), test1(X, Y), fail; true)).
    % 1,999,998 inferences, 0.109 CPU in 0.100 seconds (109% CPU, 18285696
    Lips)

    ?- Y = j(1,2), time((between(1,1000000,_), test2(X, Y), fail; true)).
    % 1,999,998 inferences, 0.094 CPU in 0.102 seconds (92% CPU, 21333312 Lips)

    Not all Prolog systems are that lucky:

    /* Scryer Prolog 0.9.4-286 */
    ?- X = f(g(1),h(2)), time((between(1,1000000,_), test1(X, Y), fail; true)).
       % CPU time: 1.163s, 11_000_108 inferences

    ?- X = f(g(1),h(2)), time((between(1,1000000,_), test2(X, Y), fail; true)).
       % CPU time: 1.248s, 11_000_131 inferences

    ?- Y = j(1,2), time((between(1,1000000,_), test1(X, Y), fail; true)).
       % CPU time: 0.979s, 11_000_131 inferences

    ?- Y = j(1,2), time((between(1,1000000,_), test2(X, Y), fail; true)).
       % CPU time: 1.338s, 11_000_131 inferences

    Bye

    --- Synchronet 3.20c-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sat Jan 25 20:44:46 2025
    From Newsgroup: comp.lang.prolog


    I made a little experiment introducing both
    left spilling and right spilling into a DCG
    translator, which previously had neither:

    /* Dogelog Player 1.3.0 */
    ?- listing(p/2).
p([a|A], B) :-
    !,
    A = [b|B].

    Now my benchmark suite DCG calculator runs 25% faster!

    Mild Shock schrieb:
    Hi,

    Lets say there are at least two unification
    spilling rewriting techniques in a Prolog system
    that would eliminate a (=)/2 call:

    /* Left Spilling into the Head */
    p(V, Q) :- V = T, ...         ~~>            p(T, Q) :- ...

    /* Right Spilling into a Goal */
    ..., V = T, p(V, Q), ...      ~~>            ..., p(T, Q), ...

    Maybe the head movement and the indexing benefit
    in SWI-Prolog was discovered because of DCG translation
    and not to eliminate mode directed compilation.

    Take this DCG rule:

    p --> [a], !, [b].

    I find that SWI-Prolog does left spilling:

    /* SWI-Prolog 9.3.19 */
    ?- listing(p/2).
    p([a|A], B) :-
        !,
        C=A,
        C=[b|B].

    Bye

    Mild Shock schrieb:
    Hi,

    Would need more testing but the
    present example is immune:

    /* SWI-Prolog 9.3.19 */
    ?- X = f(g(1),h(2)), time((between(1,1000000,_), test1(X, Y), fail;
    true)).
    % 1,999,998 inferences, 0.094 CPU in 0.100 seconds (93% CPU, 21333312
    Lips)

    ?- X = f(g(1),h(2)), time((between(1,1000000,_), test2(X, Y), fail;
    true)).
    % 1,999,998 inferences, 0.094 CPU in 0.100 seconds (93% CPU, 21333312
    Lips)

    ?- Y = j(1,2), time((between(1,1000000,_), test1(X, Y), fail; true)).
    % 1,999,998 inferences, 0.109 CPU in 0.100 seconds (109% CPU, 18285696
    Lips)

    ?- Y = j(1,2), time((between(1,1000000,_), test2(X, Y), fail; true)).
    % 1,999,998 inferences, 0.094 CPU in 0.102 seconds (92% CPU, 21333312
    Lips)

    Not all Prolog systems are that lucky:

    /* Scryer Prolog 0.9.4-286 */
    ?- X = f(g(1),h(2)), time((between(1,1000000,_), test1(X, Y), fail;
    true)).
        % CPU time: 1.163s, 11_000_108 inferences

    ?- X = f(g(1),h(2)), time((between(1,1000000,_), test2(X, Y), fail;
    true)).
        % CPU time: 1.248s, 11_000_131 inferences

    ?- Y = j(1,2), time((between(1,1000000,_), test1(X, Y), fail; true)).
        % CPU time: 0.979s, 11_000_131 inferences

    ?- Y = j(1,2), time((between(1,1000000,_), test2(X, Y), fail; true)).
        % CPU time: 1.338s, 11_000_131 inferences

    Bye


    --- Synchronet 3.20c-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Tue Feb 18 18:13:36 2025
    From Newsgroup: comp.lang.prolog

    Hi,

How it started:

Let's use silicon instead of germanium
https://en.wikipedia.org/wiki/William_Shockley

How it's going: Intel producing buggy chips and missing the AI train:

    Stock Collapse & Fire Sale: https://www.marketscreener.com/quote/stock/INTEL-CORPORATION-4829/news/Intel-Shares-Gain-After-Report-on-Potential-Break-Up-49088812/

    Will Elon Musk also buy a piece of the cake?

    Bye

    Mild Shock schrieb:

    Hi,

Whoa! ChatGPT for the Flintstones: Bloomberg

    Our long-term investment in AI is already
    available for fixed income securities.
    Try it for yourself! https://twitter.com/TheTerminal/status/1783473601632465352

    Did she just say Terminal? LoL

    Bye

    P.S.: But the display of the extracted logical
    query from the natural language phrase is quite
    cute. Can ChatGPT do the same?

    Mild Shock schrieb:
    To hell with GPUs. Here come the FPGA qubits:

    Iran’s Military Quantum Claim: It’s Only 99.4% Ridiculous
    https://hackaday.com/2023/06/15/irans-quantum-computing-on-fpga-claim-its-kinda-a-thing/


    The superposition property enables a quantum computer
    to be in multiple states at once.
    https://www.techtarget.com/whatis/definition/qubit

    Maybe their new board is even less suited for hitting
    a ship with a torpedo than some machine learning?



    --- Synchronet 3.20c-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Tue Feb 18 18:14:04 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Or just buy the dip, since they have something
    in the pipeline? This is quite unclear:

    Can Intel survive the valley of death? https://www.youtube.com/watch?v=OZrPOjnAyqs

ChatGPT leaves me with a question mark: Yes, Sapphire
    Rapids (Intel's 4th Gen Xeon) was too weak for what
    it was supposed to compete against.

    Too slow & inefficient for AI workloads → Nvidia/AMD
    took over. Overall: A stopgap product while Intel
    tries to recover with Granite Rapids (2024) and

    Sierra Forest (2024/2025). Would you say Intel is too
    far behind, or do you think they can still catch up?

    Bye

    Mild Shock schrieb:
    Hi,

    How it started:

    Lets use Silicon instead of Germanium https://en.wikipedia.org/wiki/William_Shockley

    How its going, Intel producing buggy chips and missing the AI train:

    Stock Collapse & Fire Sale: https://www.marketscreener.com/quote/stock/INTEL-CORPORATION-4829/news/Intel-Shares-Gain-After-Report-on-Potential-Break-Up-49088812/


    Will Elon Musk also buy a piece of the cake?

    Bye

    Mild Shock schrieb:

    Hi,

    Woa! ChatGPT for the Flintstones: Bloomberg

    Our long-term investment in AI is already
    available for fixed income securities.
    Try it for yourself!
    https://twitter.com/TheTerminal/status/1783473601632465352

    Did she just say Terminal? LoL

    Bye

    P.S.: But the display of the extracted logical
    query from the natural language phrase is quite
    cute. Can ChatGPT do the same?

    Mild Shock schrieb:
    To hell with GPUs. Here come the FPGA qubits:

    Iran’s Military Quantum Claim: It’s Only 99.4% Ridiculous
    https://hackaday.com/2023/06/15/irans-quantum-computing-on-fpga-claim-its-kinda-a-thing/


    The superposition property enables a quantum computer
    to be in multiple states at once.
    https://www.techtarget.com/whatis/definition/qubit

    Maybe their new board is even less suited for hitting
    a ship with a torpedo than some machine learning?




    --- Synchronet 3.20c-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Tue Mar 4 10:09:51 2025
    From Newsgroup: comp.lang.prolog

    Hi,

I wonder whether WASM will be fast some day.
I found this paper, which draws a rather dim picture,
but the paper is already a little bit old:

    Understanding the Performance of WebAssembly Application
    Weihang Wang et al. - 2021
    https://weihang-wang.github.io/papers/imc21.pdf

The takeaway is more or less that JIT-compiled JavaScript has
the same speed as WASM, with WASM sometimes giving a more
favorable outcome for Firefox than for Chrome and Edge:

JavaScript             Chrome    Firefox    Edge
D.1 Exec. Time (ms)     45.57     48.26     63.62
M.2 Exec. Time (ms)    249.60    167.03    201.68

WASM                   Chrome    Firefox    Edge
D.1 Exec. Time (ms)     65.23     39.65     83.53
M.2 Exec. Time (ms)    233.08    345.98    192.87

    Bye

    Mild Shock schrieb:
    Hi,

    Given that Scryer Prolog is dead.
    This made me smile, traces of Scryer Prolog

    are found in FLOPs 2024 proceedings:

    7th International Symposium, FLOPS 2024,
    Kumamoto, Japan, May 15–17, 2024, Proceedings https://www.cs.ox.ac.uk/jeremy.gibbons/flops2024.pdf

    So why did it flop? Missing garbage collection
    in the Prolog System? Or did or is it to estimate
    that ChatGPT will also kill Scryer Prolog?

    Or simply a problem of using Rust as the
    underlying host language?

    Bye

    Mild Shock schrieb:

    The biggest flop in logic programming
    history, scryer prolog is dead. The poor
    thing is a prolog system without garbage

    collection, not very useful. So how will
    Austria get out of all this?
    With 50 PhDs and 10 Postdocs?

    "To develop its foundations, BILAI employs a
    Bilateral AI approach, effectively combining
    sub-symbolic AI (neural networks and machine learning)
    with symbolic AI (logic, knowledge representation,
    and reasoning) in various ways."

    https://www.bilateral-ai.net/jobs/.

    LoL

    Mild Shock schrieb:

    You know USA has a problem,
    when Oracle enters the race:

    To source the 131,072 GPU Al "supercluster,"
    Larry Ellison, appealed directly to Jensen Huang,
    during a dinner joined by Elon Musk at Nobu.
    "I would describe the dinner as me and Elon
    begging Jensen for GPUs. Please take our money.
    We need you to take more of our money. Please!”
    https://twitter.com/benitoz/status/1834741314740756621

    Meanwhile a contender in Video GenAI
    FLUX.1 from Germany, Hurray! With Open Source:

    OK. Now I'm Scared... AI Better Than Reality
    https://www.youtube.com/watch?v=cvMAVWDD-DU

    Mild Shock schrieb:

    The carbon emissions of writing and illustrating
    are lower for AI than for humans
    https://www.nature.com/articles/s41598-024-54271-x

    Perplexity CEO Aravind Srinivas says that the cost per
    query in AI models has decreased by 100x in the past
    2 years and quality will improve as hallucinations
    decrease 10x per year
    https://twitter.com/tsarnick/status/1830045611036721254

    Disclaimer: Can't verify the later claim... need to find a paper.


    Mild Shock schrieb:

    Your new Scrum Master is here! - ChatGPT, 2023
    https://www.bbntimes.com/companies/ai-will-make-agile-coaches-and-scrum-masters-redundant-in-less-than-2-years


    LoL

Thomas Alva Edison wrote on Tuesday, July 10, 2018 at 15:28:05
UTC+2:
    Prolog Class Signpost - American Style 2018
    https://www.youtube.com/watch?v=CxQKltWI0NA





    --- Synchronet 3.20c-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Tue Mar 4 10:13:40 2025
    From Newsgroup: comp.lang.prolog

    Hi,

I made a test; you can run it from here, it's CORS-enabled:

    :- ['https://www.dogelog.ch/module/bench/suite.pl'].

    :- suite.

    First a WASM Prolog, showing warm run:

    nrev % 2,995,946 inferences, 1.417 CPU in 1.417 seconds
    crypt % 4,169,360 inferences, 2.776 CPU in 2.776 seconds
    deriv % 2,100,600 inferences, 1.555 CPU in 1.555 seconds
    poly % 2,088,942 inferences, 1.681 CPU in 1.681 seconds
    sortq % 3,627,136 inferences, 1.923 CPU in 1.923 seconds
    tictac % 1,013,322 inferences, 2.659 CPU in 2.659 seconds
    queens % 4,599,283 inferences, 2.496 CPU in 2.496 seconds
    query % 8,645,933 inferences, 5.189 CPU in 5.189 seconds
    mtak % 3,946,569 inferences, 1.642 CPU in 1.642 seconds
    perfect % 3,243,474 inferences, 1.498 CPU in 1.498 seconds
calc % 3,062,293 inferences, 1.791 CPU in 1.791 seconds
https://wasm.swi-prolog.org/wasm/tinker

    Then a JavaScript Prolog, showing warm run:

    nrev % Zeit 1413 ms, GC 0 ms, Lips 2127790, Uhr 04.03.2025 10:00
    crypt % Zeit 1204 ms, GC 0 ms, Lips 3461555, Uhr 04.03.2025 10:00
    deriv % Zeit 1270 ms, GC 0 ms, Lips 3094644, Uhr 04.03.2025 10:00
    poly % Zeit 1068 ms, GC 0 ms, Lips 2336682, Uhr 04.03.2025 10:00
    sortq % Zeit 1595 ms, GC 0 ms, Lips 2667571, Uhr 04.03.2025 10:00
    tictac % Zeit 1596 ms, GC 0 ms, Lips 1087015, Uhr 04.03.2025 10:00
    queens % Zeit 1718 ms, GC 0 ms, Lips 3322441, Uhr 04.03.2025 10:00
    query % Zeit 2764 ms, GC 0 ms, Lips 3132399, Uhr 04.03.2025 10:00
    mtak % Zeit 2129 ms, GC 8 ms, Lips 3936434, Uhr 04.03.2025 10:00
    perfect % Zeit 1415 ms, GC 0 ms, Lips 3317768, Uhr 04.03.2025 10:00
calc % Zeit 1314 ms, GC 0 ms, Lips 2922571, Uhr 04.03.2025 10:00
https://www.xlog.ch/runtab/doclet/docs/04_tutor/basic/example01/package.html

What's the cause of the discrepancy? 32-bit? Auto-yield?

    Bye

    Mild Shock schrieb:
    Hi,

    I wonder wether WASM will be fast some time.
    I found this paper which draws a rather dim picture,
    but the paper is already a little bit old:

    Understanding the Performance of WebAssembly Application
    Weihang Wang et al. - 2021
    https://weihang-wang.github.io/papers/imc21.pdf

    The take away is more or less that jitted JavaScript has
    same speed as WASM. With sometimes WASM more favorable
    outcome for Firefox than for Chrome and Edge:

    Java Script     Chrome    Firefox    Edge
    D.1 Exec. Time (ms)    45.57    48.26    63.62
    M.2 Exec. Time (ms)    249.60    167.03    201.68

    WASM    Chrome    Firefox    Edge
    D.1 Exec. Time (ms)    65.23    39.65    83.53
    M.2 Exec. Time (ms)    233.08    345.98    192.87

    Bye
    --- Synchronet 3.20c-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Mar 14 13:37:11 2025
    From Newsgroup: comp.lang.prolog

    Hi,

Not only $TSLA is on fire sale! Prolog
systems have also capitulated long ago.
Scryer Prolog and Trealla Prolog copy

some old CLP(X) nonsense based on attributed
variables. SWI-Prolog isn't better off.
Basically the USA and their ICLP venue

are dumbing down all of Prolog development,
so that nonsense such as this gets published:

    Automatic Differentiation in Prolog
Tom Schrijvers et al. - 2023
    https://arxiv.org/pdf/2305.07878

    It has the most stupid conclusion.

    "In future work we plan to explore Prolog’s meta-
    programming facilities (e.g., term expansion) to
    implement partial evaluation of revad/5 calls on
    known expressions. We also wish to develop further
    applications on top of our AD approach, such as
    Prolog-based neural networks and integration with
    existing probabilistic logic programming languages."

    As if term expansion would do anything good
    concerning the evaluation or training of neural
    networks. They are totally clueless!

    Bye

P.S.: The stupidity is even topped by the fact that people
have unlearned how to do symbolic algebra
in Prolog itself. They are not able to code it:

    ?- simplify(x+x+y-y,E).
    E = number(2)*x+y-y

    Simplification is hard (IMO).

    Instead they are now calling Python:

sym(A * B, S) :-
    !, sym(A, A1),
    sym(B, B1),
    py_call(operator:mul(A1, B1), S).

mys(S, A * B) :-
    py_call(sympy:'Mul', Mul),
    py_call(isinstance(S, Mul), @(true)),
    !, py_call(S:args, A0-B0),
    mys(A0, A),
    mys(B0, B).

Etc..

sympy(A, R) :-
    sym(A, S),
    mys(S, R).

    ?- sympy(x + y + 1 + x + y + -1, S).
    S = 2*x+2*y ;

This is the final nail in the coffin, the declaration
of the complete decline of Prolog. Full proof that
SWI-Prolog's Janus is indicative that we have reached

the valley of idiocracy in Prolog, and that there
are no more capable Prologers around.

    Mild Shock schrieb:
    Hi,

    Lets say I have to chose between pig wrestle with a
    grammar nazi stackoverflow user with 100k reputation, or
    to interact with ChatGPT that puts a lot of

    effort to understand the least cue I give, isn't
    shot in to english only, you can also use it with
    german, turkish, etc.. what ever.

    Who do I use as a programmimg companion, stackoverflow
    or ChatGPT. I think ChatGPT is the clear winner,
    it doesn't feature the abomination of a virtual

    prison like stackoverflow. Or as Cycorp, Inc has put
    it already decades ago:

    Common Sense Reasoning – From Cyc to Intelligent Assistant
    Doug Lenat et al. - August 2006
    2 The Case for an Ambient Research Assistant
    2.3 Components of a Truly Intelligent Computational Assistant
    Natural Language:
    An assistant system must be able to remember
    questions, statements, etc. from the user, and
    what its own response was, in order to understand
    the kinds of language ‘shortcuts’ people normally use
    in context.
    https://www.researchgate.net/publication/226813714

    Bye

    Mild Shock schrieb:

    Thats a funny quote:

    "Once you have a truly massive amount of information
    integrated as knowledge, then the human-software
    system will be superhuman, in the same sense that
    mankind with writing is superhuman compared to
    mankind before writing."

    https://en.wikipedia.org/wiki/Douglas_Lenat#Quotes


    --- Synchronet 3.20c-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Mar 14 14:06:05 2025
    From Newsgroup: comp.lang.prolog

Maybe real Prologers who could code it correctly
are now an endangered species, on the brink of
extinction? I easily find a 40-year-old reference that

shows how to compute symbolically with multivariate
polynomials. It doesn't have the same problem as the
bug in PRESS, which cannot render (1+x) + -1 as x:

Haygood, R. (1989): A Prolog Benchmark Suite for Aquarius,
Computer Science Division, University of California
Berkeley, April 30, 1989, see "poly"

The Prolog code is rooted in some Lisp code by
R.P. Gabriel. The Python code of sympy won't do much
else in its class Poly than use a recursive dense
representation and term orderings.

If only simplification and no division is needed,
then only canonicalization is needed. Handing
expressions over to Python might be useful for factorizing
polynomials or computing Gröbner bases,

since Prolog libraries for that are rarer. On
the other hand, I think the entry level for a simplification
in Prolog itself should not be so high (see the sketch
below). But education in this direction may already have died out.
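
To illustrate that entry level, here is a minimal sketch (my own
illustration, neither PRESS nor the Aquarius "poly" benchmark) that
canonicalizes a sum of atomic variables and numbers by collecting
coefficients, so that x+x+y-y comes out as 2*x rather than
number(2)*x+y-y (select/3 is the usual library(lists) predicate):

simplify(E, S) :-
    collect(E, 1, [], Cs),
    rebuild(Cs, S).

collect(A+B, Sign, Cs0, Cs) :- !,
    collect(A, Sign, Cs0, Cs1),
    collect(B, Sign, Cs1, Cs).
collect(A-B, Sign, Cs0, Cs) :- !,
    NegSign is -Sign,
    collect(A, Sign, Cs0, Cs1),
    collect(B, NegSign, Cs1, Cs).
collect(N, Sign, Cs0, Cs) :- number(N), !,
    M is Sign*N,
    add_coeff(1, M, Cs0, Cs).          /* constant term keyed by 1 */
collect(X, Sign, Cs0, Cs) :- atom(X),
    add_coeff(X, Sign, Cs0, Cs).

add_coeff(K, D, Cs0, Cs) :-
    (   select(K-C0, Cs0, Rest) -> C is C0+D, Cs = [K-C|Rest]
    ;   Cs = [K-D|Cs0]
    ).

rebuild([], 0).
rebuild([K-C|Cs], S) :-
    rebuild(Cs, S0),
    monomial(K, C, T),
    combine(S0, T, S).

monomial(_, 0, 0) :- !.                /* drop vanished terms */
monomial(1, C, C) :- !.
monomial(K, 1, K) :- !.
monomial(K, C, C*K).

combine(0, T, T) :- !.
combine(S, 0, S) :- !.
combine(S0, T, S0+T).

?- simplify(x+x+y-y, E).
E = 2*x.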

    Mild Shock schrieb:
    Hi,

    Not only $TSLA is on fire sale! Also
    Prolog system have capitualted long ago.
    Scryer Prolog and Trealla Prolog copy

    some old CLP(X) nonsense based on attributed
    variables. SWI-Prolog isn't better off.
    Basically the USA and their ICLP venue

    is dumbing down all of Prolog development,
    so that nonsense such as this is published:

    Automatic Differentiation in Prolog
    Schrijvers Tom et. al - 2023
    https://arxiv.org/pdf/2305.07878

    It has the most stupid conclusion.

    "In future work we plan to explore Prolog’s meta-
    programming facilities (e.g., term expansion) to
    implement partial evaluation of revad/5 calls on
    known expressions. We also wish to develop further
    applications on top of our AD approach, such as
    Prolog-based neural networks and integration with
    existing probabilistic logic programming languages."

    As if term expansion would do anything good
    concerning the evaluation or training of neural
    networks. They are totally clueless!

    Bye

    P.S.: The stupidity is even topped, that people
    have unlearned how to do symbolic algebra
    in Prolog itself. They are not able to code it:

    ?- simplify(x+x+y-y,E).
    E = number(2)*x+y-y

    Simplification is hard (IMO).

    Instead they are now calling Python:

    sym(A * B, S) :-
        !, sym(A, A1),
        sym(B, B1),
        py_call(operator:mul(A1, B1), S).

    mys(S, A * B) :-
        py_call(sympy:'Mul', Mul),
        py_call(isinstance(S, Mul), @(true)),
        !, py_call(S:args, A0-B0),
        mys(A0, A),
        mys(B0, B).

    Etc..

    sympy(A, R) :-
        sym(A, S),
        mys(S, R).

    ?- sympy(x + y + 1 + x + y + -1, S).
    S = 2*x+2*y ;

    This is the final nail in the coffin, the declaration
    of the complete decline of Prolog. Full proof that
    SWI-Prolog Janus is indicative that we have reached

    the valley of idiocracy in Prolog. And that there
    are no more capable Prologers around.

    Mild Shock schrieb:
    Hi,

    Lets say I have to chose between pig wrestle with a
    grammar nazi stackoverflow user with 100k reputation, or
    to interact with ChatGPT that puts a lot of

    effort to understand the least cue I give, isn't
    shot in to english only, you can also use it with
    german, turkish, etc.. what ever.

    Who do I use as a programmimg companion, stackoverflow
    or ChatGPT. I think ChatGPT is the clear winner,
    it doesn't feature the abomination of a virtual

    prison like stackoverflow. Or as Cycorp, Inc has put
    it already decades ago:

    Common Sense Reasoning – From Cyc to Intelligent Assistant
    Doug Lenat et al. - August 2006
    2 The Case for an Ambient Research Assistant
    2.3 Components of a Truly Intelligent Computational Assistant
    Natural Language:
    An assistant system must be able to remember
    questions, statements, etc. from the user, and
    what its own response was, in order to understand
    the kinds of language ‘shortcuts’ people normally use
    in context.
    https://www.researchgate.net/publication/226813714

    Bye

    Mild Shock schrieb:

    Thats a funny quote:

    "Once you have a truly massive amount of information
    integrated as knowledge, then the human-software
    system will be superhuman, in the same sense that
    mankind with writing is superhuman compared to
    mankind before writing."

    https://en.wikipedia.org/wiki/Douglas_Lenat#Quotes



    --- Synchronet 3.20c-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Mon May 19 10:54:05 2025
    From Newsgroup: comp.lang.prolog


How about splitting up the Prologue to Prolog into
multiple modules? To the best of my knowledge, websites
allow multiple documents and Prolog systems allow multiple

Prolog texts, so why make a one-size-fits-all page like here:

https://www.complang.tuwien.ac.at/ulrich/iso-prolog/prologue

By the way, it is then no longer a Prologue, which somehow implies
that the predicates are automatically loaded. It would
make more sense to start working on commons modules, like:

    maplist/n ~~> library(lists)
    call_nth/2 ~~> library(sequence)
    crypto_data_hash/3 ~~> library(crypto)
    Etc..
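
As an illustration of what such a commons module could look like,
here is a minimal sketch of library(sequence) exporting call_nth/2
(my own illustration, not an agreed reference implementation; it
relies on SWI-Prolog's non-backtrackable nb_setarg/3):

:- module(sequence, [call_nth/2]).

%% call_nth(:Goal, ?Nth)
%% True when Goal succeeds for the Nth time.
call_nth(Goal, Nth) :-
    State = count(0),
    Goal,
    arg(1, State, N0),
    N is N0 + 1,
    nb_setarg(1, State, N),
    N = Nth.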

    Revise this "The Prolog prologue is a possibly empty file to
    be included (7.4.2.7)." This is nonsense, if you add things
    that are just commons modules.
    --- Synchronet 3.21a-Linux NewsLink 1.2