• Philosophical Twist due to negligence (Was: Rene Descartes "Discours de la méthode" has fizzled out)

    From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Nov 14 11:43:52 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Descartes’ “divide problems into parts” works
    only for well-behaved, linear, decomposable systems.
    But it's just that the parts might end up as Schrödinger's
    equation. It could be that stable diffusion is the
    new constraint solver. In a sense, stable diffusion
    models (or other generative AI) are functioning as
    probabilistic, fuzzy constraint solvers — but in a
    very different paradigm from classical logic or
    formal methods. But what was neglected?

    - Cybernetics (1940s–50s)
    Focused on feedback loops, control, and self-regulation
    in machines and biological systems. Showed that
    decomposition can fail because subparts are interdependent.

    - Chaos Theory (1960s–80s)
    Nonlinear deterministic systems can produce unpredictable,
    sensitive dependence on initial conditions. Decomposition
    into parts is tricky: small errors explode, and “solving
    subparts” may not help predict the whole.

    - Santa Fe Institute & Complex Systems (1980s–present)
    Studied emergent behavior, networks, adaptation,
    self-organization. Linear, reductionist thinking fails
    to capture dynamics of economic, social, and ecological systems.

    Bye

    Mild Shock schrieb:
    Hi,

    How it started:

    https://conceptbase.sourceforge.net/

    How it's going:

    https://www.ibm.com/products/datastax

    The problem with claims such as " Formal languages,
    such as KAOS, are based on predicate logic and
    capture additional details about an application
    in a precise manner. They also provide a foundation
    for reasoning with information models." is that
    everything in the quoted sentence is wrong.

    Real AI systems scale by approximation,
    vectorization, distributed representations,
    and partial knowledge — not by globally
    consistent logical models. No classical requirements
    language or ontology captures the informal
    cognitive machinery that makes
    intelligence flexible. Intelligence needs the
    whole messy cognitive spectrum.

    Somehow DataStax looks like n8n married to AI embedding.
    I hope Amazon, Meta, Google, etc. get the message.
    I don't worry about Microsoft, they might come with
    something from their Encarta corner, and Copilot+ is
    more Local AI. After all, we need things like Wikidata
    in a Robot and not in a Data Center.

    LoL

    Bye

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Nov 14 13:50:08 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    The good thing is that while some people care
    for the foundation of reasoning, which also
    includes performance of "compiled" knowledge,
    some people are like Vestal Virgins that have
    never been touched by a running computer program,
    yet they gather in some SWI-Prolog forum:

    Take this question:

    @jp-diegidio Yes, term/goal expansion does not apply to assert.
    Does not apply to assert and call, as Jan was saying.
    Do you or anybody happen to know why that is so?

    Guess what? DCG expansion is also not applied to
    the so-called "dynamic database", see ISO core standard.
    My speculation is that it is for speed, and the idea is that
    assert/retract/clause are for already "compiled"
    knowledge, i.e. facts and rules produced by all the
    dozen pipelines that could sit in front of the dynamic
    database, before one gets to the bare-bones Horn clauses.

    LoL

    Bye

    P.S.: Try for yourself, no DCG processing:

    ?- assertz((a --> b)).
    true.

    ?- listing((-->)/2).
    :- dynamic (-->)/2.
    a-->b.

    https://wasm.swi-prolog.org/wasm/tinker
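
    For contrast, a minimal sketch of what the same rule yields when it
    goes through the consult-time DCG pipeline instead of assertz/1
    (variable names are illustrative; systems differ in detail):

    % a --> b, read from a source file, is term-expanded into roughly:
    a(S0, S) :- b(S0, S).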

    Mild Shock schrieb:
    Hi,

    Descartes’ “divide problems into parts” works
    only for well-behaved, linear, decomposable systems.
    But its just that parts might end up as Schrödingers

    equation. It could be that stable diffusion is the
    new constraint solver. In a sense, stable diffusion
    models (or other generative AI) are functioning as

    probabilistic, fuzzy constraint solvers — but in a
    very different paradigm from classical logic or
    formal methods. But what was neglected?

    - Cybernetics (1940s–50s)
    Focused on feedback loops, control, and self-regulation
    in machines and biological systems. Showed that
    decomposition can fail because subparts are interdependent.

    - Chaos Theory (1960s–80s)
    Nonlinear deterministic systems can produce unpredictable,
    sensitive dependence on initial conditions. Decomposition
    into parts is tricky: small errors explode, and “solving
    subparts” may not help predict the whole.

    - Santa Fe Institute & Complex Systems (1980s–present)
    Studied emergent behavior, networks, adaptation,
    self-organization. Linear, reductionist thinking fails
    to capture dynamics of economic, social, and ecological systems.

    Bye

    Mild Shock schrieb:
    Hi,

    How it started:

    https://conceptbase.sourceforge.net/

    How its going:

    https://www.ibm.com/products/datastax

    The problem with claims such as " Formal languages,
    such as KAOS, are based on predicate logic and
    capture additional details about an application
    in a precise manner. They also provide a foundation
    for reasoning with information models." is that
    every thing in the quoted sentence is wrong.

    Real AI systems scale by approximation,
    vectorization, distributed representations,
    and partial knowledge — not by globally
    consistent logical models. No classical requirements
    language or ontology captures the informal
    cognitive machinery that makes
    intelligence flexible. Intelligence needs the
    whole messy cognitive spectrum.

    Somehow DataStax looks like n8n married AI embedding.
    I hope Amazon, Meta, Google, etc.. get the message.
    I don't worry about Microsoft, they might come with

    something from their Encarta corner and Copilot+ is
    more Local AI. After all we need things like Wikidata
    in a Robot and not in a Data Center.

    LoL

    Bye


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Nov 14 14:15:30 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Give this man (@jp-diegidio) money to buy the
    ISO CORE standard. I wouldn't say the main
    argument is performance. If SWI-Prolog has a slow
    goal / term expansion framework, it's their problem.
    I guess the main argument for having a dynamic
    database as is, as defined in the ISO CORE standard,
    is "coreness". The idea of the ISO CORE standard
    is to define a Horn clause processor. It does
    not want to hide Horn clauses, and the interface
    for assert/retract is Horn clauses. In this
    way the ISO CORE standard becomes a lower-level
    foundation that is quite versatile and supports
    a couple of translation pipelines. It's a little
    bit the opposite of SWI-Prolog, with its unspecified dict
    madness. And the current PIP efforts go on somehow
    out of the blue, without specifying a new deep preprocessor
    foundation, while introducing certain concepts that might
    require preprocessors beyond goal expansion and
    term expansion. They might have the same gaps as here:

    https://simon.peytonjones.org/assets/pdfs/verse-icfp23.pdf

    Does it have assert/retract?

    Bye

    P.S.: The ISO core standard carefully navigates around
    another issue. While it specifies the dynamic database
    rather behaviourally and in much detail, including that the
    dynamic database gets only a very small, shallow preprocessing,
    it for example still requires that p(X) :- X is translated
    into p(X) :- call(X). It specifies the loader in the
    form of "prepare for execution". But static predicates
    share something with dynamic predicates. Their minimal
    shallow preprocessing likewise translates p(X) :- X
    into p(X) :- call(X).
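
    A minimal toplevel sketch of that shallow conversion; the printed
    answer term may vary by system, but the standard's conversion rule
    yields the call/1 wrapper:

    ?- assertz((p(X) :- X)).
    true.

    ?- clause(p(X), Body).
    Body = call(X).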

    Mild Shock schrieb:
    Hi,

    The good thing is while some people care
    for the foundation of reasoning, which also
    includes performance of "compiled" knowledge,

    some people are like Vestal Virgins that have
    never been touched by a running computer program,
    still they hord in some SWI-Prolog forum:

    Take this question:

    @jp-diegidio Yes, term/goal expansion does not apply to assert.
    Does not apply to assert and call, as Jan was saying.
    Do you or anybody happen to know why that is so?

    Guess what? DCG expansion is also not applied to
    the so called "dynamic database", see ISO core standard.
    My speculation it is for speed, and the idea is that

    assert/retract/clause are for already "compiled"
    knowledge, i.e. facts and rules produced by all the
    dozen pipelines that could sit in front the dynamic

    database, before one gets to the bare bone Horn clauses.

    LoL

    Bye

    P.S.: Try for yourself, no DCG processing:

    ?- assertz((a --> b)).
    true.

    ?- listing((-->)/2).
    :- dynamic (-->)/2.
    b.

    https://wasm.swi-prolog.org/wasm/tinker

    Mild Shock schrieb:
    Hi,

    Descartes’ “divide problems into parts” works
    only for well-behaved, linear, decomposable systems.
    But its just that parts might end up as Schrödingers

    equation. It could be that stable diffusion is the
    new constraint solver. In a sense, stable diffusion
    models (or other generative AI) are functioning as

    probabilistic, fuzzy constraint solvers — but in a
    very different paradigm from classical logic or
    formal methods. But what was neglected?

    - Cybernetics (1940s–50s)
    Focused on feedback loops, control, and self-regulation
    in machines and biological systems. Showed that
    decomposition can fail because subparts are interdependent.

    - Chaos Theory (1960s–80s)
    Nonlinear deterministic systems can produce unpredictable,
    sensitive dependence on initial conditions. Decomposition
    into parts is tricky: small errors explode, and “solving
    subparts” may not help predict the whole.

    - Santa Fe Institute & Complex Systems (1980s–present)
    Studied emergent behavior, networks, adaptation,
    self-organization. Linear, reductionist thinking fails
    to capture dynamics of economic, social, and ecological systems.

    Bye

    Mild Shock schrieb:
    Hi,

    How it started:

    https://conceptbase.sourceforge.net/

    How its going:

    https://www.ibm.com/products/datastax

    The problem with claims such as " Formal languages,
    such as KAOS, are based on predicate logic and
    capture additional details about an application
    in a precise manner. They also provide a foundation
    for reasoning with information models." is that
    every thing in the quoted sentence is wrong.

    Real AI systems scale by approximation,
    vectorization, distributed representations,
    and partial knowledge — not by globally
    consistent logical models. No classical requirements
    language or ontology captures the informal
    cognitive machinery that makes
    intelligence flexible. Intelligence needs the
    whole messy cognitive spectrum.

    Somehow DataStax looks like n8n married AI embedding.
    I hope Amazon, Meta, Google, etc.. get the message.
    I don't worry about Microsoft, they might come with

    something from their Encarta corner and Copilot+ is
    more Local AI. After all we need things like Wikidata
    in a Robot and not in a Data Center.

    LoL

    Bye



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Nov 14 14:24:36 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Already the idea that expansion needs fixpoints
    is the most stupid idea. If you need fixpoints
    you did something wrong.

    For example, this here is not a fixpoint:

    expand_term(A, B) :- term_expansion(A, H), !, expand_term(H, B).
    expand_term(A, A).

    It's only the reflexive-transitive closure R* of R, where
    R = term_expansion. The predicate term_expansion/2
    decides when to stop, by signalling a failure.

    I have never needed a fixpoint, neither for term expansion,
    nor for goal_expansion, nor for function_expansion.
    You can spare all the (==)/2 loop checking.

    The problem might be a badly written term_expansion/2
    that cannot signal failure. Or a relation R that is
    indeed cyclic. But a cyclic R doesn't make any
    sense in an expansion.
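
    A minimal sketch of a hypothetical single-step term_expansion/2 that
    signals termination by failure, so the driver above needs no (==)/2
    loop check (foo/1 and bar/1 are made-up example functors):

    % rewrite foo/1 facts into bar/1 facts; anything else fails,
    % which is the stop signal for expand_term/2 above
    term_expansion(foo(X), bar(X)).

    % ?- expand_term(foo(1), T).
    % T = bar(1).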

    Bye

    Mild Shock schrieb:
    Hi,

    Give this man (@jp-diegidio) Money to buy the
    ISO CORE standard. I wouldn't say the main
    argument is performance. If SWI-Prolog has a slow

    goal / term expansion framework, its their problem.
    I guess the main argument for having a dynamic
    database as is, as defined in the ISO CORE standard,

    is "coreness". The idea of the ISO CORE standard
    is to define Horn clause processor. And it does
    not want to hide Horn clauses, and the interface

    for asssert/retract are Horn clauses. And in this
    way the ISO CORE standard becomes a lower level
    foundation, that is quite versatile, and supports

    a couple of translation pipelines. Its a little
    bit the opposite to SWI-Prolog, with its unspecified dict
    madness. And the current PIP efforts, which go on somehow

    out of the blue, without specifying a new deep preprocessor
    foundation, introducing certain concepts that might
    require preprocessors beyond goal expansion and

    term expansion. Might have the same gaps has here:

    https://simon.peytonjones.org/assets/pdfs/verse-icfp23.pdf

    Does it have assert/retract ?

    Bye

    P.S.: The ISO core standard careful navigates around
    another issue. While it specifies dynamic database
    kind of behavioural, very detailed including that

    dynamic database has a very small shallow preprocessig,
    it for example still requires that p(X) :- X, is translated
    into p(X) :- call(X). It specifies the loader in the

    form of "prepare for execution". But static predicates
    share something with dynamic predicates. Their minimal
    shallow preprocessing is as well that p(X) :- X, is

    translated into p(X) :- call(X).

    Mild Shock schrieb:
    Hi,

    The good thing is while some people care
    for the foundation of reasoning, which also
    includes performance of "compiled" knowledge,

    some people are like Vestal Virgins that have
    never been touched by a running computer program,
    still they hord in some SWI-Prolog forum:

    Take this question:

    @jp-diegidio Yes, term/goal expansion does not apply to assert.
    Does not apply to assert and call, as Jan was saying.
    Do you or anybody happen to know why that is so?

    Guess what? DCG expansion is also not applied to
    the so called "dynamic database", see ISO core standard.
    My speculation it is for speed, and the idea is that

    assert/retract/clause are for already "compiled"
    knowledge, i.e. facts and rules produced by all the
    dozen pipelines that could sit in front the dynamic

    database, before one gets to the bare bone Horn clauses.

    LoL

    Bye

    P.S.: Try for yourself, no DCG processing:

    ?- assertz((a --> b)).
    true.

    ?- listing((-->)/2).
    :- dynamic (-->)/2.
    b.

    https://wasm.swi-prolog.org/wasm/tinker

    Mild Shock schrieb:
    Hi,

    Descartes’ “divide problems into parts” works
    only for well-behaved, linear, decomposable systems.
    But its just that parts might end up as Schrödingers

    equation. It could be that stable diffusion is the
    new constraint solver. In a sense, stable diffusion
    models (or other generative AI) are functioning as

    probabilistic, fuzzy constraint solvers — but in a
    very different paradigm from classical logic or
    formal methods. But what was neglected?

    - Cybernetics (1940s–50s)
    Focused on feedback loops, control, and self-regulation
    in machines and biological systems. Showed that
    decomposition can fail because subparts are interdependent.

    - Chaos Theory (1960s–80s)
    Nonlinear deterministic systems can produce unpredictable,
    sensitive dependence on initial conditions. Decomposition
    into parts is tricky: small errors explode, and “solving
    subparts” may not help predict the whole.

    - Santa Fe Institute & Complex Systems (1980s–present)
    Studied emergent behavior, networks, adaptation,
    self-organization. Linear, reductionist thinking fails
    to capture dynamics of economic, social, and ecological systems.

    Bye

    Mild Shock schrieb:
    Hi,

    How it started:

    https://conceptbase.sourceforge.net/

    How its going:

    https://www.ibm.com/products/datastax

    The problem with claims such as " Formal languages,
    such as KAOS, are based on predicate logic and
    capture additional details about an application
    in a precise manner. They also provide a foundation
    for reasoning with information models." is that
    every thing in the quoted sentence is wrong.

    Real AI systems scale by approximation,
    vectorization, distributed representations,
    and partial knowledge — not by globally
    consistent logical models. No classical requirements
    language or ontology captures the informal
    cognitive machinery that makes
    intelligence flexible. Intelligence needs the
    whole messy cognitive spectrum.

    Somehow DataStax looks like n8n married AI embedding.
    I hope Amazon, Meta, Google, etc.. get the message.
    I don't worry about Microsoft, they might come with

    something from their Encarta corner and Copilot+ is
    more Local AI. After all we need things like Wikidata
    in a Robot and not in a Data Center.

    LoL

    Bye




    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Nov 14 14:29:19 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Maybe one should analyze the cases where
    fixpoints are needed. A worst-case analysis
    of poor failure signalling could be that
    in the very last step of the fixpoint search
    a copy of the entire clause is created, which is
    then identical to its input, while proper failure
    signalling would avoid this copy. When
    term_expansion/2 fails, you have no intermediate term H:

    expand_term(A, B) :- term_expansion(A, H), !, expand_term(H, B).
    expand_term(A, A).

    So proper failure signalling could be faster
    and less memory consuming, with less pressure on the
    garbage collection of the Prolog system.

    Bye

    Mild Shock schrieb:
    Hi,

    Already the idea that expansion needs fixpoints,
    is the most stupid idea. If you need fixpoints
    you did something wrong.

    For example this here is not a fixpoint:

    expand_term(A, B) :- term_expansion(A, H), !, expand_term(H, B). expand_term(A, A).

    Its only the transitive closure R* of R, where
    R = term_expansion. The predicate term_expansion/2
    decides when to stop, by signalling a failure.

    I have never needed fixpoint neigher for term expansion,
    nor for goal_expansion, nor for function_expansion.
    You can spare all (==)/2 loop checking.

    The problem might be badly written term_expansion/2,
    that cannot signal failure. Or a relation R that is
    indeed cyclic. But a cyclic R doesn't make any

    sense in an expansion.

    Bye

    Mild Shock schrieb:
    Hi,

    Give this man (@jp-diegidio) Money to buy the
    ISO CORE standard. I wouldn't say the main
    argument is performance. If SWI-Prolog has a slow

    goal / term expansion framework, its their problem.
    I guess the main argument for having a dynamic
    database as is, as defined in the ISO CORE standard,

    is "coreness". The idea of the ISO CORE standard
    is to define Horn clause processor. And it does
    not want to hide Horn clauses, and the interface

    for asssert/retract are Horn clauses. And in this
    way the ISO CORE standard becomes a lower level
    foundation, that is quite versatile, and supports

    a couple of translation pipelines. Its a little
    bit the opposite to SWI-Prolog, with its unspecified dict
    madness. And the current PIP efforts, which go on somehow

    out of the blue, without specifying a new deep preprocessor
    foundation, introducing certain concepts that might
    require preprocessors beyond goal expansion and

    term expansion. Might have the same gaps has here:

    https://simon.peytonjones.org/assets/pdfs/verse-icfp23.pdf

    Does it have assert/retract ?

    Bye

    P.S.: The ISO core standard careful navigates around
    another issue. While it specifies dynamic database
    kind of behavioural, very detailed including that

    dynamic database has a very small shallow preprocessig,
    it for example still requires that p(X) :- X, is translated
    into p(X) :- call(X). It specifies the loader in the

    form of "prepare for execution". But static predicates
    share something with dynamic predicates. Their minimal
    shallow preprocessing is as well that p(X) :- X, is

    translated into p(X) :- call(X).

    Mild Shock schrieb:
    Hi,

    The good thing is while some people care
    for the foundation of reasoning, which also
    includes performance of "compiled" knowledge,

    some people are like Vestal Virgins that have
    never been touched by a running computer program,
    still they hord in some SWI-Prolog forum:

    Take this question:

    @jp-diegidio Yes, term/goal expansion does not apply to assert.
    Does not apply to assert and call, as Jan was saying.
    Do you or anybody happen to know why that is so?

    Guess what? DCG expansion is also not applied to
    the so called "dynamic database", see ISO core standard.
    My speculation it is for speed, and the idea is that

    assert/retract/clause are for already "compiled"
    knowledge, i.e. facts and rules produced by all the
    dozen pipelines that could sit in front the dynamic

    database, before one gets to the bare bone Horn clauses.

    LoL

    Bye

    P.S.: Try for yourself, no DCG processing:

    ?- assertz((a --> b)).
    true.

    ?- listing((-->)/2).
    :- dynamic (-->)/2.
    b.

    https://wasm.swi-prolog.org/wasm/tinker

    Mild Shock schrieb:
    Hi,

    Descartes’ “divide problems into parts” works
    only for well-behaved, linear, decomposable systems.
    But its just that parts might end up as Schrödingers

    equation. It could be that stable diffusion is the
    new constraint solver. In a sense, stable diffusion
    models (or other generative AI) are functioning as

    probabilistic, fuzzy constraint solvers — but in a
    very different paradigm from classical logic or
    formal methods. But what was neglected?

    - Cybernetics (1940s–50s)
    Focused on feedback loops, control, and self-regulation
    in machines and biological systems. Showed that
    decomposition can fail because subparts are interdependent.

    - Chaos Theory (1960s–80s)
    Nonlinear deterministic systems can produce unpredictable,
    sensitive dependence on initial conditions. Decomposition
    into parts is tricky: small errors explode, and “solving
    subparts” may not help predict the whole.

    - Santa Fe Institute & Complex Systems (1980s–present)
    Studied emergent behavior, networks, adaptation,
    self-organization. Linear, reductionist thinking fails
    to capture dynamics of economic, social, and ecological systems.

    Bye

    Mild Shock schrieb:
    Hi,

    How it started:

    https://conceptbase.sourceforge.net/

    How its going:

    https://www.ibm.com/products/datastax

    The problem with claims such as " Formal languages,
    such as KAOS, are based on predicate logic and
    capture additional details about an application
    in a precise manner. They also provide a foundation
    for reasoning with information models." is that
    every thing in the quoted sentence is wrong.

    Real AI systems scale by approximation,
    vectorization, distributed representations,
    and partial knowledge — not by globally
    consistent logical models. No classical requirements
    language or ontology captures the informal
    cognitive machinery that makes
    intelligence flexible. Intelligence needs the
    whole messy cognitive spectrum.

    Somehow DataStax looks like n8n married AI embedding.
    I hope Amazon, Meta, Google, etc.. get the message.
    I don't worry about Microsoft, they might come with

    something from their Encarta corner and Copilot+ is
    more Local AI. After all we need things like Wikidata
    in a Robot and not in a Data Center.

    LoL

    Bye





    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Nov 14 14:36:04 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Is this another dumbing-down inside the Prolog
    community? The narrative of a fixpoint for expansion
    is repeated over and over, up to the point that

    @jan
    goal_expansion/2 is called by expand_goal/2
    until fixed point is reached.

    one finds it even in the code:

    File: swipl-devel/library/prolog_clause.pl

    goal_expansion(G0, G, P, P) :-
        user:goal_expansion(G0, G),     % TBD: we need the module!
        G0 \== G.                       % \=@=?

    But given the many question marks and TBDs, it
    seems people don't know what they are doing.
    It's just one more mess.

    Bye

    This here looks fine somehow:

    expand_goal(A, B, Module, P0, P) :-
        goal_expansion(A, B0, P0, P1),
        !,
        expand_goal(B0, B, Module, P1, P).
    expand_goal(A, A, _, P, P).

    Mild Shock schrieb:
    Hi,

    Maybe one should analyze the cases where
    fixpoints are needed. The worst case analysis
    of poor failure signalling could be that

    in the last step of this fixpoint search,
    that in the very last step a copy of the entire
    clause is created, which is then identical,

    while proper failure signalling, would avoid
    this copy. When term_expansion/2 fails, you
    have no intermediate term H:

    expand_term(A, B) :- term_expansion(A, H), !, expand_term(H, B). expand_term(A, A).

    So proper failure signalling could be faster,
    and less memory consuming, less pressure on the
    garbage collection of the Prolog system.

    Bye

    Mild Shock schrieb:
    Hi,

    Already the idea that expansion needs fixpoints,
    is the most stupid idea. If you need fixpoints
    you did something wrong.

    For example this here is not a fixpoint:

    expand_term(A, B) :- term_expansion(A, H), !, expand_term(H, B).
    expand_term(A, A).

    Its only the transitive closure R* of R, where
    R = term_expansion. The predicate term_expansion/2
    decides when to stop, by signalling a failure.

    I have never needed fixpoint neigher for term expansion,
    nor for goal_expansion, nor for function_expansion.
    You can spare all (==)/2 loop checking.

    The problem might be badly written term_expansion/2,
    that cannot signal failure. Or a relation R that is
    indeed cyclic. But a cyclic R doesn't make any

    sense in an expansion.

    Bye

    Mild Shock schrieb:
    Hi,

    Give this man (@jp-diegidio) Money to buy the
    ISO CORE standard. I wouldn't say the main
    argument is performance. If SWI-Prolog has a slow

    goal / term expansion framework, its their problem.
    I guess the main argument for having a dynamic
    database as is, as defined in the ISO CORE standard,

    is "coreness". The idea of the ISO CORE standard
    is to define Horn clause processor. And it does
    not want to hide Horn clauses, and the interface

    for asssert/retract are Horn clauses. And in this
    way the ISO CORE standard becomes a lower level
    foundation, that is quite versatile, and supports

    a couple of translation pipelines. Its a little
    bit the opposite to SWI-Prolog, with its unspecified dict
    madness. And the current PIP efforts, which go on somehow

    out of the blue, without specifying a new deep preprocessor
    foundation, introducing certain concepts that might
    require preprocessors beyond goal expansion and

    term expansion. Might have the same gaps has here:

    https://simon.peytonjones.org/assets/pdfs/verse-icfp23.pdf

    Does it have assert/retract ?

    Bye

    P.S.: The ISO core standard careful navigates around
    another issue. While it specifies dynamic database
    kind of behavioural, very detailed including that

    dynamic database has a very small shallow preprocessig,
    it for example still requires that p(X) :- X, is translated
    into p(X) :- call(X). It specifies the loader in the

    form of "prepare for execution". But static predicates
    share something with dynamic predicates. Their minimal
    shallow preprocessing is as well that p(X) :- X, is

    translated into p(X) :- call(X).

    Mild Shock schrieb:
    Hi,

    The good thing is while some people care
    for the foundation of reasoning, which also
    includes performance of "compiled" knowledge,

    some people are like Vestal Virgins that have
    never been touched by a running computer program,
    still they hord in some SWI-Prolog forum:

    Take this question:

    @jp-diegidio Yes, term/goal expansion does not apply to assert.
    Does not apply to assert and call, as Jan was saying.
    Do you or anybody happen to know why that is so?

    Guess what? DCG expansion is also not applied to
    the so called "dynamic database", see ISO core standard.
    My speculation it is for speed, and the idea is that

    assert/retract/clause are for already "compiled"
    knowledge, i.e. facts and rules produced by all the
    dozen pipelines that could sit in front the dynamic

    database, before one gets to the bare bone Horn clauses.

    LoL

    Bye

    P.S.: Try for yourself, no DCG processing:

    ?- assertz((a --> b)).
    true.

    ?- listing((-->)/2).
    :- dynamic (-->)/2.
    b.

    https://wasm.swi-prolog.org/wasm/tinker

    Mild Shock schrieb:
    Hi,

    Descartes’ “divide problems into parts” works
    only for well-behaved, linear, decomposable systems.
    But its just that parts might end up as Schrödingers

    equation. It could be that stable diffusion is the
    new constraint solver. In a sense, stable diffusion
    models (or other generative AI) are functioning as

    probabilistic, fuzzy constraint solvers — but in a
    very different paradigm from classical logic or
    formal methods. But what was neglected?

    - Cybernetics (1940s–50s)
    Focused on feedback loops, control, and self-regulation
    in machines and biological systems. Showed that
    decomposition can fail because subparts are interdependent.

    - Chaos Theory (1960s–80s)
    Nonlinear deterministic systems can produce unpredictable,
    sensitive dependence on initial conditions. Decomposition
    into parts is tricky: small errors explode, and “solving
    subparts” may not help predict the whole.

    - Santa Fe Institute & Complex Systems (1980s–present)
    Studied emergent behavior, networks, adaptation,
    self-organization. Linear, reductionist thinking fails
    to capture dynamics of economic, social, and ecological systems.

    Bye

    Mild Shock schrieb:
    Hi,

    How it started:

    https://conceptbase.sourceforge.net/

    How its going:

    https://www.ibm.com/products/datastax

    The problem with claims such as " Formal languages,
    such as KAOS, are based on predicate logic and
    capture additional details about an application
    in a precise manner. They also provide a foundation
    for reasoning with information models." is that
    every thing in the quoted sentence is wrong.

    Real AI systems scale by approximation,
    vectorization, distributed representations,
    and partial knowledge — not by globally
    consistent logical models. No classical requirements
    language or ontology captures the informal
    cognitive machinery that makes
    intelligence flexible. Intelligence needs the
    whole messy cognitive spectrum.

    Somehow DataStax looks like n8n married AI embedding.
    I hope Amazon, Meta, Google, etc.. get the message.
    I don't worry about Microsoft, they might come with

    something from their Encarta corner and Copilot+ is
    more Local AI. After all we need things like Wikidata
    in a Robot and not in a Data Center.

    LoL

    Bye






    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Nov 14 14:42:51 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    I remember the Ciao people slapping me in the face,
    saying I don't work for the Prolog cause. But they
    probably sit on a 30-year-old code mess, and don't
    know what "compiled" knowledge they have
    dispersed all over the place, or how it
    could be cast in a sane preprocessor framework.

    Now we have 3 candidates and the rest is draft:

    0001.0 What is a PIP?
    0101.0 Communication between Prolog and Python via Janus
    0105.0 Options in write_term
    https://prolog-lang.org/ImplementersForum/PIPs

    Bravo! Nobody really cares anymore about the ISO
    CORE standard and compiled knowledge in the form
    of Horn clauses. Prolog finally becomes
    an irrelevant scripting language.

    Bye

    Mild Shock schrieb:
    Hi,

    Is this another dumb down inside the Prolog
    community. The narrative of fixpoint for expansion
    is repeated over and over, up to the point that

    @jan
    goal_expansion/2 is called by expand_goal/2
    until fixed point is reached.

    one finds it even in the code:

    File: swipl-devel/library/prolog_clause.pl

    goal_expansion(G0, G, P, P) :-
        user:goal_expansion(G0, G),     % TBD: we need the module!
        G0 \== G.                       % \=@=?

    But given the many question marks and TBD, it
    seems people don't know what they are doing.
    Its just one more mess.

    Bye

    This here looks fine somehow:

    expand_goal(A, B, Module, P0, P) :-
        goal_expansion(A, B0, P0, P1),
        !,
        expand_goal(B0, B, Module, P1, P).
    expand_goal(A, A, _, P, P).

    Mild Shock schrieb:
    Hi,

    Maybe one should analyze the cases where
    fixpoints are needed. The worst case analysis
    of poor failure signalling could be that

    in the last step of this fixpoint search,
    that in the very last step a copy of the entire
    clause is created, which is then identical,

    while proper failure signalling, would avoid
    this copy. When term_expansion/2 fails, you
    have no intermediate term H:

    expand_term(A, B) :- term_expansion(A, H), !, expand_term(H, B).
    expand_term(A, A).

    So proper failure signalling could be faster,
    and less memory consuming, less pressure on the
    garbage collection of the Prolog system.

    Bye

    Mild Shock schrieb:
    Hi,

    Already the idea that expansion needs fixpoints,
    is the most stupid idea. If you need fixpoints
    you did something wrong.

    For example this here is not a fixpoint:

    expand_term(A, B) :- term_expansion(A, H), !, expand_term(H, B).
    expand_term(A, A).

    Its only the transitive closure R* of R, where
    R = term_expansion. The predicate term_expansion/2
    decides when to stop, by signalling a failure.

    I have never needed fixpoint neigher for term expansion,
    nor for goal_expansion, nor for function_expansion.
    You can spare all (==)/2 loop checking.

    The problem might be badly written term_expansion/2,
    that cannot signal failure. Or a relation R that is
    indeed cyclic. But a cyclic R doesn't make any

    sense in an expansion.

    Bye

    Mild Shock schrieb:
    Hi,

    Give this man (@jp-diegidio) Money to buy the
    ISO CORE standard. I wouldn't say the main
    argument is performance. If SWI-Prolog has a slow

    goal / term expansion framework, its their problem.
    I guess the main argument for having a dynamic
    database as is, as defined in the ISO CORE standard,

    is "coreness". The idea of the ISO CORE standard
    is to define Horn clause processor. And it does
    not want to hide Horn clauses, and the interface

    for asssert/retract are Horn clauses. And in this
    way the ISO CORE standard becomes a lower level
    foundation, that is quite versatile, and supports

    a couple of translation pipelines. Its a little
    bit the opposite to SWI-Prolog, with its unspecified dict
    madness. And the current PIP efforts, which go on somehow

    out of the blue, without specifying a new deep preprocessor
    foundation, introducing certain concepts that might
    require preprocessors beyond goal expansion and

    term expansion. Might have the same gaps has here:

    https://simon.peytonjones.org/assets/pdfs/verse-icfp23.pdf

    Does it have assert/retract ?

    Bye

    P.S.: The ISO core standard careful navigates around
    another issue. While it specifies dynamic database
    kind of behavioural, very detailed including that

    dynamic database has a very small shallow preprocessig,
    it for example still requires that p(X) :- X, is translated
    into p(X) :- call(X). It specifies the loader in the

    form of "prepare for execution". But static predicates
    share something with dynamic predicates. Their minimal
    shallow preprocessing is as well that p(X) :- X, is

    translated into p(X) :- call(X).
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Nov 14 15:07:19 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Why Python? Nobody cares about Python.
    Take LangChain Memory. It has a few Python
    data types that can be mapped to JSON:

    ConversationBufferMemory
    ConversationSummaryMemory
    VectorStoreMemory

    But then it also has a JavaScript interface.
    This is an aspect of JavaScript that is not WASM-,
    Browser-, etc. driven, but more JavaScript on the server side:

    LangChain also has a JS/TS version: langchainjs.com
    Many memory types exist here as well.
    Example: BufferMemory, ConversationSummaryMemory —
    works in Node.js or Deno.

    But maybe we will see some pure Browser stuff as well,
    when there is more Local AI. Some low-code / no-code
    platforms integrate LangChain under the hood:

    - LlamaIndex (formerly GPT Index) + Airtable / Notion
    - n8n / Node-RED nodes (via LangChain JS API)

    So what's the common denominator between Python
    and JavaScript? Could there be a Janus that
    works for both?

    Bye

    Mild Shock schrieb:
    Hi,

    I remember Ciao People slapping in my face,
    I don't work for the Prolog cause. But they
    probably sit on 30 year old code mess, and don't

    know what "compiled" knowledge they have
    dispersed all over the place. And how this
    could be cast in a sane preprocessor framework.

    Now we have 3 candidates and the rest is draft:

    0001.0     What is a PIP?
    0101.0     Communication between Prolog and Python via Janus 0105.0     Options in write_term https://prolog-lang.org/ImplementersForum/PIPs

    Bravo! Nobody really cares anymore of the ISO
    CORE standard and compiled knowledge in the form
    of Horn clauses. Prolog findally becomes

    an irrelevant scripting language.

    Bye

    Mild Shock schrieb:
    Hi,

    Is this another dumb down inside the Prolog
    community. The narrative of fixpoint for expansion
    is repeated over and over, up to the point that

    @jan
    goal_expansion/2 is called by expand_goal/2
    until fixed point is reached.

    one finds it even in the code:

    File: swipl-devel/library/prolog_clause.pl

    goal_expansion(G0, G, P, P) :-
         user:goal_expansion(G0, G),     % TBD: we need the module!
         G0 \== G.                       % \=@=?

    But given the many question marks and TBD, it
    seems people don't know what they are doing.
    Its just one more mess.

    Bye

    This here looks fine somehow:

    expand_goal(A, B, Module, P0, P) :-
         goal_expansion(A, B0, P0, P1),
         !,
         expand_goal(B0, B, Module, P1, P).
    expand_goal(A, A, _, P, P).

    Mild Shock schrieb:
    Hi,

    Maybe one should analyze the cases where
    fixpoints are needed. The worst case analysis
    of poor failure signalling could be that

    in the last step of this fixpoint search,
    that in the very last step a copy of the entire
    clause is created, which is then identical,

    while proper failure signalling, would avoid
    this copy. When term_expansion/2 fails, you
    have no intermediate term H:

    expand_term(A, B) :- term_expansion(A, H), !, expand_term(H, B).
    expand_term(A, A).

    So proper failure signalling could be faster,
    and less memory consuming, less pressure on the
    garbage collection of the Prolog system.

    Bye

    Mild Shock schrieb:
    Hi,

    Already the idea that expansion needs fixpoints,
    is the most stupid idea. If you need fixpoints
    you did something wrong.

    For example this here is not a fixpoint:

    expand_term(A, B) :- term_expansion(A, H), !, expand_term(H, B).
    expand_term(A, A).

    Its only the transitive closure R* of R, where
    R = term_expansion. The predicate term_expansion/2
    decides when to stop, by signalling a failure.

    I have never needed fixpoint neigher for term expansion,
    nor for goal_expansion, nor for function_expansion.
    You can spare all (==)/2 loop checking.

    The problem might be badly written term_expansion/2,
    that cannot signal failure. Or a relation R that is
    indeed cyclic. But a cyclic R doesn't make any

    sense in an expansion.

    Bye

    Mild Shock schrieb:
    Hi,

    Give this man (@jp-diegidio) Money to buy the
    ISO CORE standard. I wouldn't say the main
    argument is performance. If SWI-Prolog has a slow

    goal / term expansion framework, its their problem.
    I guess the main argument for having a dynamic
    database as is, as defined in the ISO CORE standard,

    is "coreness". The idea of the ISO CORE standard
    is to define Horn clause processor. And it does
    not want to hide Horn clauses, and the interface

    for asssert/retract are Horn clauses. And in this
    way the ISO CORE standard becomes a lower level
    foundation, that is quite versatile, and supports

    a couple of translation pipelines. Its a little
    bit the opposite to SWI-Prolog, with its unspecified dict
    madness. And the current PIP efforts, which go on somehow

    out of the blue, without specifying a new deep preprocessor
    foundation, introducing certain concepts that might
    require preprocessors beyond goal expansion and

    term expansion. Might have the same gaps has here:

    https://simon.peytonjones.org/assets/pdfs/verse-icfp23.pdf

    Does it have assert/retract ?

    Bye

    P.S.: The ISO core standard careful navigates around
    another issue. While it specifies dynamic database
    kind of behavioural, very detailed including that

    dynamic database has a very small shallow preprocessig,
    it for example still requires that p(X) :- X, is translated
    into p(X) :- call(X). It specifies the loader in the

    form of "prepare for execution". But static predicates
    share something with dynamic predicates. Their minimal
    shallow preprocessing is as well that p(X) :- X, is

    translated into p(X) :- call(X).

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Nov 14 15:08:19 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Why Python? Nobody cares about Python.
    Take LangChain Memory. It has a few Python
    data types that can be mapped to JSON:

    ConversationBufferMemory
    ConversationSummaryMemory
    VectorStoreMemory

    But then it also has a JavaScript interface.
    This is an aspect of JavaScript that is not WASM-,
    Browser-, etc. driven, but more JavaScript on the server side:

    LangChain also has a JS/TS version: langchainjs.com
    Many memory types exist here as well.
    Example: BufferMemory, ConversationSummaryMemory —
    works in Node.js or Deno.

    But maybe we will see some pure Browser stuff as well,
    when there is more Local AI. Some low-code / no-code
    platforms integrate LangChain under the hood:

    - LlamaIndex (formerly GPT Index) + Airtable / Notion
    - n8n / Node-RED nodes (via LangChain JS API)

    So what's the common denominator between Python
    and JavaScript? Could there be a Janus that
    works for both?

    Bye

    Mild Shock schrieb:
    Hi,

    I remember Ciao People slapping in my face,
    I don't work for the Prolog cause. But they
    probably sit on 30 year old code mess, and don't

    know what "compiled" knowledge they have
    dispersed all over the place. And how this
    could be cast in a sane preprocessor framework.

    Now we have 3 candidates and the rest is draft:

    0001.0     What is a PIP?
    0101.0     Communication between Prolog and Python via Janus 0105.0     Options in write_term https://prolog-lang.org/ImplementersForum/PIPs

    Bravo! Nobody really cares anymore of the ISO
    CORE standard and compiled knowledge in the form
    of Horn clauses. Prolog findally becomes

    an irrelevant scripting language.

    Bye

    Mild Shock schrieb:
    Hi,

    Is this another dumb down inside the Prolog
    community. The narrative of fixpoint for expansion
    is repeated over and over, up to the point that

    @jan
    goal_expansion/2 is called by expand_goal/2
    until fixed point is reached.

    one finds it even in the code:

    File: swipl-devel/library/prolog_clause.pl

    goal_expansion(G0, G, P, P) :-
         user:goal_expansion(G0, G),     % TBD: we need the module!
         G0 \== G.                       % \=@=?

    But given the many question marks and TBD, it
    seems people don't know what they are doing.
    Its just one more mess.

    Bye

    This here looks fine somehow:

    expand_goal(A, B, Module, P0, P) :-
         goal_expansion(A, B0, P0, P1),
         !,
         expand_goal(B0, B, Module, P1, P).
    expand_goal(A, A, _, P, P).

    Mild Shock schrieb:
    Hi,

    Maybe one should analyze the cases where
    fixpoints are needed. The worst case analysis
    of poor failure signalling could be that

    in the last step of this fixpoint search,
    that in the very last step a copy of the entire
    clause is created, which is then identical,

    while proper failure signalling, would avoid
    this copy. When term_expansion/2 fails, you
    have no intermediate term H:

    expand_term(A, B) :- term_expansion(A, H), !, expand_term(H, B).
    expand_term(A, A).

    So proper failure signalling could be faster,
    and less memory consuming, less pressure on the
    garbage collection of the Prolog system.

    Bye

    Mild Shock schrieb:
    Hi,

    Already the idea that expansion needs fixpoints,
    is the most stupid idea. If you need fixpoints
    you did something wrong.

    For example this here is not a fixpoint:

    expand_term(A, B) :- term_expansion(A, H), !, expand_term(H, B).
    expand_term(A, A).

    Its only the transitive closure R* of R, where
    R = term_expansion. The predicate term_expansion/2
    decides when to stop, by signalling a failure.

    I have never needed fixpoint neigher for term expansion,
    nor for goal_expansion, nor for function_expansion.
    You can spare all (==)/2 loop checking.

    The problem might be badly written term_expansion/2,
    that cannot signal failure. Or a relation R that is
    indeed cyclic. But a cyclic R doesn't make any

    sense in an expansion.

    Bye

    Mild Shock schrieb:
    Hi,

    Give this man (@jp-diegidio) Money to buy the
    ISO CORE standard. I wouldn't say the main
    argument is performance. If SWI-Prolog has a slow

    goal / term expansion framework, its their problem.
    I guess the main argument for having a dynamic
    database as is, as defined in the ISO CORE standard,

    is "coreness". The idea of the ISO CORE standard
    is to define Horn clause processor. And it does
    not want to hide Horn clauses, and the interface

    for asssert/retract are Horn clauses. And in this
    way the ISO CORE standard becomes a lower level
    foundation, that is quite versatile, and supports

    a couple of translation pipelines. Its a little
    bit the opposite to SWI-Prolog, with its unspecified dict
    madness. And the current PIP efforts, which go on somehow

    out of the blue, without specifying a new deep preprocessor
    foundation, introducing certain concepts that might
    require preprocessors beyond goal expansion and

    term expansion. Might have the same gaps has here:

    https://simon.peytonjones.org/assets/pdfs/verse-icfp23.pdf

    Does it have assert/retract ?

    Bye

    P.S.: The ISO core standard careful navigates around
    another issue. While it specifies dynamic database
    kind of behavioural, very detailed including that

    dynamic database has a very small shallow preprocessig,
    it for example still requires that p(X) :- X, is translated
    into p(X) :- call(X). It specifies the loader in the

    form of "prepare for execution". But static predicates
    share something with dynamic predicates. Their minimal
    shallow preprocessing is as well that p(X) :- X, is

    translated into p(X) :- call(X).

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Nov 14 16:49:19 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Give money to all the students working on SWI/XSB.
    I feel this phrasing could be misleading:

    Mapping Types: The translation of Python dictionaries takes
    advantage of the syntax of braces, which is supported by all Prologs
    that support DCGs. The term form of a dictionary is;

    https://prolog-lang.org/ImplementersForum/janus-bitrans.html

    The {}/1 syntax is already part of the ISO core standard.
    No ISO DCG standard needed. You find it in section 6.3.6:

    ------------------- cut here -------------------

    6.3.6 Compound terms - curly bracketed term

    A term with principal functor '{}'/1 can also be expressed
    by enclosing its argument in curly brackets.

    term = open curly, term, close curly ;

    Abstract: {}(l)           l
    Priority: 0               1201


    NOTE - For the syntax of an empty curly brackets, see 6.3.1.3.

    ------------------- cut here -------------------
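
    A small sketch showing that the curly-bracket notation itself needs
    nothing beyond the core standard reader; the (:)/2 pairs inside only
    need an infix operator, which most systems already define (the key
    names here are made up):

    ?- X = {a:123, b:456}, X = '{}'(T).
    X = {a:123, b:456},
    T = (a:123, b:456).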

    Bye

    Mild Shock schrieb:
    Hi,

    Give this man (@jp-diegidio) Money to buy the
    ISO CORE standard. I wouldn't say the main
    argument is performance. If SWI-Prolog has a slow

    goal / term expansion framework, its their problem.
    I guess the main argument for having a dynamic
    database as is, as defined in the ISO CORE standard,

    is "coreness". The idea of the ISO CORE standard
    is to define Horn clause processor. And it does
    not want to hide Horn clauses, and the interface

    for asssert/retract are Horn clauses. And in this
    way the ISO CORE standard becomes a lower level
    foundation, that is quite versatile, and supports

    a couple of translation pipelines. Its a little
    bit the opposite to SWI-Prolog, with its unspecified dict
    madness. And the current PIP efforts, which go on somehow

    out of the blue, without specifying a new deep preprocessor
    foundation, introducing certain concepts that might
    require preprocessors beyond goal expansion and

    term expansion. Might have the same gaps has here:

    https://simon.peytonjones.org/assets/pdfs/verse-icfp23.pdf

    Does it have assert/retract ?

    Bye

    P.S.: The ISO core standard careful navigates around
    another issue. While it specifies dynamic database
    kind of behavioural, very detailed including that

    dynamic database has a very small shallow preprocessig,
    it for example still requires that p(X) :- X, is translated
    into p(X) :- call(X). It specifies the loader in the

    form of "prepare for execution". But static predicates
    share something with dynamic predicates. Their minimal
    shallow preprocessing is as well that p(X) :- X, is

    translated into p(X) :- call(X).

    Mild Shock schrieb:
    Hi,

    The good thing is while some people care
    for the foundation of reasoning, which also
    includes performance of "compiled" knowledge,

    some people are like Vestal Virgins that have
    never been touched by a running computer program,
    still they hord in some SWI-Prolog forum:

    Take this question:

    @jp-diegidio Yes, term/goal expansion does not apply to assert.
    Does not apply to assert and call, as Jan was saying.
    Do you or anybody happen to know why that is so?

    Guess what? DCG expansion is also not applied to
    the so called "dynamic database", see ISO core standard.
    My speculation it is for speed, and the idea is that

    assert/retract/clause are for already "compiled"
    knowledge, i.e. facts and rules produced by all the
    dozen pipelines that could sit in front the dynamic

    database, before one gets to the bare bone Horn clauses.

    LoL

    Bye

    P.S.: Try for yourself, no DCG processing:

    ?- assertz((a --> b)).
    true.

    ?- listing((-->)/2).
    :- dynamic (-->)/2.
    b.

    https://wasm.swi-prolog.org/wasm/tinker

    Mild Shock schrieb:
    Hi,

    Descartes’ “divide problems into parts” works
    only for well-behaved, linear, decomposable systems.
    But its just that parts might end up as Schrödingers

    equation. It could be that stable diffusion is the
    new constraint solver. In a sense, stable diffusion
    models (or other generative AI) are functioning as

    probabilistic, fuzzy constraint solvers — but in a
    very different paradigm from classical logic or
    formal methods. But what was neglected?

    - Cybernetics (1940s–50s)
    Focused on feedback loops, control, and self-regulation
    in machines and biological systems. Showed that
    decomposition can fail because subparts are interdependent.

    - Chaos Theory (1960s–80s)
    Nonlinear deterministic systems can produce unpredictable,
    sensitive dependence on initial conditions. Decomposition
    into parts is tricky: small errors explode, and “solving
    subparts” may not help predict the whole.

    - Santa Fe Institute & Complex Systems (1980s–present)
    Studied emergent behavior, networks, adaptation,
    self-organization. Linear, reductionist thinking fails
    to capture dynamics of economic, social, and ecological systems.

    Bye

    Mild Shock schrieb:
    Hi,

    How it started:

    https://conceptbase.sourceforge.net/

    How its going:

    https://www.ibm.com/products/datastax

    The problem with claims such as " Formal languages,
    such as KAOS, are based on predicate logic and
    capture additional details about an application
    in a precise manner. They also provide a foundation
    for reasoning with information models." is that
    every thing in the quoted sentence is wrong.

    Real AI systems scale by approximation,
    vectorization, distributed representations,
    and partial knowledge — not by globally
    consistent logical models. No classical requirements
    language or ontology captures the informal
    cognitive machinery that makes
    intelligence flexible. Intelligence needs the
    whole messy cognitive spectrum.

    Somehow DataStax looks like n8n married AI embedding.
    I hope Amazon, Meta, Google, etc.. get the message.
    I don't worry about Microsoft, they might come with

    something from their Encarta corner and Copilot+ is
    more Local AI. After all we need things like Wikidata
    in a Robot and not in a Data Center.

    LoL

    Bye




    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Nov 14 17:12:00 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    A few months / years ago, I called the {}/1 based
    Dicts "Affine Dicts", because they play with the
    syntax affinity between JavaScript, Python and JSON

    dicts, and the curly bracketed terms, when used
    in connection with (:)/2 and (,)/2. This led
    to the libraries library(misc/json) and

    library(misc/dict). Recently I made a few
    observations, namely:

    - library(misc/json) deduplicate:
    The library does not yet deduplicate key-value pairs.
    It seems that JavaScript and Python use a replacement
    strategy where the first pair defines the order
    and the last pair defines the definitive value (see the small sketch at the end of this post):

    JSON.parse('{"a":123, "b":"abc", "a":456}')
    {a: 456, b: 'abc'}

    - library(misc/dict) empty dict:
    The reader order influences the backtracking operations,
    as well as the non-backtracking operations I am
    currently working on. Now I would prefer that
    the empty dict is represented as {true} using {}/1
    and not using {}/0. This would facilitate the non-
    backtracking operations, making them also applicable
    to empty dicts, or allowing them also to return
    the empty dict. Because then the change_arg/3 based
    operations can always modify the given dict.
    The recent new non-backtracking operations are:

    dict_set(T, K, V) : Non-failure signalling,
    override just as in the JSON read semantics.
    dict_add(T, K, V) : Failure signalling if K already present.
    dict_remove(+Dict, +Term) : Non-failure signalling.

    The non-backtracking dicts are also usable with some
    of the backtracking dict operations; these operations
    extend to compounds modified via change_arg/3:

    dict_current(T, K, V) : Failure signalling if K is not present.
    Etc., etc.

    But currently everything is implemented with {}/0
    interpreted as the empty dict, which I will probably change
    into {false} everywhere, so that change_arg/3 and

    non-change_arg/3 operations work uniformly. The change will
    affect both library(misc/json) and library(misc/dict).
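
    Here is the small sketch of the replacement strategy promised
    above. This is not the library(misc/json) code, only a plain
    illustration over Key-Value pair lists; the predicate names
    dedup_pairs/2, last_value/4 and delete_key/3 are hypothetical:

    % dedup_pairs(+Pairs, -Deduped): the first occurrence of a key
    % keeps its position, later occurrences only overwrite the value.
    dedup_pairs([], []).
    dedup_pairs([K-V|T], [K-W|R]) :-
       last_value(T, K, V, W),
       delete_key(T, K, T2),
       dedup_pairs(T2, R).

    % last_value(+Pairs, +Key, +Default, -Value): value of the
    % last occurrence of Key, or Default if Key does not occur.
    last_value([], _, V, V).
    last_value([K-V|T], K, _, W) :- !, last_value(T, K, V, W).
    last_value([_|T], K, V, W) :- last_value(T, K, V, W).

    % delete_key(+Pairs, +Key, -Rest): drop all pairs with Key.
    delete_key([], _, []).
    delete_key([K-_|T], K, R) :- !, delete_key(T, K, R).
    delete_key([P|T], K, [P|R]) :- delete_key(T, K, R).

    ?- dedup_pairs([a-123, b-"abc", a-456], X).
    X = [a-456, b-"abc"].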

    Bye

    Mild Shock schrieb:
    Hi,

    Give Money to all the students working on SWI/XSB.
    I feel this phrasing could be misleading:

    Mapping Types: The translation of Python dictionaries takes
    advantage of the syntax of braces, which is supported by all Prologs
    that support DCGs. The term form of a dictionary is;

    https://prolog-lang.org/ImplementersForum/janus-bitrans.html

    The {}/1 syntax is already part of the ISO core standard.
    No ISO DCG standard needed. You find it in section 6.3.6:

    ------------------- cut here -------------------

    6.3.6 Compound terms - curly bracketed term

    A term with principal functor '{}'/1 can also be expressed
    by enclosing its argument in curly brackets.

    term = open curly, term, close curly ;

    Abstract: {}(l)           l
    Priority: 0               1201


    NOTE - For the syntax of an empty curly brackets, see 6.3.1.3.

    ------------------- cut here -------------------

    Bye


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Nov 14 17:24:45 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Here is some more testing, which shows that Affine Dicts
    are quite different from SWI-Dicts, since they do
    not sort the keys:

    JSON.parse('{"a":123, "b":"abc", "a":456}')
    {a: 456, b: 'abc'}

    JSON.parse('{"b":"abc", "a":123, "a":456}')
    {b: 'abc', a: 456}

    This is now the standard across JavaScript and Python.
    Since Python 3.7, the insertion-order behavior of
    dict is a language guarantee.

    Further, ECMAScript defines a deterministic property
    order for plain objects: string keys (non-integer-like)
    are in insertion order.

    Still, JavaScript and Python do not seem to suffer from
    performance issues because of this. ChatGPT claims JavaScript: order
    is tracked efficiently, and Python: order is cheap

    because of a major redesign. So maybe let's figure
    out how to do either or both transparently. So far
    my Affine Dicts do not perform key sorting, not having

    the beauty of O(log N) access as SWI has sometimes.
    The access is O(N). But I expect it can be made O(1)
    under the hood! Not yet sure; the arrow functions could

    do it, since they provide a hash table of anonymous
    predicate clauses. But this is still a little too
    heavy for dicts, I guess.
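
    A minimal contrast, assuming SWI-Prolog dicts on one side and
    a plain ISO {}/1 curly term on the other (illustration only,
    not library(misc/dict) code):

    ?- X = point{y:2, x:1}.
    X = point{x:1, y:2}.          % SWI dict: keys get sorted

    ?- Y = {y:2, x:1}.
    Y = {y:2, x:1}.               % curly term: input order preserved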

    Bye

    P.S.: Corollary, ditch PIP0104 and PIP0102 as well.

    Dictionaries in Prolog (dynamic) https://prolog-lang.org/ImplementersForum/0102-dicts.html

    Terms with named arguments (static dicts) https://prolog-lang.org/ImplementersForum/0104-argnames.html

    Mild Shock schrieb:
    Hi,

    A few months / years ago, I called the {}/1 based
    Dicts, Affine Dicts, because they play with the
    syntax affinity between JavaScript, Python and JSON

    dicts, and the curly backeted terms, when used
    in connection with (:)/2 and (,)/2. Which lead
    to the libraries library(misc/json) and

    library(misc/dict). Recently I made a few
    observations, namely:

    - library(misc/json) deduplicate:
      The library does not yet deduplicate key value pairs.
      It seems that JavaScript and Python use a replacement
      strategy where the first pair defines the order
      and the last pair defines the definite value:

    JSON.parse('{"a":123, "b":"abc", "a":456}')
    {a: 456, b: 'abc'}

    - library(misc/dict) empty dict:
      The reader order influences the backtracking operations,
      as well the non-backtracking operations I am
      currently working on. Now I would prefer that
      the empty dict is represented as {true} using {}/1
      and not using {}/0. This would facilitate the non-
      backtracking operations, making them also applicable
      to empty dicts, or allowing them also to return
      the empty dict. Because the change_arg/3 based
      operations can the modify given dict always.
      The recent new non-backtracking operations are:

    dict_set(T, K, V) : Non-failure signalling,
      override just as in the JSON read semantics.
    dict_add(T, K, V) : Failure signalling if K already present. dict_remove(+Dict, +Term) : Non-failure signalling.

    The non-backtracking dicts are also usable with some
    of the backtracking dicts operations, these operations
    extend to compounds modified via change_arg/3:

    dict_current(T, K, V) : Failure signalling if K isn not present.
    Etc.. Etc..

    But currently everything is implemented with {}/0
    interpreting as empty dict, which I will probably change
    into {false} everywhere, so that change_arg/3 and

    non change_arg/3 work uniformly. The change will
    affect both library(misc/json) and library(misc/dict).

    Bye

    Mild Shock schrieb:
    Hi,

    Give Money to all the students working on SWI/XSB.
    I feel this phrasing could be misleading:

    Mapping Types: The translation of Python dictionaries takes
    advantage of the syntax of braces, which is supported by all Prologs
    that support DCGs. The term form of a dictionary is;

    https://prolog-lang.org/ImplementersForum/janus-bitrans.html

    The {}/1 syntax is already part of the ISO core standard.
    No ISO DCG standard needed. You find it in section 6.3.6:

    ------------------- cut here -------------------

    6.3.6 Compound terms - curly bracketed term

    A term with principal functor '{}'/1 can also be expressed
    by enclosing its argument in curly brackets.

    term = open curly, term, close curly ;

    Abstract: {}(l)           l
    Priority: 0               1201


    NOTE - For the syntax of an empty curly brackets, see 6.3.1.3.

    ------------------- cut here -------------------

    Bye



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Nov 14 17:44:05 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    I wrote:

    - library(misc/dict) empty dict:
    The reader order influences the backtracking operations,
    as well the non-backtracking operations I am
    currently working on.

    Well, that is insofar as the input order is kept intact,
    as required by Python/JavaScript. But semantically
    one might expect that the order doesn't make

    a dent, since dicts are leaning towards maps,
    and maps usually have this equality, derived
    from some thinking in higher order logic:

    f1 = f2 <=> forall x(f1(x) = f2(x))

    On the other hand input order has a lot of advantages
    for certain tooling, like making a "diff" of two
    JSON texts, etc. The original ISO core {}/1

    curly bracket surely has this behaviour:

    ?- {a:1,b:2} = {b:2,a:1}.
    fail

    Python will disagree, and say the above two are
    equivalent. And the gist of PIP0102 and PIP0104
    is probably to find the two also equivalent.

    Maybe the next step, besides finding O(1) access
    and modification, would be to change the equality
    as well, but not based on some dynamic/static dicts,

    rather somehow behind the scenes for {}/1. Here
    the tricky part is not to get into conflict with
    the DCG use of {}/1 and the module standard, since

    this DCG is not a dict:

    foo --> {bar:baz}.

    bar:baz is a qualified predicate call. But DCG is
    one of those preprocessors, and it might all boil
    down to a clever prioritizing of when which preprocessor

    does what. Challenging!
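
    A minimal sketch of what such an order-insensitive equality
    could look like for {}/1 terms (dict_equiv/2 and comma_list/2
    are hypothetical helper names; duplicate keys and the DCG
    collision discussed above are ignored):

    % dict_equiv(+D1, +D2): succeed if the two curly-bracketed
    % dicts carry the same Key:Value pairs, regardless of order.
    dict_equiv({}, {}).
    dict_equiv({P}, {Q}) :-
       comma_list(P, L1),
       comma_list(Q, L2),
       msort(L1, S),
       msort(L2, S).

    % comma_list(+Conjunction, -List): flatten a (,)/2 chain.
    comma_list((A,B), [A|T]) :- !, comma_list(B, T).
    comma_list(A, [A]).

    ?- dict_equiv({a:1,b:2}, {b:2,a:1}).
    true.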

    Bye

    Mild Shock schrieb:
    Hi,

    Here some more testing, that shows that Affine Dicts
    are quite different from SWI-Dicts, since they do
    not sort the keys:

    JSON.parse('{"a":123, "b":"abc", "a":456}')
    {a: 456, b: 'abc'}

    JSON.parse('{"b":"abc", "a":123, "a":456}')
    {b: 'abc', a: 456}

    This is now the standard across JavaScript and Python.
    Since Python 3.7, the insertion-order behavior of
    dict is a language guarantee.

    Further ECMAScript defines a deterministic property
    order for plain objects, String keys (non-integer)
    are in insertion order.

    Still JavaScript and Python seem not to mourne some
    performance issues. ChatGPT claims JavaScript: Order
    is tracked efficiently, and Python: Order is cheap

    because of a major redesign. So maybe lets figure
    out how to do either or both transparently. So far
    my Affine do not perform key sorting, not having

    the beauty of O(log N) access as SWI has sometimes.
    The access is O(N). But I expect it can be made O(1)
    under the hood! Not yet sure, the arrow functions can

    do it, since they provide hash table of anonymous
    predicate clauses. But this is still a little too
    heavy for dicts I guess.

    Bye

    P.S.: Corollary, ditch PIP0102 and PIP0104 as well.

    Dictionaries in Prolog (dynamic) https://prolog-lang.org/ImplementersForum/0102-dicts.html

    Terms with named arguments (static dicts) https://prolog-lang.org/ImplementersForum/0104-argnames.html

    Mild Shock schrieb:
    Hi,

    A few months / years ago, I called the {}/1 based
    Dicts, Affine Dicts, because they play with the
    syntax affinity between JavaScript, Python and JSON

    dicts, and the curly backeted terms, when used
    in connection with (:)/2 and (,)/2. Which lead
    to the libraries library(misc/json) and

    library(misc/dict). Recently I made a few
    observations, namely:

    - library(misc/json) deduplicate:
       The library does not yet deduplicate key value pairs.
       It seems that JavaScript and Python use a replacement
       strategy where the first pair defines the order
       and the last pair defines the definite value:

    JSON.parse('{"a":123, "b":"abc", "a":456}')
    {a: 456, b: 'abc'}

    - library(misc/dict) empty dict:
       The reader order influences the backtracking operations,
       as well the non-backtracking operations I am
       currently working on. Now I would prefer that
       the empty dict is represented as {true} using {}/1
       and not using {}/0. This would facilitate the non-
       backtracking operations, making them also applicable
       to empty dicts, or allowing them also to return
       the empty dict. Because the change_arg/3 based
       operations can the modify given dict always.
       The recent new non-backtracking operations are:

    dict_set(T, K, V) : Non-failure signalling,
       override just as in the JSON read semantics.
    dict_add(T, K, V) : Failure signalling if K already present.
    dict_remove(+Dict, +Term) : Non-failure signalling.

    The non-backtracking dicts are also usable with some
    of the backtracking dicts operations, these operations
    extend to compounds modified via change_arg/3:

    dict_current(T, K, V) : Failure signalling if K isn not present.
    Etc.. Etc..

    But currently everything is implemented with {}/0
    interpreting as empty dict, which I will probably change
    into {false} everywhere, so that change_arg/3 and

    non change_arg/3 work uniformly. The change will
    affect both library(misc/json) and library(misc/dict).

    Bye

    Mild Shock schrieb:
    Hi,

    Give Money to all the students working on SWI/XSB.
    I feel this phrasing could be misleading:

    Mapping Types: The translation of Python dictionaries takes
    advantage of the syntax of braces, which is supported by all Prologs that support DCGs. The term form of a dictionary is;

    https://prolog-lang.org/ImplementersForum/janus-bitrans.html

    The {}/1 syntax is already part of the ISO core standard.
    No ISO DCG standard needed. You find it in section 6.3.6:

    ------------------- cut here -------------------

    6.3.6 Compound terms - curly bracketed term

    A term with principal functor '{}'/1 can also be expressed
    by enclosing its argument in curly brackets.

    term = open curly, term, close curly ;

    Abstract: {}(l)           l
    Priority: 0               1201


    NOTE - For the syntax of an empty curly brackets, see 6.3.1.3.

    ------------------- cut here -------------------

    Bye




    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sun Nov 16 11:21:05 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    How it started, some useless GOFAI framing and
    production systems lore:

    Computational Logic and Human Thinking:
    How to Be Artificially Intelligent https://www.cambridge.org/core/books/computational-logic-and-human-thinking/C2AFB0483D922944067DBC76FFFEB295

    How it's going, please note CodeMender from Google:

    New Google Riftrunner AI (Gemini 3) Shocks Everyone https://www.youtube.com/watch?v=F_YWQ12qQ8M

    Especially note the section about CodeMender(*), an AI
    built on Gemini, which inspects and suggests changes
    to open source projects.

    So what's the rule for predicting the future in AI? Well,
    just take skeptics, like Boris the Loris (**) (nah, we don't
    use fuzz testing here; CodeMender uses this among other

    methods), Linus Torvalds (nah, AI for open source is still
    far away; CodeMender is here) etc. Negate what they are
    saying and you get a perfect prediction for 2025 / 2026.

    LoL

    Bye

    (*) Already an *old* announcement from October 6, 2025:

    Introducing CodeMender: an AI agent for code security https://deepmind.google/blog/introducing-codemender-an-ai-agent-for-code-security/

    (**) Ok, when you don't find Boris the Loris on
    SWI-Prolog discourse, you might find him here:

    Hello. My name is Boris and this is my family. We're
    lorises and we are primates - a bit like small
    monkeys. We tend to move quite slowly which is
    why we are Slow Lorises. We have big eyes so we
    can see well in the dark to catch insects for our dinner.

    My name... is Boris
    https://x.com/mrborisloris


    Mild Shock schrieb:
    Hi,

    Descartes’ “divide problems into parts” works
    only for well-behaved, linear, decomposable systems.
    But its just that parts might end up as Schrödingers

    equation. It could be that stable diffusion is the
    new constraint solver. In a sense, stable diffusion
    models (or other generative AI) are functioning as

    probabilistic, fuzzy constraint solvers — but in a
    very different paradigm from classical logic or
    formal methods. But what was neglected?

    - Cybernetics (1940s–50s)
    Focused on feedback loops, control, and self-regulation
    in machines and biological systems. Showed that
    decomposition can fail because subparts are interdependent.

    - Chaos Theory (1960s–80s)
    Nonlinear deterministic systems can produce unpredictable,
    sensitive dependence on initial conditions. Decomposition
    into parts is tricky: small errors explode, and “solving
    subparts” may not help predict the whole.

    - Santa Fe Institute & Complex Systems (1980s–present)
    Studied emergent behavior, networks, adaptation,
    self-organization. Linear, reductionist thinking fails
    to capture dynamics of economic, social, and ecological systems.

    Bye

    Mild Shock schrieb:
    Hi,

    How it started:

    https://conceptbase.sourceforge.net/

    How its going:

    https://www.ibm.com/products/datastax

    The problem with claims such as " Formal languages,
    such as KAOS, are based on predicate logic and
    capture additional details about an application
    in a precise manner. They also provide a foundation
    for reasoning with information models." is that
    every thing in the quoted sentence is wrong.

    Real AI systems scale by approximation,
    vectorization, distributed representations,
    and partial knowledge — not by globally
    consistent logical models. No classical requirements
    language or ontology captures the informal
    cognitive machinery that makes
    intelligence flexible. Intelligence needs the
    whole messy cognitive spectrum.

    Somehow DataStax looks like n8n married AI embedding.
    I hope Amazon, Meta, Google, etc.. get the message.
    I don't worry about Microsoft, they might come with

    something from their Encarta corner and Copilot+ is
    more Local AI. After all we need things like Wikidata
    in a Robot and not in a Data Center.

    LoL

    Bye



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sun Nov 16 12:07:53 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Something tells me the Prolog community has a
    severe blind spot in their logic education.
    Possibly they never touch a book like this here,

    not even with tweezers:

    Undergraduate Texts in Mathematics - 1983
    H.-D. Ebbinghaus et al. - Mathematical Logic http://www.fuchs-braun.com/media/ca80d9e55f6d3bfaffff8005fffffff0.pdf

    The front cover features a smiling face,
    illustrating Ehrenfeucht-Fraïssé (EF) games.
    There is a compelling relationship between

    EF games and fuzz testing: just take A and B, a formal
    form of a spec and of some code. This is quite
    different from Lorenzen games, where the initial

    set-up is different. But here, if Anna plays
    Player II in G(M1,M2) and Bert plays Player II
    in G(M2,M1), then whenever Anna has a winning strategy,

    Bert has a winning strategy as well. Sounds like
    bisimulation again. One of the biggest struggles
    for Boris the Loris and Nazi Retart Julio of

    all time. Or this complete blunder, navigating
    in the dark, trying to identify an elephant:

    @kuniaki.mukai https://swi-prolog.discourse.group/t/cyclic-terms-unification-x-f-f-x-x-y-f-y-f-y-x-y/9097/72

    Bye

    P.S.: Most likely the cardinal sin of the Prolog
    Community is that they don't apply Proof Theoretic
    methods and Model Theoretic methods on equal

    footing. They don't understand how the two
    methods are related, even on the most basic
    level, such as counter models, which is a level

    more basic than EF games. One of the future
    challenges for the community could be extending
    proof theoretic methods and model theoretic

    methods to (seemingly) higher order logic. This
    could be quite messy, or not? I am currently
    fascinated by Feferman's Operational Sets and

    by Melvin Fitting's work in higher order logic.

    Mild Shock schrieb:
    Hi,

    How it started, some useless GOFAI framing and
    production systems lore:

    Computational Logic and Human Thinking:
    How to Be Artificially Intelligent https://www.cambridge.org/core/books/computational-logic-and-human-thinking/C2AFB0483D922944067DBC76FFFEB295


    How its going, please note CodeMender from Google:

    New Google Riftrunner AI (Gemini 3) Shocks Everyone https://www.youtube.com/watch?v=F_YWQ12qQ8M

    Especially note the section about CodeMender(*), and AI
    bild on Gemini, which does inspect and suggest changes
    to OpenSource projects.

    So whats the rule of predicting the future in AI. Well
    just take skeptics, like Boris the Loris (**) (nah we don't
    use Fuzzy Testing here, CodeMender uses this among other

    methods), Linus Torwald (nah, AI for OpenSource is still
    far away, CodeMender is here) etc.. Negate what they are
    saying and you get a perfect prediction for 2025 / 2026.

    LoL

    Bye

    (*) Already *old* anouncement from October 6, 2025:

    Introducing CodeMender: an AI agent for code security https://deepmind.google/blog/introducing-codemender-an-ai-agent-for-code-security/


    (**) Ok, when you don't find Boris the Loris on
    SWI-Prolog discourse, you might find him here:

    Hello. My name is Boris and this is my family. We're
    lorises and we are primates - a bit like small
    monkeys. We tend to move quite slowly which is
    why we are Slow Lorises. We have big eyes so we
    can see well in the dark to catch insects for our dinner.

    My name... is Boris
    https://x.com/mrborisloris


    Mild Shock schrieb:
    Hi,

    Descartes’ “divide problems into parts” works
    only for well-behaved, linear, decomposable systems.
    But its just that parts might end up as Schrödingers

    equation. It could be that stable diffusion is the
    new constraint solver. In a sense, stable diffusion
    models (or other generative AI) are functioning as

    probabilistic, fuzzy constraint solvers — but in a
    very different paradigm from classical logic or
    formal methods. But what was neglected?

    - Cybernetics (1940s–50s)
    Focused on feedback loops, control, and self-regulation
    in machines and biological systems. Showed that
    decomposition can fail because subparts are interdependent.

    - Chaos Theory (1960s–80s)
    Nonlinear deterministic systems can produce unpredictable,
    sensitive dependence on initial conditions. Decomposition
    into parts is tricky: small errors explode, and “solving
    subparts” may not help predict the whole.

    - Santa Fe Institute & Complex Systems (1980s–present)
    Studied emergent behavior, networks, adaptation,
    self-organization. Linear, reductionist thinking fails
    to capture dynamics of economic, social, and ecological systems.

    Bye

    Mild Shock schrieb:
    Hi,

    How it started:

    https://conceptbase.sourceforge.net/

    How its going:

    https://www.ibm.com/products/datastax

    The problem with claims such as " Formal languages,
    such as KAOS, are based on predicate logic and
    capture additional details about an application
    in a precise manner. They also provide a foundation
    for reasoning with information models." is that
    every thing in the quoted sentence is wrong.

    Real AI systems scale by approximation,
    vectorization, distributed representations,
    and partial knowledge — not by globally
    consistent logical models. No classical requirements
    language or ontology captures the informal
    cognitive machinery that makes
    intelligence flexible. Intelligence needs the
    whole messy cognitive spectrum.

    Somehow DataStax looks like n8n married AI embedding.
    I hope Amazon, Meta, Google, etc.. get the message.
    I don't worry about Microsoft, they might come with

    something from their Encarta corner and Copilot+ is
    more Local AI. After all we need things like Wikidata
    in a Robot and not in a Data Center.

    LoL

    Bye




    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sun Nov 16 13:10:08 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Set theory was initially praised as a foundation
    for mathematics. But the reduction of mathematics
    to a foundation was also carried out in type theories.
    And we see that this reduction is rather arbitrary:

    we now have dozens of competing set theories and
    type theories. What's more stunning, the reduction
    allows us to view Model Theory in a Proof Theory
    fashion, possibly implying that Model Theory doesn't

    exist or is not needed? But this is a dangerous
    conclusion, since the foundation might be a theoretical,
    technical thing, far away from practical use.
    Take naive comprehension: was it wrong?

    ∃x∀y(y e x <=> phi(y))

    It was fixed after Russell/Frege by Zermelo's separation axiom:

    ∀z∃x∀y(y e x <=> y e z & phi(y))
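
    (The offending instance is phi(y) = not (y e y): the naive
    schema yields a set r with r e r <=> not (r e r), which is
    Russell's paradox, whereas the bounded form only yields
    r e r <=> r e z & not (r e r), from which one merely
    concludes not (r e z).)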

    But in reverse mathematics we find another refinement:

    ∃x∀y(y e x <=> phi(y)) for phi a certain class

    So instead of introducing a set-like upper bound z,
    where we would only look at the projection of
    comprehension to z, we take the naive intuition more
    seriously, born from informal usage, and say:

    naive comprehension was not really wrong!

    Bye

    P.S.: It would be interesting to see whether
    Operational Set Theory, as presented today, has
    a developed Model Theoretic language. Or does it
    also subscribe to the Computational Logic Primate?

    5.3 Relativizing operational set theory
    It is shown in [45] that a direct relativization
    of operational reflection leads to theories that are
    significantly stronger than theories formalizing the
    admissible analogues of classical large cardinal axioms.
    This refutes the conjecture 14(1) on p. 977 of Feferman [19]. https://home.inf.unibe.ch/ltg/publications/2018/jae18.pdf

    Is relativizing the backdoor to a model theory
    that might also be useful for OST? OST is used like
    Lego bricks here: adding this or that, one gets different
    set theories (or maybe type theories).

    Operational set theory and small large cardinals
    Solomon Feferman - 2006
    Conjecture 14.
    (1) OST + (Inacc) ≡ KPi.
    (2) OST + (Mahlo) ≡ KPM.
    (3) OST + (Reg2) ≡ KPω +( 3 −Reflection). https://math.stanford.edu/~feferman/papers/OST-Final.pdf

    One gets the impression of OST being a sub-foundation toy.

    Mild Shock schrieb:
    Hi,

    Something tells me the Prolog community has a
    sever blind spot, in their Logic education.
    Possibly never touch a book like this here,

    even not with tweezers:

    Undergraduate Texts in Mathematics - 1983
    H .- D. Ebbinghaus et. al - Mathematical Logic http://www.fuchs-braun.com/media/ca80d9e55f6d3bfaffff8005fffffff0.pdf

    The front cover features a smiling face,
    illustrating Ehrenfeucht Fraisse (EF) games.
    There is a compelling relationship between

    EF and Fuzzy Testing. Just take A and B, a formal
    form of a spec and of some code. This is quite
    different from Lorentz Games, where the initial

    set-up is different. But here if Anna plays
    Player II in G(M1,M2) and Bert plays Player II
    in G(M2,M1). Then if Anna has a winning strategy,

    then Bert has a winning strategy. Sounds like
    Bisimulation again. One of the biggest struggels
    for Boris the Loris and Nazi Retart Julio of

    all time. Or this complete blunder, navigating
    in the dark, trying to identify an elephant:

    @kuniaki.mukai https://swi-prolog.discourse.group/t/cyclic-terms-unification-x-f-f-x-x-y-f-y-f-y-x-y/9097/72


    Bye

    P.S.: Mostlikely the cardinal sin of the Prolog
    Community is that they don't apply Proof Theoretic
    methods and Model Theoretic methods on equal

    footing. They don't understand how the two
    methods are related, even on the most basic
    level, such as counter models, which is a level

    more basic than EF games. One of the future
    challenges for the community could be extending
    proof theoretic methods and model theoretic

    methods to (seemingly) higher order logic. This
    could be quite messy, or not? I am currengly
    fascinated by Feferman Operative Sets and

    like Melvin Fittings work in higher order logic.

    Mild Shock schrieb:
    Hi,

    How it started, some useless GOFAI framing and
    production systems lore:

    Computational Logic and Human Thinking:
    How to Be Artificially Intelligent
    https://www.cambridge.org/core/books/computational-logic-and-human-thinking/C2AFB0483D922944067DBC76FFFEB295


    How its going, please note CodeMender from Google:

    New Google Riftrunner AI (Gemini 3) Shocks Everyone
    https://www.youtube.com/watch?v=F_YWQ12qQ8M

    Especially note the section about CodeMender(*), and AI
    bild on Gemini, which does inspect and suggest changes
    to OpenSource projects.

    So whats the rule of predicting the future in AI. Well
    just take skeptics, like Boris the Loris (**) (nah we don't
    use Fuzzy Testing here, CodeMender uses this among other

    methods), Linus Torwald (nah, AI for OpenSource is still
    far away, CodeMender is here) etc.. Negate what they are
    saying and you get a perfect prediction for 2025 / 2026.

    LoL

    Bye

    (*) Already *old* anouncement from October 6, 2025:

    Introducing CodeMender: an AI agent for code security
    https://deepmind.google/blog/introducing-codemender-an-ai-agent-for-code-security/


    (**) Ok, when you don't find Boris the Loris on
    SWI-Prolog discourse, you might find him here:

    Hello. My name is Boris and this is my family. We're
    lorises and we are primates - a bit like small
    monkeys. We tend to move quite slowly which is
    why we are Slow Lorises. We have big eyes so we
    can see well in the dark to catch insects for our dinner.

    My name... is Boris
    https://x.com/mrborisloris


    Mild Shock schrieb:
    Hi,

    Descartes’ “divide problems into parts” works
    only for well-behaved, linear, decomposable systems.
    But its just that parts might end up as Schrödingers

    equation. It could be that stable diffusion is the
    new constraint solver. In a sense, stable diffusion
    models (or other generative AI) are functioning as

    probabilistic, fuzzy constraint solvers — but in a
    very different paradigm from classical logic or
    formal methods. But what was neglected?

    - Cybernetics (1940s–50s)
    Focused on feedback loops, control, and self-regulation
    in machines and biological systems. Showed that
    decomposition can fail because subparts are interdependent.

    - Chaos Theory (1960s–80s)
    Nonlinear deterministic systems can produce unpredictable,
    sensitive dependence on initial conditions. Decomposition
    into parts is tricky: small errors explode, and “solving
    subparts” may not help predict the whole.

    - Santa Fe Institute & Complex Systems (1980s–present)
    Studied emergent behavior, networks, adaptation,
    self-organization. Linear, reductionist thinking fails
    to capture dynamics of economic, social, and ecological systems.

    Bye

    Mild Shock schrieb:
    Hi,

    How it started:

    https://conceptbase.sourceforge.net/

    How its going:

    https://www.ibm.com/products/datastax

    The problem with claims such as " Formal languages,
    such as KAOS, are based on predicate logic and
    capture additional details about an application
    in a precise manner. They also provide a foundation
    for reasoning with information models." is that
    every thing in the quoted sentence is wrong.

    Real AI systems scale by approximation,
    vectorization, distributed representations,
    and partial knowledge — not by globally
    consistent logical models. No classical requirements
    language or ontology captures the informal
    cognitive machinery that makes
    intelligence flexible. Intelligence needs the
    whole messy cognitive spectrum.

    Somehow DataStax looks like n8n married AI embedding.
    I hope Amazon, Meta, Google, etc.. get the message.
    I don't worry about Microsoft, they might come with

    something from their Encarta corner and Copilot+ is
    more Local AI. After all we need things like Wikidata
    in a Robot and not in a Data Center.

    LoL

    Bye





    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sun Nov 16 16:27:57 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    While mathematics is possibly not subject to "deflationism":

    "According to deflationists, such suggestions are
    mistaken, and, moreover, they all share a common mistake.
    The common mistake is to assume that truth has a nature
    of the kind that philosophers might find out about and
    develop theories of."
    https://plato.stanford.edu/entries/truth-deflationary/

    It's the other way around for "computation", since it is more real?

    For example, Robert Kowalski's AI booklet is a master class
    in snake oil selling, only looking at the "Logic" in
    "Computational Logic", communicating a certain fascination
    with logic representation, including chains of thought and
    complex dynamic situations, while largely ignoring serious
    discussion of Algorithm = Logic + Control. So maybe, besides
    the chapter "The grass is wet", adding a chapter "If
    we were God" would have helped in putting "Computation" back
    into the picture. After all, solving a connection graph is
    only a small part of reasoning. It is only the matrix in

    Herbrand's original method, but there is also another, more
    creative step related to quantifiers in Herbrand's original
    method. See also Jens Otten's implementation of leanCoP.
    method. See also Jens Ottens implementation of leanCoP.

    Bye

    See also here:

    The Pocket Reasoner– Automatic Reasoning on Small Devices https://www.ntnu.no/ojs/index.php/nikt/article/download/5368/4844/20629

    Mild Shock schrieb:
    Hi,

    Set theory was initially praised as a foundation
    for mathematics. But the reduction of mathematics
    to foundation, was also carried out in type theories.
    And we see that this reduction is rather arbitrary,

    we now have dozen of competing set theories and
    type theories. Whats more stunning, the reduction
    allows us to view Model Theory in a Proof Theory
    fashion, possibly implying that Model Theory doesn't

    exist or is not needed? But this is a dangerous
    conclusion, since the foundation might be a theoretical
    technical thing, far away from practical use.
    Take naive comprehension, was it wrong?

    ∃x∀y(y e x <=> phi(y))

    It was fixed after Russell/Frege by ?von Neumann?:

    ∀z∃x∀y(y e x <=> y e z & phi(y))

    But in reverse mathematics we find another refinement:

    ∃x∀y(y e x <=> phi(y))   for phi a certain class

    So instead introducing a set like upper bound z,
    where we would only look at the projection of
    comprehension to z, we take the naive intuition more
    seriously, born from informal usage, and say,

    naive comprehension was not really wrong!

    Bye

    P.S.: It would be interesting to see whether
    Operational Set theory as presented today, has
    a developed Model Theoretic language? Or does it
    also subscribe to the Computational Logic Primate?

    5.3 Relativizing operational set theory
    It is shown in [45] that a direct relativization
    of operational reflection leads to theories that are
    significantly stronger than theories formalizing the
    admissible analogues of classical large cardinal axioms.
    This refutes the conjecture 14(1) on p. 977 of Feferman [19]. https://home.inf.unibe.ch/ltg/publications/2018/jae18.pdf

    Is relativizing the backdoor of a model theory,
    that might also be useful for OST? OST is used like
    Lego bricks here. Adding this or that, one gets different
    set theories (or maybe type theories).

    Operational set theory and small large cardinals
    Solomon Feferman - 2006
    Conjecture 14.
     (1) OST + (Inacc) ≡ KPi.
     (2) OST + (Mahlo) ≡ KPM.
     (3) OST + (Reg2) ≡ KPω +( 3 −Reflection). https://math.stanford.edu/~feferman/papers/OST-Final.pdf

    One gets the impression of OST being a sub-foundation toy.

    Mild Shock schrieb:
    Hi,

    Something tells me the Prolog community has a
    sever blind spot, in their Logic education.
    Possibly never touch a book like this here,

    even not with tweezers:

    Undergraduate Texts in Mathematics - 1983
    H .- D. Ebbinghaus et. al - Mathematical Logic
    http://www.fuchs-braun.com/media/ca80d9e55f6d3bfaffff8005fffffff0.pdf

    The front cover features a smiling face,
    illustrating Ehrenfeucht Fraisse (EF) games.
    There is a compelling relationship between

    EF and Fuzzy Testing. Just take A and B, a formal
    form of a spec and of some code. This is quite
    different from Lorentz Games, where the initial

    set-up is different. But here if Anna plays
    Player II in G(M1,M2) and Bert plays Player II
    in G(M2,M1). Then if Anna has a winning strategy,

    then Bert has a winning strategy. Sounds like
    Bisimulation again. One of the biggest struggels
    for Boris the Loris and Nazi Retart Julio of

    all time. Or this complete blunder, navigating
    in the dark, trying to identify an elephant:

    @kuniaki.mukai
    https://swi-prolog.discourse.group/t/cyclic-terms-unification-x-f-f-x-x-y-f-y-f-y-x-y/9097/72


    Bye

    P.S.: Mostlikely the cardinal sin of the Prolog
    Community is that they don't apply Proof Theoretic
    methods and Model Theoretic methods on equal

    footing. They don't understand how the two
    methods are related, even on the most basic
    level, such as counter models, which is a level

    more basic than EF games. One of the future
    challenges for the community could be extending
    proof theoretic methods and model theoretic

    methods to (seemingly) higher order logic. This
    could be quite messy, or not? I am currengly
    fascinated by Feferman Operative Sets and

    like Melvin Fittings work in higher order logic.

    Mild Shock schrieb:
    Hi,

    How it started, some useless GOFAI framing and
    production systems lore:

    Computational Logic and Human Thinking:
    How to Be Artificially Intelligent
    https://www.cambridge.org/core/books/computational-logic-and-human-thinking/C2AFB0483D922944067DBC76FFFEB295


    How its going, please note CodeMender from Google:

    New Google Riftrunner AI (Gemini 3) Shocks Everyone
    https://www.youtube.com/watch?v=F_YWQ12qQ8M

    Especially note the section about CodeMender(*), and AI
    bild on Gemini, which does inspect and suggest changes
    to OpenSource projects.

    So whats the rule of predicting the future in AI. Well
    just take skeptics, like Boris the Loris (**) (nah we don't
    use Fuzzy Testing here, CodeMender uses this among other

    methods), Linus Torwald (nah, AI for OpenSource is still
    far away, CodeMender is here) etc.. Negate what they are
    saying and you get a perfect prediction for 2025 / 2026.

    LoL

    Bye

    (*) Already *old* anouncement from October 6, 2025:

    Introducing CodeMender: an AI agent for code security
    https://deepmind.google/blog/introducing-codemender-an-ai-agent-for-code-security/


    (**) OK, if you don't find Boris the Loris on
    SWI-Prolog discourse, you might find him here:

    Hello. My name is Boris and this is my family. We're
    lorises and we are primates - a bit like small
    monkeys. We tend to move quite slowly which is
    why we are Slow Lorises. We have big eyes so we
    can see well in the dark to catch insects for our dinner.

    My name... is Boris
    https://x.com/mrborisloris


    Mild Shock schrieb:
    Hi,

    Descartes’ “divide problems into parts” works
    only for well-behaved, linear, decomposable systems.
    But its just that parts might end up as Schrödingers

    equation. It could be that stable diffusion is the
    new constraint solver. In a sense, stable diffusion
    models (or other generative AI) are functioning as

    probabilistic, fuzzy constraint solvers — but in a
    very different paradigm from classical logic or
    formal methods. But what was neglected?

    - Cybernetics (1940s–50s)
    Focused on feedback loops, control, and self-regulation
    in machines and biological systems. Showed that
    decomposition can fail because subparts are interdependent.

    - Chaos Theory (1960s–80s)
    Nonlinear deterministic systems can produce unpredictable,
    sensitive dependence on initial conditions. Decomposition
    into parts is tricky: small errors explode, and “solving
    subparts” may not help predict the whole.

    - Santa Fe Institute & Complex Systems (1980s–present)
    Studied emergent behavior, networks, adaptation,
    self-organization. Linear, reductionist thinking fails
    to capture dynamics of economic, social, and ecological systems.

    Bye

    Mild Shock schrieb:
    Hi,

    How it started:

    https://conceptbase.sourceforge.net/

    How its going:

    https://www.ibm.com/products/datastax

    The problem with claims such as " Formal languages,
    such as KAOS, are based on predicate logic and
    capture additional details about an application
    in a precise manner. They also provide a foundation
    for reasoning with information models." is that
    every thing in the quoted sentence is wrong.

    Real AI systems scale by approximation,
    vectorization, distributed representations,
    and partial knowledge — not by globally
    consistent logical models. No classical requirements
    language or ontology captures the informal
    cognitive machinery that makes
    intelligence flexible. Intelligence needs the
    whole messy cognitive spectrum.

    Somehow DataStax looks like n8n married AI embedding.
    I hope Amazon, Meta, Google, etc.. get the message.
    I don't worry about Microsoft, they might come with

    something from their Encarta corner and Copilot+ is
    more Local AI. After all we need things like Wikidata
    in a Robot and not in a Data Center.

    LoL

    Bye






    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Tue Nov 18 23:32:49 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    This is almost like Peter Aczel's lost notes,
    where he describes cyclic formulas. Works astonishingly
    well for cyclic Prolog terms and arrow functions,

    see the odd/even example (and the small cyclic-term
    illustration below, after the link). But jokes aside,
    check this out:

    Q: WHAT IS THE KEY TO SUCCESS?
    A: HIRE THE RIGHT EMPLOYEES!

    Q: HOW DO YOU KNOW YOU HIRED THE RIGHT ONES?
    A: YOU KNOW BECAUSE THE BUSINESS IS SUCCESSFUL.

    Q: SO THE KEY TO SUCCESS IS CIRCULAR REASONING?
    A: YES, BECAUSE CIRCULAR REASONING IS THE KEY.
    DilbertCartoonist@gmail.com

    Non-Well-Founded Proofs and Non-Well-Founded Research
    https://logic-mentoring-workshop.github.io/lics25/slides/lmw_Liron.pdf
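
    And since cyclic terms came up again: a tiny illustration,
    my own guess at the flavour of the odd/even example rather
    than anything from Aczel's notes, using SWI-Prolog's
    rational-tree unification and the built-in cyclic_term/1.
    The functors od/1, ev/1, succ/1 are made up.

    % f(f(X)) with X = f(f(X)) and f(Y) with Y = f(Y) unfold to
    % the same infinite term, so the final X = Y succeeds.
    ?- X = f(f(X)), Y = f(Y), X = Y.

    % A mutual odd/even style pair folded into two cyclic terms;
    % cyclic_term/1 confirms the result is rational, not a finite tree.
    ?- E = ev(succ(O)), O = od(succ(E)), cyclic_term(E).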

    Bye

    Mild Shock schrieb:
    Hi,

    How it started, some useless GOFAI framing and
    production systems lore:

    Computational Logic and Human Thinking:
    How to Be Artificially Intelligent
    https://www.cambridge.org/core/books/computational-logic-and-human-thinking/C2AFB0483D922944067DBC76FFFEB295


    How it's going, please note CodeMender from Google:

    New Google Riftrunner AI (Gemini 3) Shocks Everyone
    https://www.youtube.com/watch?v=F_YWQ12qQ8M

    Especially note the section about CodeMender(*), an AI
    built on Gemini, which inspects and suggests changes
    to open-source projects.

    So what's the rule for predicting the future in AI?
    Well, just take the skeptics, like Boris the Loris (**)
    (nah, we don't use Fuzzy Testing here - CodeMender uses
    it among other methods), Linus Torvalds (nah, AI for
    open source is still far away - CodeMender is here), etc.
    Negate what they are saying and you get a perfect
    prediction for 2025 / 2026.

    LoL

    Bye

    (*) Already an *old* announcement from October 6, 2025:

    Introducing CodeMender: an AI agent for code security
    https://deepmind.google/blog/introducing-codemender-an-ai-agent-for-code-security/


    (**) OK, if you don't find Boris the Loris on
    SWI-Prolog discourse, you might find him here:

    Hello. My name is Boris and this is my family. We're
    lorises and we are primates - a bit like small
    monkeys. We tend to move quite slowly which is
    why we are Slow Lorises. We have big eyes so we
    can see well in the dark to catch insects for our dinner.

    My name... is Boris
    https://x.com/mrborisloris


    Mild Shock schrieb:
    Hi,

    Descartes’ “divide problems into parts” works
    only for well-behaved, linear, decomposable systems.
    But its just that parts might end up as Schrödingers

    equation. It could be that stable diffusion is the
    new constraint solver. In a sense, stable diffusion
    models (or other generative AI) are functioning as

    probabilistic, fuzzy constraint solvers — but in a
    very different paradigm from classical logic or
    formal methods. But what was neglected?

    - Cybernetics (1940s–50s)
    Focused on feedback loops, control, and self-regulation
    in machines and biological systems. Showed that
    decomposition can fail because subparts are interdependent.

    - Chaos Theory (1960s–80s)
    Nonlinear deterministic systems can produce unpredictable,
    sensitive dependence on initial conditions. Decomposition
    into parts is tricky: small errors explode, and “solving
    subparts” may not help predict the whole.

    - Santa Fe Institute & Complex Systems (1980s–present)
    Studied emergent behavior, networks, adaptation,
    self-organization. Linear, reductionist thinking fails
    to capture dynamics of economic, social, and ecological systems.

    Bye

    Mild Shock schrieb:
    Hi,

    How it started:

    https://conceptbase.sourceforge.net/

    How its going:

    https://www.ibm.com/products/datastax

    The problem with claims such as " Formal languages,
    such as KAOS, are based on predicate logic and
    capture additional details about an application
    in a precise manner. They also provide a foundation
    for reasoning with information models." is that
    every thing in the quoted sentence is wrong.

    Real AI systems scale by approximation,
    vectorization, distributed representations,
    and partial knowledge — not by globally
    consistent logical models. No classical requirements
    language or ontology captures the informal
    cognitive machinery that makes
    intelligence flexible. Intelligence needs the
    whole messy cognitive spectrum.

    Somehow DataStax looks like n8n married AI embedding.
    I hope Amazon, Meta, Google, etc.. get the message.
    I don't worry about Microsoft, they might come with

    something from their Encarta corner and Copilot+ is
    more Local AI. After all we need things like Wikidata
    in a Robot and not in a Data Center.

    LoL

    Bye




    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Tue Nov 25 20:05:16 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Ha Ha, remember this post on SWI-Prolog
    discourse, the primary source for morons such
    as Boris the Loris and Nazi Retard Julio:

    "The idea that LLM-based methods can become
    more intelligent by using massive amounts
    of computation is false. They can generate
    more kinds of BS, but at an enormous cost in
    hardware and in the electricity to run that
    massive hardware. But without methods of
    evaluation, the probability that random mixtures
    of data are true or useful or worth the cost
    of generating them becomes less and less likely."
    - John Sowa
    https://swi-prolog.discourse.group/t/prolog-and-llms-genai/8699

    Guess what: my new ThinkCentre, which just arrived
    via Lenovo, China, with a Snapdragon X, for around
    700 USD, can easily run some inferencing locally.

    I was using AnythingLLM; it has a somewhat idiotic
    little Electron user interface, but it can dedicatedly
    support the Snapdragon X NPU and models, via QNN/ONNX:

    The all-in-one AI application
    https://anythingllm.com/

    Tested a LLaMA model, a little bit chatty to
    be honest, and a Phi Silica model, not yet that
    good at coding. Where did the massive computation
    come from? From the SoC and the unified memory
    of the Snapdragon. I have 32 GB, of which 16 GB is
    shared with the NPU. So you don't need to buy an
    Aura Yoga laptop, which has a separate NVIDIA
    graphics card with only 8 GB. That graphics card
    will be useless: many interesting models are
    above 8 GB. And yes, the massive computation
    obviously leads to more intelligence. The latter
    is a riddle for every Prologer: how could more
    LIPS (logical inferences per second) lead to
    more intelligence?

    Bye

    Mild Shock schrieb:
    Hi,

    How it started:

    https://conceptbase.sourceforge.net/

    How its going:

    https://www.ibm.com/products/datastax

    The problem with claims such as " Formal languages,
    such as KAOS, are based on predicate logic and
    capture additional details about an application
    in a precise manner. They also provide a foundation
    for reasoning with information models." is that
    every thing in the quoted sentence is wrong.

    Real AI systems scale by approximation,
    vectorization, distributed representations,
    and partial knowledge — not by globally
    consistent logical models. No classical requirements
    language or ontology captures the informal
    cognitive machinery that makes
    intelligence flexible. Intelligence needs the
    whole messy cognitive spectrum.

    Somehow DataStax looks like n8n married AI embedding.
    I hope Amazon, Meta, Google, etc.. get the message.
    I don't worry about Microsoft, they might come with

    something from their Encarta corner and Copilot+ is
    more Local AI. After all we need things like Wikidata
    in a Robot and not in a Data Center.

    LoL

    Bye

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Tue Nov 25 20:14:49 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    This super cute Snapdragon X box has a massive
    benchmark score for quantized neural networks (qANN):

    Machine       sCPU   mCPU    GPU    sANN   hANN    qANN
    AcerSwift     2835   13393   25395   6744   10167    5175
    YogaUltra     2785    9844   30545   7270   13936    4830
    ThinkCentre   2145    9754   13782   1414   16456   40721

    Machine       GPU API   AI backend
    AcerSwift     OpenCL    DirectML
    YogaUltra     OpenCL    DirectML
    ThinkCentre   Vulkan    QNN

    But maybe the low qANN numbers are a problem of
    Geekbench AI and how it uses DirectML, in that it
    cannot yet address the full potential of the NPUs
    on the other two Local AI machines. But impressively,
    the QNN API and the ONNX format go very smoothly on
    the ThinkCentre. For machine translation via quantized
    neural networks (qANN), I see that the ThinkCentre
    is 10 times faster than the other two machines. But I
    guess that with a suitable version of Geekbench AI
    the gap to the other two machines will close. They
    are just too new, so Geekbench AI is lagging behind.

    Bye

    Mild Shock schrieb:
    Hi,

    Ha Ha, remember this post on SWI-Prolog
    discourse, the primary source for morons such
    as Boris the Loris and Nazi Retard Julio:

    "The idea that LLM-based methods can become
    more intelligent by using massive amounts
    of computation is false. They can generate
    more kinds of BS, but at an enormous cost in
    hardware and in the electricity to run that
    massive hardware. But without methods of
    evaluation, the probability that random mixtures
    of data are true or useful or worth the cost
    of generating them becomes less and less likely."
    - John Sowa
    https://swi-prolog.discourse.group/t/prolog-and-llms-genai/8699

    Guess what: my new ThinkCentre, which just arrived
    via Lenovo, China, with a Snapdragon X, for around
    700 USD, can easily run some inferencing locally.

    I was using AnythingLLM; it has a somewhat idiotic
    little Electron user interface, but it can dedicatedly
    support the Snapdragon X NPU and models, via QNN/ONNX:

    The all-in-one AI application
    https://anythingllm.com/

    Tested a LLaMA model, a little bit chatty to
    be honest, and a Phi Silica model, not yet that
    good at coding. Where did the massive computation
    come from? From the SoC and the unified memory
    of the Snapdragon. I have 32 GB, of which 16 GB is
    shared with the NPU. So you don't need to buy an
    Aura Yoga laptop, which has a separate NVIDIA
    graphics card with only 8 GB. That graphics card
    will be useless: many interesting models are
    above 8 GB. And yes, the massive computation
    obviously leads to more intelligence. The latter
    is a riddle for every Prologer: how could more
    LIPS (logical inferences per second) lead to
    more intelligence?

    Bye

    Mild Shock schrieb:
    Hi,

    How it started:

    https://conceptbase.sourceforge.net/

    How its going:

    https://www.ibm.com/products/datastax

    The problem with claims such as " Formal languages,
    such as KAOS, are based on predicate logic and
    capture additional details about an application
    in a precise manner. They also provide a foundation
    for reasoning with information models." is that
    every thing in the quoted sentence is wrong.

    Real AI systems scale by approximation,
    vectorization, distributed representations,
    and partial knowledge — not by globally
    consistent logical models. No classical requirements
    language or ontology captures the informal
    cognitive machinery that makes
    intelligence flexible. Intelligence needs the
    whole messy cognitive spectrum.

    Somehow DataStax looks like n8n married AI embedding.
    I hope Amazon, Meta, Google, etc.. get the message.
    I don't worry about Microsoft, they might come with

    something from their Encarta corner and Copilot+ is
    more Local AI. After all we need things like Wikidata
    in a Robot and not in a Data Center.

    LoL

    Bye


    --- Synchronet 3.21a-Linux NewsLink 1.2