Hi,
How it started:
https://conceptbase.sourceforge.net/
How it's going:
https://www.ibm.com/products/datastax
The problem with claims such as " Formal languages,
such as KAOS, are based on predicate logic and
capture additional details about an application
in a precise manner. They also provide a foundation
for reasoning with information models." is that
everything in the quoted sentence is wrong.
Real AI systems scale by approximation,
vectorization, distributed representations,
and partial knowledge — not by globally
consistent logical models. No classical requirements
language or ontology captures the informal
cognitive machinery that makes
intelligence flexible. Intelligence needs the
whole messy cognitive spectrum.
Somehow DataStax looks like n8n married to AI embeddings.
I hope Amazon, Meta, Google, etc. get the message.
I don't worry about Microsoft; they might come up with
something from their Encarta corner, and Copilot+ is
more local AI. After all, we need things like Wikidata
in a robot and not in a data center.
LoL
Bye
@jp-diegidio Yes, term/goal expansion does not apply to assert.
Does not apply to assert and call, as Jan was saying.
Do you or anybody happen to know why that is so?
b.
Hi,
Descartes’ “divide problems into parts” works
only for well-behaved, linear, decomposable systems.
But it's just that the parts might end up being the
Schrödinger equation. It could be that Stable Diffusion is the
new constraint solver. In a sense, Stable Diffusion
models (or other generative AI) are functioning as
probabilistic, fuzzy constraint solvers — but in a
very different paradigm from classical logic or
formal methods. But what was neglected?
- Cybernetics (1940s–50s)
Focused on feedback loops, control, and self-regulation
in machines and biological systems. Showed that
decomposition can fail because subparts are interdependent.
- Chaos Theory (1960s–80s)
Nonlinear deterministic systems can produce unpredictable,
sensitive dependence on initial conditions. Decomposition
into parts is tricky: small errors explode, and “solving
subparts” may not help predict the whole.
- Santa Fe Institute & Complex Systems (1980s–present)
Studied emergent behavior, networks, adaptation,
self-organization. Linear, reductionist thinking fails
to capture dynamics of economic, social, and ecological systems.
Bye
Mild Shock schrieb:
Hi,
How it started:
https://conceptbase.sourceforge.net/
How it's going:
https://www.ibm.com/products/datastax
The problem with claims such as " Formal languages,
such as KAOS, are based on predicate logic and
capture additional details about an application
in a precise manner. They also provide a foundation
for reasoning with information models." is that
everything in the quoted sentence is wrong.
Real AI systems scale by approximation,
vectorization, distributed representations,
and partial knowledge — not by globally
consistent logical models. No classical requirements
language or ontology captures the informal
cognitive machinery that makes
intelligence flexible. Intelligence needs the
whole messy cognitive spectrum.
Somehow DataStax looks like n8n married to AI embeddings.
I hope Amazon, Meta, Google, etc. get the message.
I don't worry about Microsoft; they might come up with
something from their Encarta corner, and Copilot+ is
more local AI. After all, we need things like Wikidata
in a robot and not in a data center.
LoL
Bye
Hi,
The good thing is that while some people care
for the foundation of reasoning, which also
includes performance of "compiled" knowledge,
some people are like Vestal Virgins that have
never been touched by a running computer program,
still they horde in some SWI-Prolog forum:
Take this question:
@jp-diegidio Yes, term/goal expansion does not apply to assert.
Does not apply to assert and call, as Jan was saying.
Do you or anybody happen to know why that is so?
Guess what? DCG expansion is also not applied to
the so called "dynamic database", see ISO core standard.
My speculation is that it is for speed, and the idea is that
assert/retract/clause are for already "compiled"
knowledge, i.e. facts and rules produced by all the
dozen pipelines that could sit in front of the dynamic
database, before one gets to the bare-bones Horn clauses.
LoL
Bye
P.S.: Try for yourself, no DCG processing:
?- assertz((a --> b)).
true.
?- listing((-->)/2).
:- dynamic (-->)/2.
a-->b.
https://wasm.swi-prolog.org/wasm/tinker
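P.P.S.: If DCG translation is wanted before asserting, one can
of course run the rule through the expansion explicitly. A hedged
sketch, assuming a system that provides expand_term/2 and does the
DCG translation there (assert_dcg/1 is my own made-up name):
% assert a DCG rule in its translated, plain Horn clause form
assert_dcg(Rule) :-
    expand_term(Rule, Clause),   % e.g. (a --> b) becomes a(S0,S) :- b(S0,S)
    assertz(Clause).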
Mild Shock schrieb:
Hi,
Descartes’ “divide problems into parts” works
only for well-behaved, linear, decomposable systems.
But it's just that the parts might end up being the
Schrödinger equation. It could be that Stable Diffusion is the
new constraint solver. In a sense, Stable Diffusion
models (or other generative AI) are functioning as
probabilistic, fuzzy constraint solvers — but in a
very different paradigm from classical logic or
formal methods. But what was neglected?
- Cybernetics (1940s–50s)
Focused on feedback loops, control, and self-regulation
in machines and biological systems. Showed that
decomposition can fail because subparts are interdependent.
- Chaos Theory (1960s–80s)
Nonlinear deterministic systems can produce unpredictable,
sensitive dependence on initial conditions. Decomposition
into parts is tricky: small errors explode, and “solving
subparts” may not help predict the whole.
- Santa Fe Institute & Complex Systems (1980s–present)
Studied emergent behavior, networks, adaptation,
self-organization. Linear, reductionist thinking fails
to capture dynamics of economic, social, and ecological systems.
Bye
Mild Shock schrieb:
Hi,
How it started:
https://conceptbase.sourceforge.net/
How it's going:
https://www.ibm.com/products/datastax
The problem with claims such as " Formal languages,
such as KAOS, are based on predicate logic and
capture additional details about an application
in a precise manner. They also provide a foundation
for reasoning with information models." is that
everything in the quoted sentence is wrong.
Real AI systems scale by approximation,
vectorization, distributed representations,
and partial knowledge — not by globally
consistent logical models. No classical requirements
language or ontology captures the informal
cognitive machinery that makes
intelligence flexible. Intelligence needs the
whole messy cognitive spectrum.
Somehow DataStax looks like n8n married to AI embeddings.
I hope Amazon, Meta, Google, etc. get the message.
I don't worry about Microsoft; they might come up with
something from their Encarta corner, and Copilot+ is
more local AI. After all, we need things like Wikidata
in a robot and not in a data center.
LoL
Bye
Hi,
Give this man (@jp-diegidio) money to buy the
ISO CORE standard. I wouldn't say the main
argument is performance. If SWI-Prolog has a slow
goal / term expansion framework, it's their problem.
I guess the main argument for having a dynamic
database as is, as defined in the ISO CORE standard,
is "coreness". The idea of the ISO CORE standard
is to define a Horn clause processor. And it does
not want to hide Horn clauses, and the interface
for assert/retract is Horn clauses. In this
way the ISO CORE standard becomes a lower-level
foundation that is quite versatile and supports
a couple of translation pipelines. It's a little
bit the opposite of SWI-Prolog, with its unspecified dict
madness. And the current PIP efforts, which appear somewhat
out of the blue, introduce certain concepts that might
require preprocessors beyond goal expansion and
term expansion, without specifying a new deep preprocessor
foundation. They might have the same gaps as here:
https://simon.peytonjones.org/assets/pdfs/verse-icfp23.pdf
Does it have assert/retract ?
Bye
P.S.: The ISO core standard carefully navigates around
another issue. While it specifies the dynamic database
rather behaviourally, in great detail, including that the
dynamic database gets only a very small, shallow preprocessing,
it for example still requires that p(X) :- X is translated
into p(X) :- call(X). It specifies the loader in the
form of "prepare for execution". But static predicates
share something with dynamic predicates: their minimal
shallow preprocessing likewise translates p(X) :- X
into p(X) :- call(X).
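P.P.S.: For illustration, a minimal sketch of that body conversion
(convert_body/2 is a hypothetical name, not text from the standard):
a variable subgoal gets wrapped into call/1, conjunctions are
descended, everything else is left alone. The real "prepare for
execution" also covers disjunction, if-then-else and negation.
convert_body(X, call(X)) :-
    var(X), !.
convert_body((A, B), (A1, B1)) :- !,
    convert_body(A, A1),
    convert_body(B, B1).
convert_body(G, G).
% ?- convert_body((X, foo), B).
% B = (call(X), foo).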
Mild Shock schrieb:
Hi,
The good thing is that while some people care
for the foundation of reasoning, which also
includes performance of "compiled" knowledge,
some people are like Vestal Virgins that have
never been touched by a running computer program,
still they horde in some SWI-Prolog forum:
Take this question:
@jp-diegidio Yes, term/goal expansion does not apply to assert.
Does not apply to assert and call, as Jan was saying.
Do you or anybody happen to know why that is so?
Guess what? DCG expansion is also not applied to
the so called "dynamic database", see ISO core standard.
My speculation is that it is for speed, and the idea is that
assert/retract/clause are for already "compiled"
knowledge, i.e. facts and rules produced by all the
dozen pipelines that could sit in front of the dynamic
database, before one gets to the bare-bones Horn clauses.
LoL
Bye
P.S.: Try for yourself, no DCG processing:
?- assertz((a --> b)).
true.
?- listing((-->)/2).
:- dynamic (-->)/2.
a-->b.
https://wasm.swi-prolog.org/wasm/tinker
Mild Shock schrieb:
Hi,
Descartes’ “divide problems into parts” works
only for well-behaved, linear, decomposable systems.
But it's just that the parts might end up being the
Schrödinger equation. It could be that Stable Diffusion is the
new constraint solver. In a sense, Stable Diffusion
models (or other generative AI) are functioning as
probabilistic, fuzzy constraint solvers — but in a
very different paradigm from classical logic or
formal methods. But what was neglected?
- Cybernetics (1940s–50s)
Focused on feedback loops, control, and self-regulation
in machines and biological systems. Showed that
decomposition can fail because subparts are interdependent.
- Chaos Theory (1960s–80s)
Nonlinear deterministic systems can produce unpredictable,
sensitive dependence on initial conditions. Decomposition
into parts is tricky: small errors explode, and “solving
subparts” may not help predict the whole.
- Santa Fe Institute & Complex Systems (1980s–present)
Studied emergent behavior, networks, adaptation,
self-organization. Linear, reductionist thinking fails
to capture dynamics of economic, social, and ecological systems.
Bye
Mild Shock schrieb:
Hi,
How it started:
https://conceptbase.sourceforge.net/
How it's going:
https://www.ibm.com/products/datastax
The problem with claims such as " Formal languages,
such as KAOS, are based on predicate logic and
capture additional details about an application
in a precise manner. They also provide a foundation
for reasoning with information models." is that
everything in the quoted sentence is wrong.
Real AI systems scale by approximation,
vectorization, distributed representations,
and partial knowledge — not by globally
consistent logical models. No classical requirements
language or ontology captures the informal
cognitive machinery that makes
intelligence flexible. Intelligence needs the
whole messy cognitive spectrum.
Somehow DataStax looks like n8n married to AI embeddings.
I hope Amazon, Meta, Google, etc. get the message.
I don't worry about Microsoft; they might come up with
something from their Encarta corner, and Copilot+ is
more local AI. After all, we need things like Wikidata
in a robot and not in a data center.
LoL
Bye
expand_term(A, B) :- term_expansion(A, H), !, expand_term(H, B).
expand_term(A, A).
Hi,
Already the idea that expansion needs fixpoints
is the most stupid idea. If you need fixpoints,
you did something wrong.
For example, this here is not a fixpoint:
expand_term(A, B) :- term_expansion(A, H), !, expand_term(H, B).
expand_term(A, A).
It's only the transitive closure R* of R, where
R = term_expansion. The predicate term_expansion/2
decides when to stop, by signalling failure.
I have never needed a fixpoint, neither for term expansion,
nor for goal_expansion, nor for function_expansion.
You can spare yourself all the (==)/2 loop checking.
The problem might be a badly written term_expansion/2
that cannot signal failure. Or a relation R that is
indeed cyclic. But a cyclic R doesn't make any
sense in an expansion.
Bye
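P.S.: For illustration, a hedged sketch (the rule below is made up,
not from any library): a well-behaved term_expansion/2 succeeds
exactly when it rewrites something and fails otherwise, so the
recursion above stops by itself, with no (==)/2 loop check and no
final identical copy of the clause.
% rewrite the shorthand fact table/1 into an ordinary clause;
% the result is a (:-)/2 term, so the next round finds no match,
% term_expansion/2 fails, and expand_term/2 stops.
term_expansion(table(Name), (table(Name) :- atom(Name))).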
Mild Shock schrieb:
Hi,
Give this man (@jp-diegidio) money to buy the
ISO CORE standard. I wouldn't say the main
argument is performance. If SWI-Prolog has a slow
goal / term expansion framework, it's their problem.
I guess the main argument for having a dynamic
database as is, as defined in the ISO CORE standard,
is "coreness". The idea of the ISO CORE standard
is to define a Horn clause processor. And it does
not want to hide Horn clauses, and the interface
for assert/retract is Horn clauses. In this
way the ISO CORE standard becomes a lower-level
foundation that is quite versatile and supports
a couple of translation pipelines. It's a little
bit the opposite of SWI-Prolog, with its unspecified dict
madness. And the current PIP efforts, which appear somewhat
out of the blue, introduce certain concepts that might
require preprocessors beyond goal expansion and
term expansion, without specifying a new deep preprocessor
foundation. They might have the same gaps as here:
https://simon.peytonjones.org/assets/pdfs/verse-icfp23.pdf
Does it have assert/retract ?
Bye
P.S.: The ISO core standard carefully navigates around
another issue. While it specifies the dynamic database
rather behaviourally, in great detail, including that the
dynamic database gets only a very small, shallow preprocessing,
it for example still requires that p(X) :- X is translated
into p(X) :- call(X). It specifies the loader in the
form of "prepare for execution". But static predicates
share something with dynamic predicates: their minimal
shallow preprocessing likewise translates p(X) :- X
into p(X) :- call(X).
Mild Shock schrieb:
Hi,
The good thing is that while some people care
for the foundation of reasoning, which also
includes performance of "compiled" knowledge,
some people are like Vestal Virgins that have
never been touched by a running computer program,
still they horde in some SWI-Prolog forum:
Take this question:
@jp-diegidio Yes, term/goal expansion does not apply to assert.
Does not apply to assert and call, as Jan was saying.
Do you or anybody happen to know why that is so?
Guess what? DCG expansion is also not applied to
the so called "dynamic database", see ISO core standard.
My speculation is that it is for speed, and the idea is that
assert/retract/clause are for already "compiled"
knowledge, i.e. facts and rules produced by all the
dozen pipelines that could sit in front of the dynamic
database, before one gets to the bare-bones Horn clauses.
LoL
Bye
P.S.: Try for yourself, no DCG processing:
?- assertz((a --> b)).
true.
?- listing((-->)/2).
:- dynamic (-->)/2.
a-->b.
https://wasm.swi-prolog.org/wasm/tinker
Mild Shock schrieb:
Hi,
Descartes’ “divide problems into parts” works
only for well-behaved, linear, decomposable systems.
But it's just that the parts might end up being the
Schrödinger equation. It could be that Stable Diffusion is the
new constraint solver. In a sense, Stable Diffusion
models (or other generative AI) are functioning as
probabilistic, fuzzy constraint solvers — but in a
very different paradigm from classical logic or
formal methods. But what was neglected?
- Cybernetics (1940s–50s)
Focused on feedback loops, control, and self-regulation
in machines and biological systems. Showed that
decomposition can fail because subparts are interdependent.
- Chaos Theory (1960s–80s)
Nonlinear deterministic systems can produce unpredictable,
sensitive dependence on initial conditions. Decomposition
into parts is tricky: small errors explode, and “solving
subparts” may not help predict the whole.
- Santa Fe Institute & Complex Systems (1980s–present)
Studied emergent behavior, networks, adaptation,
self-organization. Linear, reductionist thinking fails
to capture dynamics of economic, social, and ecological systems.
Bye
Mild Shock schrieb:
Hi,
How it started:
https://conceptbase.sourceforge.net/
How it's going:
https://www.ibm.com/products/datastax
The problem with claims such as " Formal languages,
such as KAOS, are based on predicate logic and
capture additional details about an application
in a precise manner. They also provide a foundation
for reasoning with information models." is that
everything in the quoted sentence is wrong.
Real AI systems scale by approximation,
vectorization, distributed representations,
and partial knowledge — not by globally
consistent logical models. No classical requirements
language or ontology captures the informal
cognitive machinery that makes
intelligence flexible. Intelligence needs the
whole messy cognitive spectrum.
Somehow DataStax looks like n8n married to AI embeddings.
I hope Amazon, Meta, Google, etc. get the message.
I don't worry about Microsoft; they might come up with
something from their Encarta corner, and Copilot+ is
more local AI. After all, we need things like Wikidata
in a robot and not in a data center.
LoL
Bye
Hi,
Maybe one should analyze the cases where
fixpoints are needed. A worst-case analysis
of poor failure signalling could be that
in the very last step of this fixpoint search
a copy of the entire clause is created, which is
then identical to the input, while proper failure
signalling would avoid this copy. When term_expansion/2
fails, you have no intermediate term H:
expand_term(A, B) :- term_expansion(A, H), !, expand_term(H, B).
expand_term(A, A).
So proper failure signalling could be faster
and consume less memory, with less pressure on the
garbage collector of the Prolog system.
Bye
Mild Shock schrieb:
Hi,
Already the idea that expansion needs fixpoints
is the most stupid idea. If you need fixpoints,
you did something wrong.
For example, this here is not a fixpoint:
expand_term(A, B) :- term_expansion(A, H), !, expand_term(H, B).
expand_term(A, A).
It's only the transitive closure R* of R, where
R = term_expansion. The predicate term_expansion/2
decides when to stop, by signalling failure.
I have never needed a fixpoint, neither for term expansion,
nor for goal_expansion, nor for function_expansion.
You can spare yourself all the (==)/2 loop checking.
The problem might be a badly written term_expansion/2
that cannot signal failure. Or a relation R that is
indeed cyclic. But a cyclic R doesn't make any
sense in an expansion.
Bye
Mild Shock schrieb:
Hi,
Give this man (@jp-diegidio) money to buy the
ISO CORE standard. I wouldn't say the main
argument is performance. If SWI-Prolog has a slow
goal / term expansion framework, it's their problem.
I guess the main argument for having a dynamic
database as is, as defined in the ISO CORE standard,
is "coreness". The idea of the ISO CORE standard
is to define a Horn clause processor. And it does
not want to hide Horn clauses, and the interface
for assert/retract is Horn clauses. In this
way the ISO CORE standard becomes a lower-level
foundation that is quite versatile and supports
a couple of translation pipelines. It's a little
bit the opposite of SWI-Prolog, with its unspecified dict
madness. And the current PIP efforts, which appear somewhat
out of the blue, introduce certain concepts that might
require preprocessors beyond goal expansion and
term expansion, without specifying a new deep preprocessor
foundation. They might have the same gaps as here:
https://simon.peytonjones.org/assets/pdfs/verse-icfp23.pdf
Does it have assert/retract ?
Bye
P.S.: The ISO core standard carefully navigates around
another issue. While it specifies the dynamic database
rather behaviourally, in great detail, including that the
dynamic database gets only a very small, shallow preprocessing,
it for example still requires that p(X) :- X is translated
into p(X) :- call(X). It specifies the loader in the
form of "prepare for execution". But static predicates
share something with dynamic predicates: their minimal
shallow preprocessing likewise translates p(X) :- X
into p(X) :- call(X).
Mild Shock schrieb:
Hi,
The good thing is that while some people care
for the foundation of reasoning, which also
includes performance of "compiled" knowledge,
some people are like Vestal Virgins that have
never been touched by a running computer program,
still they horde in some SWI-Prolog forum:
Take this question:
@jp-diegidio Yes, term/goal expansion does not apply to assert.
Does not apply to assert and call, as Jan was saying.
Do you or anybody happen to know why that is so?
Guess what? DCG expansion is also not applied to
the so called "dynamic database", see ISO core standard.
My speculation is that it is for speed, and the idea is that
assert/retract/clause are for already "compiled"
knowledge, i.e. facts and rules produced by all the
dozen pipelines that could sit in front of the dynamic
database, before one gets to the bare-bones Horn clauses.
LoL
Bye
P.S.: Try for yourself, no DCG processing:
?- assertz((a --> b)).
true.
?- listing((-->)/2).
:- dynamic (-->)/2.
a-->b.
https://wasm.swi-prolog.org/wasm/tinker
Mild Shock schrieb:
Hi,
Descartes’ “divide problems into parts” works
only for well-behaved, linear, decomposable systems.
But it's just that the parts might end up being the
Schrödinger equation. It could be that Stable Diffusion is the
new constraint solver. In a sense, Stable Diffusion
models (or other generative AI) are functioning as
probabilistic, fuzzy constraint solvers — but in a
very different paradigm from classical logic or
formal methods. But what was neglected?
- Cybernetics (1940s–50s)
Focused on feedback loops, control, and self-regulation
in machines and biological systems. Showed that
decomposition can fail because subparts are interdependent.
- Chaos Theory (1960s–80s)
Nonlinear deterministic systems can produce unpredictable,
sensitive dependence on initial conditions. Decomposition
into parts is tricky: small errors explode, and “solving
subparts” may not help predict the whole.
- Santa Fe Institute & Complex Systems (1980s–present)
Studied emergent behavior, networks, adaptation,
self-organization. Linear, reductionist thinking fails
to capture dynamics of economic, social, and ecological systems.
Bye
Mild Shock schrieb:
Hi,
How it started:
https://conceptbase.sourceforge.net/
How it's going:
https://www.ibm.com/products/datastax
The problem with claims such as " Formal languages,
such as KAOS, are based on predicate logic and
capture additional details about an application
in a precise manner. They also provide a foundation
for reasoning with information models." is that
everything in the quoted sentence is wrong.
Real AI systems scale by approximation,
vectorization, distributed representations,
and partial knowledge — not by globally
consistent logical models. No classical requirements
language or ontology captures the informal
cognitive machinery that makes
intelligence flexible. Intelligence needs the
whole messy cognitive spectrum.
Somehow DataStax looks like n8n married to AI embeddings.
I hope Amazon, Meta, Google, etc. get the message.
I don't worry about Microsoft; they might come up with
something from their Encarta corner, and Copilot+ is
more local AI. After all, we need things like Wikidata
in a robot and not in a data center.
LoL
Bye
Hi,
Is this another dumbing down inside the Prolog
community? The narrative of a fixpoint for expansion
is repeated over and over, to the point that
@jan
goal_expansion/2 is called by expand_goal/2
until fixed point is reached.
one finds it even in the code:
File: swipl-devel/library/prolog_clause.pl
goal_expansion(G0, G, P, P) :-
    user:goal_expansion(G0, G), % TBD: we need the module!
    G0 \== G.                   % \=@=?
But given the many question marks and TBDs, it
seems people don't know what they are doing.
It's just one more mess.
Bye
This here looks fine somehow:
expand_goal(A, B, Module, P0, P) :-
    goal_expansion(A, B0, P0, P1),
    !,
    expand_goal(B0, B, Module, P1, P).
expand_goal(A, A, _, P, P).
Mild Shock schrieb:
Hi,
Maybe one should analyze the cases where
fixpoints are needed. A worst-case analysis
of poor failure signalling could be that
in the very last step of this fixpoint search
a copy of the entire clause is created, which is
then identical to the input, while proper failure
signalling would avoid this copy. When term_expansion/2
fails, you have no intermediate term H:
expand_term(A, B) :- term_expansion(A, H), !, expand_term(H, B).
expand_term(A, A).
So proper failure signalling could be faster
and consume less memory, with less pressure on the
garbage collector of the Prolog system.
Bye
Mild Shock schrieb:
Hi,
Already the idea that expansion needs fixpoints
is the most stupid idea. If you need fixpoints,
you did something wrong.
For example, this here is not a fixpoint:
expand_term(A, B) :- term_expansion(A, H), !, expand_term(H, B).
expand_term(A, A).
It's only the transitive closure R* of R, where
R = term_expansion. The predicate term_expansion/2
decides when to stop, by signalling failure.
I have never needed a fixpoint, neither for term expansion,
nor for goal_expansion, nor for function_expansion.
You can spare yourself all the (==)/2 loop checking.
The problem might be a badly written term_expansion/2
that cannot signal failure. Or a relation R that is
indeed cyclic. But a cyclic R doesn't make any
sense in an expansion.
Bye
Mild Shock schrieb:
Hi,
Give this man (@jp-diegidio) money to buy the
ISO CORE standard. I wouldn't say the main
argument is performance. If SWI-Prolog has a slow
goal / term expansion framework, it's their problem.
I guess the main argument for having a dynamic
database as is, as defined in the ISO CORE standard,
is "coreness". The idea of the ISO CORE standard
is to define a Horn clause processor. And it does
not want to hide Horn clauses, and the interface
for assert/retract is Horn clauses. In this
way the ISO CORE standard becomes a lower-level
foundation that is quite versatile and supports
a couple of translation pipelines. It's a little
bit the opposite of SWI-Prolog, with its unspecified dict
madness. And the current PIP efforts, which appear somewhat
out of the blue, introduce certain concepts that might
require preprocessors beyond goal expansion and
term expansion, without specifying a new deep preprocessor
foundation. They might have the same gaps as here:
https://simon.peytonjones.org/assets/pdfs/verse-icfp23.pdf
Does it have assert/retract ?
Bye
P.S.: The ISO core standard carefully navigates around
another issue. While it specifies the dynamic database
rather behaviourally, in great detail, including that the
dynamic database gets only a very small, shallow preprocessing,
it for example still requires that p(X) :- X is translated
into p(X) :- call(X). It specifies the loader in the
form of "prepare for execution". But static predicates
share something with dynamic predicates: their minimal
shallow preprocessing likewise translates p(X) :- X
into p(X) :- call(X).
Hi,
I remember the Ciao people slapping it in my face
that I don't work for the Prolog cause. But they
probably sit on a 30-year-old code mess, and don't
know what "compiled" knowledge they have
dispersed all over the place, and how this
could be cast in a sane preprocessor framework.
Now we have 3 candidates and the rest is draft:
0001.0 What is a PIP?
0101.0 Communication between Prolog and Python via Janus
0105.0 Options in write_term
https://prolog-lang.org/ImplementersForum/PIPs
Bravo! Nobody really cares anymore about the ISO
CORE standard and compiled knowledge in the form
of Horn clauses. Prolog finally becomes
an irrelevant scripting language.
Bye
Mild Shock schrieb:
Hi,
Is this another dumbing down inside the Prolog
community? The narrative of a fixpoint for expansion
is repeated over and over, to the point that
@jan
goal_expansion/2 is called by expand_goal/2
until fixed point is reached.
one finds it even in the code:
File: swipl-devel/library/prolog_clause.pl
goal_expansion(G0, G, P, P) :-
    user:goal_expansion(G0, G), % TBD: we need the module!
    G0 \== G.                   % \=@=?
But given the many question marks and TBDs, it
seems people don't know what they are doing.
It's just one more mess.
Bye
This here looks fine somehow:
expand_goal(A, B, Module, P0, P) :-
    goal_expansion(A, B0, P0, P1),
    !,
    expand_goal(B0, B, Module, P1, P).
expand_goal(A, A, _, P, P).
Mild Shock schrieb:
Hi,
Maybe one should analyze the cases where
fixpoints are needed. A worst-case analysis
of poor failure signalling could be that
in the very last step of this fixpoint search
a copy of the entire clause is created, which is
then identical to the input, while proper failure
signalling would avoid this copy. When term_expansion/2
fails, you have no intermediate term H:
expand_term(A, B) :- term_expansion(A, H), !, expand_term(H, B).
expand_term(A, A).
So proper failure signalling could be faster
and consume less memory, with less pressure on the
garbage collector of the Prolog system.
Bye
Mild Shock schrieb:
Hi,
Already the idea that expansion needs fixpoints
is the most stupid idea. If you need fixpoints,
you did something wrong.
For example, this here is not a fixpoint:
expand_term(A, B) :- term_expansion(A, H), !, expand_term(H, B).
expand_term(A, A).
It's only the transitive closure R* of R, where
R = term_expansion. The predicate term_expansion/2
decides when to stop, by signalling failure.
I have never needed a fixpoint, neither for term expansion,
nor for goal_expansion, nor for function_expansion.
You can spare yourself all the (==)/2 loop checking.
The problem might be a badly written term_expansion/2
that cannot signal failure. Or a relation R that is
indeed cyclic. But a cyclic R doesn't make any
sense in an expansion.
Bye
Mild Shock schrieb:
Hi,
Give this man (@jp-diegidio) money to buy the
ISO CORE standard. I wouldn't say the main
argument is performance. If SWI-Prolog has a slow
goal / term expansion framework, it's their problem.
I guess the main argument for having a dynamic
database as is, as defined in the ISO CORE standard,
is "coreness". The idea of the ISO CORE standard
is to define a Horn clause processor. And it does
not want to hide Horn clauses, and the interface
for assert/retract is Horn clauses. In this
way the ISO CORE standard becomes a lower-level
foundation that is quite versatile and supports
a couple of translation pipelines. It's a little
bit the opposite of SWI-Prolog, with its unspecified dict
madness. And the current PIP efforts, which appear somewhat
out of the blue, introduce certain concepts that might
require preprocessors beyond goal expansion and
term expansion, without specifying a new deep preprocessor
foundation. They might have the same gaps as here:
https://simon.peytonjones.org/assets/pdfs/verse-icfp23.pdf
Does it have assert/retract ?
Bye
P.S.: The ISO core standard carefully navigates around
another issue. While it specifies the dynamic database
rather behaviourally, in great detail, including that the
dynamic database gets only a very small, shallow preprocessing,
it for example still requires that p(X) :- X is translated
into p(X) :- call(X). It specifies the loader in the
form of "prepare for execution". But static predicates
share something with dynamic predicates: their minimal
shallow preprocessing likewise translates p(X) :- X
into p(X) :- call(X).
Hi,
I remember the Ciao people slapping it in my face
that I don't work for the Prolog cause. But they
probably sit on a 30-year-old code mess, and don't
know what "compiled" knowledge they have
dispersed all over the place, and how this
could be cast in a sane preprocessor framework.
Now we have 3 candidates and the rest is draft:
0001.0 What is a PIP?
0101.0 Communication between Prolog and Python via Janus
0105.0 Options in write_term
https://prolog-lang.org/ImplementersForum/PIPs
Bravo! Nobody really cares anymore about the ISO
CORE standard and compiled knowledge in the form
of Horn clauses. Prolog finally becomes
an irrelevant scripting language.
Bye
Mild Shock schrieb:
Hi,
Is this another dumbing down inside the Prolog
community? The narrative of a fixpoint for expansion
is repeated over and over, to the point that
@jan
goal_expansion/2 is called by expand_goal/2
until fixed point is reached.
one finds it even in the code:
File: swipl-devel/library/prolog_clause.pl
goal_expansion(G0, G, P, P) :-
    user:goal_expansion(G0, G), % TBD: we need the module!
    G0 \== G.                   % \=@=?
But given the many question marks and TBDs, it
seems people don't know what they are doing.
It's just one more mess.
Bye
This here looks fine somehow:
expand_goal(A, B, Module, P0, P) :-
    goal_expansion(A, B0, P0, P1),
    !,
    expand_goal(B0, B, Module, P1, P).
expand_goal(A, A, _, P, P).
Mild Shock schrieb:
Hi,
Maybe one should analyze the cases where
fixpoints are needed. A worst-case analysis
of poor failure signalling could be that
in the very last step of this fixpoint search
a copy of the entire clause is created, which is
then identical to the input, while proper failure
signalling would avoid this copy. When term_expansion/2
fails, you have no intermediate term H:
expand_term(A, B) :- term_expansion(A, H), !, expand_term(H, B).
expand_term(A, A).
So proper failure signalling could be faster
and consume less memory, with less pressure on the
garbage collector of the Prolog system.
Bye
Mild Shock schrieb:
Hi,
Already the idea that expansion needs fixpoints
is the most stupid idea. If you need fixpoints,
you did something wrong.
For example, this here is not a fixpoint:
expand_term(A, B) :- term_expansion(A, H), !, expand_term(H, B).
expand_term(A, A).
It's only the transitive closure R* of R, where
R = term_expansion. The predicate term_expansion/2
decides when to stop, by signalling failure.
I have never needed a fixpoint, neither for term expansion,
nor for goal_expansion, nor for function_expansion.
You can spare yourself all the (==)/2 loop checking.
The problem might be a badly written term_expansion/2
that cannot signal failure. Or a relation R that is
indeed cyclic. But a cyclic R doesn't make any
sense in an expansion.
Bye
Mild Shock schrieb:
Hi,
Give this man (@jp-diegidio) money to buy the
ISO CORE standard. I wouldn't say the main
argument is performance. If SWI-Prolog has a slow
goal / term expansion framework, it's their problem.
I guess the main argument for having a dynamic
database as is, as defined in the ISO CORE standard,
is "coreness". The idea of the ISO CORE standard
is to define a Horn clause processor. And it does
not want to hide Horn clauses, and the interface
for assert/retract is Horn clauses. In this
way the ISO CORE standard becomes a lower-level
foundation that is quite versatile and supports
a couple of translation pipelines. It's a little
bit the opposite of SWI-Prolog, with its unspecified dict
madness. And the current PIP efforts, which appear somewhat
out of the blue, introduce certain concepts that might
require preprocessors beyond goal expansion and
term expansion, without specifying a new deep preprocessor
foundation. They might have the same gaps as here:
https://simon.peytonjones.org/assets/pdfs/verse-icfp23.pdf
Does it have assert/retract ?
Bye
P.S.: The ISO core standard carefully navigates around
another issue. While it specifies the dynamic database
rather behaviourally, in great detail, including that the
dynamic database gets only a very small, shallow preprocessing,
it for example still requires that p(X) :- X is translated
into p(X) :- call(X). It specifies the loader in the
form of "prepare for execution". But static predicates
share something with dynamic predicates: their minimal
shallow preprocessing likewise translates p(X) :- X
into p(X) :- call(X).
Mapping Types: The translation of Python dictionaries takes
advantage of the syntax of braces, which is supported by all Prologs
that support DCGs. The term form of a dictionary is;
Hi,
Give this man (@jp-diegidio) money to buy the
ISO CORE standard. I wouldn't say the main
argument is performance. If SWI-Prolog has a slow
goal / term expansion framework, it's their problem.
I guess the main argument for having a dynamic
database as is, as defined in the ISO CORE standard,
is "coreness". The idea of the ISO CORE standard
is to define a Horn clause processor. And it does
not want to hide Horn clauses, and the interface
for assert/retract is Horn clauses. In this
way the ISO CORE standard becomes a lower-level
foundation that is quite versatile and supports
a couple of translation pipelines. It's a little
bit the opposite of SWI-Prolog, with its unspecified dict
madness. And the current PIP efforts, which appear somewhat
out of the blue, introduce certain concepts that might
require preprocessors beyond goal expansion and
term expansion, without specifying a new deep preprocessor
foundation. They might have the same gaps as here:
https://simon.peytonjones.org/assets/pdfs/verse-icfp23.pdf
Does it have assert/retract ?
Bye
P.S.: The ISO core standard carefully navigates around
another issue. While it specifies the dynamic database
rather behaviourally, in great detail, including that the
dynamic database gets only a very small, shallow preprocessing,
it for example still requires that p(X) :- X is translated
into p(X) :- call(X). It specifies the loader in the
form of "prepare for execution". But static predicates
share something with dynamic predicates: their minimal
shallow preprocessing likewise translates p(X) :- X
into p(X) :- call(X).
Mild Shock schrieb:
Hi,
The good thing is that while some people care
for the foundation of reasoning, which also
includes performance of "compiled" knowledge,
some people are like Vestal Virgins that have
never been touched by a running computer program,
still they horde in some SWI-Prolog forum:
Take this question:
@jp-diegidio Yes, term/goal expansion does not apply to assert.
Does not apply to assert and call, as Jan was saying.
Do you or anybody happen to know why that is so?
Guess what? DCG expansion is also not applied to
the so called "dynamic database", see ISO core standard.
My speculation is that it is for speed, and the idea is that
assert/retract/clause are for already "compiled"
knowledge, i.e. facts and rules produced by all the
dozen pipelines that could sit in front of the dynamic
database, before one gets to the bare-bones Horn clauses.
LoL
Bye
P.S.: Try for yourself, no DCG processing:
?- assertz((a --> b)).
true.
?- listing((-->)/2).
:- dynamic (-->)/2.
a-->b.
https://wasm.swi-prolog.org/wasm/tinker
Mild Shock schrieb:
Hi,
Descartes’ “divide problems into parts” works
only for well-behaved, linear, decomposable systems.
But it's just that the parts might end up being the
Schrödinger equation. It could be that Stable Diffusion is the
new constraint solver. In a sense, Stable Diffusion
models (or other generative AI) are functioning as
probabilistic, fuzzy constraint solvers — but in a
very different paradigm from classical logic or
formal methods. But what was neglected?
- Cybernetics (1940s–50s)
Focused on feedback loops, control, and self-regulation
in machines and biological systems. Showed that
decomposition can fail because subparts are interdependent.
- Chaos Theory (1960s–80s)
Nonlinear deterministic systems can produce unpredictable,
sensitive dependence on initial conditions. Decomposition
into parts is tricky: small errors explode, and “solving
subparts” may not help predict the whole.
- Santa Fe Institute & Complex Systems (1980s–present)
Studied emergent behavior, networks, adaptation,
self-organization. Linear, reductionist thinking fails
to capture dynamics of economic, social, and ecological systems.
Bye
Mild Shock schrieb:
Hi,
How it started:
https://conceptbase.sourceforge.net/
How it's going:
https://www.ibm.com/products/datastax
The problem with claims such as " Formal languages,
such as KAOS, are based on predicate logic and
capture additional details about an application
in a precise manner. They also provide a foundation
for reasoning with information models." is that
everything in the quoted sentence is wrong.
Real AI systems scale by approximation,
vectorization, distributed representations,
and partial knowledge — not by globally
consistent logical models. No classical requirements
language or ontology captures the informal
cognitive machinery that makes
intelligence flexible. Intelligence needs the
whole messy cognitive spectrum.
Somehow DataStax looks like n8n married to AI embeddings.
I hope Amazon, Meta, Google, etc. get the message.
I don't worry about Microsoft; they might come up with
something from their Encarta corner, and Copilot+ is
more local AI. After all, we need things like Wikidata
in a robot and not in a data center.
LoL
Bye
Hi,
Give Money to all the students working on SWI/XSB.
I feel this phrasing could be misleading:
Mapping Types: The translation of Python dictionaries takes
advantage of the syntax of braces, which is supported by all Prologs
that support DCGs. The term form of a dictionary is;
https://prolog-lang.org/ImplementersForum/janus-bitrans.html
The {}/1 syntax is already part of the ISO core standard.
No ISO DCG standard needed. You find it in section 6.3.6:
------------------- cut here -------------------
6.3.6 Compound terms - curly bracketed term
A term with principal functor '{}'/1 can also be expressed
by enclosing its argument in curly brackets.
term = open curly, term, close curly ;
Abstract: {}(l) l
Priority: 0 1201
NOTE - For the syntax of an empty curly brackets, see 6.3.1.3.
------------------- cut here -------------------
Bye
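P.S.: A quick check, sketched for any conforming system (the answer
formatting below follows SWI-Prolog and may differ elsewhere): the
curly-bracketed term is an ordinary compound with principal
functor '{}'/1, so no DCG machinery is involved.
?- T = {a:1, b:2}, T = '{}'(Inner).
T = {a:1, b:2},
Inner = (a:1, b:2).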
Hi,
A few months / years ago, I called the {}/1 based
dicts Affine Dicts, because they play with the
syntax affinity between JavaScript, Python and JSON
dicts and the curly-bracketed terms, when used
in connection with (:)/2 and (,)/2. This led
to the libraries library(misc/json) and
library(misc/dict). Recently I made a few
observations, namely:
- library(misc/json) deduplicate:
The library does not yet deduplicate key value pairs.
It seems that JavaScript and Python use a replacement
strategy where the first pair defines the order
and the last pair defines the definite value:
JSON.parse('{"a":123, "b":"abc", "a":456}')
{a: 456, b: 'abc'}
- library(misc/dict) empty dict:
The reader order influences the backtracking operations,
as well as the non-backtracking operations I am
currently working on. Now I would prefer that
the empty dict is represented as {true} using {}/1
and not using {}/0. This would facilitate the non-
backtracking operations, making them also applicable
to empty dicts, or allowing them also to return
the empty dict. Because the change_arg/3 based
operations can then always modify a given dict.
The recent new non-backtracking operations are:
dict_set(T, K, V) : Non-failure signalling,
override just as in the JSON read semantics.
dict_add(T, K, V) : Failure signalling if K already present.
dict_remove(+Dict, +Term) : Non-failure signalling.
The non-backtracking dicts are also usable with some
of the backtracking dict operations; these operations
extend to compounds modified via change_arg/3:
dict_current(T, K, V) : Failure signalling if K is not present.
Etc., etc.
But currently everything is implemented with {}/0
interpreted as the empty dict, which I will probably change
into {false} everywhere, so that change_arg/3 and
non-change_arg/3 operations work uniformly. The change will
affect both library(misc/json) and library(misc/dict).
Bye
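P.S.: The replacement strategy described above (the first occurrence
of a key fixes its position, the last occurrence fixes its value)
could be sketched like this over a list of Key-Value pairs. A minimal
sketch only, using the common memberchk/2, not the library(misc/json)
implementation:
dedup_pairs(Pairs, Out) :-
    dedup_pairs(Pairs, [], Out).

dedup_pairs([], _, []).
dedup_pairs([K-V|Rest], Seen, Out) :-
    (   memberchk(K, Seen)              % key already emitted earlier
    ->  dedup_pairs(Rest, Seen, Out)
    ;   last_value(Rest, K, V, V1),     % a later duplicate wins
        Out = [K-V1|Out1],
        dedup_pairs(Rest, [K|Seen], Out1)
    ).

% last_value(+Pairs, +Key, +Default, -Value)
last_value([], _, V, V).
last_value([K-V1|Rest], K, _, V) :- !,
    last_value(Rest, K, V1, V).
last_value([_|Rest], K, V0, V) :-
    last_value(Rest, K, V0, V).

% ?- dedup_pairs([a-123, b-abc, a-456], Out).
% Out = [a-456, b-abc].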
Mild Shock schrieb:
Hi,
Give Money to all the students working on SWI/XSB.
I feel this phrasing could be misleading:
Mapping Types: The translation of Python dictionaries takes
advantage of the syntax of braces, which is supported by all Prologs
that support DCGs. The term form of a dictionary is;
https://prolog-lang.org/ImplementersForum/janus-bitrans.html
The {}/1 syntax is already part of the ISO core standard.
No ISO DCG standard needed. You find it in section 6.3.6:
------------------- cut here -------------------
6.3.6 Compound terms - curly bracketed term
A term with principal functor '{}'/1 can also be expressed
by enclosing its argument in curly brackets.
term = open curly, term, close curly ;
Abstract: {}(l) l
Priority: 0 1201
NOTE - For the syntax of an empty curly brackets, see 6.3.1.3.
------------------- cut here -------------------
Bye
Hi,
A few months / years ago, I called the {}/1 based
dicts Affine Dicts, because they play with the
syntax affinity between JavaScript, Python and JSON
dicts and the curly-bracketed terms, when used
in connection with (:)/2 and (,)/2. This led
to the libraries library(misc/json) and
library(misc/dict). Recently I made a few
observations, namely:
- library(misc/json) deduplicate:
The library does not yet deduplicate key value pairs.
It seems that JavaScript and Python use a replacement
strategy where the first pair defines the order
and the last pair defines the definite value:
JSON.parse('{"a":123, "b":"abc", "a":456}')
{a: 456, b: 'abc'}
- library(misc/dict) empty dict:
The reader order influences the backtracking operations,
as well as the non-backtracking operations I am
currently working on. Now I would prefer that
the empty dict is represented as {true} using {}/1
and not using {}/0. This would facilitate the non-
backtracking operations, making them also applicable
to empty dicts, or allowing them also to return
the empty dict. Because the change_arg/3 based
operations can then always modify a given dict.
The recent new non-backtracking operations are:
dict_set(T, K, V) : Non-failure signalling,
override just as in the JSON read semantics.
dict_add(T, K, V) : Failure signalling if K already present.
dict_remove(+Dict, +Term) : Non-failure signalling.
The non-backtracking dicts are also usable with some
of the backtracking dict operations; these operations
extend to compounds modified via change_arg/3:
dict_current(T, K, V) : Failure signalling if K is not present.
Etc., etc.
But currently everything is implemented with {}/0
interpreted as the empty dict, which I will probably change
into {false} everywhere, so that change_arg/3 and
non-change_arg/3 operations work uniformly. The change will
affect both library(misc/json) and library(misc/dict).
Bye
Mild Shock schrieb:
Hi,
Give Money to all the students working on SWI/XSB.
I feel this phrasing could be misleading:
Mapping Types: The translation of Python dictionaries takes
advantage of the syntax of braces, which is supported by all Prologs
that support DCGs. The term form of a dictionary is;
https://prolog-lang.org/ImplementersForum/janus-bitrans.html
The {}/1 syntax is already part of the ISO core standard.
No ISO DCG standard needed. You find it in section 6.3.6:
------------------- cut here -------------------
6.3.6 Compound terms - curly bracketed term
A term with principal functor '{}'/1 can also be expressed
by enclosing its argument in curly brackets.
term = open curly, term, close curly ;
Abstract: {}(l) l
Priority: 0 1201
NOTE - For the syntax of an empty curly brackets, see 6.3.1.3.
------------------- cut here -------------------
Bye
- library(misc/dict) empty dict:
The reader order influences the backtracking operations,
as well as the non-backtracking operations I am
currently working on.
Hi,
Here is some more testing, which shows that Affine Dicts
are quite different from SWI-Dicts, since they do
not sort the keys:
JSON.parse('{"a":123, "b":"abc", "a":456}')
{a: 456, b: 'abc'}
JSON.parse('{"b":"abc", "a":123, "a":456}')
{b: 'abc', a: 456}
This is now the standard across JavaScript and Python.
Since Python 3.7, the insertion-order behavior of
dict is a language guarantee.
Further, ECMAScript defines a deterministic property
order for plain objects: string keys (non-integer)
are in insertion order.
Still, JavaScript and Python don't seem to suffer
performance issues from this. ChatGPT claims JavaScript: Order
is tracked efficiently, and Python: Order is cheap
because of a major redesign. So maybe let's figure
out how to do either or both transparently. So far
my Affine Dicts do not perform key sorting, so they lack
the beauty of the O(log N) access that SWI sometimes has.
The access is O(N). But I expect it can be made O(1)
under the hood! Not yet sure; the arrow functions could
do it, since they provide a hash table of anonymous
predicate clauses. But this is still a little too
heavy for dicts, I guess.
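For reference, the O(N) access is essentially this (a hedged sketch
of the assumed term shape, not the actual library(misc/dict) code,
and dict_lookup/3 is a made-up name): the affine dict {a:1, b:2} is
the term {}((a:1, b:2)), i.e. (:)/2 pairs chained with (,)/2, and
lookup with a given key walks the chain.
dict_lookup({Pairs}, Key, Value) :-
    pairs_lookup(Pairs, Key, Value).

pairs_lookup(Key:Value, Key, Value) :- !.
pairs_lookup((Key:Value, _), Key, Value) :- !.
pairs_lookup((_, Rest), Key, Value) :-
    pairs_lookup(Rest, Key, Value).

% ?- dict_lookup({a:1, b:2}, b, V).
% V = 2.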
Bye
P.S.: Corollary, ditch PIP0102 and PIP0104 as well.
Dictionaries in Prolog (dynamic) https://prolog-lang.org/ImplementersForum/0102-dicts.html
Terms with named arguments (static dicts) https://prolog-lang.org/ImplementersForum/0104-argnames.html
Mild Shock schrieb:
Hi,
A few months / years ago, I called the {}/1 based
dicts Affine Dicts, because they play with the
syntax affinity between JavaScript, Python and JSON
dicts and the curly-bracketed terms, when used
in connection with (:)/2 and (,)/2. This led
to the libraries library(misc/json) and
library(misc/dict). Recently I made a few
observations, namely:
- library(misc/json) deduplicate:
The library does not yet deduplicate key value pairs.
It seems that JavaScript and Python use a replacement
strategy where the first pair defines the order
and the last pair defines the definite value:
JSON.parse('{"a":123, "b":"abc", "a":456}')
{a: 456, b: 'abc'}
- library(misc/dict) empty dict:
The reader order influences the backtracking operations,
as well as the non-backtracking operations I am
currently working on. Now I would prefer that
the empty dict is represented as {true} using {}/1
and not using {}/0. This would facilitate the non-
backtracking operations, making them also applicable
to empty dicts, or allowing them also to return
the empty dict. Because the change_arg/3 based
operations can then always modify a given dict.
The recent new non-backtracking operations are:
dict_set(T, K, V) : Non-failure signalling,
override just as in the JSON read semantics.
dict_add(T, K, V) : Failure signalling if K already present.
dict_remove(+Dict, +Term) : Non-failure signalling.
The non-backtracking dicts are also usable with some
of the backtracking dict operations; these operations
extend to compounds modified via change_arg/3:
dict_current(T, K, V) : Failure signalling if K is not present.
Etc., etc.
But currently everything is implemented with {}/0
interpreted as the empty dict, which I will probably change
into {false} everywhere, so that change_arg/3 and
non-change_arg/3 operations work uniformly. The change will
affect both library(misc/json) and library(misc/dict).
Bye
Mild Shock schrieb:
Hi,
Give Money to all the students working on SWI/XSB.
I feel this phrasing could be misleading:
Mapping Types: The translation of Python dictionaries takes
advantage of the syntax of braces, which is supported by all Prologs
that support DCGs. The term form of a dictionary is;
https://prolog-lang.org/ImplementersForum/janus-bitrans.html
The {}/1 syntax is already part of the ISO core standard.
No ISO DCG standard needed. You find it in section 6.3.6:
------------------- cut here -------------------
6.3.6 Compound terms - curly bracketed term
A term with principal functor '{}'/1 can also be expressed
by enclosing its argument in curly brackets.
term = open curly, term, close curly ;
Abstract: {}(l) l
Priority: 0 1201
NOTE - For the syntax of an empty curly brackets, see 6.3.1.3.
------------------- cut here -------------------
Bye
- library(misc/dict) empty dict:
The reader order influences the backtracking operations,
as well as the non-backtracking operations I am
currently working on.
Hi,
Here is some more testing, which shows that Affine Dicts
are quite different from SWI-Dicts, since they do
not sort the keys:
JSON.parse('{"a":123, "b":"abc", "a":456}')
{a: 456, b: 'abc'}
JSON.parse('{"b":"abc", "a":123, "a":456}')
{b: 'abc', a: 456}
This is now the standard across JavaScript and Python.
Since Python 3.7, the insertion-order behavior of
dict is a language guarantee.
Further, ECMAScript defines a deterministic property
order for plain objects: string keys (non-integer)
are in insertion order.
Still, JavaScript and Python don't seem to suffer
performance issues from this. ChatGPT claims JavaScript: Order
is tracked efficiently, and Python: Order is cheap
because of a major redesign. So maybe let's figure
out how to do either or both transparently. So far
my Affine Dicts do not perform key sorting, so they lack
the beauty of the O(log N) access that SWI sometimes has.
The access is O(N). But I expect it can be made O(1)
under the hood! Not yet sure; the arrow functions could
do it, since they provide a hash table of anonymous
predicate clauses. But this is still a little too
heavy for dicts, I guess.
Bye
P.S.: Corollary, ditch PIP0102 and PIP0104 as well.
Dictionaries in Prolog (dynamic) https://prolog-lang.org/ImplementersForum/0102-dicts.html
Terms with named arguments (static dicts) https://prolog-lang.org/ImplementersForum/0104-argnames.html
Mild Shock schrieb:
Hi,
A few months / years ago, I called the {}/1 based
dicts Affine Dicts, because they play with the
syntax affinity between JavaScript, Python and JSON
dicts and the curly-bracketed terms, when used
in connection with (:)/2 and (,)/2. This led
to the libraries library(misc/json) and
library(misc/dict). Recently I made a few
observations, namely:
- library(misc/json) deduplicate:
The library does not yet deduplicate key value pairs.
It seems that JavaScript and Python use a replacement
strategy where the first pair defines the order
and the last pair defines the definite value:
JSON.parse('{"a":123, "b":"abc", "a":456}')
{a: 456, b: 'abc'}
- library(misc/dict) empty dict:
The reader order influences the backtracking operations,
as well as the non-backtracking operations I am
currently working on. Now I would prefer that
the empty dict is represented as {true} using {}/1
and not using {}/0. This would facilitate the non-
backtracking operations, making them also applicable
to empty dicts, or allowing them also to return
the empty dict. Because the change_arg/3 based
operations can then always modify a given dict.
The recent new non-backtracking operations are:
dict_set(T, K, V) : Non-failure signalling,
override just as in the JSON read semantics.
dict_add(T, K, V) : Failure signalling if K already present.
dict_remove(+Dict, +Term) : Non-failure signalling.
The non-backtracking dicts are also usable with some
of the backtracking dict operations; these operations
extend to compounds modified via change_arg/3:
dict_current(T, K, V) : Failure signalling if K is not present.
Etc., etc.
But currently everything is implemented with {}/0
interpreted as the empty dict, which I will probably change
into {false} everywhere, so that change_arg/3 and
non-change_arg/3 operations work uniformly. The change will
affect both library(misc/json) and library(misc/dict).
Bye
Mild Shock schrieb:
Hi,
Give Money to all the students working on SWI/XSB.
I feel this phrasing could be misleading:
Mapping Types: The translation of Python dictionaries takes
advantage of the syntax of braces, which is supported by all Prologs
that support DCGs. The term form of a dictionary is;
https://prolog-lang.org/ImplementersForum/janus-bitrans.html
The {}/1 syntax is already part of the ISO core standard.
No ISO DCG standard needed. You find it in section 6.3.6:
------------------- cut here -------------------
6.3.6 Compound terms - curly bracketed term
A term with principal functor '{}'/1 can also be expressed
by enclosing its argument in curly brackets.
term = open curly, term, close curly ;
Abstract: {}(l) l
Priority: 0 1201
NOTE - For the syntax of an empty curly brackets, see 6.3.1.3.
------------------- cut here -------------------
Bye
- library(misc/dict) empty dict:
The reader order influences the backtracking operations,
as well as the non-backtracking operations I am
currently working on.
Hi,
Here is some more testing, which shows that Affine Dicts
are quite different from SWI-Dicts, since they do
not sort the keys:
JSON.parse('{"a":123, "b":"abc", "a":456}')
{a: 456, b: 'abc'}
JSON.parse('{"b":"abc", "a":123, "a":456}')
{b: 'abc', a: 456}
This is now the standard across JavaScript and Python.
Since Python 3.7, the insertion-order behavior of
dict is a language guarantee.
Further, ECMAScript defines a deterministic property
order for plain objects: string keys (non-integer)
are in insertion order.
Still, JavaScript and Python don't seem to suffer
performance issues from this. ChatGPT claims JavaScript: Order
is tracked efficiently, and Python: Order is cheap
because of a major redesign. So maybe let's figure
out how to do either or both transparently. So far
my Affine Dicts do not perform key sorting, so they lack
the beauty of the O(log N) access that SWI sometimes has.
The access is O(N). But I expect it can be made O(1)
under the hood! Not yet sure; the arrow functions could
do it, since they provide a hash table of anonymous
predicate clauses. But this is still a little too
heavy for dicts, I guess.
Bye
P.S.: Corollary, ditch PIP0102 and PIP0104 as well.
Dictionaries in Prolog (dynamic) https://prolog-lang.org/ImplementersForum/0102-dicts.html
Terms with named arguments (static dicts) https://prolog-lang.org/ImplementersForum/0104-argnames.html
Mild Shock schrieb:
Hi,
A few months / years ago, I called the {}/1 based
dicts Affine Dicts, because they play with the
syntax affinity between JavaScript, Python and JSON
dicts and the curly-bracketed terms, when used
in connection with (:)/2 and (,)/2. This led
to the libraries library(misc/json) and
library(misc/dict). Recently I made a few
observations, namely:
- library(misc/json) deduplicate:
The library does not yet deduplicate key value pairs.
It seems that JavaScript and Python use a replacement
strategy where the first pair defines the order
and the last pair defines the definite value:
JSON.parse('{"a":123, "b":"abc", "a":456}')
{a: 456, b: 'abc'}
- library(misc/dict) empty dict:
The reader order influences the backtracking operations,
as well the non-backtracking operations I am
currently working on. Now I would prefer that
the empty dict is represented as {true} using {}/1
and not using {}/0. This would facilitate the non-
backtracking operations, making them also applicable
to empty dicts, or allowing them also to return
the empty dict. Because the change_arg/3 based
operations can the modify given dict always.
The recent new non-backtracking operations are:
dict_set(T, K, V) : Non-failure signalling,
override just as in the JSON read semantics.
dict_add(T, K, V) : Failure signalling if K already present.
dict_remove(+Dict, +Term) : Non-failure signalling.
The non-backtracking dicts are also usable with some
of the backtracking dicts operations, these operations
extend to compounds modified via change_arg/3:
dict_current(T, K, V) : Failure signalling if K isn not present.
Etc.. Etc..
But currently everything is implemented with {}/0
interpreting as empty dict, which I will probably change
into {false} everywhere, so that change_arg/3 and
non change_arg/3 work uniformly. The change will
affect both library(misc/json) and library(misc/dict).
Bye
Mild Shock schrieb:
Hi,
Give Money to all the students working on SWI/XSB.
I feel this phrasing could be misleading:
Mapping Types: The translation of Python dictionaries takes
advantage of the syntax of braces, which is supported by all Prologs that support DCGs. The term form of a dictionary is;
https://prolog-lang.org/ImplementersForum/janus-bitrans.html
The {}/1 syntax is already part of the ISO core standard.
No ISO DCG standard needed. You find it in section 6.3.6:
------------------- cut here -------------------
6.3.6 Compound terms - curly bracketed term
A term with principal functor '{}'/1 can also be expressed
by enclosing its argument in curly brackets.
term = open curly, term, close curly ;
Abstract: {}(l) l
Priority: 0 1201
NOTE - For the syntax of an empty curly brackets, see 6.3.1.3.
------------------- cut here -------------------
Bye
Hi,
Descartes’ “divide problems into parts” works
only for well-behaved, linear, decomposable systems.
But its just that parts might end up as Schrödingers
equation. It could be that stable diffusion is the
new constraint solver. In a sense, stable diffusion
models (or other generative AI) are functioning as
probabilistic, fuzzy constraint solvers — but in a
very different paradigm from classical logic or
formal methods. But what was neglected?
- Cybernetics (1940s–50s)
Focused on feedback loops, control, and self-regulation
in machines and biological systems. Showed that
decomposition can fail because subparts are interdependent.
- Chaos Theory (1960s–80s)
Nonlinear deterministic systems can produce unpredictable,
sensitive dependence on initial conditions. Decomposition
into parts is tricky: small errors explode, and “solving
subparts” may not help predict the whole.
- Santa Fe Institute & Complex Systems (1980s–present)
Studied emergent behavior, networks, adaptation,
self-organization. Linear, reductionist thinking fails
to capture dynamics of economic, social, and ecological systems.
Bye
Mild Shock schrieb:
Hi,
How it started:
https://conceptbase.sourceforge.net/
How its going:
https://www.ibm.com/products/datastax
The problem with claims such as " Formal languages,
such as KAOS, are based on predicate logic and
capture additional details about an application
in a precise manner. They also provide a foundation
for reasoning with information models." is that
every thing in the quoted sentence is wrong.
Real AI systems scale by approximation,
vectorization, distributed representations,
and partial knowledge — not by globally
consistent logical models. No classical requirements
language or ontology captures the informal
cognitive machinery that makes
intelligence flexible. Intelligence needs the
whole messy cognitive spectrum.
Somehow DataStax looks like n8n married AI embedding.
I hope Amazon, Meta, Google, etc.. get the message.
I don't worry about Microsoft, they might come with
something from their Encarta corner and Copilot+ is
more Local AI. After all we need things like Wikidata
in a Robot and not in a Data Center.
LoL
Bye
Hi,
How it started, some useless GOFAI framing and
production systems lore:
Computational Logic and Human Thinking:
How to Be Artificially Intelligent
https://www.cambridge.org/core/books/computational-logic-and-human-thinking/C2AFB0483D922944067DBC76FFFEB295
How it's going, please note CodeMender from Google:
New Google Riftrunner AI (Gemini 3) Shocks Everyone
https://www.youtube.com/watch?v=F_YWQ12qQ8M
Especially note the section about CodeMender (*), an AI
built on Gemini, which inspects and suggests changes
to open-source projects.
So what's the rule for predicting the future in AI? Well,
just take skeptics like Boris the Loris (**) (nah, we don't
use fuzz testing here - CodeMender uses it among other
methods), Linus Torvalds (nah, AI for open source is still
far away - CodeMender is here), etc. Negate what they are
saying and you get a perfect prediction for 2025 / 2026.
LoL
Bye
(*) Already *old* announcement from October 6, 2025:
Introducing CodeMender: an AI agent for code security
https://deepmind.google/blog/introducing-codemender-an-ai-agent-for-code-security/
(**) Ok, when you don't find Boris the Loris on
SWI-Prolog discourse, you might find him here:
Hello. My name is Boris and this is my family. We're
lorises and we are primates - a bit like small
monkeys. We tend to move quite slowly which is
why we are Slow Lorises. We have big eyes so we
can see well in the dark to catch insects for our dinner.
My name... is Boris
https://x.com/mrborisloris
Mild Shock schrieb:
Hi,
Descartes’ “divide problems into parts” works
only for well-behaved, linear, decomposable systems.
But its just that parts might end up as Schrödingers
equation. It could be that stable diffusion is the
new constraint solver. In a sense, stable diffusion
models (or other generative AI) are functioning as
probabilistic, fuzzy constraint solvers — but in a
very different paradigm from classical logic or
formal methods. But what was neglected?
- Cybernetics (1940s–50s)
Focused on feedback loops, control, and self-regulation
in machines and biological systems. Showed that
decomposition can fail because subparts are interdependent.
- Chaos Theory (1960s–80s)
Nonlinear deterministic systems can produce unpredictable,
sensitive dependence on initial conditions. Decomposition
into parts is tricky: small errors explode, and “solving
subparts” may not help predict the whole.
- Santa Fe Institute & Complex Systems (1980s–present)
Studied emergent behavior, networks, adaptation,
self-organization. Linear, reductionist thinking fails
to capture dynamics of economic, social, and ecological systems.
Bye
Mild Shock schrieb:
Hi,
How it started:
https://conceptbase.sourceforge.net/
How its going:
https://www.ibm.com/products/datastax
The problem with claims such as " Formal languages,
such as KAOS, are based on predicate logic and
capture additional details about an application
in a precise manner. They also provide a foundation
for reasoning with information models." is that
every thing in the quoted sentence is wrong.
Real AI systems scale by approximation,
vectorization, distributed representations,
and partial knowledge — not by globally
consistent logical models. No classical requirements
language or ontology captures the informal
cognitive machinery that makes
intelligence flexible. Intelligence needs the
whole messy cognitive spectrum.
Somehow DataStax looks like n8n married AI embedding.
I hope Amazon, Meta, Google, etc.. get the message.
I don't worry about Microsoft, they might come with
something from their Encarta corner and Copilot+ is
more Local AI. After all we need things like Wikidata
in a Robot and not in a Data Center.
LoL
Bye
Hi,
Something tells me the Prolog community has a
severe blind spot in its logic education.
Possibly they never touch a book like this one,
not even with tweezers:
Undergraduate Texts in Mathematics - 1983
H.-D. Ebbinghaus et al. - Mathematical Logic
http://www.fuchs-braun.com/media/ca80d9e55f6d3bfaffff8005fffffff0.pdf
The front cover features a smiling face,
illustrating Ehrenfeucht-Fraïssé (EF) games.
There is a compelling relationship between
EF games and fuzz testing: just take A and B to be a formal
form of a spec and of some code. This is quite
different from Lorenzen games, where the initial
set-up is different. But here, if Anna plays
Player II in G(M1,M2) and Bert plays Player II
in G(M2,M1), then if Anna has a winning strategy,
Bert has one as well, since the two games differ only
in the order of the structures. Sounds like
Bisimulation again. One of the biggest struggles
for Boris the Loris and Nazi Retart Julio of
all time. Or this complete blunder, navigating
in the dark, trying to identify an elephant:
@kuniaki.mukai https://swi-prolog.discourse.group/t/cyclic-terms-unification-x-f-f-x-x-y-f-y-f-y-x-y/9097/72
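For reference, the standard Ehrenfeucht-Fraïssé characterisation (a
sketch, stated for finite relational vocabularies), which also makes
the symmetry above explicit:

\[
\mathfrak{A} \equiv_m \mathfrak{B}
\iff \text{Player II has a winning strategy in } G_m(\mathfrak{A},\mathfrak{B}),
\]

where $\equiv_m$ means agreement on all first-order sentences of
quantifier rank at most $m$; and since $G_m(\mathfrak{A},\mathfrak{B})$
and $G_m(\mathfrak{B},\mathfrak{A})$ are the same game with the boards
listed in the other order, a winning strategy for Player II in one is
a winning strategy in the other.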
Bye
P.S.: Most likely the cardinal sin of the Prolog
Community is that they don't apply Proof Theoretic
methods and Model Theoretic methods on equal
footing. They don't understand how the two
methods are related, even on the most basic
level, such as countermodels, which is a level
more basic than EF games. One of the future
challenges for the community could be extending
proof theoretic methods and model theoretic
methods to (seemingly) higher order logic. This
could be quite messy, or not? I am currently
fascinated by Feferman's Operational Sets and
like Melvin Fitting's work in higher order logic.
Mild Shock schrieb:
Hi,
How it started, some useless GOFAI framing and
production systems lore:
Computational Logic and Human Thinking:
How to Be Artificially Intelligent
https://www.cambridge.org/core/books/computational-logic-and-human-thinking/C2AFB0483D922944067DBC76FFFEB295
How its going, please note CodeMender from Google:
New Google Riftrunner AI (Gemini 3) Shocks Everyone
https://www.youtube.com/watch?v=F_YWQ12qQ8M
Especially note the section about CodeMender(*), and AI
bild on Gemini, which does inspect and suggest changes
to OpenSource projects.
So whats the rule of predicting the future in AI. Well
just take skeptics, like Boris the Loris (**) (nah we don't
use Fuzzy Testing here, CodeMender uses this among other
methods), Linus Torwald (nah, AI for OpenSource is still
far away, CodeMender is here) etc.. Negate what they are
saying and you get a perfect prediction for 2025 / 2026.
LoL
Bye
(*) Already *old* anouncement from October 6, 2025:
Introducing CodeMender: an AI agent for code security
https://deepmind.google/blog/introducing-codemender-an-ai-agent-for-code-security/
(**) Ok, when you don't find Boris the Loris on
SWI-Prolog discourse, you might find him here:
Hello. My name is Boris and this is my family. We're
lorises and we are primates - a bit like small
monkeys. We tend to move quite slowly which is
why we are Slow Lorises. We have big eyes so we
can see well in the dark to catch insects for our dinner.
My name... is Boris
https://x.com/mrborisloris
Mild Shock schrieb:
Hi,
Descartes’ “divide problems into parts” works
only for well-behaved, linear, decomposable systems.
But its just that parts might end up as Schrödingers
equation. It could be that stable diffusion is the
new constraint solver. In a sense, stable diffusion
models (or other generative AI) are functioning as
probabilistic, fuzzy constraint solvers — but in a
very different paradigm from classical logic or
formal methods. But what was neglected?
- Cybernetics (1940s–50s)
Focused on feedback loops, control, and self-regulation
in machines and biological systems. Showed that
decomposition can fail because subparts are interdependent.
- Chaos Theory (1960s–80s)
Nonlinear deterministic systems can produce unpredictable,
sensitive dependence on initial conditions. Decomposition
into parts is tricky: small errors explode, and “solving
subparts” may not help predict the whole.
- Santa Fe Institute & Complex Systems (1980s–present)
Studied emergent behavior, networks, adaptation,
self-organization. Linear, reductionist thinking fails
to capture dynamics of economic, social, and ecological systems.
Bye
Mild Shock schrieb:
Hi,
How it started:
https://conceptbase.sourceforge.net/
How its going:
https://www.ibm.com/products/datastax
The problem with claims such as " Formal languages,
such as KAOS, are based on predicate logic and
capture additional details about an application
in a precise manner. They also provide a foundation
for reasoning with information models." is that
every thing in the quoted sentence is wrong.
Real AI systems scale by approximation,
vectorization, distributed representations,
and partial knowledge — not by globally
consistent logical models. No classical requirements
language or ontology captures the informal
cognitive machinery that makes
intelligence flexible. Intelligence needs the
whole messy cognitive spectrum.
Somehow DataStax looks like n8n married AI embedding.
I hope Amazon, Meta, Google, etc.. get the message.
I don't worry about Microsoft, they might come with
something from their Encarta corner and Copilot+ is
more Local AI. After all we need things like Wikidata
in a Robot and not in a Data Center.
LoL
Bye
Hi,
Set theory was initially praised as a foundation
for mathematics. But the reduction of mathematics
to a foundation was also carried out in type theories.
And we see that this reduction is rather arbitrary:
we now have dozens of competing set theories and
type theories. What's more stunning, the reduction
allows us to view Model Theory in a Proof Theory
fashion, possibly implying that Model Theory doesn't
exist or is not needed? But this is a dangerous
conclusion, since the foundation might be a theoretical,
technical thing, far away from practical use.
Take naive comprehension, was it wrong?
∃x∀y(y ∈ x <=> phi(y))
It was fixed, after Russell's paradox hit Frege's system,
by Zermelo's separation schema:
∀z∃x∀y(y ∈ x <=> y ∈ z & phi(y))
But in reverse mathematics we find another refinement:
∃x∀y(y ∈ x <=> phi(y))   for phi restricted to a certain class of formulas
So instead of introducing a set-like upper bound z,
where we would only look at the projection of
comprehension onto z, we take the naive intuition,
born from informal usage, more seriously and say:
naive comprehension was not really wrong!
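A one-line reminder of why the unrestricted schema had to go: take
phi(y) to be y ∉ y. Under naive comprehension the witness x gives

\[
x \in x \leftrightarrow x \notin x,
\]

a contradiction; under the separation schema the same instance only
gives $x \in x \leftrightarrow (x \in z \wedge x \notin x)$, which merely
forces $x \notin x$ and $x \notin z$, so nothing breaks.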
Bye
P.S.: It would be interesting to see whether
Operational Set Theory, as presented today, has
a developed model theoretic language. Or does it
also subscribe to the Computational Logic Primate?
5.3 Relativizing operational set theory
It is shown in [45] that a direct relativization
of operational reflection leads to theories that are
significantly stronger than theories formalizing the
admissible analogues of classical large cardinal axioms.
This refutes the conjecture 14(1) on p. 977 of Feferman [19].
https://home.inf.unibe.ch/ltg/publications/2018/jae18.pdf
Is relativizing the backdoor to a model theory
that might also be useful for OST? OST is used like
Lego bricks here: adding this or that, one gets different
set theories (or maybe type theories).
Operational set theory and small large cardinals
Solomon Feferman - 2006
Conjecture 14.
(1) OST + (Inacc) ≡ KPi.
(2) OST + (Mahlo) ≡ KPM.
(3) OST + (Reg2) ≡ KPω + (Π3-Reflection).
https://math.stanford.edu/~feferman/papers/OST-Final.pdf
One gets the impression of OST being a sub-foundation toy.
Mild Shock schrieb:
Hi,
Something tells me the Prolog community has a
sever blind spot, in their Logic education.
Possibly never touch a book like this here,
even not with tweezers:
Undergraduate Texts in Mathematics - 1983
H .- D. Ebbinghaus et. al - Mathematical Logic
http://www.fuchs-braun.com/media/ca80d9e55f6d3bfaffff8005fffffff0.pdf
The front cover features a smiling face,
illustrating Ehrenfeucht Fraisse (EF) games.
There is a compelling relationship between
EF and Fuzzy Testing. Just take A and B, a formal
form of a spec and of some code. This is quite
different from Lorentz Games, where the initial
set-up is different. But here if Anna plays
Player II in G(M1,M2) and Bert plays Player II
in G(M2,M1). Then if Anna has a winning strategy,
then Bert has a winning strategy. Sounds like
Bisimulation again. One of the biggest struggels
for Boris the Loris and Nazi Retart Julio of
all time. Or this complete blunder, navigating
in the dark, trying to identify an elephant:
@kuniaki.mukai
https://swi-prolog.discourse.group/t/cyclic-terms-unification-x-f-f-x-x-y-f-y-f-y-x-y/9097/72
Bye
P.S.: Mostlikely the cardinal sin of the Prolog
Community is that they don't apply Proof Theoretic
methods and Model Theoretic methods on equal
footing. They don't understand how the two
methods are related, even on the most basic
level, such as counter models, which is a level
more basic than EF games. One of the future
challenges for the community could be extending
proof theoretic methods and model theoretic
methods to (seemingly) higher order logic. This
could be quite messy, or not? I am currengly
fascinated by Feferman Operative Sets and
like Melvin Fittings work in higher order logic.
Mild Shock schrieb:
Hi,
How it started, some useless GOFAI framing and
production systems lore:
Computational Logic and Human Thinking:
How to Be Artificially Intelligent
https://www.cambridge.org/core/books/computational-logic-and-human-thinking/C2AFB0483D922944067DBC76FFFEB295
How its going, please note CodeMender from Google:
New Google Riftrunner AI (Gemini 3) Shocks Everyone
https://www.youtube.com/watch?v=F_YWQ12qQ8M
Especially note the section about CodeMender(*), and AI
bild on Gemini, which does inspect and suggest changes
to OpenSource projects.
So whats the rule of predicting the future in AI. Well
just take skeptics, like Boris the Loris (**) (nah we don't
use Fuzzy Testing here, CodeMender uses this among other
methods), Linus Torwald (nah, AI for OpenSource is still
far away, CodeMender is here) etc.. Negate what they are
saying and you get a perfect prediction for 2025 / 2026.
LoL
Bye
(*) Already *old* anouncement from October 6, 2025:
Introducing CodeMender: an AI agent for code security
https://deepmind.google/blog/introducing-codemender-an-ai-agent-for-code-security/
(**) Ok, when you don't find Boris the Loris on
SWI-Prolog discourse, you might find him here:
Hello. My name is Boris and this is my family. We're
lorises and we are primates - a bit like small
monkeys. We tend to move quite slowly which is
why we are Slow Lorises. We have big eyes so we
can see well in the dark to catch insects for our dinner.
My name... is Boris
https://x.com/mrborisloris
Mild Shock schrieb:
Hi,
Descartes’ “divide problems into parts” works
only for well-behaved, linear, decomposable systems.
But its just that parts might end up as Schrödingers
equation. It could be that stable diffusion is the
new constraint solver. In a sense, stable diffusion
models (or other generative AI) are functioning as
probabilistic, fuzzy constraint solvers — but in a
very different paradigm from classical logic or
formal methods. But what was neglected?
- Cybernetics (1940s–50s)
Focused on feedback loops, control, and self-regulation
in machines and biological systems. Showed that
decomposition can fail because subparts are interdependent.
- Chaos Theory (1960s–80s)
Nonlinear deterministic systems can produce unpredictable,
sensitive dependence on initial conditions. Decomposition
into parts is tricky: small errors explode, and “solving
subparts” may not help predict the whole.
- Santa Fe Institute & Complex Systems (1980s–present)
Studied emergent behavior, networks, adaptation,
self-organization. Linear, reductionist thinking fails
to capture dynamics of economic, social, and ecological systems.
Bye
Mild Shock schrieb:
Hi,
How it started:
https://conceptbase.sourceforge.net/
How its going:
https://www.ibm.com/products/datastax
The problem with claims such as " Formal languages,
such as KAOS, are based on predicate logic and
capture additional details about an application
in a precise manner. They also provide a foundation
for reasoning with information models." is that
every thing in the quoted sentence is wrong.
Real AI systems scale by approximation,
vectorization, distributed representations,
and partial knowledge — not by globally
consistent logical models. No classical requirements
language or ontology captures the informal
cognitive machinery that makes
intelligence flexible. Intelligence needs the
whole messy cognitive spectrum.
Somehow DataStax looks like n8n married AI embedding.
I hope Amazon, Meta, Google, etc.. get the message.
I don't worry about Microsoft, they might come with
something from their Encarta corner and Copilot+ is
more Local AI. After all we need things like Wikidata
in a Robot and not in a Data Center.
LoL
Bye
Hi,
How it started, some useless GOFAI framing and
production systems lore:
Computational Logic and Human Thinking:
How to Be Artificially Intelligent https://www.cambridge.org/core/books/computational-logic-and-human-thinking/C2AFB0483D922944067DBC76FFFEB295
How its going, please note CodeMender from Google:
New Google Riftrunner AI (Gemini 3) Shocks Everyone https://www.youtube.com/watch?v=F_YWQ12qQ8M
Especially note the section about CodeMender(*), and AI
bild on Gemini, which does inspect and suggest changes
to OpenSource projects.
So whats the rule of predicting the future in AI. Well
just take skeptics, like Boris the Loris (**) (nah we don't
use Fuzzy Testing here, CodeMender uses this among other
methods), Linus Torwald (nah, AI for OpenSource is still
far away, CodeMender is here) etc.. Negate what they are
saying and you get a perfect prediction for 2025 / 2026.
LoL
Bye
(*) Already *old* anouncement from October 6, 2025:
Introducing CodeMender: an AI agent for code security https://deepmind.google/blog/introducing-codemender-an-ai-agent-for-code-security/
(**) Ok, when you don't find Boris the Loris on
SWI-Prolog discourse, you might find him here:
Hello. My name is Boris and this is my family. We're
lorises and we are primates - a bit like small
monkeys. We tend to move quite slowly which is
why we are Slow Lorises. We have big eyes so we
can see well in the dark to catch insects for our dinner.
My name... is Boris
https://x.com/mrborisloris
Mild Shock schrieb:
Hi,
Descartes’ “divide problems into parts” works
only for well-behaved, linear, decomposable systems.
But its just that parts might end up as Schrödingers
equation. It could be that stable diffusion is the
new constraint solver. In a sense, stable diffusion
models (or other generative AI) are functioning as
probabilistic, fuzzy constraint solvers — but in a
very different paradigm from classical logic or
formal methods. But what was neglected?
- Cybernetics (1940s–50s)
Focused on feedback loops, control, and self-regulation
in machines and biological systems. Showed that
decomposition can fail because subparts are interdependent.
- Chaos Theory (1960s–80s)
Nonlinear deterministic systems can produce unpredictable,
sensitive dependence on initial conditions. Decomposition
into parts is tricky: small errors explode, and “solving
subparts” may not help predict the whole.
- Santa Fe Institute & Complex Systems (1980s–present)
Studied emergent behavior, networks, adaptation,
self-organization. Linear, reductionist thinking fails
to capture dynamics of economic, social, and ecological systems.
Bye
Mild Shock schrieb:
Hi,
How it started:
https://conceptbase.sourceforge.net/
How its going:
https://www.ibm.com/products/datastax
The problem with claims such as " Formal languages,
such as KAOS, are based on predicate logic and
capture additional details about an application
in a precise manner. They also provide a foundation
for reasoning with information models." is that
every thing in the quoted sentence is wrong.
Real AI systems scale by approximation,
vectorization, distributed representations,
and partial knowledge — not by globally
consistent logical models. No classical requirements
language or ontology captures the informal
cognitive machinery that makes
intelligence flexible. Intelligence needs the
whole messy cognitive spectrum.
Somehow DataStax looks like n8n married AI embedding.
I hope Amazon, Meta, Google, etc.. get the message.
I don't worry about Microsoft, they might come with
something from their Encarta corner and Copilot+ is
more Local AI. After all we need things like Wikidata
in a Robot and not in a Data Center.
LoL
Bye
Hi,
How it started:
https://conceptbase.sourceforge.net/
How its going:
https://www.ibm.com/products/datastax
The problem with claims such as " Formal languages,
such as KAOS, are based on predicate logic and
capture additional details about an application
in a precise manner. They also provide a foundation
for reasoning with information models." is that
every thing in the quoted sentence is wrong.
Real AI systems scale by approximation,
vectorization, distributed representations,
and partial knowledge — not by globally
consistent logical models. No classical requirements
language or ontology captures the informal
cognitive machinery that makes
intelligence flexible. Intelligence needs the
whole messy cognitive spectrum.
Somehow DataStax looks like n8n married AI embedding.
I hope Amazon, Meta, Google, etc.. get the message.
I don't worry about Microsoft, they might come with
something from their Encarta corner and Copilot+ is
more Local AI. After all we need things like Wikidata
in a Robot and not in a Data Center.
LoL
Bye
Hi,
Ha Ha, remember this post on SWI-Prolog
discourse, the primary source for morons such
as Boris the Loris and Nazi Retard Julio:
"The idea that LLM-based methods can become
more intelligent by using massive amounts
of computation is false. They can generate
more kinds of BS, but at an enormous cost in
hardware and in the electricity to run that
massive hardware. But without methods of
evaluation, the probability that random mixtures
of data are true or useful or worth the cost
of generating them becomes less and less likely."
- John Sowa
https://swi-prolog.discourse.group/t/prolog-and-llms-genai/8699
Guess what, my new ThinkCentre, which just arrived
via Lenovo, China, with a Snapdragon X, for around
700.- USD, can easily run some inferencing locally.
I was using AnythingLLM; it has a little idiotic
Electron user interface, but it has dedicated
support for the Snapdragon X NPU and models, via QNN/ONNX:
The all-in-one AI application
https://anythingllm.com/
Tested a Llama model, a little bit chatty to
be honest, and a Phi Silica model, not yet that
good at coding. Where did the massive computation
come from? From the SoC and the unified memory
of the Snapdragon. I had 32 GB, and 16 GB was
shared with the NPU. So you don't need to
buy an Aura Yoga laptop, which has a separate
NVIDIA graphics card with only 8 GB. This
graphics card will be useless; many interesting
models are above 8 GB. And yes, the massive
computation obviously leads to more intelligence.
The latter is a riddle for every Prologer: how
could more LIPS (logical inferences per second)
lead to more intelligence?
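A back-of-the-envelope check of the 8 GB versus 16 GB point, counting
weights only and ignoring KV cache and runtime overhead (nominal
parameter counts, so just a rough sketch):

\[
13 \times 10^{9}\ \text{params} \times 0.5\ \text{byte/param} \approx 6.5\ \text{GB (4-bit)},
\qquad
13 \times 10^{9} \times 1\ \text{byte/param} \approx 13\ \text{GB (8-bit)},
\]

so a 13B model at 8-bit already overflows the 8 GB card, while the
16 GB of shared unified memory still leaves headroom.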
Bye
Mild Shock schrieb:
Hi,
How it started:
https://conceptbase.sourceforge.net/
How its going:
https://www.ibm.com/products/datastax
The problem with claims such as " Formal languages,
such as KAOS, are based on predicate logic and
capture additional details about an application
in a precise manner. They also provide a foundation
for reasoning with information models." is that
every thing in the quoted sentence is wrong.
Real AI systems scale by approximation,
vectorization, distributed representations,
and partial knowledge — not by globally
consistent logical models. No classical requirements
language or ontology captures the informal
cognitive machinery that makes
intelligence flexible. Intelligence needs the
whole messy cognitive spectrum.
Somehow DataStax looks like n8n married AI embedding.
I hope Amazon, Meta, Google, etc.. get the message.
I don't worry about Microsoft, they might come with
something from their Encarta corner and Copilot+ is
more Local AI. After all we need things like Wikidata
in a Robot and not in a Data Center.
LoL
Bye