Concerning this boring nonsense:
https://book.simply-logical.space/src/text/2_part_ii/5.3.html#
Funny idea that anybody would be interested just now, in the
year 2025, in things like teaching breadth-first
search versus depth-first search, or even be “mystified”
by such stuff. It's extremely trivial material:
insert your favorite tree-traversal pictures here.
It's not even artificial intelligence, nor does it have anything
to do with mathematical logic; it belongs to computer
science and discrete mathematics, which you get in first-year
university courses, making it moot to call it “simply logical”. It
reminds me of teaching how wax candles work
to dumb down students, just when light bulbs have been
invented. If this is the outcome
of the Prolog Education Group 2.0, then good night.
My suspicion: teaching WalkSAT as an alternative
to DPLL, and showing its limitations, would maybe give
more bang. We are currently entering an era that
already started at the end of the 1990s,
when some new probabilistic complexity classes
were defined. Many machine learning techniques
also have such an aspect, and it will only get
worse with quantum computing. Having
a grip on these things also helps in distinguishing
when an AI acts by chance, and when it deviates from
chance and shows some excelling adaptation to the
problem domain at hand. Like here,
anybody have an idea what they mean by “above chance”?
Intuitive physics understanding emerges from
self-supervised pretraining on natural videos https://arxiv.org/abs/2502.11831
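For illustration, here is a minimal random-walk SAT solver in Prolog (a sketch assuming SWI-Prolog's library(random); the greedy break-count heuristic and restarts of real WalkSAT are omitted, and all predicate names are my own):

```prolog
% Clauses are lists of lit(Var,Sign); an assignment is a list Var-Bool.
walksat(Clauses, Vars, Assign) :-
    random_assign(Vars, A0),
    walk(Clauses, A0, 10000, Assign).

walk(Clauses, A, _, A) :-
    \+ (member(C, Clauses), \+ sat_clause(C, A)), !.   % all satisfied
walk(Clauses, A, N, Assign) :-
    N > 0,
    findall(C, (member(C, Clauses), \+ sat_clause(C, A)), Unsat),
    random_member(Clause, Unsat),       % pick a random unsatisfied clause
    random_member(lit(V, _), Clause),   % flip a random variable in it
    flip(V, A, A1),
    M is N - 1,
    walk(Clauses, A1, M, Assign).

sat_clause(C, A) :- member(lit(V, S), C), member(V-S, A), !.

flip(V, A, [V-NB|Rest]) :- select(V-B, A, Rest), neg(B, NB).
neg(true, false).
neg(false, true).

random_assign([], []).
random_assign([V|Vs], [V-B|As]) :-
    random_member(B, [true, false]),
    random_assign(Vs, As).
```

Since the walk is purely random, it can fail on satisfiable formulas when the flip budget runs out, which is exactly the kind of limitation worth teaching.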
The H is the bottleneck on purpose:
relation(X, Y) :- encoder(X, H), decoder(H, Y).
The first deep learning breakthrough was
AlexNet by Alex Krizhevsky, Ilya Sutskever
and Geoffrey Hinton:
In 2011, Geoffrey Hinton started reaching out
to colleagues about “What do I have to do to
convince you that neural networks are the future?” https://en.wikipedia.org/wiki/AlexNet
Meanwhile ILP is still dreaming of higher-order logic:
We pull it out of thin air. And the job that does
is, indeed, that it breaks up relations into
sub-relations or sub-routines, if you prefer.
You mean this here:
Background knowledge (Second Order)
-----------------------------------
(Chain) ∃.P,Q,R ∀.x,y,z: P(x,y)← Q(x,z),R(z,y)
https://github.com/stassa/vanilla/tree/master/lib/poker
That's too general; it doesn't address
analogical reasoning.
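For context, once the existential predicate variables of the Chain metarule above are instantiated, it is just an ordinary definite clause; a minimal sketch with made-up background knowledge:

```prolog
% Chain: P(x,y) <- Q(x,z), R(z,y), here with P = grandparent
% and Q = R = parent (illustrative facts).
parent(abe, homer).
parent(homer, bart).

grandparent(X, Y) :- parent(X, Z), parent(Z, Y).

% ?- grandparent(abe, Who).
% Who = bart.
```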
Struggling to understand Hopcroft and Karp (1971)?
No worries, Philip Zucker is the man:
Co-Egraphs: Streams, Unification, PEGs, Rational Lambdas
You can make egraphs support stream like things / rational terms. https://www.philipzucker.com/coegraph/
Strange, I came up with the same stuff over the last
weeks while discussing with @kuniaki.mukai .
Let's say 2025 is the year of rational trees!
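A quick sketch of what is at stake, assuming a system with rational-tree unification such as SWI-Prolog:

```prolog
% The equation X = s(X) has no finite solution, but a cyclic
% term solves it; such terms are the rational trees.
% ?- X = s(X).
% X = s(X).
%
% Unifying two cyclic terms must detect already-visited pairs,
% which is where Hopcroft & Karp-style pairing comes in:
% ?- X = s(X), Y = s(s(Y)), X = Y.
% Succeeds: both variables denote the same rational tree.
```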
observations on the decidability
Hi,
Using Hopcroft & Karp (HK) everywhere
clearly raises the bar. I decided to use
HK also in compare/3, just to have it
detect structure sharing. But here is a little
test comparing with WebPL, only with (=)/2 because
I didn't find compare/3:
/* WebPL May 2025 */
test(25).
True
(1040.4ms)
https://webpl.whenderson.dev/
/* Dogelog Player 1.3.6 */
:- version.
:- time(test(25)).
Dogelog Spieler, Prolog zum Mond, 1.3.6 (02.08.2025)
(c) 1985-2025, XLOG Technologies AG, Schweiz
% Zeit 1 ms, GC 0 ms, Lips 125000, Uhr 14.08.2025 12:30 https://www.dogelog.ch/littab/doclet/live/07_basic/example01/package.html
SWI-Prolog WASM can also do it fast. The
old Trealla WASM version cannot do it fast.
Bye
P.S.: The test code:
hydra(0, _) :- !.
hydra(N, s(X,X)) :- M is N-1, hydra(M, X).

test(N) :-
   hydra(N, X), hydra(N, Y), X = Y.
--- Synchronet 3.21a-Linux NewsLink 1.2
Hi,
This is a nice confusion of the highest order.
observations on the decidability
Well, in summary: the existence of a pretest (==)/2
makes Mercio's definition decidable. Equality on rational
trees is decidable, even if it comes in disguise as Mercio's
mathematical definition, which gives a total order
that is unfortunately not a natural order:
A = B :⟺ A' =lex B'
A < B :⟺ A' <lex B'
Natural order of rational trees?
https://math.stackexchange.com/a/210730
Why is Mercio’s mathematical definition of equality
decidable? Because it is semantically equivalent to
bisimulation as implemented in (==)/2, and decidability
is a semantic notion:
Decidability (logic)
In logic, a true/false decision problem is decidable if there
exists an effective method for deriving the correct answer.
Decidability (logic) - Wikipedia
So when you talk about semi-decidable, decidable, etc.,
it is always about the existence of an algorithm in
the large space of effective methods, according to
the Church hypothesis.
If you want to talk about a particular algorithm,
you would use the terminology “incomplete”. For
example, DFS algorithms are “incomplete” compared to
BFS algorithms on certain problems.
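The classic Prolog illustration: on a cyclic graph, a left-to-right DFS strategy loops, although the relation itself is decidable:

```prolog
edge(a, b).
edge(b, a).

path(X, Y) :- edge(X, Y).
path(X, Y) :- edge(X, Z), path(Z, Y).

% ?- path(a, a).
% Succeeds, but loops on backtracking under plain DFS resolution.
% With tabling (SWI-Prolog: ":- table path/2.") all answers are
% found and the query terminates: the incompleteness lies in the
% strategy, not in the problem.
```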
Bye
There is a difference between this:
/* Version I */
fun(X, Y, Z, A) :-
   X =< Y, !,
   Z = A.
fun(X, Y, Z, A) :-
   X1 is X - 1,
   fun(X1, Y, Z, A1),
   Y1 is Y - 1,
   fun(Y1, Z, X, A2),
   Z1 is Z - 1,
   fun(Z1, X, Y, A3),
   fun(A1, A2, A3, A).
And this in Dogelog Player:
/* Version II */
fun(X, Y, Z, A) :-
   X =< Y, !,
   Z = A.
fun(X, Y, Z, A) :-
   X1 is X - 1,
   Y1 is Y - 1,
   Z1 is Z - 1,
   fun(X1, Y, Z, A1),
   fun(Y1, Z, X, A2),
   fun(Z1, X, Y, A3),
   fun(A1, A2, A3, A).
Usually I benchmark Version I, but Version II
gives Dogelog Player the opportunity to do more
static variable shunting, which shows in the
performance. Note: this is JavaScript, not WASM:
/* Version I */
% Zeit 1723 ms, GC 6 ms, Lips 4863960, Uhr 16.08.2025 11:21
/* Version II */
% Zeit 1489 ms, GC 2 ms, Lips 5628343, Uhr 16.08.2025 11:21
Was testing:
mtak :- between(1,31,_), fun(18, 12, 6, _), fail.
mtak.
:- time(mtak).
Hi,
It seems the gist of WebPL is to examine
dynamic variable shunting as part of a
garbage collection strategy, as found here:
Variable Shunting for the WAM
Sahlin, D. and Carlsson, M. - 1991 https://www.diva-portal.org/smash/get/diva2:1041385/FULLTEXT01.pdf
Does it pay off? I am testing
Version II of mtak:
/* WebPL GC Rust-WAMS */
True
(1875.2ms)
No more results
/* Trealla Prolog C-WASM */
True
(1269.5ms)
No more results
/* SWI-Prolog C-WASM */
True
(1368.9ms)
No more results
The 1875.2ms are worse than my 1489 ms. The
1269.5ms and 1368.9ms are only slightly better
than my 1489 ms.
I would say static shunting is the secret ingredient
of Dogelog Player that makes it fast sometimes!
Just think about it: in the above, JavaScript is
compared to WASM. What's going on? Of course
one should repeat the test with the newest
SWI-Prolog and Trealla Prolog; I don't know
whether there is a site that does all the updates.
Bye
BTW: For between/3 I was using:
between(Lo, Lo, R) :- !, Lo = R.
between(Lo, _, Lo).
between(Lo, Hi, X) :- Lo2 is Lo+1, between(Lo2, Hi, X).
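For readers wondering what shunting means at the source level, here is a hypothetical illustration (not Dogelog Player internals):

```prolog
% H only forwards a binding from q/2 to the caller:
p(X, R) :- q(X, H), R = H.
% Shunting collapses the chain H -> R, so the clause behaves
% as if the result variable were passed directly:
% p(X, R) :- q(X, R).
```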
Hi,
BTW: Here is a test without WASM:
/* Trealla Prolog 2.82.12 WSL2 */
?- time(mtak).
% Time elapsed 0.734s, 8873504 Inferences, 12.091 MLips
true.
/* SWI Prolog 9.0.4 WSL2 */
?- time(mtak).
% 3,943,790 inferences, 0.210 CPU in 0.217 seconds
(97% CPU, 18777906 Lips)
true.
/* Dogelog Player 1.3.6 Windows 11 */
?- time(mtak).
% Zeit 412 ms, GC 0 ms, Lips 20341271, Uhr 16.08.2025 11:40
true.
/* Jekejeke Prolog 1.7.3 Windows 11 */
?- time(mtak).
% Zeit 664 ms, GC 3 ms, Uhr 16.08.2025 11:41
true.
I didn't test Scryer Prolog because it has no GC.
I keep testing Jekejeke Prolog because it has
reference counting. But Dogelog Player, without
reference counting and with true garbage collection,
seems to be speedier: not as fast as SWI-Prolog,
but faster than Trealla Prolog.
So SWI-Prolog is still the Original Gangster (OG)
of garbage collection (GC). Some Prolog systems
like Ciao Prolog, ECLiPSe Prolog or SICStus Prolog
might be even faster, since it seems to me they use
native backend compilation schemes, AOT or JIT. The
systems tested above are all more of the
interpreter-type Prolog systems.
Bye
Hi,
The paper by Sahlin and Carlsson from 1991
did not ring a bell yet. But it suggests testing
something with primes and freeze. Let's do
primes as suggested, but without freeze. SWI-Prolog
seems not to be the OG of GC here. Putting aside Sahlin
and Carlsson, it is a typical example of a lot of
intermediate results that can be discarded by
garbage collection. Every candidate number that
is not a prime can be removed from the
trail; such numbers become unreachable in the first clause
of search/3. Besides this obvious unreachability
task, I don't have statistics and don't immediately see
where large variable instantiation chains are supposed
to be created. At least not in my Prolog system, since
a result variable is passed without binding it to a
local variable; this "shunting" happens independently
of neck tests and the "shunting" there. The result-variable
passing is extremely simple to implement and could
be what is effective here, besides the reachability thing.
At least the 1 ms GC time in Dogelog Player shows that
the reachability thing is the minor effort or optimization
needed to get nice performance:
/* WebPL GC */
(1846.1ms)
/* Dogelog Player 1.3.6 for JavaScript (16.08.2025) */
% Zeit 2992 ms, GC 1 ms, Lips 2514202, Uhr 17.08.2025 17:44
/* SWI-Prolog WASM */
(4204.2ms)
/* Trealla Prolog WASM */
(23568.9ms)
The test code was:
test :-
   len(L, 1000),
   primes(L, _).

primes([], 1).
primes([J|L], J) :-
   primes(L, I),
   K is I+1,
   search(L, K, J).

search(L, I, J) :-
   mem(X, L),
   I mod X =:= 0, !,
   K is I+1,
   search(L, K, J).
search(_, I, I).

mem(X, [X|_]).
mem(X, [_|Y]) :-
   mem(X, Y).

len([], 0) :- !.
len([_|L], N) :-
   N > 0,
   M is N-1,
   len(L, M).
Bye