Mercio’s Algorithm (2012) for Rational
Tree Compare is specified here mathematically.
It is based on computing the sequence of truncations
A′ = (A_0, A_1, …) of a rational tree A:
A < B ⟺ A′ <_lex B′
https://math.stackexchange.com/a/210730
Here is an implementation in Prolog.
First the truncation:
trunc(_, T, T) :- var(T), !.
trunc(0, T, F) :- !, functor(T, F, _).
trunc(N, T, S) :-
    M is N-1,
    T =.. [F|L],
    maplist(trunc(M), L, R),
    S =.. [F|R].
And then the iterative deepening:
mercio(N, X, Y, C) :-
    trunc(N, X, A),
    trunc(N, Y, B),
    compare(D, A, B),
    D \== (=), !, C = D.
mercio(N, X, Y, C) :-
    M is N + 1,
    mercio(M, X, Y, C).
The main entry first uses (==)/2 for a
terminating equality check and, if the
rational trees are not equal, falls back
to the iterative deepening:
mercio(C, X, Y) :- X == Y, !, C = (=).
mercio(C, X, Y) :- mercio(0, X, Y, C).
I couldn’t yet find a triple that violates
transitivity. But I am also not very happy
with the code. It looks a little expensive
to create a truncation copy iteratively.
Provided there is really no counter example,
maybe we can do it smarter and faster? It
might also stand the test of conservativity?
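Since the thread is about rational trees, here is a hedged illustration, not the Prolog original: a small Python sketch of the same truncation plus iterative-deepening idea, with cyclic terms modeled as ('functor', [children]) tuples whose child lists may point back at a node. A bisimulation check plays the role of the (==)/2 pretest. All names here are made up for the sketch.

```python
def trunc(n, t):
    """Depth-n truncation; always returns a nested tuple so that
    Python's lexicographic tuple order stands in for compare/3."""
    if not isinstance(t, tuple):              # an atom / leaf
        return (str(t),)
    f, kids = t
    if n == 0 or not kids:
        return (f,)
    return (f,) + tuple(trunc(n - 1, k) for k in kids)

def equal_rat(x, y, seen=None):
    """Bisimulation equality on possibly cyclic terms (the ==/2 pretest).
    A revisited identity pair is assumed equal (coinduction)."""
    if seen is None:
        seen = set()
    if not isinstance(x, tuple) or not isinstance(y, tuple):
        return x == y
    key = (id(x), id(y))
    if key in seen:
        return True
    seen.add(key)
    fx, kx = x
    fy, ky = y
    if fx != fy or len(kx) != len(ky):
        return False
    return all(equal_rat(a, b, seen) for a, b in zip(kx, ky))

def mercio(x, y):
    """Returns '<', '=', '>' by iterative deepening over truncations."""
    if equal_rat(x, y):
        return '='
    n = 0
    while True:
        a, b = trunc(n, x), trunc(n, y)
        if a < b:
            return '<'
        if a > b:
            return '>'
        n += 1
```

For example X = s(X) and Y = s(s(Y)) come out '=' via the bisimulation pretest, while two distinct rational trees differ at some finite truncation depth.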
Hi,
Nope, stack overflow didn't invent Gameification.
This was already quite fun:
IEEE Guide to Classification of Software Anomalies https://github.com/Orthant/IEEE/blob/master/1044.1-1995.pdf
Here some examples:
- Boeing airplane lost in the Maldives:
No problem, severity 7
- Salary paid twice by booking software:
No problem, severity 0
LoL
Bye
Mild Shock schrieb:
Hi,
But who would have thought that Gameification leads to
Bitrot? I mean I asked the question back when I was user4414,
until somebody mobbed me on MSE.
Now there is a nonsense answer accepted, and I will
not correct it, since I don't use Bitrot anymore:
https://math.stackexchange.com/a/210730
The question "What is a natural one that is closest
to the lexical order?" of course asks for "Conservativity",
because by "lexical order" we mean (@<)/2 from Prolog.
But Mercio’s Algorithm doesn't satisfy Mats Carlsson's appeal,
so we find a counter example to "Conservativity", which
is quite interesting:
problem(X, Y) :-
    repeat, fuzzy(X), acyclic_term(X),
    fuzzy(Y), acyclic_term(Y),
    mercio(<, X, Y), \+ X @< Y.
?- problem(X, Y).
X = s(s(1, 1), 1),
Y = s(s(1, _), s(_, 1))
It is interesting, since it is a) an acyclic term,
and b) it violates Mats Carlsson's appeal:
?- X = s(s(1, 1), 1), Y = s(s(1, _), s(_, 1)), mercio(C, X, Y).
X = s(s(1, 1), 1),
Y = s(s(1, _), s(_, 1)),
C = (<).
?- X1 = s(1,1), Y1 = s(1,_), mercio(C, X1, Y1).
X1 = s(1, 1),
Y1 = s(1, _),
C = (>).
This is kind of an independence proof between total
order and Mats Carlsson's appeal.
Cool!
Bye
Mild Shock schrieb:
Now the question is whether compare_rat/2 can
be extended to a total order or not. On
the positive side we find that a partial
order can always be extended:
Szpilrajn Extension Theorem https://en.wikipedia.org/wiki/Szpilrajn_extension_theorem
But what if compare_rat/2 by @kuniaki.mukai
is not a partial order? What if it is only a
preorder, or even worse only a binary relation.
Returning 4 values doesn’t guarantee that it
is a partial order. We have help from here:
Szpilrajn, Arrow and Suzumura https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1467-999X.2011.04130.x
A binary relation needs to be Suzumura
consistent, so that it can be extended
into a total order. And compare_rat/2 is
not Suzumura consistent, as the following
cycle a < c < b < a shows:
/* Not SWI, Windows Console is SNAFU */
?- repeat, fuzzy(A), fuzzy(C),
compare_rat(_X, A, C), _X = (<),
fuzzy(B), compare_rat(_Y, C, B), _Y = (<),
compare_rat(_Z, B, A), _Z = (<).
A = s(s(A, A), 1), _A = s(_A, s(_, 1)), C = s(_A, _),
_B = s(1, _B), B = s(B, s(1, _B))
I was only testing with compare_rat/3, but
the theorem applies also to other compare
proposals, and can be used to exclude
them as soon as a Suzumura inconsistency is found.
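Suzumura consistency of a finite relation can be checked mechanically. Here is a hedged Python sketch (the function names are mine, not from the paper): a relation R is Suzumura consistent when no pair (x, y) in the transitive closure of R has its reverse (y, x) in the strict part of R.

```python
def transitive_closure(rel):
    """Naive transitive closure of a finite binary relation."""
    closure = set(rel)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

def suzumura_consistent(rel):
    """rel is a set of pairs (x, y) read as x <= y. Consistent iff no
    transitive-closure pair (x, y) has (y, x) in the strict part of rel."""
    strict = {(a, b) for (a, b) in rel if (b, a) not in rel}
    return all((y, x) not in strict for (x, y) in transitive_closure(rel))
```

The cycle above, a < c < b < a, fails this check, while any genuine partial order, and even a tie, passes.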
The good thing is we have at least Mercio’s
Algorithm. This can be used for Total Order Sorting
of Prolog terms, cyclic and acyclic. But how to speed
it up? Here is a take. The idea is sketched as follows:
Sketch to determine (<) or (>):
1. Use a list of pairs. These are the leaves of
a truncation.
2. Compare this list; if the result
differs from (=), you are done.
3. Expand the list of pairs to get a new level,
and continue with step 1.
The Prolog code reads as follows:
Step 1:
compare_truncs(C, []) :- !, C = (=).
compare_truncs(C, [X-Y|_]) :-
    trunc(X, A),
    trunc(Y, B),
    compare(D, A, B),
    D \== (=), !, C = D.
compare_truncs(C, [_|L]) :-
    compare_truncs(C, L).
trunc(X, A) :- var(X), !, A = X.
trunc(X, F/N) :- functor(X, F, N).
Step 2:
next_truncs([], []).
next_truncs([X-_|L], R) :- var(X), !,
    next_truncs(L, R).
next_truncs([X-Y|L], R) :-
    next_truncs(L, H),
    X =.. [_|A],
    Y =.. [_|B],
    zip(A, B, J),
    append(J, H, R).
zip([], [], []).
zip([X|L], [Y|R], [X-Y|H]) :-
    zip(L, R, H).
Step 3:
mercio2(C, X, Y) :- X == Y, !, C = (=).
mercio2(C, X, Y) :- mercio2_iter(C, [X-Y]).
mercio2_iter(C, L) :-
    compare_truncs(D, L),
    D \== (=), !, C = D.
mercio2_iter(C, L) :-
    next_truncs(L, R),
    mercio2_iter(C, R).
The new mercio2/3 is a touch faster than the old mercio/3:
?- X = s(s(X, 1), 0), Y = s(X, 1),
Z = s(s(1, s(Z, 1)), 1),
time((between(1,10000,_), mercio(C, X, Z),
mercio(D, Z, Y), fail; true)).
% Zeit 184 ms, GC 0 ms, Lips 11141728, Uhr 06.08.2025 07:39
?- X = s(s(X, 1), 0), Y = s(X, 1),
Z = s(s(1, s(Z, 1)), 1),
time((between(1,10000,_), mercio2(C, X, Z),
mercio2(D, Z, Y), fail; true)).
% Zeit 145 ms, GC 0 ms, Lips 9379848, Uhr 06.08.2025 07:39
Mild Shock schrieb:
How could we speed it up any further? Well,
we can make at least the following observation.
Let A and B be two lists of truncation pairs.
Capital Greek letters denote sublists:
A = Γ, P, Δ, P, Π
B = Γ, P, Δ, Π
Then mercio2_iter(A) gives the same comparison
as the shorter mercio2_iter(B). The truncation
leaves obey a contraction law. So if we find methods
to get smaller pair lists, then I guess
we will also get a faster Mercio’s Algorithm
implementation. Possible methods to contract the
pair list are at least either naive HK based
on same_term/2 or non-naive HK based on same_term/2:
Hopcroft and Karp’s algorithm (HK)
https://inria.hal.science/hal-00639716v2
But there is a twist which stuns me: how to
make contraction between levels? Is such an
inter-level optimization even somehow defined?
It seems levels are still copied during expansion?
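One possible reading of inter-level contraction, hedged as a Python sketch on ('functor', [children]) graphs rather than on the Prolog same_term/2 machinery: remember each compared pair by identity and never expand it twice. Since a rational tree has only finitely many subterms, the frontier then dries up, which also turns the (=) case into termination instead of endless copying. Whether this preserves Mercio's order in every case is exactly the open question above; it at least agrees on simple examples.

```python
def compare_frontier(x, y):
    """Level-wise compare of possibly cyclic ('f', [kids]) terms.
    Returns '<', '=' or '>'. Pairs already seen (by identity) are
    contracted away instead of being expanded again."""
    seen = set()
    frontier = [(x, y)]
    while frontier:
        nxt = []
        for a, b in frontier:
            # truncation leaf key: functor plus arity, atoms get arity 0
            ka = (a[0], len(a[1])) if isinstance(a, tuple) else (str(a), 0)
            kb = (b[0], len(b[1])) if isinstance(b, tuple) else (str(b), 0)
            if ka < kb:
                return '<'
            if ka > kb:
                return '>'
            if isinstance(a, tuple) and a[1]:
                if (id(a), id(b)) in seen:
                    continue          # contraction: pair already covered
                seen.add((id(a), id(b)))
                nxt.extend(zip(a[1], b[1]))
        frontier = nxt
    return '='
```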
Mild Shock schrieb:
Mild Shock schrieb:
You have a habit of posting incomplete code:
bfs_sort(X, Y):- minimum_coa(X, Is, Mcoa),
My friend, where did you hide the predicate
minimum_coa/3? Also it's irrelevant to mercio/3
or any frontier method. The TRUNCATIONS are
the same either with or without a canonical
form as input. The only thing that might be a
little faster is the (==)/2 pretest in mercio/3,
and short cuts like here profit also:
fpl_compare(C, [I-J|F], FPL, FPL0, Mcoa):-
( I = J -> fpl_compare(C, F, FPL, FPL0, Mcoa)
But practically one could also use same_term/2
and gain some speed. But given that minimum_coa/3
can be quite expensive, I don’t count this as the
real mercio/3 sorting method. If sorting is O(N * log(N))
for a fast compare but minimum_coa/3 is O(N^2),
not much is gained; rather, it is counterproductive.
That's basically why bisimilarity was invented
by Hopcroft and Karp and reported in their 1971
paper, to avoid the trap of doing equality via
an expensive canonical form.
I see it as fuzzy testing of the community.
It is certainly beneficial if used correctly
You can use Fuzzy Testing also for
benchmarking. Not only to find faults.
For example when I benchmark mercio/3 via
fuzzy/1, I find it doesn’t fare extremely badly:
?- time((between(1,100,_), mercio, fail; true)).
% 4,386,933 inferences, 0.375 CPU in 0.376 seconds (100% CPU, 11698488 Lips)
true.
And I am not using some of the optimizations
that @kuniaki.mukai posted elsewhere and that
I posted 06.08.2025 on comp.lang.prolog. Fact is,
it is only ca. 20% slower than SWI-Prolog's compare/3:
?- time((between(1,100,_), swi, fail; true)).
% 3,786,880 inferences, 0.312 CPU in 0.325 seconds (96% CPU, 12118016 Lips)
true.
The test harness was:
swi :-
    between(1,1000,_),
    fuzzy(X), fuzzy(Y),
    swi(_, X, Y), fail; true.
mercio :-
    between(1,1000,_),
    fuzzy(X), fuzzy(Y),
    mercio(_, X, Y), fail; true.
The difficulty was to find a 100% Prolog compare/3
that corresponds to SWI-Prolog. But you find a
fresh implementation in 100% Prolog, using a
union-find structure, below:
% swi(-Atom, +Term, +Term)
swi(C, X, Y) :-
    swi(C, X, Y, [], _).
% swi(-Atom, +Term, +Term, +List, -List)
swi(C, X, Y, L, R) :- compound(X), compound(Y), !,
    sys_union_find(X, L, Z),
    sys_union_find(Y, L, T),
    swi_found(C, Z, T, L, R).
swi(C, X, Y, L, L) :- compare(C, X, Y).
% swi_found(-Atom, +Term, +Term, +List, -List)
swi_found(C, X, Y, L, L) :-
    same_term(X, Y), !, C = (=).
swi_found(C, X, Y, _, _) :-
    functor(X, F, N),
    functor(Y, G, M),
    compare(D, N/F, M/G),
    D \== (=), !, C = D.
swi_found(C, X, Y, L, R) :-
    X =.. [_|P],
    Y =.. [_|Q],
    swi_args(C, P, Q, [X-Y|L], R).
% swi_args(-Atom, +List, +List, +List, -List)
swi_args((=), [], [], L, L).
swi_args(C, [X|P], [Y|Q], L, R) :-
    swi(D, X, Y, L, H),
    (   D == (=) -> swi_args(C, P, Q, H, R)
    ;   C = D, R = H
    ).
% sys_union_find(+Term, +List, -Term)
sys_union_find(X, L, T) :-
    member(Y-Z, L),
    same_term(X, Y), !,
    sys_union_find(Z, L, T).
sys_union_find(X, _, X).
I see it as fuzzy testing of the community.
It is certainly beneficial if used correctly
Fuzzy Testing also goes by the name QuickCheck.
You can use Fuzzy Testing also for benchmarking.
Mathematically it uses the Law of Large Numbers:
Law of large numbers
https://en.wikipedia.org/wiki/Law_of_large_numbers
This means you don't even need a random generator
with a programmable seed, so that a comparison
involves the exact same random number sequences.
Just assume that your results have a standard deviation σ.
Then the variance of the overall mean decreases
in proportion to 1/n, where n is the number of
experiments, i.e. the noise gets washed out:
VAR(X̄) = σ^2 / n
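The σ²/n claim is easy to check numerically. A hedged Python sketch (the distribution and the sample sizes are my arbitrary choices):

```python
import random
import statistics

def mean_of_sample(n, rng):
    # one "experiment": the mean of n draws with sigma = 1
    return sum(rng.gauss(0.0, 1.0) for _ in range(n)) / n

rng = random.Random(42)
trials = 2000
var_1  = statistics.pvariance([mean_of_sample(1, rng)  for _ in range(trials)])
var_25 = statistics.pvariance([mean_of_sample(25, rng) for _ in range(trials)])
# var_1 comes out near sigma^2 = 1, var_25 near sigma^2 / 25 = 0.04
```

So averaging over n experiments washes the variation out by a factor n, programmable seed or not.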
A third use case of Fuzzy Testing is to determine
frequentist probabilities. Like when I determined
that 25% of a variant of @kuniaki.mukai's compare/3
triples are not transitive.
Mild Shock schrieb:
I like the vibe, clearing the mind of everything
existing has a touch of a mystic human being living
an eremitic solitary vocation on a far out mountain
top. Using the internet only to emit his wisdom,
but not to ingest the outer world, just as in
Thus Spoke Zarathustra (*). Another name for Fuzzy Testing,
if the outcome is not finding the needle in the haystack
but rather producing quantitative outcomes, is,
unless of course you were living in a submerged pineapple (**):
Monte Carlo methods, or Monte Carlo experiments, are a
broad class of computational algorithms that rely on
repeated random sampling to obtain numerical results:
https://en.wikipedia.org/wiki/Monte_Carlo_method
(*)
Also Sprach Zarathustra, Op. 30 - Strauss https://www.youtube.com/watch?v=dfe8tCcHnKY
(**)
Every Time Patrick Was Actually Smart https://www.youtube.com/watch?v=siBRUuDxU1E
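As a minimal illustration of the repeated-random-sampling idea (my own toy example, not from the thread): estimating pi by sampling the unit square.

```python
import random

def monte_carlo_pi(n, seed=0):
    """Fraction of random points in the unit square that fall inside
    the quarter circle, times 4, approximates pi."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / n
```

With n = 100000 samples the estimate lands within a few hundredths of pi, and by the law of large numbers the error shrinks like 1/sqrt(n).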
Mild Shock schrieb:
A storm of symbolic differentiation libraries
was posted. But what can these Prolog code
fossils do?
Does one of these libraries support Python's symbolic
Piecewise? For example one can define the rectified
linear unit (ReLU) with it:
/ x x >= 0
ReLU(x) := <
\ 0 otherwise
With the above one can already translate a
propositional logic program that uses negation
as failure into a neural network:
NOT \+ p 1 - x
AND p1, ..., pn ReLU(x1 + ... + xn - (n-1))
OR p1; ...; pn 1 - ReLU(-x1 - .. - xn + 1)
For clauses just use the Clark Completion; it makes
the defined predicate a new neuron, dependent on
other predicate neurons
through a network of intermediate neurons. Because
of the constant shift in AND and OR, the neurons
will have a bias b.
So rule-based zero-order logic is a subset
of neural networks.
Python symbolic Piecewise
https://how-to-data.org/how-to-write-a-piecewise-defined-function-in-python-using-sympy/
rectified linear unit (ReLU) https://en.wikipedia.org/wiki/Rectifier_(neural_networks)
Clark Completion
https://www.cs.utexas.edu/~vl/teaching/lbai/completion.pdf
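The three translations above can be checked against the Boolean truth tables. A hedged Python sketch (the function names are mine):

```python
def relu(x):
    return max(0.0, x)

def neg(x):                       # NOT: 1 - x
    return 1.0 - x

def conj(*xs):                    # AND: ReLU(x1 + ... + xn - (n - 1))
    return relu(sum(xs) - (len(xs) - 1))

def disj(*xs):                    # OR: 1 - ReLU(-x1 - ... - xn + 1)
    return 1.0 - relu(1.0 - sum(xs))
```

On 0/1 inputs these reproduce the Boolean connectives exactly, and the constant shifts (n - 1) and 1 are exactly the biases the post mentions.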
Hi,
I am speculating that an NPU could give 1000x more LIPS,
for certain combinatorial search problems. It all
boils down to implementing this thingy:
In June 2020, Stockfish introduced the efficiently
updatable neural network (NNUE) approach, based
on earlier work by computer shogi programmers https://en.wikipedia.org/wiki/Stockfish_%28chess%29
There are varying degrees of what gets updated in
a neural network. But the specs of an NPU tell
me very simply the following:
- An NPU can make 40 TFLOPS, all my AI Laptops
from 2025 can do that right now. The brands
are Intel Ultra, AMD Ryzen and Snapdragon X,
but I guess there might be more brands around
which can do that with a price tag of less
than 1000 USD.
- SWI-Prolog can make 30 MLIPS, Dogelog Player
runs similarly, some Prolog systems are faster.
Now that is 10^12 versus 10^6. If some of the
LIPS can be delegated to an NPU, and if we assume
for example less locality or more primitive
operations that require a layering, we could assume
that from the NPU's 10^12 a factor of 1000 goes
away. So we might still see 10^9 LIPS emerge.
Now make the calculation:
- Without NPU: MLIPS
- With NPU: GLIPS
- Ratio: 1000x faster
Have fun!
Bye
Mild Shock schrieb:
Hi,
I already posted how to do SAT and Clark Completion
with ReLU. This was a post from 15.03.2025, 16:13,
see also below. But can we do CLP as well? Here
is a take on the dif/2 constraint, or more precisely
a very primitive (#\=)/2 from CLP(FD), going towards
analog computing. It might work for domains that
fit into the quantization size of an NPU:
1) First note that we can model abs() via ReLU:
abs(x) = ReLU(x) + ReLU(- x)
2) Then note that for nonnegative integer values, we can model
chi(x>0), the characteristic function of the predicate x > 0:
chi(x>0) = 1 - ReLU(1 - x).
3) Now chi(x=\=y) is simply:
chi(x=\=y) = chi(abs(x - y) > 0)
Now insert the formula for chi(x>0) based on ReLU
and the formula for abs() based on ReLU. Et voilà, you
have a manually created neural network for the
(#\=)/2 condition of CLP(FD), constraint logic
programming over finite domains.
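Steps 1) to 3) compose into a tiny network that can be evaluated directly. A hedged Python sketch (one caveat worth noting: the chi formula is exact on nonnegative integers, which is the range abs() guarantees for its argument):

```python
def relu(x):
    return max(0, x)

def abs_relu(x):
    # 1) abs(x) = ReLU(x) + ReLU(-x)
    return relu(x) + relu(-x)

def chi_pos(x):
    # 2) chi(x > 0) = 1 - ReLU(1 - x), exact for integer x >= 0
    return 1 - relu(1 - x)

def neq(x, y):
    # 3) chi(x =\= y) = chi(abs(x - y) > 0)
    return chi_pos(abs_relu(x - y))
```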
Have Fun!
Bye
Mild Shock schrieb:
Mild Shock schrieb:
Hi,
What mindset is needed to program an NPU? Most likely
a mindset based on fork/join parallelism is nonsense.
What could be more fruitful is to view the AI accelerator
as a black box that runs a neural network, whereby
a neural network can be effectively viewed as a form
of hardware, although under the hood it is open weights
and matrix operations. So the mindset needs:
Zeus: A Language for Expressing Algorithms in Hardware
K. J. Lieberherr - 01 February 1985 https://dl.acm.org/doi/10.1109/MC.1985.1662799
What changed compared to back then?
- 80's Field Programmable Gate Array (FPGA)
- 20's AI Boom: NPUs, Unified Memory and Routing Fabric
Bye
Mild Shock schrieb:
Hi,
I already posted how to do SAT and Clark Completion
with ReLU. This was a post from 15.03.2025, 16:13,
see also below. But can we do CLP as well? Here
is a take on the dif/2 constraint, or more precisely
a very primitive (#\=)/2 from CLP(FD), going towards
analogical computing. It might work for domains that
fit into the quantization size of an NPU:
1) First note that we can model abs() via ReLU:
abs(x) = ReLU(x) + ReLU(- x)
2) Then note that for non-negative integer values,
we can model chi(x>0), the characteristic
function of the predicate x > 0:
chi(x>0) = 1 - ReLU(1 - x).
3) Now chi(x=\=y) is simply:
chi(x=\=y) = chi(abs(x - y) > 0)
Now insert the formula for chi(x>0) based on ReLU
and the formula for abs() based on ReLU. Et voilà, you
have a manually created neural network for the
(#\=)/2 constraint of CLP(FD), constraint logic
programming over finite domains.
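Steps 1) to 3) can be checked mechanically. A small sketch using only ReLU, valid on integer inputs (the characteristic-function step only needs non-negative arguments, which abs() guarantees); function names are ad hoc:

```python
def relu(x):
    return max(0, x)

# 1) abs() via ReLU
def abs_relu(x):
    return relu(x) + relu(-x)

# 2) chi(x>0) for non-negative integer x
def chi_pos(x):
    return 1 - relu(1 - x)

# 3) chi(x =\= y), a primitive (#\=)/2 style disequality
def chi_neq(x, y):
    return chi_pos(abs_relu(x - y))

assert [abs_relu(v) for v in (-3, 0, 5)] == [3, 0, 5]
assert [chi_pos(v) for v in (0, 1, 7)] == [0, 1, 1]
assert chi_neq(4, 4) == 0 and chi_neq(4, 7) == 1 and chi_neq(-2, 3) == 1
```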
Have Fun!
Bye
Mild Shock schrieb:
A storm of symbolic differentiation libraries
was posted. But what can these Prolog code
fossils do?

Does one of these libraries support Python symbolic
Piecewise? For example one can define the rectified
linear unit (ReLU) with it:

           / x   x >= 0
ReLU(x) := <
           \ 0   otherwise

With the above one can already translate a
propositional logic program, that uses negation
as failure, into a neural network:

NOT  \+ p          1 - x
AND  p1, ..., pn   ReLU(x1 + ... + xn - (n-1))
OR   p1; ...; pn   1 - ReLU(-x1 - ... - xn + 1)

For clauses, just use the Clark Completion:
it makes the defined predicate a new neuron,
dependent on the other predicate neurons
through a network of intermediate neurons.
Because of the constant shift in AND and OR,
the neurons will have a bias b.

So rule-based zero-order logic is a subset
of neural networks.

Python symbolic Piecewise
https://how-to-data.org/how-to-write-a-piecewise-defined-function-in-python-using-sympy/

rectified linear unit (ReLU)
https://en.wikipedia.org/wiki/Rectifier_(neural_networks)

Clark Completion
https://www.cs.utexas.edu/~vl/teaching/lbai/completion.pdf
--- Synchronet 3.21a-Linux NewsLink 1.2
Hi,
I am 100% serious about Giga Logical Inferences
per Second (GLIPS). Leaving behind the sequential
constraint solving world:
The Complexity of Constraint Satisfaction Revisited https://www.cs.ubc.ca/~mack/Publications/AIP93.pdf
Only, I have missed the deep learning bandwagon
and never programmed with PyTorch or Keras. So even
for the banal problem of coding some
ReLU networks and shipping them to a GPU or NPU,
or a hybrid, I don't have much experience. So
I am marveling at papers such as:
Learning Variable Ordering Heuristics
for Solving Constraint Satisfaction Problems
https://arxiv.org/abs/1912.10762
Given that the AI boom started after 2019,
the above paper is already old, and it has
curious antique terminology like Multilayer
Perceptron, which is not so common anymore.
It also does more than what I want to
demonstrate; it also does policy learning.
Bye