Inductive logic programming at 30
https://arxiv.org/abs/2102.10556
The paper contains not a single reference to autoencoders!
Still they show this example:
Fig. 1 ILP systems struggle with structured examples that
exhibit observational noise. All three examples clearly
spell the word "ILP", with some alterations: 3 noisy pixels,
shifted and elongated letters. If we were to learn a
program that simply draws "ILP" in the middle of the picture,
without noisy pixels and elongated letters, that would
be a correct program.
I guess ILP is 30 years behind the AI boom. An early autoencoder
turned into a transformer was already reported here (*):
SERIAL ORDER, Michael I. Jordan - May 1986 https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf
Well, ILP might have its merits; maybe we should not ask
for a marriage of LLMs and Prolog, but of autoencoders and ILP.
But it's tricky: I am still trying to decode the da Vinci code of
things like stacked tensors. Are they related to k-literal clauses?
The paper I referenced is found in this excellent video:
The Making of ChatGPT (35 Year History) https://www.youtube.com/watch?v=OFS90-FX6pg
It’s a self-supervised form of ILP.
No autoencoders anywhere at all.
Hi,
One idea I had was that autoencoders would
become kind of invisible, and work under the hood
to compress Prolog facts. Take these facts:
% standard _, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
data(seg7, [0,0,0,0,0,0,0], [0,0,0,0,0,0,0]).
data(seg7, [1,1,1,1,1,1,0], [1,1,1,1,1,1,0]).
data(seg7, [0,1,1,0,0,0,0], [0,1,1,0,0,0,0]).
data(seg7, [1,1,0,1,1,0,1], [1,1,0,1,1,0,1]).
data(seg7, [1,1,1,1,0,0,1], [1,1,1,1,0,0,1]).
data(seg7, [0,1,1,0,0,1,1], [0,1,1,0,0,1,1]).
data(seg7, [1,0,1,1,0,1,1], [1,0,1,1,0,1,1]).
data(seg7, [1,0,1,1,1,1,1], [1,0,1,1,1,1,1]).
data(seg7, [1,1,1,0,0,0,0], [1,1,1,0,0,0,0]).
data(seg7, [1,1,1,1,1,1,1], [1,1,1,1,1,1,1]).
data(seg7, [1,1,1,1,0,1,1], [1,1,1,1,0,1,1]).
% alternatives 9, 7, 6, 1
data(seg7, [1,1,1,0,0,1,1], [1,1,1,1,0,1,1]).
data(seg7, [1,1,1,0,0,1,0], [1,1,1,0,0,0,0]).
data(seg7, [0,0,1,1,1,1,1], [1,0,1,1,1,1,1]).
data(seg7, [0,0,0,0,1,1,0], [0,1,1,0,0,0,0]).
https://en.wikipedia.org/wiki/Seven-segment_display
Or more visually, 9, 7, 6 and 1 have trained variants:
:- show.
[seven-segment renderings of _ 0 1 2 3 4 5 6 7 8 9, followed by the (9) (7) (6) (1) variants]
The autoencoder would create a latent space, an
encoder, and a decoder. We could then basically query
?- data(seg7, X, Y) with X as input and Y as output;
the 9, 7, 6 and 1 variants were corrected:
:- random2.
[output: the variant inputs for 9, 7, 6 and 1 come back as the standard patterns]
The autoencoder might also tolerate errors in the
input that are not in the data, giving it some inferential
capability. And it could then choose an output that is again
not in the data, giving it some generative capability.
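As a rough, Prolog-only stand-in for that corrective behaviour (not an
actual autoencoder, just a sketch; seg7_guess/2 and hamming/3 are
invented names), one could pick the stored fact whose input is closest
to the query input by Hamming distance:

% Sketch only: emulate the error-tolerant query over data/3 by
% nearest-neighbour lookup on the stored inputs.
hamming([], [], 0).
hamming([A|As], [B|Bs], D) :-
    hamming(As, Bs, D0),
    (   A =:= B
    ->  D = D0
    ;   D is D0 + 1
    ).

seg7_guess(Input, Output) :-
    findall(D-Out,
            ( data(seg7, Stored, Out),
              hamming(Input, Stored, D) ),
            Pairs),
    keysort(Pairs, [_-Output|_]).   % closest stored input wins

For example the alternative 9 comes back as the standard 9:

?- seg7_guess([1,1,1,0,0,1,1], Y).
Y = [1, 1, 1, 1, 0, 1, 1].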
Bye
See also:
What is Latent Space in Deep Learning? https://www.geeksforgeeks.org/what-is-latent-space-in-deep-learning/
Hi,
A software engineering analysis of why Prolog fails
===================================================
You would also get more done if Prolog had some
well-designed plug-and-play machine learning libraries.
Currently most SWI-Prolog packages are just GitHub dumps:
(Python) Problem ---> import solver ---> Solution
(SWI) Problem ---> install pack ---> Problem
Python shows more success in the practitioner's domain,
since it has more libraries that have stood the test of
time in practical use. Whereas Prolog is still in its
infancy in many domains: you don't arrive at the same
level of convenience and breadth as Python if all that
is on offer are fire-and-forget dumps from PhD projects
where software engineering is secondary.
I don't know exactly why Prolog has so many problems
with software engineering. Python has object orientation,
but Logtalk didn't make the situation better. SWI-Prolog
has modules, but they are never used. For example, this
here is a big monolith:
This module performs learning over Logic Programs
https://github.com/friguzzi/liftcover/blob/main/prolog/liftcover.pl
It's designed more towards providing some command-line
control. But if you look into it, it has EM algorithms,
a gradient algorithm, and who knows what. These building
blocks are not exposed,
not made for reuse or for improvement by
switching in 3rd-party alternatives. Most likely a design
flaw inside the pack mechanism itself, since it assumes a
single main module?
So the pack mechanism works if a unit pack imports the
clp(BNR) pack, since it uses the single entry point of clp(BNR).
But it is never on a par with the richness of Python packages,
which have more of a hierarchical structure of many,
many modules in their packs.
I have retracted those posts that had Python-first
in them; I am not sure whether my analysis of some projects
was watertight. I only made the Python example to
illustrate the idea of a variation point. I do not think
programming language trench wars are a good idea, and one
should put software engineering first, as an abstract
computer science discipline. Not doing so
is only a distraction from the real issues at hand.
Variation points were defined quite vaguely
on purpose:
Ivar Jacobson defines a variation point as follows:
A variation point identifies one or more locations at
which the variation will occur.
Variation points can come in many shades; for
example, ProbLog-based approaches take the viewpoint
of a Prolog text with a lot of configuration flags
and predicate annotations. This is quite different
from the autoencoder or transformer component approach
I suggested here. In particular, a component-oriented
approach could be more flexible and dynamic, when it
allows programmatic configuration of components. The
drawback is that you cannot understand what the program
does by looking at a simply structured Prolog text.
Although I expect the situation is not that bad, and
one could do something similar to a table/1 directive,
i.e. some directive that says:
look, this predicate is an autoencoder or transformer:
One idea I had was that autoencoders would become
kind of invisible, and work under the hood to compress
Prolog facts. Take these facts:
% standard _, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
data(seg7, [0,0,0,0,0,0,0], [0,0,0,0,0,0,0]).
So to instruct the Prolog system to do what is sketched,
one would possibly need a new directive autoencoder/1:
:- autoencoder data/3.
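A minimal sketch of how such a directive could be wired up; the op/3
declaration only makes ":- autoencoder data/3." parse, and
train_autoencoder/2 is an assumed hook (not an existing SWI-Prolog API)
that a real implementation would have to supply:

:- op(1150, fx, autoencoder).
:- dynamic autoencoded/1.

% The directive merely registers the predicate indicator.
autoencoder(Name/Arity) :-
    assertz(autoencoded(Name/Arity)).

% After loading, hand the collected facts of every registered
% predicate to the assumed training hook train_autoencoder/2.
train_registered :-
    forall(autoencoded(Name/Arity),
           ( functor(Head, Name, Arity),
             findall(Head, clause(Head, true), Facts),
             train_autoencoder(Name/Arity, Facts) )).

On strict ISO systems the data/3 facts would additionally have to be
declared dynamic for clause/2 to see them.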
Concerning library(portray_text) which is in limbo:
Libraries are (often) written for either codes or chars, and thus the libraries make the choice.
But who writes these libraries? The SWI-Prolog
community. And who doesn't improve these libraries,
but instead floods the web with workaround tips?
The SWI-Prolog community.
Conclusion: the SWI-Prolog community has trapped itself
in an ancient status quo, creating an island.
It cannot improve its own tooling and is not willing
to support code from elsewhere that uses chars.
Same with the missed AI boom.
(*) Code from elsewhere is dangerous; people
might use other Prolog systems than only SWI-Prolog,
like for example Trealla Prolog and Scryer Prolog.
(**) Keeping the status quo is comfy. No need to
think in terms of program code. It's like biology
teachers versus pathology staff: biology teachers
do not see opened corpses every day.
Technically SWI-Prolog doesn't prefer codes.
Library `library(pure_input)` might prefer codes.
But this is again an issue of improving the
library by some non-existent SWI-Prolog community.
The ISO core standard is silent about a flag
back_quotes, but has a lot of API requirements
that support both codes and chars, for example it
requires atom_codes/2 and atom_chars/2.
Implementation-wise there can be an issue:
one might decide to implement atoms
of length 1 more efficiently, since with Unicode
there is now an explosion.
Not sure whether Trealla Prolog and Scryer
Prolog thought about this problem, that the
atom table gets quite large, whereas codes don't
eat the atom table. Maybe they forbid predicates
that have an atom of length 1 as head:
h(X) :-
    write('Hello '), write(X), write('!'), nl.
Does this still work?
Hi,
The most radical approach is Novacore from
Dogelog Player. It consists of the following
major incisions in the ISO core standard:
- We do not forbid chars, like for example
lists of the form [a,b,c]; we also provide
the char_code/2 predicate bidirectionally (see the
query example after this list).
- We do not provide any _chars built-in
predicates, and there is also nothing _strings. The
Prolog system is clever enough to not put
every atom it sees in an atom table. There
is only a predicate table.
- Some host languages have garbage collection that
deduplicates strings. For example, some Java
versions have an option to do that. But we
do not make any effort to deduplicate atoms,
which are simply plain strings.
- Some languages have constant pools. For example,
the Java bytecode format includes a constant
pool in every class header. We do not do that
during transpilation, but we could of course.
But that begs the question: why only deduplicate
strings and not other constant expressions as well?
- We are totally happy that we have only codes;
there are chances that the host languages use
tagged pointers to represent them. So they
are represented similarly to the tagged pointers
in SWI-Prolog, which work for small integers.
- But the tagged pointer argument is moot,
since atoms of length 1 can also be
represented as tagged pointers, and some
programming languages do that. Dogelog Player
would use such tagged pointers without
polluting the atom table.
- What else?
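For illustration, the bidirectional use of char_code/2 mentioned in
the first point is just the standard ISO behaviour:

?- char_code(a, X).
X = 97.

?- char_code(X, 97).
X = a.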
Bye
Hi,
Even the SWI-Prolog master is not wide awake,
doing day-sleeping.
I don’t know whether they realised that you
cannot meaningfully support both in the same
system and surely not in the same application.
Maybe you didn’t notice this nifty detail.
That's all you need:
The ISO core standard is silent about a flag back_quotes.
It's more a naming problem. Have two libraries,
library(portray_codes) and library(portray_chars),
or one library(portray_text).
Just add one more rule:
user:portray(Chars) :-
    portray_text_option(enabled, true),
    '$skip_list'(Length, Chars, _Tail),
    portray_text_option(min_length, MinLen),
    Length >= MinLen,
    mostly_chars(Chars, 0.9),
    portray_text_option(ellipsis, IfLonger),
    quote2(C),
    put_code(C),
    maplist(char_code, Chars, Codes),
    (   Length > IfLonger
    ->  First is IfLonger - 5,
        Skip is Length - 5,
        skip_first(Skip, Codes, Rest),
        put_n_codes(First, Codes, C),
        format('...', [])
    ;   Rest = Codes
    ),
    put_var_codes(Rest, C),
    put_code(C).
The use of maplist/3 is elegant, and works since we do
not print open lists, right?
Full source code here:
swi2.pl.log
https://github.com/SWI-Prolog/swipl-devel/issues/1373#issuecomment-2997214639
Since it has a dual-use hook, both work fine simultaneously:
?- set_portray_text(enabled, false).
true.
?- X = [a,b,c].
X = [a, b, c].
?- X = [0'a,0'b,0'c].
X = [97, 98, 99].
And then:
?- set_prolog_flag(double_quotes, codes).
true.
?- set_prolog_flag(back_quotes, chars).
true.
?- set_portray_text(enabled, true).
true.
?- X = [a,b,c].
X = `abc`.
?- X = [0'a,0'b,0'c].
X = "abc".
I wouldn’t call it an “ancient status”.
Using again my super-powered library(portray_text):
?- set_prolog_flag(double_quotes, codes).
true.
?- set_prolog_flag(back_quotes, chars).
true.
?- set_portray_text(enabled, true).
true.
?- maplist(char_code, `abc`, X).
X = "abc".
?- maplist(char_code, X, "abc").
X = `abc`.
So if you have a Prolog system that has chars, you
could bootstrap as follows:
atom_codes(X, Y) :-
    var(X), !,
    maplist(char_code, Z, Y),   % codes Y -> chars Z
    atom_chars(X, Z).
atom_codes(X, Y) :-
    atom_chars(X, Z),
    maplist(char_code, Z, Y).   % chars Z -> codes Y
Or if you have a Prolog system that has codes, you
could bootstrap as follows:
atom_chars(X, Y) :-
    var(X), !,
    maplist(char_code, Y, Z),   % chars Y -> codes Z
    atom_codes(X, Z).
atom_chars(X, Y) :-
    atom_codes(X, Z),
    maplist(char_code, Y, Z).   % codes Z -> chars Y
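With library(portray_text) disabled, so the raw lists are visible,
either bootstrap should then give the usual results:

?- atom_codes(abc, X).
X = [97, 98, 99].

?- atom_chars(abc, X).
X = [a, b, c].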
What is holy is only for Dogelog Player!
Do not give dogs what is holy, and do not
throw your pearls before pigs, lest they
trample them underfoot and turn to attack you.
-- Matthew 7:6
https://www.biblegateway.com/passage/?search=Matthew%207%3A6
I have deleted my posts and the swi2.pl.log proposal:
between(C, 0'0, 0'9), Digit is C-0'0.
Just rewrite it to:
0'0 =< [Digit], [Digit] =< 0'9, [Digit] is C-0'0.
The [X] in an evaluation is dual use again:
?- X is [a].
X = 97.
?- X is [0'a].
X = 97.
Oops, should read:
0'0 =< [C], [C] =< 0'9, Digit is [C]-0'0.
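Packaged as a predicate, the corrected guard might look as follows;
digit_value/2 is just an invented name, and it assumes a system where
the one-element list [C] is evaluable as discussed above:

% Dual use: C may be bound to a code such as 0'7, or, on systems that
% also evaluate [X] for chars, to a char such as '7'.
digit_value(C, Digit) :-
    0'0 =< [C],
    [C] =< 0'9,
    Digit is [C] - 0'0.

?- digit_value(0'7, D).
D = 7.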
Hi,
What WG17 could do to prevent segregation.
It could specify:
- The back_quotes flag. Not really something
new, most Prolog systems have it already.
- The [X] evaluable function. Not really something
new, most Prolog systems have it already. For
example, DEC-10 Prolog (10 November 1982) had it
already. The new thing for some Prolog systems
would be its non-strict evaluation strategy
and the dual use:
[X] (a list of just one element) evaluates to X if X is an
integer. Since a quoted string is just a list of integers,
this allows a quoted character to be used in place of its
ASCII code; e.g. "A" behaves within arithmetic expressions
as the integer 65.
https://userweb.fct.unl.pt/~lmp/publications/online-papers/DECsystem-10%20PROLOG%20USER%27S%20MANUAL.pdf
Instead, what is WG17 doing?
- Introducing a notation for open strings:
[a, b, c|X] = "abc" || X
With a new separator ||, giving possibly much more
headache to Prolog system implementors than a flag
and an evaluable function.
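For comparison, the proposed "abc" || X is just the partial list
[a, b, c|X], which can already be written with append/3 today, e.g.
under double_quotes = chars:

?- set_prolog_flag(double_quotes, chars).
true.

?- append("abc", Tail, S).
S = [a, b, c|Tail].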
Bye
Mild Shock schrieb:
Oops should read:
0'0 =< [C], [C] =< 0'9, Digit is [C]-0'0`.
Mild Shock schrieb:
What is holy is only for Dogelog Player!
Do not give dogs what is holy, and do not
throw your pearls before pigs, lest they
trample them underfoot and turn to attack you.
-- Matthew 7:6
https://www.biblegateway.com/passage/?search=Matthew%207%3A6
I have deleted my posts and the swi2.pl.log proposal:
between(C, 0'0, 0'9), Digit is C-0'0.`
Just rewrite it to:
0'0 =< [Digit], [Digit] =< 0'9, [Digit] is C-0'0`.
The [X] in an evaluation is dual use again:
?- X is [a].
X = 97.
?- X is [0'a].
X = 97.
Mild Shock schrieb:
Inductive logic programming at 30
https://arxiv.org/abs/2102.10556
The paper contains not a single reference to autoencoders!
Still they show this example:
Fig. 1 ILP systems struggle with structured examples that
exhibit observational noise. All three examples clearly
spell the word "ILP", with some alterations: 3 noisy pixels,
shifted and elongated letters. If we would be to learn a
program that simply draws "ILP" in the middle of the picture,
without noisy pixels and elongated letters, that would
be a correct program.
I guess ILP is 30 years behind the AI boom. An early autoencoder
turned into transformer was already reported here (*):
SERIAL ORDER, Michael I. Jordan - May 1986
https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf
Well ILP might have its merits, maybe we should not ask
for a marriage of LLM and Prolog, but Autoencoders and ILP.
But its tricky, I am still trying to decode the da Vinci code of
things like stacked tensors, are they related to k-literal clauses?
The paper I referenced is found in this excellent video:
The Making of ChatGPT (35 Year History)
https://www.youtube.com/watch?v=OFS90-FX6pg
I don’t know, I would try to get out of the
cornering that Scryer Prolog and Trealla Prolog
try to do with cheap tricks like this here:
/* Scryer Prolog 0.9.4-411 */
?- "٢١٠" = [H|T]. /* [0x0662, 0x0661, 0x0660] */
H = '٢', T = "١٠".
which library(portray_text) will probably
never attempt for codes, but might easily do
for chars. It has currently only implemented:
/* SWI-Prolog 9.3.24 */
text_code(Code) :-
    is_text_code(Code), !.
text_code(9).   % horizontal tab, \t
text_code(10).  % newline \n
text_code(13).  % carriage return \r
text_code(C) :- % space to tilde (127 is DEL)
    between(32, 126, C).
And a greater range might really start getting in the
way when working with lists that carry numbers.
My guess is that SWI-Prolog could position itself as dual use.
Mild Shock schrieb:
Hi,
What WG17 could do to prevent segregation.
It could specify:
- The back_quotes flag. Not really something
new , most Prolog systems have it already.
- The [X] evaluable function. Not really something
new , most Prolog systems have it already. For
example DEC-10 Prolog (10 November 1982) had it
already, The new thing for some Prolog systems
would be its non-strict evaluation strategy
and the dual use:
[X] (a list of just one element) evaluates to X if X is an
integer. Since a quoted string is just a list of integers,
this allows a quoted character to be used in place of its
ASCII code; e.g. "A" behaves within arithmetic expressions
as the integer 65.
https://userweb.fct.unl.pt/~lmp/publications/online-papers/DECsystem-10%20PROLOG%20USER%27S%20MANUAL.pdf
Instead what is WG17 doing?
- Introducing a notation for open strings:
[a, b, c|X] = "abc" || X
With a new separator ||, giving possibly much more
headache to Prolog system implementors than a flag
and an evaluable function.
Bye
Mild Shock schrieb:
Oops should read:
0'0 =< [C], [C] =< 0'9, Digit is [C]-0'0`.
Mild Shock schrieb:
What is holy is only for Dogelog Player!
Do not give dogs what is holy, and do not
throw your pearls before pigs, lest they
trample them underfoot and turn to attack you.
-- Matthew 7:6
https://www.biblegateway.com/passage/?search=Matthew%207%3A6
I have deleted my posts and the swi2.pl.log proposal:
between(C, 0'0, 0'9), Digit is C-0'0.`
Just rewrite it to:
0'0 =< [Digit], [Digit] =< 0'9, [Digit] is C-0'0`.
The [X] in an evaluation is dual use again:
?- X is [a].
X = 97.
?- X is [0'a].
X = 97.
Mild Shock schrieb:
Inductive logic programming at 30
https://arxiv.org/abs/2102.10556
The paper contains not a single reference to autoencoders!
Still they show this example:
Fig. 1 ILP systems struggle with structured examples that
exhibit observational noise. All three examples clearly
spell the word "ILP", with some alterations: 3 noisy pixels,
shifted and elongated letters. If we would be to learn a
program that simply draws "ILP" in the middle of the picture,
without noisy pixels and elongated letters, that would
be a correct program.
I guess ILP is 30 years behind the AI boom. An early autoencoder
turned into transformer was already reported here (*):
SERIAL ORDER, Michael I. Jordan - May 1986
https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf
Well ILP might have its merits, maybe we should not ask
for a marriage of LLM and Prolog, but Autoencoders and ILP.
But its tricky, I am still trying to decode the da Vinci code of
things like stacked tensors, are they related to k-literal clauses?
The paper I referenced is found in this excellent video:
The Making of ChatGPT (35 Year History)
https://www.youtube.com/watch?v=OFS90-FX6pg
Replacement Character
https://www.compart.com/de/unicode/U+FFFD
Hi,
The most radical approach is Novacore from
Dogelog Player. It consists of the following
major incisions in the ISO core standard:
- We do not forbid chars, like for example
using lists of the form [a,b,c], we also
provide char_code/2 predicate bidirectionally.
- We do not provide any _chars built-in
predicates, and there is nothing _strings either. The
Prolog system is clever enough to not put
every atom it sees in an atom table. There
is only a predicate table.
- Some host languages have garbage collection that
deduplicates strings. For example some Java
versions have an option to do that. But we
do not make any effort to deduplicate atoms,
which are simply plain strings.
- Some languages have constant pools. For example
the Java byte code format includes a constant
pool in every class header. We do not do that
during transpilation, but we could of course.
But it raises the question, why only deduplicate
strings and not other constant expressions as well?
- We are totally happy that we have only codes,
there are chances that the host languages use
tagged pointers to represent them. So they
are represented similarly to the tagged pointers
in SWI-Prolog, which work for small integers.
- But the tagged pointer argument is moot,
since atom length=1 entities can also be
represented as tagged pointers, and some
programming languages do that. Dogelog Player
would use such tagged pointers without
polluting the atom table.
- What else?
Bye
Mild Shock schrieb:
Technically SWI-Prolog doesn't prefer codes.
Library `library(pure_input)` might prefer codes.
But this is again an issue of improving the
library by some non existent SWI-Prolog community.
The ISO core standard is silent about a flag
back_quotes, but has a lot of API requirements
that support both codes and chars, for example it
requires atom_codes/2 and atom_chars/2.
Implementation wise there can be an issue,
like one might decide to implement the atoms
of length=1 more efficiently, since with Unicode
there is now an explosion.
Not sure whether Trealla Prolog and Scryer
Prolog thought about this problem, that the
atom table gets quite large. Whereas codes don't
eat the atom table. Maybe they forbit predicates
that have an atom of length=1 head:
h(X) :-
write('Hello '), write(X), write('!'), nl.
Does this still work?
Mild Shock schrieb:
Concerning library(portray_text) which is in limbo:
Libraries are (often) written for either, and thus the libraries make the choice.
But who writes these libraries? The SWI Prolog
community. And who doesn’t improve these libraries,
instead floods the web with workaround tips?
The SWI Prolog community.
Conclusion the SWI-Prolog community has itself
trapped in an ancient status quo, creating an island.
Cannot improve its own tooling, is not willing
to support code from else where that uses chars.
Same with the missed AI Boom.
(*) Code from elsewhere is dangerous, People
might use other Prolog systems than only SWI-Prolog,
like for exampe Trealla Prolog and Scryer Prolog.
(**) Keeping the status quo is comfy. No need to
think in terms of programm code. Its like biology
teachers versus pathology staff, biology teachers
do not everyday see opened corpses.
Mild Shock schrieb:
Inductive logic programming at 30
https://arxiv.org/abs/2102.10556
The paper contains not a single reference to autoencoders!
Still they show this example:
Fig. 1 ILP systems struggle with structured examples that
exhibit observational noise. All three examples clearly
spell the word "ILP", with some alterations: 3 noisy pixels,
shifted and elongated letters. If we would be to learn a
program that simply draws "ILP" in the middle of the picture,
without noisy pixels and elongated letters, that would
be a correct program.
I guess ILP is 30 years behind the AI boom. An early autoencoder
turned into transformer was already reported here (*):
SERIAL ORDER, Michael I. Jordan - May 1986
https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf
Well ILP might have its merits, maybe we should not ask
for a marriage of LLM and Prolog, but Autoencoders and ILP.
But its tricky, I am still trying to decode the da Vinci code of
things like stacked tensors, are they related to k-literal clauses?
The paper I referenced is found in this excellent video:
The Making of ChatGPT (35 Year History)
https://www.youtube.com/watch?v=OFS90-FX6pg
It seems that it reads in as ðŸ‘\u008D but writes out as ðŸ‘\\x8D\\.
The official replacement character is 0xFFFD:
Replacement Character
https://www.compart.com/de/unicode/U+FFFD
Well that is what people did in the past, replace
non-printables by the ever same code, instead of
using ‘\uXXXX’ notation. I have studied
library(portray_text) extensively. And my conclusion
is still that it is extremely ancient.
For example I find:
mostly_codes([H|T], Yes, No, MinFactor) :-
    integer(H),
    H >= 0,
    H =< 0x1ffff,
    [...]
    ; catch(code_type(H, print),error(_,_),fail),
    [...]
https://github.com/SWI-Prolog/swipl-devel/blob/eddbde61be09b95eb3ca2e160e73c2340744a3d2/library/portray_text.pl#L235
Why even 0x1ffff and not 0x10ffff? This is a bug;
do you want to starve is_text_code/1? The official
Unicode range is 0x0 to 0x10ffff. Ulrich Neumerkel
often confused the range in some of his code snippets,
maybe based on a limited interpretation of Unicode.
But if one switched to chars, one could easily
support any Unicode code point, even without
knowing the range. Just do this:
mostly_chars([H|T], Yes, No, MinFactor) :-
    atom(H),
    atom_length(H, 1),
    [...]
    ; /* printable check not needed */
    [...]
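Spelled out completely, the chars variant could look like this (a
hedged sketch with made-up names, not the actual library code; it
just counts the fraction of one-char atoms and needs no code point
range check at all):
mostly_chars_simple(List, MinFactor) :-
    length(List, Len),
    Len > 0,
    count_one_char_atoms(List, 0, Count),
    Count >= MinFactor * Len.
count_one_char_atoms([], N, N).
count_one_char_atoms([H|T], N0, N) :-
    (   atom(H), atom_length(H, 1)
    ->  N1 is N0 + 1
    ;   N1 = N0
    ),
    count_one_char_atoms(T, N1, N).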
Mild Shock schrieb:
Hi,
The most radical approach is Novacore from
Dogelog Player. It consists of the following
major incisions in the ISO core standard:
- We do not forbid chars, like for example
using lists of the form [a,b,c], we also
provide char_code/2 predicate bidirectionally.
- We do not provide and _chars built-in
predicates also there is nothing _strings. The
Prolog system is clever enough to not put
every atom it sees in an atom table. There
is only a predicate table.
- Some host languages have garbage collection that
deduplicates Strings. For example some Java
versions have an options to do that. But we
do not have any efforts to deduplicate atoms,
which are simply plain strings.
- Some languages have constant pools. For example
the Java byte code format includes a constant
pool in every class header. We do not do that
during transpilation , but we could of course.
But it begs the question, why only deduplicate
strings and not other constant expressions as well?
- We are totally happy that we have only codes,
there are chances that the host languages use
tagged pointers to represent them. So they
are represented similar to the tagged pointers
in SWI-Prolog which works for small integers.
- But the tagged pointer argument is moot,
since atom length=1 entities can be also
represented as tagged pointers, and some
programming languages do that. Dogelog Player
would use such tagged pointers without
poluting the atom table.
- What else?
Bye
Mild Shock schrieb:
Technically SWI-Prolog doesn't prefer codes.
Library `library(pure_input)` might prefer codes.
But this is again an issue of improving the
library by some non existent SWI-Prolog community.
The ISO core standard is silent about a flag
back_quotes, but has a lot of API requirements
that support both codes and chars, for example it
requires atom_codes/2 and atom_chars/2.
Implementation wise there can be an issue,
like one might decide to implement the atoms
of length=1 more efficiently, since with Unicode
there is now an explosion.
Not sure whether Trealla Prolog and Scryer
Prolog thought about this problem, that the
atom table gets quite large. Whereas codes don't
eat the atom table. Maybe they forbit predicates
that have an atom of length=1 head:
h(X) :-
write('Hello '), write(X), write('!'), nl.
Does this still work?
Mild Shock schrieb:
Concerning library(portray_text) which is in limbo:
Libraries are (often) written for either, and thus the libraries make the choice.
But who writes these libraries? The SWI Prolog
community. And who doesn’t improve these libraries,
instead floods the web with workaround tips?
The SWI Prolog community.
Conclusion the SWI-Prolog community has itself
trapped in an ancient status quo, creating an island.
Cannot improve its own tooling, is not willing
to support code from else where that uses chars.
Same with the missed AI Boom.
(*) Code from elsewhere is dangerous, People
might use other Prolog systems than only SWI-Prolog,
like for exampe Trealla Prolog and Scryer Prolog.
(**) Keeping the status quo is comfy. No need to
think in terms of programm code. Its like biology
teachers versus pathology staff, biology teachers
do not everyday see opened corpses.
Mild Shock schrieb:
Inductive logic programming at 30
https://arxiv.org/abs/2102.10556
The paper contains not a single reference to autoencoders!
Still they show this example:
Fig. 1 ILP systems struggle with structured examples that
exhibit observational noise. All three examples clearly
spell the word "ILP", with some alterations: 3 noisy pixels,
shifted and elongated letters. If we would be to learn a
program that simply draws "ILP" in the middle of the picture,
without noisy pixels and elongated letters, that would
be a correct program.
I guess ILP is 30 years behind the AI boom. An early autoencoder
turned into transformer was already reported here (*):
SERIAL ORDER, Michael I. Jordan - May 1986
https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf
Well ILP might have its merits, maybe we should not ask
for a marriage of LLM and Prolog, but Autoencoders and ILP.
But its tricky, I am still trying to decode the da Vinci code of
things like stacked tensors, are they related to k-literal clauses?
The paper I referenced is found in this excellent video:
The Making of ChatGPT (35 Year History)
https://www.youtube.com/watch?v=OFS90-FX6pg
Somebody wrote:
It seems that it reads in as ðŸ‘\u008D but writes out as ðŸ‘\\x8D\\.
Can one then do ‘\uXXXX’ in 100% Prolog as
well? Even including surrogates? Of course,
here is a DCG generator snippet from Dogelog
Player which is 100% Prolog. This is from the
Java backend, because I didn’t introduce ‘\uXXXX’
in my Prolog system, since it is not part of the
ISO core standard. The ISO core standard would want '\xXX':
crossj_escape_code2(X) --> {X =< 0xFFFF}, !,
   {atom_integer(J, 16, X), atom_codes(J, H),
    length(H, N), M is 4-N}, [0'\\, 0'u],
   cross_escape_zeros(M),
   cross_escape_codes2(H).
crossj_escape_code2(X) --> {crossj_high_surrogate(X, Y),
   crossj_low_surrogate(X, Z)},
   crossj_escape_code2(Y),
   crossj_escape_code2(Z).
crossj_high_surrogate(X, Y) :- Y is (X >> 10) + 0xD7C0.
crossj_low_surrogate(X, Y) :- Y is (X /\ 0x3FF) + 0xDC00.
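For example, assuming the helper predicates used above
(atom_integer/3, cross_escape_zeros//1 and cross_escape_codes2//1)
are loaded, a code point above 0xFFFF should come out as its
surrogate pair:
?- phrase(crossj_escape_code2(0x1F680), Cs), atom_codes(A, Cs).
The generated escape should spell \ud83d\ude80, since U+1F680 maps
to the surrogate pair D83D DE80 (the exact hex digit case depends
on atom_integer/3).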
Mild Shock schrieb:
The official replacement character is 0xFFFD:
Replacement Character
https://www.compart.com/de/unicode/U+FFFD
Well that is what people did in the past, replace
non-printables by the ever same code, instead of
using ‘\uXXXX’ notation. I have studied the
library(portray_text) extensively. And my conclusion
is still that it extremly ancient.
For example I find:
mostly_codes([H|T], Yes, No, MinFactor) :-
integer(H),
H >= 0,
H =< 0x1ffff,
[...]
; catch(code_type(H, print),error(_,_),fail),
[...]
https://github.com/SWI-Prolog/swipl-devel/blob/eddbde61be09b95eb3ca2e160e73c2340744a3d2/library/portray_text.pl#L235
Why even 0x1ffff and not 0x10ffff, this is a bug,
do you want to starve is_text_code/1 ? The official
Unicode range is 0x0 to 0x10ffff. Ulrich Neumerkel
often confused the range in some of his code snippets,
maybe based on a limited interpretation of Unicode.
But if one would switch to chars one could easily
support any Unicode code point even without
knowing the range. Just do this:
mostly_chars([H|T], Yes, No, MinFactor) :-
atom(H),
atom_length(H, 1),
[...]
; /* printable check not needed */
[...]
Mild Shock schrieb:
Hi,
The most radical approach is Novacore from
Dogelog Player. It consists of the following
major incisions in the ISO core standard:
- We do not forbid chars, like for example
using lists of the form [a,b,c], we also
provide char_code/2 predicate bidirectionally.
- We do not provide and _chars built-in
predicates also there is nothing _strings. The
Prolog system is clever enough to not put
every atom it sees in an atom table. There
is only a predicate table.
- Some host languages have garbage collection that
deduplicates Strings. For example some Java
versions have an options to do that. But we
do not have any efforts to deduplicate atoms,
which are simply plain strings.
- Some languages have constant pools. For example
the Java byte code format includes a constant
pool in every class header. We do not do that
during transpilation , but we could of course.
But it begs the question, why only deduplicate
strings and not other constant expressions as well?
- We are totally happy that we have only codes,
there are chances that the host languages use
tagged pointers to represent them. So they
are represented similar to the tagged pointers
in SWI-Prolog which works for small integers.
- But the tagged pointer argument is moot,
since atom length=1 entities can be also
represented as tagged pointers, and some
programming languages do that. Dogelog Player
would use such tagged pointers without
poluting the atom table.
- What else?
Bye
Mild Shock schrieb:
Technically SWI-Prolog doesn't prefer codes.
Library `library(pure_input)` might prefer codes.
But this is again an issue of improving the
library by some non existent SWI-Prolog community.
The ISO core standard is silent about a flag
back_quotes, but has a lot of API requirements
that support both codes and chars, for example it
requires atom_codes/2 and atom_chars/2.
Implementation wise there can be an issue,
like one might decide to implement the atoms
of length=1 more efficiently, since with Unicode
there is now an explosion.
Not sure whether Trealla Prolog and Scryer
Prolog thought about this problem, that the
atom table gets quite large. Whereas codes don't
eat the atom table. Maybe they forbit predicates
that have an atom of length=1 head:
h(X) :-
write('Hello '), write(X), write('!'), nl.
Does this still work?
Mild Shock schrieb:
Concerning library(portray_text) which is in limbo:
Libraries are (often) written for either, and thus the libraries make the choice.
But who writes these libraries? The SWI Prolog
community. And who doesn’t improve these libraries,
instead floods the web with workaround tips?
The SWI Prolog community.
Conclusion the SWI-Prolog community has itself
trapped in an ancient status quo, creating an island.
Cannot improve its own tooling, is not willing
to support code from else where that uses chars.
Same with the missed AI Boom.
(*) Code from elsewhere is dangerous, People
might use other Prolog systems than only SWI-Prolog,
like for exampe Trealla Prolog and Scryer Prolog.
(**) Keeping the status quo is comfy. No need to
think in terms of programm code. Its like biology
teachers versus pathology staff, biology teachers
do not everyday see opened corpses.
Mild Shock schrieb:
Inductive logic programming at 30
https://arxiv.org/abs/2102.10556
The paper contains not a single reference to autoencoders!
Still they show this example:
Fig. 1 ILP systems struggle with structured examples that
exhibit observational noise. All three examples clearly
spell the word "ILP", with some alterations: 3 noisy pixels,
shifted and elongated letters. If we would be to learn a
program that simply draws "ILP" in the middle of the picture,
without noisy pixels and elongated letters, that would
be a correct program.
I guess ILP is 30 years behind the AI boom. An early autoencoder
turned into transformer was already reported here (*):
SERIAL ORDER, Michael I. Jordan - May 1986
https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf
Well ILP might have its merits, maybe we should not ask
for a marriage of LLM and Prolog, but Autoencoders and ILP.
But its tricky, I am still trying to decode the da Vinci code of
things like stacked tensors, are they related to k-literal clauses?
The paper I referenced is found in this excellent video:
The Making of ChatGPT (35 Year History)
https://www.youtube.com/watch?v=OFS90-FX6pg
Those that use a large part pay a pretty
high price in terms of memory and currently
also time for code points > 0xffff
:rocket: Call for Papers: Integrating Logical
Reasoning & Large Language Models (LLMs) :brain:
https://swi-prolog.discourse.group/t/9065
Inductive logic programming at 30
https://arxiv.org/abs/2102.10556
The paper contains not a single reference to autoencoders!
Still they show this example:
Fig. 1 ILP systems struggle with structured examples that
exhibit observational noise. All three examples clearly
spell the word "ILP", with some alterations: 3 noisy pixels,
shifted and elongated letters. If we would be to learn a
program that simply draws "ILP" in the middle of the picture,
without noisy pixels and elongated letters, that would
be a correct program.
I guess ILP is 30 years behind the AI boom. An early autoencoder
turned into transformer was already reported here (*):
SERIAL ORDER, Michael I. Jordan - May 1986 https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf
Well ILP might have its merits, maybe we should not ask
for a marriage of LLM and Prolog, but Autoencoders and ILP.
But its tricky, I am still trying to decode the da Vinci code of
things like stacked tensors, are they related to k-literal clauses?
The paper I referenced is found in this excellent video:
The Making of ChatGPT (35 Year History) https://www.youtube.com/watch?v=OFS90-FX6pg
Those that use a large part pay a pretty
high price in terms of memory and currently
also time for code points > 0xffff
Emojis are typically above 0xffff. And from this
announcement it seems emojis are a big part of
keeping up with the AI Boom:
:rocket: Call for Papers: Integrating Logical
Reasoning & Large Language Models (LLMs) :brain:
https://swi-prolog.discourse.group/t/9065
But it would cost you nothing to support this here in
library(portray_text):
/* SWI-Prolog 9.3.24 */
?- X = [a,b,c].
X = `abc`
It is extremely trivial to implement, it's not really
rocket science. It doesn't need much brains and
it also works for emojis:
/* Scryer Prolog 0.9.4-411 */
?- X = [a,b,c].
X = "abc".
?- X = ['🚀', a, '🧠', b, c].
X = "🚀a🧠bc".
In Scryer Prolog it shows double quotes and not
back quotes, because of the different default settings
of the Prolog flags double_quotes and back_quotes.
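For illustration, here is about all it would take, as a hedged
sketch using the classic portray/1 hook (honoured for answer
writing by SWI-Prolog and some other systems; this is not the
actual library(portray_text) code):
:- multifile user:portray/1.
user:portray(List) :-
    is_list(List),
    List \== [],
    maplist(single_char_atom, List),
    atom_chars(Atom, List),
    format('`~w`', [Atom]).
single_char_atom(A) :-
    atom(A),
    atom_length(A, 1).
The hook only fires for non-empty lists of one-char atoms, so
lists of codes or of other terms are still printed the normal way.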
Mild Shock schrieb:
Inductive logic programming at 30
https://arxiv.org/abs/2102.10556
The paper contains not a single reference to autoencoders!
Still they show this example:
Fig. 1 ILP systems struggle with structured examples that
exhibit observational noise. All three examples clearly
spell the word "ILP", with some alterations: 3 noisy pixels,
shifted and elongated letters. If we would be to learn a
program that simply draws "ILP" in the middle of the picture,
without noisy pixels and elongated letters, that would
be a correct program.
I guess ILP is 30 years behind the AI boom. An early autoencoder
turned into transformer was already reported here (*):
SERIAL ORDER, Michael I. Jordan - May 1986
https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf
Well ILP might have its merits, maybe we should not ask
for a marriage of LLM and Prolog, but Autoencoders and ILP.
But its tricky, I am still trying to decode the da Vinci code of
things like stacked tensors, are they related to k-literal clauses?
The paper I referenced is found in this excellent video:
The Making of ChatGPT (35 Year History)
https://www.youtube.com/watch?v=OFS90-FX6pg
Inductive logic programming at 30
https://arxiv.org/abs/2102.10556
The paper contains not a single reference to autoencoders!
Still they show this example:
Fig. 1 ILP systems struggle with structured examples that
exhibit observational noise. All three examples clearly
spell the word "ILP", with some alterations: 3 noisy pixels,
shifted and elongated letters. If we would be to learn a
program that simply draws "ILP" in the middle of the picture,
without noisy pixels and elongated letters, that would
be a correct program.
I guess ILP is 30 years behind the AI boom. An early autoencoder
turned into transformer was already reported here (*):
SERIAL ORDER, Michael I. Jordan - May 1986 https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf
Well ILP might have its merits, maybe we should not ask
for a marriage of LLM and Prolog, but Autoencoders and ILP.
But its tricky, I am still trying to decode the da Vinci code of
things like stacked tensors, are they related to k-literal clauses?
The paper I referenced is found in this excellent video:
The Making of ChatGPT (35 Year History) https://www.youtube.com/watch?v=OFS90-FX6pg
Name: Bart Demoen
Dissertation: Stability and Equilibrium for Clasical infinite Systems
Advisor: Andre Frans Maria Verbeure
https://www.genealogy.math.ndsu.nodak.edu/id.php?id=70951
Gap in Section 7.6.2 and some Insecurity Arising from it
?- call((Z=!, a(X), Z)).
Z = !
X = 1 ?;
Z = !
X = 2
yes
?- findall(Z-X,call((Z=!, a(X), Z)),L).
L = [!-1]
https://groups.google.com/g/comp.lang.prolog/c/FNou9Z-A_Zs/m/NSSTmbx4E4wJ
PLEASE, DO NOT ALARM THIS NEWSGROUP ABOUT "FUNNY" RESULTS
WITHOUT TELLING US WHICH SYSTEM CAUSED THE RESULTS
Inductive logic programming at 30
https://arxiv.org/abs/2102.10556
The paper contains not a single reference to autoencoders!
Programming vs. Specification
?- X = a, setof(Y, p(X, Y), S).
and
?- setof(Y, p(X, Y), S), X = a.
not for the following definition of p/2:
p(b,1) :- ! .
p(_,2) .
Sigh... We were discussing *logical* advantages.
Of course setof cannot give a declarative reading
to a program that doesn't have one to start with.
https://groups.google.com/g/comp.lang.prolog/c/-oerQs4l2Zw/m/v0kjwLIwuI0J
Hi,
That Prolog missed the AI Boom is quite amazing,
given that neural networks have a lot to do
with physics, and there were even Prologers with
a physics PhD, well almost, if there weren't a typo:
Name: Bart Demoen
Dissertation: Stability and Equilibrium for Clasical infinite Systems
Advisor: Andre Frans Maria Verbeure
https://www.genealogy.math.ndsu.nodak.edu/id.php?id=70951
What does Clasical mean? But then there is a famous
test case, which can melt Bart Demoen's brain:
Gap in Section 7.6.2 and some Insecurity Arising from it
?- call((Z=!, a(X), Z)).
Z = !
X = 1 ?;
Z = !
X = 2
yes
?- findall(Z-X,call((Z=!, a(X), Z)),L).
L = [!-1]
https://groups.google.com/g/comp.lang.prolog/c/FNou9Z-A_Zs/m/NSSTmbx4E4wJ
Bart Demoen's memorable reaction in 04.11.2011, 23:06:47 was:
PLEASE, DO NOT ALARM THIS NEWSGROUP ABOUT "FUNNY" RESULTS
WITHOUT TELLING US WHICH SYSTEM CAUSED THE RESULTS
LoL
Bye
Mild Shock schrieb:
Inductive logic programming at 30
https://arxiv.org/abs/2102.10556
The paper contains not a single reference to autoencoders!
Oh, but you did much better: you tried to ridicule
my research, my home country and the size of my
Prolog programs. I can't match that.
https://groups.google.com/g/comp.lang.prolog/c/uh_HUytRGJE/m/tXc7euv1KngJ
Hi,
Although Paulo Moura followed the lead of Bart
Demoen and started mobbing me, many people were
rather shaking their heads over Bart Demoen.
When Bart Demoen posted:
Programming vs. Specification
?- X = a, setof(Y, p(X, Y), S).
and
?- setof(Y, p(X, Y), S), X = a.
not for the following definition of p/2:
p(b,1) :- ! .
p(_,2) .
Fernando Pereira sighed:
Sigh... We were discussing *logical* advantages.
Of course setof cannot give a declarative reading
to a program that doesn't have one to start with.
https://groups.google.com/g/comp.lang.prolog/c/-oerQs4l2Zw/m/v0kjwLIwuI0J
Bye
Mild Shock schrieb:
Hi,
That Prolog missed the AI Boom is quite amazing,
given that neural networks have a lot to do
with physics, and there were even Prologers with
a physics PhD, well almost if there werent a typo:
Name: Bart Demoen
Dissertation: Stability and Equilibrium for Clasical infinite Systems
Advisor: Andre Frans Maria Verbeure
https://www.genealogy.math.ndsu.nodak.edu/id.php?id=70951
What does Clasical mean? But then there is a famous
test case, which can melt Bart Demoen's brain:
Gap in Section 7.6.2 and some Insecurity Arising from it
https://groups.google.com/g/comp.lang.prolog/c/FNou9Z-A_Zs/m/NSSTmbx4E4wJ
?- call((Z=!, a(X), Z)).
Z = !
X = 1 ?;
Z = !
X = 2
yes
?- findall(Z-X,call((Z=!, a(X), Z)),L).
L = [!-1]
Bart Demoen's memorable reaction in 04.11.2011, 23:06:47 was:
PLEASE, DO NOT ALARM THIS NEWSGROUP ABOUT "FUNNY" RESULTS
WITHOUT TELLING US WHICH SYSTEM CAUSED THE RESULTS
LoL
Bye
Mild Shock schrieb:
Inductive logic programming at 30
https://arxiv.org/abs/2102.10556
The paper contains not a single reference to autoencoders!
Isaiah 30:8 (NIV): “Go now, write it
on a tablet for them, inscribe it on a
scroll, that for days to come it may
be an everlasting witness.”
Bart Demoen retired ...
I retired on 1 october 2018.
https://people.cs.kuleuven.be/~bart.demoen/
Hi,
Ok, one last sample, barty boy in full swing:
Oh, but you did much better: you tried to ridicule
my research, my home country and the size of my
Prolog programs. I can't match that.
https://groups.google.com/g/comp.lang.prolog/c/uh_HUytRGJE/m/tXc7euv1KngJ
So he got into struggle with industry? I would
add not only small programms, but also a micro
penis and an empty head.
LoL
Bye
Mild Shock schrieb:
Hi,
Although Paulo Moura followed the lead of Bart
Demoen and started mobbing me. Many people were
rather shaking their head over Bart Demoen.
When Bart Demoen posted:
Programming vs. Specification
?- X = a, setof(Y, p(X, Y), S).
and
?- setof(Y, p(X, Y), S), X = a.
not for the following definition of p/2:
p(b,1) :- ! .
p(_,2) .
Fernando Pereira sighed:
Sigh... We were discussing *logical* advantages.
Of course setof cannot give a declarative reading
to a program that doesn't have one to start with.
https://groups.google.com/g/comp.lang.prolog/c/-oerQs4l2Zw/m/v0kjwLIwuI0J
Bye
Mild Shock schrieb:
Hi,
That Prolog missed the AI Boom is quite amazing,
given that neural networks have a lot to do
with physics, and there were even Prologers with
a physics PhD, well almost if there werent a typo:
Name: Bart Demoen
Dissertation: Stability and Equilibrium for Clasical infinite Systems
Advisor: Andre Frans Maria Verbeure
https://www.genealogy.math.ndsu.nodak.edu/id.php?id=70951
What does Clasical mean? But then there is a famous
test case, which can melt Bart Demoen's brain:
Gap in Section 7.6.2 and some Insecurity Arising from it
https://groups.google.com/g/comp.lang.prolog/c/FNou9Z-A_Zs/m/NSSTmbx4E4wJ
?- call((Z=!, a(X), Z)).
Z = !
X = 1 ?;
Z = !
X = 2
yes
?- findall(Z-X,call((Z=!, a(X), Z)),L).
L = [!-1]
Bart Demoen's memorable reaction in 04.11.2011, 23:06:47 was:
PLEASE, DO NOT ALARM THIS NEWSGROUP ABOUT "FUNNY" RESULTS
WITHOUT TELLING US WHICH SYSTEM CAUSED THE RESULTS
LoL
Bye
Mild Shock schrieb:
Inductive logic programming at 30
https://arxiv.org/abs/2102.10556
The paper contains not a single reference to autoencoders!
Hi,
But I shouldn't waste too much time.
One shouldn't punish people for just
being plain stupid.
Like for example this clueless French
philosopher who had a lot of trouble
with non-classical logic.
His brain tried to eliminate non-classical
logic, it was keen on avoiding non-classical
logic. A typical species of a human with
an extremely small brain, again working
in the wrong place!
Bye
P.S.: Maybe this is a Poincaré thingy? Poincaré
was a strong critic of logicism (as championed
by Frege and Russell) and of Hilbert’s
formalist program.
But, he did not formally use or promote systems
like intuitionistic logic, modal logic, or
relevance logic. His logical framework remained
within the bounds of classical logic,
though he was skeptical of excessive formalism.
He thought formal systems could miss the creative
and synthetic nature of mathematical
invention.
Henri Poincaré believed that mathematical
and scientific creativity came from a deep,
unconscious intuition that could not be
captured by mechanical reasoning or formal
systems. He famously wrote about how insights
came not from plodding logic but from sudden
illuminations — leaps of creative synthesis.
But now we have generative AI — models like GPT — that:
- produce poetry, proofs, stories, and code,
- combine ideas in novel ways,
- and do so by processing patterns in massive
datasets, without conscious understanding.
And that does seem to contradict Poincaré's belief
that true invention cannot come from automation.
Mild Shock schrieb:
Hi,
But I shouldn't waste too much time.
One shouldn't punish people for just
being plain stupid.
Like for example this clueless french
philosopher who had a lot of troubles
with non-classical logic.
His brain tried to eliminate non-classical
logic, it was keen on avoiding non-classical
logic. A typical species of a human with
an extremly small brain, again working
in the wrong place!
Bye
P.S.: Maybe this a Poincaré thingy? Poincaré
was a strong critic of logicism (as championed
by Frege and Russell) and of Hilbert’s
formalist program.
But, he did not formally use or promote systems
like intuitionistic logic, modal logic, or
relevance logic. His logical framework remained
within the bounds of classical logic,
though he was skeptical of excessive formalism.
He thought formal systems could miss the creative
and synthetic nature of mathematical
invention.
Hi,
This is nothing for Bart Demoen, Physics PhD,
academic fraud. The ideal choice point can
be formulated as a logical formula, involving
an existential quantifier. Assume we have
a query and already these answers, and the
Prolog system is prompting the interactive user:
?- p(X).
X = a1 ;
X = a2 ;
...
X = ak-1 ;
X = ak
A mathematical oracle that could indicate whether
it is even necessary to prompt the user could be:
∃X ( p(X) & X =\= a1 & ... & X =\= ak)
It doesn't match 100% Prolog since Prolog might
give duplicate answers or non-ground answers,
but assume for the moment that the query p(X)
produces only distinct and ground results.
Nice existential FOL formula we have in the above.
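As a 100% Prolog approximation of that oracle (more_solutions/2 is
a made-up helper name, member/2 as in library(lists)):
more_solutions(X^Goal, Answers) :-
    call(Goal),
    \+ member(X, Answers),
    !.
With the facts p(a1). p(a2). p(a3). loaded:
?- more_solutions(X^p(X), [a1, a2]).
X = a3.
?- more_solutions(X^p(X), [a1, a2, a3]).
false.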
Bye
Mild Shock schrieb:
Henri Poincaré believed that mathematical
and scientific creativity came from a deep,
unconscious intuition that could not be
captured by mechanical reasoning or formal
systems. He famously wrote about how insights
came not from plodding logic but from sudden
illuminations — leaps of creative synthesis.
But now we have generative AI — models like GPT — that:
- produce poetry, proofs, stories, and code,
- combine ideas in novel ways,
- and do so by processing patterns in massive
datasets, without conscious understanding.
And that does seem to contradict Poincaré's belief
that true invention cannot come from automation.
Mild Shock schrieb:
Hi,
But I shouldn't waste too much time.
One shouldn't punish people for just
being plain stupid.
Like for example this clueless french
philosopher who had a lot of troubles
with non-classical logic.
His brain tried to eliminate non-classical
logic, it was keen on avoiding non-classical
logic. A typical species of a human with
an extremly small brain, again working
in the wrong place!
Bye
P.S.: Maybe this a Poincaré thingy? Poincaré
was a strong critic of logicism (as championed
by Frege and Russell) and of Hilbert’s
formalist program.
But, he did not formally use or promote systems
like intuitionistic logic, modal logic, or
relevance logic. His logical framework remained
within the bounds of classical logic,
though he was skeptical of excessive formalism.
He thought formal systems could miss the creative
and synthetic nature of mathematical
invention.
Hi,
Now what does a Prolog system do? Well when
it prompts the end-user it has somewhere
a list of the current query choice points:
CPs = [CP1, CP2, .., CPn]
Which choice points a system creates is
implementation specific; the ISO core standard also
shows a machine in its more procedural explanation
that likewise has choice points somewhere.
Since it is implementation specific,
a Prolog System A and a Prolog System B might
use different choice points:
System A:
CPs = [CP1, CP2, .., CPn]
System B:
CP's = [CP'1, CP'2, .., CP'n]
We say a System B could eliminate a choice point CP,
relative to a System A, if we have:
System A:
CP ∈ CPs
System B:
CP ∉ CPs
So System B might have an advantage over System A,
since it will not backtrack over CP.
When it comes to answer substitution display,
it is now very common that a Prolog system checks
its own choice points, and when it finds that
CP = []
it knows that the query left no choice points,
either because there never were any, because
there was no branching in the executed code,
because a cut removed the branching, or because
they were eliminated somehow, like through
some index analysis.
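A tiny concrete illustration (plain standard Prolog; whether the
choice point gets eliminated is exactly the implementation
specific part):
r(a, 1).
r(b, 2).
?- r(a, X).
A system whose first argument index sees that no other clause can
match r(a, _) answers X = 1 without leaving a choice point and
does not prompt; a system without such an index keeps a choice
point on the second clause and prompts, only to report no further
answers afterwards.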
Bye
Mild Shock schrieb:
Hi,
This is nothing for Bart Demoen, Physics PhD,
academic fraud. The ideal choice point can
be formulated as a logical formula, involving
an existential quantifier. Assume we have
a query and already these answers, and the
Prolog system is prompting the interactive user:
?- p(X).
X = a1 ;
X = a2 ;
...
X = ak-1 ;
X = ak
A mathematical oracle that could indicate whether
it is even necessary to prompt the user could be:
∃X ( p(X) & X =\= a1 & ... & X =\= ak)
It doesn't match 100% Prolog since Prolog might
give duplicate answers or non-ground answers,
but assume for the moment the query q(X),
produces only distinct and ground results.
Nice existential FOL formula we have in the above.
Bye
Mild Shock schrieb:
Henri Poincaré believed that mathematical
and scientific creativity came from a deep,
unconscious intuition that could not be
captured by mechanical reasoning or formal
systems. He famously wrote about how insights
came not from plodding logic but from sudden
illuminations — leaps of creative synthesis.
But now we have generative AI — models like GPT — that:
- produce poetry, proofs, stories, and code,
- combine ideas in novel ways,
- and do so by processing patterns in massive
datasets, without conscious understanding.
And that does seem to contradict Poincaré's belief
that true invention cannot come from automation.
Mild Shock schrieb:
Hi,
But I shouldn't waste too much time.
One shouldn't punish people for just
being plain stupid.
Like for example this clueless french
philosopher who had a lot of troubles
with non-classical logic.
His brain tried to eliminate non-classical
logic, it was keen on avoiding non-classical
logic. A typical species of a human with
an extremly small brain, again working
in the wrong place!
Bye
P.S.: Maybe this a Poincaré thingy? Poincaré
was a strong critic of logicism (as championed
by Frege and Russell) and of Hilbert’s
formalist program.
But, he did not formally use or promote systems
like intuitionistic logic, modal logic, or
relevance logic. His logical framework remained
within the bounds of classical logic,
though he was skeptical of excessive formalism.
He thought formal systems could miss the creative
and synthetic nature of mathematical
invention.
Hi,
Now one might ask, if we have a Prolog system
that anyway juggles with choice points, why
would we need a logical formula for choice points?
Well there is a funny correctness criterion,
for example in the top-level, if the top-level
doesn't prompt the end user anymore in such a scenario:
?- p(X).
X = a1 ;
X = a2 ;
...
X = ak-1 ;
X = ak
So the end user is not prompted because the
Prolog system finds CP = []. This is licensed
by this correctness statement for any choice
point elimination:
CP = [] => ~∃X ( p(X) & X =\= a1 & ... & X =\= ak)
Have Fun!
Bye
Mild Shock schrieb:
Hi,
Now what does a Prolog system do? Well when
it prompts the end-user it has somewhere
a list of the current query choice points:
CPs = [CP1, CP2, .., CPn]
This is implementation specific, what choice
points a system creates, also the ISO core standard
shows a machine in its more procedural explanation,
that depicts something that has also somewhere
choice points. Since it is implementation specific
a Prolog System A and Prolog System B might
use different choice points:
System A:
CPs = [CP1, CP2, .., CPn]
System B:
CP's = [CP'1, CP'2, .., CP'n]
We say a System B could eliminate a choice point CP,
relative to a System A, if we have:
System A:
CP ∈ CPs
System B:
CP ∉ CPs
So System B might have an advantage over System A,
since it will not backtrack over CP.
When it comes to answer substitution display,
it is now very common, that a Prolog system checks
its own choice points, and when it finds that
CP = []
It knows that the query left no choice points,
either because there were never any, because
there was no branching in the executed code, or
because a cut removed branching, or because
they were eliminated somehow. Like through
some index analysis.
Bye
Mild Shock schrieb:
Hi,
This is nothing for Bart Demoen, Physics PhD,
academic fraud. The ideal choice point can
be formulated as a logical formula, involving
an existential quantifier. Assume we have
a query and already these answers, and the
Prolog system is prompting the interactive user:
?- p(X).
X = a1 ;
X = a2 ;
...
X = ak-1 ;
X = ak
A mathematical oracle that could indicate whether
it is even necessary to prompt the user could be:
∃X ( p(X) & X =\= a1 & ... & X =\= ak)
It doesn't match 100% Prolog since Prolog might
give duplicate answers or non-ground answers,
but assume for the moment the query q(X),
produces only distinct and ground results.
Nice existential FOL formula we have in the above.
Bye
Mild Shock schrieb:
Henri Poincaré believed that mathematical
and scientific creativity came from a deep,
unconscious intuition that could not be
captured by mechanical reasoning or formal
systems. He famously wrote about how insights
came not from plodding logic but from sudden
illuminations — leaps of creative synthesis.
But now we have generative AI — models like GPT — that:
- produce poetry, proofs, stories, and code,
- combine ideas in novel ways,
- and do so by processing patterns in massive
datasets, without conscious understanding.
And that does seem to contradict Poincaré's belief
that true invention cannot come from automation.
Mild Shock schrieb:
Hi,
But I shouldn't waste too much time.
One shouldn't punish people for just
being plain stupid.
Like for example this clueless french
philosopher who had a lot of troubles
with non-classical logic.
His brain tried to eliminate non-classical
logic, it was keen on avoiding non-classical
logic. A typical species of a human with
an extremly small brain, again working
in the wrong place!
Bye
P.S.: Maybe this a Poincaré thingy? Poincaré
was a strong critic of logicism (as championed
by Frege and Russell) and of Hilbert’s
formalist program.
But, he did not formally use or promote systems
like intuitionistic logic, modal logic, or
relevance logic. His logical framework remained
within the bounds of classical logic,
though he was skeptical of excessive formalism.
He thought formal systems could miss the creative
and synthetic nature of mathematical
invention.
Hi,
Today I had an idea of some semi-deep Prolog
argument indexing. Just because choice point
elimination is so important and has so many
benefits for performance and the end user
experience, and also for debugging. And because it is
tied to indexing: an index and the resulting clause
list, which can always be checked for having reached
its end. This gives look-ahead information to the
Prolog system which answers this oracle, concerning
clause instantiation:
∃X ( p(X) & X =\= a1 & ... & X =\= ak)
So the idea of semi-deep Prolog argument indexing
would be a hybrid between Scryer Prolog and
SWI-Prolog, taking the best of both worlds.
It would adopt skip indexes from Scryer Prolog
and deep indexing of SWI-Prolog, but deep indexing
through a Key computation trick. The Key computation
trick is quickly explained.
Normal Key Computations:
p(a, ..) ~~> Computed Key: a/0 or sometimes a alone
p(b(x,y), ..) ~~> Computed Key: b/2 or sometimes b alone
Etc..
Semi Deep Key Computation:
p(a, ..) ~~> Computed Key: 'a'
p([a, ..], ..) ~~> Computed Key: '.a'
Etc..
Got it?
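A hedged sketch of that key computation (semi_deep_key/2 is a
made-up name and only the two cases from the example above are
covered):
semi_deep_key(Arg, Key) :-
    atom(Arg), !,
    Key = Arg.                  % p(a, ..)       ~~> Key 'a'
semi_deep_key([H|_], Key) :-
    atom(H),
    atom_concat('.', H, Key).   % p([a, ..], ..) ~~> Key '.a'
?- semi_deep_key([a,b,c], Key).
Key = '.a'.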
The Scryer Prolog skip index is needed because
in a DCG the interesting arguments are usually
not the first argument.
Bye
Mild Shock schrieb:
Hi,
Now one might ask, if we have a Prolog system
that anyway juggles with choice points, why
would we need a logical formula for choice points?
Well there is a funny correctness criteria,
for example in the top-level, if the top-level
doesn't prompt the end user anymore in such a scenario:
?- p(X).
X = a1 ;
X = a2 ;
...
X = ak-1 ;
X = ak
So the end user is not prompted because the
Prolog system founds CP = []. This is licensed
by this correctness statement for any choice
point elimination:
CP = [] => ~∃X ( p(X) & X =\= a1 & ... & X =\= ak)
Have Fun!
Bye
Mild Shock schrieb:
Hi,
Now what does a Prolog system do? Well when
it prompts the end-user it has somewhere
a list of the current query choice points:
CPs = [CP1, CP2, .., CPn]
This is implementation specific, what choice
points a system creates, also the ISO core standard
shows a machine in its more procedural explanation,
that depicts something that has also somewhere
choice points. Since it is implementation specific
a Prolog System A and Prolog System B might
use different choice points:
System A:
CPs = [CP1, CP2, .., CPn]
System B:
CP's = [CP'1, CP'2, .., CP'n]
We say a System B could eliminate a choice point CP,
relative to a System A, if we have:
System A:
CP ∈ CPs
System B:
CP ∉ CPs
So System B might have an advantage over System A,
since it will not backtrack over CP.
When it comes to answer substitution display,
it is now very common, that a Prolog system checks
its own choice points, and when it finds that
CP = []
It knows that the query left no choice points,
either because there were never any, because
there was no branching in the executed code, or
because a cut removed branching, or because
they were eliminated somehow. Like through
some index analysis.
Bye
Mild Shock schrieb:
Hi,
This is nothing for Bart Demoen, Physics PhD,
academic fraud. The ideal choice point can
be formulated as a logical formula, involving
an existential quantifier. Assume we have
a query and already these answers, and the
Prolog system is prompting the interactive user:
?- p(X).
X = a1 ;
X = a2 ;
...
X = ak-1 ;
X = ak
A mathematical oracle that could indicate whether
it is even necessary to prompt the user could be:
∃X ( p(X) & X =\= a1 & ... & X =\= ak)
It doesn't match 100% Prolog since Prolog might
give duplicate answers or non-ground answers,
but assume for the moment the query q(X),
produces only distinct and ground results.
Nice existential FOL formula we have in the above.
Bye
Mild Shock schrieb:
Henri Poincaré believed that mathematical
and scientific creativity came from a deep,
unconscious intuition that could not be
captured by mechanical reasoning or formal
systems. He famously wrote about how insights
came not from plodding logic but from sudden
illuminations — leaps of creative synthesis.
But now we have generative AI — models like GPT — that:
- produce poetry, proofs, stories, and code,
- combine ideas in novel ways,
- and do so by processing patterns in massive
datasets, without conscious understanding.
And that does seem to contradict Poincaré's belief
that true invention cannot come from automation.
Mild Shock schrieb:
Hi,
But I shouldn't waste too much time.
One shouldn't punish people for just
being plain stupid.
Like for example this clueless french
philosopher who had a lot of troubles
with non-classical logic.
His brain tried to eliminate non-classical
logic, it was keen on avoiding non-classical
logic. A typical species of a human with
an extremly small brain, again working
in the wrong place!
Bye
P.S.: Maybe this a Poincaré thingy? Poincaré
was a strong critic of logicism (as championed
by Frege and Russell) and of Hilbert’s
formalist program.
But, he did not formally use or promote systems
like intuitionistic logic, modal logic, or
relevance logic. His logical framework remained
within the bounds of classical logic,
though he was skeptical of excessive formalism.
He thought formal systems could miss the creative
and synthetic nature of mathematical
invention.
Hi,
Let's say this semi-deep Prolog argument indexing
really works. Then I could roll back some
uses of ROK's trick in my DCG based code base,
where I massaged the DCG to have a terminal in
the first argument, and the DCG was somewhat degraded
to only doing the concatenative stuff through its
monad rewriting. This would lead to more elegant code.
But it will not perform on a couple of Prolog systems
that don't have deep indexing. I suspect the more
elegant code will not perform on these Prolog systems:
- GNU Prolog
- Scryer Prolog
- Trealla Prolog
-
I didn't check ECLiPSe Prolog towards deep indexing,
and also I didn't check Ciao Prolog towards deep
indexing yet. It will show good performance:
- SWI-Prolog
- Dogelog Player (if I add semi-deep and skip there)
- Jekejeke Runtime (if I add semi-deep there, it has already skip)
-
Bye
Mild Shock schrieb:
Hi,
Today I had an idea, of some semi-deep Prolog
argument indexing. Just because choice point
elimination is so important and has so many
benefits for performance and the end user
experience, like also debugging. And because it is
tied to indexing. An index and the resulting clause
list, which can be always checked for having reached
its end. This gives a look-ahead information to the
Prolog system which answers this oracle, concering
clause instantiation:
∃X ( p(X) & X =\= a1 & ... & X =\= ak)
So the idea of semi-deep Prolog argument indexing
would be a hybrid between Scryer Prolog and
SWI-Prolog taking the best of both worls.
It would adopt skip indexes from Scryer Prolog
and deep indexing of SWI-Prolog, but deep indexing
through a Key computation trick. The Key computation
trick is quickly explained.
Normal Key Computations:
p(a, ..) ~~> Computed Key: a/0 or sometimes a alone
p(b(x,y), ..) ~~> Computed Key: b/2 or sometimes b alone
Etc..
Semi Deep Key Computation:
p(a, ..) ~~> Computed Key: 'a'
p([a, ..], ..) ~~> Computed Key: '.a'
Ect..
Got it?
The Scryer Prolog skip index is needed because
in a DCG the interesting arguments are usually
not the first argument.
Bye
Mild Shock schrieb:
Hi,
Now one might ask, if we have a Prolog system
that anyway juggles with choice points, why
would we need a logical formula for choice points?
Well there is a funny correctness criteria,
for example in the top-level, if the top-level
doesn't prompt the end user anymore in such a scenario:
?- p(X).
X = a1 ;
X = a2 ;
...
X = ak-1 ;
X = ak
So the end user is not prompted because the
Prolog system founds CP = []. This is licensed
by this correctness statement for any choice
point elimination:
CP = [] => ~∃X ( p(X) & X =\= a1 & ... & X =\= ak)
Have Fun!
Bye
Mild Shock schrieb:
Hi,
Now what does a Prolog system do? Well when
it prompts the end-user it has somewhere
a list of the current query choice points:
CPs = [CP1, CP2, .., CPn]
This is implementation specific, what choice
points a system creates, also the ISO core standard
shows a machine in its more procedural explanation,
that depicts something that has also somewhere
choice points. Since it is implementation specific
a Prolog System A and Prolog System B might
use different choice points:
System A:
CPs = [CP1, CP2, .., CPn]
System B:
CP's = [CP'1, CP'2, .., CP'n]
We say a System B could eliminate a choice point CP,
relative to a System A, if we have:
System A:
CP ∈ CPs
System B:
CP ∉ CPs
So System B might have an advantage over System A,
since it will not backtrack over CP.
When it comes to answer substitution display,
it is now very common, that a Prolog system checks
its own choice points, and when it finds that
CP = []
It knows that the query left no choice points,
either because there were never any, because
there was no branching in the executed code, or
because a cut removed branching, or because
they were eliminated somehow. Like through
some index analysis.
Bye
Mild Shock schrieb:
Hi,
This is nothing for Bart Demoen, Physics PhD,
academic fraud. The ideal choice point can
be formulated as a logical formula, involving
an existential quantifier. Assume we have
a query and already these answers, and the
Prolog system is prompting the interactive user:
?- p(X).
X = a1 ;
X = a2 ;
...
X = ak-1 ;
X = ak
A mathematical oracle that could indicate whether
it is even necessary to prompt the user could be:
∃X ( p(X) & X =\= a1 & ... & X =\= ak)
It doesn't match 100% Prolog since Prolog might
give duplicate answers or non-ground answers,
but assume for the moment the query q(X),
produces only distinct and ground results.
Nice existential FOL formula we have in the above.
Bye
Mild Shock schrieb:
Henri Poincaré believed that mathematical
and scientific creativity came from a deep,
unconscious intuition that could not be
captured by mechanical reasoning or formal
systems. He famously wrote about how insights
came not from plodding logic but from sudden
illuminations — leaps of creative synthesis.
But now we have generative AI — models like GPT — that:
- produce poetry, proofs, stories, and code,
- combine ideas in novel ways,
- and do so by processing patterns in massive
datasets, without conscious understanding.
And that does seem to contradict Poincaré's belief
that true invention cannot come from automation.
Mild Shock schrieb:
Hi,
But I shouldn't waste too much time.
One shouldn't punish people for just
being plain stupid.
Like for example this clueless french
philosopher who had a lot of troubles
with non-classical logic.
His brain tried to eliminate non-classical
logic, it was keen on avoiding non-classical
logic. A typical species of a human with
an extremly small brain, again working
in the wrong place!
Bye
P.S.: Maybe this a Poincaré thingy? Poincaré
was a strong critic of logicism (as championed
by Frege and Russell) and of Hilbert’s
formalist program.
But, he did not formally use or promote systems
like intuitionistic logic, modal logic, or
relevance logic. His logical framework remained
within the bounds of classical logic,
though he was skeptical of excessive formalism.
He thought formal systems could miss the creative
and synthetic nature of mathematical
invention.
Hi,
The most radical approach is Novacore from
Dogelog Player. It consists of the following
major incisions in the ISO core standard:
- We do not forbid chars, like for example
using lists of the form [a,b,c], we also
provide char_code/2 predicate bidirectionally.
- We do not provide and _chars built-in
predicates also there is nothing _strings. The
Prolog system is clever enough to not put
every atom it sees in an atom table. There
is only a predicate table.
- Some host languages have garbage collection that
deduplicates Strings. For example some Java
versions have an options to do that. But we
do not have any efforts to deduplicate atoms,
which are simply plain strings.
- Some languages have constant pools. For example
the Java byte code format includes a constant
pool in every class header. We do not do that
during transpilation , but we could of course.
But it begs the question, why only deduplicate
strings and not other constant expressions as well?
- We are totally happy that we have only codes,
there are chances that the host languages use
tagged pointers to represent them. So they
are represented similar to the tagged pointers
in SWI-Prolog which works for small integers.
- But the tagged pointer argument is moot,
since atom length=1 entities can be also
represented as tagged pointers, and some
programming languages do that. Dogelog Player
would use such tagged pointers without
polluting the atom table.
- What else?
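A small illustration of the first two points of the list above (my own queries, not from the Dogelog docs; they assume an ISO-style toplevel and a maplist/3 library predicate):
?- char_code(a, X).
X = 97.
?- char_code(C, 97).
C = a.
?- atom_codes(abc, Cs).
Cs = [97, 98, 99].
?- Chs = [a, b, c], maplist(char_code, Chs, Cs).
Chs = [a, b, c],
Cs = [97, 98, 99].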
Bye
Mild Shock schrieb:
Technically SWI-Prolog doesn't prefer codes.
Library `library(pure_input)` might prefer codes.
But this is again an issue of improving the
library by some non-existent SWI-Prolog community.
The ISO core standard is silent about a flag
back_quotes, but has a lot of API requirements
that support both codes and chars, for example it
requires atom_codes/2 and atom_chars/2.
Implementation-wise there can be an issue:
one might decide to implement the atoms
of length 1 more efficiently, since with Unicode
there is now an explosion.
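For instance, every distinct character that ever goes through a _chars conversion ends up as a length-1 atom; on a system with Unicode atoms, something like:
?- atom_chars('déjà vu', Chs).
Chs = [d, é, j, à, ' ', v, u].
puts each of these one-character atoms into the atom table, while atom_codes/2 would only have produced integers.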
Not sure whether Trealla Prolog and Scryer
Prolog thought about this problem, that the
atom table gets quite large. Whereas codes don't
eat the atom table. Maybe they forbid predicates
that have an atom of length 1 in the head:
h(X) :-
write('Hello '), write(X), write('!'), nl.
Does this still work?
Mild Shock schrieb:
Concerning library(portray_text) which is in limbo:
Libraries are (often) written for either codes or chars, and thus the libraries make the choice.
But who writes these libraries? The SWI Prolog
community. And who doesn’t improve these libraries,
instead floods the web with workaround tips?
The SWI Prolog community.
Conclusion: the SWI-Prolog community has trapped
itself in an ancient status quo, creating an island.
It cannot improve its own tooling and is not willing
to support code from elsewhere that uses chars.
Same with the missed AI Boom.
(*) Code from elsewhere is dangerous: people
might use other Prolog systems than only SWI-Prolog,
like for example Trealla Prolog and Scryer Prolog.
(**) Keeping the status quo is comfy. No need to
think in terms of program code. It's like biology
teachers versus pathology staff: biology teachers
do not see opened corpses every day.
Mild Shock schrieb:
Inductive logic programming at 30
https://arxiv.org/abs/2102.10556
The paper contains not a single reference to autoencoders!
Still they show this example:
Fig. 1 ILP systems struggle with structured examples that
exhibit observational noise. All three examples clearly
spell the word "ILP", with some alterations: 3 noisy pixels,
shifted and elongated letters. If we would be to learn a
program that simply draws "ILP" in the middle of the picture,
without noisy pixels and elongated letters, that would
be a correct program.
I guess ILP is 30 years behind the AI boom. An early autoencoder
turned into transformer was already reported here (*):
SERIAL ORDER, Michael I. Jordan - May 1986
https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf
Well ILP might have its merits, maybe we should not ask
for a marriage of LLM and Prolog, but Autoencoders and ILP.
But its tricky, I am still trying to decode the da Vinci code of
things like stacked tensors, are they related to k-literal clauses?
The paper I referenced is found in this excellent video:
The Making of ChatGPT (35 Year History)
https://www.youtube.com/watch?v=OFS90-FX6pg
BTW: I see what you did here:
doge.pl: $(PROG)
$(file >$@,false :- \+true. ?- ['$<'],$(MAIN).)
https://github.com/hurufu/prolog-all/blob/main/rules.mk
Yes, I do not yet have a -g option.
Maybe should change that... The issue is a
little tricky. Only recently I managed to handle
some stuff that is tied to the command line
after the Novacore has been loaded.
For example the top-level is now entered after
the Novacore is loaded, and the top-level loads
in itself library(session) etc.. To have a -g option
there is a dependency on
library(charsio), to convert a string into a term,
which is not part of Novacore itself. So maybe I could
do the same for a -g option, so that I can keep
the Novacore small and load
library(charsio) depending on the command line.
I just did yesterday something to make the Novacore
smaller. And handling a -g option this way could
be a viable way to keep it small.
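Just to illustrate that plan (hypothetical names, this is not Dogelog's actual API; it assumes the command line arguments are already parsed into a list like [g(Chars)|...] and that a read_term_from_chars/3 exists, as some library(charsio) implementations provide):
% Pull in library(charsio) only when a -g Goal argument is present,
% then turn the goal text into a term and call it.
handle_goal_option(Args) :-
    (  member(g(Chars), Args)
    -> ensure_loaded(library(charsio)),
       read_term_from_chars(Chars, Goal, []),
       call(Goal)
    ;  true
    ).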
Mild Shock schrieb:
Hi,
The most radical approach is Novacore from
Dogelog Player. It consists of the following
major incisions in the ISO core standard:
- We do not forbid chars, like for example
using lists of the form [a,b,c], we also
provide char_code/2 predicate bidirectionally.
- We do not provide any _chars built-in
predicates; there is also nothing _strings. The
Prolog system is clever enough to not put
every atom it sees in an atom table. There
is only a predicate table.
- Some host languages have garbage collection that
deduplicates Strings. For example some Java
versions have an options to do that. But we
do not have any efforts to deduplicate atoms,
which are simply plain strings.
- Some languages have constant pools. For example
the Java byte code format includes a constant
pool in every class header. We do not do that
during transpilation , but we could of course.
But it begs the question, why only deduplicate
strings and not other constant expressions as well?
- We are totally happy that we have only codes,
there are chances that the host languages use
tagged pointers to represent them. So they
are represented similar to the tagged pointers
in SWI-Prolog which works for small integers.
- But the tagged pointer argument is moot,
since atom length=1 entities can be also
represented as tagged pointers, and some
programming languages do that. Dogelog Player
would use such tagged pointers without
poluting the atom table.
- What else?
Bye
Mild Shock schrieb:
Technically SWI-Prolog doesn't prefer codes.
Library `library(pure_input)` might prefer codes.
But this is again an issue of improving the
library by some non existent SWI-Prolog community.
The ISO core standard is silent about a flag
back_quotes, but has a lot of API requirements
that support both codes and chars, for example it
requires atom_codes/2 and atom_chars/2.
Implementation wise there can be an issue,
like one might decide to implement the atoms
of length=1 more efficiently, since with Unicode
there is now an explosion.
Not sure whether Trealla Prolog and Scryer
Prolog thought about this problem, that the
atom table gets quite large. Whereas codes don't
eat the atom table. Maybe they forbit predicates
that have an atom of length=1 head:
h(X) :-
write('Hello '), write(X), write('!'), nl.
Does this still work?
Mild Shock schrieb:
Concerning library(portray_text) which is in limbo:
Libraries are (often) written for either codes or chars, and thus the libraries make the choice.
But who writes these libraries? The SWI Prolog
community. And who doesn’t improve these libraries,
instead floods the web with workaround tips?
The SWI Prolog community.
Conclusion the SWI-Prolog community has itself
trapped in an ancient status quo, creating an island.
Cannot improve its own tooling, is not willing
to support code from else where that uses chars.
Same with the missed AI Boom.
(*) Code from elsewhere is dangerous, People
might use other Prolog systems than only SWI-Prolog,
like for exampe Trealla Prolog and Scryer Prolog.
(**) Keeping the status quo is comfy. No need to
think in terms of programm code. Its like biology
teachers versus pathology staff, biology teachers
do not everyday see opened corpses.
Mild Shock schrieb:
Inductive logic programming at 30
https://arxiv.org/abs/2102.10556
The paper contains not a single reference to autoencoders!
Still they show this example:
Fig. 1 ILP systems struggle with structured examples that
exhibit observational noise. All three examples clearly
spell the word "ILP", with some alterations: 3 noisy pixels,
shifted and elongated letters. If we would be to learn a
program that simply draws "ILP" in the middle of the picture,
without noisy pixels and elongated letters, that would
be a correct program.
I guess ILP is 30 years behind the AI boom. An early autoencoder
turned into transformer was already reported here (*):
SERIAL ORDER, Michael I. Jordan - May 1986
https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf
Well ILP might have its merits, maybe we should not ask
for a marriage of LLM and Prolog, but Autoencoders and ILP.
But its tricky, I am still trying to decode the da Vinci code of
things like stacked tensors, are they related to k-literal clauses?
The paper I referenced is found in this excellent video:
The Making of ChatGPT (35 Year History)
https://www.youtube.com/watch?v=OFS90-FX6pg
Henri Poincaré believed that mathematical
and scientific creativity came from a deep,
unconscious intuition that could not be
captured by mechanical reasoning or formal
systems. He famously wrote about how insights
came not from plodding logic but from sudden
illuminations — leaps of creative synthesis.
But now we have generative AI — models like GPT — that:
- produce poetry, proofs, stories, and code,
- combine ideas in novel ways,
- and do so by processing patterns in massive
datasets, without conscious understanding.
And that does seem to contradict Poincaré's belief
that true invention cannot come from automation.
Hi,
So StackOverflow has already fallen, and
GitHub will be the next one. StackOverflow
was eclectic, insinuated a high signal
quality, but repelled its newcomers with
strict language rules and deletionism.
StackOverflow is supplanted by ChatGPT, etc.
They are more tolerant and can deliver
excellent signals, much better than
StackOverflow. ChatGPT and other assistants
flipped the model: No downvotes.
No “duplicate question” shaming. Conversational,
exploratory, and often faster than Googling +
scanning SO threads. Most importantly: they don’t
punish incomplete knowledge, which is where
most human learning happens.
LLMs give a more forgiving learning curve.
Bye
Mild Shock schrieb:
Henri Poincaré believed that mathematical
and scientific creativity came from a deep,
unconscious intuition that could not be
captured by mechanical reasoning or formal
systems. He famously wrote about how insights
came not from plodding logic but from sudden
illuminations — leaps of creative synthesis.
But now we have generative AI — models like GPT — that:
- produce poetry, proofs, stories, and code,
- combine ideas in novel ways,
- and do so by processing patterns in massive
datasets, without conscious understanding.
And that does seem to contradict Poincaré's belief
that true invention cannot come from automation.
Mild Shock schrieb:
Hi,
But I shouldn't waste too much time.
One shouldn't punish people for just
being plain stupid.
Like for example this clueless french
philosopher who had a lot of troubles
with non-classical logic.
His brain tried to eliminate non-classical
logic, it was keen on avoiding non-classical
logic. A typical species of a human with
an extremly small brain, again working
in the wrong place!
Bye
P.S.: Maybe this a Poincaré thingy? Poincaré
was a strong critic of logicism (as championed
by Frege and Russell) and of Hilbert’s
formalist program.
But, he did not formally use or promote systems
like intuitionistic logic, modal logic, or
relevance logic. His logical framework remained
within the bounds of classical logic,
though he was skeptical of excessive formalism.
He thought formal systems could miss the creative
and synthetic nature of mathematical
invention.
Hi,
Will the world build on American Stacks?
Or is the American dream over?
How it started, 1 month ago:
Nvidia CEO Jensen Huang on AI, Musk and Trump https://www.youtube.com/watch?v=c-XAL2oYelI
How it's going, now:
Are you still talking about Jeffrey Epstein? https://www.bbc.com/news/articles/cm2m879neljo
Bye
Mild Shock schrieb:
Inductive logic programming at 30
https://arxiv.org/abs/2102.10556
The paper contains not a single reference to autoencoders!
Still they show this example:
Fig. 1 ILP systems struggle with structured examples that
exhibit observational noise. All three examples clearly
spell the word "ILP", with some alterations: 3 noisy pixels,
shifted and elongated letters. If we would be to learn a
program that simply draws "ILP" in the middle of the picture,
without noisy pixels and elongated letters, that would
be a correct program.
I guess ILP is 30 years behind the AI boom. An early autoencoder
turned into transformer was already reported here (*):
SERIAL ORDER, Michael I. Jordan - May 1986
https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf
Well ILP might have its merits, maybe we should not ask
for a marriage of LLM and Prolog, but Autoencoders and ILP.
But its tricky, I am still trying to decode the da Vinci code of
things like stacked tensors, are they related to k-literal clauses?
The paper I referenced is found in this excellent video:
The Making of ChatGPT (35 Year History)
https://www.youtube.com/watch?v=OFS90-FX6pg
Hi,
Rota often celebrated symbolic, analogical, and
conceptual understanding over brute calculation.
This philosophy has come full circle in modern AI:
- Large Language Models (LLMs) like GPT-4 don't
just store facts — they recognize patterns,
make analogies, and generate new structures
from old ones.
- Rota’s work in combinatorics, symbolic logic, and
operator theory is essentially pattern-based
manipulation — exactly the kind of reasoning LLMs
aim to emulate at scale.
Rota had a clear aesthetic. He valued clean formalisms,
symbolic beauty, and well-defined structures. Rota wanted
mathematics to mean something — to be not just correct,
but intelligible and expressive.
In contrast, modern AI (especially LLMs like GPT) thrives
on the messy, including: noisy data, inconsistency,
uncertainty, contradiction. AI engineers today are mining
meaning from noise.
What counts as “structure” is often just the best
pragmatic/effective description available at that moment.
Bye
Mild Shock schrieb:
Hi,
Will the world build on American Stacks?
Or is the american dream over?
How it started, 1 month go:
Nvidia CEO Jensen Huang on AI, Musk and Trump
https://www.youtube.com/watch?v=c-XAL2oYelI
How its going, now:
Are you still talking about Jeffrey Epstein?
https://www.bbc.com/news/articles/cm2m879neljo
Bye
Mild Shock schrieb:
Inductive logic programming at 30
https://arxiv.org/abs/2102.10556
The paper contains not a single reference to autoencoders!
Still they show this example:
Fig. 1 ILP systems struggle with structured examples that
exhibit observational noise. All three examples clearly
spell the word "ILP", with some alterations: 3 noisy pixels,
shifted and elongated letters. If we would be to learn a
program that simply draws "ILP" in the middle of the picture,
without noisy pixels and elongated letters, that would
be a correct program.
I guess ILP is 30 years behind the AI boom. An early autoencoder
turned into transformer was already reported here (*):
SERIAL ORDER, Michael I. Jordan - May 1986
https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf
Well ILP might have its merits, maybe we should not ask
for a marriage of LLM and Prolog, but Autoencoders and ILP.
But its tricky, I am still trying to decode the da Vinci code of
things like stacked tensors, are they related to k-literal clauses?
The paper I referenced is found in this excellent video:
The Making of ChatGPT (35 Year History)
https://www.youtube.com/watch?v=OFS90-FX6pg
Hi,
An example of human intelligence is of course the
name "Rational Term" for cyclic terms, set forth by
Alain Colmerauer, since it plays on "Rational Numbers".
A subset of cyclic terms can indeed represent
rational numbers, and they give a nice counter
example to transitivity:
?- problem(X,Y,Z).
X = _S1-7-9-1, % where
_S1 = _S1-6-8-0-6-2-8,
Y = _S2-1-6-1-5-4-6-1, % where
_S2 = _S2-0-9-2,
Z = _S3-3-0, % where
_S3 = _S3-8-1
The Fuzzer 2 from 2025 does just what I did in 2023,
expanding rational numbers into rational terms:
% fuzzy(-Term)
fuzzy(X) :-
random_between(1,100,A),
random_between(1,100,B),
random_between(1,10,M),
fuzzy_chunk(M,A,B,C,X,Y),
random_between(1,10,L),
fuzzy_chunk(L,C,B,_,Y,Z),
Z = Y.
% fuzzy_chunk(+Integer,+Integer,+Integer,-Integer,+Term,-Term)
fuzzy_chunk(0, A, _, A, X, X) :- !.
fuzzy_chunk(N, A, B, C, Y-D, X) :-
M is N-1,
D is A // B,
H is 10*(A - B*D),
fuzzy_chunk(M, H, B, C, Y, X).
Bye
Mild Shock schrieb:
Hi,
Rota often celebrated symbolic, analogical, and
conceptual understanding over brute calculation.
This philosophy has come full circle in modern AI:
- Large Language Models (LLMs) like GPT-4 don't
just store facts — they recognize patterns,
make analogies, and generate new structures
from old ones.
- Rota’s work in combinatorics, symbolic logic, and
operator theory is essentially pattern-based
manipulation — exactly the kind of reasoning LLMs
aim to emulate at scale.
Rota had a clear aesthetic. He valued clean formalisms,
symbolic beauty, and well-defined structures. Rota wanted
mathematics to mean something — to be not just correct,
but intelligible and expressive.
In contrast, modern AI (especially LLMs like GPT) thrives
on the messy, including: Noisy data , Inconsistency ,
Uncertainty, Contradiction. AI engineers today are mining
meaning from noise.
What counts as “structure” is often just the best
pragmatic/effective description available at that moment.
Bye
Mild Shock schrieb:
Hi,
Will the world build on American Stacks?
Or is the american dream over?
How it started, 1 month go:
Nvidia CEO Jensen Huang on AI, Musk and Trump
https://www.youtube.com/watch?v=c-XAL2oYelI
How its going, now:
Are you still talking about Jeffrey Epstein?
https://www.bbc.com/news/articles/cm2m879neljo
Bye
Mild Shock schrieb:
Inductive logic programming at 30
https://arxiv.org/abs/2102.10556
The paper contains not a single reference to autoencoders!
Still they show this example:
Fig. 1 ILP systems struggle with structured examples that
exhibit observational noise. All three examples clearly
spell the word "ILP", with some alterations: 3 noisy pixels,
shifted and elongated letters. If we would be to learn a
program that simply draws "ILP" in the middle of the picture,
without noisy pixels and elongated letters, that would
be a correct program.
I guess ILP is 30 years behind the AI boom. An early autoencoder
turned into transformer was already reported here (*):
SERIAL ORDER, Michael I. Jordan - May 1986
https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf
Well ILP might have its merits, maybe we should not ask
for a marriage of LLM and Prolog, but Autoencoders and ILP.
But its tricky, I am still trying to decode the da Vinci code of
things like stacked tensors, are they related to k-literal clauses?
The paper I referenced is found in this excellent video:
The Making of ChatGPT (35 Year History)
https://www.youtube.com/watch?v=OFS90-FX6pg
Hi,
Ok I have to correct myself: "Rational Term" was less
common; what was more in use was "Rational Trees",
but they might also have talked about finitely
represented infinite trees. Rational trees are themselves
probably an echo of Dmitry Mirimanoff's
(1861–1945) "extraordinaire" sets.
Dmitry Semionovitch Mirimanoff (Russian:
Дми́трий Семёнович Мирима́нов; 13 September 1861, Pereslavl-Zalessky, Russia – 5 January 1945, Geneva,
Switzerland) was a member of the Moscow Mathematical
Society in 1897.[1] And later became a doctor of
mathematical sciences in 1900, in Geneva, and
taught at the universities of Geneva and Lausanne. https://en.wikipedia.org/wiki/Dmitry_Mirimanoff
This year we can again celebrate another researcher,
who died in 2023, Peter Aczel R.I.P., who likewise
made some thoughtful deviations from orthodoxy.
Peter Aczel Memorial Conference on 10th September 2025.
Logic Colloquium will take place at the University
of Manchester (UK) from 11th to 12th September 2025 https://sites.google.com/view/blc2025/home
Have Fun!
Bye
Mild Shock schrieb:
Hi,
An example of human intelligence, is of course the
name "Rational Term" for cyclic terms set forth by
Alain Colmerauer. Since it plays with "Rational Numbers".
A subset of cyclic terms can indeed represent
rational numbers, and they give a nice counter
example to transitivity:
?- problem(X,Y,Z).
X = _S1-7-9-1, % where
_S1 = _S1-6-8-0-6-2-8,
Y = _S2-1-6-1-5-4-6-1, % where
_S2 = _S2-0-9-2,
Z = _S3-3-0, % where
_S3 = _S3-8-1
The Fuzzer 2 from 2025 does just what I did in 2023,
expanding rational numbers into rational terms:
% fuzzy(-Term)
fuzzy(X) :-
random_between(1,100,A),
random_between(1,100,B),
random_between(1,10,M),
fuzzy_chunk(M,A,B,C,X,Y),
random_between(1,10,L),
fuzzy_chunk(L,C,B,_,Y,Z),
Z = Y.
% fuzzy_chunk(+Integer,+Integer,+Integer,-Integer,+Term,-Term)
fuzzy_chunk(0, A, _, A, X, X) :- !.
fuzzy_chunk(N, A, B, C, Y-D, X) :-
M is N-1,
D is A // B,
H is 10*(A - B*D),
fuzzy_chunk(M, H, B, C, Y, X).
Bye
Mild Shock schrieb:
Hi,
Rota often celebrated symbolic, analogical, and
conceptual understanding over brute calculation.
This philosophy has come full circle in modern AI:
- Large Language Models (LLMs) like GPT-4 don't
just store facts — they recognize patterns,
make analogies, and generate new structures
from old ones.
- Rota’s work in combinatorics, symbolic logic, and
operator theory is essentially pattern-based
manipulation — exactly the kind of reasoning LLMs
aim to emulate at scale.
Rota had a clear aesthetic. He valued clean formalisms,
symbolic beauty, and well-defined structures. Rota wanted
mathematics to mean something — to be not just correct,
but intelligible and expressive.
In contrast, modern AI (especially LLMs like GPT) thrives
on the messy, including: Noisy data , Inconsistency ,
Uncertainty, Contradiction. AI engineers today are mining
meaning from noise.
What counts as “structure” is often just the best
pragmatic/effective description available at that moment.
Bye
Mild Shock schrieb:
Hi,
Will the world build on American Stacks?
Or is the american dream over?
How it started, 1 month go:
Nvidia CEO Jensen Huang on AI, Musk and Trump
https://www.youtube.com/watch?v=c-XAL2oYelI
How its going, now:
Are you still talking about Jeffrey Epstein?
https://www.bbc.com/news/articles/cm2m879neljo
Bye
Mild Shock schrieb:
Inductive logic programming at 30
https://arxiv.org/abs/2102.10556
The paper contains not a single reference to autoencoders!
Still they show this example:
Fig. 1 ILP systems struggle with structured examples that
exhibit observational noise. All three examples clearly
spell the word "ILP", with some alterations: 3 noisy pixels,
shifted and elongated letters. If we would be to learn a
program that simply draws "ILP" in the middle of the picture,
without noisy pixels and elongated letters, that would
be a correct program.
I guess ILP is 30 years behind the AI boom. An early autoencoder
turned into transformer was already reported here (*):
SERIAL ORDER, Michael I. Jordan - May 1986
https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf
Well ILP might have its merits, maybe we should not ask
for a marriage of LLM and Prolog, but Autoencoders and ILP.
But its tricky, I am still trying to decode the da Vinci code of
things like stacked tensors, are they related to k-literal clauses?
The paper I referenced is found in this excellent video:
The Making of ChatGPT (35 Year History)
https://www.youtube.com/watch?v=OFS90-FX6pg
Hi,
I am trying to verify my hypothesis
that Rocq is a dead horse. Dead
horses can come in different forms;
for example, a project that just
imitates what was already done by
its precursor is most likely a
dead horse. For example MetaRocq:
verifying a logic framework inside
some strong enough set theory
is not novel. Maybe they get more
out of doing MetaRocq:
MetaRocq is a project formalizing Rocq in Rocq https://github.com/MetaRocq/metarocq#papers
#50 Nicolas Tabareau
https://www.youtube.com/watch?v=8kwe24gvigk
Bye
Mild Shock schrieb:
Hi,
Ok I have to correct "Rational Term" was less
common, what was more in use "Rational Trees",
but they might have also talked about finitely
represented infinite tree. Rational trees itself
probably an echo from Dmitry Mirimanoffs
(1861–1945) “extraordinaire” sets.
Dmitry Semionovitch Mirimanoff (Russian:
Дми́трий Семёнович Мирима́нов; 13 September 1861, Pereslavl-Zalessky, Russia – 5 January 1945, Geneva,
Switzerland) was a member of the Moscow Mathematical
Society in 1897.[1] And later became a doctor of
mathematical sciences in 1900, in Geneva, and
taught at the universities of Geneva and Lausanne.
https://en.wikipedia.org/wiki/Dmitry_Mirimanoff
This year we can again celebrate another researcher,
who died in 2023, Peter Aczel R.I.P., who made
as well some thoughtful deviance from orthodoxy.
Peter Aczel Memorial Conference on 10th September 2025.
Logic Colloquium will take place at the University
of Manchester (UK) from 11th to 12th September 2025
https://sites.google.com/view/blc2025/home
Have Fun!
Bye
Mild Shock schrieb:
Hi,
An example of human intelligence, is of course the
name "Rational Term" for cyclic terms set forth by
Alain Colmerauer. Since it plays with "Rational Numbers".
A subset of cyclic terms can indeed represent
rational numbers, and they give a nice counter
example to transitivity:
?- problem(X,Y,Z).
X = _S1-7-9-1, % where
_S1 = _S1-6-8-0-6-2-8,
Y = _S2-1-6-1-5-4-6-1, % where
_S2 = _S2-0-9-2,
Z = _S3-3-0, % where
_S3 = _S3-8-1
The Fuzzer 2 from 2025 does just what I did in 2023,
expanding rational numbers into rational terms:
% fuzzy(-Term)
fuzzy(X) :-
random_between(1,100,A),
random_between(1,100,B),
random_between(1,10,M),
fuzzy_chunk(M,A,B,C,X,Y),
random_between(1,10,L),
fuzzy_chunk(L,C,B,_,Y,Z),
Z = Y.
% fuzzy_chunk(+Integer,+Integer,+Integer,-Integer,+Term,-Term)
fuzzy_chunk(0, A, _, A, X, X) :- !.
fuzzy_chunk(N, A, B, C, Y-D, X) :-
M is N-1,
D is A // B,
H is 10*(A - B*D),
fuzzy_chunk(M, H, B, C, Y, X).
Bye
Mild Shock schrieb:
Hi,
Rota often celebrated symbolic, analogical, and
conceptual understanding over brute calculation.
This philosophy has come full circle in modern AI:
- Large Language Models (LLMs) like GPT-4 don't
just store facts — they recognize patterns,
make analogies, and generate new structures
from old ones.
- Rota’s work in combinatorics, symbolic logic, and
operator theory is essentially pattern-based
manipulation — exactly the kind of reasoning LLMs
aim to emulate at scale.
Rota had a clear aesthetic. He valued clean formalisms,
symbolic beauty, and well-defined structures. Rota wanted
mathematics to mean something — to be not just correct,
but intelligible and expressive.
In contrast, modern AI (especially LLMs like GPT) thrives
on the messy, including: Noisy data , Inconsistency ,
Uncertainty, Contradiction. AI engineers today are mining
meaning from noise.
What counts as “structure” is often just the best
pragmatic/effective description available at that moment.
Bye
Mild Shock schrieb:
Hi,
Will the world build on American Stacks?
Or is the american dream over?
How it started, 1 month go:
Nvidia CEO Jensen Huang on AI, Musk and Trump
https://www.youtube.com/watch?v=c-XAL2oYelI
How its going, now:
Are you still talking about Jeffrey Epstein?
https://www.bbc.com/news/articles/cm2m879neljo
Bye
Mild Shock schrieb:
Inductive logic programming at 30
https://arxiv.org/abs/2102.10556
The paper contains not a single reference to autoencoders!
Still they show this example:
Fig. 1 ILP systems struggle with structured examples that
exhibit observational noise. All three examples clearly
spell the word "ILP", with some alterations: 3 noisy pixels,
shifted and elongated letters. If we would be to learn a
program that simply draws "ILP" in the middle of the picture,
without noisy pixels and elongated letters, that would
be a correct program.
I guess ILP is 30 years behind the AI boom. An early autoencoder
turned into transformer was already reported here (*):
SERIAL ORDER, Michael I. Jordan - May 1986
https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf
Well ILP might have its merits, maybe we should not ask
for a marriage of LLM and Prolog, but Autoencoders and ILP.
But its tricky, I am still trying to decode the da Vinci code of
things like stacked tensors, are they related to k-literal clauses?
The paper I referenced is found in this excellent video:
The Making of ChatGPT (35 Year History)
https://www.youtube.com/watch?v=OFS90-FX6pg
Hi,
The most radical approach is Novacore from
Dogelog Player. It consists of the following
major incisions in the ISO core standard:
- We do not forbid chars, like for example
using lists of the form [a,b,c], we also
provide char_code/2 predicate bidirectionally.
- We do not provide any _chars built-in
predicates; there is also nothing _strings. The
Prolog system is clever enough to not put
every atom it sees in an atom table. There
is only a predicate table.
- Some host languages have garbage collection that
deduplicates Strings. For example some Java
versions have an options to do that. But we
do not have any efforts to deduplicate atoms,
which are simply plain strings.
- Some languages have constant pools. For example
the Java byte code format includes a constant
pool in every class header. We do not do that
during transpilation , but we could of course.
But it begs the question, why only deduplicate
strings and not other constant expressions as well?
- We are totally happy that we have only codes,
there are chances that the host languages use
tagged pointers to represent them. So they
are represented similar to the tagged pointers
in SWI-Prolog which works for small integers.
- But the tagged pointer argument is moot,
since atom length=1 entities can be also
represented as tagged pointers, and some
programming languages do that. Dogelog Player
would use such tagged pointers without
poluting the atom table.
- What else?
Bye
Mild Shock schrieb:
Technically SWI-Prolog doesn't prefer codes.
Library `library(pure_input)` might prefer codes.
But this is again an issue of improving the
library by some non existent SWI-Prolog community.
The ISO core standard is silent about a flag
back_quotes, but has a lot of API requirements
that support both codes and chars, for example it
requires atom_codes/2 and atom_chars/2.
Implementation wise there can be an issue,
like one might decide to implement the atoms
of length=1 more efficiently, since with Unicode
there is now an explosion.
Not sure whether Trealla Prolog and Scryer
Prolog thought about this problem, that the
atom table gets quite large. Whereas codes don't
eat the atom table. Maybe they forbit predicates
that have an atom of length=1 head:
h(X) :-
write('Hello '), write(X), write('!'), nl.
Does this still work?
Mild Shock schrieb:
Concerning library(portray_text) which is in limbo:
Libraries are (often) written for either codes or chars, and thus the libraries make the choice.
But who writes these libraries? The SWI Prolog
community. And who doesn’t improve these libraries,
instead floods the web with workaround tips?
The SWI Prolog community.
Conclusion the SWI-Prolog community has itself
trapped in an ancient status quo, creating an island.
Cannot improve its own tooling, is not willing
to support code from else where that uses chars.
Same with the missed AI Boom.
(*) Code from elsewhere is dangerous, People
might use other Prolog systems than only SWI-Prolog,
like for exampe Trealla Prolog and Scryer Prolog.
(**) Keeping the status quo is comfy. No need to
think in terms of programm code. Its like biology
teachers versus pathology staff, biology teachers
do not everyday see opened corpses.
Mild Shock schrieb:
Inductive logic programming at 30
https://arxiv.org/abs/2102.10556
The paper contains not a single reference to autoencoders!
Still they show this example:
Fig. 1 ILP systems struggle with structured examples that
exhibit observational noise. All three examples clearly
spell the word "ILP", with some alterations: 3 noisy pixels,
shifted and elongated letters. If we would be to learn a
program that simply draws "ILP" in the middle of the picture,
without noisy pixels and elongated letters, that would
be a correct program.
I guess ILP is 30 years behind the AI boom. An early autoencoder
turned into transformer was already reported here (*):
SERIAL ORDER, Michael I. Jordan - May 1986
https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf
Well ILP might have its merits, maybe we should not ask
for a marriage of LLM and Prolog, but Autoencoders and ILP.
But its tricky, I am still trying to decode the da Vinci code of
things like stacked tensors, are they related to k-literal clauses?
The paper I referenced is found in this excellent video:
The Making of ChatGPT (35 Year History)
https://www.youtube.com/watch?v=OFS90-FX6pg
Hi,
I didn't expect the topic to be that rich.
The big challenge in a top-level display is
the "interleaving" of equations and their
factorization as well as the "inlining" of
equations with existing variable names,
Trying hard to do exactly that, I was looking
at these test cases:
?- [user].
p(X,Y) :- X = f(f(f(X))), Y = f(f(Y)).
p(X,Y) :- X = a(f(X,Y)), Y = b(g(X,Y)).
p(X,Y) :- X = s(s(X,Y),_), Y = s(Y,X).
Using cycle detection via (==)/2 I get:
/* Dogelog Player 1.3.5 */
?- p(X,Y).
X = f(X), Y = X;
X = a(f(X, Y)), Y = b(g(X, Y));
X = s(s(X, Y), _), Y = s(Y, X).
Using cycle detection via same_term/2 I get:
/* Dogelog Player 1.3.5 */
?- p(X,Y).
X = f(f(f(X))), Y = f(f(Y));
X = a(f(X, Y)), Y = b(g(X, Y));
X = s(s(X, Y), _), Y = s(Y, X).
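The two detectors differ only in what counts as "already seen" while walking the term; roughly (my own sketch, not the actual printer):
% seen_eq: a subterm counts as seen if it is structurally equal (==)
% to an ancestor, which is what folds f(f(f(X))) down to f(X).
seen_eq(T, Seen) :- member(S, Seen), S == T.
% seen_id: only the very same term cell counts, via same_term/2,
% so the answer keeps the shape the clause was written with.
seen_id(T, Seen) :- member(S, Seen), same_term(S, T).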
Cool!
Bye
Mild Shock schrieb:
Hi,
The most radical approach is Novacore from
Dogelog Player. It consists of the following
major incisions in the ISO core standard:
- We do not forbid chars, like for example
using lists of the form [a,b,c], we also
provide char_code/2 predicate bidirectionally.
- We do not provide any _chars built-in
predicates; there is also nothing _strings. The
Prolog system is clever enough to not put
every atom it sees in an atom table. There
is only a predicate table.
- Some host languages have garbage collection that
deduplicates Strings. For example some Java
versions have an options to do that. But we
do not have any efforts to deduplicate atoms,
which are simply plain strings.
- Some languages have constant pools. For example
the Java byte code format includes a constant
pool in every class header. We do not do that
during transpilation , but we could of course.
But it begs the question, why only deduplicate
strings and not other constant expressions as well?
- We are totally happy that we have only codes,
there are chances that the host languages use
tagged pointers to represent them. So they
are represented similar to the tagged pointers
in SWI-Prolog which works for small integers.
- But the tagged pointer argument is moot,
since atom length=1 entities can be also
represented as tagged pointers, and some
programming languages do that. Dogelog Player
would use such tagged pointers without
poluting the atom table.
- What else?
Bye
Mild Shock schrieb:
Technically SWI-Prolog doesn't prefer codes.
Library `library(pure_input)` might prefer codes.
But this is again an issue of improving the
library by some non existent SWI-Prolog community.
The ISO core standard is silent about a flag
back_quotes, but has a lot of API requirements
that support both codes and chars, for example it
requires atom_codes/2 and atom_chars/2.
Implementation wise there can be an issue,
like one might decide to implement the atoms
of length=1 more efficiently, since with Unicode
there is now an explosion.
Not sure whether Trealla Prolog and Scryer
Prolog thought about this problem, that the
atom table gets quite large. Whereas codes don't
eat the atom table. Maybe they forbit predicates
that have an atom of length=1 head:
h(X) :-
write('Hello '), write(X), write('!'), nl.
Does this still work?
Mild Shock schrieb:
Concerning library(portray_text) which is in limbo:
Libraries are (often) written for either codes or chars, and thus the libraries make the choice.
But who writes these libraries? The SWI Prolog
community. And who doesn’t improve these libraries,
instead floods the web with workaround tips?
The SWI Prolog community.
Conclusion the SWI-Prolog community has itself
trapped in an ancient status quo, creating an island.
Cannot improve its own tooling, is not willing
to support code from else where that uses chars.
Same with the missed AI Boom.
(*) Code from elsewhere is dangerous, People
might use other Prolog systems than only SWI-Prolog,
like for exampe Trealla Prolog and Scryer Prolog.
(**) Keeping the status quo is comfy. No need to
think in terms of programm code. Its like biology
teachers versus pathology staff, biology teachers
do not everyday see opened corpses.
Mild Shock schrieb:
Inductive logic programming at 30
https://arxiv.org/abs/2102.10556
The paper contains not a single reference to autoencoders!
Still they show this example:
Fig. 1 ILP systems struggle with structured examples that
exhibit observational noise. All three examples clearly
spell the word "ILP", with some alterations: 3 noisy pixels,
shifted and elongated letters. If we would be to learn a
program that simply draws "ILP" in the middle of the picture,
without noisy pixels and elongated letters, that would
be a correct program.
I guess ILP is 30 years behind the AI boom. An early autoencoder
turned into transformer was already reported here (*):
SERIAL ORDER, Michael I. Jordan - May 1986
https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf
Well ILP might have its merits, maybe we should not ask
for a marriage of LLM and Prolog, but Autoencoders and ILP.
But its tricky, I am still trying to decode the da Vinci code of
things like stacked tensors, are they related to k-literal clauses?
The paper I referenced is found in this excellent video:
The Making of ChatGPT (35 Year History)
https://www.youtube.com/watch?v=OFS90-FX6pg
Looks like sorting of rational trees
needs an existential type, if we go full “logical”.
If I use my old code from 2023 which computes
a finest (*), i.e. non-monster, bisimulation
pre-quotient (**) in prefix order:
factorize(T, _, T) --> {var(T)}, !.
factorize(T, C, V) --> {compound(T), member(S-V, C), S == T}, !.
factorize(T, C, V) --> {compound(T)}, !,
[V = S],
{T =.. [F|L]},
factorize_list(L, [T-V|C], R),
{S =.. [F|R]}.
factorize(T, _, T) --> [].
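(The factorize_list//3 helper is not shown above; my completion of it, just the obvious traversal that threads the context unchanged, would be:)
factorize_list([], _, []) --> [].
factorize_list([T|Ts], C, [R|Rs]) -->
    factorize(T, C, R),
    factorize_list(Ts, C, Rs).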
I see that it always generates new
intermediate variables:
?- X = f(f(X)), factorize(X, [], T, L, []), write(L-T), nl.
[_8066=f(_8066)]-_8066
?- X = f(f(X)), factorize(X, [], T, L, []), write(L-T), nl.
[_10984=f(_10984)]-_10984
What would be swell if it would generate an
existential quantifier, something like T^([T = f(T)]-T)
in the above case. Then using alpha conversion
different factorization runs would be equal,
when they only differ by the introduced
intermediate variables. But Prolog has no alpha
conversion, only λ-Prolog has such things.
So what can we do, how can we produce a
representation, that can be used for sorting?
(*) Why finest and not coarsest? Because it uses
non-monster instructions and not monster
instructions.
(**) Why only a pre-quotient? Because a
XXX_with_stack algorithm does not fully
deduplicate the equations; it would
probably need a XXX_with_memo algorithm.
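One possible direction for the sorting question (my own guess, not from the post): copy the flat factorization and rename its fresh variables canonically with numbervars/3, so that two runs that differ only in the intermediate variables yield the same key and can be compared with the standard order of terms:
canonical_key(Eqs-T, Key) :-
    copy_term(Eqs-T, Key),
    numbervars(Key, 0, _).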
Mild Shock schrieb:
Inductive logic programming at 30
https://arxiv.org/abs/2102.10556
The paper contains not a single reference to autoencoders!
Still they show this example:
Fig. 1 ILP systems struggle with structured examples that
exhibit observational noise. All three examples clearly
spell the word "ILP", with some alterations: 3 noisy pixels,
shifted and elongated letters. If we would be to learn a
program that simply draws "ILP" in the middle of the picture,
without noisy pixels and elongated letters, that would
be a correct program.
I guess ILP is 30 years behind the AI boom. An early autoencoder
turned into transformer was already reported here (*):
SERIAL ORDER, Michael I. Jordan - May 1986
https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf
Well ILP might have its merits, maybe we should not ask
for a marriage of LLM and Prolog, but Autoencoders and ILP.
But its tricky, I am still trying to decode the da Vinci code of
things like stacked tensors, are they related to k-literal clauses?
The paper I referenced is found in this excellent video:
The Making of ChatGPT (35 Year History)
https://www.youtube.com/watch?v=OFS90-FX6pg
Hi,
So do we see a new wave of interest in bisimulation,
especially in computing existential types for all
kinds of things? It seems so; quite a fascinating find:
BQ-NCO: Bisimulation Quotienting for Efficient
Neural Combinatorial Optimization
https://arxiv.org/abs/2301.03313
It has none other than Jean-Marc Andreoli on the
author list. Possibly the same guy from the earlier
Focusing and Linear Logic work, who was associated with
ECRC Munich in the 1990s, but is now working for naverlabs.com.
Bye
Mild Shock schrieb:
Looks like sorting of rational trees
needs an existential type, if we go full “logical”.
If I use my old code from 2023 which computes
a finest (*), i.e. non-monster, bisimulation
pre-quotient (**) in prefix order:
factorize(T, _, T) --> {var(T)}, !.
factorize(T, C, V) --> {compound(T), member(S-V, C), S == T}, !.
factorize(T, C, V) --> {compound(T)}, !,
[V = S],
{T =.. [F|L]},
factorize_list(L, [T-V|C], R),
{S =.. [F|R]}.
factorize(T, _, T) --> [].
I see that it always generates new
intermediate variables:
?- X = f(f(X)), factorize(X, [], T, L, []), write(L-T), nl.
[_8066=f(_8066)]-_8066
?- X = f(f(X)), factorize(X, [], T, L, []), write(L-T), nl.
[_10984=f(_10984)]-_10984
What would be swell if it would generate an
existential quantifier, something like T^([T = f(T)]-T)
in the above case. Then using alpha conversion
different factorization runs would be equal,
when they only differ by the introduced
intermediate variables. But Prolog has no alpha
conversion, only λ-Prolog has such things.
So what can we do, how can we produce a
representation, that can be used for sorting?
(*) Why finest and not coarsest? Because it uses
non-monster instructions and not monster
instructions.
(**) Why only pre-quotient? Because a
XXX_with_stack algorithm does not fully
deduplicate the equations, would
probably need a XXX_with_memo algorithm.
Mild Shock schrieb:
Inductive logic programming at 30
https://arxiv.org/abs/2102.10556
The paper contains not a single reference to autoencoders!
Still they show this example:
Fig. 1 ILP systems struggle with structured examples that
exhibit observational noise. All three examples clearly
spell the word "ILP", with some alterations: 3 noisy pixels,
shifted and elongated letters. If we would be to learn a
program that simply draws "ILP" in the middle of the picture,
without noisy pixels and elongated letters, that would
be a correct program.
I guess ILP is 30 years behind the AI boom. An early autoencoder
turned into transformer was already reported here (*):
SERIAL ORDER, Michael I. Jordan - May 1986
https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf
Well ILP might have its merits, maybe we should not ask
for a marriage of LLM and Prolog, but Autoencoders and ILP.
But its tricky, I am still trying to decode the da Vinci code of
things like stacked tensors, are they related to k-literal clauses?
The paper I referenced is found in this excellent video:
The Making of ChatGPT (35 Year History)
https://www.youtube.com/watch?v=OFS90-FX6pg
you do need a theory of terms, and a specific one
Hi,
Ok I have to correct "Rational Term" was less
common, what was more in use "Rational Trees",
but they might have also talked about finitely
represented infinite tree. Rational trees itself
probably an echo from Dmitry Mirimanoffs
(1861–1945) “extraordinaire” sets.
Dmitry Semionovitch Mirimanoff (Russian:
Дми́трий Семёнович Мирима́нов; 13 September 1861, Pereslavl-Zalessky, Russia – 5 January 1945, Geneva,
Switzerland) was a member of the Moscow Mathematical
Society in 1897.[1] And later became a doctor of
mathematical sciences in 1900, in Geneva, and
taught at the universities of Geneva and Lausanne. https://en.wikipedia.org/wiki/Dmitry_Mirimanoff
This year we can again celebrate another researcher,
who died in 2023, Peter Aczel R.I.P., who made
as well some thoughtful deviance from orthodoxy.
Peter Aczel Memorial Conference on 10th September 2025.
Logic Colloquium will take place at the University
of Manchester (UK) from 11th to 12th September 2025 https://sites.google.com/view/blc2025/home
Have Fun!
Bye
Mild Shock schrieb:
Hi,
An example of human intelligence, is of course the
name "Rational Term" for cyclic terms set forth by
Alain Colmerauer. Since it plays with "Rational Numbers".
A subset of cyclic terms can indeed represent
rational numbers, and they give a nice counter
example to transitivity:
?- problem(X,Y,Z).
X = _S1-7-9-1, % where
_S1 = _S1-6-8-0-6-2-8,
Y = _S2-1-6-1-5-4-6-1, % where
_S2 = _S2-0-9-2,
Z = _S3-3-0, % where
_S3 = _S3-8-1
The Fuzzer 2 from 2025 does just what I did in 2023,
expanding rational numbers into rational terms:
% fuzzy(-Term)
fuzzy(X) :-
random_between(1,100,A),
random_between(1,100,B),
random_between(1,10,M),
fuzzy_chunk(M,A,B,C,X,Y),
random_between(1,10,L),
fuzzy_chunk(L,C,B,_,Y,Z),
Z = Y.
% fuzzy_chunk(+Integer,+Integer,+Integer,-Integer,+Term,-Term)
fuzzy_chunk(0, A, _, A, X, X) :- !.
fuzzy_chunk(N, A, B, C, Y-D, X) :-
M is N-1,
D is A // B,
H is 10*(A - B*D),
fuzzy_chunk(M, H, B, C, Y, X).
Bye
Mild Shock schrieb:
Hi,
Rota often celebrated symbolic, analogical, and
conceptual understanding over brute calculation.
This philosophy has come full circle in modern AI:
- Large Language Models (LLMs) like GPT-4 don't
just store facts — they recognize patterns,
make analogies, and generate new structures
from old ones.
- Rota’s work in combinatorics, symbolic logic, and
operator theory is essentially pattern-based
manipulation — exactly the kind of reasoning LLMs
aim to emulate at scale.
Rota had a clear aesthetic. He valued clean formalisms,
symbolic beauty, and well-defined structures. Rota wanted
mathematics to mean something — to be not just correct,
but intelligible and expressive.
In contrast, modern AI (especially LLMs like GPT) thrives
on the messy, including: Noisy data , Inconsistency ,
Uncertainty, Contradiction. AI engineers today are mining
meaning from noise.
What counts as “structure” is often just the best
pragmatic/effective description available at that moment.
Bye
Mild Shock schrieb:
Hi,
Will the world build on American Stacks?
Or is the american dream over?
How it started, 1 month go:
Nvidia CEO Jensen Huang on AI, Musk and Trump
https://www.youtube.com/watch?v=c-XAL2oYelI
How its going, now:
Are you still talking about Jeffrey Epstein?
https://www.bbc.com/news/articles/cm2m879neljo
Bye
Mild Shock schrieb:
Inductive logic programming at 30
https://arxiv.org/abs/2102.10556
The paper contains not a single reference to autoencoders!
Still they show this example:
Fig. 1 ILP systems struggle with structured examples that
exhibit observational noise. All three examples clearly
spell the word "ILP", with some alterations: 3 noisy pixels,
shifted and elongated letters. If we would be to learn a
program that simply draws "ILP" in the middle of the picture,
without noisy pixels and elongated letters, that would
be a correct program.
I guess ILP is 30 years behind the AI boom. An early autoencoder
turned into transformer was already reported here (*):
SERIAL ORDER, Michael I. Jordan - May 1986
https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf
Well ILP might have its merits, maybe we should not ask
for a marriage of LLM and Prolog, but Autoencoders and ILP.
But its tricky, I am still trying to decode the da Vinci code of
things like stacked tensors, are they related to k-literal clauses?
The paper I referenced is found in this excellent video:
The Making of ChatGPT (35 Year History)
https://www.youtube.com/watch?v=OFS90-FX6pg
Hi,
you do need a theory of terms, and a specific one
You could pull an Anti-Ackermann. Negate the
infinity axiom like Ackermann did here, where
he also kept the regularity axiom:
Die Widerspruchsfreiheit der allgemeinen Mengenlehre
Ackermann, Wilhelm - 1937 https://www.digizeitschriften.de/id/235181684_0114%7Clog23
But instead of Ackermann, you get an Anti-(Foundation)
Ackermann if you drop the regularity axiom. As a result,
you get a lot of exotic sets, among which are also the
famous Quine atoms:
x = {x}
Funny that in the setting I just described, where
there is the negation of the infinity axiom, i.e.
all sets are finite, contrary to the usual vulgar
view x = {x} is a finite object. Just like in Prolog
X = f(X) is in principle a finite object, it has
only one subtree, or what Alain Colmerauer
already postulated:
Definition: a "rational" tree is a tree which
has a finite set of subtrees.
Bye
seemingly interesting paper. In stead
particular, his final coa[l]gebra theorem
DIE ANTINOMIEN DER MENGENLEHRE, E. Specker, Dialectica, Vol. 8, No. 3 (15. 9. 1954) https://www.jstor.org/stable/42964119?seq=7
Hi,
That is extremely embarrassing. I don't know
what you are bragging about, when you wrote
the below. You are wrestling with a ghost!
Maybe you didn't follow my superb link:
seemingly interesting paper. In stead
particular, his final coa[l]gebra theorem
The link behind Hopcroft and Karp (1971) I
gave, which is a Bisimulation and Equirecursive
Equality hand-out, has a coalgebra example,
I used to derive pairs.pl from:
https://www.cs.cornell.edu/courses/cs6110/2014sp/Lectures/lec35a.pdf
Bye
Mild Shock schrieb:
Inductive logic programming at 30
https://arxiv.org/abs/2102.10556
The paper contains not a single reference to autoencoders!
Still they show this example:
Fig. 1 ILP systems struggle with structured examples that
exhibit observational noise. All three examples clearly
spell the word "ILP", with some alterations: 3 noisy pixels,
shifted and elongated letters. If we would be to learn a
program that simply draws "ILP" in the middle of the picture,
without noisy pixels and elongated letters, that would
be a correct program.
I guess ILP is 30 years behind the AI boom. An early autoencoder
turned into transformer was already reported here (*):
SERIAL ORDER, Michael I. Jordan - May 1986
https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf
Well ILP might have its merits, maybe we should not ask
for a marriage of LLM and Prolog, but Autoencoders and ILP.
But its tricky, I am still trying to decode the da Vinci code of
things like stacked tensors, are they related to k-literal clauses?
The paper I referenced is found in this excellent video:
The Making of ChatGPT (35 Year History)
https://www.youtube.com/watch?v=OFS90-FX6pg
Hi,
My beloved logic professor introduced non-wellfoundedness
in the form of library index cards (translated here from his German):
Imagine a card index, on whose cards other cards
of the same card index are listed. An example of such
a card index would be the following: we have
three cards a, b, c; a lists a and b, b
lists the cards a and c, c lists the card b: a = (a, b),
b = (a, c), c = (b). Corresponding to the sets
that do not contain themselves as an element,
we ask for the cards that do not list
themselves. The card a is the only one
that lists itself; b and c are thus
the cards that do not list themselves.
He then concludes that the non-wellfounded setting still has
the Russell paradox, and hence also the productive form of it:
Thus in every card index there is a
totality G of cards for which there is no card
that lists exactly the cards of G. (For finite
card indexes this is fairly obvious,
but we also want to take infinite card indexes
into consideration.) This theorem does of course
not rule out that it is always possible to
produce a card that lists exactly the cards of G
and to place it into the card index.
Only we must, with the possi-
What is your opinion? Excerpt from:
DIE ANTINOMIEN DER MENGENLEHRE, E. Specker, Dialectica, Vol. 8, No. 3 (15. 9. 1954) https://www.jstor.org/stable/42964119?seq=7
Bye
Mild Shock schrieb:
Hi,
That is extremly embarassing. I don’t know
what you are bragging about, when you wrote
the below. You are wrestling with a ghost!
Maybe you didn’t follow my superbe link:
seemingly interesting paper. In stead
particular, his final coa[l]gebra theorem
The link behind Hopcroft and Karp (1971) I
gave, which is a Bisimulation and Equirecursive
Equality hand-out, has a coalgebra example,
I used to derive pairs.pl from:
https://www.cs.cornell.edu/courses/cs6110/2014sp/Lectures/lec35a.pdf
Bye
Mild Shock schrieb:
Inductive logic programming at 30
https://arxiv.org/abs/2102.10556
The paper contains not a single reference to autoencoders!
Still they show this example:
Fig. 1 ILP systems struggle with structured examples that
exhibit observational noise. All three examples clearly
spell the word "ILP", with some alterations: 3 noisy pixels,
shifted and elongated letters. If we would be to learn a
program that simply draws "ILP" in the middle of the picture,
without noisy pixels and elongated letters, that would
be a correct program.
I guess ILP is 30 years behind the AI boom. An early autoencoder
turned into transformer was already reported here (*):
SERIAL ORDER, Michael I. Jordan - May 1986
https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf
Well ILP might have its merits, maybe we should not ask
for a marriage of LLM and Prolog, but Autoencoders and ILP.
But its tricky, I am still trying to decode the da Vinci code of
things like stacked tensors, are they related to k-literal clauses?
The paper I referenced is found in this excellent video:
The Making of ChatGPT (35 Year History)
https://www.youtube.com/watch?v=OFS90-FX6pg
I guess there is a bug in preparing flat terms vector
Hi,
Take this exercise. Exercise 4.1 Draw the tree
represented by the term n1(n2(n4),n3(n5,n6)). https://book.simply-logical.space/src/text/2_part_ii/4.1.html
Maybe there was a plan that SWISH can draw trees,
and it could be that something was implemented as well.
But I don't see anything dynamic working on the
above web site link. Next challenge for Simply Logical,
in another life. Draw a rational tree.
The Prolog system has them:
/* SWI-Prolog 9.3.26 */
?- X = a(Y,_), Y = b(X,_).
X = a(b(X, _A), _),
Y = b(X, _A).
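As a rough cut at that challenge, here is a sketch that prints a term as an
indented tree and prints a back-reference instead of looping on a rational
tree (draw_tree/1 is my own naming, nothing that SWISH actually ships):

% Print T as an indented tree; Seen holds the ancestor nodes, so a
% back-edge of a rational tree is shown as <loop> instead of recursing
% forever.
draw_tree(T) :-
    draw_tree(T, 0, []).

draw_tree(T, Depth, Seen) :-
    tab(Depth),
    (   ancestor(T, Seen)
    ->  write('<loop>'), nl
    ;   compound(T)
    ->  functor(T, F, N),
        write(F), nl,
        Depth1 is Depth + 2,
        forall(between(1, N, I),
               ( arg(I, T, A),
                 draw_tree(A, Depth1, [T|Seen]) ))
    ;   write(T), nl
    ).

ancestor(T, [S|_])    :- S == T, !.
ancestor(T, [_|Seen]) :- ancestor(T, Seen).

% ?- draw_tree(n1(n2(n4), n3(n5, n6))).
% ?- X = a(Y, _), Y = b(X, _), draw_tree(X).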
Bye
I guess there is a bug in preparing flat terms vector
I will give you a gold medal 🥇 if you can prove
correct a compare_index/3 that uses this rule. It
was already shown impossible by Matt Carlson.
There are alternative approaches that can achieve
transitivity, but they do not use the step below
inside some compare_index/3:
% Compare the arguments of X and Y pairwise from position I upward;
% stop with C = D at the first argument pair that compares as
% non-equal, otherwise fall through to the second clause (all
% arguments exhausted) and report equality.
compare_term_args(I, C, X, Y, A, H) :-
    arg(I, X, K),
    arg(I, Y, L),
    !,
    compare_index(D, K, L, A, H),
    (   D = (=)
    ->  I0 is I + 1,
        compare_term_args(I0, C, X, Y, A, H)
    ;   C = D
    ).
compare_term_args(_, =, _, _, _, _).
Maybe there is a grain of truth in invoking the
Axiom of Choice (AC) in some previous posts.
Although the Axiom of Choice is not needed for
finite sets, they do have some form of choice anyway.
BTW: when Peter Aczel writes ZFC-, does he mean
ZFC without AC? But he doesn't show any
compare/3.
Mild Shock schrieb:
Hi,
Take this exercise. Exercise 4.1 Draw the tree
represented by the term n1(n2(n4),n3(n5,n6)).
https://book.simply-logical.space/src/text/2_part_ii/4.1.html
Maybe there was a plan that SWISH can draw trees,
and it could be that something was implemented as well.
But I don't see anything dynamic working on the
above web site link. Next challenge for Simply Logical,
in another life. Draw a rational tree.
The Prolog system has them:
/* SWI-Prolog 9.3.26 */
?- X = a(Y,_), Y = b(X,_).
X = a(b(X, _A), _),
Y = b(X, _A).
Bye
Hi,
Did the old-school logicians waste time
on compare/3? I guess not:
Ernst Specker, my beloved professor, and
Dana Scott made do with only a partial order.
Such a partial order may lack transitivity
of (<'):
"Scott's model construction is in fact
closely related to Specker's but there
is a subtle difference in the notion of
tree that they use. In fact neither of
them formulate their notion of tree in
terms of graphs but rather in terms of
what it will be convenient here to
call tree-partial-orderings."
See here:
NON-WELL-FOUNDED SETS
Peter Aczel - 1988 https://les-mathematiques.net/vanilla/uploads/editor/fh/v4pi6qyxfbel.pdf
There is also the notion of co-well-
foundedness, something like Noetherian but
upside down, i.e. certain ascending
chains stabilizing.
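For reference, and not as a quote from Aczel, the two dual chain
conditions in their usual formulation:

Well-founded: there is no infinite descending chain
$x_0 > x_1 > x_2 > \dots$

Co-well-founded (Noetherian, ascending chain condition): every ascending
chain $x_0 \le x_1 \le x_2 \le \dots$ stabilizes, i.e.
$\exists n\, \forall m \ge n:\ x_m = x_n$.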
Bye
Mild Shock schrieb:
I guess there is a bug in preparing flat terms vector
I give you a gold medal 🥇, if you can prove a
compare_index/3 correct that uses this rule. It
was already shown impossible by Matt Carlson.
There are alternative approaches that can reach
transitivity, but do not use the below step
inside some compare_index/3.
compare_term_args(I, C, X, Y, A, H):-
arg(I, X, K),
arg(I, Y, L),
!,
compare_index(D, K, L, A, H),
( D = (=) ->
I0 is I + 1,
compare_term_args(I0, C, X, Y, A, H)
; C = D
).
compare_term_args(_ ,= , _, _, _, _).
Maybe there is a grain of salt of invoking the
Axiom of Choice (AC) in some previous posts.
Although the Axiom of Choice is not needed for
finite sets, they have anyway some choice.
BTW: When Peter Aczel writes ZFC-, he then
means ZFC without AC, right? But he doesn’t
show some compare/3 .
Mild Shock schrieb:
Hi,
Take this exercise. Exercise 4.1 Draw the tree
represented by the term n1(n2(n4),n3(n5,n6)).
https://book.simply-logical.space/src/text/2_part_ii/4.1.html
Maybe there was a plan that SWISH can draw trees,
and it could be that something was implemented as well.
But I don't see anything dynamic working on the
above web site link. Next challenge for Simply Logical,
in another life. Draw a rational tree.
The Prolog system has them:
/* SWI-Prolog 9.3.26 */
?- X = a(Y,_), Y = b(X,_).
X = a(b(X, _A), _),
Y = b(X, _A).
Bye