don’t think it could really do
the “ghost text” effect.
It wouldn’t do the ghost text, only assist
it. There was a misunderstanding about how “ghost
text” works. Maybe you were thinking that
the “ghost text” is part of the first response.
But usually the “ghost text” is a second response:
waiting for completion candidates to be suggested
Well, you don’t use it for your primary
typing completion, which should preferably be fast.
The first response might give context information
for the second request, which provides a
different type of completion.
But the first response is not responsible
for any timing towards the second request.
That happens in the client anyway. And it
doesn’t hurt if the first response is
from a stupid channel.
Web 2.0 is all about incremental content!
Prolog missed the Web 2.0 bandwagon. Unlike
Web 1.0, which is static content, Web 2.0 is
all about dynamic content, including building
content incrementally.
JetBrains just created Mellum; it's open source,
and its ghost texts are code snippets. So it's
more like recalling typing macros and giving
them a good guess, not completing
partial identifiers:
Why Did JetBrains Create Mellum?
https://www.youtube.com/watch?v=7TqkvVXKxFA
LLM optimized for code-related tasks. https://huggingface.co/JetBrains/Mellum-4b-base
A flesh-and-blood cooperative multitasking Prolog system
is sometimes tricky to do. We have been agonizing over the
last days about how to test our timers and tasks.
Our existing framework doesn't work, since it neither
waits for a timer callback to fire and complete,
nor for a task to complete. But it seems it's just an
instance of a Promise again.
Turn the test case itself into a Promise, and wait for
it. In Prolog terms, the test case is a success when the
.then() port gets reached with SUCCESS, and it's a failure
if the .then() port gets reached with FAILURE or if the
.catch() port gets reached. An interesting framework
that does just that, whereby they use assert to turn
FAILURE into an exception:
Node.js v20.0.0 - The test runner is now stable.
https://nodejs.org/api/test.html#describe-and-it-aliases
BTW: Quite inventive vocabulary...
Hi,
Now that SWI-Prolog has amassed a quarter million
student notebooks, the SWI-Prolog Discourse
has become a cesspool of stupid teachers
asking stupid questions. Development and
innovation in Prolog have totally stalled.
All Prolog systems are based on completely
silly WAM or ZIP, and cannot run this trivial
constant caching test case in linear time:
data(1,[0,1,2,3,4,5,6,7,8,9]).
data(2,[0,1,2,3,4,5,6,7,8,9,0,1,2,3,4,5,6,7,8,9]).
data(3,[0,1,2,3,4,5,6,7,8,9,0,1,2,3,4,5,6,7,8,9,0,1,2,3,4,5,6,7,8,9]).
test(N) :- between(1,1000000,_), data(N, _), fail; true.
Here are some results:
/* Trealla Prolog 2.74.10 */
?- between(1,3,N), time(test(N)), fail; true.
% Time elapsed 0.236s, 3000004 Inferences, 12.692 MLips
% Time elapsed 0.318s, 3000004 Inferences, 9.429 MLips
% Time elapsed 0.371s, 3000004 Inferences, 8.095 MLips
/* Scryer Prolog 0.9.4-411 */
?- between(1,3,N), time(test(N)), fail; true.
% CPU time: 0.793s, 7_000_100 inferences
% CPU time: 1.150s, 7_000_100 inferences
% CPU time: 1.481s, 7_000_100 inferences
Guess what formerly Jekejeke Prolog and Dogelog
Player show? They are not based on WAM or ZIP.
It's rather DAM, the Dogelog Abstract Machine.
Bye
Hi,
Of course teachers have better quality
than nerds when they formulate questions.
Better researched. But the goal of a teacher
is always orthodoxification. So essentially
the SWI-Prolog Discourse is abused as a wiki,
with dozens of questions and answers harnessing
hundreds of links. The food that teachers need.
Bye
P.S.: It's obvious what is killed in the process:
- Get rid of silly WAM and ZIP!
- Going towards Web 2.0 with Prolog
- The AI Boom and Prolog
- What else...?
Again JavaScript shines since the keyword "async"
makes the difference. We have recently experienced
its benefit, since we could remove all new Promise()
calls in our code where we are juggling with tasks.
new Promise() is only needed for callbacks that
then call resolve() or reject(), but tasks can
just use await and try/catch. Now, without
the keyword, it's a traditional test case:
const test = require('node:test');
const assert = require('node:assert');

test('synchronous failing test', (t) => {
  // This test fails because it throws an exception.
  assert.strictEqual(1, 2);
});
With the keyword, it's a test case that
can test timers and tasks:
test('asynchronous passing test', async (t) => {
  // This test passes because the Promise returned by the async
  // function is settled and not rejected.
  assert.strictEqual(1, 1);
});
Hi,
ISO is losing it because it gives in to teachers.
GUPU from Ulrich Neumerkel is also a teaching project.
Notebooks can also be viewed as a teaching project.
Still, there were once rumors that Prolog was used
in industry. But this was long, long ago, and these
roots are possibly totally gone.
I don't believe anybody is using CLP or s(CASP).
Or CLP(Z) from Scryer Prolog. Also, the USA
compiler builders are totally clueless about logic,
and the USA is dominant when it comes to compiler
building. Take this dissertation:
Combining Analyses, Combining Optimizations
Clifford Noel Click, Jr. - February, 1995
He doesn't know a bit about how conditional constant
propagation relates to logic.
Bye
P.S.: Compiler builders never had a formal education
in mathematical logic. Not enough time. They
were always busy guzzling machine code
operations, building highly sophisticated tables
that describe the machine code operations and
building similarly highly sophisticated backends
that are sniffing these tables. You don't find
such people in Prolog anymore. Somebody who
knows assembly, just like Linus Torvalds started...
Hi,
Why only phrase_from_file/2 and not also
phrase_from_url/2? It's not that difficult to
do; you can do it with change_arg/2 and
nothing else! Let's see what we have so far:
Trealla Prolog:
Based on memory-mapping chars. So basically
this could be judged as a further argument
in favor of chars versus codes. But it's
not Web 2.0; it works only for files.
https://github.com/trealla-prolog/trealla/blob/main/library/pio.pl
SWI-Prolog:
Based on turning a stream into a lazy list.
Requires attributed variables and repositionable
streams. The stream is opened with open/3 but
maybe could be opened with http_open/3 as well?
https://github.com/SWI-Prolog/swipl-devel/blob/master/library/pio.pl
To be continued...
Bye