• Prolog missed the Web 2.0 Bandwagon

    From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Jun 20 21:13:14 2025
    From Newsgroup: comp.lang.prolog

    Web 2.0 is all about incremental content!

    I don’t think it could really do
    the “ghost text” effect.

    It wouldn’t do the ghost text, only assist
    it. There was a misunderstanding of how “ghost
    texts” work. Maybe you were thinking that
    the “ghost text” is part of the first response.

    But usually the “ghost text” is a second response:

    waiting for completion candidates to be suggested

    Well, you don’t use it for your primary
    typing completion, which should be fast.
    The first response might give context information
    for the second request, which provides a
    different type of completion.

    But the first response is not responsible
    for any timing towards the second request;
    that happens in the client anyway. And it
    doesn’t hurt if the first response is
    from a stupid channel.

    Web 2.0 is all about incremental content!
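    The two-request flow described above can be sketched in JavaScript. All function
    and field names here are invented for illustration, and the slow second channel
    is simulated with a timer:

    ```javascript
    // Hypothetical sketch of the two-request completion flow: a fast
    // primary completion request, then a slower ghost-text request that
    // may reuse context from the first response.

    // Fast primary completion: returns quickly, e.g. identifier candidates.
    async function primaryCompletion(prefix) {
      return { candidates: [prefix + '_bar'], context: { scope: 'demo' } };
    }

    // Slower ghost-text request: a second response of a different type.
    async function ghostText(prefix, context) {
      await new Promise(resolve => setTimeout(resolve, 20)); // slow channel
      return prefix + '_bar(X) :- true.';
    }

    // The client drives the timing between the two requests; the first
    // response is not responsible for when the second one fires.
    async function complete(prefix) {
      const first = await primaryCompletion(prefix);
      const ghost = await ghostText(prefix, first.context);
      return { candidates: first.candidates, ghost };
    }
    ```

    The point of the split is visible in the code: even if ghostText() is slow,
    primaryCompletion() has already returned its candidates.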
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Jun 20 21:18:40 2025
    From Newsgroup: comp.lang.prolog

    Prolog missed the Web 2.0 Bandwagon. Unlike
    Web 1.0, which is static content, Web 2.0 is
    all about dynamic content, including building
    content incrementally.

    IntelliJ just created Mellum; it's open source,
    and its ghost texts are code snippets. So it's
    more like recalling typing macros, giving
    them a good guess, not completing
    partial identifiers:

    Why Did JetBrains Create Mellum?
    https://www.youtube.com/watch?v=7TqkvVXKxFA

    LLM optimized for code-related tasks.
    https://huggingface.co/JetBrains/Mellum-4b-base

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Jun 20 21:41:46 2025
    From Newsgroup: comp.lang.prolog

    Disclaimer: I didn’t double-check whether
    there were already some LLM posts on the SWI-Prolog
    Discourse that would approach a solution.

    The LSP and the Relay could access the same
    end-user code repository. Not all data would
    need a round trip through the client to go
    from the first request to the second request.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Mon Jun 23 13:05:27 2025
    From Newsgroup: comp.lang.prolog

    A flesh-and-blood cooperative multitasking Prolog system
    is sometimes tricky to do. We were agonizing over the
    last few days about how we could test our timers and tasks.

    Our existing framework doesn't work, since it neither
    waits for a timer callback to be fired and to complete,
    nor for a task to complete. But it seems it's just an
    instance of a Promise again.

    Turn the test case itself into a Promise, and wait for
    it. In Prolog terms, the test case is a success when the
    .then() port gets reached with SUCCESS, and it's a failure
    if the .then() port gets reached with FAILURE or if the
    .catch() port gets reached. An interesting framework
    does just that, whereby they use assert to turn
    FAILURE into an exception:

    Node.js v20.0.0 - The test runner is now stable.
    https://nodejs.org/api/test.html#describe-and-it-aliases

    BTW: Quite inventive vocabulary...
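    A minimal sketch of that idea, assuming a plain setTimeout-based timer; the
    SUCCESS name and the "port" reading follow the post, not any particular
    framework:

    ```javascript
    // Sketch: wrap a timer-callback test in a Promise so a test runner
    // can wait for it. Reaching resolve('SUCCESS') is the success port;
    // a thrown exception travels to the .catch() port.

    function runTimerTest(body) {
      return new Promise((resolve, reject) => {
        setTimeout(() => {
          try {
            body();               // the actual test callback
            resolve('SUCCESS');   // .then() port with SUCCESS
          } catch (err) {
            reject(err);          // .catch() port
          }
        }, 10);
      });
    }
    ```

    A failing body throws, so the returned Promise rejects, which is exactly
    what lets a runner await the outcome of a timer callback.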



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Mon Jun 23 13:10:10 2025
    From Newsgroup: comp.lang.prolog

    Again JavaScript shines, since the keyword "async"
    makes the difference. We have recently experienced
    its benefit, since we could remove all new Promise()
    calls in our code where we are juggling with tasks.

    new Promise() is only needed for callbacks that
    then call resolve() or reject(), but a task can
    just use await and try/catch. Without
    the keyword it's a traditional test case:

    test('synchronous failing test', (t) => {
      // This test fails because it throws an exception.
      assert.strictEqual(1, 2);
    });

    With the keyword it's a test case that
    can test timers and tasks:

    test('asynchronous passing test', async (t) => {
      // This test passes because the Promise returned by the async
      // function is settled and not rejected.
      assert.strictEqual(1, 1);
    });
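    For timers specifically, node.js ships promisified timers, so even the wait
    itself needs no hand-written new Promise(). A sketch using the standard
    node:timers/promises module:

    ```javascript
    // Sketch: an async task awaiting a timer without any explicit
    // new Promise() at the call site; node's promisified setTimeout
    // does the callback wrapping once, inside the library.
    const { setTimeout: sleep } = require('node:timers/promises');

    async function timerTask() {
      await sleep(10);   // cooperative wait, no callback juggling
      return 'done';
    }
    ```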

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Tue Jun 24 01:06:28 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Now that SWI-Prolog has amassed a quarter million
    student notebooks, the SWI-Prolog Discourse
    has become a cesspool of stupid teachers
    asking stupid questions. Development and
    innovation in Prolog have totally stalled.
    All Prolog systems are based on the completely
    silly WAM or ZIP, and cannot run this trivial
    constant caching test case in linear time:

    data(1,[0,1,2,3,4,5,6,7,8,9]).
    data(2,[0,1,2,3,4,5,6,7,8,9,0,1,2,3,4,5,6,7,8,9]).
    data(3,[0,1,2,3,4,5,6,7,8,9,0,1,2,3,4,5,6,7,8,9,0,1,2,3,4,5,6,7,8,9]).

    test(N) :- between(1,1000000,_), data(N, _), fail; true.

    Here some results:

    /* Trealla Prolog 2.74.10 */

    ?- between(1,3,N), time(test(N)), fail; true.
    % Time elapsed 0.236s, 3000004 Inferences, 12.692 MLips
    % Time elapsed 0.318s, 3000004 Inferences, 9.429 MLips
    % Time elapsed 0.371s, 3000004 Inferences, 8.095 MLips

    /* Scryer Prolog 0.9.4-411 */

    ?- between(1,3,N), time(test(N)), fail; true.
    % CPU time: 0.793s, 7_000_100 inferences
    % CPU time: 1.150s, 7_000_100 inferences
    % CPU time: 1.481s, 7_000_100 inferences

    Guess what the former Jekejeke Prolog and Dogelog
    Player show? They are not based on WAM or ZIP,
    but rather on DAM, the Dogelog Abstract Machine.
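    What "constant caching" means here can be transposed to JavaScript as a
    sketch: the data/2 list arguments are ground constants, so a system may
    build each one once and reuse it instead of rebuilding it per call. The
    helper names below are invented for illustration:

    ```javascript
    // Sketch of constant caching: rebuilding a ground constant on every
    // call costs O(length) per call; caching it makes each call O(1),
    // which is what keeps the benchmark above linear in the call count.

    function makeData(n) {
      const digits = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9];
      // n copies of the digit list, flattened, like data/2 above
      return Array.from({ length: n }, () => digits).flat();
    }

    // Naive: reconstruct the constant on every call.
    function dataNaive(n) { return makeData(n); }

    // Cached: build each constant once, then reuse the same object.
    const cache = new Map();
    function dataCached(n) {
      if (!cache.has(n)) cache.set(n, makeData(n));
      return cache.get(n);
    }
    ```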

    Bye

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Tue Jun 24 01:12:45 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Of course teachers have better quality
    than nerds when they formulate questions:
    better researched. But the goal of a teacher
    is always orthodoxification. So essentially
    the SWI-Prolog Discourse is abused as a wiki,
    with dozens of questions and answers harnessing
    hundreds of links. The food that teachers need.

    Bye

    P.S.: It's obvious what is killed in the process:
    - Get rid of the silly WAM and ZIP!
    - Going towards Web 2.0 with Prolog
    - The AI Boom and Prolog
    - What else...?

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Tue Jun 24 01:26:27 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    ISO is losing it because it gives in to teachers.
    GUPU from Ulrich Neumerkel is also a teaching project.
    Notebooks can also be viewed as a teaching project.

    Still, there were once rumors that Prolog was used
    in industry. But this was long, long ago, and these
    roots are possibly totally gone.

    I don't believe anybody is using CLP or s(CASP),
    or CLP(Z) from Scryer Prolog. Also the USA
    compiler builders are totally clueless about logic,
    and the USA is dominant when it comes to compiler
    building. Take the dissertation:

    Combining Analyses, Combining Optimizations
    Clifford Noel Click, Jr. - February, 1995

    He doesn't know a bit about how conditional constant
    propagation relates to logic.

    Bye

    P.S.: Compiler builders never had a formal education
    in mathematical logic. Not enough time. They
    were always busy guzzling machine code
    operations, building highly sophisticated tables
    that describe the machine code operations, and
    building similarly highly sophisticated backends
    that sniff through these tables. You don't find
    such people in Prolog anymore. Somebody that
    knows assembly, just like Linus Torvalds started...

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Tue Jun 24 08:16:36 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    You see more and more frameworks trying to get
    flesh-and-blood async. I tell you, it's a neglected
    subject so far. There is, for example, a whole
    world of async streams already integrated
    into the browser and node.js!
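    Those built-in async streams can be sketched with the WHATWG ReadableStream,
    which modern node.js exposes as a global (browser support varies by engine);
    content arrives incrementally and async iteration consumes it chunk by chunk:

    ```javascript
    // Sketch: a ReadableStream delivering content incrementally, and a
    // consumer that collects the chunks via for-await async iteration.

    const stream = new ReadableStream({
      start(controller) {
        controller.enqueue('Web ');
        controller.enqueue('2.0');
        controller.close();
      }
    });

    async function collectChunks(readable) {
      const parts = [];
      for await (const chunk of readable) parts.push(chunk);
      return parts.join('');
    }
    ```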

    For asynchronous API communication with different
    transport protocols, AsyncAPI has emerged as a description
    standard that follows the OpenAPI concept:
    https://de.wikipedia.org/wiki/OpenAPI

    Whoa! This looks like Swagger:

    Bringing Asynchronous APIs to the Forefront at the APIDays
    Singapore event, themed "Where APIs Meet AI: Building Tomorrow's
    Intelligent Ecosystems", providing an excellent platform to
    introduce AsyncAPI to the vibrant Asia-Pacific community.
    https://www.asyncapi.com/blog/2025-singapore-conf-summary

    Bye


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Tue Jun 24 08:24:17 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    You could still make it to London:

    API enthusiasts in London! Join the
    AsyncAPI Conference for deep dives into
    event-driven architecture and open-source
    collaboration.
    https://conference.asyncapi.com/venue/London

    You might have:

    - Real-world examples of AsyncAPI in action
    - Live demos using AsyncAPI Studio and the CLI
    - The tooling and ecosystem that surrounds the spec

    But more than the slides and demos, it was the
    questions that told the story:

    - "Tell me more about AsyncAPI"
    - “How do I introduce AsyncAPI into our hybrid systems?”
    - “Can I use this with Kafka and MQTT?”
    - “How do we contribute to the spec or tooling?”
    - "How do other AsyncAPI adopters use AsyncAPI?"

    LoL

    Bye

    P.S.: Most likely an attempt to revive React;
    looks like a big pile of shit to me:

    https://github.com/asyncapi/spec/blob/master/spec/asyncapi.md

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Wed Jun 25 15:17:09 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Why only phrase_from_file/2 and not also
    phrase_from_url/2? It's not that difficult to
    do; you can do it with change_arg/2 and

    nothing else! Let's see what we have so far:

    Trealla Prolog:
       Based on memory mapping chars. So basically
       this could be judged as a further argument
       in favor of chars versus codes. But it's
       not Web 2.0; it works only for files.

    https://github.com/trealla-prolog/trealla/blob/main/library/pio.pl

    SWI-Prolog:
       Based on turning a stream into a lazy list.
       Requires attributed variables and repositionable
       streams. The stream is opened with open/3, but
       maybe it could be opened with http_open/3 as well?

    https://github.com/SWI-Prolog/swipl-devel/blob/master/library/pio.pl

    To be continued...

    Bye

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Wed Jun 25 15:19:32 2025
    From Newsgroup: comp.lang.prolog

    Corr.: Typo change_arg/2 ~~> change_arg/3

    you can do it with change_arg/3 and nothing else!

    --- Synchronet 3.21a-Linux NewsLink 1.2